This algorithm performs a state estimation by variational minimization of the
classical :math:`J` function in static data assimilation:

.. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+(\mathbf{y}^o-H(\mathbf{x}))^T.\mathbf{R}^{-1}.(\mathbf{y}^o-H(\mathbf{x}))

which is usually referred to as the "*3D-VAR*" functional (see for example
[Talagrand97]_).
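For intuition, the functional above can be evaluated directly outside of any
assimilation framework. The following is a minimal numpy sketch, not the
algorithm's internal implementation: all matrices and vectors are hypothetical,
and the observation operator is assumed linear and given as a matrix.

```python
import numpy as np

def cost_3dvar(x, xb, B, yo, H, R):
    """Evaluate the 3D-VAR functional J(x) = Jb(x) + Jo(x).

    H is taken here as a linear operator (a matrix); in general H(x)
    may be a nonlinear function.
    """
    dxb = x - xb    # background departure x - x^b
    dy = yo - H @ x # observation departure y^o - H(x)
    Jb = dxb @ np.linalg.solve(B, dxb)  # (x - x^b)^T B^-1 (x - x^b)
    Jo = dy @ np.linalg.solve(R, dy)    # (y^o - Hx)^T R^-1 (y^o - Hx)
    return Jb + Jo

# Tiny illustration: at x = x^b with a perfectly matching observation,
# both terms vanish and J is zero.
xb = np.array([1.0, 2.0])
B = np.eye(2)
H = np.eye(2)
R = np.eye(2)
yo = H @ xb
print(cost_3dvar(xb, xb, B, yo, H, R))  # 0.0
```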
available at the end of the algorithm. It involves potentially costly
calculations or memory consumption. The default is a void list, none of
these variables being calculated and stored by default. The possible names
are in the following list: ["APosterioriCorrelations",
"APosterioriCovariance", "APosterioriStandardDeviations",
"APosterioriVariances", "BMA", "CostFunctionJ",
"CostFunctionJAtCurrentOptimum", "CurrentOptimum", "CurrentState",
"IndexOfOptimum", "Innovation", "InnovationAtCurrentState",
"MahalanobisConsistency", "OMA", "OMB", "SigmaObs2",
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum",
"SimulationQuantiles"].
Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
Quantiles
This list indicates the values of quantile, between 0 and 1, to be estimated
The conditional outputs of the algorithm are the following:
APosterioriCorrelations
*List of matrices*. Each element is an *a posteriori* error correlation
matrix of the optimal state, derived from the :math:`\mathbf{A}^*`
covariance matrix.

Example : ``C = ADD.get("APosterioriCorrelations")[-1]``

APosterioriCovariance
*List of matrices*. Each element is an *a posteriori* error covariance
matrix :math:`\mathbf{A}^*` of the optimal state.

Example : ``A = ADD.get("APosterioriCovariance")[-1]``
APosterioriStandardDeviations
*List of matrices*. Each element is an *a posteriori* error standard
deviation diagonal matrix of the optimal state, derived from the
:math:`\mathbf{A}^*` covariance matrix.

Example : ``S = ADD.get("APosterioriStandardDeviations")[-1]``

APosterioriVariances
*List of matrices*. Each element is an *a posteriori* error variance
diagonal matrix of the optimal state, derived from the
:math:`\mathbf{A}^*` covariance matrix.

Example : ``V = ADD.get("APosterioriVariances")[-1]``

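The correlations, standard deviations and variances above are all simple
by-products of the *a posteriori* covariance matrix. A minimal numpy sketch,
with a hypothetical covariance matrix, illustrates the relations:

```python
import numpy as np

# Illustrative a posteriori covariance matrix (hypothetical values)
A = np.array([[4.0, 1.0],
              [1.0, 9.0]])

variances = np.diag(A)         # diagonal of the variances matrix
std_devs = np.sqrt(variances)  # diagonal of the standard deviations matrix
# Correlations: C_ij = A_ij / (sigma_i * sigma_j)
correlations = A / np.outer(std_devs, std_devs)

print(np.diag(correlations))  # [1. 1.]
```

A correlation matrix always has a unit diagonal, which is a quick sanity check
on the three derived quantities.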
BMA
*List of vectors*. Each element is a vector of difference between the
background and the optimal state.

Example : ``bma = ADD.get("BMA")[-1]``
CostFunctionJAtCurrentOptimum
*List of values*. Each element is a value of the error function :math:`J`.
At each step, the value corresponds to the optimal state found from the
beginning.

Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``

CostFunctionJbAtCurrentOptimum
*List of values*. Each element is a value of the error function :math:`J^b`,
that is of the background difference part. At each step, the value
corresponds to the optimal state found from the beginning.

Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``

CostFunctionJoAtCurrentOptimum
*List of values*. Each element is a value of the error function :math:`J^o`,
that is of the observation difference part. At each step, the value
corresponds to the optimal state found from the beginning.

Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``

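The "AtCurrentOptimum" series above track, at each step, the value attached to
the best state found so far, so they can be read as the running minimum of the
corresponding cost series. A minimal sketch with hypothetical :math:`J`
values (not produced by the algorithm itself):

```python
import numpy as np

# Hypothetical J values recorded at each iteration (a CostFunctionJ series)
J = [10.0, 7.0, 8.0, 5.0, 6.0]

# "J at current optimum": value of J for the best state seen so far,
# i.e. the running minimum of the J series.
J_at_current_optimum = [float(v) for v in np.minimum.accumulate(J)]

print(J_at_current_optimum)  # [10.0, 7.0, 7.0, 5.0, 5.0]
```

The same running-best reading applies to the :math:`J^b` and :math:`J^o`
series, evaluated at the state realizing the running minimum of :math:`J`.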
CurrentOptimum
*List of vectors*. Each element is the optimal state obtained at the current
step of the optimization algorithm. It is not necessarily the last state.

Example : ``Xo = ADD.get("CurrentOptimum")[:]``

CurrentState
*List of vectors*. Each element is a usual state vector used during the
optimization algorithm procedure.

Example : ``Xs = ADD.get("CurrentState")[:]``
IndexOfOptimum
*List of integers*. Each element is the iteration index of the optimum
obtained at the current step of the optimization algorithm. It is not
necessarily the number of the last iteration.

Example : ``i = ADD.get("IndexOfOptimum")[-1]``

Innovation
*List of vectors*. Each element is an innovation vector, which is in static
the difference between the optimal and the background, and in dynamic the
evolution increment.

Example : ``d = ADD.get("Innovation")[-1]``
InnovationAtCurrentState
*List of vectors*. Each element is an innovation vector at the current state.

Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``

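An innovation at a current state is the observation departure
:math:`\mathbf{y}^o-H(\mathbf{x})` evaluated at that state. A minimal numpy
sketch, with hypothetical values and a linear observation operator:

```python
import numpy as np

# Hypothetical linear observation operator and data
H = np.array([[1.0, 0.0],
              [0.0, 2.0]])
yo = np.array([3.0, 4.0])  # observation y^o
xs = np.array([1.0, 1.0])  # a current state x_s

# Innovation at this state: d_s = y^o - H(x_s)
ds = yo - H @ xs
print(ds)  # [2. 2.]
```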
MahalanobisConsistency
*List of values*. Each element is a value of the Mahalanobis quality
indicator.

Example : ``m = ADD.get("MahalanobisConsistency")[-1]``
SimulatedObservationAtCurrentOptimum
*List of vectors*. Each element is a vector of observation simulated from
the optimal state obtained at the current step of the optimization
algorithm, that is, in the observation space.

Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``

SimulatedObservationAtCurrentState
*List of vectors*. Each element is an observed vector at the current state,
that is, in the observation space.

Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
SimulatedObservationAtOptimum
*List of vectors*. Each element is a vector of observation simulated from