This algorithm performs state estimation by variational minimization of the
classical :math:`J` cost function of static data assimilation:

.. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+(\mathbf{y}^o-H(\mathbf{x}))^T.\mathbf{R}^{-1}.(\mathbf{y}^o-H(\mathbf{x}))

which is usually designated as the "*3D-VAR*" function (see for example
[Talagrand97]_).
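As a minimal numerical illustration of this cost function, here is a plain
NumPy sketch with a hypothetical two-component state and a hand-written
observation operator (none of these names belong to the ADAO API):

```python
import numpy as np

xb = np.array([0.0, 1.0])                      # background state x^b
yo = np.array([0.5, 2.0])                      # observations y^o
B  = np.eye(2)                                 # background error covariance
R  = 0.5 * np.eye(2)                           # observation error covariance
H  = lambda x: np.array([x[0], x[0] + x[1]])   # hypothetical observation operator

def J(x):
    # Background misfit term plus observation misfit term, as in the formula
    db = x - xb
    do = yo - H(x)
    return db @ np.linalg.inv(B) @ db + do @ np.linalg.inv(R) @ do

print(J(xb))
```

At the background itself the :math:`\mathbf{x}-\mathbf{x}^b` term vanishes,
so only the observation misfit contributes to the cost.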
these variables being calculated and stored by default. The possible names
are in the following list: ["APosterioriCorrelations",
"APosterioriCovariance", "APosterioriStandardDeviations",
"APosterioriVariances", "BMA", "CostFunctionJ",
"CostFunctionJAtCurrentOptimum", "CurrentOptimum", "CurrentState",
"IndexOfOptimum", "Innovation", "InnovationAtCurrentState",
"MahalanobisConsistency", "OMA", "OMB", "SigmaObs2",
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum",
"SimulationQuantiles"].
Example : ``bma = ADD.get("BMA")[-1]``
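For instance, assuming ADAO's textual user interface (``adaoBuilder``; the
exact case-setup calls may vary between versions), requesting some of these
supplementary variables could look like:

```python
from adao import adaoBuilder  # assumes an ADAO installation

case = adaoBuilder.New()
case.set('AlgorithmParameters',
         Algorithm='3DVAR',
         Parameters={
             'StoreSupplementaryCalculations': [
                 'CostFunctionJAtCurrentOptimum',
                 'InnovationAtCurrentState',
             ],
         })
```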
CostFunctionJAtCurrentOptimum
*List of values*. Each element is a value of the error function :math:`J`.
At each step, the value corresponds to the optimal state found from the
beginning.

Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``

CostFunctionJbAtCurrentOptimum
*List of values*. Each element is a value of the error function :math:`J^b`,
that is of the background difference part. At each step, the value
corresponds to the optimal state found from the beginning.

Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``

CostFunctionJoAtCurrentOptimum
*List of values*. Each element is a value of the error function :math:`J^o`,
that is of the observation difference part. At each step, the value
corresponds to the optimal state found from the beginning.

Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``

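The relation between these running-optimum series and the plain per-iteration
series can be sketched as follows (plain Python with made-up histories;
``at_current_optimum`` is an illustrative helper, not an ADAO function): at
each step, the series report the cost values attached to the best state found
since the beginning.

```python
def at_current_optimum(J_hist, Jb_hist, Jo_hist):
    # Track the index of the best (lowest-J) state seen so far, and report
    # the J, Jb, Jo values attached to that state at every step.
    best = 0
    out = []
    for i in range(len(J_hist)):
        if J_hist[i] < J_hist[best]:
            best = i
        out.append((J_hist[best], Jb_hist[best], Jo_hist[best]))
    return out

# Hypothetical per-iteration histories, with J = Jb + Jo at every iterate
Jb = [0.0, 0.3, 0.9, 0.4]
Jo = [2.5, 1.5, 1.2, 0.5]
J  = [b + o for b, o in zip(Jb, Jo)]
print(at_current_optimum(J, Jb, Jo))
```

In particular, the :math:`J` series obtained this way is non-increasing, and
each reported triple still satisfies :math:`J = J^b + J^o`.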
CurrentOptimum
*List of vectors*. Each element is the optimal state obtained at the current
step of the optimization algorithm. It is not necessarily the last state.

Example : ``xo = ADD.get("CurrentOptimum")[-1]``

Innovation
*List of vectors*. Each element is an innovation vector.

Example : ``d = ADD.get("Innovation")[-1]``
InnovationAtCurrentState
*List of vectors*. Each element is an innovation vector at the current state.

Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``

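As a small sketch of what such an innovation vector contains (NumPy, with a
hypothetical observation operator; not the ADAO API):

```python
import numpy as np

yo = np.array([0.5, 2.0])                      # observations y^o
H  = lambda x: np.array([x[0], x[0] + x[1]])   # hypothetical observation operator
x  = np.array([0.2, 1.4])                      # some current state

d = yo - H(x)   # innovation: observation minus simulated observation
print(d)
```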
MahalanobisConsistency
*List of values*. Each element is a value of the Mahalanobis quality
indicator.