Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: Minimizer
-.. index:: single: Bounds
-.. index:: single: MaximumNumberOfSteps
-.. index:: single: CostDecrementTolerance
-.. index:: single: ProjectedGradientTolerance
-.. index:: single: GradientNormTolerance
-.. index:: single: StoreSupplementaryCalculations
-.. index:: single: Quantiles
-.. index:: single: SetSeed
-.. index:: single: NumberOfSamplesForQuantiles
-.. index:: single: SimulationForQuantiles
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
Minimizer
+ .. index:: single: Minimizer
+
This key allows to choose the optimization minimizer. The default choice is
"LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear
constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
(nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer). It is
strongly recommended to stay with the default.
- Example : ``{"Minimizer":"LBFGSB"}``
-
- Bounds
- This key allows to define upper and lower bounds for every state variable
- being optimized. Bounds have to be given by a list of list of pairs of
- lower/upper bounds for each variable, with possibly ``None`` every time
- there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained optimizers.
-
- Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
-
- MaximumNumberOfSteps
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 15000, which is very similar to no limit on
- iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some optimizers, the effective stopping step can be
- slightly different of the limit due to algorithm internal control
- requirements.
-
- Example : ``{"MaximumNumberOfSteps":100}``
-
- CostDecrementTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the cost function decreases less than
- this tolerance at the last step. The default is 1.e-7, and it is
- recommended to adapt it to the needs on real problems.
+ Example :
+ ``{"Minimizer":"LBFGSB"}``
- Example : ``{"CostDecrementTolerance":1.e-7}``
+ .. include:: snippets/BoundsWithNone.rst
- ProjectedGradientTolerance
- This key indicates a limit value, leading to stop successfully the iterative
- optimization process when all the components of the projected gradient are
- under this limit. It is only used for constrained optimizers. The default is
- -1, that is the internal default of each minimizer (generally 1.e-5), and it
- is not recommended to change it.
+ .. include:: snippets/MaximumNumberOfSteps.rst
- Example : ``{"ProjectedGradientTolerance":-1}``
+ .. include:: snippets/CostDecrementTolerance.rst
- GradientNormTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the norm of the gradient is under this
- limit. It is only used for non-constrained optimizers. The default is
- 1.e-5 and it is not recommended to change it.
+ .. include:: snippets/ProjectedGradientTolerance.rst
- Example : ``{"GradientNormTolerance":1.e-5}``
+ .. include:: snippets/GradientNormTolerance.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumption. The default is a void list, none of
"SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum",
"SimulationQuantiles"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
-
- Quantiles
- This list indicates the values of quantile, between 0 and 1, to be estimated
- by simulation around the optimal state. The sampling uses a multivariate
- Gaussian random sampling, directed by the *a posteriori* covariance matrix.
- This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default is a void list.
-
- Example : ``{"Quantiles":[0.1,0.9]}``
-
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
-
- Example : ``{"SetSeed":1000}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
- NumberOfSamplesForQuantiles
- This key indicates the number of simulation to be done in order to estimate
- the quantiles. This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default is 100, which is often
- sufficient for correct estimation of common quantiles at 5%, 10%, 90% or
- 95%.
+ .. include:: snippets/Quantiles.rst
- Example : ``{"NumberOfSamplesForQuantiles":100}``
+ .. include:: snippets/SetSeed.rst
- SimulationForQuantiles
- This key indicates the type of simulation, linear (with the tangent
- observation operator applied to perturbation increments around the optimal
- state) or non-linear (with standard observation operator applied to
- perturbed states), one want to do for each perturbation. It changes mainly
- the time of each elementary calculation, usually longer in non-linear than
- in linear. This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default value is "Linear", and
- the possible choices are "Linear" and "NonLinear".
+ .. include:: snippets/NumberOfSamplesForQuantiles.rst
- Example : ``{"SimulationForQuantiles":"Linear"}``
+ .. include:: snippets/SimulationForQuantiles.rst
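+
+ As an illustration only, the options above can be gathered in a single
+ Python dictionary and given to the "*AlgorithmParameters*" command through
+ the textual interface. This is a minimal sketch: the import path, the
+ algorithm name "3DVAR" and the small vectors and matrices are assumptions
+ to be adapted to the real study::
+
+     from adao import adaoBuilder      # assumed entry point of the textual interface
+     case = adaoBuilder.New()
+     case.set("AlgorithmParameters", Algorithm="3DVAR", Parameters={
+         "Minimizer": "LBFGSB",        # default choice, recommended
+         "Bounds": [[2., 5.], [1.e-2, 10.], [-30., None]],
+         "MaximumNumberOfSteps": 100,
+         "CostDecrementTolerance": 1.e-7,
+         "Quantiles": [0.1, 0.9],
+         "NumberOfSamplesForQuantiles": 100,
+         "SimulationForQuantiles": "Linear",
+         "StoreSupplementaryCalculations": ["BMA", "Innovation",
+                                            "SimulationQuantiles"],
+         })
+     case.set("Background",          Vector=[2.5, 1., -1.])
+     case.set("BackgroundError",     ScalarSparseMatrix=1.)
+     case.set("Observation",         Vector=[2., 2., 2.])
+     case.set("ObservationError",    ScalarSparseMatrix=1.)
+     case.set("ObservationOperator", Matrix=[[1., 0., 0.],
+                                             [0., 1., 0.],
+                                             [0., 0., 1.]])
+     case.execute()
+     Xa = case.get("Analysis")[-1]     # optimal state, see the outputs below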
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
+ .. include:: snippets/Analysis.rst
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/CostFunctionJ.rst
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
-
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
-
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJo.rst
The conditional outputs of the algorithm are the following:
- APosterioriCorrelations
- *List of matrices*. Each element is an *a posteriori* error correlations
- matrix of the optimal state, coming from the :math:`\mathbf{A}*` covariance
- matrix.
-
- Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
-
- APosterioriCovariance
- *List of matrices*. Each element is an *a posteriori* error covariance
- matrix :math:`\mathbf{A}*` of the optimal state.
-
- Example : ``A = ADD.get("APosterioriCovariance")[-1]``
-
- APosterioriStandardDeviations
- *List of matrices*. Each element is an *a posteriori* error standard
- errors diagonal matrix of the optimal state, coming from the
- :math:`\mathbf{A}*` covariance matrix.
-
- Example : ``S = ADD.get("APosterioriStandardDeviations")[-1]``
-
- APosterioriVariances
- *List of matrices*. Each element is an *a posteriori* error variance
- errors diagonal matrix of the optimal state, coming from the
- :math:`\mathbf{A}*` covariance matrix.
-
- Example : ``V = ADD.get("APosterioriVariances")[-1]``
-
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CostFunctionJAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J`.
- At each step, the value corresponds to the optimal state found from the
- beginning.
-
- Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``
-
- CostFunctionJbAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part. At each step, the value
- corresponds to the optimal state found from the beginning.
-
- Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``
-
- CostFunctionJoAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part. At each step, the value
- corresponds to the optimal state found from the beginning.
-
- Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``
-
- CurrentOptimum
- *List of vectors*. Each element is the optimal state obtained at the current
- step of the optimization algorithm. It is not necessarily the last state.
-
- Example : ``Xo = ADD.get("CurrentOptimum")[:]``
-
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- IndexOfOptimum
- *List of integers*. Each element is the iteration index of the optimum
- obtained at the current step the optimization algorithm. It is not
- necessarily the number of the last iteration.
-
- Example : ``i = ADD.get("IndexOfOptimum")[-1]``
-
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
+ .. include:: snippets/APosterioriCorrelations.rst
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/APosterioriCovariance.rst
- InnovationAtCurrentState
- *List of vectors*. Each element is an innovation vector at current state.
+ .. include:: snippets/APosterioriStandardDeviations.rst
- Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``
+ .. include:: snippets/APosterioriVariances.rst
- MahalanobisConsistency
- *List of values*. Each element is a value of the Mahalanobis quality
- indicator.
+ .. include:: snippets/BMA.rst
- Example : ``m = ADD.get("MahalanobisConsistency")[-1]``
+ .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
+ .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
- Example : ``oma = ADD.get("OMA")[-1]``
+ .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/CurrentOptimum.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/CurrentState.rst
- SigmaObs2
- *List of values*. Each element is a value of the quality indicator
- :math:`(\sigma^o)^2` of the observation part.
+ .. include:: snippets/IndexOfOptimum.rst
- Example : ``so2 = ADD.get("SigmaObs")[-1]``
+ .. include:: snippets/Innovation.rst
- SimulatedObservationAtBackground
- *List of vectors*. Each element is a vector of observation simulated from
- the background :math:`\mathbf{x}^b`.
+ .. include:: snippets/InnovationAtCurrentState.rst
- Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+ .. include:: snippets/MahalanobisConsistency.rst
- SimulatedObservationAtCurrentOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the optimal state obtained at the current step the optimization algorithm,
- that is, in the observation space.
+ .. include:: snippets/OMA.rst
- Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
+ .. include:: snippets/OMB.rst
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/SigmaObs2.rst
- Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
- SimulationQuantiles
- *List of vectors*. Each element is a vector corresponding to the observed
- state which realize the required quantile, in the same order than the
- quantiles required by the user.
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
- Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]``
+ .. include:: snippets/SimulationQuantiles.rst
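+
+ A short post-processing sketch, assuming as in the "Example" lines of this
+ documentation that ``ADD`` is the calculation object after execution (the
+ variable names on the left are free choices)::
+
+     Xa = ADD.get("Analysis")[-1]                # optimal or analysed state
+     J  = ADD.get("CostFunctionJ")[:]            # cost function history
+     A  = ADD.get("APosterioriCovariance")[-1]   # only if requested as a
+                                                 # supplementary calculation
+     print("Optimal state :", Xa)
+     print("Final cost J  :", J[-1])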
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: Bounds
-.. index:: single: ConstrainedBy
-.. index:: single: EstimationOf
-.. index:: single: MaximumNumberOfSteps
-.. index:: single: CostDecrementTolerance
-.. index:: single: ProjectedGradientTolerance
-.. index:: single: GradientNormTolerance
-.. index:: single: StoreSupplementaryCalculations
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/EvolutionError.rst
+
+ .. include:: snippets/EvolutionModel.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
Minimizer
+ .. index:: single: Minimizer
+
This key allows to choose the optimization minimizer. The default choice is
"LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear
constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
(nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer). It is
strongly recommended to stay with the default.
- Example : ``{"Minimizer":"LBFGSB"}``
-
- Bounds
- This key allows to define upper and lower bounds for every state variable
- being optimized. Bounds have to be given by a list of list of pairs of
- lower/upper bounds for each variable, with possibly ``None`` every time
- there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained optimizers.
-
- Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
-
- ConstrainedBy
- This key allows to choose the method to take into account the bounds
- constraints. The only one available is the "EstimateProjection", which
- projects the current state estimate on the bounds constraints.
-
- Example : ``{"ConstrainedBy":"EstimateProjection"}``
-
- MaximumNumberOfSteps
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 15000, which is very similar to no limit on
- iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some optimizers, the effective stopping step can be
- slightly different of the limit due to algorithm internal control
- requirements.
-
- Example : ``{"MaximumNumberOfSteps":100}``
+ Example :
+ ``{"Minimizer":"LBFGSB"}``
- CostDecrementTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the cost function decreases less than
- this tolerance at the last step. The default is 1.e-7, and it is
- recommended to adapt it to the needs on real problems.
+ .. include:: snippets/BoundsWithNone.rst
- Example : ``{"CostDecrementTolerance":1.e-7}``
+ .. include:: snippets/ConstrainedBy.rst
- EstimationOf
- This key allows to choose the type of estimation to be performed. It can be
- either state-estimation, with a value of "State", or parameter-estimation,
- with a value of "Parameters". The default choice is "State".
+ .. include:: snippets/MaximumNumberOfSteps.rst
- Example : ``{"EstimationOf":"Parameters"}``
+ .. include:: snippets/CostDecrementTolerance.rst
- ProjectedGradientTolerance
- This key indicates a limit value, leading to stop successfully the iterative
- optimization process when all the components of the projected gradient are
- under this limit. It is only used for constrained optimizers. The default is
- -1, that is the internal default of each minimizer (generally 1.e-5), and it
- is not recommended to change it.
+ .. include:: snippets/EstimationOf.rst
- Example : ``{"ProjectedGradientTolerance":-1}``
+ .. include:: snippets/ProjectedGradientTolerance.rst
- GradientNormTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the norm of the gradient is under this
- limit. It is only used for non-constrained optimizers. The default is
- 1.e-5 and it is not recommended to change it.
-
- Example : ``{"GradientNormTolerance":1.e-5}``
+ .. include:: snippets/GradientNormTolerance.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumption. The default is a void list, none of
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/Analysis.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/CostFunctionJ.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJo.rst
The conditional outputs of the algorithm are the following:
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CostFunctionJAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J`.
- At each step, the value corresponds to the optimal state found from the
- beginning.
-
- Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``
-
- CostFunctionJbAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part. At each step, the value
- corresponds to the optimal state found from the beginning.
-
- Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``
-
- CostFunctionJoAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part. At each step, the value
- corresponds to the optimal state found from the beginning.
-
- Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``
+ .. include:: snippets/BMA.rst
- CurrentOptimum
- *List of vectors*. Each element is the optimal state obtained at the current
- step of the optimization algorithm. It is not necessarily the last state.
+ .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
- Example : ``Xo = ADD.get("CurrentOptimum")[:]``
+ .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
+ .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
- Example : ``Xs = ADD.get("CurrentState")[:]``
+ .. include:: snippets/CurrentOptimum.rst
- IndexOfOptimum
- *List of integers*. Each element is the iteration index of the optimum
- obtained at the current step the optimization algorithm. It is not
- necessarily the number of the last iteration.
+ .. include:: snippets/CurrentState.rst
- Example : ``i = ADD.get("IndexOfOptimum")[-1]``
+ .. include:: snippets/IndexOfOptimum.rst
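+
+ As a sketch only: the option values below repeat the examples of this
+ section, the supplementary names are taken from the conditional outputs
+ above, and ``ADD`` is assumed to be the executed calculation object::
+
+     Parameters = {
+         "Minimizer": "LBFGSB",
+         "Bounds": [[2., 5.], [1.e-2, 10.], [-30., None], [None, None]],
+         "ConstrainedBy": "EstimateProjection",
+         "EstimationOf": "State",
+         "MaximumNumberOfSteps": 100,
+         "CostDecrementTolerance": 1.e-7,
+         "StoreSupplementaryCalculations": ["CurrentOptimum", "IndexOfOptimum"],
+         }
+     # After execution: best state found so far and the iteration reaching it
+     Xo = ADD.get("CurrentOptimum")[-1]
+     i  = ADD.get("IndexOfOptimum")[-1]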
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: CheckingPoint
-.. index:: single: ObservationOperator
-.. index:: single: AmplitudeOfInitialDirection
-.. index:: single: EpsilonMinimumExponent
-.. index:: single: InitialDirection
-.. index:: single: SetSeed
-.. index:: single: StoreSupplementaryCalculations
The general required commands, available in the editing user interface, are the
following:
- CheckingPoint
- *Required command*. This indicates the vector used as the state around which
- to perform the required check, noted :math:`\mathbf{x}` and similar to the
- background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control
- :math:`U` included in the observation, the operator has to be applied to a
- pair :math:`(X,U)`.
+ .. include:: snippets/CheckingPoint.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- AmplitudeOfInitialDirection
- This key indicates the scaling of the initial perturbation build as a vector
- used for the directional derivative around the nominal checking point. The
- default is 1, that means no scaling.
-
- Example : ``{"AmplitudeOfInitialDirection":0.5}``
-
- EpsilonMinimumExponent
- This key indicates the minimal exponent value of the power of 10 coefficient
- to be used to decrease the increment multiplier. The default is -8, and it
- has to be between 0 and -20. For example, its default value leads to
- calculate the residue of the formula with a fixed increment multiplied from
- 1.e0 to 1.e-8.
-
- Example : ``{"EpsilonMinimumExponent":-12}``
-
- InitialDirection
- This key indicates the vector direction used for the directional derivative
- around the nominal checking point. It has to be a vector. If not specified,
- this direction defaults to a random perturbation around zero of the same
- vector size than the checking point.
+ .. include:: snippets/AmplitudeOfInitialDirection.rst
- Example : ``{"InitialDirection":[0.1,0.1,100.,3}``
+ .. include:: snippets/EpsilonMinimumExponent.rst
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
+ .. include:: snippets/InitialDirection.rst
- Example : ``{"SetSeed":1000}``
+ .. include:: snippets/SetSeed.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumption. The default is a void list, none of
these variables being calculated and stored by default. The possible names
are in the following list: ["CurrentState", "Residu",
"SimulatedObservationAtCurrentState"].
- Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["CurrentState"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Residu
- *List of values*. Each element is the value of the particular residue
- verified during a checking algorithm, in the order of the tests.
-
- Example : ``r = ADD.get("Residu")[:]``
+ .. include:: snippets/Residu.rst
The conditional outputs of the algorithm are the following:
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/CurrentState.rst
- Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
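+
+ The residue history can then be inspected, assuming as in the documentation
+ examples that ``ADD`` is the executed calculation object::
+
+     r = ADD.get("Residu")[:]        # one residue value per tested increment
+     for k, rk in enumerate(r):
+         print("Step %2i : residue = %.6e" % (k, rk))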
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: StoreSupplementaryCalculations
-.. index:: single: Quantiles
-.. index:: single: SetSeed
-.. index:: single: NumberOfSamplesForQuantiles
-.. index:: single: SimulationForQuantiles
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumption. The default is a void list, none of
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
-
- Quantiles
- This list indicates the values of quantile, between 0 and 1, to be estimated
- by simulation around the optimal state. The sampling uses a multivariate
- Gaussian random sampling, directed by the *a posteriori* covariance matrix.
- This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default is a void list.
-
- Example : ``{"Quantiles":[0.1,0.9]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
+ .. include:: snippets/Quantiles.rst
- Example : ``{"SetSeed":1000}``
+ .. include:: snippets/SetSeed.rst
- NumberOfSamplesForQuantiles
- This key indicates the number of simulation to be done in order to estimate
- the quantiles. This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default is 100, which is often
- sufficient for correct estimation of common quantiles at 5%, 10%, 90% or
- 95%.
+ .. include:: snippets/NumberOfSamplesForQuantiles.rst
- Example : ``{"NumberOfSamplesForQuantiles":100}``
-
- SimulationForQuantiles
- This key indicates the type of simulation, linear (with the tangent
- observation operator applied to perturbation increments around the optimal
- state) or non-linear (with standard observation operator applied to
- perturbed states), one want to do for each perturbation. It changes mainly
- the time of each elementary calculation, usually longer in non-linear than
- in linear. This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default value is "Linear", and
- the possible choices are "Linear" and "NonLinear".
-
- Example : ``{"SimulationForQuantiles":"Linear"}``
+ .. include:: snippets/SimulationForQuantiles.rst
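+
+ For illustration, a sketch combining the option examples above in one
+ dictionary ("SimulationQuantiles" is taken from the conditional outputs of
+ this algorithm)::
+
+     Parameters = {
+         "StoreSupplementaryCalculations": ["BMA", "Innovation",
+                                            "SimulationQuantiles"],
+         "Quantiles": [0.1, 0.9],
+         "SetSeed": 1000,
+         "NumberOfSamplesForQuantiles": 100,
+         "SimulationForQuantiles": "Linear",
+         }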
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/Analysis.rst
The conditional outputs of the algorithm are the following:
- APosterioriCorrelations
- *List of matrices*. Each element is an *a posteriori* error correlation
- matrix of the optimal state.
-
- Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
-
- APosterioriCovariance
- *List of matrices*. Each element is an *a posteriori* error covariance
- matrix :math:`\mathbf{A}*` of the optimal state.
-
- Example : ``A = ADD.get("APosterioriCovariance")[-1]``
-
- APosterioriStandardDeviations
- *List of matrices*. Each element is an *a posteriori* error standard
- deviation matrix of the optimal state.
-
- Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]``
-
- APosterioriVariances
- *List of matrices*. Each element is an *a posteriori* error variance matrix
- of the optimal state.
-
- Example : ``V = ADD.get("APosterioriVariances")[-1]``
-
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
-
- Example : ``J = ADD.get("CostFunctionJ")[:]``
-
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
-
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
-
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
+ .. include:: snippets/APosterioriCorrelations.rst
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/APosterioriCovariance.rst
- MahalanobisConsistency
- *List of values*. Each element is a value of the Mahalanobis quality
- indicator.
+ .. include:: snippets/APosterioriStandardDeviations.rst
- Example : ``m = ADD.get("MahalanobisConsistency")[-1]``
+ .. include:: snippets/APosterioriVariances.rst
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
+ .. include:: snippets/BMA.rst
- Example : ``oma = ADD.get("OMA")[-1]``
+ .. include:: snippets/CostFunctionJ.rst
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/CostFunctionJo.rst
- SigmaBck2
- *List of values*. Each element is a value of the quality indicator
- :math:`(\sigma^b)^2` of the background part.
+ .. include:: snippets/Innovation.rst
- Example : ``sb2 = ADD.get("SigmaBck")[-1]``
+ .. include:: snippets/MahalanobisConsistency.rst
- SigmaObs2
- *List of values*. Each element is a value of the quality indicator
- :math:`(\sigma^o)^2` of the observation part.
+ .. include:: snippets/OMA.rst
- Example : ``so2 = ADD.get("SigmaObs")[-1]``
+ .. include:: snippets/OMB.rst
- SimulatedObservationAtBackground
- *List of vectors*. Each element is a vector of observation simulated from
- the background :math:`\mathbf{x}^b`.
+ .. include:: snippets/SigmaBck2.rst
- Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+ .. include:: snippets/SigmaObs2.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
- SimulationQuantiles
- *List of vectors*. Each element is a vector corresponding to the observed
- state which realize the required quantile, in the same order than the
- quantiles required by the user.
+ .. include:: snippets/SimulationQuantiles.rst
- Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]``
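+
+ A short post-processing sketch, assuming ``ADD`` is the executed
+ calculation object and that the corresponding supplementary calculations
+ were requested::
+
+     Xa = ADD.get("Analysis")[-1]                  # analysis
+     m  = ADD.get("MahalanobisConsistency")[-1]    # quality indicator
+     sQ = ADD.get("SimulationQuantiles")[:]        # simulated quantile states
+     print("Mahalanobis consistency indicator:", m)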
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: Minimizer
-.. index:: single: MaximumNumberOfSteps
-.. index:: single: MaximumNumberOfFunctionEvaluations
-.. index:: single: StateVariationTolerance
-.. index:: single: CostDecrementTolerance
-.. index:: single: QualityCriterion
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- Minimizer
- This key allows to choose the optimization minimizer. The default choice is
- "BOBYQA", and the possible ones are
- "BOBYQA" (minimization with or without constraints by quadratic approximation [Powell09]_),
- "COBYLA" (minimization with or without constraints by linear approximation [Powell94]_ [Powell98]_).
- "NEWUOA" (minimization with or without constraints by iterative quadratic approximation [Powell04]_),
- "POWELL" (minimization unconstrained using conjugate directions [Powell64]_),
- "SIMPLEX" (minimization with or without constraints using Nelder-Mead simplex algorithm [Nelder65]_),
- "SUBPLEX" (minimization with or without constraints using Nelder-Mead on a sequence of subspaces [Rowan90]_).
- Remark: the "POWELL" method perform a dual outer/inner loops optimization,
- leading then to less control on the cost function evaluation number because
- it is the outer loop limit than is controlled. If precise control on this
- cost function evaluation number is required, choose an another minimizer.
-
- Example : ``{"Minimizer":"BOBYQA"}``
-
- Bounds
- This key allows to define upper and lower bounds for every state variable
- being optimized. Bounds have to be given by a list of list of pairs of
- lower/upper bounds for each variable, with possibly ``None`` every time
- there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained optimizers.
-
- Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
-
- MaximumNumberOfSteps
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 15000, which is very similar to no limit on
- iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some optimizers, the effective stopping step can be
- slightly different of the limit due to algorithm internal control
- requirements.
-
- Example : ``{"MaximumNumberOfSteps":50}``
-
- MaximumNumberOfFunctionEvaluations
- This key indicates the maximum number of evaluation of the cost function to
- be optimized. The default is 15000, which is an arbitrary limit. It is then
- recommended to adapt this parameter to the needs on real problems. For some
- optimizers, the effective number of function evaluations can be slightly
- different of the limit due to algorithm internal control requirements.
-
- Example : ``{"MaximumNumberOfFunctionEvaluations":50}``
-
- StateVariationTolerance
- This key indicates the maximum relative variation of the state for stopping
- by convergence on the state. The default is 1.e-4, and it is recommended to
- adapt it to the needs on real problems.
-
- Example : ``{"StateVariationTolerance":1.e-4}``
-
- CostDecrementTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the cost function decreases less than
- this tolerance at the last step. The default is 1.e-7, and it is
- recommended to adapt it to the needs on real problems.
-
- Example : ``{"CostDecrementTolerance":1.e-7}``
-
- QualityCriterion
- This key indicates the quality criterion, minimized to find the optimal
- state estimate. The default is the usual data assimilation criterion named
- "DA", the augmented weighted least squares. The possible criteria has to be
- in the following list, where the equivalent names are indicated by the sign
- "=": ["AugmentedWeightedLeastSquares"="AWLS"="DA",
- "WeightedLeastSquares"="WLS", "LeastSquares"="LS"="L2",
- "AbsoluteValue"="L1", "MaximumError"="ME"].
-
- Example : ``{"QualityCriterion":"DA"}``
+ .. include:: snippets/Minimizer_DFO.rst
+
+ .. include:: snippets/BoundsWithNone.rst
+
+ .. include:: snippets/MaximumNumberOfSteps.rst
+
+ .. include:: snippets/MaximumNumberOfFunctionEvaluations.rst
+
+ .. include:: snippets/StateVariationTolerance.rst
+
+ .. include:: snippets/CostDecrementTolerance.rst
+
+ .. include:: snippets/QualityCriterion.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumption. The default is a void list, none of
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
-
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/Analysis.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
+ .. include:: snippets/CostFunctionJ.rst
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
+ .. include:: snippets/CostFunctionJb.rst
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
+ .. include:: snippets/CostFunctionJo.rst
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
-
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
+ .. include:: snippets/CurrentState.rst
The conditional outputs of the algorithm are the following:
- CostFunctionJAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J`.
- At each step, the value corresponds to the optimal state found from the
- beginning.
-
- Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``
-
- CostFunctionJbAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part. At each step, the value
- corresponds to the optimal state found from the beginning.
-
- Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``
-
- CostFunctionJoAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part. At each step, the value
- corresponds to the optimal state found from the beginning.
-
- Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``
-
- CurrentOptimum
- *List of vectors*. Each element is the optimal state obtained at the current
- step of the optimization algorithm. It is not necessarily the last state.
-
- Example : ``Xo = ADD.get("CurrentOptimum")[:]``
-
- IndexOfOptimum
- *List of integers*. Each element is the iteration index of the optimum
- obtained at the current step the optimization algorithm. It is not
- necessarily the number of the last iteration.
-
- Example : ``i = ADD.get("IndexOfOptimum")[-1]``
-
- InnovationAtCurrentState
- *List of vectors*. Each element is an innovation vector at current state.
-
- Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``
-
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
+ .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
- Example : ``oma = ADD.get("OMA")[-1]``
+ .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/CurrentOptimum.rst
- SimulatedObservationAtBackground
- *List of vectors*. Each element is a vector of observation simulated from
- the background :math:`\mathbf{x}^b`.
+ .. include:: snippets/IndexOfOptimum.rst
- Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+ .. include:: snippets/InnovationAtCurrentState.rst
- SimulatedObservationAtCurrentOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the optimal state obtained at the current step the optimization algorithm,
- that is, in the observation space.
+ .. include:: snippets/OMA.rst
- Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
+ .. include:: snippets/OMB.rst
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
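+
+ After execution, the explored states can be reviewed together with the cost
+ history, assuming ``ADD`` is the executed calculation object::
+
+     Xs = ADD.get("CurrentState")[:]     # every state evaluated by the search
+     J  = ADD.get("CostFunctionJ")[:]    # cost function history
+     Xa = ADD.get("Analysis")[-1]        # retained optimal state
+     print("Optimal state found after", len(Xs), "evaluations:", Xa)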
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: SetSeed
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
+ .. include:: snippets/SetSeed.rst
+
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumption. The default is a void list, none of
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["CurrentState", "Innovation"]}``
-
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
-
- Example : ``{"SetSeed":1000}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["CurrentState", "Innovation"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
-
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
+ .. include:: snippets/Analysis.rst
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
+ .. include:: snippets/CurrentState.rst
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/Innovation.rst
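+
+ A minimal check of the result, assuming ``ADD`` is the executed calculation
+ object and NumPy is available::
+
+     import numpy
+     Xa = ADD.get("Analysis")[-1]            # analysis or optimal state
+     d  = ADD.get("Innovation")[-1]          # innovation vector
+     print("Analysis        :", Xa)
+     print("Innovation norm :", numpy.linalg.norm(d))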
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: StoreSupplementaryCalculations
-.. index:: single: Quantiles
-.. index:: single: SetSeed
-.. index:: single: NumberOfSamplesForQuantiles
-.. index:: single: SimulationForQuantiles
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
-
- Quantiles
- This list indicates the values of quantile, between 0 and 1, to be estimated
- by simulation around the optimal state. The sampling uses a multivariate
- gaussian random sampling, directed by the *a posteriori* covariance matrix.
- This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default is a void list.
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
- Example : ``{"Quantiles":[0.1,0.9]}``
+ .. include:: snippets/Quantiles.rst
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
+ .. include:: snippets/SetSeed.rst
- Example : ``{"SetSeed":1000}``
+ .. include:: snippets/NumberOfSamplesForQuantiles.rst
- NumberOfSamplesForQuantiles
- This key indicates the number of simulation to be done in order to estimate
- the quantiles. This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default is 100, which is often
- sufficient for correct estimation of common quantiles at 5%, 10%, 90% or
- 95%.
-
- Example : ``{"NumberOfSamplesForQuantiles":100}``
-
- SimulationForQuantiles
- This key indicates the type of simulation, linear (with the tangent
- observation operator applied to perturbation increments around the optimal
- state) or non-linear (with standard observation operator applied to
- perturbated states), one want to do for each perturbation. It changes mainly
- the time of each elementary calculation, usually longer in non-linear than
- in linear. This option is useful only if the supplementary calculation
- "SimulationQuantiles" has been chosen. The default value is "Linear", and
- the possible choices are "Linear" and "NonLinear".
-
- Example : ``{"SimulationForQuantiles":"Linear"}``
+ .. include:: snippets/SimulationForQuantiles.rst
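The quantile-related keys act together: they are taken into account only when
"SimulationQuantiles" is requested among the supplementary calculations. A
minimal sketch of such a parameters dictionary, with illustrative values::

    # Quantile estimation around the optimal state (illustrative values)
    Parameters = {
        "Quantiles": [0.05, 0.5, 0.95],
        "NumberOfSamplesForQuantiles": 100,
        "SimulationForQuantiles": "Linear",
        "SetSeed": 1000,  # fixed seed for a reproducible sampling
        "StoreSupplementaryCalculations": ["SimulationQuantiles"],
    }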
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/Analysis.rst
The conditional outputs of the algorithm are the following:
- APosterioriCorrelations
- *List of matrices*. Each element is an *a posteriori* error correlation
- matrix of the optimal state.
-
- Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
-
- APosterioriCovariance
- *List of matrices*. Each element is an *a posteriori* error covariance
- matrix :math:`\mathbf{A}*` of the optimal state.
-
- Example : ``A = ADD.get("APosterioriCovariance")[-1]``
-
- APosterioriStandardDeviations
- *List of matrices*. Each element is an *a posteriori* error standard
- deviation matrix of the optimal state.
-
- Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]``
-
- APosterioriVariances
- *List of matrices*. Each element is an *a posteriori* error variance matrix
- of the optimal state.
-
- Example : ``V = ADD.get("APosterioriVariances")[-1]``
-
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
-
- Example : ``J = ADD.get("CostFunctionJ")[:]``
-
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
-
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
-
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
-
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/APosterioriCorrelations.rst
- MahalanobisConsistency
- *List of values*. Each element is a value of the Mahalanobis quality
- indicator.
+ .. include:: snippets/APosterioriCovariance.rst
- Example : ``m = ADD.get("MahalanobisConsistency")[-1]``
+ .. include:: snippets/APosterioriStandardDeviations.rst
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
+ .. include:: snippets/APosterioriVariances.rst
- Example : ``oma = ADD.get("OMA")[-1]``
+ .. include:: snippets/BMA.rst
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/CostFunctionJ.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/CostFunctionJb.rst
- SigmaBck2
- *List of values*. Each element is a value of the quality indicator
- :math:`(\sigma^b)^2` of the background part.
+ .. include:: snippets/CostFunctionJo.rst
- Example : ``sb2 = ADD.get("SigmaBck")[-1]``
+ .. include:: snippets/Innovation.rst
- SigmaObs2
- *List of values*. Each element is a value of the quality indicator
- :math:`(\sigma^o)^2` of the observation part.
+ .. include:: snippets/MahalanobisConsistency.rst
- Example : ``so2 = ADD.get("SigmaObs")[-1]``
+ .. include:: snippets/OMA.rst
- SimulatedObservationAtBackground
- *List of vectors*. Each element is a vector of observation simulated from
- the background :math:`\mathbf{x}^b`.
+ .. include:: snippets/OMB.rst
- Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+ .. include:: snippets/SigmaBck2.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SigmaObs2.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- SimulationQuantiles
- *List of vectors*. Each element is a vector corresponding to the observed
- state which realize the required quantile, in the same order than the
- quantiles required by the user.
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
- Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]``
+ .. include:: snippets/SimulationQuantiles.rst
See also
++++++++
+++++++++++
This algorithm performs an estimation of the state of a dynamic system by an
-extended Kalman Filter, using a non-linear calculation of the state.
+extended Kalman Filter, using a non-linear calculation of the state and the
+incremental evolution (process).
+
+In case of strongly non-linear operators, one can instead use the
+:ref:`section_ref_algorithm_EnsembleKalmanFilter` or the
+:ref:`section_ref_algorithm_UnscentedKalmanFilter`, which are often better
+suited to non-linear behavior but also more costly. The linearity of the
+operators can be checked with the :ref:`section_ref_algorithm_LinearityTest`.
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: Bounds
-.. index:: single: ConstrainedBy
-.. index:: single: EstimationOf
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/EvolutionError.rst
+
+ .. include:: snippets/EvolutionModel.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- Bounds
- This key allows to define upper and lower bounds for every state variable
- being optimized. Bounds have to be given by a list of list of pairs of
- lower/upper bounds for each variable, with extreme values every time there
- is no bound (``None`` is not allowed when there is no bound).
-
- Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}``
+ .. include:: snippets/BoundsWithExtremes.rst
- ConstrainedBy
- This key allows to choose the method to take into account the bounds
- constraints. The only one available is the "EstimateProjection", which
- projects the current state estimate on the bounds constraints.
+ .. include:: snippets/ConstrainedBy.rst
- Example : ``{"ConstrainedBy":"EstimateProjection"}``
-
- EstimationOf
- This key allows to choose the type of estimation to be performed. It can be
- either state-estimation, with a value of "State", or parameter-estimation,
- with a value of "Parameters". The default choice is "State".
-
- Example : ``{"EstimationOf":"Parameters"}``
+ .. include:: snippets/EstimationOf.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
"APosterioriVariances", "BMA", "CostFunctionJ", "CostFunctionJb",
"CostFunctionJo", "CurrentState", "Innovation"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/Analysis.rst
The conditional outputs of the algorithm are the following:
- APosterioriCorrelations
- *List of matrices*. Each element is an *a posteriori* error correlation
- matrix of the optimal state.
-
- Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
-
- APosterioriCovariance
- *List of matrices*. Each element is an *a posteriori* error covariance
- matrix :math:`\mathbf{A}*` of the optimal state.
-
- Example : ``A = ADD.get("APosterioriCovariance")[-1]``
-
- APosterioriStandardDeviations
- *List of matrices*. Each element is an *a posteriori* error standard
- deviation matrix of the optimal state.
-
- Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]``
-
- APosterioriVariances
- *List of matrices*. Each element is an *a posteriori* error variance matrix
- of the optimal state.
-
- Example : ``V = ADD.get("APosterioriVariances")[-1]``
-
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/APosterioriCorrelations.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/APosterioriCovariance.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
+ .. include:: snippets/APosterioriStandardDeviations.rst
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
+ .. include:: snippets/APosterioriVariances.rst
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
+ .. include:: snippets/BMA.rst
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJ.rst
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``Xs = ADD.get("CurrentState")[:]``
+ .. include:: snippets/CostFunctionJo.rst
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
+ .. include:: snippets/CurrentState.rst
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/Innovation.rst
See also
++++++++
References to other sections:
- :ref:`section_ref_algorithm_KalmanFilter`
+ - :ref:`section_ref_algorithm_EnsembleKalmanFilter`
- :ref:`section_ref_algorithm_UnscentedKalmanFilter`
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: CheckingPoint
-.. index:: single: ObservationOperator
-.. index:: single: NumberOfPrintedDigits
-.. index:: single: NumberOfRepetition
-.. index:: single: SetDebug
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- CheckingPoint
- *Required command*. This indicates the vector used as the state around which
- to perform the required check, noted :math:`\mathbf{x}` and similar to the
- background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/CheckingPoint.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- NumberOfPrintedDigits
- This key indicates the number of digits of precision for floating point
- printed output. The default is 5, with a minimum of 0.
-
- Example : ``{"NumberOfPrintedDigits":5}``
-
- NumberOfRepetition
- This key indicates the number of time to repeat the function evaluation. The
- default is 1.
+ .. include:: snippets/NumberOfPrintedDigits.rst
- Example : ``{"NumberOfRepetition":3}``
+ .. include:: snippets/NumberOfRepetition.rst
- SetDebug
- This key requires the activation, or not, of the debug mode during the
- function evaluation. The default is "False", the choices are "True" or
- "False".
-
- Example : ``{"SetDebug":False}``
+ .. include:: snippets/SetDebug.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
are in the following list: ["CurrentState",
"SimulatedObservationAtCurrentState"].
- Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["CurrentState"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The conditional outputs of the algorithm are the following:
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/CurrentState.rst
- Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
See also
++++++++
The general required commands, available in the editing user interface, are the
following:
- CheckingPoint
- *Required command*. This indicates the vector used as the state around which
- to perform the required check, noted :math:`\mathbf{x}` and similar to the
- background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control
- :math:`U` included in the observation, the operator has to be applied to a
- pair :math:`(X,U)`.
+ .. include:: snippets/CheckingPoint.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- AmplitudeOfInitialDirection
- This key indicates the scaling of the initial perturbation build as a vector
- used for the directional derivative around the nominal checking point. The
- default is 1, that means no scaling.
-
- Example : ``{"AmplitudeOfInitialDirection":0.5}``
-
- EpsilonMinimumExponent
- This key indicates the minimal exponent value of the power of 10 coefficient
- to be used to decrease the increment multiplier. The default is -8, and it
- has to be between 0 and -20. For example, its default value leads to
- calculate the residue of the scalar product formula with a fixed increment
- multiplied from 1.e0 to 1.e-8.
+ .. include:: snippets/AmplitudeOfInitialDirection.rst
- Example : ``{"EpsilonMinimumExponent":-12}``
+ .. include:: snippets/EpsilonMinimumExponent.rst
- InitialDirection
- This key indicates the vector direction used for the directional derivative
- around the nominal checking point. It has to be a vector. If not specified,
- this direction defaults to a random perturbation around zero of the same
- vector size than the checking point.
+ .. include:: snippets/InitialDirection.rst
- Example : ``{"InitialDirection":[0.1,0.1,100.,3}``
+ .. include:: snippets/SetSeed.rst
ResiduFormula
+ .. index:: single: ResiduFormula
+
This key indicates the residue formula that has to be used for the test. The
default choice is "Taylor", and the possible ones are "Taylor" (normalized
residue of the Taylor development of the operator, which has to decrease
the norm of the Taylor development at zero order approximation, which
approximate the gradient, and which has to remain constant).
- Example : ``{"ResiduFormula":"Taylor"}``
-
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
-
- Example : ``{"SetSeed":1000}``
+ Example :
+ ``{"ResiduFormula":"Taylor"}``
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
are in the following list: ["CurrentState", "Residu",
"SimulatedObservationAtCurrentState"].
- Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["CurrentState"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Residu
- *List of values*. Each element is the value of the particular residue
- verified during a checking algorithm, in the order of the tests.
-
- Example : ``r = ADD.get("Residu")[:]``
+ .. include:: snippets/Residu.rst
The conditional outputs of the algorithm are the following:
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/CurrentState.rst
- Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
See also
++++++++
It is theoretically reserved for observation and incremental evolution operators
cases which are linear, even if it sometimes works in "slightly" non-linear
-cases. One can verify the linearity of the observation operator with the help of
+cases. One can verify the linearity of the operators with the help of
the :ref:`section_ref_algorithm_LinearityTest`.
In case of non-linearity, even slightly marked, it is preferable to use the
-:ref:`section_ref_algorithm_ExtendedKalmanFilter` or the
-:ref:`section_ref_algorithm_UnscentedKalmanFilter`.
+:ref:`section_ref_algorithm_ExtendedKalmanFilter`, or the
+:ref:`section_ref_algorithm_EnsembleKalmanFilter` and the
+:ref:`section_ref_algorithm_UnscentedKalmanFilter`, which are more powerful.
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: EstimationOf
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/EvolutionError.rst
+
+ .. include:: snippets/EvolutionModel.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- EstimationOf
- This key allows to choose the type of estimation to be performed. It can be
- either state-estimation, with a value of "State", or parameter-estimation,
- with a value of "Parameters". The default choice is "State".
-
- Example : ``{"EstimationOf":"Parameters"}``
+ .. include:: snippets/EstimationOf.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/Analysis.rst
The conditional outputs of the algorithm are the following:
- APosterioriCorrelations
- *List of matrices*. Each element is an *a posteriori* error correlation
- matrix of the optimal state.
-
- Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
-
- APosterioriCovariance
- *List of matrices*. Each element is an *a posteriori* error covariance
- matrix :math:`\mathbf{A}*` of the optimal state.
-
- Example : ``A = ADD.get("APosterioriCovariance")[-1]``
-
- APosterioriStandardDeviations
- *List of matrices*. Each element is an *a posteriori* error standard
- deviation matrix of the optimal state.
-
- Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]``
-
- APosterioriVariances
- *List of matrices*. Each element is an *a posteriori* error variance matrix
- of the optimal state.
-
- Example : ``V = ADD.get("APosterioriVariances")[-1]``
-
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/APosterioriCorrelations.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/APosterioriCovariance.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
+ .. include:: snippets/APosterioriStandardDeviations.rst
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
+ .. include:: snippets/APosterioriVariances.rst
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
+ .. include:: snippets/BMA.rst
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJ.rst
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``Xs = ADD.get("CurrentState")[:]``
+ .. include:: snippets/CostFunctionJo.rst
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
+ .. include:: snippets/CurrentState.rst
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/Innovation.rst
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
"CostFunctionJb", "CostFunctionJo", "SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["OMA", "CurrentState"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["OMA", "CurrentState"]}``
*Tips for this algorithm:*
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/Analysis.rst
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/CostFunctionJ.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/CostFunctionJb.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
-
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJo.rst
The conditional outputs of the algorithm are the following:
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
-
- Example : ``oma = ADD.get("OMA")[-1]``
+ .. include:: snippets/OMA.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: CheckingPoint
-.. index:: single: ObservationOperator
-.. index:: single: AmplitudeOfInitialDirection
-.. index:: single: EpsilonMinimumExponent
-.. index:: single: InitialDirection
-.. index:: single: ResiduFormula
-.. index:: single: SetSeed
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- CheckingPoint
- *Required command*. This indicates the vector used as the state around which
- to perform the required check, noted :math:`\mathbf{x}` and similar to the
- background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control
- :math:`U` included in the observation, the operator has to be applied to a
- pair :math:`(X,U)`.
+ .. include:: snippets/CheckingPoint.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- AmplitudeOfInitialDirection
- This key indicates the scaling of the initial perturbation build as a vector
- used for the directional derivative around the nominal checking point. The
- default is 1, that means no scaling.
-
- Example : ``{"AmplitudeOfInitialDirection":0.5}``
+ .. include:: snippets/AmplitudeOfInitialDirection.rst
- EpsilonMinimumExponent
- This key indicates the minimal exponent value of the power of 10 coefficient
- to be used to decrease the increment multiplier. The default is -8, and it
- has to be between 0 and -20. For example, its default value leads to
- calculate the residue of the scalar product formula with a fixed increment
- multiplied from 1.e0 to 1.e-8.
+ .. include:: snippets/EpsilonMinimumExponent.rst
- Example : ``{"EpsilonMinimumExponent":-12}``
+ .. include:: snippets/InitialDirection.rst
- InitialDirection
- This key indicates the vector direction used for the directional derivative
- around the nominal checking point. It has to be a vector. If not specified,
- this direction defaults to a random perturbation around zero of the same
- vector size than the checking point.
-
- Example : ``{"InitialDirection":[0.1,0.1,100.,3}``
+ .. include:: snippets/SetSeed.rst
ResiduFormula
+ .. index:: single: ResiduFormula
+
This key indicates the residue formula that has to be used for the test. The
default choice is "CenteredDL", and the possible ones are "CenteredDL"
(residue of the difference between the function at nominal point and the
order 1 approximations of the operator, normalized by RMS to the nominal
point, which has to stay close to 0).
- Example : ``{"ResiduFormula":"CenteredDL"}``
-
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
-
- Example : ``{"SetSeed":1000}``
+ Example :
+ ``{"ResiduFormula":"CenteredDL"}``
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
are in the following list: ["CurrentState", "Residu",
"SimulatedObservationAtCurrentState"].
- Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["CurrentState"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Residu
- *List of values*. Each element is the value of the particular residue
- verified during a checking algorithm, in the order of the tests.
-
- Example : ``r = ADD.get("Residu")[:]``
+ .. include:: snippets/Residu.rst
The conditional outputs of the algorithm are the following:
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/CurrentState.rst
- Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: Minimizer
-.. index:: single: Bounds
-.. index:: single: MaximumNumberOfSteps
-.. index:: single: CostDecrementTolerance
-.. index:: single: ProjectedGradientTolerance
-.. index:: single: GradientNormTolerance
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
Minimizer
+ .. index:: single: Minimizer
+
This key allows to choose the optimization minimizer. The default choice is
"LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear
(nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer). It is
strongly recommended to stay with the default.
- Example : ``{"Minimizer":"LBFGSB"}``
-
- Bounds
- This key allows to define upper and lower bounds for every state variable
- being optimized. Bounds have to be given by a list of list of pairs of
- lower/upper bounds for each variable, with possibly ``None`` every time
- there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained optimizers.
+ Example :
+ ``{"Minimizer":"LBFGSB"}``
- Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
+ .. include:: snippets/BoundsWithNone.rst
- MaximumNumberOfSteps
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 15000, which is very similar to no limit on
- iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some optimizers, the effective stopping step can be
- slightly different due to algorithm internal control requirements.
+ .. include:: snippets/MaximumNumberOfSteps.rst
- Example : ``{"MaximumNumberOfSteps":100}``
+ .. include:: snippets/CostDecrementTolerance.rst
- CostDecrementTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the cost function decreases less than
- this tolerance at the last step. The default is 1.e-7, and it is
- recommended to adapt it to the needs on real problems.
+ .. include:: snippets/ProjectedGradientTolerance.rst
- Example : ``{"CostDecrementTolerance":1.e-7}``
-
- ProjectedGradientTolerance
- This key indicates a limit value, leading to stop successfully the iterative
- optimization process when all the components of the projected gradient are
- under this limit. It is only used for constrained optimizers. The default is
- -1, that is the internal default of each minimizer (generally 1.e-5), and it
- is not recommended to change it.
-
- Example : ``{"ProjectedGradientTolerance":-1}``
-
- GradientNormTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the norm of the gradient is under this
- limit. It is only used for non-constrained optimizers. The default is
- 1.e-5 and it is not recommended to change it.
-
- Example : ``{"GradientNormTolerance":1.e-5}``
+ .. include:: snippets/GradientNormTolerance.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum", "SimulatedObservationAtCurrentOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
*Tips for this algorithm:*
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
+ .. include:: snippets/Analysis.rst
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/CostFunctionJ.rst
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
-
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
-
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJo.rst
The conditional outputs of the algorithm are the following:
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- IndexOfOptimum
- *List of integers*. Each element is the iteration index of the optimum
- obtained at the current step the optimization algorithm. It is not
- necessarily the number of the last iteration.
-
- Example : ``i = ADD.get("IndexOfOptimum")[-1]``
-
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
-
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/BMA.rst
- InnovationAtCurrentState
- *List of vectors*. Each element is an innovation vector at current state.
+ .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
- Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``
+ .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
+ .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
- Example : ``oma = ADD.get("OMA")[-1]``
+ .. include:: snippets/CurrentOptimum.rst
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/CurrentState.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/IndexOfOptimum.rst
- SimulatedObservationAtBackground
- *List of vectors*. Each element is a vector of observation simulated from
- the background :math:`\mathbf{x}^b`.
+ .. include:: snippets/Innovation.rst
- Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+ .. include:: snippets/InnovationAtCurrentState.rst
- SimulatedObservationAtCurrentOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the optimal state obtained at the current step the optimization algorithm,
- that is, in the observation space.
+ .. include:: snippets/OMA.rst
- Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
+ .. include:: snippets/OMB.rst
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: CheckingPoint
-.. index:: single: ObservationOperator
-.. index:: single: Observers
-
The general required commands, available in the editing user interface, are the
following:
- Observers
- *Optional command*. This command allows to set internal observers, that are
- functions linked with a particular variable, which will be executed each
- time this variable is modified. It is a convenient way to monitor variables
- of interest during the data assimilation or optimization process, by
- printing or plotting it, etc. Common templates are provided to help the user
- to start or to quickly make his case.
+ .. include:: snippets/Observers.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`.
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: MaximumNumberOfSteps
-.. index:: single: MaximumNumberOfFunctionEvaluations
-.. index:: single: NumberOfInsects
-.. index:: single: SwarmVelocity
-.. index:: single: GroupRecallRate
-.. index:: single: QualityCriterion
-.. index:: single: BoxBounds
-.. index:: single: SetSeed
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
command.
The options of the algorithm are the following:
+.. index:: single: NumberOfInsects
+.. index:: single: SwarmVelocity
+.. index:: single: GroupRecallRate
+.. index:: single: QualityCriterion
+.. index:: single: BoxBounds
- MaximumNumberOfSteps
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 50, which is an arbitrary limit. It is then
- recommended to adapt this parameter to the needs on real problems.
-
- Example : ``{"MaximumNumberOfSteps":100}``
+ .. include:: snippets/MaximumNumberOfSteps_50.rst
- MaximumNumberOfFunctionEvaluations
- This key indicates the maximum number of evaluation of the cost function to
- be optimized. The default is 15000, which is an arbitrary limit. It is then
- recommended to adapt this parameter to the needs on real problems. For some
- optimizers, the effective number of function evaluations can be slightly
- different of the limit due to algorithm internal control requirements.
+ .. include:: snippets/MaximumNumberOfFunctionEvaluations.rst
- Example : ``{"MaximumNumberOfFunctionEvaluations":50}``
+ .. include:: snippets/QualityCriterion.rst
NumberOfInsects
This key indicates the number of insects or particles in the swarm. The
default is 100, which is a usual default for this algorithm.
- Example : ``{"NumberOfInsects":100}``
+ Example :
+ ``{"NumberOfInsects":100}``
SwarmVelocity
This key indicates the part of the insect velocity which is imposed by the
swarm. It is a positive floating point value. The default value is 1.
- Example : ``{"SwarmVelocity":1.}``
+ Example :
+ ``{"SwarmVelocity":1.}``
GroupRecallRate
This key indicates the recall rate at the best swarm insect. It is a
floating point value between 0 and 1. The default value is 0.5.
- Example : ``{"GroupRecallRate":0.5}``
-
- QualityCriterion
- This key indicates the quality criterion, minimized to find the optimal
- state estimate. The default is the usual data assimilation criterion named
- "DA", the augmented weighted least squares. The possible criteria has to be
- in the following list, where the equivalent names are indicated by the sign
- "=": ["AugmentedWeightedLeastSquares"="AWLS"="DA",
- "WeightedLeastSquares"="WLS", "LeastSquares"="LS"="L2",
- "AbsoluteValue"="L1", "MaximumError"="ME"].
-
- Example : ``{"QualityCriterion":"DA"}``
+ Example :
+ ``{"GroupRecallRate":0.5}``
BoxBounds
This key allows to define upper and lower bounds for *increments* on every
(``None`` is not allowed when there is no bound). This key is required and
there is no default value.
- Example : ``{"BoxBounds":[[-0.5,0.5], [0.01,2.], [0.,1.e99], [-1.e99,1.e99]]}``
+ Example :
+ ``{"BoxBounds":[[-0.5,0.5], [0.01,2.], [0.,1.e99], [-1.e99,1.e99]]}``
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
-
- Example : ``{"SetSeed":1000}``
+ .. include:: snippets/SetSeed.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
-
- Example : ``J = ADD.get("CostFunctionJ")[:]``
-
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
+ .. include:: snippets/Analysis.rst
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
+ .. include:: snippets/CostFunctionJ.rst
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJo.rst
The conditional outputs of the algorithm are the following:
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
-
- Example : ``d = ADD.get("Innovation")[-1]``
-
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
-
- Example : ``oma = ADD.get("OMA")[-1]``
-
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/BMA.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/CurrentState.rst
- SimulatedObservationAtBackground
- *List of vectors*. Each element is a vector of observation simulated from
- the background :math:`\mathbf{x}^b`.
+ .. include:: snippets/Innovation.rst
- Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+ .. include:: snippets/OMA.rst
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/OMB.rst
- Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: Observation
-.. index:: single: ObservationOperator
-.. index:: single: Quantile
-.. index:: single: Minimizer
-.. index:: single: MaximumNumberOfSteps
-.. index:: single: CostDecrementTolerance
-.. index:: single: Bounds
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- Quantile
- This key allows to define the real value of the desired quantile, between
- 0 and 1. The default is 0.5, corresponding to the median.
-
- Example : ``{"Quantile":0.5}``
-
- MaximumNumberOfSteps
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 15000, which is very similar to no limit on
- iterations. It is then recommended to adapt this parameter to the needs on
- real problems.
+ .. include:: snippets/Quantile.rst
- Example : ``{"MaximumNumberOfSteps":100}``
+ .. include:: snippets/MaximumNumberOfSteps.rst
- CostDecrementTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the cost function or the surrogate
- decreases less than this tolerance at the last step. The default is 1.e-6,
- and it is recommended to adapt it to the needs on real problems.
+ .. include:: snippets/CostDecrementTolerance_6.rst
- Example : ``{"CostDecrementTolerance":1.e-7}``
-
- Bounds
- This key allows to define upper and lower bounds for every state variable
- being optimized. Bounds have to be given by a list of list of pairs of
- lower/upper bounds for each variable, with possibly ``None`` every time
- there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained optimizers.
-
- Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
+ .. include:: snippets/BoundsWithNone.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
*Tips for this algorithm:*
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/Analysis.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/CostFunctionJ.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJo.rst
The conditional outputs of the algorithm are the following:
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
-
- Example : ``d = ADD.get("Innovation")[-1]``
-
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
-
- Example : ``oma = ADD.get("OMA")[-1]``
-
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/BMA.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/CurrentState.rst
- SimulatedObservationAtBackground
- *List of vectors*. Each element is a vector of observation simulated from
- the background :math:`\mathbf{x}^b`.
+ .. include:: snippets/Innovation.rst
- Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+ .. include:: snippets/OMA.rst
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/OMB.rst
- Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: CheckingPoint
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: SampleAsnUplet
-.. index:: single: SampleAsExplicitHyperCube
-.. index:: single: SampleAsMinMaxStepHyperCube
-.. index:: single: SampleAsIndependantRandomVariables
-.. index:: single: QualityCriterion
-.. index:: single: SetDebug
-.. index:: single: SetSeed
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- CheckingPoint
- *Required command*. This indicates the vector used as the state around which
- to perform the required check, noted :math:`\mathbf{x}` and similar to the
- background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/CheckingPoint.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
command.
The options of the algorithm are the following:
+.. index:: single: SampleAsnUplet
+.. index:: single: SampleAsExplicitHyperCube
+.. index:: single: SampleAsMinMaxStepHyperCube
+.. index:: single: SampleAsIndependantRandomVariables
SampleAsnUplet
This key describes the calculations points as a list of n-uplets, each
n-uplet being a state.
- Example : ``{"SampleAsnUplet":[[0,1,2,3],[4,3,2,1],[-2,3,-4,5]]}`` for 3 points in a state space of dimension 4
+ Example :
+ ``{"SampleAsnUplet":[[0,1,2,3],[4,3,2,1],[-2,3,-4,5]]}`` for 3 points in a state space of dimension 4
SampleAsExplicitHyperCube
This key describes the calculations points as an hyper-cube, from a given
That is then a list of the same size than the one of the state. The bounds
are included.
- Example : ``{"SampleAsMinMaxStepHyperCube":[[0.,1.,0.25],[-1,3,1]]}`` for a state space of dimension 2
+ Example :
+ ``{"SampleAsMinMaxStepHyperCube":[[0.,1.,0.25],[-1,3,1]]}`` for a state space of dimension 2
SampleAsIndependantRandomVariables
This key describes the calculations points as an hyper-cube, for which the
'uniform' of parameters (low,high), or 'weibull' of parameter (shape). That
is then a list of the same size than the one of the state.
- Example : ``{"SampleAsIndependantRandomVariables":[ ['normal',[0.,1.],3], ['uniform',[-2,2],4]]`` for a state space of dimension 2
-
- QualityCriterion
- This key indicates the quality criterion, used to find the state estimate.
- The default is the usual data assimilation criterion named "DA", the
- augmented weighted least squares. The possible criteria has to be in the
- following list, where the equivalent names are indicated by the sign "=":
- ["AugmentedWeightedLeastSquares"="AWLS"="DA", "WeightedLeastSquares"="WLS",
- "LeastSquares"="LS"="L2", "AbsoluteValue"="L1", "MaximumError"="ME"].
-
- Example : ``{"QualityCriterion":"DA"}``
+ Example :
+ ``{"SampleAsIndependantRandomVariables":[ ['normal',[0.,1.],3], ['uniform',[-2,2],4]]`` for a state space of dimension 2
- SetDebug
- This key requires the activation, or not, of the debug mode during the
- function evaluation. The default is "True", the choices are "True" or
- "False".
+ .. include:: snippets/QualityCriterion.rst
- Example : ``{"SetDebug":False}``
+ .. include:: snippets/SetDebug.rst
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
-
- Example : ``{"SetSeed":1000}``
+ .. include:: snippets/SetSeed.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
"CostFunctionJo", "CurrentState", "InnovationAtCurrentState",
"SimulatedObservationAtCurrentState"].
- Example : ``{"StoreSupplementaryCalculations":["CostFunctionJ", "SimulatedObservationAtCurrentState"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["CostFunctionJ", "SimulatedObservationAtCurrentState"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
-
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/CostFunctionJ.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJo.rst
The conditional outputs of the algorithm are the following:
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- InnovationAtCurrentState
- *List of vectors*. Each element is an innovation vector at current state.
-
- Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``
+ .. include:: snippets/CurrentState.rst
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/InnovationAtCurrentState.rst
- Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
See also
++++++++
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: CheckingPoint
-.. index:: single: ObservationOperator
-.. index:: single: AmplitudeOfInitialDirection
-.. index:: single: EpsilonMinimumExponent
-.. index:: single: InitialDirection
-.. index:: single: SetSeed
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- CheckingPoint
- *Required command*. This indicates the vector used as the state around which
- to perform the required check, noted :math:`\mathbf{x}` and similar to the
- background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control
- :math:`U` included in the observation, the operator has to be applied to a
- pair :math:`(X,U)`.
+ .. include:: snippets/CheckingPoint.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- AmplitudeOfInitialDirection
- This key indicates the scaling of the initial perturbation build as a vector
- used for the directional derivative around the nominal checking point. The
- default is 1, that means no scaling.
-
- Example : ``{"AmplitudeOfInitialDirection":0.5}``
-
- EpsilonMinimumExponent
- This key indicates the minimal exponent value of the power of 10 coefficient
- to be used to decrease the increment multiplier. The default is -8, and it
- has to be between 0 and -20. For example, its default value leads to
- calculate the residue of the scalar product formula with a fixed increment
- multiplied from 1.e0 to 1.e-8.
-
- Example : ``{"EpsilonMinimumExponent":-12}``
+ .. include:: snippets/AmplitudeOfInitialDirection.rst
- InitialDirection
- This key indicates the vector direction used for the directional derivative
- around the nominal checking point. It has to be a vector. If not specified,
- this direction defaults to a random perturbation around zero of the same
- vector size than the checking point.
+ .. include:: snippets/EpsilonMinimumExponent.rst
- Example : ``{"InitialDirection":[0.1,0.1,100.,3}``
+ .. include:: snippets/InitialDirection.rst
- SetSeed
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
-
- Example : ``{"SetSeed":1000}``
+ .. include:: snippets/SetSeed.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
are in the following list: ["CurrentState", "Residu",
"SimulatedObservationAtCurrentState"].
- Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["CurrentState"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Residu
- *List of values*. Each element is the value of the particular residue
- verified during a checking algorithm, in the order of the tests.
-
- Example : ``r = ADD.get("Residu")[:]``
+ .. include:: snippets/Residu.rst
The conditional outputs of the algorithm are the following:
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/CurrentState.rst
- Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
See also
++++++++
operators for the observation and evolution operators, as in the simple or
extended Kalman filter.
+It applies to non-linear observation and incremental evolution (process)
+operators with excellent robustness and performance qualities. It can be
+compared to the :ref:`section_ref_algorithm_EnsembleKalmanFilter`, whose
+qualities are similar for non-linear systems.
+
+In the case of linear or "slightly" non-linear operators, one can easily use the
+:ref:`section_ref_algorithm_ExtendedKalmanFilter` or even the
+:ref:`section_ref_algorithm_KalmanFilter`, which are often far less expensive
+to evaluate on small systems. One can verify the linearity of the operators
+with the help of the :ref:`section_ref_algorithm_LinearityTest`.
+
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: Bounds
-.. index:: single: ConstrainedBy
-.. index:: single: EstimationOf
-.. index:: single: Alpha
-.. index:: single: Beta
-.. index:: single: Kappa
-.. index:: single: Reconditioner
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/EvolutionError.rst
+
+ .. include:: snippets/EvolutionModel.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
The options of the algorithm are the following:
- Bounds
- This key allows to define upper and lower bounds for every state variable
- being optimized. Bounds have to be given by a list of list of pairs of
- lower/upper bounds for each variable, with extreme values every time there
- is no bound (``None`` is not allowed when there is no bound).
-
- Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}``
+ .. include:: snippets/BoundsWithExtremes.rst
- ConstrainedBy
- This key allows to choose the method to take into account the bounds
- constraints. The only one available is the "EstimateProjection", which
- projects the current state estimate on the bounds constraints.
+ .. include:: snippets/ConstrainedBy.rst
- Example : ``{"ConstrainedBy":"EstimateProjection"}``
-
- EstimationOf
- This key allows to choose the type of estimation to be performed. It can be
- either state-estimation, with a value of "State", or parameter-estimation,
- with a value of "Parameters". The default choice is "State".
-
- Example : ``{"EstimationOf":"Parameters"}``
+ .. include:: snippets/EstimationOf.rst
Alpha, Beta, Kappa, Reconditioner
+ .. index:: single: Alpha
+ .. index:: single: Beta
+ .. index:: single: Kappa
+ .. index:: single: Reconditioner
+
These keys are internal scaling parameters. "Alpha" requires a value between
1.e-4 and 1. "Beta" has an optimal value of 2 for Gaussian *a priori*
distribution. "Kappa" requires an integer value, and the right default is
obtained by setting it to 0. "Reconditioner" requires a value between 1.e-3
and 10, it defaults to 1.
- Example : ``{"Alpha":1,"Beta":2,"Kappa":0,"Reconditioner":1}``
+ Example :
+ ``{"Alpha":1,"Beta":2,"Kappa":0,"Reconditioner":1}``
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumptions. The default is a void list, none of
"APosterioriVariances", "BMA", "CostFunctionJ", "CostFunctionJb",
"CostFunctionJo", "CurrentState", "Innovation"].
- Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/Analysis.rst
The conditional outputs of the algorithm are the following:
- APosterioriCorrelations
- *List of matrices*. Each element is an *a posteriori* error correlation
- matrix of the optimal state.
-
- Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
-
- APosterioriCovariance
- *List of matrices*. Each element is an *a posteriori* error covariance
- matrix :math:`\mathbf{A}*` of the optimal state.
-
- Example : ``A = ADD.get("APosterioriCovariance")[-1]``
-
- APosterioriStandardDeviations
- *List of matrices*. Each element is an *a posteriori* error standard
- deviation matrix of the optimal state.
-
- Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]``
-
- APosterioriVariances
- *List of matrices*. Each element is an *a posteriori* error variance matrix
- of the optimal state.
-
- Example : ``V = ADD.get("APosterioriVariances")[-1]``
-
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/APosterioriCorrelations.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/APosterioriCovariance.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
+ .. include:: snippets/APosterioriStandardDeviations.rst
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
+ .. include:: snippets/APosterioriVariances.rst
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
+ .. include:: snippets/BMA.rst
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJ.rst
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``Xs = ADD.get("CurrentState")[:]``
+ .. include:: snippets/CostFunctionJo.rst
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
+ .. include:: snippets/CurrentState.rst
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/Innovation.rst
See also
++++++++
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
-.. index:: single: Background
-.. index:: single: BackgroundError
.. index:: single: ControlInput
.. index:: single: Debug
-.. index:: single: EvolutionError
-.. index:: single: EvolutionModel
.. index:: single: InputVariables
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
.. index:: single: Observer
.. index:: single: Observers
.. index:: single: Observer Template
:ref:`section_ref_options_Algorithm_Parameters` for the detailed use of this
command part.
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" type object.
+ .. include:: snippets/Background.rst
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
+ .. include:: snippets/BackgroundError.rst
ControlInput
*Optional command*. This indicates the control vector used to force the
information. The choices are limited between 0 (for False) and 1 (for
True).
- EvolutionError
- *Optional command*. This indicates the evolution error covariance matrix,
- usually noted as :math:`\mathbf{Q}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- EvolutionModel
- *Optional command*. This indicates the evolution model operator, usually
- noted :math:`M`, which describes an elementary step of evolution. Its value
- is defined as a "*Function*" type object or a "*Matrix*" type one. In the
- case of "*Function*" type, different functional forms can be used, as
- described in the section :ref:`section_ref_operator_requirements`. If there
- is some control :math:`U` included in the evolution model, the operator has
- to be applied to a pair :math:`(X,U)`.
+ .. include:: snippets/EvolutionError.rst
+
+ .. include:: snippets/EvolutionModel.rst
InputVariables
*Optional command*. This command allows to indicate the name and size of
physical variables that are bundled together in the state vector. This
information is dedicated to data processed inside an algorithm.
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control
- :math:`U` included in the observation, the operator has to be applied to a
- pair :math:`(X,U)`.
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
Observers
*Optional command*. This command allows to set internal observers, that are
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
-.. index:: single: CheckingPoint
-.. index:: single: BackgroundError
.. index:: single: Debug
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
.. index:: single: Observer
.. index:: single: Observers
.. index:: single: Observer Template
:ref:`section_ref_options_Algorithm_Parameters` for the detailed use of this
command part.
- CheckingPoint
- *Required command*. This indicates the vector used as the state around which
- to perform the required check, noted :math:`\mathbf{x}` and similar to the
- background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
+ .. include:: snippets/CheckingPoint.rst
- BackgroundError
- *Required command*. This indicates the background error covariance matrix,
- previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
- type object, a "*ScalarSparseMatrix*" type object, or a
- "*DiagonalSparseMatrix*" type object.
+ .. include:: snippets/BackgroundError.rst
Debug
*Optional command*. This defines the level of trace and intermediary debug
information. The choices are limited between 0 (for False) and 1 (for
True).
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
Observers
*Optional command*. This command allows to set internal observers, that are
Inventory of potentially available information at the output
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-.. index:: single: Dry
-.. index:: single: Forecast
-
-The set of potentially available information at the output is listed here
-regardless of algorithms, for inventory.
+The main set of potentially available information at the output is listed here
+regardless of the algorithm, as an inventory. One has to look directly at each
+algorithm's details to get the full inventory.
The optimal state is an information that is always naturally available after an
optimization or a data assimilation calculation. It is indicated by the
following keywords:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/Analysis.rst
The following variables are input variables. They are made available to the
user at the output in order to facilitate the writing of post-processing
procedures, and are conditioned by a user request using a boolean "*Stored*"
-at the input.
-
- Background
- *Vector*, whose availability is conditioned by "*Stored*" at the input. It
- is the background vector :math:`\mathbf{x}^b`.
-
- Example : ``Xb = ADD.get("Background")``
-
- BackgroundError
- *Matrix*, whose availability is conditioned by "*Stored*" at the input. It
- is the matrix :math:`\mathbf{B}` of *a priori* background errors
- covariances.
+at the input. All these returned input variables can be obtained with the
+standard command ".get(...)", which returns the unique object given as input.
- Example : ``B = ADD.get("BackgroundError")``
+ .. include:: snippets/Background.rst
- EvolutionError
- *Matrix*, whose availability is conditioned by "*Stored*" at the input. It
- is the matrix :math:`\mathbf{M}` of *a priori* evolution errors covariances.
+ .. include:: snippets/BackgroundError.rst
- Example : ``M = ADD.get("EvolutionError")``
+ .. include:: snippets/EvolutionError.rst
- Observation
- *Vector*, whose availability is conditioned by "*Stored*" at the input. It
- is the observation vector :math:`\mathbf{y}^o`.
+ .. include:: snippets/Observation.rst
- Example : ``Yo = ADD.get("Observation")``
-
- ObservationError
- *Matrix*, whose availability is conditioned by "*Stored*" at the input. It
- is the matrix :math:`\mathbf{R}` of *a priori* observation errors
- covariances.
-
- Example : ``R = ADD.get("ObservationError")``
+ .. include:: snippets/ObservationError.rst
All other information are conditioned by the algorithm and/or the user requests
-of availability. They are the following, in alphabetical order:
-
- APosterioriCorrelations
- *List of matrices*. Each element is an *a posteriori* error correlations
- matrix of the optimal state, coming from the :math:`\mathbf{A}*` covariance
- matrix.
-
- Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
-
- APosterioriCovariance
- *List of matrices*. Each element is an *a posteriori* error covariance
- matrix :math:`\mathbf{A}*` of the optimal state.
-
- Example : ``A = ADD.get("APosterioriCovariance")[-1]``
-
- APosterioriStandardDeviations
- *List of matrices*. Each element is an *a posteriori* error standard errors
- diagonal matrix of the optimal state, coming from the :math:`\mathbf{A}*`
- covariance matrix.
-
- Example : ``S = ADD.get("APosterioriStandardDeviations")[-1]``
-
- APosterioriVariances
- *List of matrices*. Each element is an *a posteriori* error variances
- diagonal matrix of the optimal state, coming from the :math:`\mathbf{A}*`
- covariance matrix.
-
- Example : ``V = ADD.get("APosterioriVariances")[-1]``
-
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
-
- Example : ``bma = ADD.get("BMA")[-1]``
-
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
-
- Example : ``J = ADD.get("CostFunctionJ")[:]``
-
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
-
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
-
- CostFunctionJAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J`.
- At each step, the value corresponds to the optimal state found from the
- beginning.
-
- Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``
-
- CostFunctionJbAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part. At each step, the value
- corresponds to the optimal state found from the beginning.
-
- Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``
-
- CostFunctionJoAtCurrentOptimum
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part. At each step, the value
- corresponds to the optimal state found from the beginning.
-
- Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``
-
- CurrentOptimum
- *List of vectors*. Each element is the optimal state obtained at the current
- step of the optimization algorithm. It is not necessarily the last state.
-
- Example : ``Xo = ADD.get("CurrentOptimum")[:]``
-
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
-
- Example : ``Xs = ADD.get("CurrentState")[:]``
-
- IndexOfOptimum
- *List of integers*. Each element is the iteration index of the optimum
- obtained at the current step the optimization algorithm. It is not
- necessarily the number of the last iteration.
-
- Example : ``i = ADD.get("MahalanobisConsistency")[-1]``
-
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
-
- Example : ``d = ADD.get("Innovation")[-1]``
+of availability. The main ones are the following, in alphabetical order:
- InnovationAtCurrentState
- *List of vectors*. Each element is an innovation vector at current state.
+ .. include:: snippets/APosterioriCorrelations.rst
- Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``
+ .. include:: snippets/APosterioriCovariance.rst
- MahalanobisConsistency
- *List of values*. Each element is a value of the Mahalanobis quality
- indicator.
+ .. include:: snippets/APosterioriStandardDeviations.rst
- Example : ``m = ADD.get("MahalanobisConsistency")[-1]``
+ .. include:: snippets/APosterioriVariances.rst
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
+ .. include:: snippets/BMA.rst
- Example : ``oma = ADD.get("OMA")[-1]``
+ .. include:: snippets/CostFunctionJ.rst
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/CostFunctionJb.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/CostFunctionJo.rst
- Residu
- *List of values*. Each element is the value of the particular residu
- verified during a checking algorithm, in the order of the tests.
+ .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
- Example : ``r = ADD.get("Residu")[:]``
+ .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
- SigmaBck2
- *List of values*. Each element is a value of the quality indicator
- :math:`(\sigma^b)^2` of the background part.
+ .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
- Example : ``sb2 = ADD.get("SigmaBck")[-1]``
+ .. include:: snippets/CurrentOptimum.rst
- SigmaObs2
- *List of values*. Each element is a value of the quality indicator
- :math:`(\sigma^o)^2` of the observation part.
+ .. include:: snippets/CurrentState.rst
- Example : ``so2 = ADD.get("SigmaObs")[-1]``
+ .. include:: snippets/IndexOfOptimum.rst
- SimulatedObservationAtBackground
- *List of vectors*. Each element is a vector of observation simulated from
- the background :math:`\mathbf{x}^b`. It is the forecast using the
- background, and it is sometimes called "*Dry*".
+ .. include:: snippets/Innovation.rst
- Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+ .. include:: snippets/InnovationAtCurrentState.rst
- SimulatedObservationAtCurrentOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the optimal state obtained at the current step the optimization algorithm,
- that is, in the observation space.
+ .. include:: snippets/OMA.rst
- Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
+ .. include:: snippets/OMB.rst
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/Residu.rst
- Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or the optimal state :math:`\mathbf{x}^a`. It is the forecast
- using the analysis or the optimal state, and it is sometimes called
- "*Forecast*".
+ .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
- SimulationQuantiles
- *List of vectors*. Each element is a vector corresponding to the observed
- state which realize the required quantile, in the same order than the
- quantiles required by the user.
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
- Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]``
+ .. include:: snippets/SimulationQuantiles.rst
.. [#] For more information on PARAVIS, see the *PARAVIS module* and its integrated help available from the main menu *Help* of the SALOME platform.
--- /dev/null
+.. index:: single: APosterioriCorrelations
+
+APosterioriCorrelations
+ *List of matrices*. Each element is an *a posteriori* error correlations
+ matrix of the optimal state, coming from the :math:`\mathbf{A}*` covariance
+ matrix.
+
+ Example :
+ ``C = ADD.get("APosterioriCorrelations")[-1]``
--- /dev/null
+.. index:: single: APosterioriCovariance
+
+APosterioriCovariance
+ *List of matrices*. Each element is an *a posteriori* error covariance
+ matrix :math:`\mathbf{A}*` of the optimal state.
+
+ Example :
+ ``A = ADD.get("APosterioriCovariance")[-1]``
--- /dev/null
+.. index:: single: APosterioriStandardDeviations
+
+APosterioriStandardDeviations
+ *List of matrices*. Each element is an *a posteriori* diagonal matrix of
+ error standard deviations of the optimal state, coming from the
+ :math:`\mathbf{A}*` covariance matrix.
+
+ Example :
+ ``S = ADD.get("APosterioriStandardDeviations")[-1]``
--- /dev/null
+.. index:: single: APosterioriVariances
+
+APosterioriVariances
+ *List of matrices*. Each element is an *a posteriori* diagonal matrix of
+ error variances of the optimal state, coming from the
+ :math:`\mathbf{A}*` covariance matrix.
+
+ Example :
+ ``V = ADD.get("APosterioriVariances")[-1]``
--- /dev/null
+.. index:: single: AmplitudeOfInitialDirection
+
+AmplitudeOfInitialDirection
+ This key indicates the scaling of the initial perturbation, built as a vector
+ used for the directional derivative around the nominal checking point. The
+ default is 1, which means no scaling.
+
+ Example :
+ ``{"AmplitudeOfInitialDirection":0.5}``
--- /dev/null
+.. index:: single: Analysis
+
+Analysis
+ *List of vectors*. Each element of this variable is an optimal state
+ :math:`\mathbf{x}*` in optimization or an analysis :math:`\mathbf{x}^a` in
+ data assimilation.
+
+ Example :
+ ``Xa = ADD.get("Analysis")[-1]``
--- /dev/null
+.. index:: single: BMA
+
+BMA
+ *List of vectors*. Each element is a vector of difference between the
+ background and the optimal state.
+
+ Example :
+ ``bma = ADD.get("BMA")[-1]``
--- /dev/null
+.. index:: single: Background
+
+Background
+ *Required command*. The variable indicates the background or initial vector
+ used, previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
+ "*Vector*" or a *VectorSerie*" type object. Its availability in output is
+ conditioned by the boolean "*Stored*" associated with input.
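+
+ Example :
+ ``Xb = ADD.get("Background")``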
--- /dev/null
+.. index:: single: BackgroundError
+
+BackgroundError
+ *Required command*. This indicates the background error covariance matrix,
+ previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
+ type object, a "*ScalarSparseMatrix*" type object, or a
+ "*DiagonalSparseMatrix*" type object, as described in detail in the section
+ :ref:`section_ref_covariance_requirements`. Its availability in output is
+ conditioned by the boolean "*Stored*" associated with input.
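+
+ Example :
+ ``B = ADD.get("BackgroundError")``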
--- /dev/null
+.. index:: single: Bounds
+
+Bounds
+ This key allows to define upper and lower bounds for every state variable
+ being optimized. Bounds have to be given by a list of lists of pairs of
+ lower/upper bounds for each variable, with extreme values every time there
+ is no bound (``None`` is not allowed when there is no bound).
+
+ Example :
+ ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}``
--- /dev/null
+.. index:: single: Bounds
+
+Bounds
+ This key allows to define upper and lower bounds for every state variable
+ being optimized. Bounds have to be given by a list of lists of pairs of
+ lower/upper bounds for each variable, with possibly ``None`` every time
+ there is no bound. The bounds can always be specified, but they are taken
+ into account only by the constrained optimizers.
+
+ Example :
+ ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
--- /dev/null
+.. index:: single: CheckingPoint
+
+CheckingPoint
+ *Required command*. The variable indicates the vector used as the state
+ around which to perform the required check, noted :math:`\mathbf{x}` and
+ similar to the background :math:`\mathbf{x}^b`. It is defined as a "*Vector*"
+ type object. Its availability in output is conditioned by the boolean
+ "*Stored*" associated with input.
--- /dev/null
+.. index:: single: ConstrainedBy
+
+ConstrainedBy
+ This key allows to choose the method to take into account the bounds
+ constraints. The only one available is the "EstimateProjection", which
+ projects the current state estimate on the bounds constraints.
+
+ Example :
+ ``{"ConstrainedBy":"EstimateProjection"}``
--- /dev/null
+.. index:: single: CostDecrementTolerance
+
+CostDecrementTolerance
+ This key indicates a limit value, leading to successfully stop the iterative
+ optimization process when the cost function decreases by less than this
+ tolerance at the last step. The default is 1.e-7, and it is recommended to
+ adapt it to the needs of real problems.
+
+ Example :
+ ``{"CostDecrementTolerance":1.e-7}``
--- /dev/null
+.. index:: single: CostDecrementTolerance
+
+CostDecrementTolerance
+ This key indicates a limit value, leading to successfully stop the iterative
+ optimization process when the cost function decreases by less than this
+ tolerance at the last step. The default is 1.e-6, and it is recommended to
+ adapt it to the needs of real problems.
+
+ Example :
+ ``{"CostDecrementTolerance":1.e-6}``
--- /dev/null
+.. index:: single: CostFunctionJ
+
+CostFunctionJ
+ *List of values*. Each element is a value of the chosen error function
+ :math:`J`.
+
+ Example :
+ ``J = ADD.get("CostFunctionJ")[:]``
--- /dev/null
+.. index:: single: CostFunctionJAtCurrentOptimum
+
+CostFunctionJAtCurrentOptimum
+ *List of values*. Each element is a value of the error function :math:`J`.
+ At each step, the value corresponds to the optimal state found from the
+ beginning.
+
+ Example :
+ ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``
--- /dev/null
+.. index:: single: CostFunctionJb
+
+CostFunctionJb
+ *List of values*. Each element is a value of the error function :math:`J^b`,
+ that is of the background difference part. If this part does not exist in the
+ error function, its value is zero.
+
+ Example :
+ ``Jb = ADD.get("CostFunctionJb")[:]``
--- /dev/null
+.. index:: single: CostFunctionJbAtCurrentOptimum
+
+CostFunctionJbAtCurrentOptimum
+ *List of values*. Each element is a value of the error function :math:`J^b`. At
+ each step, the value corresponds to the optimal state found from the
+ beginning. If this part does not exist in the error function, its value is
+ zero.
+
+ Example :
+ ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``
--- /dev/null
+.. index:: single: CostFunctionJo
+
+CostFunctionJo
+ *List of values*. Each element is a value of the error function :math:`J^o`,
+ that is of the observation difference part.
+
+ Example :
+ ``Jo = ADD.get("CostFunctionJo")[:]``
--- /dev/null
+.. index:: single: CostFunctionJoAtCurrentOptimum
+
+CostFunctionJoAtCurrentOptimum
+ *List of values*. Each element is a value of the error function :math:`J^o`,
+ that is of the observation difference part. At each step, the value
+ corresponds to the optimal state found from the beginning.
+
+ Example :
+ ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``
--- /dev/null
+.. index:: single: CurrentOptimum
+
+CurrentOptimum
+ *List of vectors*. Each element is the optimal state obtained at the current
+ step of the optimization algorithm. It is not necessarily the last state.
+
+ Example :
+ ``Xo = ADD.get("CurrentOptimum")[:]``
--- /dev/null
+.. index:: single: CurrentState
+
+CurrentState
+ *List of vectors*. Each element is a usual state vector used during the
+ iterative algorithm procedure.
+
+ Example :
+ ``Xs = ADD.get("CurrentState")[:]``
--- /dev/null
+.. index:: single: EpsilonMinimumExponent
+
+EpsilonMinimumExponent
+ This key indicates the minimal exponent value of the power of 10 coefficient
+ to be used to decrease the increment multiplier. The default is -8, and it
+ has to be between 0 and -20. For example, its default value leads to
+ calculate the residue of the scalar product formula with a fixed increment
+ multiplied from 1.e0 to 1.e-8.
+
+ Example :
+ ``{"EpsilonMinimumExponent":-12}``
--- /dev/null
+.. index:: single: EstimationOf
+
+EstimationOf
+ This key allows to choose the type of estimation to be performed. It can be
+ either state-estimation, with a value of "State", or parameter-estimation,
+ with a value of "Parameters". The default choice is "State".
+
+ Example :
+ ``{"EstimationOf":"Parameters"}``
--- /dev/null
+.. index:: single: EvolutionError
+
+EvolutionError
+ *Matrix*. The variable indicates the evolution error covariance matrix,
+ usually noted as :math:`\mathbf{Q}`. It is defined as a "*Matrix*" type
+ object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
+ type object, as described in detail in the section
+ :ref:`section_ref_covariance_requirements`. Its availability in output is
+ conditioned by the boolean "*Stored*" associated with input.
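+
+ Example (illustrative only, assuming the input has been declared with the
+ boolean "*Stored*" set to "True") :
+ ``Q = ADD.get("EvolutionError")[-1]``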
--- /dev/null
+.. index:: single: EvolutionModel
+
+EvolutionModel
+ *Operator*. The variable indicates the evolution model operator, usually
+ noted :math:`M`, which describes an elementary step of evolution. Its value
+ is defined as a "*Function*" type object or a "*Matrix*" type one. In the
+ case of "*Function*" type, different functional forms can be used, as
+ described in the section :ref:`section_ref_operator_requirements`. If there
+ is some control :math:`U` included in the evolution model, the operator has
+ to be applied to a pair :math:`(X,U)`.
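+
+ A minimal, purely illustrative sketch of an evolution model given in
+ "*Function*" form (the function name and calling convention depend on the
+ functional form chosen, as described in
+ :ref:`section_ref_operator_requirements`; the model itself is an arbitrary
+ placeholder)::
+
+     import numpy
+
+     def DirectOperator(X):
+         "One elementary evolution step: illustrative damped persistence"
+         X = numpy.ravel(X)
+         return 0.95 * X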
--- /dev/null
+.. index:: single: GradientNormTolerance
+
+GradientNormTolerance
+ This key indicates a limit value that causes the iterative optimization
+ process to stop successfully when the norm of the gradient falls below this
+ limit. It is only used for unconstrained optimizers. The default is 1.e-5,
+ and it is not recommended to change it.
+
+ Example :
+ ``{"GradientNormTolerance":1.e-5}``
--- /dev/null
+.. index:: single: IndexOfOptimum
+
+IndexOfOptimum
+ *List of integers*. Each element is the iteration index of the optimum
+ obtained at the current step of the optimization algorithm. It is not
+ necessarily the number of the last iteration.
+
+ Example :
+ ``i = ADD.get("IndexOfOptimum")[-1]``
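+
+ By definition, this index points into the list of current states. A purely
+ illustrative consistency sketch, assuming the corresponding variables have
+ also been stored::
+
+     i  = ADD.get("IndexOfOptimum")[-1]
+     Xs = ADD.get("CurrentState")[:]
+     Xo = ADD.get("CurrentOptimum")[-1]
+     # The optimal state at the current step is the stored state of index i
+     print(Xo, Xs[i])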
--- /dev/null
+.. index:: single: InitialDirection
+
+InitialDirection
+ This key indicates the vector direction used for the directional derivative
+ around the nominal checking point. It has to be a vector. If not specified,
+ this direction defaults to a random perturbation around zero, of the same
+ size as the checking point vector.
+
+ Example :
+ ``{"InitialDirection":[0.1,0.1,100.,3}``
--- /dev/null
+.. index:: single: Innovation
+
+Innovation
+ *List of vectors*. Each element is an innovation vector, which is, in the
+ static case, the difference between the optimal state and the background,
+ and, in the dynamic case, the evolution increment.
+
+ Example :
+ ``d = ADD.get("Innovation")[-1]``
--- /dev/null
+.. index:: single: InnovationAtCurrentState
+
+InnovationAtCurrentState
+ *List of vectors*. Each element is an innovation vector at current state.
+
+ Example :
+ ``ds = ADD.get("InnovationAtCurrentState")[-1]``
--- /dev/null
+.. index:: single: MahalanobisConsistency
+
+MahalanobisConsistency
+ *List of values*. Each element is a value of the Mahalanobis quality
+ indicator.
+
+ Example :
+ ``m = ADD.get("MahalanobisConsistency")[-1]``
--- /dev/null
+.. index:: single: MaximumNumberOfFunctionEvaluations
+
+MaximumNumberOfFunctionEvaluations
+ This key indicates the maximum number of evaluations of the cost function to
+ be optimized. The default is 15000, which is an arbitrary limit. It is then
+ recommended to adapt this parameter to the needs of real problems. For some
+ optimizers, the effective number of function evaluations can be slightly
+ different from this limit due to algorithm internal control requirements.
+
+ Example :
+ ``{"MaximumNumberOfFunctionEvaluations":50}``
--- /dev/null
+.. index:: single: MaximumNumberOfSteps
+
+MaximumNumberOfSteps
+ This key indicates the maximum number of iterations allowed for iterative
+ optimization. The default is 15000, which is very similar to no limit on
+ iterations. It is then recommended to adapt this parameter to the needs of
+ real problems. For some optimizers, the effective stopping step can be
+ slightly different from this limit due to algorithm internal control
+ requirements.
+
+ Example :
+ ``{"MaximumNumberOfSteps":100}``
--- /dev/null
+.. index:: single: MaximumNumberOfSteps
+
+MaximumNumberOfSteps
+ This key indicates the maximum number of iterations allowed for iterative
+ optimization. The default is 50, which is an arbitrary limit. It is then
+ recommended to adapt this parameter to the needs of real problems.
+
+ Example :
+ ``{"MaximumNumberOfSteps":50}``
--- /dev/null
+.. index:: single: Minimizer
+
+Minimizer
+ This key allows one to choose the optimization minimizer. The default choice
+ is "BOBYQA", and the possible ones are
+ "BOBYQA" (minimization with or without constraints by quadratic approximation [Powell09]_),
+ "COBYLA" (minimization with or without constraints by linear approximation [Powell94]_ [Powell98]_),
+ "NEWUOA" (minimization with or without constraints by iterative quadratic approximation [Powell04]_),
+ "POWELL" (unconstrained minimization using conjugate directions [Powell64]_),
+ "SIMPLEX" (minimization with or without constraints using the Nelder-Mead simplex algorithm [Nelder65]_),
+ "SUBPLEX" (minimization with or without constraints using Nelder-Mead on a sequence of subspaces [Rowan90]_).
+ Remark: the "POWELL" method performs a dual outer/inner loop optimization,
+ leading to less control on the number of cost function evaluations because
+ it is the outer loop limit that is controlled. If precise control on this
+ number of cost function evaluations is required, choose another minimizer.
+
+ Example :
+ ``{"Minimizer":"BOBYQA"}``
--- /dev/null
+.. index:: single: NumberOfMembers
+
+NumberOfMembers
+ This key indicates the number of members used to realize the ensemble
+ method. The default is 100, and it is recommended to adapt it to the needs
+ of real problems.
+
+ Example :
+ ``{"NumberOfMembers":100}``
--- /dev/null
+.. index:: single: NumberOfPrintedDigits
+
+NumberOfPrintedDigits
+ This key indicates the number of digits of precision for floating point
+ printed output. The default is 5, with a minimum of 0.
+
+ Example :
+ ``{"NumberOfPrintedDigits":5}``
--- /dev/null
+.. index:: single: NumberOfRepetition
+
+NumberOfRepetition
+ This key indicates the number of times to repeat the function evaluation.
+ The default is 1.
+
+ Example :
+ ``{"NumberOfRepetition":3}``
--- /dev/null
+.. index:: single: NumberOfSamplesForQuantiles
+
+NumberOfSamplesForQuantiles
+ This key indicates the number of simulations to be done in order to estimate
+ the quantiles. This option is useful only if the supplementary calculation
+ "SimulationQuantiles" has been chosen. The default is 100, which is often
+ sufficient for correct estimation of common quantiles at 5%, 10%, 90% or
+ 95%.
+
+ Example :
+ ``{"NumberOfSamplesForQuantiles":100}``
--- /dev/null
+.. index:: single: OMA
+
+OMA
+ *List of vectors*. Each element is a vector of difference between the
+ observation and the optimal state in the observation space.
+
+ Example :
+ ``oma = ADD.get("OMA")[-1]``
--- /dev/null
+.. index:: single: OMB
+
+OMB
+ *List of vectors*. Each element is a vector of difference between the
+ observation and the background state in the observation space.
+
+ Example :
+ ``omb = ADD.get("OMB")[-1]``
--- /dev/null
+.. index:: single: Observation
+
+Observation
+ *Vector*. The variable indicates the observation vector used for data
+ assimilation or optimization, usually noted as :math:`\mathbf{y}^o`. Its
+ value is defined as a "*Vector*" or a "*VectorSerie*" type object. Its
+ availability in output is conditioned by the boolean "*Stored*" associated
+ with input.
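+
+ Example (illustrative only, assuming the input has been declared with the
+ boolean "*Stored*" set to "True") :
+ ``Yobs = ADD.get("Observation")[-1]``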
--- /dev/null
+.. index:: single: ObservationError
+
+ObservationError
+ *Matrix*. The variable indicates the observation error covariance matrix,
+ usually noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
+ object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
+ type object, as described in detail in the section
+ :ref:`section_ref_covariance_requirements`. Its availability in output is
+ conditioned by the boolean "*Stored*" associated with input.
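+
+ Example (illustrative only, assuming the input has been declared with the
+ boolean "*Stored*" set to "True") :
+ ``R = ADD.get("ObservationError")[-1]``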
--- /dev/null
+.. index:: single: ObservationOperator
+
+ObservationOperator
+ *Operator*. The variable indicates the observation operator, usually noted as
+ :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
+ results :math:`\mathbf{y}` to be compared to observations
+ :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or a
+ "*Matrix*" type one. In the case of "*Function*" type, different functional
+ forms can be used, as described in the section
+ :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
+ included in the observation, the operator has to be applied to a pair
+ :math:`(X,U)`.
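+
+ A minimal, purely illustrative sketch of an observation operator given in
+ "*Function*" form (the function name and calling convention depend on the
+ functional form chosen, as described in
+ :ref:`section_ref_operator_requirements`; the operator itself is an
+ arbitrary placeholder that observes the first two state components)::
+
+     import numpy
+
+     def DirectOperator(X):
+         "Observe the first two components of the state (illustration)"
+         X = numpy.ravel(X)
+         return X[:2]
+
+ In the simplest linear cases, the operator can equivalently be given as a
+ "*Matrix*" type object, for example an identity or a selection matrix.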
--- /dev/null
+.. index:: single: Observers
+
+Observers
+ *List of functions linked to variables*. This command allows one to set
+ internal observers, that is, functions linked with a particular variable,
+ which will be executed each time this variable is modified. It is a
+ convenient way to monitor variables of interest during the data assimilation
+ or optimization process, by printing or plotting them, for example. Common
+ templates are provided to help the user get started or quickly build a case.
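+
+ A minimal, purely illustrative observer body, assuming the common template
+ convention in which the monitored variable is available as ``var`` and a
+ description string as ``info`` (see the requirements for *observer*
+ functions)::
+
+     # Print the last value of the monitored variable each time it changes
+     print(str(info) + " " + str(var[-1]))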
--- /dev/null
+.. index:: single: ProjectedGradientTolerance
+
+ProjectedGradientTolerance
+ This key indicates a limit value that causes the iterative optimization
+ process to stop successfully when all the components of the projected
+ gradient fall below this limit. It is only used for constrained optimizers.
+ The default is -1, that is, the internal default of each minimizer
+ (generally 1.e-5), and it is not recommended to change it.
+
+ Example :
+ ``{"ProjectedGradientTolerance":-1}``
--- /dev/null
+.. index:: single: QualityCriterion
+
+QualityCriterion
+ This key indicates the quality criterion, which is minimized to find the
+ optimal state estimate. The default is the usual data assimilation criterion
+ named "DA", the augmented weighted least squares. The possible criteria have
+ to be chosen in the following list, where equivalent names are indicated by
+ the sign "<=>":
+ ["AugmentedWeightedLeastSquares"<=>"AWLS"<=>"DA",
+ "WeightedLeastSquares"<=>"WLS", "LeastSquares"<=>"LS"<=>"L2",
+ "AbsoluteValue"<=>"L1", "MaximumError"<=>"ME"].
+
+ Example :
+ ``{"QualityCriterion":"DA"}``
--- /dev/null
+.. index:: single: Quantile
+
+Quantile
+ This key allows one to define the real value of the desired quantile,
+ between 0 and 1. The default is 0.5, corresponding to the median.
+
+ Example :
+ ``{"Quantile":0.5}``
--- /dev/null
+.. index:: single: Quantiles
+
+Quantiles
+ This list indicates the values of the quantiles, between 0 and 1, to be
+ estimated by simulation around the optimal state. The sampling uses a
+ multivariate Gaussian random sampling, directed by the *a posteriori*
+ covariance matrix. This option is useful only if the supplementary
+ calculation "SimulationQuantiles" has been chosen. The default is an empty
+ list.
+
+ Example :
+ ``{"Quantiles":[0.1,0.9]}``
--- /dev/null
+.. index:: single: Residu
+
+Residu
+ *List of values*. Each element is the value of the particular residue
+ verified during a checking algorithm, in the order of the tests.
+
+ Example :
+ ``r = ADD.get("Residu")[:]``
--- /dev/null
+.. index:: single: SetDebug
+
+SetDebug
+ This key controls whether the debug mode is activated during the function or
+ operator evaluation. The default is "False", and the choices are "True" or
+ "False".
+
+ Example :
+ ``{"SetDebug":False}``
--- /dev/null
+.. index:: single: SetSeed
+
+SetSeed
+ This key allows one to give an integer in order to fix the seed of the
+ random generator used in the algorithm. A convenient value is for example
+ 1000. By default, the seed is left uninitialized, and so uses the default
+ initialization of the computer, which then changes with each study. To
+ ensure the reproducibility of results involving random samples, it is
+ strongly advised to initialize the seed.
+
+ Example :
+ ``{"SetSeed":1000}``
--- /dev/null
+.. index:: single: SigmaBck2
+
+SigmaBck2
+ *List of values*. Each element is a value of the quality indicator
+ :math:`(\sigma^b)^2` of the background part.
+
+ Example :
+ ``sb2 = ADD.get("SigmaBck2")[-1]``
--- /dev/null
+.. index:: single: SigmaObs2
+
+SigmaObs2
+ *List of values*. Each element is a value of the quality indicator
+ :math:`(\sigma^o)^2` of the observation part.
+
+ Example :
+ ``so2 = ADD.get("SigmaObs2")[-1]``
--- /dev/null
+.. index:: single: SimulatedObservationAtBackground
+.. index:: single: Dry
+
+SimulatedObservationAtBackground
+ *List of vectors*. Each element is a vector of observation simulated by the
+ observation operator from the background :math:`\mathbf{x}^b`. It is the
+ forecast from the background, and it is sometimes called "*Dry*".
+
+ Example :
+ ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
--- /dev/null
+.. index:: single: SimulatedObservationAtCurrentOptimum
+
+SimulatedObservationAtCurrentOptimum
+ *List of vectors*. Each element is a vector of observation simulated from
+ the optimal state obtained at the current step of the optimization
+ algorithm, that is, in the observation space.
+
+ Example :
+ ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
--- /dev/null
+.. index:: single: SimulatedObservationAtCurrentState
+
+SimulatedObservationAtCurrentState
+ *List of vectors*. Each element is an observed vector simulated by the
+ observation operator from the current state, that is, in the observation
+ space.
+
+ Example :
+ ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
--- /dev/null
+.. index:: single: SimulatedObservationAtOptimum
+.. index:: single: Forecast
+
+SimulatedObservationAtOptimum
+ *List of vectors*. Each element is a vector of observation simulated by the
+ observation operator from the analysis or optimal state :math:`\mathbf{x}^a`.
+ It is the forecast from the analysis or the optimal state, and it is
+ sometimes called "*Forecast*".
+
+ Example :
+ ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
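+
+ These simulated observations are directly related to the "*OMA*" differences
+ documented above. A purely illustrative consistency sketch, assuming the
+ observation has been stored and the corresponding variables are available::
+
+     import numpy
+     yobs = numpy.ravel(ADD.get("Observation")[-1])
+     hxa  = numpy.ravel(ADD.get("SimulatedObservationAtOptimum")[-1])
+     oma  = numpy.ravel(ADD.get("OMA")[-1])
+     # OMA is the observation minus the simulated observation at the optimum
+     print(numpy.allclose(oma, yobs - hxa))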
--- /dev/null
+.. index:: single: SimulationForQuantiles
+
+SimulationForQuantiles
+ This key indicates the type of simulation one wants to perform for each
+ perturbation in order to estimate the quantiles: linear (with the tangent
+ observation operator applied to perturbation increments around the optimal
+ state) or non-linear (with the standard observation operator applied to the
+ perturbed states). It mainly changes the time of each elementary
+ calculation, usually longer in the non-linear case than in the linear one.
+ This option is useful only if the supplementary calculation
+ "SimulationQuantiles" has been chosen. The default value is "Linear", and
+ the possible choices are "Linear" and "NonLinear".
+
+ Example :
+ ``{"SimulationForQuantiles":"Linear"}``
--- /dev/null
+.. index:: single: SimulationQuantiles
+
+SimulationQuantiles
+ *List of vectors*. Each element is a vector corresponding to the observed
+ state realizing the required quantile, in the same order as the quantile
+ values required by the user.
+
+ Example :
+ ``sQuantiles = ADD.get("SimulationQuantiles")[:]``
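+
+ A purely illustrative sketch of the related settings and of the retrieval of
+ the result, using the keys documented above (the values are arbitrary, and
+ the dictionary stands for the content of "*AlgorithmParameters*")::
+
+     # Request the quantile simulation as a supplementary calculation
+     Parameters = {
+         "StoreSupplementaryCalculations" : ["SimulationQuantiles"],
+         "Quantiles"                      : [0.05, 0.5, 0.95],
+         "NumberOfSamplesForQuantiles"    : 100,
+         "SimulationForQuantiles"         : "Linear",
+         "SetSeed"                        : 1000,
+     }
+     # ... after the calculation has been run:
+     sQuantiles = ADD.get("SimulationQuantiles")[:]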
--- /dev/null
+.. index:: single: StateVariationTolerance
+
+StateVariationTolerance
+ This key indicates the maximum relative variation of the state for stopping
+ by convergence on the state. The default is 1.e-4, and it is recommended to
+ adapt it to the needs of real problems.
+
+ Example :
+ ``{"StateVariationTolerance":1.e-4}``