..
- Copyright (C) 2008-2015 EDF R&D
+ Copyright (C) 2008-2018 EDF R&D
This file is part of SALOME ADAO module.
point for the variational minimization.
In all cases, it is recommended to prefer the :ref:`section_ref_algorithm_3DVAR`
-for its stability as for its behaviour during optimization.
+for its stability as for its behavior during optimization.
Optional and required commands
++++++++++++++++++++++++++++++
-.. index:: single: Background
-.. index:: single: Observation
-.. index:: single: ObservationError
-.. index:: single: ObservationOperator
-.. index:: single: Minimizer
-.. index:: single: Bounds
-.. index:: single: MaximumNumberOfSteps
-.. index:: single: CostDecrementTolerance
-.. index:: single: ProjectedGradientTolerance
-.. index:: single: GradientNormTolerance
-.. index:: single: StoreInternalVariables
-.. index:: single: StoreSupplementaryCalculations
-
The general required commands, available in the editing user interface, are the
following:
- Background
- *Required command*. This indicates the background or initial vector used,
- previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
- "*Vector*" or a *VectorSerie*" type object.
-
- Observation
- *Required command*. This indicates the observation vector used for data
- assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" or a *VectorSerie* type object.
-
- ObservationError
- *Required command*. This indicates the observation error covariance matrix,
- previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
- object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
- type object.
-
- ObservationOperator
- *Required command*. This indicates the observation operator, previously
- noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
- results :math:`\mathbf{y}` to be compared to observations
- :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
- a "*Matrix*" type one. In the case of "*Function*" type, different
- functional forms can be used, as described in the section
- :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
- included in the observation, the operator has to be applied to a pair
- :math:`(X,U)`.
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
The general optional commands, available in the editing user interface, are
-indicated in :ref:`section_ref_assimilation_keywords`. In particular, the
-optional command "*AlgorithmParameters*" allows to choose the specific options,
+indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
+of the command "*AlgorithmParameters*" allow choosing the specific options,
described hereafter, of the algorithm. See
-:ref:`section_ref_options_AlgorithmParameters` for the good use of this command.
+:ref:`section_ref_options_Algorithm_Parameters` for the proper use of this
+command.
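+
+As an illustrative sketch only, such parameters can be passed through the
+textual interface as a Python dictionary; the import path and the algorithm
+keyword used below are assumptions depending on the installation and on the
+present algorithm, not fixed names::
+
+    from adao import adaoBuilder          # import path assumed
+    case = adaoBuilder.New()
+    case.set( 'AlgorithmParameters',
+        Algorithm  = 'NonLinearLeastSquares',    # assumed keyword of this algorithm
+        Parameters = { "Minimizer" : "LBFGSB" },
+        )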
The options of the algorithm are the following:
Minimizer
+ .. index:: single: Minimizer
+
This key allows choosing the optimization minimizer. The default choice is
"LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear
constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
(nonlinear unconstrained minimizer) and "NCG" (Newton CG minimizer). It is
strongly recommended to stay with the default.
- Example : ``{"Minimizer":"LBFGSB"}``
-
- Bounds
- This key allows to define upper and lower bounds for every state variable
- being optimized. Bounds have to be given by a list of list of pairs of
- lower/upper bounds for each variable, with possibly ``None`` every time
- there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained optimizers.
-
- Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
-
- MaximumNumberOfSteps
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 15000, which is very similar to no limit on
- iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some optimizers, the effective stopping step can be
- slightly different due to algorithm internal control requirements.
-
- Example : ``{"MaximumNumberOfSteps":100}``
+ Example :
+ ``{"Minimizer":"LBFGSB"}``
- CostDecrementTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the cost function decreases less than
- this tolerance at the last step. The default is 1.e-7, and it is
- recommended to adapt it to the needs on real problems.
+ .. include:: snippets/BoundsWithNone.rst
- Example : ``{"CostDecrementTolerance":1.e-7}``
+ .. include:: snippets/MaximumNumberOfSteps.rst
- ProjectedGradientTolerance
- This key indicates a limit value, leading to stop successfully the iterative
- optimization process when all the components of the projected gradient are
- under this limit. It is only used for constrained optimizers. The default is
- -1, that is the internal default of each minimizer (generally 1.e-5), and it
- is not recommended to change it.
+ .. include:: snippets/CostDecrementTolerance.rst
- Example : ``{"ProjectedGradientTolerance":-1}``
+ .. include:: snippets/ProjectedGradientTolerance.rst
- GradientNormTolerance
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the norm of the gradient is under this
- limit. It is only used for non-constrained optimizers. The default is
- 1.e-5 and it is not recommended to change it.
-
- Example : ``{"GradientNormTolerance":1.e-5}``
-
- StoreInternalVariables
- This Boolean key allows to store default internal variables, mainly the
- current state during iterative optimization process. Be careful, this can be
- a numerically costly choice in certain calculation cases. The default is
- "False".
-
- Example : ``{"StoreInternalVariables":True}``
+ .. include:: snippets/GradientNormTolerance.rst
StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
calculations or memory consumption. The default is an empty list, none of
these variables being calculated and stored by default. The possible names
- are in the following list: ["BMA", "OMA", "OMB", "Innovation"].
+ are in the following list: ["BMA", "CostFunctionJ",
+ "CostFunctionJb", "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
+ "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum",
+ "CurrentState", "CurrentOptimum", "IndexOfOptimum", "Innovation",
+ "InnovationAtCurrentState", "OMA", "OMB",
+ "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
+ "SimulatedObservationAtOptimum", "SimulatedObservationAtCurrentOptimum"].
- Example : ``{"StoreSupplementaryCalculations":["BMA","Innovation"]}``
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
*Tips for this algorithm:*
The unconditional outputs of the algorithm are the following:
- Analysis
- *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
- optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
-
- Example : ``Xa = ADD.get("Analysis")[-1]``
+ .. include:: snippets/Analysis.rst
- CostFunctionJ
- *List of values*. Each element is a value of the error function :math:`J`.
+ .. include:: snippets/CostFunctionJ.rst
- Example : ``J = ADD.get("CostFunctionJ")[:]``
+ .. include:: snippets/CostFunctionJb.rst
- CostFunctionJb
- *List of values*. Each element is a value of the error function :math:`J^b`,
- that is of the background difference part.
-
- Example : ``Jb = ADD.get("CostFunctionJb")[:]``
-
- CostFunctionJo
- *List of values*. Each element is a value of the error function :math:`J^o`,
- that is of the observation difference part.
-
- Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+ .. include:: snippets/CostFunctionJo.rst
The conditional outputs of the algorithm are the following:
- BMA
- *List of vectors*. Each element is a vector of difference between the
- background and the optimal state.
+ .. include:: snippets/BMA.rst
+
+ .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
- Example : ``bma = ADD.get("BMA")[-1]``
+ .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
- CurrentState
- *List of vectors*. Each element is a usual state vector used during the
- optimization algorithm procedure.
+ .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
- Example : ``Xs = ADD.get("CurrentState")[:]``
+ .. include:: snippets/CurrentOptimum.rst
- Innovation
- *List of vectors*. Each element is an innovation vector, which is in static
- the difference between the optimal and the background, and in dynamic the
- evolution increment.
+ .. include:: snippets/CurrentState.rst
- Example : ``d = ADD.get("Innovation")[-1]``
+ .. include:: snippets/IndexOfOptimum.rst
- OMA
- *List of vectors*. Each element is a vector of difference between the
- observation and the optimal state in the observation space.
+ .. include:: snippets/Innovation.rst
- Example : ``oma = ADD.get("OMA")[-1]``
+ .. include:: snippets/InnovationAtCurrentState.rst
- OMB
- *List of vectors*. Each element is a vector of difference between the
- observation and the background state in the observation space.
+ .. include:: snippets/OMA.rst
- Example : ``omb = ADD.get("OMB")[-1]``
+ .. include:: snippets/OMB.rst
- SimulatedObservationAtCurrentState
- *List of vectors*. Each element is an observed vector at the current state,
- that is, in the observation space.
+ .. include:: snippets/SimulatedObservationAtBackground.rst
- Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+ .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst
- SimulatedObservationAtOptimum
- *List of vectors*. Each element is a vector of observation simulated from
- the analysis or optimal state :math:`\mathbf{x}^a`.
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
- Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
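+
+As a final hedged sketch, with ``ADD`` again denoting the executed
+calculation object, unconditional and conditional outputs are retrieved in
+the same way, the conditional ones being available only if they were
+requested through "*StoreSupplementaryCalculations*"::
+
+    Xa  = ADD.get("Analysis")[-1]      # optimal state (unconditional)
+    J   = ADD.get("CostFunctionJ")[:]  # cost function history (unconditional)
+    oma = ADD.get("OMA")[-1]           # observation-minus-analysis (conditional)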
See also
++++++++
Bibliographical references:
- [Byrd95]_
- [Morales11]_
+ - [Zhu97]_