..
   Copyright (C) 2008-2017 EDF R&D

   This file is part of SALOME ADAO module.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D

.. index:: single: 4DVAR
.. _section_ref_algorithm_4DVAR:

Calculation algorithm "*4DVAR*"
-------------------------------

.. warning::

  In its present version, this algorithm is experimental, and so changes can
  be required in forthcoming versions.

This algorithm performs an estimation of the state of a dynamic system, by a
variational minimization method of the classical :math:`J` function in data
assimilation:

.. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+\sum_{t\in T}(\mathbf{y^o}(t)-H(\mathbf{x},t))^T.\mathbf{R}^{-1}.(\mathbf{y^o}(t)-H(\mathbf{x},t))

which is usually designated as the "*4D-VAR*" functional (see for example
[Talagrand97]_). It is well suited in cases of non-linear observation and
evolution operators, and its application domain is similar to the one of
Kalman filters, especially the
:ref:`section_ref_algorithm_ExtendedKalmanFilter` or the
:ref:`section_ref_algorithm_UnscentedKalmanFilter`.
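
As an illustration only, the functional above can be evaluated in a few lines
outside of ADAO. The sketch below assumes NumPy arrays, a hypothetical
user-supplied operator ``H(x, t)`` and observations ``Yobs[t]`` indexed by the
times of the window :math:`T`; it is not part of the module itself::

    import numpy

    def cost_4dvar(x, xb, B, R, Yobs, H, times):
        """Evaluate J(x) = Jb(x) + Jo(x) as defined above."""
        dxb = x - xb
        # Background term (x-xb)^T B^{-1} (x-xb), via a linear solve
        Jb = float(dxb.T @ numpy.linalg.solve(B, dxb))
        Jo = 0.0
        for t in times:
            # Observation misfit at time t
            d = Yobs[t] - H(x, t)
            Jo += float(d.T @ numpy.linalg.solve(R, d))
        return Jb + Jo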

Optional and required commands
++++++++++++++++++++++++++++++

.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Bounds
.. index:: single: ConstrainedBy
.. index:: single: EstimationOf
.. index:: single: MaximumNumberOfSteps
.. index:: single: CostDecrementTolerance
.. index:: single: ProjectedGradientTolerance
.. index:: single: GradientNormTolerance
.. index:: single: StoreSupplementaryCalculations

The general required commands, available in the editing user interface, are the
following:

Background
  *Required command*. This indicates the background or initial vector used,
  previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
  "*Vector*" or a "*VectorSerie*" type object.

BackgroundError
  *Required command*. This indicates the background error covariance matrix,
  previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
  type object, a "*ScalarSparseMatrix*" type object, or a
  "*DiagonalSparseMatrix*" type object.

Observation
  *Required command*. This indicates the observation vector used for data
  assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
  is defined as a "*Vector*" or a "*VectorSerie*" type object.

ObservationError
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
  object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
  type object.

ObservationOperator
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
  results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
  a "*Matrix*" type one. In the case of "*Function*" type, different
  functional forms can be used, as described in the section
  :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
  included in the observation, the operator has to be applied to a pair
  :math:`(X,U)`.
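
For the "*Function*" form, and purely as a hedged sketch (the exact functional
forms required are those described in
:ref:`section_ref_operator_requirements`, and the name and toy model below are
hypothetical), such an operator can be a Python function mapping a state to
simulated observations::

    import numpy

    def DirectOperator(X):
        """Toy observation operator: observe the 1st and 3rd state components."""
        X = numpy.ravel(X)
        H = numpy.array([[1., 0., 0.],
                         [0., 0., 1.]])
        return numpy.dot(H, X)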

The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
of the command "*AlgorithmParameters*" allow one to choose the specific options,
described hereafter, of the algorithm. See
:ref:`section_ref_options_Algorithm_Parameters` for the good use of this
command.

The options of the algorithm are the following:

Minimizer
  This key allows one to choose the optimization minimizer. The default choice
  is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
  minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear
  constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
  (nonlinear unconstrained minimizer) and "NCG" (Newton CG minimizer). It is
  strongly recommended to keep the default.

  Example : ``{"Minimizer":"LBFGSB"}``

Bounds
  This key allows one to define upper and lower bounds for every state variable
  being optimized. Bounds have to be given as a list of pairs of lower/upper
  bounds for each variable, with ``None`` each time there is no bound. The
  bounds can always be specified, but they are taken into account only by the
  constrained optimizers.

  Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``

ConstrainedBy
  This key allows one to choose the method used to take bounds constraints into
  account. The only one available is "EstimateProjection", which projects the
  current state estimate on the bounds constraints.

  Example : ``{"ConstrainedBy":"EstimateProjection"}``

MaximumNumberOfSteps
  This key indicates the maximum number of iterations allowed for iterative
  optimization. The default is 15000, which is very similar to no limit on
  iterations. It is then recommended to adapt this parameter to the needs of
  real problems. For some optimizers, the effective stopping step can be
  slightly different from the limit due to algorithm internal control
  requirements.

  Example : ``{"MaximumNumberOfSteps":100}``

CostDecrementTolerance
  This key indicates a limit value: the iterative optimization process stops
  successfully when the cost function decreases by less than this tolerance at
  the last step. The default is 1.e-7, and it is recommended to adapt it to
  the needs of real problems.

  Example : ``{"CostDecrementTolerance":1.e-7}``

EstimationOf
  This key allows one to choose the type of estimation to be performed. It can
  be either state estimation, with a value of "State", or parameter
  estimation, with a value of "Parameters". The default choice is "State".

  Example : ``{"EstimationOf":"Parameters"}``

ProjectedGradientTolerance
  This key indicates a limit value: the iterative optimization process stops
  successfully when all the components of the projected gradient are under
  this limit. It is only used for constrained optimizers. The default is -1,
  that is, the internal default of each minimizer (generally 1.e-5), and it is
  not recommended to change it.

  Example : ``{"ProjectedGradientTolerance":-1}``

GradientNormTolerance
  This key indicates a limit value: the iterative optimization process stops
  successfully when the norm of the gradient is under this limit. It is only
  used for non-constrained optimizers. The default is 1.e-5, and it is not
  recommended to change it.

  Example : ``{"GradientNormTolerance":1.e-5}``

StoreSupplementaryCalculations
  This list indicates the names of the supplementary variables that can be
  available at the end of the algorithm. It involves potentially costly
  calculations or memory consumption. The default is a void list, none of
  these variables being calculated and stored by default. The possible names
  are in the following list: ["BMA", "CostFunctionJ", "CostFunctionJb",
  "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
  "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum",
  "CurrentOptimum", "CurrentState", "IndexOfOptimum"].

  Example : ``{"StoreSupplementaryCalculations":["BMA", "CurrentState"]}``
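
To fix ideas, several of the options above can be combined in a single
"*AlgorithmParameters*" dictionary. The values below are illustrative only,
not recommendations::

    AlgorithmParameters = {
        "Minimizer"                      : "LBFGSB",
        "Bounds"                         : [[2.,5.], [1.e-2,10.], [-30.,None], [None,None]],
        "MaximumNumberOfSteps"           : 100,
        "CostDecrementTolerance"         : 1.e-7,
        "EstimationOf"                   : "State",
        "StoreSupplementaryCalculations" : ["BMA", "CurrentState"],
    }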

Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the method
named ``get`` of the variable "*ADD*" of the post-processing. The input
variables, available to the user at the output in order to facilitate the
writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.

The unconditional outputs of the algorithm are the following:

Analysis
  *List of vectors*. Each element is an optimal state :math:`\mathbf{x}^*` in
  optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.

  Example : ``Xa = ADD.get("Analysis")[-1]``

CostFunctionJ
  *List of values*. Each element is a value of the error function :math:`J`.

  Example : ``J = ADD.get("CostFunctionJ")[:]``

CostFunctionJb
  *List of values*. Each element is a value of the error function :math:`J^b`,
  that is of the background difference part.

  Example : ``Jb = ADD.get("CostFunctionJb")[:]``

CostFunctionJo
  *List of values*. Each element is a value of the error function :math:`J^o`,
  that is of the observation difference part.

  Example : ``Jo = ADD.get("CostFunctionJo")[:]``

The conditional outputs of the algorithm are the following:

BMA
  *List of vectors*. Each element is a vector of difference between the
  background and the optimal state.

  Example : ``bma = ADD.get("BMA")[-1]``

CostFunctionJAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J`.
  At each step, the value corresponds to the optimal state found from the
  beginning.

  Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``

CostFunctionJbAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J^b`,
  that is of the background difference part. At each step, the value
  corresponds to the optimal state found from the beginning.

  Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``

CostFunctionJoAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J^o`,
  that is of the observation difference part. At each step, the value
  corresponds to the optimal state found from the beginning.

  Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``

CurrentOptimum
  *List of vectors*. Each element is the optimal state obtained at the current
  step of the optimization algorithm. It is not necessarily the last state.

  Example : ``Xo = ADD.get("CurrentOptimum")[:]``

CurrentState
  *List of vectors*. Each element is a usual state vector used during the
  optimization algorithm procedure.

  Example : ``Xs = ADD.get("CurrentState")[:]``

IndexOfOptimum
  *List of integers*. Each element is the iteration index of the optimum
  obtained at the current step of the optimization algorithm. It is not
  necessarily the number of the last iteration.

  Example : ``i = ADD.get("IndexOfOptimum")[-1]``
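
As a short post-processing sketch, assuming an executed case whose results are
reachable through the variable "*ADD*" as described above, the outputs can be
combined, for example to check the decrease of the cost function::

    Xa = ADD.get("Analysis")[-1]        # Final analysis
    J  = ADD.get("CostFunctionJ")[:]    # Full history of J
    print("Final analysis..:", Xa)
    print("Cost decrease...: from %.3e to %.3e"%(J[0], J[-1]))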

See also
++++++++

References to other sections:
  - :ref:`section_ref_algorithm_3DVAR`
  - :ref:`section_ref_algorithm_KalmanFilter`
  - :ref:`section_ref_algorithm_ExtendedKalmanFilter`

Bibliographical references:
  - [Byrd95]_
  - [Morales11]_
  - [Talagrand97]_
  - [Zhu97]_