..
   Copyright (C) 2008-2018 EDF R&D

   This file is part of SALOME ADAO module.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
.. index:: single: 3DVAR
.. _section_ref_algorithm_3DVAR:
Calculation algorithm "*3DVAR*"
-------------------------------
This algorithm performs a state estimation by variational minimization of the
classical :math:`J` function in static data assimilation:

.. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+(\mathbf{y}^o-H(\mathbf{x}))^T.\mathbf{R}^{-1}.(\mathbf{y}^o-H(\mathbf{x}))

which is usually referred to as the "*3D-VAR*" function.
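As a purely illustrative sketch, this function can be evaluated directly with
NumPy for a small problem with dense matrices and a linear observation operator
given as a matrix (all values here are hypothetical and not tied to ADAO):

```python
import numpy as np

# Hypothetical small-dimension data, purely illustrative
xb = np.array([0., 0.])                  # background x^b
B  = np.eye(2)                           # background error covariance B
yo = np.array([1., 2., 3.])              # observation y^o
R  = 0.1 * np.eye(3)                     # observation error covariance R
H  = np.array([[1., 0.],
               [0., 1.],
               [1., 1.]])                # linear observation operator H

def J(x):
    """3D-VAR cost: background misfit plus observation misfit."""
    db = x - xb                          # departure from the background
    do = yo - H @ x                      # departure from the observations
    return float(db @ np.linalg.solve(B, db) + do @ np.linalg.solve(R, do))
```

At the background itself the first term vanishes and only the observation
misfit remains; the minimizers listed below search for the state that balances
the two terms.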
Optional and required commands
++++++++++++++++++++++++++++++
.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Minimizer
.. index:: single: Bounds
.. index:: single: MaximumNumberOfSteps
.. index:: single: CostDecrementTolerance
.. index:: single: ProjectedGradientTolerance
.. index:: single: GradientNormTolerance
.. index:: single: StoreSupplementaryCalculations
.. index:: single: Quantiles
.. index:: single: SetSeed
.. index:: single: NumberOfSamplesForQuantiles
.. index:: single: SimulationForQuantiles
The general required commands, available in the editing user interface, are the
following:

Background
  *Required command*. This indicates the background or initial vector used,
  previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
  "*Vector*" or a "*VectorSerie*" type object.
BackgroundError
  *Required command*. This indicates the background error covariance matrix,
  previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
  type object, a "*ScalarSparseMatrix*" type object, or a
  "*DiagonalSparseMatrix*" type object.
Observation
  *Required command*. This indicates the observation vector used for data
  assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
  is defined as a "*Vector*" or a "*VectorSerie*" type object.
ObservationError
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
  object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
  type object.
ObservationOperator
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
  into results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
  a "*Matrix*" type one. In the case of "*Function*" type, different
  functional forms can be used, as described in the section
  :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
  included in the observation, the operator has to be applied to a pair
  :math:`(X,U)`.
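As a minimal sketch of a "*Function*" type operator (the three-function
convention shown here is only illustrative; the exact requirements are those of
:ref:`section_ref_operator_requirements`), a linear operator represented by a
hypothetical matrix can be supplied through direct, tangent and adjoint
functions:

```python
import numpy as np

# Hypothetical linear operator represented by a matrix (illustrative only)
Hmatrix = np.array([[1., 0.],
                    [0., 1.],
                    [1., 1.]])

def DirectOperator(X):
    """Direct form: maps a state X into the observation space."""
    return Hmatrix @ np.ravel(X)

def TangentOperator(pair):
    """Tangent linear form, applied to a pair (X, dX)."""
    X, dX = pair
    return Hmatrix @ np.ravel(dX)    # the operator is linear, so its tangent is H itself

def AdjointOperator(pair):
    """Adjoint form, applied to a pair (X, Y)."""
    X, Y = pair
    return Hmatrix.T @ np.ravel(Y)
```

The tangent and adjoint forms should satisfy the duality
:math:`<H\,dX, Y> = <dX, H^*\,Y>`, which is easy to check numerically on such a
sketch.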
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
of the command "*AlgorithmParameters*" allow to choose the specific options,
described hereafter, of the algorithm. See
:ref:`section_ref_options_Algorithm_Parameters` for the good use of this
command.

The options of the algorithm are the following:
Minimizer
  This key allows to choose the optimization minimizer. The default choice is
  "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
  minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear
  constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
  (nonlinear unconstrained minimizer) and "NCG" (Newton CG minimizer). It is
  strongly recommended to stay with the default.

  Example : ``{"Minimizer":"LBFGSB"}``
Bounds
  This key allows to define upper and lower bounds for every state variable
  being optimized. Bounds have to be given by a list of lists of pairs of
  lower/upper bounds for each variable, with possibly ``None`` every time
  there is no bound. The bounds can always be specified, but they are taken
  into account only by the constrained optimizers.

  Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
MaximumNumberOfSteps
  This key indicates the maximum number of iterations allowed for iterative
  optimization. The default is 15000, which is very similar to no limit on
  iterations. It is then recommended to adapt this parameter to the needs on
  real problems. For some optimizers, the effective stopping step can be
  slightly different from the limit due to algorithm internal control
  requirements.

  Example : ``{"MaximumNumberOfSteps":100}``
CostDecrementTolerance
  This key indicates a limit value, leading to stop successfully the
  iterative optimization process when the cost function decreases less than
  this tolerance at the last step. The default is 1.e-7, and it is
  recommended to adapt it to the needs on real problems.

  Example : ``{"CostDecrementTolerance":1.e-7}``
ProjectedGradientTolerance
  This key indicates a limit value, leading to stop successfully the iterative
  optimization process when all the components of the projected gradient are
  under this limit. It is only used for constrained optimizers. The default is
  -1, that is the internal default of each minimizer (generally 1.e-5), and it
  is not recommended to change it.

  Example : ``{"ProjectedGradientTolerance":-1}``
GradientNormTolerance
  This key indicates a limit value, leading to stop successfully the
  iterative optimization process when the norm of the gradient is under this
  limit. It is only used for non-constrained optimizers. The default is
  1.e-5, and it is not recommended to change it.

  Example : ``{"GradientNormTolerance":1.e-5}``
StoreSupplementaryCalculations
  This list indicates the names of the supplementary variables that can be
  available at the end of the algorithm. It involves potentially costly
  calculations or memory consumptions. The default is a void list, none of
  these variables being calculated and stored by default. The possible names
  are in the following list: ["APosterioriCorrelations",
  "APosterioriCovariance", "APosterioriStandardDeviations",
  "APosterioriVariances", "BMA", "CostFunctionJ", "CostFunctionJb",
  "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
  "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum",
  "CurrentOptimum", "CurrentState", "IndexOfOptimum", "Innovation",
  "InnovationAtCurrentState", "MahalanobisConsistency", "OMA", "OMB",
  "SigmaObs2", "SimulatedObservationAtBackground",
  "SimulatedObservationAtCurrentOptimum",
  "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum",
  "SimulationQuantiles"].

  Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
Quantiles
  This list indicates the values of quantile, between 0 and 1, to be estimated
  by simulation around the optimal state. The sampling uses a multivariate
  Gaussian random sampling, directed by the *a posteriori* covariance matrix.
  This option is useful only if the supplementary calculation
  "SimulationQuantiles" has been chosen. The default is a void list.

  Example : ``{"Quantiles":[0.1,0.9]}``
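The sampling procedure described here can be sketched outside ADAO with NumPy.
In this illustrative sketch, ``xa`` stands for an optimal state, ``A`` for its
*a posteriori* covariance, and a hypothetical matrix plays the role of the
observation operator; none of these values come from ADAO itself:

```python
import numpy as np

rng = np.random.default_rng(1000)        # fixed seed, as with "SetSeed"

xa = np.array([1., 2.])                  # hypothetical optimal state
A  = 0.01 * np.eye(2)                    # hypothetical a posteriori covariance
H  = np.array([[1., 0.],
               [0., 1.],
               [1., 1.]])                # hypothetical linear observation operator

# Multivariate Gaussian sampling around the optimum, directed by A
states = rng.multivariate_normal(xa, A, size=100)

# "Linear"-like simulation of each sampled state into the observation space
simulated = states @ H.T

# Empirical quantiles of the simulated observations, as with "Quantiles"
q10, q90 = np.quantile(simulated, [0.1, 0.9], axis=0)
```

The number of samples plays the role of "NumberOfSamplesForQuantiles" below,
and replacing the matrix product by a full operator evaluation corresponds to
the "NonLinear" choice of "SimulationForQuantiles".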
SetSeed
  This key allows to give an integer in order to fix the seed of the random
  generator used to generate the ensemble. A convenient value is for example
  1000. By default, the seed is left uninitialized, and so uses the default
  initialization from the computer.

  Example : ``{"SetSeed":1000}``
NumberOfSamplesForQuantiles
  This key indicates the number of simulations to be done in order to estimate
  the quantiles. This option is useful only if the supplementary calculation
  "SimulationQuantiles" has been chosen. The default is 100, which is often
  sufficient for correct estimation of common quantiles at 5%, 10%, 90% or
  95%.

  Example : ``{"NumberOfSamplesForQuantiles":100}``
SimulationForQuantiles
  This key indicates the type of simulation, linear (with the tangent
  observation operator applied to perturbation increments around the optimal
  state) or non-linear (with the standard observation operator applied to
  perturbed states), one wants to do for each perturbation. It changes mainly
  the time of each elementary calculation, usually longer in non-linear than
  in linear. This option is useful only if the supplementary calculation
  "SimulationQuantiles" has been chosen. The default value is "Linear", and
  the possible choices are "Linear" and "NonLinear".

  Example : ``{"SimulationForQuantiles":"Linear"}``
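Gathering several of the options above, a complete "*AlgorithmParameters*"
value can be written as a single Python dictionary, for example (the values are
only illustrative, taken from the individual examples given above):

```python
# Illustrative set of options for the "3DVAR" algorithm, combining the
# individual examples given above (values are only examples)
AlgorithmParameters = {
    "Minimizer"                      : "LBFGSB",
    "Bounds"                         : [[2., 5.], [1.e-2, 10.], [-30., None], [None, None]],
    "MaximumNumberOfSteps"           : 100,
    "CostDecrementTolerance"         : 1.e-7,
    "ProjectedGradientTolerance"     : -1,
    "StoreSupplementaryCalculations" : ["BMA", "Innovation", "SimulationQuantiles"],
    "Quantiles"                      : [0.1, 0.9],
    "SetSeed"                        : 1000,
    "NumberOfSamplesForQuantiles"    : 100,
    "SimulationForQuantiles"         : "Linear",
}
```

Only the keys that differ from their defaults actually need to be given; the
others can be omitted.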
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the method
named ``get`` of the variable "*ADD*" of the post-processing. The input
variables, available to the user at the output in order to facilitate the
writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.
The unconditional outputs of the algorithm are the following:
Analysis
  *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in
  optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.

  Example : ``Xa = ADD.get("Analysis")[-1]``
CostFunctionJ
  *List of values*. Each element is a value of the error function :math:`J`.

  Example : ``J = ADD.get("CostFunctionJ")[:]``
CostFunctionJb
  *List of values*. Each element is a value of the error function :math:`J^b`,
  that is of the background difference part.

  Example : ``Jb = ADD.get("CostFunctionJb")[:]``
CostFunctionJo
  *List of values*. Each element is a value of the error function :math:`J^o`,
  that is of the observation difference part.

  Example : ``Jo = ADD.get("CostFunctionJo")[:]``
The conditional outputs of the algorithm are the following:
APosterioriCorrelations
  *List of matrices*. Each element is an *a posteriori* error correlation
  matrix of the optimal state, coming from the :math:`\mathbf{A}*` covariance
  matrix.

  Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
APosterioriCovariance
  *List of matrices*. Each element is an *a posteriori* error covariance
  matrix :math:`\mathbf{A}*` of the optimal state.

  Example : ``A = ADD.get("APosterioriCovariance")[-1]``
APosterioriStandardDeviations
  *List of matrices*. Each element is a diagonal matrix of the *a posteriori*
  error standard deviations of the optimal state, coming from the
  :math:`\mathbf{A}*` covariance matrix.

  Example : ``S = ADD.get("APosterioriStandardDeviations")[-1]``
APosterioriVariances
  *List of matrices*. Each element is a diagonal matrix of the *a posteriori*
  error variances of the optimal state, coming from the
  :math:`\mathbf{A}*` covariance matrix.

  Example : ``V = ADD.get("APosterioriVariances")[-1]``
BMA
  *List of vectors*. Each element is a vector of difference between the
  background and the optimal state.

  Example : ``bma = ADD.get("BMA")[-1]``
CostFunctionJAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J`.
  At each step, the value corresponds to the optimal state found from the
  beginning.

  Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``
CostFunctionJbAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J^b`,
  that is of the background difference part. At each step, the value
  corresponds to the optimal state found from the beginning.

  Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``
CostFunctionJoAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J^o`,
  that is of the observation difference part. At each step, the value
  corresponds to the optimal state found from the beginning.

  Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``
CurrentOptimum
  *List of vectors*. Each element is the optimal state obtained at the current
  step of the optimization algorithm. It is not necessarily the last state.

  Example : ``Xo = ADD.get("CurrentOptimum")[:]``
CurrentState
  *List of vectors*. Each element is a usual state vector used during the
  optimization algorithm procedure.

  Example : ``Xs = ADD.get("CurrentState")[:]``
IndexOfOptimum
  *List of integers*. Each element is the iteration index of the optimum
  obtained at the current step of the optimization algorithm. It is not
  necessarily the number of the last iteration.

  Example : ``i = ADD.get("IndexOfOptimum")[-1]``
Innovation
  *List of vectors*. Each element is an innovation vector, which is in static
  the difference between the optimal and the background, and in dynamic the
  evolving difference.

  Example : ``d = ADD.get("Innovation")[-1]``
InnovationAtCurrentState
  *List of vectors*. Each element is an innovation vector at the current
  state.

  Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``
MahalanobisConsistency
  *List of values*. Each element is a value of the Mahalanobis quality
  indicator.

  Example : ``m = ADD.get("MahalanobisConsistency")[-1]``
OMA
  *List of vectors*. Each element is a vector of difference between the
  observation and the optimal state in the observation space.

  Example : ``oma = ADD.get("OMA")[-1]``
OMB
  *List of vectors*. Each element is a vector of difference between the
  observation and the background state in the observation space.

  Example : ``omb = ADD.get("OMB")[-1]``
SigmaObs2
  *List of values*. Each element is a value of the quality indicator
  :math:`(\sigma^o)^2` of the observation part.

  Example : ``so2 = ADD.get("SigmaObs2")[-1]``
SimulatedObservationAtBackground
  *List of vectors*. Each element is a vector of observation simulated from
  the background :math:`\mathbf{x}^b`.

  Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
SimulatedObservationAtCurrentOptimum
  *List of vectors*. Each element is a vector of observation simulated from
  the optimal state obtained at the current step of the optimization
  algorithm, that is, in the observation space.

  Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
SimulatedObservationAtCurrentState
  *List of vectors*. Each element is an observed vector at the current state,
  that is, in the observation space.

  Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
SimulatedObservationAtOptimum
  *List of vectors*. Each element is a vector of observation simulated from
  the analysis or optimal state :math:`\mathbf{x}^a`.

  Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
SimulationQuantiles
  *List of vectors*. Each element is a vector corresponding to the observed
  state which realizes the required quantile, in the same order as the
  quantiles required by the user.

  Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]``
See also
++++++++

References to other sections:
  - :ref:`section_ref_algorithm_Blue`
  - :ref:`section_ref_algorithm_ExtendedBlue`
  - :ref:`section_ref_algorithm_LinearityTest`
Bibliographical references:
  - [Byrd95]_
  - [Morales11]_
  - [Zhu97]_