Copyright (C) 2008-2017 EDF R&D

This file is part of SALOME ADAO module.

This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.

This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
.. index:: single: NonLinearLeastSquares
.. _section_ref_algorithm_NonLinearLeastSquares:

Calculation algorithm "*NonLinearLeastSquares*"
-----------------------------------------------
This algorithm performs a state estimation by variational minimization of the
classical :math:`J` function of weighted "Least Squares":

.. math:: J(\mathbf{x})=(\mathbf{y}^o-\mathbf{H}.\mathbf{x})^T.\mathbf{R}^{-1}.(\mathbf{y}^o-\mathbf{H}.\mathbf{x})
It is similar to the :ref:`section_ref_algorithm_3DVAR`, without its background
part. The background, required in the interface, is only used as an initial
point for the variational minimization.
In all cases, it is recommended to prefer the :ref:`section_ref_algorithm_3DVAR`
for its stability as well as for its behavior during optimization.
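
To make the minimized quantity concrete, the following is a minimal sketch,
using only NumPy, of how the :math:`J` function above can be evaluated for a
linear observation operator. The names ``H``, ``R`` and ``yo`` are
illustrative assumptions for this sketch, not objects of the ADAO interface::

    import numpy as np

    # Illustrative data, assumed for this sketch only.
    H  = np.array([[1., 0.], [0., 2.], [1., 1.]])  # observation operator
    R  = np.eye(3)                                 # observation error covariance
    yo = np.array([0.5, 2.1, 1.4])                 # observation vector

    def J(x):
        """Weighted least squares: J(x) = (yo-Hx)^T R^{-1} (yo-Hx)."""
        d = yo - H @ x
        return float(d.T @ np.linalg.solve(R, d))

    print(J(np.array([0.4, 1.0])))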
Optional and required commands
++++++++++++++++++++++++++++++
.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Minimizer
.. index:: single: Bounds
.. index:: single: MaximumNumberOfSteps
.. index:: single: CostDecrementTolerance
.. index:: single: ProjectedGradientTolerance
.. index:: single: GradientNormTolerance
.. index:: single: StoreSupplementaryCalculations
The general required commands, available in the editing user interface, are the
following:

Background
  *Required command*. This indicates the background or initial vector used,
  previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
  "*Vector*" or a "*VectorSerie*" type object.

Observation
  *Required command*. This indicates the observation vector used for data
  assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
  is defined as a "*Vector*" or a "*VectorSerie*" type object.

ObservationError
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
  object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
  type object.

ObservationOperator
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
  into results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
  a "*Matrix*" type one. In the case of "*Function*" type, different
  functional forms can be used, as described in the section
  :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
  included in the observation, the operator has to be applied to a pair
  :math:`(X,U)`.
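
As an illustration, the required commands can be set through the textual
``adaoBuilder`` interface of ADAO, as in the following minimal sketch. The
numerical values are arbitrary assumptions, chosen only to make the example
runnable::

    from numpy import array
    from adao import adaoBuilder

    case = adaoBuilder.New()
    case.set( 'AlgorithmParameters', Algorithm='NonLinearLeastSquares' )
    case.set( 'Background',          Vector=[0., 1., 2.] )         # initial point
    case.set( 'BackgroundError',     ScalarSparseMatrix=1. )       # required, unused here
    case.set( 'Observation',         Vector=array([0.5, 1.5, 2.5]) )
    case.set( 'ObservationError',    ScalarSparseMatrix=1. )       # R = identity
    case.set( 'ObservationOperator', Matrix='1 0 0;0 2 0;0 0 3' )  # linear H
    case.execute()
    print( case.get('Analysis')[-1] )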
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
of the command "*AlgorithmParameters*" allow one to choose the specific options,
described hereafter, of the algorithm. See
:ref:`section_ref_options_Algorithm_Parameters` for the good use of this
command.

The options of the algorithm are the following:
Minimizer
  This key allows one to choose the optimization minimizer. The default
  choice is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear
  constrained minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC"
  (nonlinear constrained minimizer), "CG" (nonlinear unconstrained
  minimizer), "BFGS" (nonlinear unconstrained minimizer) and "NCG" (Newton
  CG minimizer). It is strongly recommended to keep the default.

  Example : ``{"Minimizer":"LBFGSB"}``

Bounds
  This key allows one to define upper and lower bounds for every state
  variable being optimized. Bounds have to be given as a list of pairs of
  lower/upper bounds for each variable, with ``None`` every time there is no
  bound. The bounds can always be specified, but they are taken into account
  only by the constrained optimizers.

  Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``

MaximumNumberOfSteps
  This key indicates the maximum number of iterations allowed for iterative
  optimization. The default is 15000, which is very similar to no limit on
  iterations. It is then recommended to adapt this parameter to the needs of
  real problems. For some optimizers, the effective stopping step can be
  slightly different from this limit due to algorithm internal control
  requirements.

  Example : ``{"MaximumNumberOfSteps":100}``

CostDecrementTolerance
  This key indicates a limit value, leading to stop successfully the
  iterative optimization process when the cost function decreases less than
  this tolerance at the last step. The default is 1.e-7, and it is
  recommended to adapt it to the needs of real problems.

  Example : ``{"CostDecrementTolerance":1.e-7}``

ProjectedGradientTolerance
  This key indicates a limit value, leading to stop successfully the
  iterative optimization process when all the components of the projected
  gradient are under this limit. It is only used for constrained optimizers.
  The default is -1, that is, the internal default of each minimizer
  (generally 1.e-5), and it is not recommended to change it.

  Example : ``{"ProjectedGradientTolerance":-1}``

GradientNormTolerance
  This key indicates a limit value, leading to stop successfully the
  iterative optimization process when the norm of the gradient is under this
  limit. It is only used for unconstrained optimizers. The default is 1.e-5,
  and it is not recommended to change it.

  Example : ``{"GradientNormTolerance":1.e-5}``

StoreSupplementaryCalculations
  This list indicates the names of the supplementary variables that can be
  made available at the end of the algorithm. It involves potentially costly
  calculations or memory consumption. The default is an empty list, none of
  these variables being calculated and stored by default. The possible names
  are in the following list: ["BMA", "CostFunctionJ", "CostFunctionJb",
  "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
  "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum",
  "CurrentState", "CurrentOptimum", "IndexOfOptimum", "Innovation",
  "InnovationAtCurrentState", "OMA", "OMB",
  "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
  "SimulatedObservationAtOptimum", "SimulatedObservationAtCurrentOptimum"].

  Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
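
Putting several of these options together, a complete "*AlgorithmParameters*"
value could look like the following sketch, where the chosen values are
illustrative and not recommendations::

    # Illustrative AlgorithmParameters dictionary for this algorithm;
    # the numerical values are examples, not recommendations.
    AlgorithmParameters = {
        "Minimizer"                      : "LBFGSB",
        "Bounds"                         : [[2., 5.], [1.e-2, 10.], [-30., None]],
        "MaximumNumberOfSteps"           : 100,
        "CostDecrementTolerance"         : 1.e-7,
        "StoreSupplementaryCalculations" : ["BMA", "Innovation"],
    }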
*Tips for this algorithm:*

  As the "*BackgroundError*" command is required for ALL the calculation
  algorithms in the interface, you have to provide a value, even though this
  command is not required for this algorithm and will not be used. The
  simplest way is to give "1" as a STRING.
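
In the ``adaoBuilder`` sketch given above, this placeholder is the line
setting "*BackgroundError*" as a scalar sparse matrix, which is one assumed
way to honor the "1" string tip::

    # Placeholder background error covariance: required by the interface,
    # but not used by the NonLinearLeastSquares algorithm itself.
    case.set( 'BackgroundError', ScalarSparseMatrix=1. )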
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the method
named ``get`` of the variable "*ADD*" of the post-processing. The input
variables, available to the user at the output in order to facilitate the
writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.
The unconditional outputs of the algorithm are the following:

Analysis
  *List of vectors*. Each element is an optimal state :math:`\mathbf{x}^*` in
  optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.

  Example : ``Xa = ADD.get("Analysis")[-1]``

CostFunctionJ
  *List of values*. Each element is a value of the error function :math:`J`.

  Example : ``J = ADD.get("CostFunctionJ")[:]``

CostFunctionJb
  *List of values*. Each element is a value of the error function
  :math:`J^b`, that is, of the background difference part.

  Example : ``Jb = ADD.get("CostFunctionJb")[:]``

CostFunctionJo
  *List of values*. Each element is a value of the error function
  :math:`J^o`, that is, of the observation difference part.

  Example : ``Jo = ADD.get("CostFunctionJo")[:]``
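
As a simple illustration of these outputs, the following sketch, assumed to
run in a post-processing script where the variable "*ADD*" is available,
prints the convergence of the cost function and the final analysis::

    # Sketch of a post-processing use, assuming ADD is available;
    # only the "get" names documented above are used.
    Xa = ADD.get("Analysis")[-1]      # final optimal state
    J  = ADD.get("CostFunctionJ")[:]  # history of the cost function
    print("Final analysis:", Xa)
    for i, Ji in enumerate(J):
        print("Iteration %3i : J = %.6e" % (i, Ji))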
The conditional outputs of the algorithm are the following:

BMA
  *List of vectors*. Each element is a vector of difference between the
  background and the optimal state.

  Example : ``bma = ADD.get("BMA")[-1]``

CurrentState
  *List of vectors*. Each element is a usual state vector used during the
  iterative optimization procedure.

  Example : ``Xs = ADD.get("CurrentState")[:]``

IndexOfOptimum
  *List of integers*. Each element is the iteration index of the optimum
  obtained at the current step of the optimization algorithm. It is not
  necessarily the number of the last iteration.

  Example : ``i = ADD.get("IndexOfOptimum")[-1]``

Innovation
  *List of vectors*. Each element is an innovation vector, which is, in
  static, the difference between the optimal state and the background, and,
  in dynamic, the evolution increment.

  Example : ``d = ADD.get("Innovation")[-1]``

InnovationAtCurrentState
  *List of vectors*. Each element is an innovation vector at the current
  state.

  Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``

OMA
  *List of vectors*. Each element is a vector of difference between the
  observation and the optimal state in the observation space.

  Example : ``oma = ADD.get("OMA")[-1]``

OMB
  *List of vectors*. Each element is a vector of difference between the
  observation and the background state in the observation space.

  Example : ``omb = ADD.get("OMB")[-1]``

SimulatedObservationAtBackground
  *List of vectors*. Each element is a vector of observation simulated from
  the background :math:`\mathbf{x}^b`.

  Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``

SimulatedObservationAtCurrentOptimum
  *List of vectors*. Each element is a vector of observation simulated from
  the optimal state obtained at the current step of the optimization
  algorithm, that is, in the observation space.

  Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``

SimulatedObservationAtCurrentState
  *List of vectors*. Each element is an observed vector at the current state,
  that is, in the observation space.

  Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``

SimulatedObservationAtOptimum
  *List of vectors*. Each element is a vector of observation simulated from
  the analysis or optimal state :math:`\mathbf{x}^a`.

  Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
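
To illustrate the use of these conditional outputs, the following sketch,
assuming "OMA" and "OMB" were requested through
"*StoreSupplementaryCalculations*" and that the variable "*ADD*" is
available, compares the observation misfit before and after optimization::

    import numpy

    # Compare the misfit before and after optimization; assumes "OMA"
    # and "OMB" were listed in StoreSupplementaryCalculations.
    omb = numpy.ravel( ADD.get("OMB")[-1] )  # observation minus background
    oma = numpy.ravel( ADD.get("OMA")[-1] )  # observation minus analysis
    rms = lambda v: numpy.sqrt( numpy.mean( v**2 ) )
    print("RMS misfit before optimization:", rms(omb))
    print("RMS misfit after  optimization:", rms(oma))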
See also
++++++++

References to other sections:

- :ref:`section_ref_algorithm_3DVAR`

Bibliographical references:

- [Byrd95]_
- [Morales11]_
- [Zhu97]_