..
   Copyright (C) 2008-2015 EDF R&D

   This file is part of SALOME ADAO module.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D

.. index:: single: NonLinearLeastSquares
.. _section_ref_algorithm_NonLinearLeastSquares:

Calculation algorithm "*NonLinearLeastSquares*"
-----------------------------------------------

This algorithm performs a state estimation by variational minimization of the
classical :math:`J` function of weighted "Least Squares":

.. math:: J(\mathbf{x})=(\mathbf{y}^o-\mathbf{H}.\mathbf{x})^T.\mathbf{R}^{-1}.(\mathbf{y}^o-\mathbf{H}.\mathbf{x})
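
As a purely illustrative sketch (outside ADAO, with hypothetical NumPy arrays,
and assuming a linear observation operator given as a matrix), this functional
can be evaluated as follows::

    import numpy

    def J( x, yo, H, R ):
        "Weighted least squares functional J(x) for a linear operator H"
        d = yo - numpy.dot( H, x )                                    # misfit in observation space
        return float( numpy.dot( d, numpy.linalg.solve( R, d ) ) )   # d^T R^{-1} d

    # Hypothetical toy data, for illustration only
    H  = numpy.identity(3)
    R  = numpy.identity(3)
    yo = numpy.array([0.5, 1.5, 2.5])
    print( J( numpy.zeros(3), yo, H, R ) )
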
It is similar to the :ref:`section_ref_algorithm_3DVAR`, without its background
part. The background, required in the interface, is only used as an initial
point for the variational minimization.

In all cases, it is recommended to prefer the :ref:`section_ref_algorithm_3DVAR`,
both for its stability and for its behaviour during optimization.

Optional and required commands
++++++++++++++++++++++++++++++

.. index:: single: Background
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Minimizer
.. index:: single: Bounds
.. index:: single: MaximumNumberOfSteps
.. index:: single: CostDecrementTolerance
.. index:: single: ProjectedGradientTolerance
.. index:: single: GradientNormTolerance
.. index:: single: StoreSupplementaryCalculations

The general required commands, available in the editing user interface, are the
following:

  Background
    *Required command*. This indicates the background or initial vector used,
    previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
    "*Vector*" or a "*VectorSerie*" type object.

  Observation
    *Required command*. This indicates the observation vector used for data
    assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
    is defined as a "*Vector*" or a "*VectorSerie*" type object.

  ObservationError
    *Required command*. This indicates the observation error covariance matrix,
    previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
    object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
    type object.

  ObservationOperator
    *Required command*. This indicates the observation operator, previously
    noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
    results :math:`\mathbf{y}` to be compared to observations
    :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
    a "*Matrix*" type one. In the case of "*Function*" type, different
    functional forms can be used, as described in the section
    :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
    included in the observation, the operator has to be applied to a pair
    :math:`(X,U)`.

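
When the observation operator is given as a "*Function*" type object, its exact
form depends on the functional forms described in
:ref:`section_ref_operator_requirements`. As an indication only, a sketch of
such a user function is given below; the name "DirectOperator" refers to the
one-function script form described in that section, and the model inside is
purely hypothetical and has to be replaced by the real simulation::

    import numpy

    def DirectOperator( X ):
        "Hypothetical simulation: computes the observation vector from the state X"
        x = numpy.ravel( X )
        # Toy nonlinear model standing in for the real simulation code
        return numpy.array([ x[0], x[0]*x[1], x[1]*x[1] ])
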
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. In particular, the
optional command "*AlgorithmParameters*" allows choosing the specific options,
described hereafter, of the algorithm. See
:ref:`section_ref_options_AlgorithmParameters` for the proper use of this command.

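
As an indication only, if the ADAO version in use provides the textual
interface ("adaoBuilder"), the required commands and the algorithm options can
be set together in a small Python script. The following sketch uses arbitrary
toy values and an identity observation operator, and is not a reference
implementation::

    import numpy
    import adaoBuilder

    case = adaoBuilder.New()
    case.set( 'AlgorithmParameters',
              Algorithm  = "NonLinearLeastSquares",
              Parameters = {"MaximumNumberOfSteps":100} )
    case.set( 'Background',          Vector = numpy.array([0., 0., 0.]) )
    case.set( 'BackgroundError',     ScalarSparseMatrix = 1. )  # not used by this algorithm (see the tips below)
    case.set( 'Observation',         Vector = numpy.array([0.5, 1.5, 2.5]) )
    case.set( 'ObservationError',    ScalarSparseMatrix = 1. )
    case.set( 'ObservationOperator', Matrix = numpy.identity(3) )
    case.execute()
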
The options of the algorithm are the following:

  Minimizer
    This key allows choosing the optimization minimizer. The default choice is
    "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
    minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear
    constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
    (nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer). It is
    strongly recommended to stay with the default.

    Example : ``{"Minimizer":"LBFGSB"}``

  Bounds
    This key allows defining upper and lower bounds for every state variable
    being optimized. Bounds have to be given as a list of pairs of lower/upper
    bounds for each variable, with ``None`` whenever there is no bound. The
    bounds can always be specified, but they are taken into account only by the
    constrained optimizers.

    Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for iterative
    optimization. The default is 15000, which is very similar to no limit on
    iterations. It is then recommended to adapt this parameter to the needs of
    real problems. For some optimizers, the effective stopping step can be
    slightly different due to algorithm internal control requirements.

    Example : ``{"MaximumNumberOfSteps":100}``

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the cost function decreases less than
    this tolerance at the last step. The default is 1.e-7, and it is
    recommended to adapt it to the needs of real problems.

    Example : ``{"CostDecrementTolerance":1.e-7}``

  ProjectedGradientTolerance
    This key indicates a limit value, leading to stop successfully the iterative
    optimization process when all the components of the projected gradient are
    under this limit. It is only used for constrained optimizers. The default is
    -1, that is the internal default of each minimizer (generally 1.e-5), and it
    is not recommended to change it.

    Example : ``{"ProjectedGradientTolerance":-1}``

  GradientNormTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the norm of the gradient is under this
    limit. It is only used for non-constrained optimizers. The default is
    1.e-5 and it is not recommended to change it.

    Example : ``{"GradientNormTolerance":1.e-5}``

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations or memory consumption. The default is a void list, none of
    these variables being calculated and stored by default. The possible names
    are in the following list: ["APosterioriCovariance", "BMA",
    "CostFunctionJ", "CurrentState", "OMA", "OMB", "Innovation", "SigmaObs2",
    "MahalanobisConsistency", "SimulatedObservationAtCurrentState",
    "SimulatedObservationAtOptimum"].

    Example : ``{"StoreSupplementaryCalculations":["BMA","Innovation"]}``

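
As an illustration, several of the options above can be gathered in a single
Python dictionary given to the "*AlgorithmParameters*" command; the values
below are arbitrary and only indicative::

    AlgorithmParameters = {
        "Minimizer"                      : "LBFGSB",
        "MaximumNumberOfSteps"           : 100,
        "CostDecrementTolerance"         : 1.e-7,
        "Bounds"                         : [[2., 5.], [1.e-2, 10.], [-30., None]],
        "StoreSupplementaryCalculations" : ["CurrentState", "Innovation"],
        }
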
*Tips for this algorithm:*

    As the *"BackgroundError"* command is required for ALL the calculation
    algorithms in the interface, you have to provide a value, even if this
    command is not required for this algorithm, and will not be used. The
    simplest way is to give "1" as a STRING.

Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the method
named ``get`` of the variable "*ADD*" of the post-processing. The input
variables, available to the user at the output in order to facilitate the
writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.

The unconditional outputs of the algorithm are the following:

  Analysis
    *List of vectors*. Each element is an optimal state :math:`\mathbf{x}^*` in
    optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.

    Example : ``Xa = ADD.get("Analysis")[-1]``

  CostFunctionJ
    *List of values*. Each element is a value of the error function :math:`J`.

    Example : ``J = ADD.get("CostFunctionJ")[:]``

  CostFunctionJb
    *List of values*. Each element is a value of the error function :math:`J^b`,
    that is of the background difference part.

    Example : ``Jb = ADD.get("CostFunctionJb")[:]``

  CostFunctionJo
    *List of values*. Each element is a value of the error function :math:`J^o`,
    that is of the observation difference part.

    Example : ``Jo = ADD.get("CostFunctionJo")[:]``

The conditional outputs of the algorithm are the following:

  BMA
    *List of vectors*. Each element is a vector of difference between the
    background and the optimal state.

    Example : ``bma = ADD.get("BMA")[-1]``

  CurrentState
    *List of vectors*. Each element is a usual state vector used during the
    optimization algorithm procedure.

    Example : ``Xs = ADD.get("CurrentState")[:]``

  Innovation
    *List of vectors*. Each element is an innovation vector, which is in static
    the difference between the optimal and the background, and in dynamic the
    evolution increment.

    Example : ``d = ADD.get("Innovation")[-1]``

  OMA
    *List of vectors*. Each element is a vector of difference between the
    observation and the optimal state in the observation space.

    Example : ``oma = ADD.get("OMA")[-1]``

  OMB
    *List of vectors*. Each element is a vector of difference between the
    observation and the background state in the observation space.

    Example : ``omb = ADD.get("OMB")[-1]``

  SimulatedObservationAtCurrentState
    *List of vectors*. Each element is an observed vector at the current state,
    that is, in the observation space.

    Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``

  SimulatedObservationAtOptimum
    *List of vectors*. Each element is a vector of observation simulated from
    the analysis or optimal state :math:`\mathbf{x}^a`.

    Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``

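
As an indication, the following short post-processing fragment (a sketch,
assuming it is executed in the ADAO post-processing step where the "*ADD*"
variable is available) gathers some of these outputs::

    # Retrieve the optimal state and the cost function history
    Xa = ADD.get("Analysis")[-1]        # optimal state (unconditional output)
    J  = ADD.get("CostFunctionJ")[:]    # cost function values along the minimization
    print( "Optimal state.............: %s"%(Xa,) )
    print( "Final cost function J.....: %.6e"%(J[-1],) )
    print( "Number of J evaluations...: %i"%(len(J),) )
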
See also
++++++++

References to other sections:
  - :ref:`section_ref_algorithm_3DVAR`

Bibliographical references:
  - [Byrd95]_
  - [Morales11]_
  - [Zhu97]_