.. index:: single: DerivativeFreeOptimization

.. _section_ref_algorithm_DerivativeFreeOptimization:

Calculation algorithm "*DerivativeFreeOptimization*"
----------------------------------------------------
.. warning::

  In its present version, this algorithm is experimental, and so changes can be
  required in forthcoming versions.
This algorithm realizes an estimation of the state of a system by minimization
of a cost function :math:`J` without gradient. It is a method that does not use
the derivatives of the cost function. It falls, for example, in the same
category as the :ref:`section_ref_algorithm_ParticleSwarmOptimization`.
This is an optimization method allowing for global minimum search of a general
error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
with or without weights. The default error function is the augmented weighted
least squares function, classically used in data assimilation.
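As a reminder, with the notations introduced in the commands below, this
default augmented weighted least squares function can classically be written
as:

.. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+(\mathbf{y}^o-H(\mathbf{x}))^T.\mathbf{R}^{-1}.(\mathbf{y}^o-H(\mathbf{x}))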
Optional and required commands
++++++++++++++++++++++++++++++
.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Minimizer
.. index:: single: MaximumNumberOfSteps
.. index:: single: MaximumNumberOfFunctionEvaluations
.. index:: single: StateVariationTolerance
.. index:: single: CostDecrementTolerance
.. index:: single: QualityCriterion
.. index:: single: StoreSupplementaryCalculations
The general required commands, available in the editing user interface, are the
following:

Background
  *Required command*. This indicates the background or initial vector used,
  previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
  "*Vector*" or a "*VectorSerie*" type object.
BackgroundError
  *Required command*. This indicates the background error covariance matrix,
  previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
  type object, a "*ScalarSparseMatrix*" type object, or a
  "*DiagonalSparseMatrix*" type object.
Observation
  *Required command*. This indicates the observation vector used for data
  assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
  is defined as a "*Vector*" or a "*VectorSerie*" type object.
ObservationError
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
  object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
  type object.
ObservationOperator
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
  results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
  a "*Matrix*" type one. In the case of "*Function*" type, different
  functional forms can be used, as described in the section
  :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
  included in the observation, the operator has to be applied to a pair
  :math:`(X,U)`. In the "*Function*" case, a minimal sketch of such an
  operator is given just after this list.
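As announced, a minimal sketch of a "*Function*" type observation operator,
following the functional form described in
:ref:`section_ref_operator_requirements`, could look as follows. The linear
model used here is purely illustrative and stands for the user simulation
code::

    import numpy

    def DirectOperator( X ):
        "Observation simulated for a given state X (illustrative sketch)"
        X = numpy.ravel( X )
        # The user simulation code goes here; a trivial linear example:
        return numpy.array( [ 1.*X[0], 2.*X[1], 3.*X[2] ] )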
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
of the command "*AlgorithmParameters*" allow choosing the specific options of
the algorithm, described hereafter. See
:ref:`section_ref_options_Algorithm_Parameters` for the proper use of this
command.
The options of the algorithm are the following:
Minimizer
  This key allows choosing the optimization minimizer. The default choice is
  "BOBYQA", and the possible ones are
  "BOBYQA" (minimization with or without constraints by quadratic approximation [Powell09]_),
  "COBYLA" (minimization with or without constraints by linear approximation [Powell94]_ [Powell98]_),
  "NEWUOA" (minimization with or without constraints by iterative quadratic approximation [Powell04]_),
  "POWELL" (unconstrained minimization using conjugate directions [Powell64]_),
  "SIMPLEX" (minimization with or without constraints using the Nelder-Mead simplex algorithm [Nelder65]_),
  "SUBPLEX" (minimization with or without constraints using Nelder-Mead on a sequence of subspaces [Rowan90]_).
  Remark: the "POWELL" method performs a dual outer/inner loop optimization,
  leading to less control over the number of cost function evaluations because
  it is the outer loop limit that is controlled. If precise control over this
  number of cost function evaluations is required, choose another minimizer.

  Example : ``{"Minimizer":"BOBYQA"}``
Bounds
  This key allows defining upper and lower bounds for every state variable
  being optimized. Bounds have to be given as a list of pairs of lower/upper
  bounds for each variable, with possibly ``None`` every time there is no
  bound. The bounds can always be specified, but they are taken into account
  only by the constrained optimizers.

  Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
MaximumNumberOfSteps
  This key indicates the maximum number of iterations allowed for iterative
  optimization. The default is 15000, which is very similar to no limit on
  iterations. It is then recommended to adapt this parameter to the needs of
  real problems. For some optimizers, the effective stopping step can be
  slightly different from the limit due to algorithm internal control
  requirements.

  Example : ``{"MaximumNumberOfSteps":50}``
MaximumNumberOfFunctionEvaluations
  This key indicates the maximum number of evaluations of the cost function to
  be optimized. The default is 15000, which is an arbitrary limit. It is then
  recommended to adapt this parameter to the needs of real problems. For some
  optimizers, the effective number of function evaluations can be slightly
  different from the limit due to algorithm internal control requirements.

  Example : ``{"MaximumNumberOfFunctionEvaluations":50}``
StateVariationTolerance
  This key indicates the maximum relative variation of the state for stopping
  by convergence on the state. The default is 1.e-4, and it is recommended to
  adapt it to the needs of real problems.

  Example : ``{"StateVariationTolerance":1.e-4}``
CostDecrementTolerance
  This key indicates a limit value, leading to successfully stop the iterative
  optimization process when the cost function decreases by less than this
  tolerance at the last step. The default is 1.e-7, and it is recommended to
  adapt it to the needs of real problems.

  Example : ``{"CostDecrementTolerance":1.e-7}``
QualityCriterion
  This key indicates the quality criterion, minimized to find the optimal
  state estimate. The default is the usual data assimilation criterion named
  "DA", the augmented weighted least squares. The possible criteria have to be
  chosen from the following list, where the equivalent names are indicated by
  the sign "=": ["AugmentedWeightedLeastSquares"="AWLS"="DA",
  "WeightedLeastSquares"="WLS", "LeastSquares"="LS"="L2",
  "AbsoluteValue"="L1", "MaximumError"="ME"]. The corresponding classical
  error measures are recalled just after this list of options.

  Example : ``{"QualityCriterion":"DA"}``
StoreSupplementaryCalculations
  This list indicates the names of the supplementary variables that can be
  available at the end of the algorithm. It involves potentially costly
  calculations or memory consumption. The default is an empty list, none of
  these variables being calculated and stored by default. The possible names
  are in the following list: ["BMA", "CostFunctionJ",
  "CostFunctionJb", "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
  "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum",
  "CurrentOptimum", "CurrentState", "IndexOfOptimum",
  "InnovationAtCurrentState", "OMA", "OMB",
  "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
  "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].

  Example : ``{"StoreSupplementaryCalculations":["BMA", "CurrentState"]}``
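As announced above, and as a reminder only, the non-default quality criteria
correspond, up to the weighting conventions, to the following classical error
measures on the residual :math:`\mathbf{y}^o-H(\mathbf{x})` (the :math:`J`
subscripts are only illustrative names):

.. math:: J_{WLS}(\mathbf{x})=(\mathbf{y}^o-H(\mathbf{x}))^T.\mathbf{R}^{-1}.(\mathbf{y}^o-H(\mathbf{x}))

.. math:: J_{LS}(\mathbf{x})=(\mathbf{y}^o-H(\mathbf{x}))^T.(\mathbf{y}^o-H(\mathbf{x}))

.. math:: J_{L1}(\mathbf{x})=\|\mathbf{y}^o-H(\mathbf{x})\|_1\,,\qquad J_{ME}(\mathbf{x})=\|\mathbf{y}^o-H(\mathbf{x})\|_{\infty}

As an illustration, several of these options can be gathered in a single
"*AlgorithmParameters*" dictionary, as in the following sketch, where all the
numerical values are arbitrary and have to be adapted to the real problem::

    # A sketch of an "AlgorithmParameters" dictionary gathering some of the
    # options described above (the values are arbitrary, problem-dependent)
    AlgorithmParameters = {
        "Minimizer"                          : "BOBYQA",
        "MaximumNumberOfSteps"               : 100,
        "MaximumNumberOfFunctionEvaluations" : 500,
        "StateVariationTolerance"            : 1.e-4,
        "CostDecrementTolerance"             : 1.e-7,
        "QualityCriterion"                   : "DA",
        "StoreSupplementaryCalculations"     : ["CurrentState", "CostFunctionJ"],
    }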
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the method
named ``get`` of the variable "*ADD*" of the post-processing. The input
variables, available to the user at the output in order to facilitate the
writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.
The unconditional outputs of the algorithm are the following:
Analysis
  *List of vectors*. Each element is an optimal state :math:`\mathbf{x}^*` in
  optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.

  Example : ``Xa = ADD.get("Analysis")[-1]``
CostFunctionJ
  *List of values*. Each element is a value of the error function :math:`J`.

  Example : ``J = ADD.get("CostFunctionJ")[:]``
CostFunctionJb
  *List of values*. Each element is a value of the error function :math:`J^b`,
  that is of the background difference part.

  Example : ``Jb = ADD.get("CostFunctionJb")[:]``
CostFunctionJo
  *List of values*. Each element is a value of the error function :math:`J^o`,
  that is of the observation difference part.

  Example : ``Jo = ADD.get("CostFunctionJo")[:]``
CurrentState
  *List of vectors*. Each element is a usual state vector used during the
  optimization algorithm procedure.

  Example : ``Xs = ADD.get("CurrentState")[:]``
The conditional outputs of the algorithm are the following:
CostFunctionJAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J`.
  At each step, the value corresponds to the optimal state found from the
  beginning.

  Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``
CostFunctionJbAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J^b`,
  that is of the background difference part. At each step, the value
  corresponds to the optimal state found from the beginning.

  Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``
CostFunctionJoAtCurrentOptimum
  *List of values*. Each element is a value of the error function :math:`J^o`,
  that is of the observation difference part. At each step, the value
  corresponds to the optimal state found from the beginning.

  Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``
CurrentOptimum
  *List of vectors*. Each element is the optimal state obtained at the current
  step of the optimization algorithm. It is not necessarily the last state.

  Example : ``Xo = ADD.get("CurrentOptimum")[:]``
IndexOfOptimum
  *List of integers*. Each element is the iteration index of the optimum
  obtained at the current step of the optimization algorithm. It is not
  necessarily the number of the last iteration.

  Example : ``i = ADD.get("IndexOfOptimum")[-1]``
InnovationAtCurrentState
  *List of vectors*. Each element is an innovation vector at the current
  state.

  Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]``
OMA
  *List of vectors*. Each element is a vector of difference between the
  observation and the optimal state in the observation space.

  Example : ``oma = ADD.get("OMA")[-1]``
OMB
  *List of vectors*. Each element is a vector of difference between the
  observation and the background state in the observation space.

  Example : ``omb = ADD.get("OMB")[-1]``
SimulatedObservationAtBackground
  *List of vectors*. Each element is a vector of observation simulated from
  the background :math:`\mathbf{x}^b`.

  Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
SimulatedObservationAtCurrentOptimum
  *List of vectors*. Each element is a vector of observation simulated from
  the optimal state obtained at the current step of the optimization
  algorithm, that is, in the observation space.

  Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
SimulatedObservationAtCurrentState
  *List of vectors*. Each element is an observed vector at the current state,
  that is, in the observation space.

  Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``
SimulatedObservationAtOptimum
  *List of vectors*. Each element is a vector of observation simulated from
  the analysis or optimal state :math:`\mathbf{x}^a`.

  Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
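As an illustration, a minimal post-processing sketch gathering some of these
outputs, assuming as above that "*ADD*" is the post-processing variable giving
access to the results, can be written as::

    # A minimal post-processing sketch using the "get" method of "ADD"
    Xa = ADD.get("Analysis")[-1]        # Optimal state (the last analysis)
    J  = ADD.get("CostFunctionJ")[:]    # Full history of the cost function
    print("Optimal state.......:", Xa)
    print("Final cost function.:", J[-1])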
See also
++++++++

References to other sections:
  - :ref:`section_ref_algorithm_ParticleSwarmOptimization`
Bibliographical references:
  - [Nelder65]_
  - [Powell64]_
  - [Powell94]_
  - [Powell98]_
  - [Powell04]_
  - [Powell09]_
  - [Rowan90]_