..
   Copyright (C) 2008-2017 EDF R&D

   This file is part of SALOME ADAO module.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D

.. index:: single: DerivativeFreeOptimization
.. _section_ref_algorithm_DerivativeFreeOptimization:

Calculation algorithm "*DerivativeFreeOptimization*"
----------------------------------------------------

.. warning::

  In its present version, this algorithm is experimental, and so changes can
  be required in forthcoming versions.

This algorithm realizes an estimation of the state of a system by minimization
of a cost function :math:`J` without gradient. It is a method that does not
use the derivatives of the cost function. It falls, for example, in the same
category as the :ref:`section_ref_algorithm_ParticleSwarmOptimization`.

This is an optimization method allowing for global minimum search of a general
error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
with or without weights. The default error function is the augmented weighted
least squares function, classically used in data assimilation.
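
As a reminder, and up to normalization factors, this augmented weighted least
squares function classically takes the following form, with the notations
:math:`\mathbf{x}^b`, :math:`\mathbf{B}`, :math:`\mathbf{y}^o`,
:math:`\mathbf{R}` and :math:`H` defined by the commands described below:

.. math:: J(\mathbf{x}) = (\mathbf{x}-\mathbf{x}^b)^T \mathbf{B}^{-1} (\mathbf{x}-\mathbf{x}^b) + (\mathbf{y}^o - H(\mathbf{x}))^T \mathbf{R}^{-1} (\mathbf{y}^o - H(\mathbf{x}))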

Optional and required commands
++++++++++++++++++++++++++++++

.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Minimizer
.. index:: single: MaximumNumberOfSteps
.. index:: single: MaximumNumberOfFunctionEvaluations
.. index:: single: StateVariationTolerance
.. index:: single: CostDecrementTolerance
.. index:: single: QualityCriterion
.. index:: single: StoreSupplementaryCalculations

The general required commands, available in the editing user interface, are the
following:

Background
  *Required command*. This indicates the background or initial vector used,
  previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
  "*Vector*" or a "*VectorSerie*" type object.

BackgroundError
  *Required command*. This indicates the background error covariance matrix,
  previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
  type object, a "*ScalarSparseMatrix*" type object, or a
  "*DiagonalSparseMatrix*" type object.

Observation
  *Required command*. This indicates the observation vector used for data
  assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
  is defined as a "*Vector*" or a "*VectorSerie*" type object.

ObservationError
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
  object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
  type object.

ObservationOperator
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
  into results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
  a "*Matrix*" type one. In the case of "*Function*" type, different
  functional forms can be used, as described in the section
  :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
  included in the observation, the operator has to be applied to a pair
  :math:`(X,U)`.

The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
of the command "*AlgorithmParameters*" allow choosing the specific options,
described hereafter, of the algorithm. See
:ref:`section_ref_options_Algorithm_Parameters` for the good use of this
command.
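
As an illustration, a minimal sketch of a complete case using the ADAO textual
interface could look as follows; the ``adaoBuilder`` calls follow the standard
ADAO TUI, but all the numerical values and the observation operator are purely
hypothetical, chosen only to make the example self-contained::

    import numpy
    from adao import adaoBuilder

    def H( X ):
        "Hypothetical observation operator: componentwise square"
        return numpy.ravel( numpy.array( X, dtype=float )**2 )

    case = adaoBuilder.New()
    case.set( 'Background',          Vector = [0., 1., 2.] )
    case.set( 'BackgroundError',     ScalarSparseMatrix = 1. )
    case.set( 'Observation',         Vector = [0.5, 1.5, 2.5] )
    case.set( 'ObservationError',    ScalarSparseMatrix = 1. )
    case.set( 'ObservationOperator', OneFunction = H )
    case.set( 'AlgorithmParameters',
        Algorithm  = 'DerivativeFreeOptimization',
        Parameters = {
            "Minimizer"                          : "POWELL",
            "MaximumNumberOfFunctionEvaluations" : 500,
            "StateVariationTolerance"            : 1.e-4,
            },
        )
    case.execute()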

The options of the algorithm are the following:

Minimizer
  This key allows choosing the optimization minimizer. The default choice is
  "POWELL", and the possible ones are "POWELL" (modified Powell unconstrained
  minimizer, see [Powell]_), "SIMPLEX" (simplex or Nelder-Mead unconstrained
  minimizer, see [Nelder]_), and "COBYLA" (constrained optimization by linear
  approximation). It is recommended to stay with the default when there are
  no bounds, and to choose "COBYLA" when there are bounds. Remark: the
  default "POWELL" method performs a dual outer/inner loop optimization,
  leading to less control on the number of cost function evaluations because
  it is the outer loop limit that is controlled. If precise control on this
  number of evaluations is required, choose the "SIMPLEX" or the "COBYLA"
  minimizer.

  Example : ``{"Minimizer":"POWELL"}``

MaximumNumberOfSteps
  This key indicates the maximum number of iterations allowed for iterative
  optimization. The default is 15000, which is very similar to no limit on
  iterations. It is then recommended to adapt this parameter to the needs of
  real problems. For some optimizers, the effective stopping step can be
  slightly different from the limit due to algorithm internal control
  requirements.

  Example : ``{"MaximumNumberOfSteps":50}``

MaximumNumberOfFunctionEvaluations
  This key indicates the maximum number of evaluations of the cost function
  to be optimized. The default is 15000, which is an arbitrary limit. It is
  then recommended to adapt this parameter to the needs of real problems. For
  some optimizers, the effective number of function evaluations can be
  slightly different from the limit due to algorithm internal control
  requirements.

  Example : ``{"MaximumNumberOfFunctionEvaluations":50}``

StateVariationTolerance
  This key indicates the maximum relative variation of the state for stopping
  by convergence on the state. The default is 1.e-4, and it is recommended to
  adapt it to the needs of real problems.

  Example : ``{"StateVariationTolerance":1.e-4}``

CostDecrementTolerance
  This key indicates a limit value, leading to successfully stop the
  iterative optimization process when the cost function decreases less than
  this tolerance at the last step. The default is 1.e-7, and it is
  recommended to adapt it to the needs of real problems.

  Example : ``{"CostDecrementTolerance":1.e-7}``

QualityCriterion
  This key indicates the quality criterion, minimized to find the optimal
  state estimate. The default is the usual data assimilation criterion named
  "DA", the augmented weighted least squares. The possible criteria have to
  be in the following list, where the equivalent names are indicated by the
  sign "=" (a reminder of the corresponding formulas is given after this list
  of options): ["AugmentedWeightedLeastSquares"="AWLS"="DA",
  "WeightedLeastSquares"="WLS", "LeastSquares"="LS"="L2",
  "AbsoluteValue"="L1", "MaximumError"="ME"].

  Example : ``{"QualityCriterion":"DA"}``

StoreSupplementaryCalculations
  This list indicates the names of the supplementary variables that can be
  available at the end of the algorithm. It involves potentially costly
  calculations or memory consumption. The default is a void list, none of
  these variables being calculated and stored by default. The possible names
  are in the following list: ["CurrentState", "CostFunctionJ",
  "CostFunctionJb", "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
  "CurrentOptimum", "IndexOfOptimum", "InnovationAtCurrentState", "BMA",
  "OMA", "OMB", "SimulatedObservationAtBackground",
  "SimulatedObservationAtCurrentOptimum",
  "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].

  Example : ``{"StoreSupplementaryCalculations":["BMA", "CurrentState"]}``
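
To fix ideas on the quality criteria named above, writing
:math:`\mathbf{d} = \mathbf{y}^o - H(\mathbf{x})` for the observation misfit,
the non-default criteria correspond, up to normalization factors and as a
plausible reading of their names rather than an extract of the code, to:

.. math:: J_{WLS} = \mathbf{d}^T \mathbf{R}^{-1} \mathbf{d} \;,\quad
          J_{LS} = \mathbf{d}^T \mathbf{d} \;,\quad
          J_{L^1} = \sum_i |\mathbf{d}_i| \;,\quad
          J_{ME} = \max_i |\mathbf{d}_i|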

Information and variables available at the end of the algorithm
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the method
named ``get`` of the variable "*ADD*" of the post-processing. The input
variables, available to the user at the output in order to facilitate the
writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.

The unconditional outputs of the algorithm are the following:

Analysis
  *List of vectors*. Each element is an optimal state :math:`\mathbf{x}^*` in
  optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.

  Example : ``Xa = ADD.get("Analysis")[-1]``

CostFunctionJ
  *List of values*. Each element is a value of the error function :math:`J`.

  Example : ``J = ADD.get("CostFunctionJ")[:]``

CostFunctionJb
  *List of values*. Each element is a value of the error function
  :math:`J^b`, that is of the background difference part.

  Example : ``Jb = ADD.get("CostFunctionJb")[:]``

CostFunctionJo
  *List of values*. Each element is a value of the error function
  :math:`J^o`, that is of the observation difference part.

  Example : ``Jo = ADD.get("CostFunctionJo")[:]``

CurrentState
  *List of vectors*. Each element is a usual state vector used during the
  optimization algorithm procedure.

  Example : ``Xs = ADD.get("CurrentState")[:]``

The conditional outputs of the algorithm are the following:

SimulatedObservationAtBackground
  *List of vectors*. Each element is a vector of observation simulated from
  the background :math:`\mathbf{x}^b`.

  Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``

SimulatedObservationAtCurrentState
  *List of vectors*. Each element is an observed vector at the current state,
  that is, in the observation space.

  Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``

SimulatedObservationAtOptimum
  *List of vectors*. Each element is a vector of observation simulated from
  the analysis or optimal state :math:`\mathbf{x}^a`.

  Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
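
By way of illustration, a minimal post-processing sketch combining the ``get``
accesses described above could look as follows; the variable names and the
printed summary are chosen for the example only::

    # Retrieve the optimal state and the convergence history
    Xa = ADD.get("Analysis")[-1]         # optimal state
    J  = ADD.get("CostFunctionJ")[:]     # cost function history
    Xs = ADD.get("CurrentState")[:]      # states evaluated during optimization
    print("Optimal state..:", Xa)
    print("Final cost.....:", J[-1])
    print("Evaluations....:", len(Xs))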

See also
++++++++

References to other sections:
  - :ref:`section_ref_algorithm_ParticleSwarmOptimization`

Bibliographical references:
  - [Nelder]_
  - [Powell]_