..
   Copyright (C) 2008-2018 EDF R&D

   This file is part of SALOME ADAO module.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
.. index:: single: LinearityTest
.. _section_ref_algorithm_LinearityTest:
Checking algorithm "*LinearityTest*"
------------------------------------
This algorithm allows to check the linear quality of the operator, by
calculating a residue with known theoretical properties. Several residue
formulas are available.

In all cases, one takes :math:`\mathbf{dx}_0=Normal(0,\mathbf{x})` and
:math:`\mathbf{dx}=\alpha*\mathbf{dx}_0`. :math:`F` is the calculation code.
"CenteredDL" residue
********************

One observes the following residue, coming from the centered difference of the
:math:`F` values at the nominal point and at the perturbed points, normalized by
the value at the nominal point:

.. math:: R(\alpha) = \frac{|| F(\mathbf{x}+\alpha*\mathbf{dx}) + F(\mathbf{x}-\alpha*\mathbf{dx}) - 2*F(\mathbf{x}) ||}{|| F(\mathbf{x}) ||}

If it stays constantly really small with respect to 1, the linearity hypothesis
of :math:`F` is verified.
If the residue is varying, or if it is of order 1 or more, and it is small only
at a certain order of increment, the linearity hypothesis of :math:`F` is not
verified.

If the residue is decreasing and the decrease behaves as :math:`\alpha^2` with
respect to :math:`\alpha`, it means that the gradient is correctly calculated
down to the stopping level of the quadratic decrease.
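As an illustration, the behaviour of this residue can be checked numerically on a toy operator. Everything below (the matrix, the nominal point, the seed) is an illustrative assumption, not part of ADAO itself; for a linear operator the residue stays at the level of machine precision for every increment:

```python
import numpy as np

# Hypothetical linear operator F(x) = A @ x, used only to illustrate the test
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

def F(x):
    return A @ x

rng = np.random.default_rng(1000)   # fixed seed, in the spirit of "SetSeed"
x = np.array([1.0, 2.0])            # nominal checking point
dx = rng.normal(size=x.size)        # random initial direction dx_0

for p in range(9):                  # increment multipliers 1.e0 down to 1.e-8
    alpha = 10.0 ** (-p)
    # "CenteredDL" residue: centered difference normalized by ||F(x)||
    R = (np.linalg.norm(F(x + alpha * dx) + F(x - alpha * dx) - 2.0 * F(x))
         / np.linalg.norm(F(x)))
    print(f"alpha=1e-{p}  R={R:.3e}")
```

A nonlinear operator substituted for ``F`` would instead give a residue that only becomes small for small enough :math:`\alpha`.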
"Taylor" residue
****************

One observes the residue coming from the Taylor development of the :math:`F`
function, normalized by the value at the nominal point:

.. math:: R(\alpha) = \frac{|| F(\mathbf{x}+\alpha*\mathbf{dx}) - F(\mathbf{x}) - \alpha * \nabla_xF(\mathbf{dx}) ||}{|| F(\mathbf{x}) ||}

If it stays constantly really small with respect to 1, the linearity hypothesis
of :math:`F` is verified.
If the residue is varying, or if it is of order 1 or more, and it is small only
at a certain order of increment, the linearity hypothesis of :math:`F` is not
verified.

If the residue is decreasing and the decrease behaves as :math:`\alpha^2` with
respect to :math:`\alpha`, it means that the gradient is correctly calculated
down to the stopping level of the quadratic decrease.
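To see the :math:`\alpha^2` behaviour announced above, the same formula can be applied to a deliberately nonlinear toy operator whose exact tangent linear is known. The operator, point and direction below are illustrative assumptions:

```python
import numpy as np

def F(x):
    return x ** 2            # componentwise square: a simple nonlinear operator

def tangent_F(x, dx):
    return 2.0 * x * dx      # exact tangent linear of F at x, applied to dx

x = np.array([1.0, 2.0, 3.0])     # nominal checking point
dx = np.array([0.1, -0.2, 0.3])   # chosen perturbation direction

for p in range(6):
    alpha = 10.0 ** (-p)
    # "Taylor" residue: first order Taylor remainder normalized by ||F(x)||
    R = (np.linalg.norm(F(x + alpha * dx) - F(x) - alpha * tangent_F(x, dx))
         / np.linalg.norm(F(x)))
    print(f"alpha=1e-{p}  R={R:.3e}  R/alpha^2={R / alpha**2:.3e}")
```

For this operator the remainder is exactly :math:`\alpha^2*\mathbf{dx}^2`, so :math:`R/\alpha^2` stays constant: the gradient is correct even though :math:`F` itself is not linear.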
"NominalTaylor" residue
***********************
One observes the residue built from two order 1 approximations of
:math:`F(\mathbf{x})`, normalized by the value at the nominal point:

.. math:: R(\alpha) = \max(|| F(\mathbf{x}+\alpha*\mathbf{dx}) - \alpha * F(\mathbf{dx}) || / || F(\mathbf{x}) ||, || F(\mathbf{x}-\alpha*\mathbf{dx}) + \alpha * F(\mathbf{dx}) || / || F(\mathbf{x}) ||)

If the residue stays constantly equal to 1 within 2 or 3 percent (that is,
:math:`|R-1|` stays less than 2 or 3 percent), the linearity hypothesis of
:math:`F` is verified.
If it is equal to 1 only on part of the variation domain of the increment
:math:`\alpha`, it is on this sub-domain that the linearity hypothesis of
:math:`F` is verified.
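For a linear operator the two order 1 approximations are exact, so the residue is identically 1. The sketch below illustrates this with an assumed matrix operator (all values are illustrative):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

def F(x):
    return A @ x             # hypothetical linear operator

x = np.array([1.0, 2.0])     # nominal checking point
dx = np.array([0.3, -0.1])   # chosen perturbation direction
nF = np.linalg.norm(F(x))

for p in range(5):
    alpha = 10.0 ** (-p)
    # "NominalTaylor" residue: both order 1 approximations reduce to F(x)
    # when F is linear, so each normalized term equals 1
    R = max(np.linalg.norm(F(x + alpha * dx) - alpha * F(dx)) / nF,
            np.linalg.norm(F(x - alpha * dx) + alpha * F(dx)) / nF)
    print(f"alpha=1e-{p}  R={R:.6f}")
```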
"NominalTaylorRMS" residue
**************************
One observes the residue built from two order 1 approximations of
:math:`F(\mathbf{x})`, normalized by the value at the nominal point, on which
one estimates the quadratic root mean square (RMS) with the value at the
nominal point:

.. math:: R(\alpha) = \max(RMS( F(\mathbf{x}), F(\mathbf{x}+\alpha*\mathbf{dx}) - \alpha * F(\mathbf{dx}) ) / || F(\mathbf{x}) ||, RMS( F(\mathbf{x}), F(\mathbf{x}-\alpha*\mathbf{dx}) + \alpha * F(\mathbf{dx}) ) / || F(\mathbf{x}) ||)

If it stays constantly equal to 0 within 1 or 2 percent, the linearity
hypothesis of :math:`F` is verified.
If it is equal to 0 only on part of the variation domain of the increment
:math:`\alpha`, it is on this sub-domain that the linearity hypothesis of
:math:`F` is verified.
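The same linear toy operator can illustrate this variant. Here it is assumed that :math:`RMS(V,W)` denotes the root mean square of the difference :math:`V-W`; that definition, like the operator and values below, is an illustrative assumption:

```python
import numpy as np

def rms(v, w):
    # Assumed definition: root mean square of the difference between v and w
    return np.sqrt(np.mean((v - w) ** 2))

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

def F(x):
    return A @ x             # hypothetical linear operator

x = np.array([1.0, 2.0])     # nominal checking point
dx = np.array([0.3, -0.1])   # chosen perturbation direction
nF = np.linalg.norm(F(x))

for p in range(5):
    alpha = 10.0 ** (-p)
    # "NominalTaylorRMS" residue: for a linear F, each order 1 approximation
    # coincides with F(x), so the RMS of the difference is 0
    R = max(rms(F(x), F(x + alpha * dx) - alpha * F(dx)) / nF,
            rms(F(x), F(x - alpha * dx) + alpha * F(dx)) / nF)
    print(f"alpha=1e-{p}  R={R:.3e}")
```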
Optional and required commands
++++++++++++++++++++++++++++++
.. index:: single: AlgorithmParameters
.. index:: single: CheckingPoint
.. index:: single: ObservationOperator
.. index:: single: AmplitudeOfInitialDirection
.. index:: single: EpsilonMinimumExponent
.. index:: single: InitialDirection
.. index:: single: ResiduFormula
.. index:: single: SetSeed
.. index:: single: StoreSupplementaryCalculations
The general required commands, available in the editing user interface, are the
following:

CheckingPoint
  *Required command*. This indicates the vector used as the state around which
  to perform the required check, noted :math:`\mathbf{x}` and similar to the
  background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
ObservationOperator
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
  results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
  a "*Matrix*" type one. In the case of "*Function*" type, different
  functional forms can be used, as described in the section
  :ref:`section_ref_operator_requirements`. If there is some control
  :math:`U` included in the observation, the operator has to be applied to a
  pair :math:`(X,U)`.
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
of the command "*AlgorithmParameters*" allow to choose the specific options,
described hereafter, of the algorithm. See
:ref:`section_ref_options_Algorithm_Parameters` for the good use of this
command.

The options of the algorithm are the following:
AmplitudeOfInitialDirection
  This key indicates the scaling of the initial perturbation built as a vector
  used for the directional derivative around the nominal checking point. The
  default is 1, which means no scaling.

  Example : ``{"AmplitudeOfInitialDirection":0.5}``
EpsilonMinimumExponent
  This key indicates the minimal exponent value of the power of 10 coefficient
  to be used to decrease the increment multiplier. The default is -8, and it
  has to be between 0 and -20. For example, its default value leads to
  calculate the residue of the formula with a fixed increment multiplied
  from 1.e0 to 1.e-8.

  Example : ``{"EpsilonMinimumExponent":-12}``
InitialDirection
  This key indicates the vector direction used for the directional derivative
  around the nominal checking point. It has to be a vector. If not specified,
  this direction defaults to a random perturbation around zero of the same
  vector size as the checking point.

  Example : ``{"InitialDirection":[0.1,0.1,100.,3.]}``
ResiduFormula
  This key indicates the residue formula that has to be used for the test. The
  default choice is "CenteredDL", and the possible ones are "CenteredDL"
  (residue of the difference between the function at the nominal point and the
  values with positive and negative increments, which has to stay very small),
  "Taylor" (residue of the Taylor development of the operator normalized by
  the nominal value, which has to stay very small), "NominalTaylor" (residue
  of the order 1 approximations of the operator, normalized to the nominal
  point, which has to stay close to 1), and "NominalTaylorRMS" (residue of the
  order 1 approximations of the operator, normalized by RMS to the nominal
  point, which has to stay close to 0).

  Example : ``{"ResiduFormula":"CenteredDL"}``
SetSeed
  This key allows to give an integer in order to fix the seed of the random
  generator used to generate the ensemble. A convenient value is for example
  1000. By default, the seed is left uninitialized, and so uses the default
  initialization from the computer.

  Example : ``{"SetSeed":1000}``
StoreSupplementaryCalculations
  This list indicates the names of the supplementary variables that can be
  available at the end of the algorithm. It involves potentially costly
  calculations or memory consumption. The default is a void list, none of
  these variables being calculated and stored by default. The possible names
  are in the following list: ["CurrentState", "Residu",
  "SimulatedObservationAtCurrentState"].

  Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}``
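Putting the options together, a parameter set for this algorithm could look as follows. The values are purely illustrative, and the exact way the dictionary is passed to the case depends on the chosen user interface:

```python
# Illustrative set of options for the "LinearityTest" checking algorithm
AlgorithmParameters = {
    "ResiduFormula": "CenteredDL",          # residue with known decrease
    "EpsilonMinimumExponent": -12,          # increments from 1.e0 to 1.e-12
    "AmplitudeOfInitialDirection": 0.5,     # scaling of the perturbation
    "SetSeed": 1000,                        # reproducible random direction
    "StoreSupplementaryCalculations": ["CurrentState", "Residu"],
}
```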
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the method
named ``get`` of the variable "*ADD*" of the post-processing. The input
variables, available to the user at the output in order to facilitate the
writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.
The unconditional outputs of the algorithm are the following:

Residu
  *List of values*. Each element is the value of the particular residue
  verified during a checking algorithm, in the order of the tests.

  Example : ``r = ADD.get("Residu")[:]``
The conditional outputs of the algorithm are the following:

CurrentState
  *List of vectors*. Each element is a usual state vector used during the
  optimization algorithm procedure.

  Example : ``Xs = ADD.get("CurrentState")[:]``
SimulatedObservationAtCurrentState
  *List of vectors*. Each element is an observed vector at the current state,
  that is, in the observation space.

  Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
See also
++++++++

References to other sections:
  - :ref:`section_ref_algorithm_FunctionTest`