..
   Copyright (C) 2008-2015 EDF R&D

   This file is part of SALOME ADAO module.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
.. index:: single: GradientTest
.. _section_ref_algorithm_GradientTest:

Checking algorithm "*GradientTest*"
-----------------------------------
This algorithm allows one to check the quality of the gradient of the
operator, by computing a residue with known theoretical properties. Several
residue formulas are available.

In all cases, one takes :math:`\mathbf{dx}_0=Normal(0,\mathbf{x})` and
:math:`\mathbf{dx}=\alpha*\mathbf{dx}_0`. :math:`F` is the calculation code.
"Taylor" residue
****************

One observes the residue coming from the Taylor expansion of the :math:`F`
function, normalized by the value at the nominal point:
.. math:: R(\alpha) = \frac{|| F(\mathbf{x}+\alpha*\mathbf{dx}) - F(\mathbf{x}) - \alpha * \nabla_xF(\mathbf{dx}) ||}{|| F(\mathbf{x}) ||}
If the residue is decreasing and the decrease varies as :math:`\alpha^2` with
respect to :math:`\alpha`, it means that the gradient is well calculated down
to the stopping precision of the quadratic decrease, and that :math:`F` is not
linear.

If the residue is decreasing and the decrease varies as :math:`\alpha` with
respect to :math:`\alpha`, down to a certain level after which the residue
remains small and constant, it means that :math:`F` is linear and that the
residue decreases from the error in the calculation of the :math:`\nabla_xF`
term.
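This quadratic behaviour can be illustrated with a small sketch, independent
of ADAO, on a toy operator :math:`F(x)=x^2` whose exact directional derivative
is known (the names ``taylor_residue``, ``F`` and ``gradF`` are purely
illustrative):

```python
# Illustrative sketch, not ADAO code: the "Taylor" residue R(alpha) for a
# toy nonlinear operator F(x) = x**2, whose exact directional derivative
# is gradF(x, dx) = 2*x*dx.
def taylor_residue(F, gradF, x, dx, alpha):
    # R(alpha) = |F(x + alpha*dx) - F(x) - alpha*gradF(x, dx)| / |F(x)|
    return abs(F(x + alpha * dx) - F(x) - alpha * gradF(x, dx)) / abs(F(x))

F = lambda x: x ** 2
gradF = lambda x, dx: 2.0 * x * dx

# Halving alpha divides the residue by 4: the expected alpha**2 decrease
# of a well-calculated gradient for a nonlinear F.
for alpha in (1.0, 0.5, 0.25):
    print(alpha, taylor_residue(F, gradF, 2.0, 1.0, alpha))
```

Here the residue is exactly :math:`\alpha^2\,\mathbf{dx}^2/\mathbf{x}^2`, so
each halving of :math:`\alpha` divides it by 4.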
"TaylorOnNorm" residue
**********************
One observes the residue coming from the Taylor expansion of the :math:`F`
function, normalized by the square of the :math:`\alpha` parameter:
.. math:: R(\alpha) = \frac{|| F(\mathbf{x}+\alpha*\mathbf{dx}) - F(\mathbf{x}) - \alpha * \nabla_xF(\mathbf{dx}) ||}{\alpha^2}
This residue is essentially similar to the classical Taylor criterion
described previously, but its behaviour can differ depending on the numerical
properties of the calculation.

If the residue is constant down to a certain level, after which it grows, it
means that the gradient is well calculated down to this stopping precision,
and that :math:`F` is not linear.
If the residue grows systematically, starting from a value that is very small
with respect to :math:`||F(\mathbf{x})||`, it means that :math:`F` is
(quasi-)linear and that the gradient calculation is correct until the residue
reaches the numerical order of magnitude of :math:`||F(\mathbf{x})||`.
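On the same toy operator as before (a sketch independent of ADAO, with purely
illustrative names), dividing the Taylor remainder by :math:`\alpha^2` yields
a residue that is exactly constant:

```python
# Sketch, not ADAO code: the "TaylorOnNorm" residue divides the Taylor
# remainder by alpha**2, so for a quadratic F(x) = x**2 it stays constant.
def taylor_on_norm_residue(F, gradF, x, dx, alpha):
    return abs(F(x + alpha * dx) - F(x) - alpha * gradF(x, dx)) / alpha ** 2

F = lambda x: x ** 2                  # toy nonlinear operator
gradF = lambda x, dx: 2.0 * x * dx    # its exact directional derivative

# The remainder equals alpha**2 * dx**2, so the residue is dx**2 = 1 here.
for alpha in (1.0, 0.5, 0.25):
    print(alpha, taylor_on_norm_residue(F, gradF, 2.0, 1.0, alpha))
```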
"Norm" residue
**************

One observes the residue based on the gradient approximation:
.. math:: R(\alpha) = \frac{|| F(\mathbf{x}+\alpha*\mathbf{dx}) - F(\mathbf{x}) ||}{\alpha}

which has to remain stable until the calculation precision is reached.
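A toy check of this stability, independent of ADAO (the operator and the name
``norm_residue`` are purely illustrative):

```python
# Sketch, not ADAO code: the "Norm" residue |F(x+alpha*dx)-F(x)| / alpha
# approximates the norm of the gradient term and should remain stable
# as alpha shrinks, until the calculation precision is reached.
def norm_residue(F, x, dx, alpha):
    return abs(F(x + alpha * dx) - F(x)) / alpha

F = lambda x: x ** 2  # toy operator, with |2*x*dx| = 4 at x=2, dx=1
for alpha in (1e-2, 1e-4, 1e-6):
    print(alpha, norm_residue(F, 2.0, 1.0, alpha))
```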
Optional and required commands
++++++++++++++++++++++++++++++

.. index:: single: AlgorithmParameters
.. index:: single: CheckingPoint
.. index:: single: ObservationOperator
.. index:: single: AmplitudeOfInitialDirection
.. index:: single: EpsilonMinimumExponent
.. index:: single: InitialDirection
.. index:: single: ResiduFormula
.. index:: single: SetSeed
.. index:: single: StoreSupplementaryCalculations
The general required commands, available in the editing user interface, are
the following ones:

CheckingPoint
  *Required command*. This indicates the vector used as the state around which
  to perform the required check, noted :math:`\mathbf{x}` and similar to the
  background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object.
ObservationOperator
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
  into results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
  a "*Matrix*" type one. In the case of "*Function*" type, different
  functional forms can be used, as described in the section
  :ref:`section_ref_operator_requirements`. If there is some control
  :math:`U` included in the observation, the operator has to be applied to a
  pair :math:`(X,U)`.
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the
parameters of the command "*AlgorithmParameters*" allow one to choose the
specific options of the algorithm, described hereafter. See
:ref:`section_ref_options_Algorithm_Parameters` for the good use of this
command.

The options of the algorithm are the following:
AmplitudeOfInitialDirection
  This key indicates the scaling of the initial perturbation, built as a
  vector used for the directional derivative around the nominal checking
  point. The default is 1, which means no scaling.

  Example : ``{"AmplitudeOfInitialDirection":0.5}``
EpsilonMinimumExponent
  This key indicates the minimal exponent value of the power of 10
  coefficient to be used to decrease the increment multiplier. The default is
  -8, and it has to be between 0 and -20. For example, its default value
  leads to calculating the residue of the chosen formula with a fixed
  increment multiplied from 1.e0 down to 1.e-8.

  Example : ``{"EpsilonMinimumExponent":-12}``
InitialDirection
  This key indicates the vector direction used for the directional derivative
  around the nominal checking point. It has to be a vector. If not specified,
  this direction defaults to a random perturbation around zero, of the same
  vector size as the checking point.

  Example : ``{"InitialDirection":[0.1,0.1,100.,3.]}``
ResiduFormula
  This key indicates the residue formula that has to be used for the test.
  The default choice is "Taylor", and the possible ones are "Taylor"
  (normalized residue of the Taylor expansion of the operator, which has to
  decrease with the square of the perturbation), "TaylorOnNorm" (residue of
  the Taylor expansion of the operator with respect to the perturbation
  squared, which has to remain constant) and "Norm" (residue obtained by
  taking the norm of the zero-order Taylor expansion, which approximates the
  gradient, and which has to remain constant).

  Example : ``{"ResiduFormula":"Taylor"}``
SetSeed
  This key allows one to give an integer in order to fix the seed of the
  random generator used to generate the random perturbation. A convenient
  value is for example 1000. By default, the seed is left uninitialized, so
  the default initialization from the computer is used.

  Example : ``{"SetSeed":1000}``
StoreSupplementaryCalculations
  This list indicates the names of the supplementary variables that can be
  made available at the end of the algorithm. It involves potentially costly
  calculations or memory consumption. The default is an empty list, none of
  these variables being calculated and stored by default. The possible names
  are in the following list: ["CurrentState", "Residu",
  "SimulatedObservationAtCurrentState"].

  Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}``
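The options above can, for instance, be gathered into a single Python
dictionary of the kind given to "*AlgorithmParameters*". The surrounding ADAO
case-building calls are deliberately not shown here, and the values below are
only illustrative:

```python
# Hypothetical grouping of the GradientTest options described above into
# the dictionary form used for "AlgorithmParameters"; values are examples.
algorithm_parameters = {
    "AmplitudeOfInitialDirection": 0.5,          # scaling of the perturbation
    "EpsilonMinimumExponent": -12,               # alpha down to 1.e-12
    "InitialDirection": [0.1, 0.1, 100., 3.],    # explicit direction vector
    "ResiduFormula": "Taylor",                   # or "TaylorOnNorm", "Norm"
    "SetSeed": 1000,                             # reproducible random draws
    "StoreSupplementaryCalculations": ["CurrentState", "Residu"],
}
print(sorted(algorithm_parameters))
```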
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the
method named ``get`` of the variable "*ADD*" of the post-processing. The
input variables, available to the user at the output in order to facilitate
the writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.
The unconditional outputs of the algorithm are the following:

Residu
  *List of values*. Each element is the value of the particular residue
  checked during a checking algorithm, in the order of the tests.

  Example : ``r = ADD.get("Residu")[:]``
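As an illustrative post-processing sketch, independent of ADAO: if the residue
values were obtained with :math:`\alpha` halved at each step, consecutive
ratios near 4 reveal the expected quadratic decrease of the "Taylor" residue
(the helper ``quadratic_decrease_ok`` is hypothetical):

```python
# Sketch, not ADAO code: given residues sampled while alpha is halved at
# each step, consecutive ratios near 4 indicate the expected quadratic
# (alpha**2) decrease of the "Taylor" residue.
def quadratic_decrease_ok(residues, tol=0.5):
    ratios = [r0 / r1 for r0, r1 in zip(residues, residues[1:])]
    # Accept ratios within a relative tolerance around the ideal value 4.
    return all(abs(r - 4.0) <= tol * 4.0 for r in ratios)

print(quadratic_decrease_ok([0.25, 0.0625, 0.015625]))  # ratios are 4.0
```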
The conditional outputs of the algorithm are the following:

CurrentState
  *List of vectors*. Each element is a usual state vector used during the
  algorithm procedure.

  Example : ``Xs = ADD.get("CurrentState")[:]``
SimulatedObservationAtCurrentState
  *List of vectors*. Each element is an observed vector at the current state,
  that is, in the observation space.

  Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
See also
++++++++

References to other sections:

- :ref:`section_ref_algorithm_FunctionTest`
- :ref:`section_ref_algorithm_LinearityTest`
- :ref:`section_ref_algorithm_TangentTest`
- :ref:`section_ref_algorithm_AdjointTest`