================================================================================
Reference description of the ADAO commands and keywords
================================================================================
This section presents the reference description of the ADAO commands and
keywords available through the GUI or through scripts.

Each command or keyword to be defined through the ADAO GUI has some properties.
The first property is to be *required*, *optional* or only factual, describing a
type of input. The second property is to be an "open" variable with a fixed type
but with any value allowed by the type, or a "restricted" variable, limited to
some specified values. Because the EFICAS editor GUI has built-in validating
capacities, the properties of the commands or keywords given through this GUI
are automatically correct.

The mathematical notations used below are explained in the section
:ref:`section_theory`.

Examples of using these commands are available in the section
:ref:`section_examples` and in the example files installed with the ADAO module.
List of possible input types
----------------------------
.. index:: single: Dict
.. index:: single: Function
.. index:: single: Matrix
.. index:: single: String
.. index:: single: Script
.. index:: single: Vector
Each ADAO variable has a pseudo-type to help filling it and validating it. The
different pseudo-types are:

**Dict**
    This indicates a variable that has to be filled by a dictionary, usually
    given as a script.

**Function**
    This indicates a variable that has to be filled by a function, usually given
    as a script or a component method.

**Matrix**
    This indicates a variable that has to be filled by a matrix, usually given
    either as a string or as a script.

**String**
    This indicates a string giving a literal representation of a matrix, a
    vector or a vector series, such as "1 2 ; 3 4" for a square 2x2 matrix.

**Script**
    This indicates a script given as an external file. It can be described by a
    full absolute path name or only by the file name without path.

**Vector**
    This indicates a variable that has to be filled by a vector, usually given
    either as a string or as a script.

**VectorSerie**
    This indicates a variable that has to be filled by a list of vectors,
    usually given either as a string or as a script.
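As an illustration of the "*String*" pseudo-type, a literal such as "1 2 ; 3 4"
can be parsed, for instance, with NumPy (assumed available in the SALOME Python
environment; ADAO may use its own parser internally):

```python
import numpy

# "String" literal of a square 2x2 matrix, as in the example above
m = numpy.matrix("1 2 ; 3 4")
assert m.shape == (2, 2)

# A vector uses the same literal syntax without the ";" row separator
v = numpy.matrix("1 2 3").A1  # flattened to a 1D array of size 3
```
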
When a command or keyword can be filled by a script file name, the script has to
contain a variable or a method with the same name as the one to be filled. In
other words, when the script is imported in a YACS Python node, it must create a
variable of the correct name in the current namespace.
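For example, a script used to fill the "*Background*" keyword could be sketched
as follows (the variable name must match the keyword exactly; the values are
purely illustrative):

```python
# Content of a script file referenced by the "Background" keyword:
# importing it must create a variable named exactly "Background"
# in the current namespace.
import numpy

Background = numpy.zeros(3)  # illustrative background vector of size 3
```
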
Reference description for ADAO calculation cases
------------------------------------------------

List of commands and keywords for an ADAO calculation case
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
.. index:: single: ASSIMILATION_STUDY
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: ControlInput
.. index:: single: Debug
.. index:: single: EvolutionError
.. index:: single: EvolutionModel
.. index:: single: InputVariables
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Observers
.. index:: single: OutputVariables
.. index:: single: Study_name
.. index:: single: Study_repertory
.. index:: single: UserDataInit
.. index:: single: UserPostAnalysis
The first set of commands is related to the description of a calculation case,
that is, a *Data Assimilation* procedure or an *Optimization* procedure. The
terms are ordered alphabetically, except the first, which describes the choice
between calculation or checking. The different commands are the following:
**ASSIMILATION_STUDY**
    *Required command*. This is the general command describing the data
    assimilation or optimization case. It hierarchically contains all the other
    commands.

**Algorithm**
    *Required command*. This is a string indicating the chosen data assimilation
    or optimization algorithm. The choices are limited and available through the
    GUI. There exists for example "3DVAR", "Blue"... See the list of algorithms
    and associated parameters in the subsection `Options and required commands
    for calculation algorithms`_.

**AlgorithmParameters**
    *Optional command*. This command allows one to add optional parameters to
    control the data assimilation or optimization algorithm. It is defined as a
    "*Dict*" type object, that is, given as a script. See the list of algorithms
    and associated parameters in the subsection `Options and required commands
    for calculation algorithms`_.
**Background**
    *Required command*. This indicates the background or initial vector used,
    previously noted as :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type
    object, that is, given either as a string or as a script.

**BackgroundError**
    *Required command*. This indicates the background error covariance matrix,
    previously noted as :math:`\mathbf{B}`. It is defined as a "*Matrix*" type
    object, that is, given either as a string or as a script.

**ControlInput**
    *Optional command*. This indicates the control vector used to force the
    evolution model at each step, usually noted as :math:`\mathbf{U}`. It is
    defined as a "*Vector*" or a "*VectorSerie*" type object, that is, given
    either as a string or as a script. When there is no control, it has to be a
    void string ''.

**Debug**
    *Required command*. This defines the level of trace and intermediary debug
    information. The choices are limited between 0 (for False) and 1 (for
    True).

**EvolutionError**
    *Optional command*. This indicates the evolution error covariance matrix,
    usually noted as :math:`\mathbf{Q}`. It is defined as a "*Matrix*" type
    object, that is, given either as a string or as a script.
**EvolutionModel**
    *Optional command*. This indicates the evolution model operator, usually
    noted :math:`M`, which describes one step of evolution. It is defined as a
    "*Function*" type object, that is, given as a script. Different functional
    forms can be used, as described in the subsection `Requirements for
    functions describing an operator`_. If there is some control :math:`U`
    included in the evolution model, the operator has to be applied to a pair
    :math:`(X,U)`.

**InputVariables**
    *Optional command*. This command allows one to indicate the name and size of
    physical variables that are bundled together in the control vector. This
    information is dedicated to data processed inside an algorithm.

**Observation**
    *Required command*. This indicates the observation vector used for data
    assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
    is defined as a "*Vector*" or a "*VectorSerie*" type object, that is, given
    either as a string or as a script.

**ObservationError**
    *Required command*. This indicates the observation error covariance matrix,
    previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
    object, that is, given either as a string or as a script.
**ObservationOperator**
    *Required command*. This indicates the observation operator, previously
    noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
    into results :math:`\mathbf{y}` to be compared to observations
    :math:`\mathbf{y}^o`. It is defined as a "*Function*" type object, that is,
    given as a script. Different functional forms can be used, as described in
    the subsection `Requirements for functions describing an operator`_. If
    there is some control :math:`U` included in the observation, the operator
    has to be applied to a pair :math:`(X,U)`.

**Observers**
    *Optional command*. This command allows one to set internal observers, that
    is, functions linked with a particular variable, which will be executed each
    time this variable is modified. It is a convenient way to monitor variables
    of interest during the data assimilation or optimization process, by
    printing or plotting them, for example.

**OutputVariables**
    *Optional command*. This command allows one to indicate the name and size of
    physical variables that are bundled together in the output observation
    vector. This information is dedicated to data processed inside an algorithm.
**Study_name**
    *Required command*. This is an open string to describe the study by a name
    or a sentence.

**Study_repertory**
    *Optional command*. If available, this directory is used to find all the
    script files that can be used to define some other commands by scripts.

**UserDataInit**
    *Optional command*. This command allows one to initialize some parameters or
    data automatically before data assimilation algorithm processing.

**UserPostAnalysis**
    *Optional command*. This command allows one to process some parameters or
    data automatically after data assimilation algorithm processing. It is
    defined as a script or a string, allowing to put post-processing code
    directly inside the ADAO case.
Options and required commands for calculation algorithms
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
.. index:: single: 3DVAR
.. index:: single: Blue
.. index:: single: EnsembleBlue
.. index:: single: KalmanFilter
.. index:: single: ExtendedKalmanFilter
.. index:: single: LinearLeastSquares
.. index:: single: NonLinearLeastSquares
.. index:: single: ParticleSwarmOptimization
.. index:: single: QuantileRegression

.. index:: single: AlgorithmParameters
.. index:: single: Bounds
.. index:: single: CostDecrementTolerance
.. index:: single: GradientNormTolerance
.. index:: single: GroupRecallRate
.. index:: single: MaximumNumberOfSteps
.. index:: single: Minimizer
.. index:: single: NumberOfInsects
.. index:: single: ProjectedGradientTolerance
.. index:: single: QualityCriterion
.. index:: single: Quantile
.. index:: single: SetSeed
.. index:: single: StoreInternalVariables
.. index:: single: StoreSupplementaryCalculations
.. index:: single: SwarmVelocity
Each algorithm can be controlled using some generic or specific options given
through the "*AlgorithmParameters*" optional command, as follows for example::

    AlgorithmParameters = {
        "Minimizer" : "LBFGSB",
        "MaximumNumberOfSteps" : 25,
        "StoreSupplementaryCalculations" : ["APosterioriCovariance", "OMA"],
        }
This section describes the available options algorithm by algorithm. If an
option is specified for an algorithm that does not support it, the option is
simply left unused. The meaning of the acronyms or particular names can be found
in the :ref:`genindex` or the :ref:`section_glossary`. In addition, for each
algorithm, the required commands/keywords are given; they are described in `List
of commands and keywords for an ADAO calculation case`_.
**"Blue"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator"*

  StoreInternalVariables
    This boolean key allows storing the default internal variables, mainly the
    current state during an iterative optimization process. Be careful, this can
    be a numerically costly choice in certain calculation cases. The default is
    False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "OMA", "OMB", "Innovation",
    "SigmaBck2", "SigmaObs2", "MahalanobisConsistency"].
**"LinearLeastSquares"**

  *Required commands*
  *"Observation", "ObservationError",
  "ObservationOperator"*

  StoreInternalVariables
    This boolean key allows storing the default internal variables, mainly the
    current state during an iterative optimization process. Be careful, this can
    be a numerically costly choice in certain calculation cases. The default is
    False.
  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["OMA", "OMB", "Innovation"].
**"3DVAR"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator"*

  Minimizer
    This key allows choosing the optimization minimizer. The default choice is
    "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
    minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained
    minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear
    unconstrained minimizer), "NCG" (Newton CG minimizer).

  Bounds
    This key allows defining upper and lower bounds for every control variable
    being optimized. Bounds have to be given as a list of lists of pairs of
    lower/upper bounds for each variable, with ``None`` every time there is no
    bound. The bounds can always be specified, but they are taken into account
    only by the constrained minimizers.

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for the
    iterative optimization. The default is 15000, which is very similar to no
    limit on iterations. It is then recommended to adapt this parameter to the
    needs of real problems. For some minimizers, the effective stopping step can
    be slightly different due to algorithm internal control requirements.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the iterative
    optimization process when the cost function decreases less than this
    tolerance at the last step. The default is 1.e-7, and it is recommended to
    adapt it to the needs of real problems.

  ProjectedGradientTolerance
    This key indicates a limit value, leading to stop successfully the iterative
    optimization process when all the components of the projected gradient are
    under this limit. It is only used for constrained minimizers. The default is
    -1, that is, the internal default of each minimizer (generally 1.e-5), and
    it is not recommended to change it.

  GradientNormTolerance
    This key indicates a limit value, leading to stop successfully the iterative
    optimization process when the norm of the gradient is under this limit. It
    is only used for unconstrained minimizers. The default is 1.e-5 and it is
    not recommended to change it.

  StoreInternalVariables
    This boolean key allows storing the default internal variables, mainly the
    current state during an iterative optimization process. Be careful, this can
    be a numerically costly choice in certain calculation cases. The default is
    False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "OMA", "OMB", "Innovation",
    "SigmaObs2", "MahalanobisConsistency"].
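Combining some of the keys above, a sketch of an "*AlgorithmParameters*"
dictionary for a variational case could look as follows (the values are purely
illustrative, not recommendations):

```python
# Illustrative "AlgorithmParameters" for a "3DVAR" case: "LBFGSB" is a
# constrained minimizer, so the "Bounds" entries are taken into account;
# None marks an unbounded side, as described above.
AlgorithmParameters = {
    "Minimizer"              : "LBFGSB",
    "Bounds"                 : [[0., 10.], [None, None], [-1., 1.]],
    "MaximumNumberOfSteps"   : 100,
    "CostDecrementTolerance" : 1.e-7,
    "StoreSupplementaryCalculations" : ["OMB", "OMA", "Innovation"],
}
```
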
**"NonLinearLeastSquares"**

  *Required commands*
  *"Background",
  "Observation", "ObservationError",
  "ObservationOperator"*

  Minimizer
    This key allows choosing the optimization minimizer. The default choice is
    "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
    minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained
    minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear
    unconstrained minimizer), "NCG" (Newton CG minimizer).

  Bounds
    This key allows defining upper and lower bounds for every control variable
    being optimized. Bounds have to be given as a list of lists of pairs of
    lower/upper bounds for each variable, with ``None`` every time there is no
    bound. The bounds can always be specified, but they are taken into account
    only by the constrained minimizers.

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for the
    iterative optimization. The default is 15000, which is very similar to no
    limit on iterations. It is then recommended to adapt this parameter to the
    needs of real problems. For some minimizers, the effective stopping step can
    be slightly different due to algorithm internal control requirements.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the iterative
    optimization process when the cost function decreases less than this
    tolerance at the last step. The default is 1.e-7, and it is recommended to
    adapt it to the needs of real problems.

  ProjectedGradientTolerance
    This key indicates a limit value, leading to stop successfully the iterative
    optimization process when all the components of the projected gradient are
    under this limit. It is only used for constrained minimizers. The default is
    -1, that is, the internal default of each minimizer (generally 1.e-5), and
    it is not recommended to change it.

  GradientNormTolerance
    This key indicates a limit value, leading to stop successfully the iterative
    optimization process when the norm of the gradient is under this limit. It
    is only used for unconstrained minimizers. The default is 1.e-5 and it is
    not recommended to change it.

  StoreInternalVariables
    This boolean key allows storing the default internal variables, mainly the
    current state during an iterative optimization process. Be careful, this can
    be a numerically costly choice in certain calculation cases. The default is
    False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].
**"EnsembleBlue"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator"*

  SetSeed
    This key allows giving an integer in order to fix the seed of the random
    generator used to generate the ensemble. A convenient value is for example
    1000. By default, the seed is left uninitialized, and so the default
    initialization from the computer is used.
**"KalmanFilter"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator",
  "EvolutionModel", "EvolutionError"*

  EstimationOf
    This key allows choosing the type of estimation to be performed. It can be
    either state-estimation, named "State", or parameter-estimation, named
    "Parameters". The default choice is "State".

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "Innovation"].
**"ExtendedKalmanFilter"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator",
  "EvolutionModel", "EvolutionError"*

  Bounds
    This key allows defining upper and lower bounds for every control variable
    being optimized. Bounds have to be given as a list of lists of pairs of
    lower/upper bounds for each variable, with extreme values every time there
    is no bound. The bounds can always be specified, but they are taken into
    account only by the constrained minimizers.

  ConstrainedBy
    This key allows defining the method used to take bounds into account. The
    possible methods are in the following list: ["EstimateProjection"].

  EstimationOf
    This key allows choosing the type of estimation to be performed. It can be
    either state-estimation, named "State", or parameter-estimation, named
    "Parameters". The default choice is "State".

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "Innovation"].
**"ParticleSwarmOptimization"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator"*

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for the
    iterative optimization. The default is 50, which is an arbitrary limit. It
    is then recommended to adapt this parameter to the needs of real problems.

  NumberOfInsects
    This key indicates the number of insects or particles in the swarm. The
    default is 100, which is a usual default for this algorithm.

  SwarmVelocity
    This key indicates the part of the insect velocity which is imposed by the
    swarm. It is a positive floating point value. The default value is 1.

  GroupRecallRate
    This key indicates the recall rate towards the best swarm insect. It is a
    floating point value between 0 and 1. The default value is 0.5.

  QualityCriterion
    This key indicates the quality criterion, which is minimized to find the
    optimal state estimate. The default is the usual data assimilation
    criterion named "DA", the augmented weighted least squares. The possible
    criteria have to be in the following list, where the equivalent names are
    indicated by "=": ["AugmentedPonderatedLeastSquares"="APLS"="DA",
    "PonderatedLeastSquares"="PLS", "LeastSquares"="LS"="L2",
    "AbsoluteValue"="L1", "MaximumError"="ME"].

  SetSeed
    This key allows giving an integer in order to fix the seed of the random
    generator used to generate the ensemble. A convenient value is for example
    1000. By default, the seed is left uninitialized, and so the default
    initialization from the computer is used.

  StoreInternalVariables
    This boolean key allows storing the default internal variables, mainly the
    current state during an iterative optimization process. Be careful, this can
    be a numerically costly choice in certain calculation cases. The default is
    False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].
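The swarm-specific keys above can be gathered into a parameters dictionary as
follows (a sketch with illustrative values; "SetSeed" makes successive runs
reproducible):

```python
# Illustrative "AlgorithmParameters" for "ParticleSwarmOptimization"
AlgorithmParameters = {
    "MaximumNumberOfSteps" : 50,    # default iteration limit
    "NumberOfInsects"      : 100,   # swarm size
    "SwarmVelocity"        : 1.,    # part of velocity imposed by the swarm
    "GroupRecallRate"      : 0.5,   # recall rate towards the best insect
    "QualityCriterion"     : "DA",  # augmented weighted least squares
    "SetSeed"              : 1000,  # fix the random generator seed
}
```
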
**"QuantileRegression"**

  *Required commands*
  *"Background", "Observation",
  "ObservationOperator"*

  Quantile
    This key allows defining the real value of the desired quantile, between 0
    and 1. The default is 0.5, corresponding to the median.

  Minimizer
    This key allows choosing the optimization minimizer. The default and only
    available choice is "MMQR" (Majorize-Minimize for Quantile Regression).

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for the
    iterative optimization. The default is 15000, which is very similar to no
    limit on iterations. It is then recommended to adapt this parameter to the
    needs of real problems.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the iterative
    optimization process when the cost function or the surrogate decreases less
    than this tolerance at the last step. The default is 1.e-6, and it is
    recommended to adapt it to the needs of real problems.

  StoreInternalVariables
    This boolean key allows storing the default internal variables, mainly the
    current state during an iterative optimization process. Be careful, this can
    be a numerically costly choice in certain calculation cases. The default is
    False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].
Reference description for ADAO checking cases
---------------------------------------------

List of commands and keywords for an ADAO checking case
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
.. index:: single: CHECKING_STUDY
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
.. index:: single: CheckingPoint
.. index:: single: Debug
.. index:: single: ObservationOperator
.. index:: single: Study_name
.. index:: single: Study_repertory
.. index:: single: UserDataInit
The second set of commands is related to the description of a checking case,
that is, a procedure to check required properties on information used somewhere
else by a calculation case. The terms are ordered alphabetically, except the
first, which describes the choice between calculation or checking. The different
commands are the following:
**CHECKING_STUDY**
    *Required command*. This is the general command describing the checking
    case. It hierarchically contains all the other commands.

**Algorithm**
    *Required command*. This is a string indicating the chosen checking
    algorithm. The choices are limited and available through the GUI. There
    exists for example "FunctionTest", "AdjointTest"... See the list of
    algorithms and associated parameters in the subsection `Options and
    required commands for checking algorithms`_.

**AlgorithmParameters**
    *Optional command*. This command allows one to add optional parameters to
    control the checking algorithm. It is defined as a "*Dict*" type object,
    that is, given as a script. See the list of algorithms and associated
    parameters in the subsection `Options and required commands for checking
    algorithms`_.
**CheckingPoint**
    *Required command*. This indicates the vector used, previously noted as
    :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object, that is,
    given either as a string or as a script.

**Debug**
    *Required command*. This defines the level of trace and intermediary debug
    information. The choices are limited between 0 (for False) and 1 (for
    True).

**ObservationOperator**
    *Required command*. This indicates the observation operator, previously
    noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
    into results :math:`\mathbf{y}` to be compared to observations
    :math:`\mathbf{y}^o`. It is defined as a "*Function*" type object, that is,
    given as a script. Different functional forms can be used, as described in
    the subsection `Requirements for functions describing an operator`_.

**Study_name**
    *Required command*. This is an open string to describe the study by a name
    or a sentence.

**Study_repertory**
    *Optional command*. If available, this directory is used to find all the
    script files that can be used to define some other commands by scripts.

**UserDataInit**
    *Optional command*. This command allows one to initialize some parameters or
    data automatically before checking algorithm processing.
Options and required commands for checking algorithms
+++++++++++++++++++++++++++++++++++++++++++++++++++++
.. index:: single: AdjointTest
.. index:: single: FunctionTest
.. index:: single: GradientTest

.. index:: single: AlgorithmParameters
.. index:: single: AmplitudeOfInitialDirection
.. index:: single: EpsilonMinimumExponent
.. index:: single: InitialDirection
.. index:: single: ResiduFormula
.. index:: single: SetSeed
We recall that each algorithm can be controlled using some generic or specific
options given through the "*AlgorithmParameters*" optional command, as follows
for example::

    AlgorithmParameters = {
        "AmplitudeOfInitialDirection" : 1,
        "EpsilonMinimumExponent" : -8,
        }
If an option is specified for an algorithm that does not support it, the option
is simply left unused. The meaning of the acronyms or particular names can be
found in the :ref:`genindex` or the :ref:`section_glossary`. In addition, for
each algorithm, the required commands/keywords are given; they are described in
`List of commands and keywords for an ADAO checking case`_.
**"AdjointTest"**

  *Required commands*
  *"CheckingPoint",
  "ObservationOperator"*

  AmplitudeOfInitialDirection
    This key indicates the scaling of the initial perturbation, built as a
    vector used for the directional derivative around the nominal checking
    point. The default is 1, which means no scaling.

  EpsilonMinimumExponent
    This key indicates the minimal exponent value of the power of 10 coefficient
    to be used to decrease the increment multiplier. The default is -8, and it
    has to be between 0 and -20. For example, its default value leads to
    calculating the residue of the scalar product formula with a fixed increment
    multiplied from 1.e0 to 1.e-8.

  InitialDirection
    This key indicates the vector direction used for the directional derivative
    around the nominal checking point. It has to be a vector. If not specified,
    this direction defaults to a random perturbation around zero of the same
    vector size as the checking point.

  SetSeed
    This key allows giving an integer in order to fix the seed of the random
    generator used to generate the ensemble. A convenient value is for example
    1000. By default, the seed is left uninitialized, and so the default
    initialization from the computer is used.
**"FunctionTest"**

  *Required commands*
  *"CheckingPoint",
  "ObservationOperator"*
**"GradientTest"**

  *Required commands*
  *"CheckingPoint",
  "ObservationOperator"*

  AmplitudeOfInitialDirection
    This key indicates the scaling of the initial perturbation, built as a
    vector used for the directional derivative around the nominal checking
    point. The default is 1, which means no scaling.

  EpsilonMinimumExponent
    This key indicates the minimal exponent value of the power of 10 coefficient
    to be used to decrease the increment multiplier. The default is -8, and it
    has to be between 0 and -20. For example, its default value leads to
    calculating the residue of the scalar product formula with a fixed increment
    multiplied from 1.e0 to 1.e-8.

  InitialDirection
    This key indicates the vector direction used for the directional derivative
    around the nominal checking point. It has to be a vector. If not specified,
    this direction defaults to a random perturbation around zero of the same
    vector size as the checking point.

  ResiduFormula
    This key indicates the residue formula that has to be used for the test. The
    default choice is "Taylor", and the possible ones are "Taylor" (residue of
    the Taylor development of the operator, which has to decrease with the power
    of 2 in perturbation) and "Norm" (residue obtained by taking the norm of the
    Taylor development at zero order approximation, which approximates the
    gradient, and which has to remain constant).

  SetSeed
    This key allows giving an integer in order to fix the seed of the random
    generator used to generate the ensemble. A convenient value is for example
    1000. By default, the seed is left uninitialized, and so the default
    initialization from the computer is used.
Requirements for functions describing an operator
-------------------------------------------------
The operators for observation and evolution are required to implement the data
assimilation or optimization procedures. They include the physical simulation by
numerical calculations, but also the filtering and restriction needed to compare
the simulation to the observation. The evolution operator is considered here in
its incremental form, representing the transition between two successive states,
and is then similar to the observation operator.
Schematically, an operator has to give an output solution for given input
parameters. Part of the input parameters can be modified during the optimization
procedure. So the mathematical representation of such a process is a function.
It was briefly described in the section :ref:`section_theory` and is generalized
here by the relation:
.. math:: \mathbf{y} = O( \mathbf{x} )

between the pseudo-observations :math:`\mathbf{y}` and the parameters
:math:`\mathbf{x}` using the observation or evolution operator :math:`O`. The
same functional representation can be used for the linear tangent model
:math:`\mathbf{O}` of :math:`O` and its adjoint :math:`\mathbf{O}^*`, also
required by some data assimilation or optimization algorithms.

Then, **to describe completely an operator, the user has only to provide a
function that fully and only realizes the functional operation**.

This function is usually given as a script that can be executed in a YACS node.
This script can indifferently launch external codes or use internal SALOME
calls and methods. If the algorithm requires the 3 aspects of the operator
(direct form, tangent form and adjoint form), the user has to give the 3
functions or to approximate them.

There are 3 practical methods for the user to provide the operator functional
representation:

First functional form: using "*ScriptWithOneFunction*"
++++++++++++++++++++++++++++++++++++++++++++++++++++++

.. index:: single: ScriptWithOneFunction
.. index:: single: DirectOperator
.. index:: single: DifferentialIncrement
.. index:: single: CenteredFiniteDifference

The first one consists in providing only one potentially non-linear function,
and in approximating the tangent and the adjoint operators. This is done by
using the keyword "*ScriptWithOneFunction*" for the description of the chosen
operator in the ADAO GUI. The user has to provide the function in a script,
with the mandatory name "*DirectOperator*". For example, the script can follow
the template::

    def DirectOperator( X ):
        """ Direct non-linear simulation operator """
        ...
        return something like Y

In this case, the user can also provide a value for the differential increment,
using the GUI keyword "*DifferentialIncrement*", which has a default value of
1%. This coefficient will be used in the finite difference approximation to
build the tangent and adjoint operators. The order of the finite difference
approximation can also be chosen through the GUI, using the keyword
"*CenteredFiniteDifference*", with 0 for an uncentered scheme of first order,
and with 1 for a centered scheme of second order (at twice the first order
computational cost). The keyword has a default value of 0.

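To fix ideas, the role of these two keywords can be sketched by the following finite difference approximation of the Jacobian (an illustrative sketch of the two schemes, not the ADAO implementation itself; the operator ``F`` is supplied by the user):

```python
import numpy as np

def finite_difference_jacobian(F, X, increment=0.01, centered=False):
    """Jacobian of F at X by finite differences: the tangent operator is
    then J @ dX and the adjoint is J.T @ Y. The increment plays the role
    of DifferentialIncrement, the flag the role of
    CenteredFiniteDifference (0/False or 1/True)."""
    X  = np.ravel(np.asarray(X, dtype=float))
    FX = np.ravel(F(X))
    cols = []
    for i in range(X.size):
        # relative perturbation of component i (absolute if it is zero)
        dXi = increment * (abs(X[i]) if X[i] != 0. else 1.)
        Xp = X.copy(); Xp[i] += dXi
        if centered:                  # second order, twice the cost
            Xm = X.copy(); Xm[i] -= dXi
            cols.append((np.ravel(F(Xp)) - np.ravel(F(Xm))) / (2.*dXi))
        else:                         # first order, uncentered
            cols.append((np.ravel(F(Xp)) - FX) / dXi)
    return np.column_stack(cols)
```

The uncentered scheme calls the operator once per component, the centered one twice, which is the computational cost trade-off mentioned above.
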
This first operator definition easily allows testing the functional form before
its use in an ADAO case, greatly reducing the complexity of implementation.

**Important warning:** the name "*DirectOperator*" is mandatory, and the type of
the X argument can be either a Python list, a numpy array or a numpy 1D-matrix.
The user has to treat these cases in the script.

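For example, the three possible input types can be handled by normalizing the argument at the top of the function (the simulation body below is purely hypothetical and stands for the real physical computation):

```python
import numpy as np

def DirectOperator( X ):
    """ Direct non-linear simulation operator (sketch): X may arrive as a
        Python list, a numpy array or a numpy 1D-matrix, so it is first
        flattened to a plain 1D array before use """
    x = np.ravel(np.array(X, dtype=float))   # handles all three input types
    # hypothetical simulation, to be replaced by the real computation
    return np.array([x[0]**2, x[0]*x[1]])
```
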
Second functional form: using "*ScriptWithFunctions*"
+++++++++++++++++++++++++++++++++++++++++++++++++++++

.. index:: single: ScriptWithFunctions
.. index:: single: DirectOperator
.. index:: single: TangentOperator
.. index:: single: AdjointOperator

The second one consists in providing directly the three associated operators
:math:`O`, :math:`\mathbf{O}` and :math:`\mathbf{O}^*`. This is done by using
the keyword "*ScriptWithFunctions*" for the description of the chosen operator
in the ADAO GUI. The user has to provide three functions in one script, with
the three mandatory names "*DirectOperator*", "*TangentOperator*" and
"*AdjointOperator*". For example, the script can follow the template::

    def DirectOperator( X ):
        """ Direct non-linear simulation operator """
        ...
        return something like Y

    def TangentOperator( (X, dX) ):
        """ Tangent linear operator, around X, applied to dX """
        ...
        return something like Y

    def AdjointOperator( (X, Y) ):
        """ Adjoint operator, around X, applied to Y """
        ...
        return something like X

Again, this second operator definition easily allows testing the functional
forms before their use in an ADAO case, reducing the complexity of
implementation.

**Important warning:** the names "*DirectOperator*", "*TangentOperator*" and
"*AdjointOperator*" are mandatory, and the type of the X, Y, dX arguments can
be either a Python list, a numpy array or a numpy 1D-matrix. The user has to
treat these cases in the script.

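As a minimal illustration, consider a hypothetical linear observation operator: its tangent is then the operator itself and its adjoint is its transpose. Note that the template above uses the Python 2 tuple-argument signature ``def TangentOperator( (X, dX) )``; with Python 3 the pair has to be unpacked inside the function, as done here:

```python
import numpy as np

# hypothetical linear observation operator, for illustration only
H = np.array([[1., 0., 0.],
              [0., 1., 1.]])

def DirectOperator( X ):
    """ Direct simulation operator """
    return H @ np.ravel(X)

def TangentOperator( paire ):
    """ Tangent linear operator, around X, applied to dX """
    X, dX = paire
    return H @ np.ravel(dX)        # linear case: does not depend on X

def AdjointOperator( paire ):
    """ Adjoint operator, around X, applied to Y """
    X, Y = paire
    return H.T @ np.ravel(Y)
```

The adjoint can then be checked through the scalar product identity :math:`<\mathbf{O}\,dX, Y> = <dX, \mathbf{O}^*\,Y>` for any :math:`dX` and :math:`Y`.
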
Third functional form: using "*ScriptWithSwitch*"
+++++++++++++++++++++++++++++++++++++++++++++++++

.. index:: single: ScriptWithSwitch
.. index:: single: DirectOperator
.. index:: single: TangentOperator
.. index:: single: AdjointOperator

This third form gives more possibilities to control the execution of the three
functions representing the operator, allowing advanced usage and control over
each execution of the simulation code. This is done by using the keyword
"*ScriptWithSwitch*" for the description of the chosen operator in the ADAO
GUI. The user has to provide a switch in one script to control the execution
of the direct, tangent and adjoint forms of the simulation code. The user can
then, for example, use other approximations for the tangent and adjoint codes,
or introduce more complexity in the argument treatment of the functions. But
it will be far more complicated to implement and debug.

**It is recommended not to use this third functional form without a solid
numerical or physical reason.**

If, however, you want to use this third form, we recommend using the following
template for the switch. It requires an external script or code named
"*Physical_simulation_functions.py*", containing three functions named
"*DirectOperator*", "*TangentOperator*" and "*AdjointOperator*" as previously.
Here is the switch template::

    import Physical_simulation_functions
    import numpy, logging

    method = ""
    for param in computation["specificParameters"]:
        if param["name"] == "method":
            method = param["value"]
    if method not in ["Direct", "Tangent", "Adjoint"]:
        raise ValueError("No valid computation method is given")
    logging.info("Found method is '%s'"%method)

    logging.info("Loading operator functions")
    Function = Physical_simulation_functions.DirectOperator
    Tangent  = Physical_simulation_functions.TangentOperator
    Adjoint  = Physical_simulation_functions.AdjointOperator

    logging.info("Executing the possible computations")
    if method == "Direct":
        logging.info("Direct computation")
        Xcurrent = computation["inputValues"][0][0][0]
        data = Function(numpy.matrix( Xcurrent ).T)
    if method == "Tangent":
        logging.info("Tangent computation")
        Xcurrent  = computation["inputValues"][0][0][0]
        dXcurrent = computation["inputValues"][0][0][1]
        data = Tangent((numpy.matrix(Xcurrent).T, numpy.matrix(dXcurrent).T))
    if method == "Adjoint":
        logging.info("Adjoint computation")
        Xcurrent = computation["inputValues"][0][0][0]
        Ycurrent = computation["inputValues"][0][0][1]
        data = Adjoint((numpy.matrix(Xcurrent).T, numpy.matrix(Ycurrent).T))

    logging.info("Formatting the output")
    it = numpy.ravel(data)
    outputValues = [[[[]]]]
    for val in it:
        outputValues[0][0][0].append(val)

    result = {}
    result["outputValues"]        = outputValues
    result["specificOutputInfos"] = []
    result["returnCode"]          = 0
    result["errorMessage"]        = ""

Any of the required modifications can be made starting from this template.

Special case of controlled evolution operator
+++++++++++++++++++++++++++++++++++++++++++++

In some cases, the evolution or the observation operator is required to be
controlled by an external input control, given a priori. In this case, the
generic form of the incremental evolution model is slightly modified as
follows:

.. math:: \mathbf{y} = O( \mathbf{x}, \mathbf{u})

where :math:`\mathbf{u}` is the control over one state increment. In this case,
the direct operator has to be applied to a pair of variables :math:`(X,U)`.
Schematically, the operator has to be set as::

    def DirectOperator( (X, U) ):
        """ Direct non-linear simulation operator """
        ...
        return something like X(n+1) or Y(n+1)

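In Python 3, where the tuple-argument signature above is no longer accepted, the pair has to be unpacked inside the function; a minimal sketch with hypothetical linear controlled dynamics:

```python
import numpy as np

def DirectOperator( paire ):
    """ Direct simulation operator with control (sketch): the argument is
        the pair (X, U), U being the control over the state increment """
    X, U = paire
    x, u = np.ravel(X), np.ravel(U)
    # hypothetical controlled transition from X(n) to X(n+1)
    return 0.9 * x + u
```
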
The tangent and adjoint operators have the same signature as previously, noting
that the derivatives have to be taken only partially with respect to
:math:`\mathbf{x}`. In such a case with explicit control, only the second
functional form (using "*ScriptWithFunctions*") and the third functional form
(using "*ScriptWithSwitch*") can be used.