================================================================================
Reference description of the ADAO commands and keywords
================================================================================

This section presents the reference description of the ADAO commands and
keywords available through the GUI or through scripts.

Each command or keyword to be defined through the ADAO GUI has some properties.
The first property is to be *required*, *optional* or only factual, describing a
type of input. The second property is to be an "open" variable with a fixed type
but with any value allowed by the type, or a "restricted" variable, limited to
some specified values. The EFICAS editor GUI having built-in validating
capacities, the properties of the commands or keywords given through this GUI
are automatically correct.

The mathematical notations used afterward are explained in the section
:ref:`section_theory`.

Examples of using these commands are available in the section
:ref:`section_examples` and in the example files installed with the ADAO module.
List of possible input types
----------------------------

.. index:: single: Dict
.. index:: single: Function
.. index:: single: Matrix
.. index:: single: String
.. index:: single: Script
.. index:: single: Vector
Each ADAO variable has a pseudo-type to help in filling it and in validation.
The different pseudo-types are:

**Dict**
  This indicates a variable that has to be filled by a dictionary, usually
  given as a script.

**Function**
  This indicates a variable that has to be filled by a function, usually given
  as a script or a component method.

**Matrix**
  This indicates a variable that has to be filled by a matrix, usually given
  either as a string or as a script.

**String**
  This indicates a string giving a literal representation of a matrix, a
  vector or a series of vectors, such as "1 2 ; 3 4" for a square 2x2 matrix.

**Script**
  This indicates a script given as an external file. It can be described by a
  full absolute path name or only by the file name without path.

**Vector**
  This indicates a variable that has to be filled by a vector, usually given
  either as a string or as a script.

**VectorSerie**
  This indicates a variable that has to be filled by a list of vectors,
  usually given either as a string or as a script.
When a command or keyword can be filled by a script file name, the script has to
contain a variable or a method that has the same name as the one to be filled.
In other words, when importing the script in a YACS Python node, it must create
a variable of the proper name in the current namespace.
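For illustration, a "*String*" literal such as "1 2 ; 3 4" follows the same
convention as Numpy matrix literals, so its meaning can be checked directly in
Python (a sketch for understanding the notation, not part of ADAO itself)::

```python
import numpy

# The "String" literal "1 2 ; 3 4" denotes a square 2x2 matrix:
# rows are separated by ";", values inside a row by spaces.
M = numpy.matrix("1 2 ; 3 4")

# A vector literal uses only spaces (or commas), e.g. "1 2 3".
V = numpy.matrix("1 2 3")

print(M.shape)  # (2, 2)
print(V.shape)  # (1, 3)
```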
List of commands and keywords for an ADAO calculation case
----------------------------------------------------------

.. index:: single: ASSIMILATION_STUDY
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: ControlInput
.. index:: single: Debug
.. index:: single: EvolutionError
.. index:: single: EvolutionModel
.. index:: single: InputVariables
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Observers
.. index:: single: OutputVariables
.. index:: single: Study_name
.. index:: single: Study_repertory
.. index:: single: UserDataInit
.. index:: single: UserPostAnalysis
The first set of commands is related to the description of a calculation case,
that is a *Data Assimilation* procedure or an *Optimization* procedure. The
terms are ordered alphabetically, except the first, which describes the choice
between calculation or checking. The different commands are the following:
**ASSIMILATION_STUDY**
  *Required command*. This is the general command describing the data
  assimilation or optimization case. It hierarchically contains all the other
  commands.

**Algorithm**
  *Required command*. This is a string to indicate the data assimilation or
  optimization algorithm chosen. The choices are limited and available through
  the GUI. There exist for example "3DVAR", "Blue"... See below the list of
  algorithms and associated parameters in the following subsection `Options
  and required commands for algorithms`_.

**AlgorithmParameters**
  *Optional command*. This command allows one to add some optional parameters
  to control the data assimilation or optimization algorithm. It is defined as
  a "*Dict*" type object, that is, given as a script. See below the list of
  algorithms and associated parameters in the following subsection `Options
  and required commands for algorithms`_.

**Background**
  *Required command*. This indicates the background or initial vector used,
  previously noted as :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type
  object, that is, given either as a string or as a script.

**BackgroundError**
  *Required command*. This indicates the background error covariance matrix,
  previously noted as :math:`\mathbf{B}`. It is defined as a "*Matrix*" type
  object, that is, given either as a string or as a script.

**ControlInput**
  *Optional command*. This indicates the control vector used to force the
  evolution model at each step, usually noted as :math:`\mathbf{U}`. It is
  defined as a "*Vector*" or a "*VectorSerie*" type object, that is, given
  either as a string or as a script. When there is no control, it has to be a
  void string "".

**Debug**
  *Required command*. This defines the level of trace and intermediary debug
  information. The choices are limited between 0 (for False) and 1 (for
  True).

**EvolutionError**
  *Optional command*. This indicates the evolution error covariance matrix,
  usually noted as :math:`\mathbf{Q}`. It is defined as a "*Matrix*" type
  object, that is, given either as a string or as a script.

**EvolutionModel**
  *Optional command*. This indicates the evolution model operator, usually
  noted :math:`M`, which describes a step of evolution. It is defined as a
  "*Function*" type object, that is, given as a script. Different functional
  forms can be used, as described in the following subsection `Requirements
  for functions describing an operator`_. If there is some control :math:`U`
  included in the evolution model, the operator has to be applied to a pair
  :math:`(X,U)`.

**InputVariables**
  *Optional command*. This command allows one to indicate the name and size of
  physical variables that are bundled together in the control vector. This
  information is dedicated to data processed inside an algorithm.

**Observation**
  *Required command*. This indicates the observation vector used for data
  assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
  is defined as a "*Vector*" or a "*VectorSerie*" type object, that is, given
  either as a string or as a script.

**ObservationError**
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
  object, that is, given either as a string or as a script.

**ObservationOperator**
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
  to results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. It is defined as a "*Function*" type object, that is,
  given as a script. Different functional forms can be used, as described in
  the following subsection `Requirements for functions describing an
  operator`_. If there is some control :math:`U` included in the observation,
  the operator has to be applied to a pair :math:`(X,U)`.

**Observers**
  *Optional command*. This command allows one to set internal observers, that
  are functions linked with a particular variable, which will be executed each
  time this variable is modified. It is a convenient way to monitor variables
  of interest during the data assimilation or optimization process, by
  printing or plotting them.

**OutputVariables**
  *Optional command*. This command allows one to indicate the name and size of
  physical variables that are bundled together in the output observation
  vector. This information is dedicated to data processed inside an algorithm.

**Study_name**
  *Required command*. This is an open string to describe the study by a name
  or a sentence.

**Study_repertory**
  *Optional command*. If available, this directory is used to find all the
  script files that can be used to define some other commands by scripts.

**UserDataInit**
  *Optional command*. This command allows one to initialize some parameters or
  data automatically before data assimilation algorithm processing.

**UserPostAnalysis**
  *Optional command*. This command allows one to process some parameters or
  data automatically after data assimilation algorithm processing. It is
  defined as a script or a string, allowing one to put post-processing code
  directly inside the ADAO case.
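As stated above, a keyword given as a "*Script*" must define, in the script
file, a variable with exactly the same name as the keyword. A minimal sketch of
such a script for the "*Background*" keyword (the file name and the values are
hypothetical) could be::

```python
# Script "background_script.py" (hypothetical name): when imported in a
# YACS Python node, it must create a variable named exactly "Background"
# in the current namespace.
import numpy

# Background vector x^b, here of size 3 (illustrative values)
Background = numpy.array([0., 1., 2.])
```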
List of commands and keywords for an ADAO checking case
-------------------------------------------------------

.. index:: single: CHECKING_STUDY
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
.. index:: single: CheckingPoint
.. index:: single: Debug
.. index:: single: ObservationOperator
.. index:: single: Study_name
.. index:: single: Study_repertory
.. index:: single: UserDataInit
The second set of commands is related to the description of a checking case,
that is a procedure to check required properties on information, used somewhere
else by a calculation case. The terms are ordered alphabetically, except the
first, which describes the choice between calculation or checking. The
different commands are the following:

**CHECKING_STUDY**
  *Required command*. This is the general command describing the checking
  case. It hierarchically contains all the other commands.

**Algorithm**
  *Required command*. This is a string to indicate the data assimilation or
  optimization algorithm chosen. The choices are limited and available through
  the GUI. There exist for example "3DVAR", "Blue"... See below the list of
  algorithms and associated parameters in the following subsection `Options
  and required commands for algorithms`_.

**AlgorithmParameters**
  *Optional command*. This command allows one to add some optional parameters
  to control the data assimilation or optimization algorithm. It is defined as
  a "*Dict*" type object, that is, given as a script. See below the list of
  algorithms and associated parameters in the following subsection `Options
  and required commands for algorithms`_.

**CheckingPoint**
  *Required command*. This indicates the vector used, previously noted as
  :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object, that is,
  given either as a string or as a script.

**Debug**
  *Required command*. This defines the level of trace and intermediary debug
  information. The choices are limited between 0 (for False) and 1 (for
  True).

**ObservationOperator**
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
  to results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. It is defined as a "*Function*" type object, that is,
  given as a script. Different functional forms can be used, as described in
  the following subsection `Requirements for functions describing an
  operator`_.

**Study_name**
  *Required command*. This is an open string to describe the study by a name
  or a sentence.

**Study_repertory**
  *Optional command*. If available, this directory is used to find all the
  script files that can be used to define some other commands by scripts.

**UserDataInit**
  *Optional command*. This command allows one to initialize some parameters or
  data automatically before data assimilation algorithm processing.
Options and required commands for algorithms
--------------------------------------------

.. index:: single: 3DVAR
.. index:: single: Blue
.. index:: single: EnsembleBlue
.. index:: single: KalmanFilter
.. index:: single: ExtendedKalmanFilter
.. index:: single: LinearLeastSquares
.. index:: single: NonLinearLeastSquares
.. index:: single: ParticleSwarmOptimization
.. index:: single: QuantileRegression

.. index:: single: AlgorithmParameters
.. index:: single: Bounds
.. index:: single: CostDecrementTolerance
.. index:: single: GradientNormTolerance
.. index:: single: GroupRecallRate
.. index:: single: MaximumNumberOfSteps
.. index:: single: Minimizer
.. index:: single: NumberOfInsects
.. index:: single: ProjectedGradientTolerance
.. index:: single: QualityCriterion
.. index:: single: Quantile
.. index:: single: SetSeed
.. index:: single: StoreInternalVariables
.. index:: single: StoreSupplementaryCalculations
.. index:: single: SwarmVelocity
Each algorithm can be controlled using some generic or specific options given
through the "*AlgorithmParameters*" optional command, as follows for example::

    AlgorithmParameters = {
        "Minimizer" : "LBFGSB",
        "MaximumNumberOfSteps" : 25,
        "StoreSupplementaryCalculations" : ["APosterioriCovariance","OMA"],
        }

This section describes the available options algorithm by algorithm. If an
option is specified for an algorithm that doesn't support it, the option is
simply left unused. The meaning of the acronyms or particular names can be found
in the :ref:`genindex` or the :ref:`section_glossary`. In addition, for each
algorithm, the required commands/keywords are given, being described in `List of
commands and keywords for an ADAO calculation case`_.
**"Blue"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator"*

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations. The default is a void list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "OMA", "OMB", "Innovation",
    "SigmaBck2", "SigmaObs2", "MahalanobisConsistency"].
**"LinearLeastSquares"**

  *Required commands*
  *"Observation", "ObservationError",
  "ObservationOperator"*

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations. The default is a void list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["OMA"].
**"3DVAR"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator"*

  Minimizer
    This key allows one to choose the optimization minimizer. The default
    choice is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear
    constrained minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear
    constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
    (nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer).

  Bounds
    This key allows one to define upper and lower bounds for every control
    variable being optimized. Bounds can be given by a list of lists of pairs
    of lower/upper bounds for each variable, with possibly ``None`` every time
    there is no bound. The bounds can always be specified, but they are taken
    into account only by the constrained minimizers.

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for iterative
    optimization. The default is 15000, which is very similar to no limit on
    iterations. It is then recommended to adapt this parameter to the needs of
    real problems. For some minimizers, the effective stopping step can be
    slightly different due to algorithm internal control requirements.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the cost function decreases less than
    this tolerance at the last step. The default is 1.e-7, and it is
    recommended to adapt it to the needs of real problems.

  ProjectedGradientTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when all the components of the projected
    gradient are under this limit. It is only used for constrained minimizers.
    The default is -1, that is the internal default of each minimizer
    (generally 1.e-5), and it is not recommended to change it.

  GradientNormTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the norm of the gradient is under this
    limit. It is only used for non-constrained minimizers. The default is
    1.e-5 and it is not recommended to change it.

  StoreInternalVariables
    This Boolean key allows storing default internal variables, mainly the
    current state during the iterative optimization process. Be careful, this
    can be a numerically costly choice in certain calculation cases. The
    default is False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations. The default is a void list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "OMA", "OMB", "Innovation",
    "SigmaObs2", "MahalanobisConsistency"].
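As an illustration of the "*Bounds*" format described above, a list of
lower/upper pairs with ``None`` for a missing bound can be written in a
"*Dict*" type script as follows (all the values are hypothetical)::

```python
# Bounds for 3 control variables: the first is bounded on both sides,
# the second is free, the third has only a lower bound.  Only the
# constrained minimizers (e.g. "LBFGSB", "TNC") use this key.
AlgorithmParameters = {
    "Minimizer"            : "LBFGSB",
    "MaximumNumberOfSteps" : 100,
    "Bounds"               : [[0., 10.], [None, None], [-1., None]],
}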
**"NonLinearLeastSquares"**

  *Required commands*
  *"Background",
  "Observation", "ObservationError",
  "ObservationOperator"*

  Minimizer
    This key allows one to choose the optimization minimizer. The default
    choice is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear
    constrained minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear
    constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
    (nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer).

  Bounds
    This key allows one to define upper and lower bounds for every control
    variable being optimized. Bounds can be given by a list of lists of pairs
    of lower/upper bounds for each variable, with possibly ``None`` every time
    there is no bound. The bounds can always be specified, but they are taken
    into account only by the constrained minimizers.

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for iterative
    optimization. The default is 15000, which is very similar to no limit on
    iterations. It is then recommended to adapt this parameter to the needs of
    real problems. For some minimizers, the effective stopping step can be
    slightly different due to algorithm internal control requirements.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the cost function decreases less than
    this tolerance at the last step. The default is 1.e-7, and it is
    recommended to adapt it to the needs of real problems.

  ProjectedGradientTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when all the components of the projected
    gradient are under this limit. It is only used for constrained minimizers.
    The default is -1, that is the internal default of each minimizer
    (generally 1.e-5), and it is not recommended to change it.

  GradientNormTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the norm of the gradient is under this
    limit. It is only used for non-constrained minimizers. The default is
    1.e-5 and it is not recommended to change it.

  StoreInternalVariables
    This Boolean key allows storing default internal variables, mainly the
    current state during the iterative optimization process. Be careful, this
    can be a numerically costly choice in certain calculation cases. The
    default is False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations. The default is a void list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].
**"EnsembleBlue"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator"*

  SetSeed
    This key allows one to give an integer in order to fix the seed of the
    random generator used to generate the ensemble. A convenient value is for
    example 1000. By default, the seed is left uninitialized, and so uses the
    default initialization of the computer.
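The underlying idea of "*SetSeed*" is simply that of fixing the seed of the
random generator; the sketch below illustrates it with Numpy directly, and is
not ADAO code::

```python
import numpy

# Fixing the seed (e.g. the convenient value 1000) makes the generated
# ensemble perturbations reproducible between two runs.
numpy.random.seed(1000)
draw1 = numpy.random.standard_normal(5)

numpy.random.seed(1000)
draw2 = numpy.random.standard_normal(5)

# Identical draws: the stochastic algorithm becomes repeatable.
print(numpy.allclose(draw1, draw2))  # True
```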
**"KalmanFilter"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator",
  "EvolutionModel", "EvolutionError",
  "ControlInput"*

  EstimationOf
    This key allows one to choose the type of estimation to be performed. It
    can be either state estimation, named "State", or parameter estimation,
    named "Parameters". The default choice is "State".

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations. The default is a void list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "Innovation"].
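To fix ideas on the cycle the algorithm iterates, here is a schematic scalar
Kalman filter step, written independently of ADAO with illustrative values
(ADAO handles the full matrix case internally)::

```python
# One forecast/analysis cycle for a scalar state, with identity
# observation operator H = 1 and identity evolution operator M = 1.
xb, B = 0.0, 1.0   # background state x^b and its error variance B
yo, R = 1.0, 1.0   # observation y^o and its error variance R
Q = 0.1            # evolution error variance Q

# Forecast step: x^f = M x^a, P^f = M P M' + Q
xf = xb
Pf = B + Q

# Analysis step: Kalman gain, state update, variance update
K  = Pf / (Pf + R)
xa = xf + K * (yo - xf)
Pa = (1.0 - K) * Pf

print(xa)       # the analysis lies between xb and yo
print(Pa < Pf)  # the analysis variance decreases: True
```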
**"ExtendedKalmanFilter"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator",
  "EvolutionModel", "EvolutionError",
  "ControlInput"*

  Bounds
    This key allows one to define upper and lower bounds for every control
    variable being optimized. Bounds can be given by a list of lists of pairs
    of lower/upper bounds for each variable, with extreme values every time
    there is no bound. The bounds can always be specified, but they are taken
    into account only by the constrained minimizers.

  ConstrainedBy
    This key allows one to define the method to take bounds into account. The
    possible methods are in the following list: ["EstimateProjection"].

  EstimationOf
    This key allows one to choose the type of estimation to be performed. It
    can be either state estimation, named "State", or parameter estimation,
    named "Parameters". The default choice is "State".

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations. The default is a void list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "Innovation"].
**"ParticleSwarmOptimization"**

  *Required commands*
  *"Background", "BackgroundError",
  "Observation", "ObservationError",
  "ObservationOperator"*

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for iterative
    optimization. The default is 50, which is an arbitrary limit. It is then
    recommended to adapt this parameter to the needs of real problems.

  NumberOfInsects
    This key indicates the number of insects or particles in the swarm. The
    default is 100, which is a usual default for this algorithm.

  SwarmVelocity
    This key indicates the part of the insect velocity which is imposed by the
    swarm. It is a positive floating point value. The default value is 1.

  GroupRecallRate
    This key indicates the recall rate at the best swarm insect. It is a
    floating point value between 0 and 1. The default value is 0.5.

  QualityCriterion
    This key indicates the quality criterion, minimized to find the optimal
    state estimate. The default is the usual data assimilation criterion named
    "DA", the augmented weighted least squares. The possible criteria have to
    be in the following list, where the equivalent names are indicated by "=":
    ["AugmentedPonderatedLeastSquares"="APLS"="DA",
    "PonderatedLeastSquares"="PLS", "LeastSquares"="LS"="L2",
    "AbsoluteValue"="L1", "MaximumError"="ME"].

  SetSeed
    This key allows one to give an integer in order to fix the seed of the
    random generator used to generate the ensemble. A convenient value is for
    example 1000. By default, the seed is left uninitialized, and so uses the
    default initialization of the computer.

  StoreInternalVariables
    This Boolean key allows storing default internal variables, mainly the
    current state during the iterative optimization process. Be careful, this
    can be a numerically costly choice in certain calculation cases. The
    default is False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations. The default is a void list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].
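For intuition only, the following is a generic particle swarm iteration on a
toy quadratic cost. It is not the ADAO implementation: the inertia and recall
coefficients below are illustrative stand-ins for keys such as
"*SwarmVelocity*" and "*GroupRecallRate*"::

```python
import numpy

numpy.random.seed(1000)          # cf. "SetSeed", for reproducibility
cost = lambda x: (x - 3.0) ** 2  # toy quality criterion to be minimized

n_insects, n_steps = 100, 50     # cf. "NumberOfInsects", "MaximumNumberOfSteps"
x = numpy.random.uniform(-10.0, 10.0, n_insects)  # initial positions
v = numpy.zeros(n_insects)                        # initial velocities
best_x = x.copy()                         # best position seen by each insect
swarm_best = x[numpy.argmin(cost(x))]     # best position of the whole swarm
initial_best_cost = cost(swarm_best)

for _ in range(n_steps):
    r1 = numpy.random.rand(n_insects)
    r2 = numpy.random.rand(n_insects)
    # Velocity update: inertia term + recall toward personal and swarm bests
    v = 0.5 * v + r1 * (best_x - x) + 0.5 * r2 * (swarm_best - x)
    x = x + v
    better = cost(x) < cost(best_x)
    best_x[better] = x[better]
    swarm_best = best_x[numpy.argmin(cost(best_x))]

print(swarm_best)  # close to the minimizer x = 3
```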
**"QuantileRegression"**

  *Required commands*
  *"Background", "Observation",
  "ObservationOperator"*

  Quantile
    This key allows one to define the real value of the desired quantile,
    between 0 and 1. The default is 0.5, corresponding to the median.

  Minimizer
    This key allows one to choose the optimization minimizer. The default
    choice and only available choice is "MMQR" (Majorize-Minimize for Quantile
    Regression).

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for iterative
    optimization. The default is 15000, which is very similar to no limit on
    iterations. It is then recommended to adapt this parameter to the needs of
    real problems.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the cost function or the surrogate
    decreases less than this tolerance at the last step. The default is 1.e-6,
    and it is recommended to adapt it to the needs of real problems.

  StoreInternalVariables
    This Boolean key allows storing default internal variables, mainly the
    current state during the iterative optimization process. Be careful, this
    can be a numerically costly choice in certain calculation cases. The
    default is False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    available at the end of the algorithm. It involves potentially costly
    calculations. The default is a void list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].
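The role of the "*Quantile*" key can be understood through the standard
quantile (or pinball) loss on the residuals, which quantile regression
minimizes; this is a generic illustration, not ADAO code::

```python
import numpy

def pinball_loss(residual, quantile):
    """Standard quantile loss rho_q(r): q*r if r >= 0, (q-1)*r otherwise."""
    r = numpy.asarray(residual, dtype=float)
    return numpy.where(r >= 0.0, quantile * r, (quantile - 1.0) * r)

# For quantile = 0.5 (the default), the loss is symmetric (half the
# absolute value), so the minimizer is the median of the residuals.
print(pinball_loss([2.0, -2.0], 0.5))  # [1. 1.]

# For quantile = 0.9, underestimation (r > 0) is penalized more, which
# pushes the estimate toward the upper tail.
print(pinball_loss([2.0, -2.0], 0.9))  # [1.8 0.2]
```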
Requirements for functions describing an operator
-------------------------------------------------

The operators for observation and evolution are required to implement the data
assimilation or optimization procedures. They include the physical simulation
by numerical calculations, but also the filtering and restriction steps needed
to compare the simulation to the observations. The evolution operator is
considered here in its incremental form, representing the transition between
two successive states, and is then similar to the observation operator.

Schematically, an operator has to give an output solution given the input
parameters. Part of the input parameters can be modified during the
optimization procedure. So the mathematical representation of such a process is
a function. It was briefly described in the section :ref:`section_theory` and
is generalized here by the relation:

.. math:: \mathbf{y} = O( \mathbf{x} )

between the pseudo-observations :math:`\mathbf{y}` and the parameters
:math:`\mathbf{x}` using the observation or evolution operator :math:`O`. The
same functional representation can be used for the linear tangent model
:math:`\mathbf{O}` of :math:`O` and its adjoint :math:`\mathbf{O}^*`, also
required by some data assimilation or optimization algorithms.
Then, **to describe completely an operator, the user has only to provide a
function that fully and only realizes the functional operation**.

This function is usually given as a script that can be executed in a YACS node.
This script can equally launch external codes or use internal SALOME calls and
methods. If the algorithm requires the 3 aspects of the operator (direct form,
tangent form and adjoint form), the user has to give the 3 functions or to
approximate them.

There are 3 practical methods for the user to provide the operator functional
forms.

First functional form: using "*ScriptWithOneFunction*"
++++++++++++++++++++++++++++++++++++++++++++++++++++++
The first one consists in providing only one potentially nonlinear function,
and in approximating the tangent and the adjoint operators. This is done by
using the keyword "*ScriptWithOneFunction*" for the description of the chosen
operator in the ADAO GUI. The user has to provide the function in a script,
with a mandatory name "*DirectOperator*". For example, the script can follow
the template::

    def DirectOperator( X ):
        """ Direct non-linear simulation operator """
        ...
        ...
        ...
        return something like Y

In this case, the user can also provide a value for the differential increment,
using through the GUI the keyword "*DifferentialIncrement*", which has a default
value of 1%. This coefficient will be used in the finite difference
approximation to build the tangent and adjoint operators.

This first operator definition allows one to easily test the functional form
before its use in an ADAO case, reducing the complexity of implementation.
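The finite difference approximation controlled by "*DifferentialIncrement*"
can be sketched as follows on a toy "*DirectOperator*". The helper function
and its name are illustrative, not the internal ADAO code::

```python
import numpy

def DirectOperator( X ):
    """ Direct non-linear simulation operator (toy example) """
    X = numpy.ravel(X)
    return numpy.array([X[0]**2, X[0]*X[1]])

def ApproximatedTangent( X, dX, increment=0.01 ):
    """Forward finite-difference Jacobian of DirectOperator around X,
    applied to dX, with a relative increment (default 1%, cf. the
    default "DifferentialIncrement" value)."""
    X, dX = numpy.ravel(X), numpy.ravel(dX)
    h = increment * numpy.maximum(numpy.abs(X), 1.0)
    J = numpy.column_stack([
        (DirectOperator(X + h[i] * numpy.eye(len(X))[i]) - DirectOperator(X)) / h[i]
        for i in range(len(X))
    ])
    return J.dot(dX)

X, dX = [1.0, 2.0], [1.0, 0.0]
print(ApproximatedTangent(X, dX))  # close to the exact value [2., 2.]
```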
Second functional form: using "*ScriptWithFunctions*"
+++++++++++++++++++++++++++++++++++++++++++++++++++++

The second one consists in directly providing the three associated operators
:math:`O`, :math:`\mathbf{O}` and :math:`\mathbf{O}^*`. This is done by using
the keyword "*ScriptWithFunctions*" for the description of the chosen operator
in the ADAO GUI. The user has to provide three functions in one script, with
three mandatory names "*DirectOperator*", "*TangentOperator*" and
"*AdjointOperator*". For example, the script can follow the template::

    def DirectOperator( X ):
        """ Direct non-linear simulation operator """
        ...
        ...
        ...
        return something like Y

    def TangentOperator( (X, dX) ):
        """ Tangent linear operator, around X, applied to dX """
        ...
        ...
        ...
        return something like Y

    def AdjointOperator( (X, Y) ):
        """ Adjoint operator, around X, applied to Y """
        ...
        ...
        ...
        return something like X

Again, this second operator definition allows one to easily test the functional
forms before their use in an ADAO case, greatly reducing the complexity of
implementation.
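When the three operators are provided explicitly, their mutual consistency can
be verified with the classical inner-product test
:math:`\langle \mathbf{O}\,dX, Y \rangle = \langle dX, \mathbf{O}^*Y \rangle`.
A sketch on a linear toy operator follows (illustrative, not ADAO code; the
pair-argument convention of the template is written here in a Python 3
compatible way, with explicit unpacking)::

```python
import numpy

A = numpy.array([[1., 2.], [3., 4.], [5., 6.]])  # toy linear operator

def DirectOperator( X ):
    return A.dot(numpy.ravel(X))

def TangentOperator( pair ):
    X, dX = pair             # linear case: the tangent is the operator itself
    return A.dot(numpy.ravel(dX))

def AdjointOperator( pair ):
    X, Y = pair              # adjoint of a real matrix: its transpose
    return A.T.dot(numpy.ravel(Y))

X  = numpy.array([1., 1.])
dX = numpy.array([0.1, -0.2])
Y  = numpy.array([1., 0., -1.])

lhs = numpy.dot(TangentOperator((X, dX)), Y)
rhs = numpy.dot(dX, AdjointOperator((X, Y)))
print(abs(lhs - rhs) < 1e-12)  # True: the adjoint is consistent
```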
Third functional form: using "*ScriptWithSwitch*"
+++++++++++++++++++++++++++++++++++++++++++++++++

This third form gives more possibilities to control the execution of the three
functions representing the operator, allowing advanced usage and control over
each execution of the simulation code. This is done by using the keyword
"*ScriptWithSwitch*" for the description of the chosen operator in the ADAO
GUI. The user has to provide a switch in one script to control the execution of
the direct, tangent and adjoint forms of the simulation code. The user can
then, for example, use other approximations for the tangent and adjoint codes,
or introduce more complexity in the argument treatment of the functions. But it
will be far more complicated to implement and debug.

**It is recommended not to use this third functional form without a solid
numerical or physical reason.**

If, however, you want to use this third form, we recommend using the following
template for the switch. It requires an external script or code named
"*Physical_simulation_functions.py*", containing three functions named
"*DirectOperator*", "*TangentOperator*" and "*AdjointOperator*" as previously.
Here is the switch template::
    import Physical_simulation_functions
    import numpy, logging
    #
    method = ""
    for param in computation["specificParameters"]:
        if param["name"] == "method":
            method = param["value"]
    if method not in ["Direct", "Tangent", "Adjoint"]:
        raise ValueError("No valid computation method is given")
    logging.info("Found method is \'%s\'"%method)
    #
    logging.info("Loading operator functions")
    Function = Physical_simulation_functions.DirectOperator
    Tangent  = Physical_simulation_functions.TangentOperator
    Adjoint  = Physical_simulation_functions.AdjointOperator
    #
    logging.info("Executing the possible computations")
    data = []
    if method == "Direct":
        logging.info("Direct computation")
        Xcurrent = computation["inputValues"][0][0][0]
        data = Function(numpy.matrix( Xcurrent ).T)
    if method == "Tangent":
        logging.info("Tangent computation")
        Xcurrent  = computation["inputValues"][0][0][0]
        dXcurrent = computation["inputValues"][0][0][1]
        data = Tangent((numpy.matrix(Xcurrent).T, numpy.matrix(dXcurrent).T))
    if method == "Adjoint":
        logging.info("Adjoint computation")
        Xcurrent = computation["inputValues"][0][0][0]
        Ycurrent = computation["inputValues"][0][0][1]
        data = Adjoint((numpy.matrix(Xcurrent).T, numpy.matrix(Ycurrent).T))
    #
    logging.info("Formatting the output")
    it = numpy.ravel(data)
    outputValues = [[[[]]]]
    for val in it:
        outputValues[0][0][0].append(val)
    #
    result = {}
    result["outputValues"]        = outputValues
    result["specificOutputInfos"] = []
    result["returnCode"]          = 0
    result["errorMessage"]        = ""
All kinds of modifications can be made starting from this template.
Special case of controlled evolution operator
++++++++++++++++++++++++++++++++++++++++++++++

In some cases, the evolution or the observation operators are required to be
controlled by an external input control, given a priori. In this case, the
generic form of the incremental evolution model is slightly modified as
follows:

.. math:: \mathbf{y} = O( \mathbf{x}, \mathbf{u})

where :math:`\mathbf{u}` is the control over one state increment. In this case,
the direct operator has to be applied to a pair of variables :math:`(X,U)`.
Schematically, the operator has to be set as::

    def DirectOperator( (X, U) ):
        """ Direct non-linear simulation operator """
        ...
        ...
        ...
        return something like X(n+1) or Y(n+1)

The tangent and adjoint operators have the same signature as previously, noting
that the derivatives have to be taken only partially with respect to
:math:`\mathbf{x}`. In such a case with explicit control, only the second
functional form (using "*ScriptWithFunctions*") and the third functional form
(using "*ScriptWithSwitch*") can be used.
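A minimal sketch of such a controlled direct operator, on a toy linear
evolution forced by the control (all values illustrative, not ADAO code; the
pair argument is unpacked explicitly for Python 3 compatibility)::

```python
import numpy

def DirectOperator( pair ):
    """ Controlled evolution: X(n+1) = M( X(n), U(n) ) """
    X, U = pair
    X = numpy.ravel(X)
    U = numpy.ravel(U)
    # Toy damped linear evolution, forced by the control U
    return 0.9 * X + U

Xn = numpy.array([1.0, 2.0])
Un = numpy.array([0.5, -0.5])
print(DirectOperator((Xn, Un)))  # [1.4 1.3]
```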