================================================================================
Reference description of the ADAO commands and keywords
================================================================================

This section presents the reference description of the ADAO commands and
keywords available through the GUI or through scripts.

Each command or keyword to be defined through the ADAO GUI has some properties.
The first property is to be *required*, *optional* or only factual, describing
a type of input. The second property is to be an "open" variable, with a fixed
type but with any value allowed by the type, or a "restricted" variable,
limited to some specified values. Since the EFICAS editor GUI has built-in
validating capacities, the properties of the commands or keywords given through
this GUI are automatically correct.

The mathematical notations used afterward are explained in the section
:ref:`section_theory`.

Examples of using these commands are available in the section
:ref:`section_examples` and in the example files installed with the ADAO
module.

List of possible input types
----------------------------

.. index:: single: Dict
.. index:: single: Function
.. index:: single: Matrix
.. index:: single: String
.. index:: single: Script
.. index:: single: Vector

Each ADAO variable has a pseudo-type to help filling it and validating it. The
different pseudo-types are:

**Dict**
  This indicates a variable that has to be filled by a dictionary, usually
  given as a script.

**Function**
  This indicates a variable that has to be filled by a function, usually given
  as a script or a component method.

**Matrix**
  This indicates a variable that has to be filled by a matrix, usually given
  either as a string or as a script.

**String**
  This indicates a string giving a literal representation of a matrix, a
  vector or a vector series, such as "1 2 ; 3 4" for a square 2x2 matrix.

**Script**
  This indicates a script given as an external file. It can be described by a
  full absolute path name or only by the file name without path.

**Vector**
  This indicates a variable that has to be filled by a vector, usually given
  either as a string or as a script.

**VectorSerie**
  This indicates a variable that has to be filled by a list of vectors,
  usually given either as a string or as a script.

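As an illustration of the "*String*" pseudo-type above, the following sketch
shows how such a literal maps to a matrix (NumPy is used here only for
illustration, it is not part of the GUI input itself):

```python
import numpy

# "1 2 ; 3 4" is the literal representation of a square 2x2 matrix:
# rows are separated by ";" and values inside a row by spaces.
M = numpy.matrix("1 2 ; 3 4")
assert M.shape == (2, 2)
assert M[1, 0] == 3
```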
When a command or keyword can be filled by a script file name, the script has
to contain a variable or a method that has the same name as the one to be
filled. In other words, when importing the script in a YACS Python node, it
must create a variable of the right name in the current namespace.

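For example, a minimal hypothetical script given for a keyword named
"*Background*" reduces to a single assignment; importing it creates the
variable of the required name in the namespace:

```python
# Hypothetical script file given for the "Background" keyword: importing
# it must create a variable named exactly "Background" in the namespace.
Background = [0., 1., 2.]
```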
List of commands and keywords for an ADAO calculation case
----------------------------------------------------------

.. index:: single: ASSIMILATION_STUDY
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: Debug
.. index:: single: EvolutionError
.. index:: single: EvolutionModel
.. index:: single: InputVariables
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Observers
.. index:: single: OutputVariables
.. index:: single: Study_name
.. index:: single: Study_repertory
.. index:: single: UserDataInit
.. index:: single: UserPostAnalysis

The first set of commands is related to the description of a calculation case,
that is, a *Data Assimilation* procedure or an *Optimization* procedure. The
terms are ordered alphabetically, except the first, which describes the choice
between calculation or checking. The different commands are the following:

**ASSIMILATION_STUDY**
  *Required command*. This is the general command describing the data
  assimilation or optimization case. It hierarchically contains all the other
  commands.

**Algorithm**
  *Required command*. This is a string indicating the data assimilation or
  optimization algorithm chosen. The choices are limited and available through
  the GUI. There exist for example "3DVAR", "Blue"... See the list of
  algorithms and associated parameters in the following subsection `Options
  for algorithms`_.

**AlgorithmParameters**
  *Optional command*. This command allows one to add some optional parameters
  to control the data assimilation or optimization algorithm. It is defined as
  a "*Dict*" type object, that is, given as a script. See the list of
  algorithms and associated parameters in the following subsection `Options
  for algorithms`_.

**Background**
  *Required command*. This indicates the background or initial vector used,
  previously noted as :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type
  object, that is, given either as a string or as a script.

**BackgroundError**
  *Required command*. This indicates the background error covariance matrix,
  previously noted as :math:`\mathbf{B}`. It is defined as a "*Matrix*" type
  object, that is, given either as a string or as a script.

**Debug**
  *Required command*. This defines the level of trace and intermediary debug
  information. The choices are limited to 0 (for False) and 1 (for True).

**EvolutionError**
  *Optional command*. This indicates the evolution error covariance matrix,
  usually noted as :math:`\mathbf{Q}`. It is defined as a "*Matrix*" type
  object, that is, given either as a string or as a script.

**EvolutionModel**
  *Optional command*. This indicates the evolution model operator, usually
  noted :math:`M`, which describes a step of evolution. It is defined as a
  "*Function*" type object, that is, given as a script. Different functional
  forms can be used, as described in the following subsection `Requirements
  for functions describing an operator`_.

**InputVariables**
  *Optional command*. This command allows one to indicate the name and size of
  the physical variables that are bundled together in the control vector. This
  information is dedicated to data processed inside an algorithm.

**Observation**
  *Required command*. This indicates the observation vector used for data
  assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
  is defined as a "*Vector*" type object, that is, given either as a string or
  as a script.

**ObservationError**
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
  object, that is, given either as a string or as a script.

**ObservationOperator**
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
  results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. It is defined as a "*Function*" type object, that is,
  given as a script. Different functional forms can be used, as described in
  the following subsection `Requirements for functions describing an
  operator`_.

**Observers**
  *Optional command*. This command allows one to set internal observers, that
  is, functions linked with a particular variable, which will be executed each
  time this variable is modified. It is a convenient way to monitor variables
  of interest during the data assimilation or optimization process, for
  example by printing or plotting them.

**OutputVariables**
  *Optional command*. This command allows one to indicate the name and size of
  the physical variables that are bundled together in the output observation
  vector. This information is dedicated to data processed inside an algorithm.

**Study_name**
  *Required command*. This is an open string to describe the study by a name
  or a sentence.

**Study_repertory**
  *Optional command*. If available, this directory is used to find all the
  script files that can be used to define some other commands by scripts.

**UserDataInit**
  *Optional command*. This command allows one to initialize some parameters or
  data automatically before the data assimilation algorithm processing.

**UserPostAnalysis**
  *Optional command*. This command allows one to process some parameters or
  data automatically after the data assimilation algorithm processing. It is
  defined as a script or a string, allowing one to put post-processing code
  directly inside the ADAO case.

List of commands and keywords for an ADAO checking case
-------------------------------------------------------

.. index:: single: CHECKING_STUDY
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
.. index:: single: CheckingPoint
.. index:: single: Debug
.. index:: single: ObservationOperator
.. index:: single: Study_name
.. index:: single: Study_repertory
.. index:: single: UserDataInit

The second set of commands is related to the description of a checking case,
that is, a procedure to check required properties on information used
somewhere else by a calculation case. The terms are ordered alphabetically,
except the first, which describes the choice between calculation or checking.
The different commands are the following:

**CHECKING_STUDY**
  *Required command*. This is the general command describing the checking
  case. It hierarchically contains all the other commands.

**Algorithm**
  *Required command*. This is a string indicating the data assimilation or
  optimization algorithm chosen. The choices are limited and available through
  the GUI. There exist for example "3DVAR", "Blue"... See the list of
  algorithms and associated parameters in the following subsection `Options
  for algorithms`_.

**AlgorithmParameters**
  *Optional command*. This command allows one to add some optional parameters
  to control the data assimilation or optimization algorithm. It is defined as
  a "*Dict*" type object, that is, given as a script. See the list of
  algorithms and associated parameters in the following subsection `Options
  for algorithms`_.

**CheckingPoint**
  *Required command*. This indicates the vector used, previously noted as
  :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object, that is,
  given either as a string or as a script.

**Debug**
  *Required command*. This defines the level of trace and intermediary debug
  information. The choices are limited to 0 (for False) and 1 (for True).

**ObservationOperator**
  *Required command*. This indicates the observation operator, previously
  noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
  results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. It is defined as a "*Function*" type object, that is,
  given as a script. Different functional forms can be used, as described in
  the following subsection `Requirements for functions describing an
  operator`_.

**Study_name**
  *Required command*. This is an open string to describe the study by a name
  or a sentence.

**Study_repertory**
  *Optional command*. If available, this directory is used to find all the
  script files that can be used to define some other commands by scripts.

**UserDataInit**
  *Optional command*. This command allows one to initialize some parameters or
  data automatically before the data assimilation algorithm processing.

Options for algorithms
----------------------

.. index:: single: 3DVAR
.. index:: single: Blue
.. index:: single: EnsembleBlue
.. index:: single: KalmanFilter
.. index:: single: LinearLeastSquares
.. index:: single: NonLinearLeastSquares
.. index:: single: ParticleSwarmOptimization
.. index:: single: QuantileRegression

.. index:: single: AlgorithmParameters
.. index:: single: Bounds
.. index:: single: CostDecrementTolerance
.. index:: single: GradientNormTolerance
.. index:: single: GroupRecallRate
.. index:: single: MaximumNumberOfSteps
.. index:: single: Minimizer
.. index:: single: NumberOfInsects
.. index:: single: ProjectedGradientTolerance
.. index:: single: QualityCriterion
.. index:: single: Quantile
.. index:: single: SetSeed
.. index:: single: StoreInternalVariables
.. index:: single: StoreSupplementaryCalculations
.. index:: single: SwarmVelocity

Each algorithm can be controlled using some generic or specific options given
through the "*AlgorithmParameters*" optional command, as follows for example::

    AlgorithmParameters = {
        "Minimizer" : "LBFGSB",
        "MaximumNumberOfSteps" : 25,
        "StoreSupplementaryCalculations" : ["APosterioriCovariance", "OMA"],
        }

This section describes the available options algorithm by algorithm. If an
option is specified for an algorithm that does not support it, the option is
simply left unused. The meaning of the acronyms or particular names can be
found in the :ref:`genindex` or the :ref:`section_glossary`.

**"Blue"**

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "OMA", "OMB", "Innovation",
    "SigmaBck2", "SigmaObs2", "MahalanobisConsistency"].

318 **"LinearLeastSquares"**
320 StoreSupplementaryCalculations
321 This list indicates the names of the supplementary variables that can be
322 available at the end of the algorithm. It involves potentially costly
323 calculations. The default is a void list, none of these variables being
324 calculated and stored by default. The possible names are in the following
**"3DVAR"**

  Minimizer
    This key allows one to choose the optimization minimizer. The default
    choice is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear
    constrained minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear
    constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
    (nonlinear unconstrained minimizer) and "NCG" (Newton CG minimizer).

  Bounds
    This key allows one to define upper and lower bounds for every control
    variable being optimized. Bounds have to be given as a list of pairs of
    lower/upper bounds for each variable, with ``None`` each time there is no
    bound. The bounds can always be specified, but they are taken into account
    only by the constrained minimizers.

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for the
    iterative optimization. The default is 15000, which is very similar to no
    limit on iterations. It is then recommended to adapt this parameter to the
    needs of real problems. For some minimizers, the effective stopping step
    can be slightly different due to algorithm internal control requirements.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the cost function decreases less than
    this tolerance at the last step. The default is 10e-7, and it is
    recommended to adapt it to the needs of real problems.

  ProjectedGradientTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when all the components of the projected
    gradient are under this limit. It is only used for constrained algorithms.
    The default is -1, that is, the internal default of each algorithm
    (generally 1.e-5), and it is not recommended to change it.

  GradientNormTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the norm of the gradient is under this
    limit. It is only used for unconstrained algorithms. The default is 10e-5,
    and it is not recommended to change it.

  StoreInternalVariables
    This Boolean key allows one to store default internal variables, mainly
    the current state during the iterative optimization process. Be careful,
    this can be a numerically costly choice in certain calculation cases. The
    default is False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "BMA", "OMA", "OMB", "Innovation",
    "SigmaObs2", "MahalanobisConsistency"].

383 **"NonLinearLeastSquares"**
386 This key allows to choose the optimization minimizer. The default choice
387 is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
388 minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained
389 minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear
390 unconstrained minimizer), "NCG" (Newton CG minimizer).
  Bounds
    This key allows one to define upper and lower bounds for every control
    variable being optimized. Bounds have to be given as a list of pairs of
    lower/upper bounds for each variable, with ``None`` each time there is no
    bound. The bounds can always be specified, but they are taken into account
    only by the constrained minimizers.

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for the
    iterative optimization. The default is 15000, which is very similar to no
    limit on iterations. It is then recommended to adapt this parameter to the
    needs of real problems. For some minimizers, the effective stopping step
    can be slightly different due to algorithm internal control requirements.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the cost function decreases less than
    this tolerance at the last step. The default is 10e-7, and it is
    recommended to adapt it to the needs of real problems.

  ProjectedGradientTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when all the components of the projected
    gradient are under this limit. It is only used for constrained algorithms.
    The default is -1, that is, the internal default of each algorithm
    (generally 1.e-5), and it is not recommended to change it.

  GradientNormTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the norm of the gradient is under this
    limit. It is only used for unconstrained algorithms. The default is 10e-5,
    and it is not recommended to change it.

  StoreInternalVariables
    This Boolean key allows one to store default internal variables, mainly
    the current state during the iterative optimization process. Be careful,
    this can be a numerically costly choice in certain calculation cases. The
    default is False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].

**"EnsembleBlue"**

  SetSeed
    This key allows one to give an integer in order to fix the seed of the
    random generator used to generate the ensemble. A convenient value is for
    example 1000. By default, the seed is left uninitialized, and so the
    default initialization from the computer is used.

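What fixing the seed achieves can be sketched as follows (using NumPy's
generator purely for illustration; the module's internal generator is assumed
to behave analogously):

```python
import numpy

# Two draws with the same fixed seed (1000 is the convenient value
# mentioned above) produce the same pseudo-random numbers, hence a
# reproducible ensemble from one run to the next.
numpy.random.seed(1000)
a = numpy.random.normal(0., 1., size=5)
numpy.random.seed(1000)
b = numpy.random.normal(0., 1., size=5)
assert (a == b).all()
```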
**"KalmanFilter"**

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["APosterioriCovariance", "Innovation"].

455 **"ParticleSwarmOptimization"**
458 This key indicates the maximum number of iterations allowed for iterative
459 optimization. The default is 50, which is an arbitrary limit. It is then
460 recommended to adapt this parameter to the needs on real problems.
  NumberOfInsects
    This key indicates the number of insects or particles in the swarm. The
    default is 100, which is a usual default for this algorithm.

  SwarmVelocity
    This key indicates the part of the insect velocity which is imposed by the
    swarm. It is a positive floating point value. The default value is 1.

  GroupRecallRate
    This key indicates the recall rate towards the best insect of the swarm.
    It is a floating point value between 0 and 1. The default value is 0.5.

  QualityCriterion
    This key indicates the quality criterion, minimized to find the optimal
    state estimate. The default is the usual data assimilation criterion named
    "DA", the augmented weighted least squares. The possible criteria have to
    be in the following list, where the equivalent names are indicated by "=":
    ["AugmentedPonderatedLeastSquares"="APLS"="DA",
    "PonderatedLeastSquares"="PLS", "LeastSquares"="LS"="L2",
    "AbsoluteValue"="L1", "MaximumError"="ME"].

  SetSeed
    This key allows one to give an integer in order to fix the seed of the
    random generator used to generate the ensemble. A convenient value is for
    example 1000. By default, the seed is left uninitialized, and so the
    default initialization from the computer is used.

  StoreInternalVariables
    This Boolean key allows one to store default internal variables, mainly
    the current state during the iterative optimization process. Be careful,
    this can be a numerically costly choice in certain calculation cases. The
    default is False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].

502 **"QuantileRegression"**
505 This key allows to define the real value of the desired quantile, between
506 0 and 1. The default is 0.5, corresponding to the median.
  Minimizer
    This key allows one to choose the optimization minimizer. The default, and
    only available, choice is "MMQR" (Majorize-Minimize for Quantile
    Regression).

  MaximumNumberOfSteps
    This key indicates the maximum number of iterations allowed for the
    iterative optimization. The default is 15000, which is very similar to no
    limit on iterations. It is then recommended to adapt this parameter to the
    needs of real problems.

  CostDecrementTolerance
    This key indicates a limit value, leading to stop successfully the
    iterative optimization process when the cost function or the surrogate
    decreases less than this tolerance at the last step. The default is 10e-6,
    and it is recommended to adapt it to the needs of real problems.

  StoreInternalVariables
    This Boolean key allows one to store default internal variables, mainly
    the current state during the iterative optimization process. Be careful,
    this can be a numerically costly choice in certain calculation cases. The
    default is False.

  StoreSupplementaryCalculations
    This list indicates the names of the supplementary variables that can be
    made available at the end of the algorithm. It involves potentially costly
    calculations. The default is an empty list, none of these variables being
    calculated and stored by default. The possible names are in the following
    list: ["BMA", "OMA", "OMB", "Innovation"].

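As an illustration of the "*Quantile*" option of this algorithm, the 0.5
quantile of a sample coincides with its median, which is why the default
targets the median (NumPy is used here only for illustration):

```python
import numpy

# The 0.5 quantile of a sample is its median: half of the values lie
# below it, half above, which is the default target of the regression.
sample = numpy.array([1., 2., 3., 4., 100.])
assert float(numpy.quantile(sample, 0.5)) == float(numpy.median(sample)) == 3.0
```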
Requirements for functions describing an operator
-------------------------------------------------

The operators for observation and evolution are required to implement the data
assimilation or optimization procedures. They include the physical numerical
simulations, but also the filtering and restriction needed to compare the
simulation to the observations.

Schematically, an operator has to give an output solution for given input
parameters. Part of the input parameters can be modified during the
optimization procedure. So the mathematical representation of such a process
is a function. It was briefly described in the section :ref:`section_theory`
and is generalized here by the relation:

.. math:: \mathbf{y} = H( \mathbf{x} )

between the pseudo-observations :math:`\mathbf{y}` and the parameters
:math:`\mathbf{x}` using the observation operator :math:`H`. The same
functional representation can be used for the linear tangent model
:math:`\mathbf{H}` of :math:`H` and its adjoint :math:`\mathbf{H}^*`, also
required by some data assimilation or optimization algorithms.

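For a linear operator given by a matrix, the tangent is the matrix itself and
the adjoint is its transpose; a quick numerical check of the defining identity
:math:`<\mathbf{H}x, y> = <x, \mathbf{H}^*y>` can be sketched as follows (the
matrix values are arbitrary and serve only as an illustration):

```python
import numpy

# Direct operator H (3x2 matrix) and its adjoint (the transpose).
H  = numpy.array([[1., 2.], [3., 4.], [5., 6.]])
Ht = H.T
x  = numpy.array([1., -1.])
y  = numpy.array([0.5, 2., -3.])

# The adjoint identity: <H x, y> must equal <x, H* y>.
lhs = numpy.dot(H.dot(x), y)
rhs = numpy.dot(x, Ht.dot(y))
assert abs(lhs - rhs) < 1e-12
```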
Then, **to describe completely an operator, the user has only to provide a
function that fully and only realizes the functional operation**.

This function is usually given as a script that can be executed in a YACS
node. This script can indifferently launch external codes or use internal
SALOME calls and methods. If the algorithm requires the 3 aspects of the
operator (direct form, tangent form and adjoint form), the user has to give
the 3 functions or to approximate them.

There are 3 practical methods for the user to provide the operator functional
form:

First functional form: using "*ScriptWithOneFunction*"
++++++++++++++++++++++++++++++++++++++++++++++++++++++

The first one consists in providing only one potentially non-linear function,
and in approximating the tangent and the adjoint operators. This is done by
using the keyword "*ScriptWithOneFunction*" for the description of the chosen
operator in the ADAO GUI. The user has to provide the function in a script,
with the mandatory name "*DirectOperator*". For example, the script can follow
the template::

    def DirectOperator( X ):
        """ Direct non-linear simulation operator """
        ...
        ...
        return something like Y

In this case, the user can also provide a value for the differential
increment, using the GUI keyword "*DifferentialIncrement*", which has a
default value of 1%. This coefficient will be used in the finite difference
approximation to build the tangent and adjoint operators.

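The finite difference construction can be sketched as follows (a simplified
illustration, not the module's actual implementation; both the operator and
the increment handling are hypothetical):

```python
import numpy

def DirectOperator(X):
    # Hypothetical non-linear operator, used only for illustration.
    X = numpy.ravel(X)
    return numpy.array([X[0]**2, X[0]*X[1]])

def ApproximateTangent(X, dX, increment=0.01):
    # Sketch of a finite difference approximation driven by a 1% default
    # increment: each column i of the tangent is (H(X + h e_i) - H(X)) / h.
    X, dX = numpy.ravel(X), numpy.ravel(dX)
    HX = DirectOperator(X)
    cols = []
    for i in range(len(X)):
        h = increment * max(abs(X[i]), 1.)
        Xp = X.copy()
        Xp[i] += h
        cols.append((DirectOperator(Xp) - HX) / h)
    return numpy.column_stack(cols).dot(dX)
```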
This first operator definition form allows one to easily test the functional
form before its use in an ADAO case, reducing the complexity of
implementation.

Second functional form: using "*ScriptWithFunctions*"
+++++++++++++++++++++++++++++++++++++++++++++++++++++

The second one consists in providing directly the three associated operators
:math:`H`, :math:`\mathbf{H}` and :math:`\mathbf{H}^*`. This is done by using
the keyword "*ScriptWithFunctions*" for the description of the chosen operator
in the ADAO GUI. The user has to provide three functions in one script, with
the three mandatory names "*DirectOperator*", "*TangentOperator*" and
"*AdjointOperator*". For example, the script can follow the template::

    def DirectOperator( X ):
        """ Direct non-linear simulation operator """
        ...
        ...
        return something like Y

    def TangentOperator( paire ):
        """ Tangent linear operator, around X, applied to dX """
        (X, dX) = paire
        ...
        return something like Y

    def AdjointOperator( paire ):
        """ Adjoint operator, around X, applied to Y """
        (X, Y) = paire
        ...
        return something like X

As before, this second operator definition form allows one to easily test the
functional forms before their use in an ADAO case, greatly reducing the
complexity of implementation.

Third functional form: using "*ScriptWithSwitch*"
+++++++++++++++++++++++++++++++++++++++++++++++++

This third form gives more possibilities to control the execution of the three
functions representing the operator, allowing advanced usage and control over
each execution of the simulation code. This is done by using the keyword
"*ScriptWithSwitch*" for the description of the chosen operator in the ADAO
GUI. The user has to provide a switch in one script to control the execution
of the direct, tangent and adjoint forms of the simulation code. The user can
then, for example, use other approximations for the tangent and adjoint codes,
or introduce more complexity in the argument treatment of the functions. But
it will be far more complicated to implement and debug.

**It is recommended not to use this third functional form without a solid
numerical or physical reason.**

If, however, you want to use this third form, we recommend using the following
template for the switch. It requires an external script or code named
"*Physical_simulation_functions.py*", containing three functions named
"*DirectOperator*", "*TangentOperator*" and "*AdjointOperator*" as previously.
Here is the switch template::

    import Physical_simulation_functions
    import numpy, logging
    #
    method = ""
    for param in computation["specificParameters"]:
        if param["name"] == "method":
            method = param["value"]
    if method not in ["Direct", "Tangent", "Adjoint"]:
        raise ValueError("No valid computation method is given")
    logging.info("Found method is '%s'"%method)
    #
    logging.info("Loading operator functions")
    FunctionH = Physical_simulation_functions.DirectOperator
    TangentH  = Physical_simulation_functions.TangentOperator
    AdjointH  = Physical_simulation_functions.AdjointOperator
    #
    logging.info("Executing the possible computations")
    if method == "Direct":
        logging.info("Direct computation")
        Xcurrent = computation["inputValues"][0][0][0]
        data = FunctionH(numpy.matrix( Xcurrent ).T)
    if method == "Tangent":
        logging.info("Tangent computation")
        Xcurrent  = computation["inputValues"][0][0][0]
        dXcurrent = computation["inputValues"][0][0][1]
        data = TangentH((numpy.matrix(Xcurrent).T, numpy.matrix(dXcurrent).T))
    if method == "Adjoint":
        logging.info("Adjoint computation")
        Xcurrent = computation["inputValues"][0][0][0]
        Ycurrent = computation["inputValues"][0][0][1]
        data = AdjointH((numpy.matrix(Xcurrent).T, numpy.matrix(Ycurrent).T))
    #
    logging.info("Formatting the output")
    it = numpy.ravel(data)
    outputValues = [[[[]]]]
    for val in it:
        outputValues[0][0][0].append(val)
    #
    result = {}
    result["outputValues"]        = outputValues
    result["specificOutputInfos"] = []
    result["returnCode"]          = 0
    result["errorMessage"]        = ""

Various modifications can be made starting from this template.