Results can be obtained, through the "*algoResults*" output port, using YACS
nodes to retrieve all the information in the "*pyobj*" object, to transform
it, to convert it, to save part of it, etc.

The data assimilation results and complementary calculations can be retrieved
using the "*get*" method of the "*algoResults.ADD*" object. This method picks
the different output variables identified by their name. Indicating in
parentheses their availability as automatic (for every algorithm) or optional
(depending on the algorithm), and their notation coming from section
:ref:`section_theory`, the main available output variables are the following:

-#. "Analysis" (automatic): the control state evaluated by the data assimilation
- procedure, noted as :math:`\mathbf{x}^a`.
-#. "Innovation" (automatic): the difference between the observations and the
- control state transformed by the observation operator, noted as
- :math:`\mathbf{y}^o - \mathbf{H}\mathbf{x}^b`.
-#. "APosterioriCovariance" (optional): the covariance matrix of the *a
- posteriori* analysis errors, noted as :math:`\mathbf{A}`.
-#. "OMB" (optional): the difference between the observations and the
- background, similar to the innovation.
-#. "BMA" (optional): the difference between the background and the analysis,
- noted as :math:`\mathbf{x}^b - \mathbf{x}^a`.
-#. "OMA" (optional): the difference between the observations and the analysis,
- noted as :math:`\mathbf{y}^o - \mathbf{H}\mathbf{x}^a`.
-#. "CostFunctionJ" (optional): the minimisation function, noted as :math:`J`.
-#. "CostFunctionJo" (optional): the observation part of the minimisation
- function, noted as :math:`J^o`.
-#. "CostFunctionJb" (optional): the background part of the minimisation
- function, noted as :math:`J^b`.
-
Input variables are also available as output in order to gather all the
information at the end of the procedure.

All the variables are lists of typed values, each item of the list
corresponding to the value of the variable at a time step or an iteration
step in the data assimilation optimization procedure. The value of a variable
at a given step "*i*" can be obtained by the method "*valueserie(i)*". The
last one, which is the solution of the evaluation problem, can be obtained
using the step "*-1*" as in a standard list.

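The step-indexed access just described can be sketched with a plain Python
list standing in for an ADAO result variable (the values are hypothetical; in
a real case the list would come from the "*algoResults.ADD*" object)::

    # Stand-in for a result variable: one control state per iteration step
    # (hypothetical values, not from a real assimilation run).
    analysis_steps = [[0.0, 0.0], [0.4, 0.9], [0.5, 1.0]]

    def valueserie(series, i):
        """Mimic the documented "valueserie(i)" access on a result variable."""
        return series[i]

    first = valueserie(analysis_steps, 0)   # state at the first step
    final = valueserie(analysis_steps, -1)  # the solution, as in a standard list
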
Reference description of the commands and keywords available through the GUI
-----------------------------------------------------------------------------

Each command or keyword to be defined through the ADAO GUI has some
properties. The first property is whether it is a required command, an
optional command, or a keyword describing a type of input. The second
property is whether it is an "open" variable, with a fixed type but any value
allowed by the type, or a "restricted" variable, limited to some specified
values. The mathematical notations used afterwards are explained in the
section :ref:`section_theory`.

List of possible input types
++++++++++++++++++++++++++++

The different type-style commands are:

:Dict:
  *Type of an input*. This indicates a variable that has to be filled by a
  dictionary, usually given as a script.

:Function:
  *Type of an input*. This indicates a variable that has to be filled by a
  function, usually given as a script.

:Matrix:
  *Type of an input*. This indicates a variable that has to be filled by a
  matrix, usually given either as a string or as a script.

:String:
  *Type of an input*. This indicates a string, such as a name, or a literal
  representation of a matrix or vector, like "1 2 ; 3 4".

:Script:
  *Type of an input*. This indicates a script given as an external file.

:Vector:
  *Type of an input*. This indicates a variable that has to be filled by a
  vector, usually given either as a string or as a script.
-
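For instance, a "*Vector*" entry such as "*Background*" can be filled through
a script file; a minimal hypothetical sketch follows, assuming the convention
that the script defines a variable carrying the value (the variable name and
the plain-list representation are illustrative assumptions)::

    # Hypothetical script file for a "Vector"-type input such as "Background":
    # it computes or loads the value and exposes it in a variable.
    Background = [0.0, 0.0, 0.0]  # a priori state, here simply zero
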
List of commands
++++++++++++++++

The different commands are the following:

:ASSIMILATION_STUDY:
  *Required command*. This is the general command describing an ADAO case. It
  hierarchically contains all the other commands.

:Algorithm:
  *Required command*. This is a string indicating the chosen data
  assimilation algorithm. The choices are limited and available through the
  GUI. Examples are "3DVAR" and "Blue". See below the list of algorithms and
  associated parameters.

:AlgorithmParameters:
  *Optional command*. This command allows adding some optional parameters to
  control the data assimilation algorithm calculation. It is defined as a
  "*Dict*" type object. See below the list of algorithms and associated
  parameters.

:Background:
  *Required command*. This indicates the background vector used for data
  assimilation, previously noted as :math:`\mathbf{x}^b`. It is defined as a
  "*Vector*" type object, that is, given either as a string or as a script.

:BackgroundError:
  *Required command*. This indicates the background error covariance matrix,
  previously noted as :math:`\mathbf{B}`. It is defined as a "*Matrix*" type
  object, that is, given either as a string or as a script.

:Debug:
  *Required command*. This key lets one choose the level of trace and
  intermediate debug information. The choices are limited to 0 (for False)
  and 1 (for True), and are available through the GUI.

:InputVariables:
  *Optional command*. This command allows indicating the name and size of the
  physical variables that are bundled together in the control vector. This
  information is dedicated to the data processed inside the data assimilation
  algorithm.

:Observation:
  *Required command*. This indicates the observation vector used for data
  assimilation, previously noted as :math:`\mathbf{y}^o`. It is defined as a
  "*Vector*" type object, that is, given either as a string or as a script.

:ObservationError:
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
  object, that is, given either as a string or as a script.

:ObservationOperator:
  *Required command*. This indicates the observation operator, previously
  noted as :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
  into results :math:`\mathbf{y}` to be compared to the observations
  :math:`\mathbf{y}^o`.

:Observers:
  *Optional command*. This command allows setting internal observers, which
  are functions linked with a particular variable and executed each time this
  variable is modified. It is a convenient way to monitor variables of
  interest during the data assimilation process, by printing or plotting
  them, etc.

:OutputVariables:
  *Optional command*. This command allows indicating the name and size of the
  physical variables that are bundled together in the output observation
  vector. This information is dedicated to the data processed inside the data
  assimilation algorithm.

:Study_name:
  *Required command*. This is an open string describing the study by a name
  or a sentence.

:Study_repertory:
  *Optional command*. If available, this directory is used to find all the
  script files that can be used to define some other commands by scripts.

:UserDataInit:
  *Optional command*. This command allows initialising some parameters or
  data automatically before the data assimilation algorithm processing.

:UserPostAnalysis:
  *Optional command*. This command allows processing some parameters or data
  automatically after the data assimilation algorithm processing. It is
  defined as a script or a string, allowing simple code to be put directly
  inside the ADAO case.

.. _subsection_algo_options:

List of possible options for the algorithms
+++++++++++++++++++++++++++++++++++++++++++

Each algorithm can be controlled using some generic or specific options,
given through the "*AlgorithmParameters*" optional command, as follows::

    AlgorithmParameters = {
        "Minimizer" : "CG",
        "MaximumNumberOfSteps" : 10,
    }

This section describes the available options for each algorithm. If an option
is specified for an algorithm that does not support it, the option is simply
left unused.

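As a further sketch, a hypothetical parameter set for the "3DVAR" algorithm,
combining several of the generic options described below, could read as
follows (the particular values are illustrative, not recommendations)::

    # Illustrative "AlgorithmParameters" dictionary for a "3DVAR" case.
    AlgorithmParameters = {
        "Minimizer" : "LBFGSB",            # constrained minimizer (default)
        "MaximumNumberOfSteps" : 100,      # instead of the 15000 default
        "CostDecrementTolerance" : 1.e-5,  # looser than the 10e-7 default
    }
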
-:"Blue":
-
- :CalculateAPosterioriCovariance:
- This boolean key allows to enable the calculation and the storage of the
- covariance matrix of a posteriori anlysis errors. Be careful, this is a
- numericaly costly step. The default is "False".
-
-:"LinearLeastSquares":
- no option
-
-:"3DVAR":
-
- :Minimizer:
- This key allows to choose the optimization minimizer. The default choice
- is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
- minimizer, see [Byrd95] and [Zhu97]), "TNC" (nonlinear constrained
- minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear
- unconstrained minimizer), "NCG" (Newton CG minimizer).
-
- :Bounds:
- This key allows to define upper and lower bounds for every control
- variable being optimized. Bounds can be given by a list of list of pairs
- of lower/upper bounds for each variable, with possibly ``None`` every time
- there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained minimizers.
-
- :MaximumNumberOfSteps:
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 15000, which very similar to no limit on
- iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some algorithms, the effective stopping step can be
- slightly different due to algorihtm internal control requirements.
-
- :CalculateAPosterioriCovariance:
- This boolean key allows to enable the calculation and the storage of the
- covariance matrix of a posteriori anlysis errors. Be careful, this is a
- numericaly costly step. The default is "False".
-
- :CostDecrementTolerance:
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the cost function decreases less than
- this tolerance at the last step. The default is 10e-7, and it is
- recommended to adapt it the needs on real problems.
-
- :ProjectedGradientTolerance:
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when all the components of the projected
- gradient are under this limit. It is only used for constrained algorithms.
- The default is -1, that is the internal default of each algorithm, and it
- is not recommended to change it.
-
- :GradientNormTolerance:
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the norm of the gradient is under this
- limit. It is only used for non-constrained algorithms. The default is
- 10e-5 and it is not recommended to change it.
-
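As a sketch of the "Bounds" format described above, for a control vector of
three variables where the second one is unconstrained (the values are
illustrative only)::

    # One [lower, upper] pair per control variable; None marks a missing
    # bound. Only the constrained minimizers take these into account.
    Bounds = [
        [ 0.,   10. ],   # first variable restricted to [0, 10]
        [ None, None ],  # second variable unconstrained
        [ -1.,  None ],  # third variable bounded below only
    ]
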
-:"NonLinearLeastSquares":
-
- :Minimizer:
- This key allows to choose the optimization minimizer. The default choice
- is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
- minimizer, see [Byrd95] and [Zhu97]), "TNC" (nonlinear constrained
- minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear
- unconstrained minimizer), "NCG" (Newton CG minimizer).
-
- :Bounds:
- This key allows to define upper and lower bounds for every control
- variable being optimized. Bounds can be given by a list of list of pairs
- of lower/upper bounds for each variable, with possibly ``None`` every time
- there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained minimizers.
-
- :MaximumNumberOfSteps:
- This key indicates the maximum number of iterations allowed for iterative
- optimization. The default is 15000, which very similar to no limit on
- iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some algorithms, the effective stopping step can be
- slightly different due to algorihtm internal control requirements.
-
- :CostDecrementTolerance:
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the cost function decreases less than
- this tolerance at the last step. The default is 10e-7, and it is
- recommended to adapt it the needs on real problems.
-
- :ProjectedGradientTolerance:
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when all the components of the projected
- gradient are under this limit. It is only used for constrained algorithms.
- The default is -1, that is the internal default of each algorithm, and it
- is not recommended to change it.
-
- :GradientNormTolerance:
- This key indicates a limit value, leading to stop successfully the
- iterative optimization process when the norm of the gradient is under this
- limit. It is only used for non-constrained algorithms. The default is
- 10e-5 and it is not recommended to change it.
-
-:"EnsembleBlue":
-
- :SetSeed:
- This key allow to give an integer in order to fix the seed of the random
- generator used to generate the ensemble. A convenient value is for example
- 1000. By default, the seed is left uninitialized, and so use the default
- initialization from the computer.
-
-:"KalmanFilter":
-
- :CalculateAPosterioriCovariance:
- This boolean key allows to enable the calculation and the storage of the
- covariance matrix of a posteriori anlysis errors. Be careful, this is a
- numericaly costly step. The default is "False".
-
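For example, a minimal parameter set enabling the (numerically costly)
calculation of the a posteriori covariance for such an algorithm could be::

    # Enable the optional a posteriori error covariance calculation,
    # which is disabled ("False") by default.
    AlgorithmParameters = {
        "CalculateAPosterioriCovariance" : True,
    }
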
Examples of using these commands are available in the section
:ref:`section_examples` and in the example files installed with the ADAO
module.