From: Jean-Philippe ARGAUD Date: Sun, 11 Nov 2012 23:48:46 +0000 (+0100) Subject: Improving documentation with references and examples X-Git-Tag: V6_6_0~7 X-Git-Url: http://git.salome-platform.org/gitweb/?a=commitdiff_plain;h=f9c09be236122dea245bc4fc4e93c8267981e046;p=modules%2Fadao.git Improving documentation with references and examples --- diff --git a/doc/advanced.rst b/doc/advanced.rst index 177b840..e16a843 100644 --- a/doc/advanced.rst +++ b/doc/advanced.rst @@ -16,12 +16,12 @@ Converting and executing an ADAO command file (JDC) using a shell script It is possible to convert and execute an ADAO command file (JDC, or ".comm" file) automatically by using a template script containing all the required steps. The user has to know where are the main SALOME scripts, and in particular -the ``runAppli`` one. The directory in which this script resides is symbolicaly +the ``runAppli`` one. The directory in which this script resides is symbolically named ```` and has to be replaced by the good one in the template. When an ADAO command file is build by the ADAO GUI editor and saved, if it is -named for example "AdaoStudy1.comm", then a compagnon file named "AdaoStudy1.py" +named for example "AdaoStudy1.comm", then a companion file named "AdaoStudy1.py" is automatically created in the same directory. It is named ```` in the template, and it is converted to YACS as an ````. After that, it can be executed in console mode using the standard @@ -97,34 +97,36 @@ to avoid weird difficulties:: print p.getErrorReport() This method allows for example to edit the YACS XML scheme in TUI, or to gather -results for futher use. +results for further use. -Getting informations on special variables during the ADAO calculation in YACS +Getting information on special variables during the ADAO calculation in YACS ----------------------------------------------------------------------------- Some special variables, used during calculations, can be monitored during the ADAO calculation in YACS. 
These variables can be printed, plotted, saved, etc. This can be done using
"*observers*", that are scripts associated with one variable. In order to use
this feature, one has to build scripts using as
-standard inputs (available in the namespace) the variable ``var``. This variable
-is to be used in the same way as for the final ADD object.
+standard inputs (available in the namespace) the variables ``var`` and ``info``.
+The variable ``var`` is to be used in the same way as for the final ADD object,
+that is, as a list object through its "*valueserie*" method.

As an example, here is one very simple script used to print the value of one
monitored variable::

-    print " ---> Value =",var.valueserie(-1)
+    print " --->",info," Value =",var.valueserie(-1)

Stored in a python file, this script can be associated to each variable
available in the "*SELECTION*" keyword of the "*Observers*" command:
"*Analysis*", "*CurrentState*", "*CostFunction*"... The current value of the
variable will be printed at each step of the optimization or assimilation
-algorithm.
+algorithm. The observers can embed plotting capabilities, storage, printing,
+etc.

Getting more information when running a calculation
---------------------------------------------------

When running, useful data and messages are logged. There are two ways to obtain
-theses informations.
+this information.

The first one, and the preferred way, is to use the built-in variable "*Debug*"
available in every ADAO case. It is available through the GUI of the module.
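As a side note to the observer mechanism changed in the hunk above, the following self-contained sketch mimics the documented interface in plain Python. The ``MockVariable`` class and the ``observer_body`` function are illustrative stand-ins, not ADAO API; in a real case, ADAO itself injects ``var`` and ``info`` into the script namespace:

```python
# Hedged sketch of an ADAO "observer" script body. ADAO injects the
# monitored variable as `var` (values reachable through its documented
# "valueserie" method) and a descriptive string as `info`; both are
# mocked here so the sketch runs on its own.

class MockVariable:
    """Illustrative stand-in for a monitored ADAO calculation variable."""
    def __init__(self, values):
        self._values = list(values)

    def valueserie(self, index=None):
        # Mimics the documented access: valueserie(-1) is the current value
        if index is None:
            return list(self._values)
        return self._values[index]

def observer_body(var, info):
    # This is what the observer script itself would do at each step:
    # print, plot or store the current value of the monitored variable.
    print(" --->", info, " Value =", var.valueserie(-1))
    return var.valueserie(-1)

var = MockVariable([1.0, 0.5, 0.25])   # e.g. a decreasing cost function
info = "CostFunctionJ"
current = observer_body(var, info)
```

Replacing the ``print`` call by a plotting or file-saving call gives the "plotting capabilities, storage, printing, etc." mentioned above.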
diff --git a/doc/examples.rst b/doc/examples.rst
index 5350651..0dd0baf 100644
--- a/doc/examples.rst
+++ b/doc/examples.rst
@@ -26,11 +26,11 @@ Building a simple estimation case with explicit data definition
---------------------------------------------------------------

This simple example is a demonstration one, and describes how to set a BLUE
-estimation framework in order to get *weighted least square estimated state* of
-a system from an observation of the state and from an *a priori* knowledge (or
-background) of this state. In other words, we look for the weighted middle
-between the observation and the background vectors. All the numerical values of
-this example are arbitrary.
+estimation framework in order to get the *fully weighted least square estimated
+state* of a system from an observation of the state and from an *a priori*
+knowledge (or background) of this state. In other words, we look for the
+weighted middle between the observation and the background vectors. All the
+numerical values of this example are arbitrary.

Experimental set up
+++++++++++++++++++
@@ -160,11 +160,15 @@ in the script node is::

    print

The augmented YACS scheme can be saved (overwriting the generated scheme if the
-simple "*Save*" command or button are used, or with a new name). Then,
-classically in YACS, it have to be prepared for run, and then executed. After
-completion, the printing on standard output is available in the "*YACS Container
-Log*", obtained through the right click menu of the "*proc*" window in the YACS
-scheme as shown below:
+simple "*Save*" command or button is used, or with a new name). Ideally, the
+implementation of such a post-processing procedure can be done in YACS to test
+it, and then entirely saved in one script that can be integrated in the ADAO
+case by using the keyword "*UserPostAnalysis*".
+
+Then, classically in YACS, it has to be prepared for running, and then executed.
+After completion, the printing on standard output is available in the "*YACS
+Container Log*", obtained through the right click menu of the "*proc*" window in
+the YACS scheme as shown below:

.. _yacs_containerlog:
.. image:: images/yacs_containerlog.png
@@ -236,7 +240,7 @@ The names of the Python variables above are mandatory, in order to define the
right variables, but the Python script can be bigger and define classes,
functions, etc. with other names. It shows different ways to define arrays and
matrices, using list, string (as in Numpy or Octave), Numpy array type or Numpy
-matrix type, and Numpy special functions. All of these syntaxes are valid.
+matrix type, and Numpy special functions. All of these syntaxes are valid.

After saving this script somewhere in your path (named here "*script.py*" for
the example), we use the GUI to build the ADAO case. The procedure to fill in
@@ -341,7 +345,7 @@ as previously the hypothesis of uncorrelated errors (that is, a diagonal
matrix, of size 3x3 because :math:`\mathbf{x}^b` is of lenght 3) and to have the
same variance of 0.1 for all variables. We get:

-    ``B = 0.1 * diagonal( lenght(Xb) )``
+    ``B = 0.1 * diagonal( length(Xb) )``

We suppose that there exist an observation operator :math:`\mathbf{H}`, which
can be non linear. In real calibration procedure or inverse problems, the
@@ -435,10 +439,7 @@ of the state. It is here defined in an external file named
conveniently named here ``"FunctionH"`` and ``"AdjointH"``. These functions
are user ones, representing as programming functions the :math:`\mathbf{H}`
operator and its adjoint. We suppose these functions are given by the user. A simple
-skeleton is given in the Python script file ``Physical_simulation_functions.py``
-of the ADAO examples standard directory. It can be used in the case only the
-non-linear direct physical simulation exists.
The script is partly reproduced
-here for convenience::
+skeleton is given here for convenience::

    def FunctionH( XX ):
        """ Direct non-linear simulation operator """
@@ -453,64 +454,9 @@ here for convenience::
        # --------------------------------------> EXAMPLE TO BE REMOVED
        #
        return numpy.array( HX )
-    #
-    def TangentHMatrix( X, increment = 0.01, centeredDF = False ):
-        """ Tangent operator (Jacobian) calculated by finite differences """
-        #
-        dX  = increment * X.A1
-        #
-        if centeredDF:
-            #
-            Jacobian = []
-            for i in range( len(dX) ):
-                X_plus_dXi     = numpy.array( X.A1 )
-                X_plus_dXi[i]  = X[i] + dX[i]
-                X_moins_dXi    = numpy.array( X.A1 )
-                X_moins_dXi[i] = X[i] - dX[i]
-                #
-                HX_plus_dXi  = FunctionH( X_plus_dXi )
-                HX_moins_dXi = FunctionH( X_moins_dXi )
-                #
-                HX_Diff = ( HX_plus_dXi - HX_moins_dXi ) / (2.*dX[i])
-                #
-                Jacobian.append( HX_Diff )
-            #
-        else:
-            #
-            HX_plus_dX = []
-            for i in range( len(dX) ):
-                X_plus_dXi    = numpy.array( X.A1 )
-                X_plus_dXi[i] = X[i] + dX[i]
-                #
-                HX_plus_dXi = FunctionH( X_plus_dXi )
-                #
-                HX_plus_dX.append( HX_plus_dXi )
-            #
-            HX = FunctionH( X )
-            #
-            Jacobian = []
-            for i in range( len(dX) ):
-                Jacobian.append( ( HX_plus_dX[i] - HX ) / dX[i] )
-        #
-        Jacobian = numpy.matrix( Jacobian )
-        #
-        return Jacobian
-    #
-    def TangentH( X ):
-        """ Tangent operator """
-        _X = numpy.asmatrix(X).flatten().T
-        HtX = self.TangentHMatrix( _X ) * _X
-        return HtX.A1
-    #
-    def AdjointH( (X, Y) ):
-        """ Ajoint operator """
-        #
-        Jacobian = TangentHMatrix( X, centeredDF = False )
-        #
-        Y = numpy.asmatrix(Y).flatten().T
-        HaY = numpy.dot(Jacobian, Y)
-        #
-        return HaY.A1
+
+We do not need the operators ``"TangentH"`` and ``"AdjointH"`` because they
+will be approximated using ADAO capabilities.

We insist on the fact that these non-linear operator ``"FunctionH"``, tangent
operator ``"TangentH"`` and adjoint operator ``"AdjointH"`` come from the
@@ -519,85 +465,6 @@ eventual adjoint, and have to be carefully set up by the data assimilation
user.
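As an aside, the kind of finite-difference approximation that can be built from the direct operator alone may be sketched as follows. ``FunctionH`` here is an illustrative toy operator (not the one of the ADAO example files), and ``tangent_matrix`` is a hypothetical helper mirroring the non-centered branch of the removed skeleton:

```python
import numpy

def FunctionH(X):
    """Illustrative non-linear direct operator (toy example)."""
    X = numpy.ravel(X)
    return numpy.array([X[0]**2, X[0]*X[1], X[1] + X[2]])

def tangent_matrix(func, X, increment=0.01):
    """Jacobian of `func` at X by one-sided finite differences, in the
    spirit of an automatic tangent approximation. Assumes all components
    of X are non-zero, since the increments are relative to X."""
    X = numpy.ravel(numpy.asarray(X, dtype=float))
    dX = increment * X
    HX = func(X)
    columns = []
    for i in range(len(dX)):
        X_plus = X.copy()
        X_plus[i] += dX[i]
        # i-th column of the Jacobian: variation of func w.r.t. X[i]
        columns.append((func(X_plus) - HX) / dX[i])
    # Stack the columns: rows index outputs, columns index inputs
    return numpy.array(columns).T

J = tangent_matrix(FunctionH, [1.0, 2.0, 3.0])
# Exact Jacobian rows at (1, 2, 3): [2*x0, 0, 0], [x1, x0, 0], [0, 1, 1]
```

The adjoint can then be approximated by applying the transpose of this matrix, which is why only the direct operator has to be supplied by the user.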
The errors in or missuses of the operators can not be detected or corrected by the data assimilation framework alone. -To operates in the module ADAO, it is required to define for ADAO these -different types of operators: the (potentially non-linear) standard observation -operator, named ``"Direct"``, its linearised approximation, named ``"Tangent"``, -and the adjoint operator named ``"Adjoint"``. The Python script have to retrieve -an input parameter, found under the key "value", in a variable named -``"specificParameters"`` of the SALOME input data and parameters -``"computation"`` dictionary variable. If the operator is already linear, the -``"Direct"`` and ``"Tangent"`` functions are the same, as it can be supposed -here. The following example Python script file named -``Script_ObservationOperator_H.py``, illustrates the case:: - - import Physical_simulation_functions - import numpy, logging - # - # ----------------------------------------------------------------------- - # SALOME input data and parameters: all information are the required input - # variable "computation", containing for example: - # {'inputValues': [[[[0.0, 0.0, 0.0]]]], - # 'inputVarList': ['adao_default'], - # 'outputVarList': ['adao_default'], - # 'specificParameters': [{'name': 'method', 'value': 'Direct'}]} - # ----------------------------------------------------------------------- - # - # Recovering the type of computation: "Direct", "Tangent" or "Adjoint" - # -------------------------------------------------------------------- - method = "" - for param in computation["specificParameters"]: - if param["name"] == "method": - method = param["value"] - logging.info("ComputationFunctionNode: Found method is \'%s\'"%method) - # - # Loading the H operator functions from external definitions - # ---------------------------------------------------------- - logging.info("ComputationFunctionNode: Loading operator functions") - FunctionH = Physical_simulation_functions.FunctionH - TangentH = 
Physical_simulation_functions.TangentH - AdjointH = Physical_simulation_functions.AdjointH - # - # Executing the possible computations - # ----------------------------------- - if method == "Direct": - logging.info("ComputationFunctionNode: Direct computation") - Xcurrent = computation["inputValues"][0][0][0] - data = FunctionH(numpy.matrix( Xcurrent ).T) - # - if method == "Tangent": - logging.info("ComputationFunctionNode: Tangent computation") - Xcurrent = computation["inputValues"][0][0][0] - data = TangentH(numpy.matrix( Xcurrent ).T) - # - if method == "Adjoint": - logging.info("ComputationFunctionNode: Adjoint computation") - Xcurrent = computation["inputValues"][0][0][0] - Ycurrent = computation["inputValues"][0][0][1] - data = AdjointH((numpy.matrix( Xcurrent ).T, numpy.matrix( Ycurrent ).T)) - # - # Formatting the output - # --------------------- - logging.info("ComputationFunctionNode: Formatting the output") - it = data.flat - outputValues = [[[[]]]] - for val in it: - outputValues[0][0][0].append(val) - # - # Creating the required ADAO variable - # ----------------------------------- - result = {} - result["outputValues"] = outputValues - result["specificOutputInfos"] = [] - result["returnCode"] = 0 - result["errorMessage"] = "" - -As output, this script has to define a nested list variable, as shown above with -the ``"outputValues"`` variable, where the nested levels describe the different -variables included in the state, then the different possible states at the same -time, then the different time steps. In this case, because there is only one -time step and one state, and all the variables are stored together, we only set -the most inner level of the lists. - In this twin experiments framework, the observation :math:`\mathbf{y}^o` and its error covariances matrix :math:`\mathbf{R}` can be generated. 
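As an aside, the essence of generating such twin-experiment data can be sketched in a few lines of plain Numpy. The true state, the linear operator and the noise level below are illustrative assumptions, not the values used by the ADAO example scripts:

```python
import numpy

numpy.random.seed(1000)  # make the synthetic noise reproducible

xt = numpy.array([2.0, 3.0, 4.0])   # assumed "true" state of the twin experiment
H  = numpy.identity(3)              # trivial linear observation operator (assumption)
sigma = 0.1                         # assumed observation noise standard deviation

# Synthetic observation: perfect observation of the truth plus Gaussian noise
yo = numpy.dot(H, xt) + sigma * numpy.random.standard_normal(3)

# Observation error covariance matrix consistent with the noise above
R = (sigma ** 2) * numpy.identity(3)
```

Because the "true" state is known, the quality of the analysis produced later by the assimilation can be measured against it, which is the point of a twin experiment.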
It is done in two Python script files, the first one being named ``Script_Observation_yo.py``:: @@ -646,7 +513,7 @@ following parameters can be defined in a Python script file named } Finally, it is common to post-process the results, retrieving them after the -data assimilation phase in order to analyse, print or show them. It requires to +data assimilation phase in order to analyze, print or show them. It requires to use a intermediary Python script file in order to extract these results. The following example Python script file named ``Script_UserPostAnalysis.py``, illustrates the fact:: @@ -678,7 +545,6 @@ listed here: #. ``Script_BackgroundError_B.py`` #. ``Script_Background_xb.py`` #. ``Script_ObservationError_R.py`` -#. ``Script_ObservationOperator_H.py`` #. ``Script_Observation_yo.py`` #. ``Script_UserPostAnalysis.py`` @@ -697,7 +563,9 @@ definition by Python script files. It is entirely similar to the method described in the `Building a simple estimation case with external data definition by scripts`_ previous section. For each variable to be defined, we select the "*Script*" option of the "*FROM*" keyword, which leads to a -"*SCRIPT_DATA/SCRIPT_FILE*" entry in the tree. +"*SCRIPT_DATA/SCRIPT_FILE*" entry in the tree. For the "*ObservationOperator*" +keyword, we choose the "*ScriptWithOneFunction*" form and keep the default +differential increment. The other steps to build the ADAO case are exactly the same as in the `Building a simple estimation case with explicit data definition`_ previous section. diff --git a/doc/glossary.rst b/doc/glossary.rst index ca09ccb..8a2253a 100644 --- a/doc/glossary.rst +++ b/doc/glossary.rst @@ -18,5 +18,59 @@ Glossary One iteration occurs when using iterative optimizers (e.g. 3DVAR), and it is entirely hidden in the main YACS OptimizerLoop Node named "compute_bloc". 
Nevertheless, the user can watch the iterative process
-      throught the *YACS Container Log* window, which is updated during the
+      through the *YACS Container Log* window, which is updated during the
       process, and using *Observers* attached to calculation variables.
+
+   APosterioriCovariance
+      Keyword to indicate the covariance matrix of *a posteriori* analysis
+      errors.
+
+   BMA (Background minus Analysis)
+      Difference between the simulation based on the background state and the
+      one based on the optimal state estimation, noted as :math:`\mathbf{x}^b -
+      \mathbf{x}^a`.
+
+   OMA (Observation minus Analysis)
+      Difference between the observations and the result of the simulation
+      based on the optimal state estimation, the analysis, filtered to be
+      compatible with the observation, noted as :math:`\mathbf{y}^o -
+      \mathbf{H}\mathbf{x}^a`.
+
+   OMB (Observation minus Background)
+      Difference between the observations and the result of the simulation
+      based on the background state, filtered to be compatible with the
+      observation, noted as :math:`\mathbf{y}^o - \mathbf{H}\mathbf{x}^b`.
+
+   SigmaBck2
+      Keyword to indicate the Desroziers-Ivanov parameter measuring the
+      background part consistency of the data assimilation optimal state
+      estimation.
+
+   SigmaObs2
+      Keyword to indicate the Desroziers-Ivanov parameter measuring the
+      observation part consistency of the data assimilation optimal state
+      estimation.
+
+   MahalanobisConsistency
+      Keyword to indicate the Mahalanobis parameter measuring the consistency
+      of the data assimilation optimal state estimation.
+
+   analysis
+      The optimal state estimation through a data assimilation or optimization
+      procedure.
+
+   innovation
+      Difference between the observations and the result of the simulation
+      based on the background state, filtered to be compatible with the
+      observation. It is similar to OMB in static cases.
+
+   CostFunctionJ
+      Keyword to indicate the minimization function, noted as :math:`J`.
+
+   CostFunctionJo
+      Keyword to indicate the observation part of the minimization function,
+      noted as :math:`J^o`.
+
+   CostFunctionJb
+      Keyword to indicate the background part of the minimization function,
+      noted as :math:`J^b`.
diff --git a/doc/reference.rst b/doc/reference.rst
index 7a47ee8..d316761 100644
--- a/doc/reference.rst
+++ b/doc/reference.rst
@@ -4,19 +4,25 @@ Reference description of the ADAO commands and keywords
================================================================================

-
-This section presents the reference description of the commands and keywords
-available through the GUI or through scripts.
+This section presents the reference description of the ADAO commands and
+keywords available through the GUI or through scripts.

Each command or keyword to be defined through the ADAO GUI has some properties.
-The first property is to be a required command, an optional command or a keyword
-describing a type of input. The second property is to be an "open" variable with
-a fixed type but with any value allowed by the type, or a "restricted" variable,
-limited to some specified values. The mathematical notations used afterwards are
-explained in the section :ref:`section_theory`.
+The first property is to be *required*, *optional*, or only factual, describing
+a type of input. The second property is to be an "open" variable with a fixed
+type but with any value allowed by the type, or a "restricted" variable, limited
+to some specified values. Since the EFICAS editor GUI has built-in validating
+capabilities, the properties of the commands or keywords given through this GUI
+are automatically correct.
+
+The mathematical notations used afterward are explained in the section
+:ref:`section_theory`.
+
+Examples of using these commands are available in the section
+:ref:`section_examples` and in example files installed with the ADAO module.

List of possible input types
-++++++++++++++++++++++++++++
+----------------------------

..
index:: single: Dict
.. index:: single: Function
.. index:: single: Matrix
.. index:: single: String
.. index:: single: Script
.. index:: single: Vector

-The different type-style commands are:
+Each ADAO variable has a pseudo-type to help fill it in and validate it. The
+different pseudo-types are:
+
+**Dict**
+    This indicates a variable that has to be filled by a dictionary, usually
+    given as a script.
+
+**Function**
+    This indicates a variable that has to be filled by a function, usually given
+    as a script or a component method.

-:Dict:
-    *Type of an input*. This indicates a variable that has to be filled by a
-    dictionary, usually given as a script.
+**Matrix**
+    This indicates a variable that has to be filled by a matrix, usually given
+    either as a string or as a script.

-:Function:
-    *Type of an input*. This indicates a variable that has to be filled by a
-    function, usually given as a script.
+**String**
+    This indicates a string giving a literal representation of a matrix, a
+    vector or a vector series, such as "1 2 ; 3 4" for a square 2x2 matrix.

-:Matrix:
-    *Type of an input*. This indicates a variable that has to be filled by a
-    matrix, usually given either as a string or as a script.
+**Script**
+    This indicates a script given as an external file. It can be described by a
+    full absolute path name or only by the file name without path.

-:String:
-    *Type of an input*. This indicates a string, such as a name or a literal
-    representation of a matrix or vector, such as "1 2 ; 3 4".
+**Vector**
+    This indicates a variable that has to be filled by a vector, usually given
+    either as a string or as a script.

-:Script:
-    *Type of an input*. This indicates a script given as an external file.
+**VectorSerie**
+    This indicates a variable that has to be filled by a list of vectors,
+    usually given either as a string or as a script.

-:Vector:
-    *Type of an input*.
This indicates a variable that has to be filled by a
-    vector, usually given either as a string or as a script.
-
-List of commands
-++++++++++++++++
+When a command or keyword can be filled by a script file name, the script has to
+contain a variable or a method that has the same name as the one to be filled.
+In other words, when importing the script in a YACS Python node, it must create
+a variable with the expected name in the current namespace.
+
+List of commands and keywords for an ADAO calculation case
+----------------------------------------------------------

.. index:: single: ASSIMILATION_STUDY
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: Debug
+.. index:: single: EvolutionError
+.. index:: single: EvolutionModel
.. index:: single: InputVariables
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Observers
.. index:: single: OutputVariables
.. index:: single: Study_name
.. index:: single: Study_repertory
.. index:: single: UserDataInit
.. index:: single: UserPostAnalysis

-The different commands are the following:
+The first set of commands is related to the description of a calculation case,
+that is, a *Data Assimilation* procedure or an *Optimization* procedure. The
+terms are ordered in alphabetical order, except the first, which describes the
+choice between calculation and checking. The different commands are the
+following:
+
+**ASSIMILATION_STUDY**
+    *Required command*. This is the general command describing the data
+    assimilation or optimization case. It hierarchically contains all the other
+    commands.
+
+**Algorithm**
+    *Required command*. This is a string to indicate the data assimilation or
+    optimization algorithm chosen. The choices are limited and available through
+    the GUI. There exists for example "3DVAR", "Blue"... See below the list of
+    algorithms and associated parameters in the following subsection `Options
+    for algorithms`_.
+
+**AlgorithmParameters**
+    *Optional command*.
This command allows to add some optional parameters to
+    control the data assimilation or optimization algorithm. It is defined as a
+    "*Dict*" type object, that is, given as a script. See below the list of
+    algorithms and associated parameters in the following subsection `Options
+    for algorithms`_.
+
+**Background**
+    *Required command*. This indicates the background or initial vector used,
+    previously noted as :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type
+    object, that is, given either as a string or as a script.

-:ASSIMILATION_STUDY:
-    *Required command*. This is the general command describing an ADAO case. It
-    hierarchicaly contains all the other commands.
+**BackgroundError**
+    *Required command*. This indicates the background error covariance matrix,
+    previously noted as :math:`\mathbf{B}`. It is defined as a "*Matrix*" type
+    object, that is, given either as a string or as a script.

-:Algorithm:
-    *Required command*. This is a string to indicates the data assimilation
-    algorithm chosen. The choices are limited and available through the GUI.
-    There exists for example: "3DVAR", "Blue"... See below the list of
-    algorithms and associated parameters.
+**Debug**
+    *Required command*. This defines the level of trace and intermediate debug
+    information. The choices are limited between 0 (for False) and 1 (for
+    True).

-:AlgorithmParameters:
-    *Optional command*. This command allows to add some optional parameters to
-    control the data assimilation algorithm calculation. It is defined as a
-    "*Dict*" type object. See below the list of algorithms and associated
-    parameters.
-
-:Background:
-    *Required command*. This indicates the backgroud vector used for data
-    assimilation, previously noted as :math:`\mathbf{x}^b`. It is defined as a
-    "*Vector*" type object, that is, given either as a string or as a script.
-
-:BackgroundError:
-    *Required command*.
This indicates the backgroud error covariance matrix, - previously noted as :math:`\mathbf{B}`.It is defined as a "*Matrix*" type +**EvolutionError** + *Optional command*. This indicates the evolution error covariance matrix, + usually noted as :math:`\mathbf{Q}`. It is defined as a "*Matrix*" type object, that is, given either as a string or as a script. -:Debug: - *Required command*. This let choose the level of trace and intermediary - debug informations. The choices are limited between 0 (for False) and 1 (for - True) and available through the GUI. +**EvolutionModel** + *Optional command*. This indicates the evolution model operator, usually + noted :math:`M`, which describes a step of evolution. It is defined as a + "*Function*" type object, that is, given as a script. Different functional + forms can be used, as described in the following subsection `Requirements + for functions describing an operator`_. -:InputVariables: +**InputVariables** *Optional command*. This command allows to indicates the name and size of physical variables that are bundled together in the control vector. This - information is dedicated to data processed inside of data assimilation - algorithm. + information is dedicated to data processed inside an algorithm. -:Observation: +**Observation** *Required command*. This indicates the observation vector used for data - assimilation, previously noted as :math:`\mathbf{y}^o`. It is defined as a - "*Vector*" type object, that is, given either as a string or as a script. + assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It + is defined as a "*Vector*" type object, that is, given either as a string or + as a script. -:ObservationError: +**ObservationError** *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`.It is defined as a "*Matrix*" type + previously noted as :math:`\mathbf{R}`. 
It is defined as a "*Matrix*" type object, that is, given either as a string or as a script. -:ObservationOperator: +**ObservationOperator** *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` - to results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. - -:Observers: + noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to + results :math:`\mathbf{y}` to be compared to observations + :math:`\mathbf{y}^o`. It is defined as a "*Function*" type object, that is, + given as a script. Different functional forms can be used, as described in + the following subsection `Requirements for functions describing an + operator`_. + +**Observers** *Optional command*. This command allows to set internal observers, that are functions linked with a particular variable, which will be executed each time this variable is modified. It is a convenient way to monitor interest - variables during the data assimilation process, by printing or plotting it, - etc. + variables during the data assimilation or optimization process, by printing + or plotting it, etc. -:OutputVariables: +**OutputVariables** *Optional command*. This command allows to indicates the name and size of physical variables that are bundled together in the output observation - vector. This information is dedicated to data processed inside of data - assimilation algorithm. + vector. This information is dedicated to data processed inside an algorithm. -:Study_name: +**Study_name** *Required command*. This is an open string to describe the study by a name or a sentence. -:Study_repertory: +**Study_repertory** *Optional command*. If available, this repertory is used to find all the script files that can be used to define some other commands by scripts. -:UserDataInit: - *Optional command*. This commands allows to initialise some parameters or +**UserDataInit** + *Optional command*. 
This command allows to initialize some parameters or
    data automatically before data assimilation algorithm processing.

-:UserPostAnalysis:
+**UserPostAnalysis**
    *Optional command*. This commands allows to process some parameters or data
    automatically after data assimilation algorithm processing. It is defined as
-    a script or a string, allowing to put simple code directly inside the ADAO
-    case.
+    a script or a string, allowing to put post-processing code directly inside
+    the ADAO case.
+
+List of commands and keywords for an ADAO checking case
+-------------------------------------------------------

.. index:: single: CHECKING_STUDY
.. index:: single: Algorithm
.. index:: single: AlgorithmParameters
.. index:: single: CheckingPoint
.. index:: single: Debug
.. index:: single: ObservationOperator
.. index:: single: Study_name
.. index:: single: Study_repertory
.. index:: single: UserDataInit

+The second set of commands is related to the description of a checking case,
+that is, a procedure to check required properties of information used elsewhere
+by a calculation case. The terms are ordered in alphabetical order, except the
+first, which describes the choice between calculation and checking. The
+different commands are the following:
+
+**CHECKING_STUDY**
+    *Required command*. This is the general command describing the checking
+    case. It hierarchically contains all the other commands.
+
+**Algorithm**
+    *Required command*. This is a string to indicate the data assimilation or
+    optimization algorithm chosen. The choices are limited and available through
+    the GUI. There exists for example "3DVAR", "Blue"... See below the list of
+    algorithms and associated parameters in the following subsection `Options
+    for algorithms`_.
+
+**AlgorithmParameters**
+    *Optional command*. This command allows to add some optional parameters to
+    control the data assimilation or optimization algorithm. It is defined as a
+    "*Dict*" type object, that is, given as a script.
See below the list of
+    algorithms and associated parameters in the following subsection `Options
+    for algorithms`_.
+
+**CheckingPoint**
+    *Required command*. This indicates the vector used as the checking point,
+    previously noted as :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type
+    object, that is, given either as a string or as a script.

-.. _subsection_algo_options:
+**Debug**
+    *Required command*. This defines the level of trace and intermediate debug
+    information. The choices are limited between 0 (for False) and 1 (for
+    True).

-List of possible options for the algorithms
-+++++++++++++++++++++++++++++++++++++++++++
+**ObservationOperator**
+    *Required command*. This indicates the observation operator, previously
+    noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
+    results :math:`\mathbf{y}` to be compared to observations
+    :math:`\mathbf{y}^o`. It is defined as a "*Function*" type object, that is,
+    given as a script. Different functional forms can be used, as described in
+    the following subsection `Requirements for functions describing an
+    operator`_.
+
+**Study_name**
+    *Required command*. This is an open string to describe the study by a name
+    or a sentence.
+
+**Study_repertory**
+    *Optional command*. If available, this repertory is used to find all the
+    script files that can be used to define some other commands by scripts.
+
+**UserDataInit**
+    *Optional command*. This command allows to initialize some parameters or
+    data automatically before data assimilation algorithm processing.
+
+Options for algorithms
+----------------------

+.. index:: single: 3DVAR
.. index:: single: Blue
+.. index:: single: EnsembleBlue
+.. index:: single: KalmanFilter
.. index:: single: LinearLeastSquares
-.. index:: single: 3DVAR
.. index:: single: NonLinearLeastSquares
-.. index:: single: EnsembleBlue
+.. index:: single: ParticleSwarmOptimization
.. index:: single: QuantileRegression
.. index:: single: AlgorithmParameters
-..
index:: single: Minimizer .. index:: single: Bounds -.. index:: single: MaximumNumberOfSteps -.. index:: single: CalculateAPosterioriCovariance .. index:: single: CostDecrementTolerance -.. index:: single: ProjectedGradientTolerance .. index:: single: GradientNormTolerance -.. index:: single: SetSeed +.. index:: single: GroupRecallRate +.. index:: single: MaximumNumberOfSteps +.. index:: single: Minimizer +.. index:: single: NumberOfInsects +.. index:: single: ProjectedGradientTolerance +.. index:: single: QualityCriterion .. index:: single: Quantile +.. index:: single: SetSeed +.. index:: single: StoreInternalVariables +.. index:: single: StoreSupplementaryCalculations +.. index:: single: SwarmVelocity -Each algorithm can be controled using some generic or specific options given -throught the "*AlgorithmParameters*" optional command, as follows:: +Each algorithm can be controlled using some generic or specific options given +through the "*AlgorithmParameters*" optional command, as follows for example:: AlgorithmParameters = { - "Minimizer" : "CG", - "MaximumNumberOfSteps" : 10, + "Minimizer" : "LBFGSB", + "MaximumNumberOfSteps" : 25, + "StoreSupplementaryCalculations" : ["APosterioriCovariance","OMA"], } -This section describes the available options by algorithm. If an option is -specified for an algorithm that doesn't support it, the option is simply left -unused. - -:"Blue": - - :CalculateAPosterioriCovariance: - This boolean key allows to enable the calculation and the storage of the - covariance matrix of a posteriori anlysis errors. Be careful, this is a - numericaly costly step. The default is "False". - -:"LinearLeastSquares": - no option - -:"3DVAR": - - :Minimizer: - This key allows to choose the optimization minimizer. 
The default choice - is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained - minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained - minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear - unconstrained minimizer), "NCG" (Newton CG minimizer). - - :Bounds: - This key allows to define upper and lower bounds for every control - variable being optimized. Bounds can be given by a list of list of pairs - of lower/upper bounds for each variable, with possibly ``None`` every time - there is no bound. The bounds can always be specified, but they are taken - into account only by the constrained minimizers. - - :MaximumNumberOfSteps: - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 15000, which very similar to no limit on - iterations. It is then recommended to adapt this parameter to the needs on - real problems. For some minimizers, the effective stopping step can be - slightly different due to algorihtm internal control requirements. - - :CalculateAPosterioriCovariance: - This boolean key allows to enable the calculation and the storage of the - covariance matrix of a posteriori anlysis errors. Be careful, this is a - numericaly costly step. The default is "False". - - :CostDecrementTolerance: - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the cost function decreases less than - this tolerance at the last step. The default is 10e-7, and it is - recommended to adapt it the needs on real problems. - - :ProjectedGradientTolerance: - This key indicates a limit value, leading to stop successfully the - iterative optimization process when all the components of the projected - gradient are under this limit. It is only used for constrained algorithms. - The default is -1, that is the internal default of each algorithm, and it - is not recommended to change it. 
- - :GradientNormTolerance: - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the norm of the gradient is under this - limit. It is only used for non-constrained algorithms. The default is - 10e-5 and it is not recommended to change it. - -:"NonLinearLeastSquares": - - :Minimizer: - This key allows to choose the optimization minimizer. The default choice - is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained - minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained - minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear - unconstrained minimizer), "NCG" (Newton CG minimizer). - - :Bounds: - This key allows to define upper and lower bounds for every control - variable being optimized. Bounds can be given by a list of list of pairs - of lower/upper bounds for each variable, with possibly ``None`` every time - there is no bound. The bounds can always be specified, but they are taken - into account only by the constrained minimizers. - - :MaximumNumberOfSteps: - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 15000, which very similar to no limit on - iterations. It is then recommended to adapt this parameter to the needs on - real problems. For some minimizers, the effective stopping step can be - slightly different due to algorihtm internal control requirements. - - :CostDecrementTolerance: - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the cost function decreases less than - this tolerance at the last step. The default is 10e-7, and it is - recommended to adapt it the needs on real problems. - - :ProjectedGradientTolerance: - This key indicates a limit value, leading to stop successfully the - iterative optimization process when all the components of the projected - gradient are under this limit. It is only used for constrained algorithms. 
- The default is -1, that is the internal default of each algorithm, and it - is not recommended to change it. - - :GradientNormTolerance: - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the norm of the gradient is under this - limit. It is only used for non-constrained algorithms. The default is - 10e-5 and it is not recommended to change it. - -:"EnsembleBlue": - - :SetSeed: - This key allow to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so use the default - initialization from the computer. - -:"QuantileRegression": - - :Quantile: - This key allows to define the real value of the desired quantile, between - 0 and 1. The default is 0.5, corresponding to the median. - - :Minimizer: - This key allows to choose the optimization minimizer. The default choice - and only available choice is "MMQR" (Majorize-Minimize for Quantile - Regression). - - :MaximumNumberOfSteps: - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 15000, which very similar to no limit on - iterations. It is then recommended to adapt this parameter to the needs on - real problems. - - :CostDecrementTolerance: - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the cost function or the surrogate - decreases less than this tolerance at the last step. The default is 10e-6, - and it is recommended to adapt it the needs on real problems. - -Examples of using these commands are available in the section -:ref:`section_examples` and in example files installed with ADAO module. +This section describes the available options algorithm by algorithm. If an +option is specified for an algorithm that doesn't support it, the option is +simply left unused. 
The meaning of the acronyms or particular names can be found +in the :ref:`genindex` or the :ref:`section_glossary`. + +**"Blue"** + + StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be + available at the end of the algorithm. It involves potentially costly + calculations. The default is a void list, none of these variables being + calculated and stored by default. The possible names are in the following + list: ["APosterioriCovariance", "BMA", "OMA", "OMB", "Innovation", + "SigmaBck2", "SigmaObs2", "MahalanobisConsistency"]. + +**"LinearLeastSquares"** + + StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be + available at the end of the algorithm. It involves potentially costly + calculations. The default is a void list, none of these variables being + calculated and stored by default. The possible names are in the following + list: ["OMA"]. + +**"3DVAR"** + + Minimizer + This key allows to choose the optimization minimizer. The default choice + is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained + minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained + minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear + unconstrained minimizer), "NCG" (Newton CG minimizer). + + Bounds + This key allows to define upper and lower bounds for every control + variable being optimized. Bounds can be given by a list of list of pairs + of lower/upper bounds for each variable, with possibly ``None`` every time + there is no bound. The bounds can always be specified, but they are taken + into account only by the constrained minimizers. + + MaximumNumberOfSteps + This key indicates the maximum number of iterations allowed for iterative + optimization. The default is 15000, which is very similar to no limit on + iterations. It is then recommended to adapt this parameter to the needs on + real problems. 
For some minimizers, the effective stopping step can be + slightly different due to algorithm internal control requirements. + + CostDecrementTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when the cost function decreases less than + this tolerance at the last step. The default is 10e-7, and it is + recommended to adapt it to the needs on real problems. + + ProjectedGradientTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when all the components of the projected + gradient are under this limit. It is only used for constrained algorithms. + The default is -1, that is the internal default of each algorithm, and it + is not recommended to change it. + + GradientNormTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when the norm of the gradient is under this + limit. It is only used for non-constrained algorithms. The default is + 10e-5 and it is not recommended to change it. + + StoreInternalVariables + This boolean key allows to store default internal variables, mainly the + current state during iterative optimization process. Be careful, this can be + a numerically costly choice in certain calculation cases. The default is + "False". + + StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be + available at the end of the algorithm. It involves potentially costly + calculations. The default is a void list, none of these variables being + calculated and stored by default. The possible names are in the following + list: ["APosterioriCovariance", "BMA", "OMA", "OMB", "Innovation", + "SigmaObs2", "MahalanobisConsistency"]. + +**"NonLinearLeastSquares"** + + Minimizer + This key allows to choose the optimization minimizer. 
The default choice + is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained + minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained + minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear + unconstrained minimizer), "NCG" (Newton CG minimizer). + + Bounds + This key allows to define upper and lower bounds for every control + variable being optimized. Bounds can be given by a list of list of pairs + of lower/upper bounds for each variable, with possibly ``None`` every time + there is no bound. The bounds can always be specified, but they are taken + into account only by the constrained minimizers. + + MaximumNumberOfSteps + This key indicates the maximum number of iterations allowed for iterative + optimization. The default is 15000, which is very similar to no limit on + iterations. It is then recommended to adapt this parameter to the needs on + real problems. For some minimizers, the effective stopping step can be + slightly different due to algorithm internal control requirements. + + CostDecrementTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when the cost function decreases less than + this tolerance at the last step. The default is 10e-7, and it is + recommended to adapt it to the needs on real problems. + + ProjectedGradientTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when all the components of the projected + gradient are under this limit. It is only used for constrained algorithms. + The default is -1, that is the internal default of each algorithm, and it + is not recommended to change it. + + GradientNormTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when the norm of the gradient is under this + limit. It is only used for non-constrained algorithms. The default is + 10e-5 and it is not recommended to change it. 
+ +   StoreInternalVariables +    This boolean key allows to store default internal variables, mainly the +    current state during iterative optimization process. Be careful, this can be +    a numerically costly choice in certain calculation cases. The default is +    "False". + +   StoreSupplementaryCalculations +    This list indicates the names of the supplementary variables that can be +    available at the end of the algorithm. It involves potentially costly +    calculations. The default is a void list, none of these variables being +    calculated and stored by default. The possible names are in the following +    list: ["BMA", "OMA", "OMB", "Innovation"]. + +**"EnsembleBlue"** + +   SetSeed +    This key allows to give an integer in order to fix the seed of the random +    generator used to generate the ensemble. A convenient value is for example +    1000. By default, the seed is left uninitialized, and so uses the default +    initialization from the computer. + +**"KalmanFilter"** + +   StoreSupplementaryCalculations +    This list indicates the names of the supplementary variables that can be +    available at the end of the algorithm. It involves potentially costly +    calculations. The default is a void list, none of these variables being +    calculated and stored by default. The possible names are in the following +    list: ["APosterioriCovariance", "Innovation"]. + +**"ParticleSwarmOptimization"** + +   MaximumNumberOfSteps +    This key indicates the maximum number of iterations allowed for iterative +    optimization. The default is 50, which is an arbitrary limit. It is then +    recommended to adapt this parameter to the needs on real problems. + +   NumberOfInsects +    This key indicates the number of insects or particles in the swarm. The +    default is 100, which is a usual default for this algorithm. + +   SwarmVelocity +    This key indicates the part of the insect velocity which is imposed by the +    swarm. It is a positive floating point value. The default value is 1. 
+ +   GroupRecallRate +    This key indicates the recall rate at the best swarm insect. It is a +    floating point value between 0 and 1. The default value is 0.5. + +   QualityCriterion +    This key indicates the quality criterion, minimized to find the optimal +    state estimate. The default is the usual data assimilation criterion named +    "DA", the augmented ponderated least squares. The possible criteria have to +    be in the following list, where the equivalent names are indicated by "=": +    ["AugmentedPonderatedLeastSquares"="APLS"="DA", +    "PonderatedLeastSquares"="PLS", "LeastSquares"="LS"="L2", +    "AbsoluteValue"="L1", "MaximumError"="ME"] + +   SetSeed +    This key allows to give an integer in order to fix the seed of the random +    generator used to generate the ensemble. A convenient value is for example +    1000. By default, the seed is left uninitialized, and so uses the default +    initialization from the computer. + +   StoreInternalVariables +    This boolean key allows to store default internal variables, mainly the +    current state during iterative optimization process. Be careful, this can be +    a numerically costly choice in certain calculation cases. The default is +    "False". + +   StoreSupplementaryCalculations +    This list indicates the names of the supplementary variables that can be +    available at the end of the algorithm. It involves potentially costly +    calculations. The default is a void list, none of these variables being +    calculated and stored by default. The possible names are in the following +    list: ["BMA", "OMA", "OMB", "Innovation"]. + +**"QuantileRegression"** + +   Quantile +    This key allows to define the real value of the desired quantile, between +    0 and 1. The default is 0.5, corresponding to the median. + +   Minimizer +    This key allows to choose the optimization minimizer. The default choice +    and only available choice is "MMQR" (Majorize-Minimize for Quantile +    Regression). 
+ +   MaximumNumberOfSteps +    This key indicates the maximum number of iterations allowed for iterative +    optimization. The default is 15000, which is very similar to no limit on +    iterations. It is then recommended to adapt this parameter to the needs on +    real problems. + +   CostDecrementTolerance +    This key indicates a limit value, leading to stop successfully the +    iterative optimization process when the cost function or the surrogate +    decreases less than this tolerance at the last step. The default is 10e-6, +    and it is recommended to adapt it to the needs on real problems. + +   StoreInternalVariables +    This boolean key allows to store default internal variables, mainly the +    current state during iterative optimization process. Be careful, this can be +    a numerically costly choice in certain calculation cases. The default is +    "False". + +   StoreSupplementaryCalculations +    This list indicates the names of the supplementary variables that can be +    available at the end of the algorithm. It involves potentially costly +    calculations. The default is a void list, none of these variables being +    calculated and stored by default. The possible names are in the following +    list: ["BMA", "OMA", "OMB", "Innovation"]. + +Requirements for functions describing an operator +------------------------------------------------- + +The operators for observation and evolution are required to implement the data +assimilation or optimization procedures. They include the physical simulation +by numerical calculations, but also the filtering and restriction to compare the +simulation to observation. + +Schematically, an operator has to give an output solution given the input +parameters. Part of the input parameters can be modified during the optimization +procedure. So the mathematical representation of such a process is a function. +It was briefly described in the section :ref:`section_theory` and is generalized +here by the relation: + +.. 
math:: \mathbf{y} = H( \mathbf{x} ) + +between the pseudo-observations :math:`\mathbf{y}` and the parameters +:math:`\mathbf{x}` using the observation operator :math:`H`. The same functional +representation can be used for the linear tangent model :math:`\mathbf{H}` of +:math:`H` and its adjoint :math:`\mathbf{H}^*`, also required by some data +assimilation or optimization algorithms. + +Then, **to describe completely an operator, the user has only to provide a +function that fully and only realizes the functional operation**. + +This function is usually given as a script that can be executed in a YACS node. +This script can indifferently launch external codes or use internal SALOME +calls and methods. If the algorithm requires the 3 aspects of the operator +(direct form, tangent form and adjoint form), the user has to give the 3 +functions or to approximate them. + +There are 3 practical methods for the user to provide the operator functional +representation. + +First functional form: using "*ScriptWithOneFunction*" +++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +The first one consists in providing only one potentially non-linear function, and +to approximate the tangent and the adjoint operators. This is done by using the +keyword "*ScriptWithOneFunction*" for the description of the chosen operator in +the ADAO GUI. The user has to provide the function in a script, with a +mandatory name "*DirectOperator*". For example, the script can follow the +template:: + + def DirectOperator( X ): + """ Direct non-linear simulation operator """ + ... + ... + ... + return Y + +In this case, the user can also provide a value for the differential increment, +using, through the GUI, the keyword "*DifferentialIncrement*", which has a default +value of 1%. This coefficient will be used in the finite difference +approximation to build the tangent and adjoint operators. 
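To make this finite difference idea concrete, here is a minimal, self-contained sketch in plain Python with NumPy (outside SALOME). The function names, the example operator and the scaling of the increment are illustrative assumptions, not ADAO's internal implementation:

```python
import numpy

def DirectOperator(X):
    # Illustrative stand-in for a real simulation operator: y = (x0**2, x0*x1)
    X = numpy.ravel(numpy.array(X, dtype=float))
    return numpy.array([X[0]**2, X[0] * X[1]])

def ApproximatedTangentOperator(X, dX, increment=0.01):
    # Finite difference approximation of the tangent operator around X,
    # applied to dX, using a relative differential increment (default 1%).
    # The step-size scaling below is an assumption made for illustration.
    X = numpy.ravel(numpy.array(X, dtype=float))
    dX = numpy.ravel(numpy.array(dX, dtype=float))
    norm_dX = numpy.linalg.norm(dX)
    if norm_dX == 0.0:
        return numpy.zeros_like(DirectOperator(X))
    h = increment * max(numpy.linalg.norm(X), 1.0) / norm_dX
    return (DirectOperator(X + h * dX) - DirectOperator(X)) / h
```

With such an approximation, only the direct operator needs to be coded, which is the convenience that the "*DifferentialIncrement*" keyword trades against accuracy.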
+ +This first operator definition allows one to easily test the functional form before +its use in an ADAO case, reducing the complexity of implementation. + +Second functional form: using "*ScriptWithFunctions*" ++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +The second one consists in providing directly the three associated operators +:math:`H`, :math:`\mathbf{H}` and :math:`\mathbf{H}^*`. This is done by using the +keyword "*ScriptWithFunctions*" for the description of the chosen operator in +the ADAO GUI. The user has to provide three functions in one script, with three +mandatory names "*DirectOperator*", "*TangentOperator*" and "*AdjointOperator*". +For example, the script can follow the template:: + + def DirectOperator( X ): + """ Direct non-linear simulation operator """ + ... + ... + ... + return something like Y + + def TangentOperator( (X, dX) ): + """ Tangent linear operator, around X, applied to dX """ + ... + ... + ... + return something like Y + + def AdjointOperator( (X, Y) ): + """ Adjoint operator, around X, applied to Y """ + ... + ... + ... + return something like X + +Again, this second operator definition allows one to easily test the functional +forms before their use in an ADAO case, greatly reducing the complexity of +implementation. + +Third functional form: using "*ScriptWithSwitch*" ++++++++++++++++++++++++++++++++++++++++++++++++++ + +This third form gives more possibilities to control the execution of the three +functions representing the operator, allowing advanced usage and control over +each execution of the simulation code. This is done by using the keyword +"*ScriptWithSwitch*" for the description of the chosen operator in the ADAO GUI. +The user has to provide a switch in one script to control the execution of the +direct, tangent and adjoint forms of its simulation code. 
The user can then, for +example, use other approximations for the tangent and adjoint codes, or +introduce more complexity in the argument treatment of the functions. But it +will be far more complicated to implement and debug. + +**It is recommended not to use this third functional form without a solid +numerical or physical reason.** + +If, however, you want to use this third form, we recommend using the following +template for the switch. It requires an external script or code named +"*Physical_simulation_functions.py*", containing three functions named +"*DirectOperator*", "*TangentOperator*" and "*AdjointOperator*" as previously. +Here is the switch template:: + + import Physical_simulation_functions + import numpy, logging + # + method = "" + for param in computation["specificParameters"]: + if param["name"] == "method": + method = param["value"] + if method not in ["Direct", "Tangent", "Adjoint"]: + raise ValueError("No valid computation method is given") + logging.info("Found method is \'%s\'"%method) + # + logging.info("Loading operator functions") + FunctionH = Physical_simulation_functions.DirectOperator + TangentH = Physical_simulation_functions.TangentOperator + AdjointH = Physical_simulation_functions.AdjointOperator + # + logging.info("Executing the possible computations") + data = [] + if method == "Direct": + logging.info("Direct computation") + Xcurrent = computation["inputValues"][0][0][0] + data = FunctionH(numpy.matrix( Xcurrent ).T) + if method == "Tangent": + logging.info("Tangent computation") + Xcurrent = computation["inputValues"][0][0][0] + dXcurrent = computation["inputValues"][0][0][1] + data = TangentH((numpy.matrix(Xcurrent).T, numpy.matrix(dXcurrent).T)) + if method == "Adjoint": + logging.info("Adjoint computation") + Xcurrent = computation["inputValues"][0][0][0] + Ycurrent = computation["inputValues"][0][0][1] + data = AdjointH((numpy.matrix(Xcurrent).T, numpy.matrix(Ycurrent).T)) + # + logging.info("Formatting the output") + it = 
numpy.ravel(data) + outputValues = [[[[]]]] + for val in it: + outputValues[0][0][0].append(val) + # + result = {} + result["outputValues"] = outputValues + result["specificOutputInfos"] = [] + result["returnCode"] = 0 + result["errorMessage"] = "" + +Various modifications can be made starting from this template. diff --git a/doc/using.rst b/doc/using.rst index 918427c..a10c7e3 100644 --- a/doc/using.rst +++ b/doc/using.rst @@ -33,7 +33,7 @@ input data, and then generates a complete executable block diagram used in YACS. Many variations exist for the definition of input data, but the logical sequence remains unchanged. -First of all, the user is considered to know its personnal input data needed to +First of all, the user is considered to know their personal input data needed to set up the data assimilation study. These data can already be available in SALOME or not. @@ -42,13 +42,13 @@ SALOME or not. #. **Activate the ADAO module and use the editor GUI,** #. **Build and/or modify the ADAO case and save it,** #. **Export the ADAO case as a YACS scheme,** -#. **Modify and supplement the YACS scheme and save it,** +#. **Supplement and modify the YACS scheme and save it,** #. **Execute the YACS case and obtain the results.** Each step will be detailed in the next section. -STEP: Activate the ADAO module and use the editor GUI ----------------------------------------------------- +STEP 1: Activate the ADAO module and use the editor GUI +------------------------------------------------------- As always for a module, it has to be activated by choosing the appropriate module button (or menu) in the toolbar of SALOME. If there is no SALOME study @@ -73,11 +73,11 @@ create a new ADAO case, and you will see: .. 
centered:: **The EFICAS editor for cases definition in module ADAO** -STEP: Build and modify the ADAO case and save it ------------------------------------------------- +STEP 2: Build and modify the ADAO case and save it +-------------------------------------------------- -To build a case using EFICAS, you have to go through a series of substeps, by -selecting, at each substep, a keyword and then filling in its value. +To build a case using EFICAS, you have to go through a series of sub-steps, by +selecting, at each sub-step, a keyword and then filling in its value. The structured editor indicates hierarchical types, values or keywords allowed. Incomplete or incorrect keywords are identified by a visual error red flag. @@ -87,7 +87,7 @@ are contextually provided in the editor reserved places. A new case is set up with the minimal list of commands. All the mandatory commands or keywords are already present, none of them can be suppressed. -Optionnal keywords can be added by choosing them in a list of suggestions of +Optional keywords can be added by choosing them in a list of suggestions of allowed ones for the main command, for example the "*ASSIMILATION_STUDY*" command. As an example, one can add an "*AlgorithmParameters*" keyword, as described in the last part of the section :ref:`section_examples`. @@ -111,8 +111,8 @@ used for JDC EFICAS files. This will generate a pair of files describing the ADAO case, with the same base name, the first one being completed by a "*.comm*" extension and the second one by a "*.py*" extension [#]_. -STEP: Export the ADAO case as a YACS scheme -------------------------------------------- +STEP 3: Export the ADAO case as a YACS scheme +--------------------------------------------- When the ADAO case is completed, you have to export it as a YACS scheme [#]_ in order to execute the data assimilation calculation. 
This can be easily done by @@ -131,10 +131,10 @@ This will lead to automatically generate a YACS scheme, and open the YACS module on this scheme. The YACS file, associated with the scheme, will be stored in the same directory and with the same base name as the ADAO saved case, only changing its extension to "*.xml*". Be careful, *if the XML file name already exists, it -will be overwriten without prompting for replacing the file*. +will be overwritten without prompting for replacing the file*. -STEP: Supplement and modify the YACS scheme and save it ------------------------------------------------------- +STEP 4: Supplement and modify the YACS scheme and save it +--------------------------------------------------------- .. index:: single: Analysis @@ -146,10 +146,10 @@ calculation schemes. It is recommended to save the modified scheme with a new name, in order to preserve the XML file in the case you re-export the ADAO case to YACS. -The main supplement needed in the YACS scheme is a postprocessing step. The +The main supplement needed in the YACS scheme is a post-processing step. The evaluation of the results has to be done in the physical context of the -simulation used by the data assimilation procedure. The postprocessing can be -provided throught the "*UserPostAnalysis*" ADAO keyword as a script, or can be +simulation used by the data assimilation procedure. The post-processing can be +provided through the "*UserPostAnalysis*" ADAO keyword as a script, or can be built as YACS nodes using all SALOME possibilities. The YACS scheme has an "*algoResults*" output port of the computation bloc, @@ -157,14 +157,14 @@ which gives access to a "*pyobj*" named hereafter "*ADD*", containing all the processing results. These results can be obtained by retrieving the named variables stored along the calculation. 
The main one is the "*Analysis*" variable, that can be obtained by the python command (for example in an in-line script node or -a script provided throught the "*UserPostAnalysis*" keyword):: +a script provided through the "*UserPostAnalysis*" keyword):: ADD = algoResults.getAssimilationStudy() Analysis = ADD.get("Analysis").valueserie() "*Analysis*" is a complex object, similar to a list of values calculated at each step of data assimilation calculation. In order to get and print the optimal -data assimilation state evaluation, in script provided throught the +data assimilation state evaluation, in a script provided through the "*UserPostAnalysis*" keyword, one can use:: Xa = ADD.get("Analysis").valueserie(-1) @@ -176,13 +176,13 @@ assimilation or optimization evaluation problem, noted as :math:`\mathbf{x}^a` in the section :ref:`section_theory`. Such a command can be used to print results, or to convert these ones to -structures that can be used in the native or external SALOME postprocessing. A +structures that can be used in the native or external SALOME post-processing. A simple example is given in the section :ref:`section_examples`. -STEP: Execute the YACS case and obtain the results -------------------------------------------------- +STEP 5: Execute the YACS case and obtain the results +---------------------------------------------------- -The YACS scheme is now complete and can be executed. Parametrisation and +The YACS scheme is now complete and can be executed. Parametrization and execution of a YACS case is fully compliant with the standard way to deal with a YACS scheme, and is described in the *YACS module User's Guide*.
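As a reminder of the post-processing pattern described in the steps above, the following self-contained sketch mimics, outside SALOME, the retrieval of the optimal state as the last item of the "*Analysis*" value series. The ``FakeStudy`` class is a hypothetical stand-in for the "*ADD*" object, written only to illustrate the ``get(...).valueserie(...)`` access pattern, not the real ADAO API:

```python
class FakeStudy:
    # Hypothetical stand-in for the "ADD" object obtained through
    # algoResults.getAssimilationStudy() in a real YACS scheme.
    def __init__(self, analysis_series):
        self._series = {"Analysis": analysis_series}

    def get(self, name):
        stored = self._series[name]

        class _Variable:
            # valueserie() returns the whole series; valueserie(step)
            # returns the state stored at that step (e.g. -1 for the last).
            def valueserie(self, step=None):
                return stored if step is None else stored[step]

        return _Variable()

# The analysis state is stored at each step; the optimal state is the last one.
ADD = FakeStudy([[0.0, 0.0], [0.5, 0.9], [1.0, 2.0]])
Xa = ADD.get("Analysis").valueserie(-1)
print("Optimal analysis state:", Xa)
```

In a real "*UserPostAnalysis*" script only the last two statements change: "*ADD*" comes from the YACS "*algoResults*" port instead of this mock.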