From 362c4eb075effbd543110678dba528d195729cd2 Mon Sep 17 00:00:00 2001
From: Jean-Philippe ARGAUD
Date: Thu, 10 Nov 2011 09:01:46 +0100
Subject: [PATCH] Avancement de la documentation

---
 doc/examples.rst | 120 ++++++++++++++++++++++++++++++++++++++---------
 doc/index.rst    |  45 +++++++++++++-----
 doc/intro.rst    |   6 +++
 doc/theory.rst   |  79 +++++++++++++------------------
 doc/using.rst    |  60 +++++++++++-------------
 5 files changed, 199 insertions(+), 111 deletions(-)

diff --git a/doc/examples.rst b/doc/examples.rst
index a803bd5..b0e8bac 100644
--- a/doc/examples.rst
+++ b/doc/examples.rst
@@ -271,23 +271,23 @@
 definition of the ADAO case, which is a keyword of the ASSIMILATION_STUDY.
 
 This keyword requires a Python dictionary, containing some key/value pairs. For
 example, with a 3DVAR algorithm, the possible keys are "*Minimizer*",
-"*MaximumNumberOfSteps*", "ProjectedGradientTolerance", "GradientNormTolerance"
-and "*Bounds*":
+"*MaximumNumberOfSteps*", "*ProjectedGradientTolerance*",
+"*GradientNormTolerance*" and "*Bounds*":
 
-#. The "*Minimizer*" key allows to choose the optimisation minimizer. The
+#. The "*Minimizer*" key allows choosing the optimization minimizer. The
    default choice is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear
    constrained minimizer, see [Byrd95] and [Zhu97]), "TNC" (nonlinear
    constrained minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS"
    (nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer).
 #. The "*MaximumNumberOfSteps*" key indicates the maximum number of iterations
-   allowed for iterative optimisation. The default is 15000, which very
+   allowed for iterative optimization. The default is 15000, which is very
    similar to no limit on iterations. It is then recommended to adapt this
    parameter to the needs of real problems.
-#. The "ProjectedGradientTolerance" key indicates a limit value, leading to
-   stop successfully the iterative optimisation process when all the components
-   of the projected gradient are under this limit.
-#. The "GradientNormTolerance" key indicates a limit value, leading to stop
-   successfully the iterative optimisation process when the norm of the
+#. The "*ProjectedGradientTolerance*" key indicates a limit value, leading the
+   iterative optimization process to stop successfully when all the
+   components of the projected gradient are under this limit.
+#. The "*GradientNormTolerance*" key indicates a limit value, leading the
+   iterative optimization process to stop successfully when the norm of the
    gradient is under this limit.
 #. The "*Bounds*" key allows defining upper and lower bounds for every control
    variable being optimized. Bounds can be given by a list of list of
@@ -297,7 +297,7 @@
 and "*Bounds*":
 
 If no bounds at all are required on the control variables, then one can
 choose the "BFGS" or "CG" minimisation algorithm for the 3DVAR algorithm. For
-constrained optimisation, the minimizer "LBFGSB" is often more robust, but the
+constrained optimization, the minimizer "LBFGSB" is often more robust, but the
 "TNC" is often more efficient.
 
 This dictionary has to be defined, for example, in an external Python script
@@ -465,18 +465,96 @@
 of the state. It is here defined in an external file named
 ``"Physical_simulation_functions.py"``, which should contain functions
 conveniently named here ``"FunctionH"`` and ``"AdjointH"``. These functions are
 user ones, representing the :math:`\mathbf{H}` operator as programming functions
-and its adjoint. We suppose these functions are given by the user (a simple
+and its adjoint. We suppose these functions are given by the user. A simple
 skeleton is given in the Python script file ``Physical_simulation_functions.py``
-of the ADAO examples standard directory, not reproduced here).
-
-To operates in ADAO, it is required to define different types of operators: the
-(potentially non-linear) standard observation operator, named ``"Direct"``, its
-linearised approximation, named ``"Tangent"``, and the adjoint operator named
-``"Adjoint"``. The Python script have to retrieve an input parameter, found
-under the key "value", in a variable named ``"specificParameters"`` of the
-SALOME input data and parameters ``"computation"`` dictionary variable. If the
-operator is already linear, the ``"Direct"`` and ``"Tangent"`` functions are the
-same, as it is supposed here. The following example Python script file named
+of the ADAO examples standard directory. It can be used in the case where only
+the non-linear direct physical simulation exists. The script is partly
+reproduced here for convenience::
+
+    #-*-coding:iso-8859-1-*-
+    #
+    def FunctionH( XX ):
+        """ Direct non-linear simulation operator """
+        #
+        # --------------------------------------> EXAMPLE TO BE REMOVED
+        if type(XX) is type(numpy.matrix([])):   # EXAMPLE TO BE REMOVED
+            HX = XX.A1.tolist()                  # EXAMPLE TO BE REMOVED
+        elif type(XX) is type(numpy.array([])):  # EXAMPLE TO BE REMOVED
+            HX = numpy.matrix(XX).A1.tolist()    # EXAMPLE TO BE REMOVED
+        else:                                    # EXAMPLE TO BE REMOVED
+            HX = XX                              # EXAMPLE TO BE REMOVED
+        # --------------------------------------> EXAMPLE TO BE REMOVED
+        #
+        return numpy.array( HX )
+    #
+    def TangentH( X, increment = 0.01, centeredDF = False ):
+        """ Tangent operator (Jacobian) calculated by finite differences """
+        #
+        dX = increment * X.A1
+        #
+        if centeredDF:
+            #
+            Jacobian = []
+            for i in range( len(dX) ):
+                X_plus_dXi     = X.A1
+                X_plus_dXi[i]  = X[i] + dX[i]
+                X_moins_dXi    = X.A1
+                X_moins_dXi[i] = X[i] - dX[i]
+                #
+                HX_plus_dXi  = FunctionH( X_plus_dXi )
+                HX_moins_dXi = FunctionH( X_moins_dXi )
+                #
+                HX_Diff = ( HX_plus_dXi - HX_moins_dXi ) / (2.*dX[i])
+                #
+                Jacobian.append( HX_Diff )
+            #
+        else:
+            #
+            HX_plus_dX = []
+            for i in range( len(dX) ):
+                X_plus_dXi    = X.A1
+                X_plus_dXi[i] = X[i] + dX[i]
+                #
+                HX_plus_dXi = FunctionH( X_plus_dXi )
+                #
+                HX_plus_dX.append( HX_plus_dXi )
+            #
+            HX = FunctionH( X )
+            #
+            Jacobian = []
+            for i in range( len(dX) ):
+                Jacobian.append( ( HX_plus_dX[i] - HX ) / dX[i] )
+        #
+        Jacobian = numpy.matrix( Jacobian )
+        #
+        return Jacobian
+    #
+    def AdjointH( (X, Y) ):
+        """ Adjoint operator """
+        #
+        Jacobian = TangentH( X, centeredDF = False )
+        #
+        Y   = numpy.asmatrix(Y).flatten().T
+        HtY = numpy.dot(Jacobian, Y)
+        #
+        return HtY.A1
+
+We insist on the fact that the non-linear operator ``"FunctionH"``, the tangent
+operator ``"TangentH"`` and the adjoint operator ``"AdjointH"`` come from the
+physical knowledge, include the reference physical simulation code and its
+possible adjoint, and have to be carefully set up by the data assimilation
+user. The errors in or misuses of the operators cannot be detected or
+corrected by the data assimilation framework alone.
+
+To operate in the ADAO module, it is required to define for ADAO these
+different types of operators: the (potentially non-linear) standard observation
+operator, named ``"Direct"``, its linearised approximation, named ``"Tangent"``,
+and the adjoint operator named ``"Adjoint"``. The Python script has to retrieve
+an input parameter, found under the key "value", in a variable named
+``"specificParameters"`` of the SALOME input data and parameters
+``"computation"`` dictionary variable. If the operator is already linear, the
+``"Direct"`` and ``"Tangent"`` functions are the same, as it is supposed here.
+The following example Python script file named
 ``Script_ObservationOperator_H.py`` illustrates the case::
 
     #-*-coding:iso-8859-1-*-
diff --git a/doc/index.rst b/doc/index.rst
index 68bfc25..c438e78 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -17,17 +17,18 @@ in the section :ref:`section_theory`.
 
 The documentation of this module is divided into 5 parts, the first one being
 an introduction. The second part briefly introduces data assimilation and
 concepts.
-The third part describes how to use the module ADAO. The fourth part focuses on
-advanced usages of the module, how to get more information, or how to use it
-without the graphical user interface (GUI). The last part gives examples on ADAO
-usage. Users interested in quick use of the module can jump to the section
-:ref:`section_examples`, but a valuable use of the module requires to read and
-come back regularly the section :ref:`section_using`.
-
-In all this documentation, we use standard notations of data assimilation.
-Moreover, vectors are written horizontally or vertically without difference.
-Matrices are written either normally, or with a condensed notation, consisting
-in the use of a "``;``" to separate the rows in a continuous line.
+The third part describes how to use the module ADAO. The fourth part gives
+examples of ADAO usage. Users interested in quick use of the module can jump
+to the section :ref:`section_examples`, but a valuable use of the module
+requires to read and come back regularly to the section :ref:`section_using`.
+The last part focuses on advanced usages of the module, how to get more
+information, or how to use it without the graphical user interface (GUI).
+
+In all this documentation, we use standard notations of data assimilation, as
+described in [Ide97]. Moreover, vectors are written horizontally or vertically
+without distinction. Matrices are written either normally, or with a
+condensed notation, consisting in the use of a space to separate values and a
+"``;``" to separate the rows, in a continuous line.
 
 .. toctree::
    :maxdepth: 2
 
    intro
    theory
    using
-   advanced
    examples
+   advanced
 
 Indices and tables
 ================================================================================
 
 * :ref:`genindex`
 * :ref:`search`
+
+.. [Argaud09] Argaud J.-P., Bouriquet B., Hunt J., *Data Assimilation from Operational and Industrial Applications to Complex Systems*, Mathematics Today, pp.150-152, October 2009
+
+.. [Bouttier99] Bouttier F., Courtier P., *Data assimilation concepts and methods*, Meteorological Training Course Lecture Series, ECMWF, 1999, http://www.ecmwf.int/newsevents/training/rcourse_notes/pdf_files/Assim_concepts.pdf
+
+.. [Bocquet04] Bocquet M., *Introduction aux principes et méthodes de l'assimilation de données en géophysique*, Lecture Notes, 2004-2008, http://cerea.enpc.fr/HomePages/bocquet/assim.pdf
+
+.. [Byrd95] Byrd R. H., Lu P., Nocedal J., *A Limited Memory Algorithm for Bound Constrained Optimization*, SIAM Journal on Scientific and Statistical Computing, 16(5), pp.1190-1208, 1995
+
+.. [Ide97] Ide K., Courtier P., Ghil M., Lorenc A. C., *Unified notation for data assimilation: operational, sequential and variational*, Journal of the Meteorological Society of Japan, 75(1B), pp.181-189, 1997
+
+.. [Kalnay03] Kalnay E., *Atmospheric Modeling, Data Assimilation and Predictability*, Cambridge University Press, 2003
+
+.. [Tarantola87] Tarantola A., *Inverse Problem Theory: Methods for Data Fitting and Model Parameter Estimation*, Elsevier, 1987
+
+.. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997
+
+.. [WikipediaDA] Wikipedia/Data_assimilation: http://en.wikipedia.org/wiki/Data_assimilation
+
+.. [Zhu97] Zhu C., Byrd R. H., Nocedal J., *L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization*, ACM Transactions on Mathematical Software, Vol 23(4), pp.550-560, 1997
diff --git a/doc/intro.rst b/doc/intro.rst
index aa6a8ad..bddb9a3 100644
--- a/doc/intro.rst
+++ b/doc/intro.rst
@@ -6,3 +6,9 @@
 The aim of the ADAO module is to help using *data assimilation* methodology in
 conjunction with other calculation modules in SALOME. The module provides an
 interface to some standard algorithms of data assimilation or optimization,
 and allows integration of them in a SALOME study.
+
+Its main objective is to *facilitate the use of various standard data
+assimilation methods*, while remaining easy to use and providing guidance for
+the implementation. The module covers a wide variety of practical applications
+in a robust way, allowing quick experimental setup to be performed. Its
+methodological scalability also makes it possible to extend the application
+domain.
diff --git a/doc/theory.rst b/doc/theory.rst
index 2a45a61..fe4f324 100644
--- a/doc/theory.rst
+++ b/doc/theory.rst
@@ -30,7 +30,7 @@
 Fields reconstruction consists in finding, from a restricted set of real
 measures, the physical field which is the most *consistent* with these measures.
 
 This consistency is to be understood in terms of interpolation, that is to say that
-the field, we want to reconstruct using data assimilation on measures, has to
+the field we want to reconstruct, using data assimilation on measures, has to
 fit at best the measures, while remaining constrained by the overall
 calculation. The calculation is thus an *a priori* estimation of the field that
 we seek to identify.
@@ -48,7 +48,7 @@
 variables are constrained by evolution equations for the state of the
 atmosphere, which indicates for example that the pressure at a point cannot
 take any value independently of the value at this same point in previous time.
 We must therefore make the reconstruction of a field at any point in space, in
-order "consistent" with the evolution equations and measures of the previous
+a manner "consistent" with the evolution equations and measures of the previous
 time steps.
 
 Parameters identification or calibration
@@ -79,25 +79,26 @@
 We can write these features in a simple manner. By default, all variables are
 vectors, as there are several parameters to readjust.
 
 According to standard notations in data assimilation, we note
-:math:`\mathbf{x}^a` the optimal unknown parameters that is to be determined by
+:math:`\mathbf{x}^a` the optimal parameters that are to be determined by
 calibration, :math:`\mathbf{y}^o` the observations (or experimental
-measurements) that we must compare the simulation outputs, :math:`\mathbf{x}^b`
-the background (*a priori* values, or regularization values) of searched
-parameters, :math:`\mathbf{x}^t` unknown ideals parameters that would give as
-output exactly the observations (assuming that the errors are zero and the model
-exact).
-
-In the simplest case, static, the steps of simulation and of observation can be
-combined into a single observation operator noted :math:`H` (linear or
-nonlinear), which transforms the input parameters :math:`\mathbf{x}` to results
-:math:`\mathbf{y}` to be compared to observations :math:`\mathbf{y}^o`.
-Moreover, we use the linearized operator :math:`\mathbf{H}` to represent the
-effect of the full operator :math:`H` around a linearization point (and we omit
-thereafter to mention :math:`H` even if it is possible to keep it). In reality,
-we have already indicated that the stochastic nature of variables is essential,
-coming from the fact that model, background and observations are incorrect. We
-therefore introduce errors of observations additively, in the form of a random
-vector :math:`\mathbf{\epsilon}^o` such that:
+measurements) that we must compare to the simulation outputs,
+:math:`\mathbf{x}^b` the background (*a priori* values, or regularization
+values) of the searched parameters, and :math:`\mathbf{x}^t` the unknown ideal
+parameters that would give exactly the observations (assuming that the errors
+are zero and the model is exact) as output.
+
+In the simplest case, which is static, the steps of simulation and of
+observation can be combined into a single observation operator noted :math:`H`
+(linear or nonlinear), which transforms the input parameters :math:`\mathbf{x}`
+to results :math:`\mathbf{y}` to be compared to observations
+:math:`\mathbf{y}^o`. Moreover, we use the linearized operator
+:math:`\mathbf{H}` to represent the effect of the full operator :math:`H`
+around a linearization point (and we usually omit mentioning :math:`H`
+thereafter, even if it is possible to keep it). In reality, we have already
+indicated that the stochastic nature of variables is essential, coming from
+the fact that the model, the background and the observations are incorrect. We
+therefore introduce observation errors additively, in the form of a random
+vector :math:`\mathbf{\epsilon}^o` such that:
 
 .. math:: \mathbf{y}^o = \mathbf{H} \mathbf{x}^t + \mathbf{\epsilon}^o
 
@@ -125,24 +126,24 @@
 then the "*analysis*" :math:`\mathbf{x}^a` and comes from the minimisation of an
 error function (in variational assimilation) or from the filtering correction
 (in assimilation by filtering).
 
-In **variational assimilation**, one classically attempts to minimize the
-following function :math:`J`:
+In **variational assimilation**, in a static case, one classically attempts to
+minimize the following function :math:`J`:
 
 .. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+(\mathbf{y}^o-\mathbf{H}.\mathbf{x})^T.\mathbf{R}^{-1}.(\mathbf{y}^o-\mathbf{H}.\mathbf{x})
 
 which is usually designated as the "*3D-VAR*" function. Since covariance matrices
 are proportional to the variances of errors, their presence in both terms of the
 function :math:`J` can effectively weight the differences by confidence in the
-background or observations. The parameters :math:`\mathbf{x}` realizing the
-minimum of this function therefore constitute the analysis :math:`\mathbf{x}^a`.
-It is at this level that we have to use the full panoply of function
-minimization methods otherwise known in optimization. Depending on the size of
-the parameters vector :math:`\mathbf{x}` to identify and ot the availability of
-gradient and Hessian of :math:`J`, it is appropriate to adapt the chosen
-optimization method (gradient, Newton, quasi-Newton ...).
+background or observations. The parameters vector :math:`\mathbf{x}` realizing
+the minimum of this function therefore constitutes the analysis
+:math:`\mathbf{x}^a`. It is at this level that we have to use the full panoply
+of function minimization methods otherwise known in optimization. Depending on
+the size of the parameters vector :math:`\mathbf{x}` to identify and on the
+availability of the gradient and Hessian of :math:`J`, it is appropriate to
+adapt the chosen optimization method (gradient, Newton, quasi-Newton...).
 
 In **assimilation by filtering**, in this simple case usually referred to as
-"*BLUE*"(for "*Best Linear Unbiased Estimator*"), the :math:`\mathbf{x}^a`
+"*BLUE*" (for "*Best Linear Unbiased Estimator*"), the :math:`\mathbf{x}^a`
 analysis is given as a correction of the background :math:`\mathbf{x}^b` by a
 term proportional to the difference between observations :math:`\mathbf{y}^o`
 and calculations :math:`\mathbf{H}\mathbf{x}^b`:
@@ -163,7 +164,7 @@ equivalent.
 
 It is indicated here that these methods of "*3D-VAR*" and "*BLUE*" may be
 extended to dynamic problems, called respectively "*4D-VAR*" and "*Kalman
-filter*". They can take account of the evolution operator to establish an
+filter*". They can take into account the evolution operator to establish an
 analysis at the right time steps of the gap between observations and
 simulations, and to have, at every moment, the propagation of the background
 through the evolution model. Many other variants have been developed to improve
 the
@@ -177,7 +178,7 @@
 To get more information about all the data assimilation techniques, the reader
 can consult introductory documents like [Argaud09], on-line training courses or
 lectures like [Bouttier99] and [Bocquet04] (along with other materials coming
 from geosciences applications), or general documents like [Talagrand97],
-[Tarantola87], [Kalnay03] and [WikipediaDA].
+[Tarantola87], [Kalnay03], [Ide97] and [WikipediaDA].
 
 Note that data assimilation is not restricted to meteorology or geo-sciences,
 but is widely used in other scientific domains. There are several fields in
 science
@@ -188,17 +189,3 @@
 Some aspects of data assimilation are also known as *parameter estimation*,
 *inverse problems*, *Bayesian estimation*, *optimal interpolation*,
 *mathematical regularisation*, *data smoothing*, etc. These terms can be used
 in bibliographical searches.
-
-.. [Argaud09] Argaud J.-P., Bouriquet B., Hunt J., *Data Assimilation from Operational and Industrial Applications to Complex Systems*, Mathematics Today, pp.150-152, October 2009
-
-.. [Bouttier99] Bouttier B., Courtier P., *Data assimilation concepts and methods*, Meteorological Training Course Lecture Series, ECMWF, 1999, http://www.ecmwf.int/newsevents/training/rcourse_notes/pdf_files/Assim_concepts.pdf
-
-.. [Bocquet04] Bocquet M., *Introduction aux principes et méthodes de l'assimilation de données en géophysique*, Lecture Notes, 2004-2008, http://cerea.enpc.fr/HomePages/bocquet/assim.pdf
-
-.. [Tarantola87] Tarantola A., *Inverse Problem: Theory Methods for Data Fitting and Parameter Estimation*, Elsevier, 1987
-
-.. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997
-
-.. [Kalnay03] Kalnay E., *Atmospheric Modeling, Data Assimilation and Predictability*, Cambridge University Press, 2003
-
-.. [WikipediaDA] Wikipedia/Data_assimilation: http://en.wikipedia.org/wiki/Data_assimilation
diff --git a/doc/using.rst b/doc/using.rst
index 664d07f..964923e 100644
--- a/doc/using.rst
+++ b/doc/using.rst
@@ -12,7 +12,7 @@ Using the ADAO module
    :align: middle
 
 This section presents the usage of the ADAO module in SALOME. It is complemented
-by advanced usage procedures the section :ref:`section_advanced`, and by some
+by advanced usage procedures in the section :ref:`section_advanced`, and by
 examples in the section :ref:`section_examples`.
 
 Logical procedure to build an ADAO test case
@@ -195,7 +195,7 @@
 information at the end of the procedure.
 
 All the variables are lists of typed values, each item of the list
 corresponding to the value of the variable at a time step or an iteration step
-in the data assimilation optimisation procedure. The variable value at a given
+in the data assimilation optimization procedure. The variable value at a given
 "*i*" step can be obtained by the method "*valueserie(i)*". The last one
 (consisting in the solution of the evaluation problem) can be obtained using
 the step "*-1*" as in a standard list.
@@ -213,114 +213,110 @@
 section :ref:`section_theory`.
 
 The different type-style commands are:
 
 :Dict:
-    Type of an input. This indicates a variable that has to be filled by a
+    *Type of an input*. This indicates a variable that has to be filled by a
     dictionary, usually given as a script.
 
 :Function:
-    Type of an input. This indicates a variable that has to be filled by a
+    *Type of an input*. This indicates a variable that has to be filled by a
     function, usually given as a script.
 
 :Matrix:
-    Type of an input. This indicates a variable that has to be filled by a
+    *Type of an input*. This indicates a variable that has to be filled by a
     matrix, usually given either as a string or as a script.
 
 :String:
-    Type of an input. This indicates a string, such as a name or a literal
+    *Type of an input*. This indicates a string, such as a name or a literal
     representation of a matrix or vector, such as "1 2 ; 3 4".
 
 :Script:
-    Type of an input. This indicates a script given as an external file.
+    *Type of an input*. This indicates a script given as an external file.
 
 :Vector:
-    Type of an input. This indicates a variable that has to be filled by a
+    *Type of an input*. This indicates a variable that has to be filled by a
     vector, usually given either as a string or as a script.
 
 The different commands are the following:
 
 :ASSIM_STUDY:
-    Required command. This is the general command describing an ADAO case. It
+    *Required command*. This is the general command describing an ADAO case. It
     hierarchically contains all the other commands.
 
 :Algorithm:
-    Required command. This is a string to indicates the data assimilation
+    *Required command*. This is a string indicating the data assimilation
     algorithm chosen. The choices are limited and available through the GUI.
     There exist for example: "3DVAR", "Blue", "EnsembleBlue", "KalmanFilter".
 
 :AlgorithmParameters:
-    Optional command. This command allows to add some optional parameters to
+    *Optional command*. This command allows adding some optional parameters to
     control the data assimilation algorithm calculation. It is defined as a
     "*Dict*" type object.
 
 :Background:
-    Required command. This indicates the backgroud vector used for data
+    *Required command*. This indicates the background vector used for data
    assimilation, previously noted as :math:`\mathbf{x}^b`. It is defined as a
    "*Vector*" type object, that is, given either as a string or as a script.
 
 :BackgroundError:
-    Required command. This indicates the backgroud error covariance matrix,
+    *Required command*. This indicates the background error covariance matrix,
     previously noted as :math:`\mathbf{B}`. It is defined as a "*Matrix*" type
     object, that is, given either as a string or as a script.
 
 :Debug:
-    Required command. This let choose the level of trace and intermediary debug
-    informations.The choices are limited between 0 (for False) and 1 (for True)
-    and available through the GUI.
+    *Required command*. This lets one choose the level of trace and
+    intermediary debug information. The choices are limited to 0 (for False)
+    and 1 (for True) and available through the GUI.
 
 :InputVariables:
-    Optional command. This command allows to indicates the name and size of
+    *Optional command*. This command allows indicating the name and size of
     physical variables that are bundled together in the control vector. This
     information is dedicated to data processed inside of the data assimilation
    algorithm.
 
 :Observation:
-    Required command. This indicates the observation vector used for data
+    *Required command*. This indicates the observation vector used for data
     assimilation, previously noted as :math:`\mathbf{y}^o`. It is defined as a
     "*Vector*" type object, that is, given either as a string or as a script.
 
 :ObservationError:
-    Required command. This indicates the observation error covariance matrix,
+    *Required command*. This indicates the observation error covariance matrix,
     previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
    object, that is, given either as a string or as a script.
 
 :ObservationOperator:
-    Required command. This indicates the observation operator, previously
+    *Required command*. This indicates the observation operator, previously
     noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
     to results :math:`\mathbf{y}` to be compared to observations
     :math:`\mathbf{y}^o`.
 
 :OutputVariables:
-    Optional command. This command allows to indicates the name and size of
+    *Optional command*. This command allows indicating the name and size of
     physical variables that are bundled together in the output observation
     vector. This information is dedicated to data processed inside of the data
     assimilation algorithm.
 
 :Study_name:
-    Required command. This is an open string to describe the study by a name or
-    a sentence.
+    *Required command*. This is an open string to describe the study by a name
+    or a sentence.
 
 :Study_repertory:
-    Optional command. If available, this repertory is used to find all the
+    *Optional command*. If available, this directory is used to find all the
     script files that can be used to define some other commands by scripts.
 
 :UserDataInit:
-    Optional command. This commands allows to initialise some parameters or data
-    automatically before data assimilation algorithm processing.
+    *Optional command*. This command allows initialising some parameters or
+    data automatically before the data assimilation algorithm processing.
 
 :UserPostAnalysis:
-    Optional command. This commands allows to process some parameters or data
+    *Optional command*. This command allows processing some parameters or data
     automatically after the data assimilation algorithm processing. It is
     defined as a script or a string, allowing simple code to be put directly
     inside the ADAO case.
 
 Examples of using these commands are available in the section
-:ref:`section_examples` and in examples files installed with ADAO module.
+:ref:`section_examples` and in example files installed with the ADAO module.
 
 .. [#] For more information on EFICAS, see the *EFICAS module* available in SALOME GUI.
 
 .. [#] For more information on YACS, see the *YACS module User's Guide* available in the main "*Help*" menu of SALOME GUI.
 
 .. [#] This intermediary Python file can be safely removed after YACS export, but can also be used as described in the section :ref:`section_advanced`.
-
-.. [Byrd95] Byrd R. H., Lu P., Nocedal J., *A Limited Memory Algorithm for Bound Constrained Optimization*, SIAM Journal on Scientific and Statistical Computing, 16(5), pp.1190-1208, 1995
-
-.. [Zhu97] Zhu C., Byrd R. H., Nocedal J., *L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization*, ACM Transactions on Mathematical Software, Vol 23(4), pp.550-560, 1997
-- 
2.39.2
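
The finite-difference tangent and adjoint construction used in the patch's
``Physical_simulation_functions.py`` skeleton can be sketched more compactly
with modern NumPy. This sketch is not part of the patch: ``function_h`` is a
hypothetical toy operator standing in for the user's physical simulation, and
the helper names are illustrative only.

```python
import numpy as np

def function_h(x):
    """Toy non-linear direct operator H: R^2 -> R^2 (assumption: a stand-in
    for the user's real simulation code)."""
    x = np.asarray(x, dtype=float)
    return np.array([x[0] ** 2 + x[1], x[0] * x[1]])

def tangent_h(func, x, increment=0.01):
    """Jacobian of func at x by forward finite differences, mirroring the
    patch's TangentH (the centeredDF=False branch)."""
    x = np.asarray(x, dtype=float)
    fx = func(x)
    dx = increment * x  # like the patch: step is proportional to x
    jacobian = np.empty((fx.size, x.size))
    for i in range(x.size):
        x_plus = x.copy()
        x_plus[i] += dx[i]
        jacobian[:, i] = (func(x_plus) - fx) / dx[i]
    return jacobian

def adjoint_h(func, x, y):
    """Adjoint operator: the transposed Jacobian applied to y, mirroring the
    patch's AdjointH."""
    return tangent_h(func, x).T @ np.asarray(y, dtype=float)
```

At ``x = (2, 3)`` the exact Jacobian of this toy operator is
``[[4, 1], [3, 2]]``, and the forward-difference approximation matches it to
about one percent. Note that, like the patch's version, the step
``increment * x`` vanishes for zero components of ``x``, so a real
implementation should guard against a zero perturbation.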