From: Jean-Philippe ARGAUD
Date: Fri, 9 Nov 2012 15:12:54 +0000 (+0100)
Subject: Improving the documentation intro, theory and advanced
X-Git-Tag: V6_6_0~11
X-Git-Url: http://git.salome-platform.org/gitweb/?a=commitdiff_plain;h=4f35b1fad4a30ac1945270505bf4ce8506181ba7;p=modules%2Fadao.git

Improving the documentation intro, theory and advanced
---

diff --git a/doc/advanced.rst b/doc/advanced.rst
index ba595ff..177b840 100644
--- a/doc/advanced.rst
+++ b/doc/advanced.rst
@@ -5,51 +5,45 @@ Advanced usage of the ADAO module
 ================================================================================
 
 This section presents advanced methods to use the ADAO module, how to get more
-information, or how to use it without the graphical user interface (GUI).
-
-Exporting an ADAO command file (JDC) to YACS using a console user mode
-----------------------------------------------------------------------
-
-An export command can use the Python file generated by the editor used to build
-the ADAO command file (JDC). If the ADAO command file is named "Study1.comm",
-then a file named "Study1.py" can be found in the same directory. The complete
-procedure is the following:
-
-#. using the SALOME application including ADAO module, launch SALOME with ``./runAppli -k``
-#. initialise the command line session with: ``./runSession``
-#. change to the YACS calculation scheme directory to be executed
-#. execute the export command: ``python ${ADAO_ROOT_DIR}/bin/salome/AdaoYacsSchemaCreator.py <ADAO command file> <YACS xml scheme>``
-#. standard output comes on console, successive executions can be done
-#. stop SALOME: ``killSalome.py``
-#. exit from the session: ``CTRL+D``
-
-Be careful, if the output YACS xml scheme file already exists, this command
-replace it without asking the user. The command accepts files with or without
-path specifications.
-
-It is not necessary to launch and shut down SALOME each time if the application
-is already running.
-
-Running an ADAO calculation scheme in YACS using a console user mode
---------------------------------------------------------------------
-
-This section describes how to execute in console mode a YACS calculation scheme,
-obtained using the ADAO "Export to YACS" function. It uses the standard YACS
-console mode, which is briefly recalled here (see YACS documentation for more
-information) through a simple example.
-
-The way to do that is as follows:
-
-#. using the SALOME application including ADAO module, launch SALOME with ``./runAppli -k``
-#. initialise the command line session with: ``./runSession``
-#. change to the YACS calculation scheme directory to be executed
-#. execute the YACS supervisor: ``driver <YACS xml scheme>``
-#. standard output comes on console, successive executions can be done
-#. stop SALOME: ``killSalome.py``
-#. exit from the session: ``CTRL+D``
-
-It is not necessary to launch and shut down SALOME each time if the application
-is already running.
+information, or how to use it without the graphical user interface (GUI). It
+requires knowing how to find files or commands included inside the whole SALOME
+installation. All the names to be replaced by the user are indicated with the
+following syntax ``<...>``.
+
+Converting and executing an ADAO command file (JDC) using a shell script
+------------------------------------------------------------------------
+
+It is possible to convert and execute an ADAO command file (JDC, or ".comm"
+file) automatically by using a template script containing all the required
+steps.
+The user has to know where the main SALOME scripts are, and in particular the
+``runAppli`` one. The directory in which this script resides is symbolically
+named ``<SALOME main installation directory>`` and has to be replaced by the
+correct one in the template.
+
+When an ADAO command file is built by the ADAO GUI editor and saved, if it is
+named for example "AdaoStudy1.comm", then a companion file named "AdaoStudy1.py"
+is automatically created in the same directory. It is named ``<ADAO Python
+file>`` in the template, and it is converted to YACS as an ``<ADAO YACS xml
+scheme>``. After that, it can be executed in console mode using the standard
+YACS console command (see YACS documentation for more information).
+
+In the example, we also choose to start and stop the SALOME application server
+in the same script, which is not necessary, but useful to avoid stalling SALOME
+sessions.
+
+The template of the shell script is the following::
+
+    #!/bin/bash
+    <SALOME main installation directory>/runAppli -k -t
+    <SALOME main installation directory>/runSession python \
+        ${ADAO_ROOT_DIR}/bin/salome/AdaoYacsSchemaCreator.py \
+        <ADAO Python file> <ADAO YACS xml scheme>
+    <SALOME main installation directory>/runSession driver \
+        <ADAO YACS xml scheme>
+    <SALOME main installation directory>/runSession killSalome.py
+
+Standard output and errors come out on the console. Successive executions can
+be done if the SALOME server is already running.
 
 Running an ADAO calculation scheme in YACS using a TUI user mode
 ----------------------------------------------------------------
@@ -129,17 +123,17 @@ algorithm.
 
 Getting more information when running a calculation
 ---------------------------------------------------
 
-When running, the ADAO module is logging useful data and messages. There are two
-ways to obtain theses informations.
+When running, useful data and messages are logged. There are two ways to obtain
+this information.
 
 The first one, and the preferred way, is to use the built-in variable "*Debug*"
-integrated in every "*ASSIMILATION_STUDY*". It is available through the GUI of
-the module. Setting it to "*1*" will send a lot of messages in the log window of
-the YACS scheme execution.
+available in every ADAO case. It is available through the GUI of the module.
+Setting it to "*1*" will send messages in the log window of the YACS scheme
+execution.
 
 The second one consists in using the "*logging*" native module of Python (see
 the Python documentation http://docs.python.org/library/logging.html for more
-informations on this module). Everywhere in the YACS scheme, mainly through the
+information on this module). Everywhere in the YACS scheme, mainly through the
 scripts entries, the user can set the logging level according to the needed
 level of detail. The different logging levels are: "*DEBUG*", "*INFO*",
 "*WARNING*", "*ERROR*", "*CRITICAL*". All the information flagged with a
@@ -152,3 +146,8 @@ following Python lines::
 
 The standard logging module default level is "*WARNING*", the default level in
 the ADAO module is "*INFO*".
+
+It is also recommended to include some logging or debug mechanisms in the
+simulation code, and to use them in conjunction with the two previous methods.
+But be careful not to store too large variables, because it costs time whatever
+the chosen logging level is.
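As an editorial illustration of the "*logging*" mechanism described in this
hunk (the literal block referenced by "following Python lines::" is elided by
the diff; the lines below are a hedged standard-library sketch, not a quotation
of the file), the logging level can be set and used from any script entry
with::

    import logging
    # Choose the verbosity: DEBUG, INFO, WARNING, ERROR or CRITICAL
    logging.getLogger().setLevel(logging.DEBUG)
    # Messages flagged with a level equal or over the chosen one are displayed
    logging.debug("Entering the observation operator script")
    logging.info("Size of the state: %i", 3)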
diff --git a/doc/bibliography.rst b/doc/bibliography.rst
index c682554..beaf5c3 100644
--- a/doc/bibliography.rst
+++ b/doc/bibliography.rst
@@ -20,6 +20,12 @@ Bibliography
 .. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997
 
-.. [WikipediaDA] Wikipedia/Data_assimilation: http://en.wikipedia.org/wiki/Data_assimilation
+.. [WikipediaDA] Wikipedia, *Data assimilation*, https://en.wikipedia.org/wiki/Data_assimilation
+
+.. [WikipediaMO] Wikipedia, *Mathematical optimization*, https://en.wikipedia.org/wiki/Mathematical_optimization
+
+.. [WikipediaPSO] Wikipedia, *Particle swarm optimization*, https://en.wikipedia.org/wiki/Particle_swarm_optimization
+
+.. [WikipediaQR] Wikipedia, *Quantile regression*, https://en.wikipedia.org/wiki/Quantile_regression
 
 .. [Zhu97] Zhu C., Byrd R. H., Nocedal J., *L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization*, ACM Transactions on Mathematical Software, Vol 23(4), pp.550-560, 1997

diff --git a/doc/examples.rst b/doc/examples.rst
index 0210cbf..8cbfe7c 100644
--- a/doc/examples.rst
+++ b/doc/examples.rst
@@ -451,7 +451,7 @@ here for convenience::
         #
         return numpy.array( HX )
     #
-    def TangentH( X, increment = 0.01, centeredDF = False ):
+    def TangentHMatrix( X, increment = 0.01, centeredDF = False ):
         """ Tangent operator (Jacobian) calculated by finite differences """
         #
         dX = increment * X.A1
@@ -493,15 +493,21 @@ here for convenience::
         #
         return Jacobian
     #
+    def TangentH( X ):
+        """ Tangent operator """
+        _X  = numpy.asmatrix(X).flatten().T
+        HtX = TangentHMatrix( _X ) * _X
+        return HtX.A1
+    #
     def AdjointH( (X, Y) ):
         """ Adjoint operator """
         #
-        Jacobian = TangentH( X, centeredDF = False )
+        Jacobian = TangentHMatrix( X, centeredDF = False )
         #
         Y   = numpy.asmatrix(Y).flatten().T
-        HtY = numpy.dot(Jacobian, Y)
+        HaY = numpy.dot(Jacobian, Y)
         #
-        return HtY.A1
+        return HaY.A1
 
 We insist on the fact that the non-linear operator ``"FunctionH"``, the tangent
 operator ``"TangentH"`` and the adjoint operator ``"AdjointH"`` come from the
@@ -517,8 +523,8 @@ and the adjoint operator named ``"Adjoint"``.
 The Python script has to retrieve an input parameter, found under the key
 "value", in a variable named ``"specificParameters"`` of the SALOME input data
 and parameters ``"computation"`` dictionary variable. If the operator is
 already linear, the
-``"Direct"`` and ``"Tangent"`` functions are the same, as it is supposed here.
-The following example Python script file named
+``"Direct"`` and ``"Tangent"`` functions are the same, as it can be supposed
+here. The following example Python script file named
 ``Script_ObservationOperator_H.py``, illustrates the case::
 
     import Physical_simulation_functions
@@ -545,6 +551,7 @@ The following example Python script file named
     # ----------------------------------------------------------
     logging.info("ComputationFunctionNode: Loading operator functions")
     FunctionH = Physical_simulation_functions.FunctionH
+    TangentH  = Physical_simulation_functions.TangentH
     AdjointH  = Physical_simulation_functions.AdjointH
    #
    # Executing the possible computations
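When tangent and adjoint operators are hand-coded as in the hunk above, a
classical sanity check is the dot-product test, which verifies that the scalar
products :math:`(\mathbf{H}'\delta\mathbf{x}, \mathbf{y})` and
:math:`(\delta\mathbf{x}, \mathbf{H}'^T\mathbf{y})` are equal. The following
hedged, self-contained sketch uses a toy two-variable operator (it does not use
the ``Physical_simulation_functions`` module of the example, and the function
names only mirror those above)::

    import numpy

    def FunctionH( X ):
        # Toy non-linear operator, standing in for the physical simulation
        X = numpy.ravel( X )
        return numpy.array([ X[0]**2, X[0]*X[1], 3.*X[1] ])

    def TangentHMatrix( X, increment = 0.01 ):
        # Jacobian approximated by simple finite differences
        X  = numpy.ravel( X )
        HX = FunctionH( X )
        Jacobian = numpy.zeros( (len(HX), len(X)) )
        for i in range( len(X) ):
            dX    = numpy.zeros( len(X) )
            dX[i] = increment * max( abs(X[i]), 1. )
            Jacobian[:,i] = ( FunctionH( X + dX ) - HX ) / dX[i]
        return Jacobian

    # Dot-product test: the two printed scalars must match (up to round-off)
    # if the adjoint is really the transpose of the tangent
    X, dX = numpy.array([1., 2.]), numpy.array([0.1, -0.3])
    Y = numpy.array([1., -1., 0.5])
    Jacobian = TangentHMatrix( X )
    print( numpy.dot( numpy.dot(Jacobian, dX), Y ) )
    print( numpy.dot( dX, numpy.dot(Jacobian.T, Y) ) )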
diff --git a/doc/index.rst b/doc/index.rst
index dfd8abf..0072e44 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -6,32 +6,34 @@ ADAO module documentation
    :align: center
    :scale: 10%
 
-The ADAO module provides **data assimilation** features in SALOME context. It is
-based on usage of other SALOME modules, namely YACS and EFICAS, and on usage of
-a generic underlying data assimilation library.
+The ADAO module provides **data assimilation and optimization** features in the
+SALOME context. It is based on usage of other SALOME modules, namely YACS and
+EFICAS, and on usage of a generic underlying data assimilation library.
 
 Briefly stated, Data Assimilation is a methodological framework to compute the
 optimal estimate of the inaccessible true value of a system state over time. It
 uses information coming from experimental measurements or observations, and
 from numerical *a priori* models, including information about their errors.
 Parts of
-the framework are also known as *parameter estimation*, *inverse problems*,
-*bayesian estimation*, *optimal interpolation*, etc. More details can be found
-in the section :ref:`section_theory`.
+the framework are also known under the names of *parameter estimation*, *inverse
+problems*, *Bayesian estimation*, *optimal interpolation*, etc. More details can
+be found in the section :ref:`section_theory`.
 
-The documentation of this module is divided in 5 parts, the first one being an
+The documentation of this module is divided into parts. The first one is an
 introduction. The second part briefly introduces data assimilation and concepts.
 The third part describes how to use the module ADAO. The fourth part gives
 examples on ADAO usage. Users interested in quick use of the module can jump to
-this section :ref:`section_examples`, but a valuable use of the module requires
-to read and come back regularly to the section :ref:`section_using`. The last
-part focuses on advanced usages of the module, how to get more information, or
-how to use it without the graphical user interface (GUI).
-
-In all this documentation, we use standard notations of data assimilation, as
-described in [Ide97]. Moreover, vectors are written horizontally or vertically
-without making difference. Matrices are written either normally, or with a
-condensed notation, consisting in the use of a space to separate values and a
-"``;``" to separate the rows, in a continuous line.
+this fourth section :ref:`section_examples`, but a valuable use of the module
+requires reading, and regularly coming back to, the third one
+:ref:`section_using`. The last part focuses on advanced usages of the module,
+how to get more information, or how to use it by scripting, without the
+graphical user interface (GUI).
+
+In all this documentation, we use standard notations of linear algebra, data
+assimilation (as described in [Ide97]_) and optimization. In particular, vectors
+are written horizontally or vertically without making any difference. Matrices
+are written either normally, or with a condensed notation, consisting of the
+use of a space to separate values and a "``;``" to separate the rows, in a
+continuous line.
 
 Table of contents
 -----------------

diff --git a/doc/intro.rst b/doc/intro.rst
index bddb9a3..c0267ce 100644
--- a/doc/intro.rst
+++ b/doc/intro.rst
@@ -2,13 +2,17 @@
 Introduction to ADAO
 ================================================================================
 
-The aim of the ADAO module is to help using *data assimilation* methodology in
-conjunction with other calculation modules in SALOME. The module provides
-interface to some standard algorithms of data assimilation or optimization, and
-allows integration of them in a SALOME study.
+The aim of the ADAO module is **to help using data assimilation or optimization
+methodology in conjunction with other modules in SALOME**. The ADAO module
+provides an interface to some standard algorithms of data assimilation or
+optimization, and allows their integration in a SALOME study. Calculation or
+simulation modules have to provide one or more specific calling methods in
+order to be callable in the SALOME/ADAO framework, and all the SALOME modules
+can be used through the YACS integration of ADAO.
 
 Its main objective is to *facilitate the use of various standard data
-assimilation methods*, while remaining easy to use and providing a path to help
-the implementation. The module covers a wide variety of practical applications
-in a robust way, allowing quick experimental setup to be performed. And its
-methodological scalability gives way to extend the application domain.
+assimilation or optimization methods*, while remaining easy to use and providing
+a path to help the implementation. The module covers a wide variety of practical
+applications in a robust way, allowing real engineering applications but also
+quick experimental setup to be performed. Its methodological and numerical
+scalability makes it possible to extend the application domain.

diff --git a/doc/theory.rst b/doc/theory.rst
index 2ed799f..a88d825 100644
--- a/doc/theory.rst
+++ b/doc/theory.rst
@@ -1,7 +1,7 @@
 .. _section_theory:
 
 ================================================================================
-A brief introduction to Data Assimilation
+A brief introduction to Data Assimilation and Optimization
 ================================================================================
 
 .. index:: single: Data Assimilation
@@ -11,18 +11,26 @@ A brief introduction to Data Assimilation
 
 **Data Assimilation** is a general framework for computing the optimal estimate
 of the true state of a system, over time if necessary. It uses values obtained
-both from observations and *a priori* models, including information about their
-errors.
-
-In other words, data assimilation merges measurement data, the observations,
-with *a priori* physical and mathematical knowledge, embedded in numerical
-models, to obtain the best possible estimate of the true state and of its
-stochastic properties. Note that this true state can not be reached, but can
-only be estimated. Moreover, despite the fact that used information are
-stochastic by nature, data assimilation provides deterministic techniques in
-order to realize the estimation.
-
-Two main types of applications exist in data assimilation being covered by the
+by combining both observations and *a priori* models, including information
+about their errors.
+
+In other words, data assimilation merges measurement data of a system, which
+are the observations, with *a priori* system physical and mathematical
+knowledge, embedded in numerical models, to obtain the best possible estimate
+of the system true state and of its stochastic properties. Note that this true
+state can not be reached, but can only be estimated. Moreover, despite the fact
+that the information used is stochastic by nature, data assimilation provides
+deterministic techniques in order to realize the estimation.
+
+Because data assimilation looks for the **best possible** estimate, its
+underlying procedure always integrates optimization in order to find this
+estimate: particular optimization methods are always embedded in data
+assimilation algorithms. Optimization methods can be seen here as a way to
+extend data assimilation applications. They will be introduced this way in the
+section `Going further in the state estimation by optimization methods`_, but
+they are far more general and can be used without data assimilation concepts.
+
+Two main types of applications exist in data assimilation, which are covered by
+the same formalism: **parameters identification** and **fields reconstruction**.
 Before introducing the `Simple description of the data assimilation framework`_
 in the next section, we describe briefly these two types. At the end, some
@@ -31,7 +39,7 @@ references allow `Going further in the data assimilation framework`_.
 
 Fields reconstruction or measures interpolation
 -----------------------------------------------
 
-.. index:: single: parameters identification
+.. index:: single: fields reconstruction
 
 Fields reconstruction consists in finding, from a restricted set of real
 measures, the physical field which is the most *consistent* with these measures.
@@ -47,21 +55,21 @@ time step, as a whole.
 The interpolation process in this case is more complicated since it is
 temporal, not only in terms of instantaneous values of the field.
 
-A simple example of fields reconstruction comes from of meteorology, in which we
-look for value of variables such as temperature or pressure in all points of the
-spatial domain. We have instantaneous measurements of these quantities at
+A simple example of fields reconstruction comes from meteorology, in which one
+looks for the values of variables such as temperature or pressure at all points
+of the spatial domain. One has instantaneous measurements of these quantities at
 certain points, but also a history set of these measures. Moreover, these
 variables are constrained by evolution equations for the state of the
 atmosphere, which indicate for example that the pressure at a point can not
 take any value independently of the value at this same point at the previous
 time.
-We must therefore make the reconstruction of a field at any point in space, in
-a manner "consistent" with the evolution equations and measures of the previous
-time steps.
+One must therefore make the reconstruction of a field at any point in space, in
+a "consistent" manner with the evolution equations and with the measures of the
+previous time steps.
 
 Parameters identification or calibration
 ----------------------------------------
 
-.. index:: single: fields reconstruction
+.. index:: single: parameters identification
 
 The identification of parameters by data assimilation is a form of calibration
 which uses both the measurement and an *a priori* estimation (called the
 "*background*") of a parameter, as well as a characterization of their errors.
 From this point of view, it uses all available information on the physical
 system (even if assumptions about errors are relatively restrictive) to find
 the "*optimal*" estimation of the true state. We note, in terms of
 optimization, that the background realizes a mathematical
-regularization of the main problem of identification.
+regularization of the main problem of parameters identification.
 
-In practice, the two gaps "*calculation-background*" and
+In practice, the two observed gaps "*calculation-background*" and
 "*calculation-measures*" are added to build the calibration correction of
 parameters or initial conditions. The addition of these two gaps requires a
 relative weight, which is chosen to reflect the trust we give to each piece of
@@ -81,6 +89,14 @@ background and on the observations.
 Thus the stochastic aspect of information, measured or *a priori*, is essential
 for building the calibration error function.
 
+A simple example of parameters identification comes from any kind of physical
+simulation process involving a parametrized model. For example, a static
+mechanical simulation of a beam constrained by some forces is described by beam
+parameters, such as a Young's modulus, or by the intensity of the force. The
+parameter estimation problem then consists in finding, for example, the right
+Young's modulus, such that the simulation of the beam corresponds to the
+measurements, taking the knowledge of errors into account.
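To make the beam illustration concrete, here is a hedged editorial sketch (the
cantilever deflection formula and all numerical values are illustrative
assumptions, not data coming from ADAO): it recovers a Young's modulus from
noisy tip deflection measurements by simple least squares, without the
background term discussed in the next section::

    import numpy

    # Illustrative cantilever beam: tip deflection d = F * L**3 / (3 * E * I)
    L, I   = 2.0, 1.e-6      # beam length [m] and moment of inertia [m**4]
    E_true = 2.1e11          # "true" Young's modulus [Pa], to be recovered

    F = numpy.linspace( 100., 1000., 10 )        # applied forces [N]
    observations  = F * L**3 / (3 * E_true * I)  # simulated measurements
    observations += numpy.random.normal( 0., 1.e-5, F.size )

    # The model d = c * F is linear in c = L**3 / (3 * E * I), so the least
    # squares estimate of c is explicit, and E is recovered from it
    c = numpy.dot( F, observations ) / numpy.dot( F, F )
    print( "Estimated Young's modulus:", L**3 / (3 * c * I) )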
 
 Simple description of the data assimilation framework
 -----------------------------------------------------
@@ -145,16 +161,18 @@ minimize the following function :math:`J`:
 
 .. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+(\mathbf{y}^o-\mathbf{H}.\mathbf{x})^T.\mathbf{R}^{-1}.(\mathbf{y}^o-\mathbf{H}.\mathbf{x})
 
-which is usually designed as the "*3D-VAR*" function. Since covariance matrices
-are proportional to the variances of errors, their presence in both terms of the
-function :math:`J` can effectively weight the differences by confidence in the
-background or observations. The parameters vector :math:`\mathbf{x}` realizing
-the minimum of this function therefore constitute the analysis
-:math:`\mathbf{x}^a`. It is at this level that we have to use the full panoply
-of function minimization methods otherwise known in optimization. Depending on
-the size of the parameters vector :math:`\mathbf{x}` to identify and of the
-availability of gradient and Hessian of :math:`J`, it is appropriate to adapt
-the chosen optimization method (gradient, Newton, quasi-Newton...).
+which is usually designated as the "*3D-VAR*" function. Since the
+:math:`\mathbf{B}` and :math:`\mathbf{R}` covariance matrices are proportional
+to the variances of errors, their presence in both terms of the function
+:math:`J` can effectively weight the differences by confidence in the
+background or observations. The parameters vector :math:`\mathbf{x}` realizing
+the minimum of this function therefore constitutes the analysis
+:math:`\mathbf{x}^a`. It is at this level that we have to use the full panoply
+of function minimization methods otherwise known in optimization (see also the
+section `Going further in the state estimation by optimization methods`_).
+Depending on the size of the parameters vector :math:`\mathbf{x}` to identify,
+and on the availability of the gradient and Hessian of :math:`J`, it is
+appropriate to adapt the chosen optimization method (gradient, Newton,
+quasi-Newton...).
 
 In **assimilation by filtering**, in this simple case usually referred to as
 "*BLUE*" (for "*Best Linear Unbiased Estimator*"), the :math:`\mathbf{x}^a`
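As a hedged editorial illustration of the two estimation approaches of this
hunk (all matrices and values below are small made-up examples, not code from
the module), the "*3D-VAR*" function and the "*BLUE*" analysis can be written
with Numpy as::

    import numpy

    xb = numpy.array([1., 2.])      # background state
    yo = numpy.array([1.5, 1.7])    # observations
    H  = numpy.identity(2)          # linear observation operator
    B  = 0.25 * numpy.identity(2)   # background error covariance
    R  = 0.04 * numpy.identity(2)   # observation error covariance

    def J( x ):
        # 3D-VAR cost: background misfit term plus observation misfit term
        db = x - xb
        do = yo - numpy.dot(H, x)
        return ( numpy.dot(db, numpy.linalg.solve(B, db))
               + numpy.dot(do, numpy.linalg.solve(R, do)) )

    # BLUE analysis xa = xb + K (yo - H xb), with the optimal linear gain K
    K  = numpy.dot( numpy.dot(B, H.T),
                    numpy.linalg.inv(numpy.dot(numpy.dot(H, B), H.T) + R) )
    xa = xb + numpy.dot( K, yo - numpy.dot(H, xb) )

    # For a linear H, the BLUE analysis also minimizes J
    print( "xa =", xa, " J(xb) =", J(xb), " J(xa) =", J(xa) )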
@@ -188,18 +206,78 @@ calculation size and time.
 
 Going further in the data assimilation framework
 ------------------------------------------------
 
+.. index:: single: state estimation
+.. index:: single: parameter estimation
+.. index:: single: inverse problems
+.. index:: single: Bayesian estimation
+.. index:: single: optimal interpolation
+.. index:: single: mathematical regularization
+.. index:: single: data smoothing
+
 To get more information about all the data assimilation techniques, the reader
-can consult introductory documents like [Argaud09], on-line training courses or
-lectures like [Bouttier99] and [Bocquet04] (along with other materials coming
-from geosciences applications), or general documents like [Talagrand97],
-[Tarantola87], [Kalnay03], [Ide97] and [WikipediaDA].
+can consult introductory documents like [Argaud09]_, on-line training courses or
+lectures like [Bouttier99]_ and [Bocquet04]_ (along with other materials coming
+from geosciences applications), or general documents like [Talagrand97]_,
+[Tarantola87]_, [Kalnay03]_, [Ide97]_ and [WikipediaDA]_.
 
 Note that data assimilation is not restricted to meteorology or geo-sciences,
 but is widely used in other scientific domains. There are several fields in
 science and technology where the effective use of observed but incomplete data
 is crucial.
 
-Some aspects of data assimilation are also known as *parameter estimation*,
-*inverse problems*, *bayesian estimation*, *optimal interpolation*,
-*mathematical regularisation*, *data smoothing*, etc. These terms can be used in
-bibliographical searches.
+Some aspects of data assimilation are also known as *state estimation*,
+*parameter estimation*, *inverse problems*, *Bayesian estimation*, *optimal
+interpolation*, *mathematical regularization*, *data smoothing*, etc. These
+terms can be used in bibliographical searches.
+
+Going further in the state estimation by optimization methods
+-------------------------------------------------------------
+
+.. index:: single: state estimation
+.. index:: single: optimization methods
+
+As seen before, in a static simulation case, variational data assimilation
+requires minimizing the goal function :math:`J`:
+
+.. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+(\mathbf{y}^o-\mathbf{H}.\mathbf{x})^T.\mathbf{R}^{-1}.(\mathbf{y}^o-\mathbf{H}.\mathbf{x})
+
+which is named the "*3D-VAR*" function. It can be seen as an extended form of
+*least squares minimization*, obtained by adding a regularizing term using
+:math:`\mathbf{x}-\mathbf{x}^b`, and by weighting the differences using the two
+covariance matrices :math:`\mathbf{B}` and :math:`\mathbf{R}`. The minimization
+of the :math:`J` function leads to the *best* state estimation.
+
+The state estimation possibilities can be extended in two ways, by using
+optimization methods and their properties more explicitly.
+
+First, classical optimization methods involve using various gradient-based
+minimizing procedures. They are extremely efficient at looking for a single
+local minimum. But they require the goal function :math:`J` to be sufficiently
+regular and differentiable, and they are not able to capture global properties
+of the minimization problem, for example: the global minimum, a set of
+equivalent solutions due to over-parametrization, multiple local minima, etc.
+**A way to extend estimation possibilities is then to use a whole range of
+optimizers, allowing global minimization, various robust search properties,
+etc**. There are a lot of minimizing methods, such as stochastic ones,
+evolutionary ones, heuristics and meta-heuristics for real-valued problems,
+etc. They can treat a partially irregular or noisy function :math:`J`, can
+characterize local minima, etc. The main drawbacks are a greater numerical cost
+to find state estimates, and no guarantee of convergence in finite time. Here,
+we only point to the following topics, as the methods are available in the ADAO
+module: *Quantile regression* [WikipediaQR]_ and *Particle swarm optimization*
+[WikipediaPSO]_.
+
+Secondly, optimization methods usually try to minimize quadratic measures of
+errors, as the natural properties of such goal functions are well suited for
+classical gradient optimization. But other measures of errors can be better
+adapted to real physical simulation problems. Then, **another way to extend
+estimation possibilities is to use other measures of errors to be reduced**.
+For example, we can cite the *absolute error value*, the *maximum error value*,
+etc. These error measures are not differentiable, but some optimization methods
+can deal with them: heuristics and meta-heuristics for real-valued problems,
+etc. As previously, the main drawbacks remain a greater numerical cost to find
+state estimates, and no guarantee of convergence in finite time. Here, we also
+point to the following method, as it is available in the ADAO module: *Particle
+swarm optimization* [WikipediaPSO]_.
+
+The reader interested in the subject of optimization can look at [WikipediaMO]_
+as a general entry point.
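The contrast between the two extensions of this hunk can be sketched with SciPy
(a hedged editorial example assuming a recent SciPy installation; ADAO embeds
its own algorithms, and this is not module code): a smooth quadratic goal
function suits a gradient-based minimizer, while a maximum-error measure needs
a derivative-free one::

    import numpy
    from scipy.optimize import minimize

    xb, yo = numpy.array([1., 2.]), numpy.array([1.5, 1.7])
    B_inv  = numpy.identity(2) / 0.25
    R_inv  = numpy.identity(2) / 0.04

    def J( x ):
        # Quadratic 3D-VAR-like goal function: smooth and differentiable
        return ( numpy.dot(x - xb, numpy.dot(B_inv, x - xb))
               + numpy.dot(yo - x, numpy.dot(R_inv, yo - x)) )

    def J_max( x ):
        # Maximum error measure: not differentiable everywhere
        return max( numpy.max(numpy.abs(x - xb)), numpy.max(numpy.abs(yo - x)) )

    print( minimize( J, xb, method="L-BFGS-B" ).x )        # gradient-based
    print( minimize( J_max, xb, method="Nelder-Mead" ).x ) # derivative-free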
diff --git a/doc/using.rst b/doc/using.rst
index 6ceff51..4b76a92 100644
--- a/doc/using.rst
+++ b/doc/using.rst
@@ -168,9 +168,9 @@ Execute the YACS case and obtain the results
 .. index:: single: Analysis
 .. index:: single: Innovation
 .. index:: single: APosterioriCovariance
-.. index:: single: OMB
-.. index:: single: BMA
-.. index:: single: OMA
+.. index:: single: OMB (Observation minus Background)
+.. index:: single: BMA (Background minus Analysis)
+.. index:: single: OMA (Observation minus Analysis)
 .. index:: single: CostFunctionJ
 .. index:: single: CostFunctionJo
 .. index:: single: CostFunctionJb
@@ -420,7 +420,7 @@ unused.
 :Minimizer:
     This key allows choosing the optimization minimizer. The default choice is
     "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
-    minimizer, see [Byrd95] and [Zhu97]), "TNC" (nonlinear constrained
+    minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained
     minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear
     unconstrained minimizer), "NCG" (Newton CG minimizer).
@@ -467,7 +467,7 @@ unused.
 :Minimizer:
     This key allows choosing the optimization minimizer. The default choice is
     "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
-    minimizer, see [Byrd95] and [Zhu97]), "TNC" (nonlinear constrained
+    minimizer, see [Byrd95]_ and [Zhu97]_), "TNC" (nonlinear constrained
     minimizer), "CG" (nonlinear unconstrained minimizer), "BFGS" (nonlinear
     unconstrained minimizer), "NCG" (Newton CG minimizer).
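To relate the "*Minimizer*" values listed above to something executable, here
is a hedged sketch using SciPy equivalents (the name correspondence, such as
"NCG" to "Newton-CG", and the use of SciPy itself are editorial assumptions,
not a statement about ADAO internals)::

    import numpy
    from scipy.optimize import minimize

    def J( x ):
        # Small quadratic goal function with unconstrained minimum at (1, 2)
        return (x[0] - 1.)**2 + (x[1] - 2.)**2

    def gradJ( x ):
        return numpy.array([ 2.*(x[0] - 1.), 2.*(x[1] - 2.) ])

    x0     = numpy.zeros(2)
    bounds = [(0., 0.5), (0., 5.)]  # used by the constrained minimizers only

    for method in ("L-BFGS-B", "TNC"):          # nonlinear constrained
        print( method, minimize(J, x0, jac=gradJ, method=method, bounds=bounds).x )
    for method in ("CG", "BFGS", "Newton-CG"):  # nonlinear unconstrained
        print( method, minimize(J, x0, jac=gradJ, method=method).x )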