Python documentation http://docs.python.org/library/logging.html for more
information on this module). Everywhere in the YACS scheme, mainly through the
scripts entries, the user can set the logging level in accordance with the needs
-of detailed informations. The different logging levels are: "*DEBUG*", "*INFO*",
-"*WARNING*", "*ERROR*", "*CRITICAL*". All the informations flagged with a
+of detailed information. The different logging levels are: "*DEBUG*", "*INFO*",
+"*WARNING*", "*ERROR*", "*CRITICAL*". All the information flagged with a
certain level will be printed for any activated level at or above this particular
one. The easiest way to change the log level is by using the
following Python lines::
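
    # A minimal sketch, assuming the standard Python "logging" module is used
    # (the exact lines may differ, but this is the usual way to set the level)
    import logging
    logging.getLogger().setLevel(logging.DEBUG)
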
be careful not to store variables that are too big, because it costs time whatever
logging level is chosen (that is, even if these variables are not printed).
+Accelerating numerical derivatives calculations by using a parallel mode
+------------------------------------------------------------------------
+
+When setting an operator, as described in :ref:`section_reference`, the user can
+choose the functional form "*ScriptWithOneFunction*". This form explicitly leads
+to approximating the tangent and adjoint operators by a finite differences
+calculation. It requires several calls to the direct operator (the user defined
+function), at least as many times as the dimension of the state vector. It is
+these calls that can potentially be executed in parallel.
+
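+As an illustration only, and not as the ADAO implementation itself, the
+following sketch shows why these calls are independent: each column of a finite
+differences approximation of the tangent operator comes from its own perturbed
+call to the direct operator, so the calls can be dispatched to a standard Python
+``multiprocessing`` pool. The function ``DirectOperator`` and the increment used
+here are arbitrary assumptions of the example::
+
+    import numpy
+    from multiprocessing import Pool
+
+    def DirectOperator( X ):
+        # Hypothetical user defined direct operator (arbitrary non-linear map)
+        X = numpy.ravel( X )
+        return numpy.array( [ X[0], 2.*X[1], 3.*X[2] + X[0]*X[1] ] )
+
+    def one_column( arguments ):
+        # One perturbed call, giving one column of the approximated tangent matrix
+        X, i, increment, FX = arguments
+        dX    = numpy.zeros( len(X) )
+        dX[i] = increment * abs( X[i] ) or increment
+        return ( DirectOperator( X + dX ) - FX ) / dX[i]
+
+    if __name__ == "__main__":
+        X     = numpy.array( [1., 1., 1.] )
+        FX    = DirectOperator( X )
+        tasks = [ (X, i, 0.01, FX) for i in range(len(X)) ]
+        pool  = Pool()
+        columns = pool.map( one_column, tasks )  # the independent calls, in parallel
+        pool.close() ; pool.join()
+        TangentMatrix = numpy.array( columns ).transpose()
+        print( TangentMatrix )
+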
+Under some conditions, it is then possible to accelerate the numerical
+derivatives calculations by using a parallel mode for the finite differences
+approximation. When setting up an ADAO case, it is done by adding the optional
+sub-command "*EnableMultiProcessing*", set to "1", for the
+"*SCRIPTWITHONEFUNCTION*" command in the operator definition. The parallel mode
+will only use the local resources (multiple cores or processors) of the
+computer on which SALOME is running, taking as many resources as are available.
+By default, this parallel mode is disabled ("*EnableMultiProcessing=0*").
+
+The main conditions to perform parallel calculations come from the user defined
+function, which represents the direct operator. This function has at least to be
+"thread safe" in order to be executed in a parallel Python environment (a notion
+beyond the scope of this paragraph). It is not easy to give general rules, so it
+is recommended that the user who enables this internal parallelism carefully
+verify his function and the obtained results.
+
+From a user point of view, the conditions that have to be met to set up
+parallel calculations for the tangent and adjoint operators approximations are
+the following:
+
+#. The dimension of the state vector is more than 2 or 3.
+#. A unitary calculation of the user defined direct function "lasts for a long time", that is, more than a few minutes.
+#. The user defined direct function does not already use parallelism (or parallel execution is disabled in the user calculation).
+#. The user defined direct function does not require read/write access to common resources, mainly stored data or memory capacities.
+
+If these conditions are satisfied, the user can choose to enable the internal
+parallelism for the numerical derivatives calculations. Despite the simplicity
+of the activation, obtained by setting only one variable, the user is strongly
+urged to verify the results of his calculations. The calculations must at least
+be done once with parallelism enabled and once with parallelism disabled, in
+order to compare the results. If the execution does fail at some point, be aware
+that this parallel scheme is known to work for complex codes, such as
+*Code_Aster* in *SalomeMeca* [SalomeMeca]_ for example. So check your operator
+function before and while enabling parallelism...
+
+**In case of doubt, it is recommended NOT TO ACTIVATE this parallelism.**
+
Switching from a version of ADAO to a newer one
-----------------------------------------------
.. [Salome] *SALOME The Open Source Integration Platform for Numerical Simulation*, http://www.salome-platform.org/
+.. [SalomeMeca] *Salome_Meca and Code_Aster, Analysis of Structures and Thermomechanics for Studies & Research*, http://www.code-aster.org/
+
.. [Tarantola87] Tarantola A., *Inverse Problem: Theory Methods for Data Fitting and Parameter Estimation*, Elsevier, 1987
.. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997
all the required input data through the GUI. The second one shows, on the same
case, how to define input data using external sources through scripts. We
always describe Python scripts here because they can be directly inserted in
-YACS script nodes, but external files can use other langages.
+YACS script nodes, but external files can use other languages.
The mathematical notations used afterward are explained in the section
:ref:`section_theory`.
---------------------------------------------------------
This simple example is a demonstration one, and describes how to set a BLUE
-estimation framework in order to get *ponderated (or fully weighted) least
-square estimated state* of a system from an observation of the state and from an
-*a priori* knowledge (or background) of this state. In other words, we look for
-the weighted middle between the observation and the background vectors. All the
+estimation framework in order to get the *fully weighted least square estimated
+state* of a system from an observation of the state and from an *a priori*
+knowledge (or background) of this state. In other words, we look for the
+weighted average of the observation and the background vectors. All the
numerical values of this example are arbitrary.
Experimental setup
We choose to operate in a 3-dimensional space. 3D is chosen in order to restrict
the size of the numerical objects explicitly entered by the user, but the problem is
-not dependant of the dimension and can be set in dimension 10, 100, 1000... The
+not dependent on the dimension, and can be set in dimension 10, 100, 1000... The
observation :math:`\mathbf{y}^o` is of value 1 in each direction, so::
Yo = [1 1 1]
process, and it is the most versatile method in order to parametrize the input
data. **But be careful, script methodology is not a "safe" procedure, in the
sense that erroneous data, or errors in calculations, can be directly injected
-into the YACS scheme execution.**
+into the YACS scheme execution. The user has to carefully verify the content of
+his scripts.**
Adding parameters to control the data assimilation algorithm
------------------------------------------------------------
}
If no bounds at all are required on the control variables, then one can choose
-the "*BFGS*" or "*CG*" minimisation algorithm for all the variational data
+the "*BFGS*" or "*CG*" minimization algorithm for all the variational data
assimilation or optimization algorithms. For constrained optimization, the
minimizer "*LBFGSB*" is often more robust, but the "*TNC*" is sometimes more
-performant.
+effective.
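+
+As a sketch only, such a Python script can simply build the
+"*AlgorithmParameters*" dictionary with the keys discussed in this
+documentation; the values below are arbitrary and have to be adapted to the
+real study::
+
+    # Illustrative values only (assumptions of this example)
+    AlgorithmParameters = {
+        "Minimizer"            : "LBFGSB",
+        "Bounds"               : [[0., 10.], [None, None], [0., None]],
+        "MaximumNumberOfSteps" : 25,
+        }
+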
Then the script can be added to the ADAO case, in a file entry describing the
"*AlgorithmParameters*" keyword, as follows:
R = 0.0001 * diagonal( length(Yo) )
-All the informations required for estimation by data assimilation are then
+All the information required for estimation by data assimilation is then
defined.
Skeletons of the scripts describing the setup
+++++++++++++++++++++++++++++++++++++++++++++
-We give here the essential parts of each script used afterwards to build the
+We give here the essential parts of each script used afterward to build the
ADAO case. Remember that using these scripts in real Python files requires
correctly defining the path to imported modules or codes (even if the module is
in the same directory as the importing Python file. We indicate the path
#
return numpy.array( HX )
-We does not need the linear compagnion operators ``"TangentOperator"`` and
+We do not need the linear companion operators ``"TangentOperator"`` and
``"AdjointOperator"`` because they will be approximated using ADAO capabilities.
We insist on the fact that these non-linear operators ``"DirectOperator"``,
Finally, it is common to post-process the results, retrieving them after the
data assimilation phase in order to analyze, print or show them. It requires to
use an intermediary Python script file in order to extract these results at the
-end of the adata assimilation or optimization process. The following example
+end of the data assimilation or optimization process. The following example
Python script file, named ``Script_UserPostAnalysis.py``, illustrates the fact::
from Physical_data_and_covariance_matrices import True_state
requires reading and coming back regularly to the third and fifth ones. The last
part focuses on :ref:`section_advanced`, how to get more information, or how to
use it by scripting, without the graphical user interface (GUI). And, to respect
-the module requirements, be sure to read the part :ref:`section_licence`.
+the module requirements, be sure to read the part :ref:`section_license`.
In all this documentation, we use standard notations of linear algebra, data
assimilation (as described in [Ide97]_) and optimization. In particular, vectors
examples
reference
advanced
- licence
+ license
glossary
bibliography
assimilation or optimization, and allows integration of their use in a SALOME
study. Calculation or simulation modules have to provide one or more specific
calling methods in order to be callable in the SALOME/ADAO framework, and all
-the SALOME modules can be used throught YACS integration of ADAO.
+the SALOME modules can be used through YACS integration of ADAO.
Its main objective is to **facilitate the use of various standard data
assimilation or optimization methods, while remaining easy to use and providing
+++ /dev/null
-.. _section_licence:
-
-================================================================================
-Licence and requirements for the module
-================================================================================
-
-.. index:: single: LICENCE
-.. index:: single: SALOME
-.. index:: single: ADAO
-
-The licence for this module is the GNU Lesser General Public License (Lesser
-GPL), as stated here and in the source files::
-
- <ADAO, a SALOME module for Data Assimilation and Optimization>
-
- Copyright (C) 2008-2014 EDF R&D
-
- This library is free software; you can redistribute it and/or
- modify it under the terms of the GNU Lesser General Public
- License as published by the Free Software Foundation; either
- version 2.1 of the License, or (at your option) any later version.
-
- This library is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- Lesser General Public License for more details.
-
- You should have received a copy of the GNU Lesser General Public
- License along with this library; if not, write to the Free Software
- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
- See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
-
-In addition, we expect that all publications describing work using this
-software, or all commercial products using it, quote at least one of the
-references given below:
-
- * *ADAO, a SALOME module for Data Assimilation and Optimization*,
- http://www.salome-platform.org/
-
- * *SALOME The Open Source Integration Platform for Numerical Simulation*,
- http://www.salome-platform.org/
-
-The documentation of the module is also covered by the licence and the
-requirement of quoting.
--- /dev/null
+.. _section_license:
+
+================================================================================
+License and requirements for the module
+================================================================================
+
+.. index:: single: LICENSE
+.. index:: single: SALOME
+.. index:: single: ADAO
+
+The license for this module is the GNU Lesser General Public License (Lesser
+GPL), as stated here and in the source files::
+
+ <ADAO, a SALOME module for Data Assimilation and Optimization>
+
+ Copyright (C) 2008-2014 EDF R&D
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+
+In addition, we expect that all publications describing work using this
+software, or all commercial products using it, quote at least one of the
+references given below:
+
+ * *ADAO, a SALOME module for Data Assimilation and Optimization*,
+ http://www.salome-platform.org/
+
+ * *SALOME The Open Source Integration Platform for Numerical Simulation*,
+ http://www.salome-platform.org/
+
+The documentation of the module is also covered by the license and the
+requirement of quoting.
**String**
This indicates a string giving a literal representation of a matrix, a
- vector or a vector serie, such as "1 2 ; 3 4" or "[[1,2],[3,4]]" for a
+ vector or a vector series, such as "1 2 ; 3 4" or "[[1,2],[3,4]]" for a
square 2x2 matrix.
**Vector**
**UserDataInit**
*Optional command*. This command allows to initialize some parameters or
- data automatically before data assimilation or optimisation algorithm input
+ data automatically before data assimilation or optimization algorithm input
processing. It indicates a script file name to be executed before entering
the initialization phase of the chosen variables.
Each algorithm can be controlled using some generic or specific options, given
through the "*AlgorithmParameters*" optional command in a script file or a
-sring, as follows for example in a file::
+string, as follows for example in a file::
AlgorithmParameters = {
"Minimizer" : "LBFGSB",
}
To give the "*AlgorithmParameters*" values by string, one must enclose a
-standard dictionnary definition between simple quotes, as for example::
+standard dictionary definition between single quotes, as for example::
'{"Minimizer":"LBFGSB","MaximumNumberOfSteps":25}'
"ObservationOperator"*
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
"ObservationOperator"*
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
"ObservationOperator"*
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
being optimized. Bounds can be given by a list of list of pairs of
lower/upper bounds for each variable, with possibly ``None`` every time
there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained minimizers.
+ into account only by the constrained optimizers.
MaximumNumberOfSteps
This key indicates the maximum number of iterations allowed for iterative
optimization. The default is 15000, which is very similar to no limit on
iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some minimizers, the effective stopping step can be
+ real problems. For some optimizers, the effective stopping step can be
slightly different of the limit due to algorithm internal control
requirements.
ProjectedGradientTolerance
This key indicates a limit value, leading to stop successfully the iterative
optimization process when all the components of the projected gradient are
- under this limit. It is only used for constrained minimizers. The default is
+ under this limit. It is only used for constrained optimizers. The default is
-1, that is the internal default of each minimizer (generally 1.e-5), and it
is not recommended to change it.
GradientNormTolerance
This key indicates a limit value, leading to stop successfully the
iterative optimization process when the norm of the gradient is under this
- limit. It is only used for non-constrained minimizers. The default is
+ limit. It is only used for non-constrained optimizers. The default is
1.e-5 and it is not recommended to change it.
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
being optimized. Bounds can be given by a list of list of pairs of
lower/upper bounds for each variable, with possibly ``None`` every time
there is no bound. The bounds can always be specified, but they are taken
- into account only by the constrained minimizers.
+ into account only by the constrained optimizers.
MaximumNumberOfSteps
This key indicates the maximum number of iterations allowed for iterative
optimization. The default is 15000, which is very similar to no limit on
iterations. It is then recommended to adapt this parameter to the needs on
- real problems. For some minimizers, the effective stopping step can be
+ real problems. For some optimizers, the effective stopping step can be
slightly different due to algorithm internal control requirements.
CostDecrementTolerance
ProjectedGradientTolerance
This key indicates a limit value, leading to stop successfully the iterative
optimization process when all the components of the projected gradient are
- under this limit. It is only used for constrained minimizers. The default is
+ under this limit. It is only used for constrained optimizers. The default is
-1, that is the internal default of each minimizer (generally 1.e-5), and it
is not recommended to change it.
GradientNormTolerance
This key indicates a limit value, leading to stop successfully the
iterative optimization process when the norm of the gradient is under this
- limit. It is only used for non-constrained minimizers. The default is
+ limit. It is only used for non-constrained optimizers. The default is
1.e-5 and it is not recommended to change it.
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
with a value of "Parameters". The default choice is "State".
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
being optimized. Bounds can be given by a list of list of pairs of
lower/upper bounds for each variable, with extreme values every time there
is no bound. The bounds can always be specified, but they are taken into
- account only by the constrained minimizers.
+ account only by the constrained optimizers.
ConstrainedBy
This key allows to define the method to take bounds into account. The
with a value of "Parameters". The default choice is "State".
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
being optimized. Bounds can be given by a list of list of pairs of
lower/upper bounds for each variable, with extreme values every time there
is no bound. The bounds can always be specified, but they are taken into
- account only by the constrained minimizers.
+ account only by the constrained optimizers.
ConstrainedBy
This key allows to define the method to take bounds into account. The
Alpha, Beta, Kappa, Reconditioner
These keys are internal scaling parameters. "Alpha" requires a value between
- 1.e-4 and 1. "Beta" has an optimal value of 2 for gaussian *a priori*
+ 1.e-4 and 1. "Beta" has an optimal value of 2 for Gaussian *a priori*
distribution. "Kappa" requires an integer value, and the right default is
obtained by setting it to 0. "Reconditioner" requires a value between 1.e-3
and 10; it defaults to 1.
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
QualityCriterion
This key indicates the quality criterion, minimized to find the optimal
state estimate. The default is the usual data assimilation criterion named
- "DA", the augmented ponderated least squares. The possible criteria has to
+ "DA", the augmented weighted least squares. The possible criteria has to
be in the following list, where the equivalent names are indicated by "=":
["AugmentedPonderatedLeastSquares"="APLS"="DA",
"PonderatedLeastSquares"="PLS", "LeastSquares"="LS"="L2",
initialization from the computer.
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
and it is recommended to adapt it to the needs on real problems.
StoreInternalVariables
- This boolean key allows to store default internal variables, mainly the
+ This Boolean key allows to store default internal variables, mainly the
current state during iterative optimization process. Be careful, this can be
a numerically costly choice in certain calculation cases. The default is
"False".
On input and output of these operators, the :math:`\mathbf{x}` and
:math:`\mathbf{y}` variables or their increments are mathematically vectors,
-and they are given as non-orented vectors (of type list or Numpy array) or
+and they are given as non-oriented vectors (of type list or Numpy array) or
oriented ones (of type Numpy matrix).
Then, **to describe completely an operator, the user has only to provide a
return Y=O(X)
In this case, the user also has to provide a value for the differential increment
-(or keep the devault value), using through the GUI the keyword
+(or keep the default value), using through the GUI the keyword
"*DifferentialIncrement*", which has a default value of 1%. This coefficient
-will be used in the finite difference approximation to build the tangent and
-adjoint operators. The finite difference approximation order can also be chosen
+will be used in the finite differences approximation to build the tangent and
+adjoint operators. The finite differences approximation order can also be chosen
through the GUI, using the keyword "*CenteredFiniteDifference*", with 0 for an
uncentered scheme of first order (which is the default value), and with 1 for a
centered scheme of second order (at twice the first order computational cost).
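
As a reminder of the standard form of such approximations (notation only, the
exact ADAO internals may differ), denoting by :math:`\alpha_i` the differential
increment applied to the component :math:`x_i` of :math:`\mathbf{x}`, and by
:math:`\mathbf{e}_i` the corresponding basis vector, the uncentered and the
centered schemes read respectively:

.. math:: \frac{\partial O}{\partial x_i}(\mathbf{x}) \approx \frac{O(\mathbf{x}+\alpha_i\mathbf{e}_i)-O(\mathbf{x})}{\alpha_i}

.. math:: \frac{\partial O}{\partial x_i}(\mathbf{x}) \approx \frac{O(\mathbf{x}+\alpha_i\mathbf{e}_i)-O(\mathbf{x}-\alpha_i\mathbf{e}_i)}{2\alpha_i}
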
.. index:: single: TangentOperator
.. index:: single: AdjointOperator
-**In general, it is recommended to use the first functionnal form rather than
+**In general, it is recommended to use the first functional form rather than
the second one. A small performance improvement is not a good reason to use a
-detailled implementation as this second functional form.**
+detailed implementation as this second functional form.**
The second one consists in directly providing the three associated operators
:math:`O`, :math:`\mathbf{O}` and :math:`\mathbf{O}^*`. This is done by using
For some algorithms, it is required that the tangent and adjoint functions can
return the matrix equivalent to the linear operator. In this case, when
-respectivly the ``dX`` or the ``Y`` arguments are ``None``, the user has to
+respectively the ``dX`` or the ``Y`` arguments are ``None``, the user has to
return the associated matrix.
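
For illustration only, and assuming for this sketch a simple ``(X, dX)`` and
``(X, Y)`` argument layout (which may differ from the exact ADAO calling
convention), the expected pattern is the following, with an arbitrary explicit
matrix standing for the linear operator::

    import numpy

    # Hypothetical linear operator, given here by an arbitrary explicit matrix
    O_matrix = numpy.matrix("1 0 0 ; 0 2 0 ; 0 0 3")

    def TangentOperator( X, dX ):
        if dX is None:
            # Some algorithms ask for the matrix equivalent of the operator
            return O_matrix
        # Otherwise, return the action of the tangent operator on dX
        return O_matrix * numpy.asmatrix( numpy.ravel( dX ) ).T

    def AdjointOperator( X, Y ):
        if Y is None:
            # Matrix equivalent of the adjoint operator: the transpose
            return O_matrix.T
        # Otherwise, return the action of the adjoint operator on Y
        return O_matrix.T * numpy.asmatrix( numpy.ravel( Y ) ).T
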
**Important warning:** the names "*DirectOperator*", "*TangentOperator*" and
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
In some cases, the evolution or the observation operator is required to be
-controled by an external input control, given *a priori*. In this case, the
+controlled by an external input control, given *a priori*. In this case, the
generic form of the incremental model is slightly modified as follows:
.. math:: \mathbf{y} = O( \mathbf{x}, \mathbf{u})
.. index:: single: ObservationError
This first form is the default and more general one. The covariance matrix
-:math:`\mathbf{M}` has to be fully specified. Even if the matrix is symetric by
+:math:`\mathbf{M}` has to be fully specified. Even if the matrix is symmetric by
nature, the entire :math:`\mathbf{M}` matrix has to be given.
.. math:: \mathbf{M} = \begin{pmatrix}
chargé en python. On donne ici une séquence complète de commandes pour tester la
validité du schéma avant de l'exécuter, ajoutant des lignes supplémentaires
initiales pour charger de manière explicite le catalogue de types pour éviter
-des difficultés obscures::
+d'obscures difficultés::
#-*-coding:iso-8859-1-*-
import pilot
print p.getErrorReport()
Cette démarche permet par exemple d'éditer le schéma YACS XML en mode texte TUI,
-ou pour rassembler les résultats pour un usage ultérieur.
+ou de rassembler les résultats pour un usage ultérieur.
Obtenir des informations sur des variables spéciales au cours d'un calcul ADAO en YACS
--------------------------------------------------------------------------------------
quel que soit le niveau de surveillance choisi (c'est-à-dire même si ces
variables ne sont pas affichées).
+Accélérer les calculs de dérivées numériques en utilisant un mode parallèle
+---------------------------------------------------------------------------
+
+Lors de la définition d'un opérateur, comme décrit dans la chapitre
+:ref:`section_reference`, l'utilisateur peut choisir la forme fonctionnelle
+"*ScriptWithOneFunction*". Cette forme conduit explicitement à approximer les
+opérateurs tangent et adjoint par un calcul par différences finies. Il requiert
+de nombreux appels à l'opérateur direct (fonction définie par l'utilisateur), au
+moins autant de fois que la dimension du vecteur d'état. Ce sont ces appels qui
+peuvent être potentiellement exécutés en parallèle.
+
+Sous certaines conditions, il est alors possible d'accélérer les calculs de
+dérivées numériques en utilisant un mode parallèle pour l'approximation par
+différences finies. Lors de la définition d'un cas ADAO, c'est effectué en
+ajoutant le mot-clé optionnel "*EnableMultiProcessing*", mis à "1", de la
+commande "*SCRIPTWITHONEFUNCTION*" dans la définition de l'opérateur. Le mode
+parallèle utilise uniquement des ressources locales (à la fois multi-coeurs ou
+multi-processeurs) de l'ordinateur sur lequel SALOME est en train de tourner,
+demandant autant de ressources que disponible. Par défaut, ce mode parallèle est
+désactivé ("*EnableMultiProcessing=0*").
+
+Les principales conditions pour réaliser ces calculs parallèles viennent de la
+fonction définie par l'utilisateur, qui représente l'opérateur direct. Cette
+fonction doit au moins être "thread safe" pour être exécutée dans un
+environnement Python parallèle (notions au-delà du cadre de ce paragraphe). Il
+n'est pas évident de donner des règles générales, donc il est recommandé, à
+l'utilisateur qui active ce parallélisme interne, de vérifier soigneusement sa
+fonction et les résultats obtenus.
+
+D'un point de vue utilisateur, certaines conditions, qui doivent être réunies
+pour mettre en place des calculs parallèles pour les approximations des
+opérateurs tangent et adjoint, sont les suivantes :
+
+#. La dimension du vecteur d'état est supérieure à 2 ou 3.
+#. Le calcul unitaire de la fonction utilisateur directe "dure un certain temps", c'est-à-dire plus que quelques minutes.
+#. La fonction utilisateur directe n'utilise pas déjà du parallélisme (ou l'exécution parallèle est désactivée dans le calcul de l'utilisateur).
+#. La fonction utilisateur directe ne nécessite pas d'accès en lecture/écriture de ressources communes, principalement des données stockées ou des espaces mémoire.
+
+Si ces conditions sont satisfaites, l'utilisateur peut choisir d'activer le
+parallélisme interne pour le calcul des dérivées numériques. Malgré la
+simplicité d'activation, obtenue en définissant une variable seulement,
+l'utilisateur est fortement invité à vérifier les résultats de ses calculs. Il
+faut au moins les effectuer une fois avec le parallélisme activé, et une autre
+fois avec le parallélisme désactivé, pour comparer les résultats. Si cette mise
+en oeuvre échoue à un moment ou à un autre, il faut savoir que ce schéma de
+parallélisme fonctionne pour des codes complexes, comme *Code_Aster* dans
+*SalomeMeca* [SalomeMeca]_ par exemple. Donc vérifiez votre fonction d'opérateur
+avant et pendant l'activation du parallélisme...
+
+**En cas de doute, il est recommandé de NE PAS ACTIVER ce parallélisme.**
+
Passer d'une version d'ADAO à une nouvelle
------------------------------------------
.. [Salome] *SALOME The Open Source Integration Platform for Numerical Simulation*, http://www.salome-platform.org/
+.. [SalomeMeca] *Salome_Meca et Code_Aster, Analyse des Structures et Thermomécanique pour les Etudes et la Recherche*, http://www.code-aster.org/
+
.. [Tarantola87] Tarantola A., *Inverse Problem: Theory Methods for Data Fitting and Parameter Estimation*, Elsevier, 1987
.. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997
répétitifs, et c'est la méthode la plus polyvalente pour paramétrer les données
d'entrée. **Mais attention, la méthodologie par scripts n'est pas une procédure
"sûre", en ce sens que des données erronées ou des erreurs dans les calculs,
-peuvent être directement introduites dans l'exécution du schéma YACS.**
+peuvent être directement introduites dans l'exécution du schéma YACS.
+L'utilisateur doit vérifier avec attention le contenu de ses scripts.**
Ajout de paramètres pour contrôler l'algorithme d'assimilation de données
-------------------------------------------------------------------------
Si aucune borne n'est requise sur les variables de contrôle, alors on peut
choisir les algorithmes de minimisation "*BFGS*" ou "*CG*" pour tous les
algorithmes variationnels d'assimilation de données ou d'optimisation. Pour
-l'optimisation sous contraintes, le minimiseur "*LBFGSB*" est souvent plus
+l'optimisation sous contraintes, l'algorithme "*LBFGSB*" est bien souvent plus
robuste, mais le "*TNC*" est parfois plus performant.
Ensuite le script peut être ajouté au cas ADAO, dans une entrée de type fichier
:ref:`section_advanced`, sur l'obtention de renseignements supplémentaires, ou
sur l'usage par scripts, sans interface utilisateur graphique (GUI). Enfin, pour
respecter les exigences de licence du module, n'oubliez pas de lire la partie
-:ref:`section_licence`.
+:ref:`section_license`.
Dans cette documentation, on utilise les notations standards de l'algèbre
linéaire, de l'assimilation de données (comme décrit dans [Ide97]_) et de
examples
reference
advanced
- licence
+ license
glossary
bibliography
+++ /dev/null
-.. _section_licence:
-
-================================================================================
-Licence et conditions d'utilisation pour le module
-================================================================================
-
-.. index:: single: LICENCE
-.. index:: single: SALOME
-.. index:: single: ADAO
-
-La licence pour ce module est la GNU Lesser General Public License (Lesser GPL),
-tel qu'il est déclaré ici et dans les fichiers sources::
-
- <ADAO, a SALOME module for Data Assimilation and Optimization>
-
- Copyright (C) 2008-2014 EDF R&D
-
- This library is free software; you can redistribute it and/or
- modify it under the terms of the GNU Lesser General Public
- License as published by the Free Software Foundation; either
- version 2.1 of the License, or (at your option) any later version.
-
- This library is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- Lesser General Public License for more details.
-
- You should have received a copy of the GNU Lesser General Public
- License along with this library; if not, write to the Free Software
- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-
- See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
-
-En outre, nous souhaitons que toute publication décrivant des travaux utilisant
-ce module, ou tout produit commercial l'utilisant, cite au moins l'une des
-références ci-dessous :
-
- * *ADAO, a SALOME module for Data Assimilation and Optimization*,
- http://www.salome-platform.org/
-
- * *ADAO, un module SALOME pour l'Assimilation de Données et l'Aide à
- l'Optimisation*, http://www.salome-platform.org/
-
- * *SALOME The Open Source Integration Platform for Numerical Simulation*,
- http://www.salome-platform.org/
-
-La documentation du module est également couverte par la licence et l'obligation
-de citation.
--- /dev/null
+.. _section_license:
+
+================================================================================
+Licence et conditions d'utilisation pour le module
+================================================================================
+
+.. index:: single: LICENCE
+.. index:: single: SALOME
+.. index:: single: ADAO
+
+La licence pour ce module est la GNU Lesser General Public License (Lesser GPL),
+tel qu'il est déclaré ici et dans les fichiers sources::
+
+ <ADAO, a SALOME module for Data Assimilation and Optimization>
+
+ Copyright (C) 2008-2014 EDF R&D
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+
+En outre, nous souhaitons que toute publication décrivant des travaux utilisant
+ce module, ou tout produit commercial l'utilisant, cite au moins l'une des
+références ci-dessous :
+
+ * *ADAO, a SALOME module for Data Assimilation and Optimization*,
+ http://www.salome-platform.org/
+
+ * *ADAO, un module SALOME pour l'Assimilation de Données et l'Aide à
+ l'Optimisation*, http://www.salome-platform.org/
+
+ * *SALOME The Open Source Integration Platform for Numerical Simulation*,
+ http://www.salome-platform.org/
+
+La documentation du module est également couverte par la licence et l'obligation
+de citation.