.. [Morales11] Morales J.L., Nocedal J., *L-BFGS-B: Remark on Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization*, ACM Transactions on Mathematical Software, 38(1), 2011
+.. [Nelder] Nelder J.A., Mead R., *A simplex method for function minimization*, The Computer Journal, 7(4), pp.308-313, 1965
+
+.. [Powell] Powell M.J.D., *An efficient method for finding the minimum of a function of several variables without calculating derivatives*, The Computer Journal, 7(2), pp.155-162, 1964
+
.. [Salome] *SALOME The Open Source Integration Platform for Numerical Simulation*, http://www.salome-platform.org/
.. [SalomeMeca] *Salome_Meca and Code_Aster, Analysis of Structures and Thermomechanics for Studies & Research*, http://www.code-aster.org/
either state-estimation, with a value of "State", or parameter-estimation,
with a value of "Parameters". The default choice is "State".
+ Example : ``{"EstimationOf":"Parameters"}``
+
ProjectedGradientTolerance
This key indicates a limit value, leading to stop successfully the iterative
optimization process when all the components of the projected gradient are
- [Byrd95]_
- [Morales11]_
- [Talagrand97]_
+ - [Zhu97]_
--- /dev/null
+..
+ Copyright (C) 2008-2015 EDF R&D
+
+ This file is part of SALOME ADAO module.
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+
+ Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+.. index:: single: DerivativeFreeOptimization
+.. _section_ref_algorithm_DerivativeFreeOptimization:
+
+Calculation algorithm "*DerivativeFreeOptimization*"
+----------------------------------------------------
+
+.. warning::
+
+  in its present version, this algorithm is experimental, and so may change in
+  forthcoming versions.
+
+Description
++++++++++++
+
+This algorithm performs an estimation of the state of a dynamic system by
+minimization of a cost function :math:`J` without gradient. It is a method
+that does not use the derivatives of the cost function. It falls in the same
+category as the :ref:`section_ref_algorithm_ParticleSwarmOptimization`.
+
+This is an optimization method allowing for global minimum search of a general
+error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
+with or without weights. The default error function is the augmented weighted
+least squares function, classically used in data assimilation.
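+
+As an illustration, the augmented weighted least squares criterion minimized
+by default can be written, with the notations used in this documentation, as:
+
+.. math:: J(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}^b)^T\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}^b) + \frac{1}{2}(\mathbf{y}^o-H(\mathbf{x}))^T\mathbf{R}^{-1}(\mathbf{y}^o-H(\mathbf{x}))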
+
+Optional and required commands
+++++++++++++++++++++++++++++++
+
+.. index:: single: AlgorithmParameters
+.. index:: single: Background
+.. index:: single: BackgroundError
+.. index:: single: Observation
+.. index:: single: ObservationError
+.. index:: single: ObservationOperator
+.. index:: single: Minimizer
+.. index:: single: MaximumNumberOfSteps
+.. index:: single: MaximumNumberOfFunctionEvaluations
+.. index:: single: StateVariationTolerance
+.. index:: single: CostDecrementTolerance
+.. index:: single: QualityCriterion
+.. index:: single: StoreSupplementaryCalculations
+
+The general required commands, available in the editing user interface, are the
+following:
+
+ Background
+ *Required command*. This indicates the background or initial vector used,
+ previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
+ "*Vector*" or a *VectorSerie*" type object.
+
+ BackgroundError
+ *Required command*. This indicates the background error covariance matrix,
+ previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
+ type object, a "*ScalarSparseMatrix*" type object, or a
+ "*DiagonalSparseMatrix*" type object.
+
+ Observation
+ *Required command*. This indicates the observation vector used for data
+ assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
+    is defined as a "*Vector*" or a "*VectorSerie*" type object.
+
+ ObservationError
+ *Required command*. This indicates the observation error covariance matrix,
+ previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
+ object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
+ type object.
+
+ ObservationOperator
+ *Required command*. This indicates the observation operator, previously
+ noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
+ results :math:`\mathbf{y}` to be compared to observations
+ :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
+ a "*Matrix*" type one. In the case of "*Function*" type, different
+ functional forms can be used, as described in the section
+ :ref:`section_ref_operator_requirements`. If there is some control :math:`U`
+ included in the observation, the operator has to be applied to a pair
+ :math:`(X,U)`.
+
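+For illustration, a minimal "*Function*" type observation operator can be a
+simple Python function of the state, as in the following indicative sketch
+(here the identity, purely for demonstration; the actual functional forms are
+detailed in the section referenced above)::
+
+    def DirectOperator( X ):
+        "Observation operator H applied to the state X (here the identity)"
+        import numpy
+        return numpy.ravel( X )
+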
+The general optional commands, available in the editing user interface, are
+indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
+of the command "*AlgorithmParameters*" allow choosing the specific options,
+described hereafter, of the algorithm. See
+:ref:`section_ref_options_Algorithm_Parameters` for the proper use of this
+command.
+
+The options of the algorithm are the following:
+
+ Minimizer
+    This key allows choosing the optimization minimizer. The default choice
+    is "POWELL", and the possible ones are "POWELL" (modified Powell
+    unconstrained minimizer, see [Powell]_) and "SIMPLEX" (nonlinear
+    unconstrained minimizer of simplex or Nelder-Mead type, see [Nelder]_).
+    It is recommended to stay with the default.
+
+ Example : ``{"Minimizer":"POWELL"}``
+
+ MaximumNumberOfSteps
+    This key indicates the maximum number of iterations allowed for iterative
+    optimization. The default is 15000, an arbitrary limit effectively
+    equivalent to no limit on iterations. It is then recommended to adapt this
+    parameter to the needs of real problems. For some optimizers, the
+    effective stopping step can be slightly different from the limit due to
+    internal control requirements of the algorithm.
+
+ Example : ``{"MaximumNumberOfSteps":50}``
+
+ MaximumNumberOfFunctionEvaluations
+    This key indicates the maximum number of evaluations of the cost function
+    to be optimized. The default is 15000, an arbitrary limit effectively
+    equivalent to no limit on iterations. The calculation can exceed this
+    limit when an outer optimization loop has to be finished. It is strongly
+    recommended to adapt this parameter to the needs of real problems.
+
+ Example : ``{"MaximumNumberOfFunctionEvaluations":50}``
+
+ StateVariationTolerance
+ This key indicates the maximum relative variation of the state for stopping
+    by convergence on the state. The default is 1.e-4, and it is recommended
+    to adapt it to the needs of real problems.
+
+ Example : ``{"StateVariationTolerance":1.e-4}``
+
+ CostDecrementTolerance
+    This key indicates a limit value, leading the iterative optimization
+    process to stop successfully when the cost function decreases less than
+    this tolerance at the last step. The default is 1.e-7, and it is
+    recommended to adapt it to the needs of real problems.
+
+ Example : ``{"CostDecrementTolerance":1.e-7}``
+
+ QualityCriterion
+ This key indicates the quality criterion, minimized to find the optimal
+ state estimate. The default is the usual data assimilation criterion named
+ "DA", the augmented weighted least squares. The possible criteria has to be
+ in the following list, where the equivalent names are indicated by the sign
+ "=": ["AugmentedWeightedLeastSquares"="AWLS"="DA",
+ "WeightedLeastSquares"="WLS", "LeastSquares"="LS"="L2",
+ "AbsoluteValue"="L1", "MaximumError"="ME"].
+
+ Example : ``{"QualityCriterion":"DA"}``
+
+ StoreSupplementaryCalculations
+ This list indicates the names of the supplementary variables that can be
+ available at the end of the algorithm. It involves potentially costly
+    calculations or memory consumption. The default is an empty list, none of
+ these variables being calculated and stored by default. The possible names
+ are in the following list: ["CurrentState", "CostFunctionJ",
+ "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
+ "SimulatedObservationAtOptimum"].
+
+ Example : ``{"StoreSupplementaryCalculations":["CurrentState", "CostFunctionJ"]}``
+
+Information and variables available at the end of the algorithm
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+At the output, after executing the algorithm, there are variables and
+information originating from the calculation. The description of
+:ref:`section_ref_output_variables` shows how to obtain them by the method
+named ``get`` of the variable "*ADD*" of the post-processing. The input
+variables, available to the user at the output in order to facilitate the
+writing of post-processing procedures, are described in the
+:ref:`subsection_r_o_v_Inventaire`.
+
+The unconditional outputs of the algorithm are the following:
+
+ Analysis
+    *List of vectors*. Each element is an optimal state :math:`\mathbf{x}^*` in
+ optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.
+
+ Example : ``Xa = ADD.get("Analysis")[-1]``
+
+ CostFunctionJ
+ *List of values*. Each element is a value of the error function :math:`J`.
+
+ Example : ``J = ADD.get("CostFunctionJ")[:]``
+
+ CostFunctionJb
+ *List of values*. Each element is a value of the error function :math:`J^b`,
+ that is of the background difference part.
+
+ Example : ``Jb = ADD.get("CostFunctionJb")[:]``
+
+ CostFunctionJo
+ *List of values*. Each element is a value of the error function :math:`J^o`,
+ that is of the observation difference part.
+
+ Example : ``Jo = ADD.get("CostFunctionJo")[:]``
+
+ CurrentState
+ *List of vectors*. Each element is a usual state vector used during the
+ optimization algorithm procedure.
+
+ Example : ``Xs = ADD.get("CurrentState")[:]``
+
+The conditional outputs of the algorithm are the following:
+
+ SimulatedObservationAtBackground
+ *List of vectors*. Each element is a vector of observation simulated from
+ the background :math:`\mathbf{x}^b`.
+
+ Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+
+ SimulatedObservationAtCurrentState
+ *List of vectors*. Each element is an observed vector at the current state,
+ that is, in the observation space.
+
+ Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+
+ SimulatedObservationAtOptimum
+ *List of vectors*. Each element is a vector of observation simulated from
+ the analysis or optimal state :math:`\mathbf{x}^a`.
+
+ Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
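+
+For illustration, a minimal post-processing sketch using the outputs above
+(assuming the usual "*ADD*" variable of the post-processing is available)
+could be::
+
+    Xa = ADD.get("Analysis")[-1]       # Optimal state
+    J  = ADD.get("CostFunctionJ")[:]   # Whole history of the cost function
+    print "Optimal state:", Xa
+    print "Final cost J :", J[-1]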
+
+See also
+++++++++
+
+References to other sections:
+ - :ref:`section_ref_algorithm_ParticleSwarmOptimization`
+
+Bibliographical references:
+ - [Nelder]_
+ - [Powell]_
Description
+++++++++++
-This algorithm realizes an estimation of the state of a dynamic system by a
-particle swarm.
+This algorithm performs an estimation of the state of a dynamic system by
+minimization of a cost function :math:`J` by using a particle swarm. It is a
+method that does not use the derivatives of the cost function. It falls in the
+same category as the :ref:`section_ref_algorithm_DerivativeFreeOptimization`.
This is an optimization method allowing for global minimum search of a general
error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
++++++++
References to other sections:
+ - :ref:`section_ref_algorithm_DerivativeFreeOptimization`
+
+Bibliographical references:
- [WikipediaPSO]_
ref_algorithm_3DVAR
ref_algorithm_4DVAR
ref_algorithm_Blue
+ ref_algorithm_DerivativeFreeOptimization
ref_algorithm_EnsembleBlue
ref_algorithm_ExtendedBlue
ref_algorithm_ExtendedKalmanFilter
.. [Morales11] Morales J.L., Nocedal J., *L-BFGS-B: Remark on Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization*, ACM Transactions on Mathematical Software, 38(1), 2011
+.. [Nelder] Nelder J.A., Mead R., *A simplex method for function minimization*, The Computer Journal, 7(4), pp.308-313, 1965
+
+.. [Powell] Powell M.J.D., *An efficient method for finding the minimum of a function of several variables without calculating derivatives*, The Computer Journal, 7(2), pp.155-162, 1964
+
.. [Salome] *SALOME The Open Source Integration Platform for Numerical Simulation*, http://www.salome-platform.org/
.. [SalomeMeca] *Salome_Meca et Code_Aster, Analyse des Structures et Thermomécanique pour les Etudes et la Recherche*, http://www.code-aster.org/
- [Byrd95]_
- [Morales11]_
- [Talagrand97]_
+ - [Zhu97]_
--- /dev/null
+..
+ Copyright (C) 2008-2015 EDF R&D
+
+ This file is part of SALOME ADAO module.
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+
+ Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+.. index:: single: DerivativeFreeOptimization
+.. _section_ref_algorithm_DerivativeFreeOptimization:
+
+Algorithme de calcul "*DerivativeFreeOptimization*"
+----------------------------------------------------
+
+.. warning::
+
+ dans sa présente version, cet algorithme est expérimental, et reste donc
+ susceptible de changements dans les prochaines versions.
+
+Description
++++++++++++
+
+Cet algorithme réalise une estimation d'état d'un système dynamique par
+minimisation d'une fonctionnelle d'écart :math:`J` sans gradient. C'est une
+méthode qui n'utilise pas les dérivées de la fonctionnelle d'écart. Elle entre
+dans la même catégorie que
+l':ref:`section_ref_algorithm_ParticleSwarmOptimization`.
+
+C'est une méthode d'optimisation permettant la recherche du minimum global d'une
+fonctionnelle d'erreur :math:`J` quelconque de type :math:`L^1`, :math:`L^2` ou
+:math:`L^{\infty}`, avec ou sans pondérations. La fonctionnelle d'erreur par
+défaut est celle de moindres carrés pondérés augmentés, classiquement utilisée
+en assimilation de données.
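+
+À titre d'illustration, le critère de moindres carrés pondérés augmentés,
+minimisé par défaut, peut s'écrire, avec les notations de cette documentation,
+sous la forme :
+
+.. math:: J(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}^b)^T\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}^b) + \frac{1}{2}(\mathbf{y}^o-H(\mathbf{x}))^T\mathbf{R}^{-1}(\mathbf{y}^o-H(\mathbf{x}))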
+
+Commandes requises et optionnelles
+++++++++++++++++++++++++++++++++++
+
+.. index:: single: AlgorithmParameters
+.. index:: single: Background
+.. index:: single: BackgroundError
+.. index:: single: Observation
+.. index:: single: ObservationError
+.. index:: single: ObservationOperator
+.. index:: single: Minimizer
+.. index:: single: MaximumNumberOfSteps
+.. index:: single: MaximumNumberOfFunctionEvaluations
+.. index:: single: StateVariationTolerance
+.. index:: single: CostDecrementTolerance
+.. index:: single: QualityCriterion
+.. index:: single: StoreSupplementaryCalculations
+
+Les commandes requises générales, disponibles dans l'interface en édition, sont
+les suivantes:
+
+ Background
+ *Commande obligatoire*. Elle définit le vecteur d'ébauche ou
+ d'initialisation, noté précédemment :math:`\mathbf{x}^b`. Sa valeur est
+ définie comme un objet de type "*Vector*" ou de type "*VectorSerie*".
+
+ BackgroundError
+ *Commande obligatoire*. Elle définit la matrice de covariance des erreurs
+ d'ébauche, notée précédemment :math:`\mathbf{B}`. Sa valeur est définie
+ comme un objet de type "*Matrix*", de type "*ScalarSparseMatrix*", ou de
+ type "*DiagonalSparseMatrix*".
+
+ Observation
+ *Commande obligatoire*. Elle définit le vecteur d'observation utilisé en
+ assimilation de données ou en optimisation, et noté précédemment
+ :math:`\mathbf{y}^o`. Sa valeur est définie comme un objet de type "*Vector*"
+ ou de type "*VectorSerie*".
+
+ ObservationError
+ *Commande obligatoire*. Elle définit la matrice de covariance des erreurs
+    d'observation, notée précédemment :math:`\mathbf{R}`. Sa valeur est définie
+ comme un objet de type "*Matrix*", de type "*ScalarSparseMatrix*", ou de
+ type "*DiagonalSparseMatrix*".
+
+ ObservationOperator
+ *Commande obligatoire*. Elle indique l'opérateur d'observation, noté
+ précédemment :math:`H`, qui transforme les paramètres d'entrée
+ :math:`\mathbf{x}` en résultats :math:`\mathbf{y}` qui sont à comparer aux
+ observations :math:`\mathbf{y}^o`. Sa valeur est définie comme un objet de
+ type "*Function*" ou de type "*Matrix*". Dans le cas du type "*Function*",
+ différentes formes fonctionnelles peuvent être utilisées, comme décrit dans
+ la section :ref:`section_ref_operator_requirements`. Si un contrôle
+ :math:`U` est inclus dans le modèle d'observation, l'opérateur doit être
+ appliqué à une paire :math:`(X,U)`.
+
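+À titre d'illustration, un opérateur d'observation minimal de type "*Function*"
+peut être une simple fonction Python de l'état, comme dans l'esquisse purement
+indicative suivante (ici l'identité ; les formes fonctionnelles exactes sont
+détaillées dans la section référencée ci-dessus)::
+
+    def DirectOperator( X ):
+        "Opérateur d'observation H appliqué à l'état X (ici l'identité)"
+        import numpy
+        return numpy.ravel( X )
+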
+Les commandes optionnelles générales, disponibles dans l'interface en édition,
+sont indiquées dans la :ref:`section_ref_assimilation_keywords`. De plus, les
+paramètres de la commande "*AlgorithmParameters*" permettent d'indiquer les
+options particulières, décrites ci-après, de l'algorithme. On se reportera à la
+:ref:`section_ref_options_Algorithm_Parameters` pour le bon usage de cette
+commande.
+
+Les options de l'algorithme sont les suivantes:
+
+ Minimizer
+ Cette clé permet de changer le minimiseur pour l'optimiseur. Le choix par
+ défaut est "POWELL", et les choix possibles sont "POWELL" (minimisation sans
+ contrainte de type Powell modifiée, voir [Powell]_), "SIMPLEX" (minimisation
+ sans contrainte de type simplexe ou Nelder-Mead, voir [Nelder]_). Il est
+ conseillé de conserver la valeur par défaut.
+
+ Exemple : ``{"Minimizer":"POWELL"}``
+
+ MaximumNumberOfSteps
+ Cette clé indique le nombre maximum d'itérations possibles en optimisation
+ itérative. Le défaut est 15000, qui est une limite arbitraire. Il est ainsi
+ fortement recommandé d'adapter ce paramètre aux besoins pour des problèmes
+ réels. Pour certains optimiseurs, le nombre de pas effectif d'arrêt peut
+ être légèrement différent de la limite à cause d'exigences de contrôle
+ interne de l'algorithme.
+
+ Exemple : ``{"MaximumNumberOfSteps":50}``
+
+ MaximumNumberOfFunctionEvaluations
+ Cette clé indique le nombre maximum d'évaluations possibles de la
+ fonctionnelle à optimiser. Le défaut est 15000, qui est une limite
+ arbitraire. Le calcul peut dépasser ce nombre lorsqu'il doit finir une
+ boucle externe d'optimisation. Il est fortement recommandé d'adapter ce
+ paramètre aux besoins pour des problèmes réels.
+
+ Exemple : ``{"MaximumNumberOfFunctionEvaluations":50}``
+
+ StateVariationTolerance
+    Cette clé indique la variation relative maximale de l'état pour l'arrêt
+ par convergence sur l'état. Le défaut est de 1.e-4, et il est recommandé
+ de l'adapter aux besoins pour des problèmes réels.
+
+ Exemple : ``{"StateVariationTolerance":1.e-4}``
+
+ CostDecrementTolerance
+    Cette clé indique une valeur limite, conduisant à arrêter avec succès le
+    processus itératif d'optimisation lorsque la fonction coût décroît moins
+    que cette tolérance au dernier pas. Le défaut est de 1.e-7, et il est
+    recommandé de l'adapter aux besoins pour des problèmes réels.
+
+ Exemple : ``{"CostDecrementTolerance":1.e-7}``
+
+ QualityCriterion
+ Cette clé indique le critère de qualité, qui est minimisé pour trouver
+ l'estimation optimale de l'état. Le défaut est le critère usuel de
+ l'assimilation de données nommé "DA", qui est le critère de moindres carrés
+ pondérés augmentés. Les critères possibles sont dans la liste suivante, dans
+ laquelle les noms équivalents sont indiqués par un signe "=" :
+ ["AugmentedWeightedLeastSquares"="AWLS"="DA", "WeightedLeastSquares"="WLS",
+ "LeastSquares"="LS"="L2", "AbsoluteValue"="L1", "MaximumError"="ME"].
+
+ Exemple : ``{"QualityCriterion":"DA"}``
+
+ StoreSupplementaryCalculations
+ Cette liste indique les noms des variables supplémentaires qui peuvent être
+ disponibles à la fin de l'algorithme. Cela implique potentiellement des
+ calculs ou du stockage coûteux. La valeur par défaut est une liste vide,
+ aucune de ces variables n'étant calculée et stockée par défaut. Les noms
+ possibles sont dans la liste suivante : ["CurrentState", "CostFunctionJ",
+ "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
+ "SimulatedObservationAtOptimum"].
+
+ Exemple : ``{"StoreSupplementaryCalculations":["CurrentState", "CostFunctionJ"]}``
+
+Informations et variables disponibles à la fin de l'algorithme
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+En sortie, après exécution de l'algorithme, on dispose d'informations et de
+variables issues du calcul. La description des
+:ref:`section_ref_output_variables` indique la manière de les obtenir par la
+méthode nommée ``get`` de la variable "*ADD*" du post-processing. Les variables
+d'entrée, mises à disposition de l'utilisateur en sortie pour faciliter
+l'écriture des procédures de post-processing, sont décrites dans
+l':ref:`subsection_r_o_v_Inventaire`.
+
+Les sorties non conditionnelles de l'algorithme sont les suivantes:
+
+ Analysis
+    *Liste de vecteurs*. Chaque élément est un état optimal :math:`\mathbf{x}^*`
+ en optimisation ou une analyse :math:`\mathbf{x}^a` en assimilation de
+ données.
+
+ Exemple : ``Xa = ADD.get("Analysis")[-1]``
+
+ CostFunctionJ
+ *Liste de valeurs*. Chaque élément est une valeur de fonctionnelle d'écart
+ :math:`J`.
+
+ Exemple : ``J = ADD.get("CostFunctionJ")[:]``
+
+ CostFunctionJb
+ *Liste de valeurs*. Chaque élément est une valeur de fonctionnelle d'écart
+ :math:`J^b`, c'est-à-dire de la partie écart à l'ébauche.
+
+ Exemple : ``Jb = ADD.get("CostFunctionJb")[:]``
+
+ CostFunctionJo
+ *Liste de valeurs*. Chaque élément est une valeur de fonctionnelle d'écart
+ :math:`J^o`, c'est-à-dire de la partie écart à l'observation.
+
+ Exemple : ``Jo = ADD.get("CostFunctionJo")[:]``
+
+ CurrentState
+ *Liste de vecteurs*. Chaque élément est un vecteur d'état courant utilisé
+ au cours du déroulement de l'algorithme d'optimisation.
+
+ Exemple : ``Xs = ADD.get("CurrentState")[:]``
+
+Les sorties conditionnelles de l'algorithme sont les suivantes:
+
+ SimulatedObservationAtBackground
+ *Liste de vecteurs*. Chaque élément est un vecteur d'observation simulé à
+ partir de l'ébauche :math:`\mathbf{x}^b`.
+
+ Exemple : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
+
+ SimulatedObservationAtCurrentState
+ *Liste de vecteurs*. Chaque élément est un vecteur observé à l'état courant,
+ c'est-à-dire dans l'espace des observations.
+
+ Exemple : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]``
+
+ SimulatedObservationAtOptimum
+ *Liste de vecteurs*. Chaque élément est un vecteur d'observation simulé à
+ partir de l'analyse ou de l'état optimal :math:`\mathbf{x}^a`.
+
+ Exemple : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
+
+Voir aussi
+++++++++++
+
+Références vers d'autres sections :
+ - :ref:`section_ref_algorithm_ParticleSwarmOptimization`
+
+Références bibliographiques :
+ - [Nelder]_
+ - [Powell]_
Description
+++++++++++
-Cet algorithme réalise une estimation de l'état d'un système dynamique par un
-essaim particulaire.
+Cet algorithme réalise une estimation de l'état d'un système dynamique par
+minimisation d'une fonctionnelle d'écart :math:`J` en utilisant un essaim
+particulaire. C'est une méthode qui n'utilise pas les dérivées de la
+fonctionnelle d'écart. Elle entre dans la même catégorie que
+l':ref:`section_ref_algorithm_DerivativeFreeOptimization`.
C'est une méthode d'optimisation permettant la recherche du minimum global d'une
fonctionnelle d'erreur :math:`J` quelconque de type :math:`L^1`, :math:`L^2` ou
Voir aussi
++++++++++
+Références vers d'autres sections :
+ - :ref:`section_ref_algorithm_DerivativeFreeOptimization`
+
Références bibliographiques :
- [WikipediaPSO]_
ref_algorithm_3DVAR
ref_algorithm_4DVAR
ref_algorithm_Blue
+ ref_algorithm_DerivativeFreeOptimization
ref_algorithm_EnsembleBlue
ref_algorithm_ExtendedBlue
ref_algorithm_ExtendedKalmanFilter
--- /dev/null
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2015 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+import logging
+from daCore import BasicObjects
+import numpy, scipy.optimize
+
+# ==============================================================================
+class ElementaryAlgorithm(BasicObjects.Algorithm):
+ def __init__(self):
+ BasicObjects.Algorithm.__init__(self, "DERIVATIVEFREEOPTIMIZATION")
+ self.defineRequiredParameter(
+ name = "Minimizer",
+ default = "POWELL",
+ typecast = str,
+ message = "Minimiseur utilisé",
+ listval = ["POWELL", "SIMPLEX"],
+ )
+ self.defineRequiredParameter(
+ name = "MaximumNumberOfSteps",
+ default = 15000,
+ typecast = int,
+ message = "Nombre maximal de pas d'optimisation",
+ minval = -1,
+ )
+ self.defineRequiredParameter(
+ name = "MaximumNumberOfFunctionEvaluations",
+ default = 15000,
+ typecast = int,
+ message = "Nombre maximal de d'évaluations de la function",
+ minval = -1,
+ )
+ self.defineRequiredParameter(
+ name = "StateVariationTolerance",
+ default = 1.e-4,
+ typecast = float,
+ message = "Variation relative maximale de l'état lors de l'arrêt",
+ )
+ self.defineRequiredParameter(
+ name = "CostDecrementTolerance",
+ default = 1.e-7,
+ typecast = float,
+ message = "Diminution relative minimale du cout lors de l'arrêt",
+ )
+ self.defineRequiredParameter(
+ name = "QualityCriterion",
+ default = "AugmentedWeightedLeastSquares",
+ typecast = str,
+ message = "Critère de qualité utilisé",
+ listval = ["AugmentedWeightedLeastSquares","AWLS","DA",
+ "WeightedLeastSquares","WLS",
+ "LeastSquares","LS","L2",
+ "AbsoluteValue","L1",
+ "MaximumError","ME"],
+ )
+ self.defineRequiredParameter(
+ name = "StoreInternalVariables",
+ default = False,
+ typecast = bool,
+ message = "Stockage des variables internes ou intermédiaires du calcul",
+ )
+ self.defineRequiredParameter(
+ name = "StoreSupplementaryCalculations",
+ default = [],
+ typecast = tuple,
+ message = "Liste de calculs supplémentaires à stocker et/ou effectuer",
+ listval = ["CurrentState", "CostFunctionJ", "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]
+ )
+
+ def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
+ self._pre_run()
+ if logging.getLogger().level < logging.WARNING:
+ self.__disp = 1
+ else:
+ self.__disp = 0
+ #
+ # Paramètres de pilotage
+ # ----------------------
+ self.setParameters(Parameters)
+ #
+ # Opérateurs
+ # ----------
+ Hm = HO["Direct"].appliedTo
+ #
+ # Précalcul des inversions de B et R
+ # ----------------------------------
+ BI = B.getI()
+ RI = R.getI()
+ #
+ # Définition de la fonction-coût
+ # ------------------------------
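+        # La fonction-coût évalue le critère de qualité choisi (AWLS, WLS, LS,
+        # L1 ou ME) pour un état donné, et stocke l'état courant ainsi que les
+        # valeurs de J, Jb et Jo à chaque évaluation.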
+ def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
+ _X = numpy.asmatrix(numpy.ravel( x )).T
+ self.StoredVariables["CurrentState"].store( _X )
+ _HX = Hm( _X )
+ _HX = numpy.asmatrix(numpy.ravel( _HX )).T
+ if "SimulatedObservationAtCurrentState" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["SimulatedObservationAtCurrentState"].store( _HX )
+ #
+ if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if BI is None or RI is None:
+ raise ValueError("Background and Observation error covariance matrix has to be properly defined!")
+ Jb = 0.5 * (_X - Xb).T * BI * (_X - Xb)
+ Jo = 0.5 * (Y - _HX).T * RI * (Y - _HX)
+ elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ if RI is None:
+ raise ValueError("Observation error covariance matrix has to be properly defined!")
+ Jb = 0.
+ Jo = 0.5 * (Y - _HX).T * RI * (Y - _HX)
+ elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ Jb = 0.
+ Jo = 0.5 * (Y - _HX).T * (Y - _HX)
+ elif QualityMeasure in ["AbsoluteValue","L1"]:
+ Jb = 0.
+ Jo = numpy.sum( numpy.abs(Y - _HX) )
+ elif QualityMeasure in ["MaximumError","ME"]:
+ Jb = 0.
+ Jo = numpy.max( numpy.abs(Y - _HX) )
+ #
+ J = float( Jb ) + float( Jo )
+ #
+ self.StoredVariables["CostFunctionJb"].store( Jb )
+ self.StoredVariables["CostFunctionJo"].store( Jo )
+ self.StoredVariables["CostFunctionJ" ].store( J )
+ return J
+ #
+ # Point de démarrage de l'optimisation : Xini = Xb
+ # ------------------------------------
+ Xini = numpy.ravel(Xb)
+ #
+ # Minimisation de la fonctionnelle
+ # --------------------------------
+ nbPreviousSteps = self.StoredVariables["CostFunctionJ"].stepnumber()
+ #
+ if self._parameters["Minimizer"] == "POWELL":
+ Minimum, J_optimal, direc, niter, nfeval, rc = scipy.optimize.fmin_powell(
+ func = CostFunction,
+ x0 = Xini,
+ args = (self._parameters["QualityCriterion"],),
+ maxiter = self._parameters["MaximumNumberOfSteps"]-1,
+ maxfun = self._parameters["MaximumNumberOfFunctionEvaluations"]-1,
+ xtol = self._parameters["StateVariationTolerance"],
+ ftol = self._parameters["CostDecrementTolerance"],
+ full_output = True,
+ disp = self.__disp,
+ )
+ elif self._parameters["Minimizer"] == "SIMPLEX":
+ Minimum, J_optimal, niter, nfeval, rc = scipy.optimize.fmin(
+ func = CostFunction,
+ x0 = Xini,
+ args = (self._parameters["QualityCriterion"],),
+ maxiter = self._parameters["MaximumNumberOfSteps"]-1,
+ maxfun = self._parameters["MaximumNumberOfFunctionEvaluations"]-1,
+ xtol = self._parameters["StateVariationTolerance"],
+ ftol = self._parameters["CostDecrementTolerance"],
+ full_output = True,
+ disp = self.__disp,
+ )
+ else:
+ raise ValueError("Error in Minimizer name: %s"%self._parameters["Minimizer"])
+ #
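+        # Indépendamment du minimum retourné par scipy, on retient comme
+        # optimum l'état réellement évalué de coût J minimal parmi les états
+        # stockés au cours des évaluations.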
+ IndexMin = numpy.argmin( self.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
+ MinJ = self.StoredVariables["CostFunctionJ"][IndexMin]
+ Minimum = self.StoredVariables["CurrentState"][IndexMin]
+ #
+ # Obtention de l'analyse
+ # ----------------------
+ Xa = numpy.asmatrix(numpy.ravel( Minimum )).T
+ #
+ self.StoredVariables["Analysis"].store( Xa.A1 )
+ #
+ if "SimulatedObservationAtBackground" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["SimulatedObservationAtBackground"].store( numpy.ravel(Hm(Xb)) )
+ if "SimulatedObservationAtOptimum" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["SimulatedObservationAtOptimum"].store( numpy.ravel(Hm(Xa)) )
+ #
+ self._post_run()
+ return 0
+
+# ==============================================================================
+if __name__ == "__main__":
+ print '\n AUTODIAGNOSTIC \n'
+++ /dev/null
-#-*-coding:iso-8859-1-*-
-#
-# Copyright (C) 2008-2015 EDF R&D
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-#
-# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
-#
-# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-
-import logging
-from daCore import BasicObjects
-import numpy, scipy.optimize
-
-# ==============================================================================
-class ElementaryAlgorithm(BasicObjects.Algorithm):
- def __init__(self):
- BasicObjects.Algorithm.__init__(self, "DERIVATIVESFREEOPTIMIZATION")
- self.defineRequiredParameter(
- name = "Minimizer",
- default = "POWELL",
- typecast = str,
- message = "Minimiseur utilisé",
- listval = ["POWELL", "SIMPLEX"],
- )
- self.defineRequiredParameter(
- name = "MaximumNumberOfSteps",
- default = 15000,
- typecast = int,
- message = "Nombre maximal de pas d'optimisation",
- minval = -1,
- )
- self.defineRequiredParameter(
- name = "MaximumNumberOfFunctionEvaluations",
- default = 15000,
- typecast = int,
- message = "Nombre maximal de d'évaluations de la function",
- minval = -1,
- )
- self.defineRequiredParameter(
- name = "StateVariationTolerance",
- default = 1.e-4,
- typecast = float,
- message = "Variation relative minimale de l'état lors de l'arrêt",
- )
- self.defineRequiredParameter(
- name = "CostDecrementTolerance",
- default = 1.e-7,
- typecast = float,
- message = "Diminution relative minimale du cout lors de l'arrêt",
- )
- self.defineRequiredParameter(
- name = "QualityCriterion",
- default = "AugmentedWeightedLeastSquares",
- typecast = str,
- message = "Critère de qualité utilisé",
- listval = ["AugmentedWeightedLeastSquares","AWLS","DA",
- "WeightedLeastSquares","WLS",
- "LeastSquares","LS","L2",
- "AbsoluteValue","L1",
- "MaximumError","ME"],
- )
- self.defineRequiredParameter(
- name = "StoreInternalVariables",
- default = False,
- typecast = bool,
- message = "Stockage des variables internes ou intermédiaires du calcul",
- )
- self.defineRequiredParameter(
- name = "StoreSupplementaryCalculations",
- default = [],
- typecast = tuple,
- message = "Liste de calculs supplémentaires à stocker et/ou effectuer",
- listval = ["CurrentState", "CostFunctionJ", "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]
- )
-
- def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
- self._pre_run()
- if logging.getLogger().level < logging.WARNING:
- self.__disp = 1
- else:
- self.__disp = 0
- #
- # Paramètres de pilotage
- # ----------------------
- self.setParameters(Parameters)
-# self.setParameterValue("StoreInternalVariables",True)
-# print self._parameters["StoreInternalVariables"]
- #
- # Opérateurs
- # ----------
- Hm = HO["Direct"].appliedTo
- #
- # Précalcul des inversions de B et R
- # ----------------------------------
- BI = B.getI()
- RI = R.getI()
- #
- # Définition de la fonction-coût
- # ------------------------------
- def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.asmatrix(numpy.ravel( x )).T
- self.StoredVariables["CurrentState"].store( _X )
- _HX = Hm( _X )
- _HX = numpy.asmatrix(numpy.ravel( _HX )).T
- if "SimulatedObservationAtCurrentState" in self._parameters["StoreSupplementaryCalculations"]:
- self.StoredVariables["SimulatedObservationAtCurrentState"].store( _HX )
- #
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
- if BI is None or RI is None:
- raise ValueError("Background and Observation error covariance matrix has to be properly defined!")
- Jb = 0.5 * (_X - Xb).T * BI * (_X - Xb)
- Jo = 0.5 * (Y - _HX).T * RI * (Y - _HX)
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
- if RI is None:
- raise ValueError("Observation error covariance matrix has to be properly defined!")
- Jb = 0.
- Jo = 0.5 * (Y - _HX).T * RI * (Y - _HX)
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
- Jb = 0.
- Jo = 0.5 * (Y - _HX).T * (Y - _HX)
- elif QualityMeasure in ["AbsoluteValue","L1"]:
- Jb = 0.
- Jo = numpy.sum( numpy.abs(Y - _HX) )
- elif QualityMeasure in ["MaximumError","ME"]:
- Jb = 0.
- Jo = numpy.max( numpy.abs(Y - _HX) )
- #
- J = float( Jb ) + float( Jo )
- #
- self.StoredVariables["CostFunctionJb"].store( Jb )
- self.StoredVariables["CostFunctionJo"].store( Jo )
- self.StoredVariables["CostFunctionJ" ].store( J )
- return J
- #
- # Point de démarrage de l'optimisation : Xini = Xb
- # ------------------------------------
- Xini = numpy.ravel(Xb)
- #
- # Minimisation de la fonctionnelle
- # --------------------------------
- nbPreviousSteps = self.StoredVariables["CostFunctionJ"].stepnumber()
- #
- if self._parameters["Minimizer"] == "POWELL":
- Minimum, J_optimal, direc, niter, nfeval, rc = scipy.optimize.fmin_powell(
- func = CostFunction,
- x0 = Xini,
- args = (self._parameters["QualityCriterion"],),
- maxiter = self._parameters["MaximumNumberOfSteps"]-1,
- maxfun = self._parameters["MaximumNumberOfFunctionEvaluations"]-1,
- xtol = self._parameters["StateVariationTolerance"],
- ftol = self._parameters["CostDecrementTolerance"],
- full_output = True,
- disp = self.__disp,
- )
- elif self._parameters["Minimizer"] == "SIMPLEX":
- Minimum, J_optimal, niter, nfeval, rc = scipy.optimize.fmin(
- func = CostFunction,
- x0 = Xini,
- args = (self._parameters["QualityCriterion"],),
- maxiter = self._parameters["MaximumNumberOfSteps"]-1,
- maxfun = self._parameters["MaximumNumberOfFunctionEvaluations"]-1,
- xtol = self._parameters["StateVariationTolerance"],
- ftol = self._parameters["CostDecrementTolerance"],
- full_output = True,
- disp = self.__disp,
- )
- else:
- raise ValueError("Error in Minimizer name: %s"%self._parameters["Minimizer"])
- #
- IndexMin = numpy.argmin( self.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
- MinJ = self.StoredVariables["CostFunctionJ"][IndexMin]
- Minimum = self.StoredVariables["CurrentState"][IndexMin]
- #
- # Obtention de l'analyse
- # ----------------------
- Xa = numpy.asmatrix(numpy.ravel( Minimum )).T
- #
- self.StoredVariables["Analysis"].store( Xa.A1 )
- #
- if "SimulatedObservationAtBackground" in self._parameters["StoreSupplementaryCalculations"]:
- self.StoredVariables["SimulatedObservationAtBackground"].store( numpy.ravel(Hm(Xb)) )
- if "SimulatedObservationAtOptimum" in self._parameters["StoreSupplementaryCalculations"]:
- self.StoredVariables["SimulatedObservationAtOptimum"].store( numpy.ravel(Hm(Xa)) )
- #
- self._post_run()
- return 0
-
-# ==============================================================================
-if __name__ == "__main__":
- print '\n AUTODIAGNOSTIC \n'
--- /dev/null
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2015 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+import logging
+from daCore import BasicObjects
+import numpy
+
+# ==============================================================================
+class ElementaryAlgorithm(BasicObjects.Algorithm):
+ def __init__(self):
+ BasicObjects.Algorithm.__init__(self, "TABUSEARCH")
+ self.defineRequiredParameter(
+ name = "MaximumNumberOfSteps",
+ default = 50,
+ typecast = int,
+ message = "Nombre maximal de pas d'optimisation",
+ minval = 1,
+ )
+ self.defineRequiredParameter(
+ name = "SetSeed",
+ typecast = numpy.random.seed,
+ message = "Graine fixée pour le générateur aléatoire",
+ )
+ self.defineRequiredParameter(
+ name = "LengthOfTabuList",
+ default = 50,
+ typecast = int,
+ message = "Longueur de la liste tabou",
+ minval = 1,
+ )
+ self.defineRequiredParameter(
+ name = "NumberOfElementaryPerturbations",
+ default = 1,
+ typecast = int,
+ message = "Nombre de perturbations élémentaires pour choisir une perturbation d'état",
+ minval = 1,
+ )
+ self.defineRequiredParameter(
+ name = "NoiseDistribution",
+ default = "Uniform",
+ typecast = str,
+ message = "Distribution pour générer les perturbations d'état",
+ listval = ["Gaussian","Uniform"],
+ )
+ self.defineRequiredParameter(
+ name = "QualityCriterion",
+ default = "AugmentedWeightedLeastSquares",
+ typecast = str,
+ message = "Critère de qualité utilisé",
+ listval = ["AugmentedWeightedLeastSquares","AWLS","DA",
+ "WeightedLeastSquares","WLS",
+ "LeastSquares","LS","L2",
+ "AbsoluteValue","L1",
+ "MaximumError","ME"],
+ )
+ self.defineRequiredParameter(
+ name = "NoiseHalfRange",
+ default = [],
+ typecast = numpy.matrix,
+ message = "Demi-amplitude des perturbations uniformes centrées d'état pour chaque composante de l'état",
+ )
+ self.defineRequiredParameter(
+ name = "StandardDeviation",
+ default = [],
+ typecast = numpy.matrix,
+ message = "Ecart-type des perturbations gaussiennes d'état pour chaque composante de l'état",
+ )
+ self.defineRequiredParameter(
+ name = "NoiseAddingProbability",
+ default = 1.,
+ typecast = float,
+ message = "Probabilité de perturbation d'une composante de l'état",
+ minval = 0.,
+ maxval = 1.,
+ )
+ self.defineRequiredParameter(
+ name = "StoreInternalVariables",
+ default = False,
+ typecast = bool,
+ message = "Stockage des variables internes ou intermédiaires du calcul",
+ )
+ self.defineRequiredParameter(
+ name = "StoreSupplementaryCalculations",
+ default = [],
+ typecast = tuple,
+ message = "Liste de calculs supplémentaires à stocker et/ou effectuer",
+ listval = ["BMA", "OMA", "OMB", "CurrentState", "CostFunctionJ", "Innovation", "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]
+ )
+ self.defineRequiredParameter( # Pas de type
+ name = "Bounds",
+ message = "Liste des valeurs de bornes",
+ )
+
+ def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
+ self._pre_run()
+ #
+ # Paramètres de pilotage
+ # ----------------------
+ self.setParameters(Parameters)
+ #
+        if ("Bounds" in self._parameters) and isinstance(self._parameters["Bounds"], (list, tuple)) and (len(self._parameters["Bounds"]) > 0):
+ Bounds = self._parameters["Bounds"]
+ logging.debug("%s Prise en compte des bornes effectuee"%(self._name,))
+ else:
+ Bounds = None
+ #
+ if self._parameters["NoiseDistribution"] == "Uniform":
+ nrange = numpy.ravel(self._parameters["NoiseHalfRange"]) # Vecteur
+ if nrange.size != Xb.size:
+ raise ValueError("Noise generation by Uniform distribution requires range for all variable increments. The actual noise half range vector is:\n%s"%nrange)
+ elif self._parameters["NoiseDistribution"] == "Gaussian":
+ sigma = numpy.ravel(self._parameters["StandardDeviation"]) # Vecteur
+ if sigma.size != Xb.size:
+ raise ValueError("Noise generation by Gaussian distribution requires standard deviation for all variable increments. The actual standard deviation vector is:\n%s"%sigma)
+ #
+ # Opérateur d'observation
+ # -----------------------
+ Hm = HO["Direct"].appliedTo
+ #
+ # Précalcul des inversions de B et R
+ # ----------------------------------
+ BI = B.getI()
+ RI = R.getI()
+ #
+        # Définition de la fonction de déplacement
+ # ----------------------------------------
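+        # Perturbation élémentaire : chaque composante de l'état est perturbée
+        # avec la probabilité NoiseAddingProbability, par un bruit uniforme
+        # centré (demi-amplitude NoiseHalfRange) ou gaussien (écart-type
+        # StandardDeviation).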
+ def Tweak( x, NoiseDistribution, NoiseAddingProbability ):
+ _X = numpy.asmatrix(numpy.ravel( x )).T
+ if NoiseDistribution == "Uniform":
+ for i in xrange(_X.size):
+ if NoiseAddingProbability >= numpy.random.uniform():
+ _increment = numpy.random.uniform(low=-nrange[i], high=nrange[i])
+ # On ne traite pas encore le dépassement des bornes ici
+ _X[i] += _increment
+ elif NoiseDistribution == "Gaussian":
+ for i in xrange(_X.size):
+ if NoiseAddingProbability >= numpy.random.uniform():
+ _increment = numpy.random.normal(loc=0., scale=sigma[i])
+ # On ne traite pas encore le dépassement des bornes ici
+ _X[i] += _increment
+ #
+ return _X
+ #
+        def StateInList( x, TL ):
+            "Teste si l'état x est présent, à la précision près, dans la liste TL"
+            _X = numpy.ravel( x )
+            _xInList = False
+            for state in TL:
+                if numpy.all(numpy.abs( _X - numpy.ravel(state) ) <= 1e-16*numpy.abs(_X)):
+                    _xInList = True
+            return _xInList
+ #
+ # Minimisation de la fonctionnelle
+ # --------------------------------
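+        # Principe de la recherche tabou : à chaque pas, on perturbe l'état
+        # courant (en retenant la meilleure de NumberOfElementaryPerturbations
+        # perturbations non taboues), on accepte le candidat s'il améliore le
+        # coût sans figurer dans la liste tabou, et on mémorise le meilleur
+        # état rencontré.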
+ _n = 0
+ _S = Xb
+ _qualityS = BasicObjects.CostFunction3D(
+ _S,
+ _Hm = Hm,
+ _BI = BI,
+ _RI = RI,
+ _Xb = Xb,
+ _Y = Y,
+ _SSC = self._parameters["StoreSupplementaryCalculations"],
+ _QM = self._parameters["QualityCriterion"],
+ _SSV = self.StoredVariables,
+ _sSc = False,
+ )
+ _Best, _qualityBest = _S, _qualityS
+ _TabuList = []
+ _TabuList.append( _S )
+ while _n < self._parameters["MaximumNumberOfSteps"]:
+ _n += 1
+ if len(_TabuList) > self._parameters["LengthOfTabuList"]:
+ _TabuList.pop(0)
+ _R = Tweak( _S, self._parameters["NoiseDistribution"], self._parameters["NoiseAddingProbability"] )
+ _qualityR = BasicObjects.CostFunction3D(
+ _R,
+ _Hm = Hm,
+ _BI = BI,
+ _RI = RI,
+ _Xb = Xb,
+ _Y = Y,
+ _SSC = self._parameters["StoreSupplementaryCalculations"],
+ _QM = self._parameters["QualityCriterion"],
+ _SSV = self.StoredVariables,
+ _sSc = False,
+ )
+ for nbt in range(self._parameters["NumberOfElementaryPerturbations"]-1):
+ _W = Tweak( _S, self._parameters["NoiseDistribution"], self._parameters["NoiseAddingProbability"] )
+ _qualityW = BasicObjects.CostFunction3D(
+ _W,
+ _Hm = Hm,
+ _BI = BI,
+ _RI = RI,
+ _Xb = Xb,
+ _Y = Y,
+ _SSC = self._parameters["StoreSupplementaryCalculations"],
+ _QM = self._parameters["QualityCriterion"],
+ _SSV = self.StoredVariables,
+ _sSc = False,
+ )
+ if (not StateInList(_W, _TabuList)) and ( (_qualityW < _qualityR) or StateInList(_R,_TabuList) ):
+ _R, _qualityR = _W, _qualityW
+ if (not StateInList( _R, _TabuList )) and (_qualityR < _qualityS):
+ _S, _qualityS = _R, _qualityR
+ _TabuList.append( _S )
+ if _qualityS < _qualityBest:
+ _Best, _qualityBest = _S, _qualityS
+ #
+ if self._parameters["StoreInternalVariables"] or "CurrentState" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["CurrentState"].store( _Best )
+ if "SimulatedObservationAtCurrentState" in self._parameters["StoreSupplementaryCalculations"]:
+ _HmX = Hm( numpy.asmatrix(numpy.ravel( _Best )).T )
+ _HmX = numpy.asmatrix(numpy.ravel( _HmX )).T
+ self.StoredVariables["SimulatedObservationAtCurrentState"].store( _HmX )
+ self.StoredVariables["CostFunctionJb"].store( 0. )
+ self.StoredVariables["CostFunctionJo"].store( 0. )
+ self.StoredVariables["CostFunctionJ" ].store( _qualityBest )
+ #
+ # Obtention de l'analyse
+ # ----------------------
+ Xa = numpy.asmatrix(numpy.ravel( _Best )).T
+ #
+ self.StoredVariables["Analysis"].store( Xa.A1 )
+ #
+ if "Innovation" in self._parameters["StoreSupplementaryCalculations"] or \
+ "OMB" in self._parameters["StoreSupplementaryCalculations"] or \
+ "SimulatedObservationAtBackground" in self._parameters["StoreSupplementaryCalculations"]:
+ HXb = Hm(Xb)
+ d = Y - HXb
+ if "OMA" in self._parameters["StoreSupplementaryCalculations"] or \
+ "SimulatedObservationAtOptimum" in self._parameters["StoreSupplementaryCalculations"]:
+ HXa = Hm(Xa)
+ #
+ # Calculs et/ou stockages supplémentaires
+ # ---------------------------------------
+ if "Innovation" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["Innovation"].store( numpy.ravel(d) )
+ if "BMA" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["BMA"].store( numpy.ravel(Xb) - numpy.ravel(Xa) )
+ if "OMA" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["OMA"].store( numpy.ravel(Y) - numpy.ravel(HXa) )
+ if "OMB" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["OMB"].store( numpy.ravel(d) )
+ if "SimulatedObservationAtBackground" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["SimulatedObservationAtBackground"].store( numpy.ravel(HXb) )
+ if "SimulatedObservationAtOptimum" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["SimulatedObservationAtOptimum"].store( numpy.ravel(HXa) )
+ #
+ self._post_run(HO)
+ return 0
+
+# ==============================================================================
+if __name__ == "__main__":
+ print '\n AUTODIAGNOSTIC \n'
"withmpWorkers" :None,
}
"""
- if (type(asFunction) is type({})) and \
+ if isinstance(asFunction, dict) and \
("useApproximatedDerivatives" in asFunction) and bool(asFunction["useApproximatedDerivatives"]) and \
("Direct" in asFunction) and (asFunction["Direct"] is not None):
if "withCenteredDF" not in asFunction: asFunction["withCenteredDF"] = False
self.__HO["Direct"] = Operator( fromMethod = FDA.DirectOperator, avoidingRedundancy = avoidRC )
self.__HO["Tangent"] = Operator( fromMethod = FDA.TangentOperator, avoidingRedundancy = avoidRC )
self.__HO["Adjoint"] = Operator( fromMethod = FDA.AdjointOperator, avoidingRedundancy = avoidRC )
- elif (type(asFunction) is type({})) and \
+ elif isinstance(asFunction, dict) and \
("Tangent" in asFunction) and ("Adjoint" in asFunction) and \
(asFunction["Tangent"] is not None) and (asFunction["Adjoint"] is not None):
if ("Direct" not in asFunction) or (asFunction["Direct"] is None):
"withmpWorkers" :None,
}
"""
- if (type(asFunction) is type({})) and \
+ if isinstance(asFunction, dict) and \
("useApproximatedDerivatives" in asFunction) and bool(asFunction["useApproximatedDerivatives"]) and \
("Direct" in asFunction) and (asFunction["Direct"] is not None):
if "withCenteredDF" not in asFunction: asFunction["withCenteredDF"] = False
self.__EM["Direct"] = Operator( fromMethod = FDA.DirectOperator, avoidingRedundancy = avoidRC )
self.__EM["Tangent"] = Operator( fromMethod = FDA.TangentOperator, avoidingRedundancy = avoidRC )
self.__EM["Adjoint"] = Operator( fromMethod = FDA.AdjointOperator, avoidingRedundancy = avoidRC )
- elif (type(asFunction) is type({})) and \
+ elif isinstance(asFunction, dict) and \
("Tangent" in asFunction) and ("Adjoint" in asFunction) and \
(asFunction["Tangent"] is not None) and (asFunction["Adjoint"] is not None):
if ("Direct" not in asFunction) or (asFunction["Direct"] is None):
"withmpWorkers" :None,
}
"""
- if (type(asFunction) is type({})) and \
+ if isinstance(asFunction, dict) and \
("useApproximatedDerivatives" in asFunction) and bool(asFunction["useApproximatedDerivatives"]) and \
("Direct" in asFunction) and (asFunction["Direct"] is not None):
if "withCenteredDF" not in asFunction: asFunction["withCenteredDF"] = False
self.__CM["Direct"] = Operator( fromMethod = FDA.DirectOperator, avoidingRedundancy = avoidRC )
self.__CM["Tangent"] = Operator( fromMethod = FDA.TangentOperator, avoidingRedundancy = avoidRC )
self.__CM["Adjoint"] = Operator( fromMethod = FDA.AdjointOperator, avoidingRedundancy = avoidRC )
- elif (type(asFunction) is type({})) and \
+ elif isinstance(asFunction, dict) and \
("Tangent" in asFunction) and ("Adjoint" in asFunction) and \
(asFunction["Tangent"] is not None) and (asFunction["Adjoint"] is not None):
if ("Direct" not in asFunction) or (asFunction["Direct"] is None):
Validation de la correspondance correcte des tailles des variables et
des matrices s'il y en a.
"""
- if self.__Xb is None: __Xb_shape = (0,)
- elif hasattr(self.__Xb,"size"): __Xb_shape = (self.__Xb.size,)
+ if self.__Xb is None: __Xb_shape = (0,)
+ elif hasattr(self.__Xb,"size"): __Xb_shape = (self.__Xb.size,)
elif hasattr(self.__Xb,"shape"):
- if type(self.__Xb.shape) is tuple: __Xb_shape = self.__Xb.shape
- else: __Xb_shape = self.__Xb.shape()
+ if isinstance(self.__Xb.shape, tuple): __Xb_shape = self.__Xb.shape
+ else: __Xb_shape = self.__Xb.shape()
else: raise TypeError("The background (Xb) has no attribute of shape: problem !")
#
- if self.__Y is None: __Y_shape = (0,)
- elif hasattr(self.__Y,"size"): __Y_shape = (self.__Y.size,)
+ if self.__Y is None: __Y_shape = (0,)
+ elif hasattr(self.__Y,"size"): __Y_shape = (self.__Y.size,)
elif hasattr(self.__Y,"shape"):
- if type(self.__Y.shape) is tuple: __Y_shape = self.__Y.shape
- else: __Y_shape = self.__Y.shape()
+ if isinstance(self.__Y.shape, tuple): __Y_shape = self.__Y.shape
+ else: __Y_shape = self.__Y.shape()
else: raise TypeError("The observation (Y) has no attribute of shape: problem !")
#
- if self.__U is None: __U_shape = (0,)
- elif hasattr(self.__U,"size"): __U_shape = (self.__U.size,)
+ if self.__U is None: __U_shape = (0,)
+ elif hasattr(self.__U,"size"): __U_shape = (self.__U.size,)
elif hasattr(self.__U,"shape"):
- if type(self.__U.shape) is tuple: __U_shape = self.__U.shape
- else: __U_shape = self.__U.shape()
+ if isinstance(self.__U.shape, tuple): __U_shape = self.__U.shape
+ else: __U_shape = self.__U.shape()
else: raise TypeError("The control (U) has no attribute of shape: problem !")
#
- if self.__B is None: __B_shape = (0,0)
+ if self.__B is None: __B_shape = (0,0)
elif hasattr(self.__B,"shape"):
- if type(self.__B.shape) is tuple: __B_shape = self.__B.shape
- else: __B_shape = self.__B.shape()
+ if isinstance(self.__B.shape, tuple): __B_shape = self.__B.shape
+ else: __B_shape = self.__B.shape()
else: raise TypeError("The a priori errors covariance matrix (B) has no attribute of shape: problem !")
#
- if self.__R is None: __R_shape = (0,0)
+ if self.__R is None: __R_shape = (0,0)
elif hasattr(self.__R,"shape"):
- if type(self.__R.shape) is tuple: __R_shape = self.__R.shape
- else: __R_shape = self.__R.shape()
+ if isinstance(self.__R.shape, tuple): __R_shape = self.__R.shape
+ else: __R_shape = self.__R.shape()
else: raise TypeError("The observation errors covariance matrix (R) has no attribute of shape: problem !")
#
- if self.__Q is None: __Q_shape = (0,0)
+ if self.__Q is None: __Q_shape = (0,0)
elif hasattr(self.__Q,"shape"):
- if type(self.__Q.shape) is tuple: __Q_shape = self.__Q.shape
- else: __Q_shape = self.__Q.shape()
+ if isinstance(self.__Q.shape, tuple): __Q_shape = self.__Q.shape
+ else: __Q_shape = self.__Q.shape()
else: raise TypeError("The evolution errors covariance matrix (Q) has no attribute of shape: problem !")
#
- if len(self.__HO) == 0: __HO_shape = (0,0)
- elif type(self.__HO) is type({}): __HO_shape = (0,0)
+ if len(self.__HO) == 0: __HO_shape = (0,0)
+ elif isinstance(self.__HO, dict): __HO_shape = (0,0)
elif hasattr(self.__HO["Direct"],"shape"):
- if type(self.__HO["Direct"].shape) is tuple: __HO_shape = self.__HO["Direct"].shape
- else: __HO_shape = self.__HO["Direct"].shape()
+ if isinstance(self.__HO["Direct"].shape, tuple): __HO_shape = self.__HO["Direct"].shape
+ else: __HO_shape = self.__HO["Direct"].shape()
else: raise TypeError("The observation operator (H) has no attribute of shape: problem !")
#
- if len(self.__EM) == 0: __EM_shape = (0,0)
- elif type(self.__EM) is type({}): __EM_shape = (0,0)
+ if len(self.__EM) == 0: __EM_shape = (0,0)
+ elif isinstance(self.__EM, dict): __EM_shape = (0,0)
elif hasattr(self.__EM["Direct"],"shape"):
- if type(self.__EM["Direct"].shape) is tuple: __EM_shape = self.__EM["Direct"].shape
- else: __EM_shape = self.__EM["Direct"].shape()
+ if isinstance(self.__EM["Direct"].shape, tuple): __EM_shape = self.__EM["Direct"].shape
+ else: __EM_shape = self.__EM["Direct"].shape()
else: raise TypeError("The evolution model (EM) has no attribute of shape: problem !")
#
- if len(self.__CM) == 0: __CM_shape = (0,0)
- elif type(self.__CM) is type({}): __CM_shape = (0,0)
+ if len(self.__CM) == 0: __CM_shape = (0,0)
+ elif isinstance(self.__CM, dict): __CM_shape = (0,0)
elif hasattr(self.__CM["Direct"],"shape"):
- if type(self.__CM["Direct"].shape) is tuple: __CM_shape = self.__CM["Direct"].shape
- else: __CM_shape = self.__CM["Direct"].shape()
+ if isinstance(self.__CM["Direct"].shape, tuple): __CM_shape = self.__CM["Direct"].shape
+ else: __CM_shape = self.__CM["Direct"].shape()
else: raise TypeError("The control model (CM) has no attribute of shape: problem !")
#
# Vérification des conditions
#
if ("AlgorithmParameters" in self.__StoredInputs) \
and ("Bounds" in self.__StoredInputs["AlgorithmParameters"]) \
- and (type(self.__StoredInputs["AlgorithmParameters"]["Bounds"]) is type([]) or type(self.__StoredInputs["AlgorithmParameters"]["Bounds"]) is type(())) \
+            and isinstance(self.__StoredInputs["AlgorithmParameters"]["Bounds"], (list, tuple)) \
and (len(self.__StoredInputs["AlgorithmParameters"]["Bounds"]) != max(__Xb_shape)):
raise ValueError("The number \"%s\" of bound pairs for the state (X) components is different of the size \"%s\" of the state itself." \
%(len(self.__StoredInputs["AlgorithmParameters"]["Bounds"]),max(__Xb_shape)))
les arguments (variable persistante VariableName, paramètres HookParameters).
"""
#
- if type( self.__algorithm ) is dict:
+ if isinstance(self.__algorithm, dict):
raise ValueError("No observer can be build before choosing an algorithm.")
#
# Vérification du nom de variable et typage
# -----------------------------------------
- if type( VariableName ) is str:
+ if isinstance(VariableName, str):
VariableNames = [VariableName,]
- elif type( VariableName ) is list:
+ elif isinstance(VariableName, list):
VariableNames = map( str, VariableName )
else:
raise ValueError("The observer requires a name or a list of names of variables.")
Permet de retirer un observer à une ou des variable nommée.
"""
#
- if type( self.__algorithm ) is dict:
+ if isinstance(self.__algorithm, dict):
raise ValueError("No observer can be removed before choosing an algorithm.")
#
# Vérification du nom de variable et typage
# -----------------------------------------
- if type( VariableName ) is str:
+ if isinstance(VariableName, str):
VariableNames = [VariableName,]
- elif type( VariableName ) is list:
+ elif isinstance(VariableName, list):
VariableNames = map( str, VariableName )
else:
raise ValueError("The observer requires a name or a list of names of variables.")
"""
Définit les outils généraux élémentaires.
- Ce module est destiné à etre appelée par AssimilationStudy pour constituer
- les objets élémentaires de l'algorithme.
+    Ce module est destiné à être appelé par AssimilationStudy.
"""
__author__ = "Jean-Philippe ARGAUD"
__all__ = []
"3DVAR",
"4DVAR",
"Blue",
+ "DerivativeFreeOptimization",
"ExtendedBlue",
"EnsembleBlue",
"KalmanFilter",
"Observation", "ObservationError",
"ObservationOperator",
]
+AlgoDataRequirements["DerivativeFreeOptimization"] = [
+ "Background", "BackgroundError",
+ "Observation", "ObservationError",
+ "ObservationOperator",
+ ]
AlgoDataRequirements["ExtendedBlue"] = [
"Background", "BackgroundError",
"Observation", "ObservationError",
AlgoType["3DVAR"] = "Optim"
AlgoType["4DVAR"] = "Optim"
AlgoType["Blue"] = "Optim"
+AlgoType["DerivativeFreeOptimization"] = "Optim"
AlgoType["ExtendedBlue"] = "Optim"
AlgoType["EnsembleBlue"] = "Optim"
AlgoType["KalmanFilter"] = "Optim"