.. [Cade03] Cade B. S., Noon B. R., *A Gentle Introduction to Quantile Regression for Ecologists*, Frontiers in Ecology and the Environment, 1(8), pp.412-420, 2003
+.. [Chakraborty08] Chakraborty U.K., *Advances in Differential Evolution*, Studies in Computational Intelligence, Vol.143, Springer, 2008
+
.. [Evensen94] Evensen G., *Sequential data assimilation with a nonlinear quasi-geostrophic model using monte carlo methods to forecast error statistics*, Journal of Geophysical Research, 99(C5), 10,143–10,162, 1994
.. [Evensen03] Evensen G., *The Ensemble Kalman Filter: theoretical formulation and practical implementation*, Seminar on Recent developments in data assimilation for atmosphere and ocean, ECMWF, 8 to 12 September 2003
.. [Powell09] Powell M. J. D., *The BOBYQA algorithm for bound constrained optimization without derivatives*, Cambridge University Technical Report DAMTP NA2009/06, 2009
+.. [Price05] Price K.V., Storn R., Lampinen J., *Differential Evolution: A Practical Approach to Global Optimization*, Springer, 2005
+
.. [Python] *Python programming language*, http://www.python.org/
.. [R] *The R Project for Statistical Computing*, http://www.r-project.org/
.. [SalomeMeca] *Salome_Meca and Code_Aster, Analysis of Structures and Thermomechanics for Studies & Research*, http://www.code-aster.org/
+.. [Storn97] Storn R., Price K., *Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces*, Journal of Global Optimization, 11(1), pp.341-359, 1997
+
.. [Tarantola87] Tarantola A., *Inverse Problem Theory: Methods for Data Fitting and Model Parameter Estimation*, Elsevier, 1987
.. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997
Calculation algorithm "*DerivativeFreeOptimization*"
----------------------------------------------------
-.. warning::
-
- in its present version, this algorithm is experimental, and so changes can be
- required in forthcoming versions.
-
Description
+++++++++++
This algorithm realizes an estimation of the state of a system by minimization
of a cost function :math:`J` without gradient. It is a method that does not use
-the derivatives of the cost function. It fall, for example, in the same category
-than the :ref:`section_ref_algorithm_ParticleSwarmOptimization`.
+the derivatives of the cost function. It falls into the same category as the
+:ref:`section_ref_algorithm_ParticleSwarmOptimization` or the
+:ref:`section_ref_algorithm_DifferentialEvolution`.
This is an optimization method allowing for global minimum search of a general
error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
calculations or memory consumptions. The default is a void list, none of
these variables being calculated and stored by default. The possible names
are in the following list: ["BMA", "CostFunctionJ",
- "CostFunctionJb", "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
- "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum",
- "CurrentOptimum", "CurrentState", "IndexOfOptimum",
- "InnovationAtCurrentState", "OMA", "OMB",
+ "CostFunctionJAtCurrentOptimum", "CostFunctionJb",
+ "CostFunctionJbAtCurrentOptimum", "CostFunctionJo",
+ "CostFunctionJoAtCurrentOptimum", "CurrentOptimum", "CurrentState",
+ "IndexOfOptimum", "Innovation", "InnovationAtCurrentState", "OMA", "OMB",
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].
The conditional outputs of the algorithm are the following:
+ .. include:: snippets/BMA.rst
+
.. include:: snippets/CostFunctionJAtCurrentOptimum.rst
.. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
.. include:: snippets/IndexOfOptimum.rst
+ .. include:: snippets/Innovation.rst
+
.. include:: snippets/InnovationAtCurrentState.rst
.. include:: snippets/OMA.rst
References to other sections:
- :ref:`section_ref_algorithm_ParticleSwarmOptimization`
+ - :ref:`section_ref_algorithm_DifferentialEvolution`
Bibliographical references:
- [Johnson08]_
--- /dev/null
+..
+ Copyright (C) 2008-2018 EDF R&D
+
+ This file is part of SALOME ADAO module.
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+
+ Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+.. index:: single: DifferentialEvolution
+.. _section_ref_algorithm_DifferentialEvolution:
+
+Calculation algorithm "*DifferentialEvolution*"
+----------------------------------------------------
+
+.. warning::
+
+ in its present version, this algorithm is experimental, and so changes can be
+ required in forthcoming versions.
+
+Description
++++++++++++
+
+This algorithm realizes an estimation of the state of a system by minimization
+of a cost function :math:`J` by using an evolutionary strategy of differential
+evolution. It is a method that does not use the derivatives of the cost
+function. It falls into the same category as the
+:ref:`section_ref_algorithm_DerivativeFreeOptimization` or the
+:ref:`section_ref_algorithm_ParticleSwarmOptimization`.
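+
+As a purely illustrative sketch, and not the ADAO implementation itself, one
+generation of the default "BEST1BIN" strategy of [Storn97]_ can be written as
+follows, where ``F`` and ``CR`` correspond to the keys
+"*MutationDifferentialWeight_F*" and "*CrossOverProbability_CR*" described
+hereafter::
+
+    import numpy as np
+
+    def de_best1bin_generation(population, J, F=0.8, CR=0.7, rng=None):
+        """One generation of a "best/1/bin" differential evolution (sketch only,
+        bounds handling and stopping tests are omitted)."""
+        rng = np.random.default_rng() if rng is None else rng
+        costs = np.array([J(x) for x in population])
+        best = population[np.argmin(costs)]
+        NP, n = population.shape
+        new_population = population.copy()
+        for i in range(NP):
+            # Differential mutation around the best member, with weight F
+            r1, r2 = rng.choice([k for k in range(NP) if k != i], 2, replace=False)
+            mutant = best + F * (population[r1] - population[r2])
+            # Binomial crossover with the target member, with probability CR
+            mask = rng.random(n) < CR
+            mask[rng.integers(n)] = True   # at least one component from the mutant
+            trial = np.where(mask, mutant, population[i])
+            # Greedy selection: keep the trial only if it improves the cost
+            if J(trial) <= costs[i]:
+                new_population[i] = trial
+        return new_population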
+
+This is an optimization method allowing for global minimum search of a general
+error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
+with or without weights. The default error function is the augmented weighted
+least squares function, classically used in data assimilation.
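+
+As a reminder, this default error function can be written, up to a
+multiplicative factor, for a background :math:`x^b` of error covariance
+:math:`B`, observations :math:`y^o` of error covariance :math:`R` and an
+observation operator :math:`H`, as:
+
+.. math:: J(x) = (x-x^b)^T B^{-1} (x-x^b) + (y^o-H(x))^T R^{-1} (y^o-H(x))
+
+The following sketch, given only as an illustration and independent of ADAO,
+minimizes such a function with the ``differential_evolution`` solver of SciPy,
+whose options mirror some of the keys described hereafter (the toy vectors,
+matrices and bounds are assumptions made for the example)::
+
+    import numpy as np
+    from scipy.optimize import differential_evolution
+
+    xb = np.array([0., 1.])                        # background state
+    B  = np.eye(2)                                 # background error covariance
+    yo = np.array([0.5, 1.5, 2.5])                 # observations
+    R  = 0.1 * np.eye(3)                           # observation error covariance
+    H  = np.array([[1., 0.], [0., 1.], [1., 1.]])  # linear observation operator
+    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
+
+    def J(x):
+        "Augmented weighted least squares error function"
+        db, do = x - xb, yo - H @ x
+        return float(db @ Binv @ db + do @ Rinv @ do)
+
+    result = differential_evolution(
+        J, bounds=[(-10., 10.), (-10., 10.)],  # bounds define the search domain
+        strategy="best1bin",                   # ~ Minimizer "BEST1BIN"
+        mutation=(0.5, 1),                     # ~ MutationDifferentialWeight_F
+        recombination=0.7,                     # ~ CrossOverProbability_CR
+        maxiter=100, seed=1234,                # ~ MaximumNumberOfSteps, SetSeed
+        )
+    print("Optimal state:", result.x, "  J:", result.fun)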
+
+Optional and required commands
+++++++++++++++++++++++++++++++
+
+The general required commands, available in the editing user interface, are the
+following:
+
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
+
+The general optional commands, available in the editing user interface, are
+indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
+of the command "*AlgorithmParameters*" allow one to choose the specific options,
+described hereafter, of the algorithm. See
+:ref:`section_ref_options_Algorithm_Parameters` for the proper use of this
+command.
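+
+As a purely indicative illustration, combining only keys documented hereafter
+with plausible values, the content of "*AlgorithmParameters*" could look
+like::
+
+    {
+        "Minimizer"                    : "BEST1BIN",
+        "MaximumNumberOfSteps"         : 100,
+        "PopulationSize"               : 100,
+        "CrossOverProbability_CR"      : 0.7,
+        "MutationDifferentialWeight_F" : (0.5, 1),
+        }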
+
+The options of the algorithm are the following:
+
+ .. include:: snippets/Minimizer_DE.rst
+
+ .. include:: snippets/BoundsWithExtremes.rst
+
+ .. include:: snippets/CrossOverProbability_CR.rst
+
+ .. include:: snippets/MaximumNumberOfSteps.rst
+
+ .. include:: snippets/MaximumNumberOfFunctionEvaluations.rst
+
+ .. include:: snippets/MutationDifferentialWeight_F.rst
+
+ .. include:: snippets/PopulationSize.rst
+
+ .. include:: snippets/QualityCriterion.rst
+
+ .. include:: snippets/SetSeed.rst
+
+ StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
+ This list indicates the names of the supplementary variables that can be
+ available at the end of the algorithm. It involves potentially costly
+ calculations or memory consumptions. The default is a void list, none of
+ these variables being calculated and stored by default. The possible names
+ are in the following list: ["BMA", "CostFunctionJ",
+ "CostFunctionJAtCurrentOptimum", "CostFunctionJb",
+ "CostFunctionJbAtCurrentOptimum", "CostFunctionJo",
+ "CostFunctionJoAtCurrentOptimum", "CurrentOptimum", "CurrentState",
+ "IndexOfOptimum", "Innovation", "InnovationAtCurrentState", "OMA", "OMB",
+ "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
+ "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].
+
+ Example :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+
+Information and variables available at the end of the algorithm
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+At the output, after executing the algorithm, there are variables and
+information originating from the calculation. The description of
+:ref:`section_ref_output_variables` shows how to obtain them by the method
+named ``get`` of the variable "*ADD*" of the post-processing. The input
+variables, available to the user at the output in order to facilitate the
+writing of post-processing procedures, are described in the
+:ref:`subsection_r_o_v_Inventaire`.
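+
+As an illustration only, assuming the standard post-processing variable
+"*ADD*" described above and variable names listed below, such a retrieval
+could be written::
+
+    Xa = ADD.get("Analysis")[-1]       # optimal state estimate
+    J  = ADD.get("CostFunctionJ")[:]   # history of the cost function values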
+
+The unconditional outputs of the algorithm are the following:
+
+ .. include:: snippets/Analysis.rst
+
+ .. include:: snippets/CostFunctionJ.rst
+
+ .. include:: snippets/CostFunctionJb.rst
+
+ .. include:: snippets/CostFunctionJo.rst
+
+ .. include:: snippets/CurrentState.rst
+
+The conditional outputs of the algorithm are the following:
+
+ .. include:: snippets/BMA.rst
+
+ .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
+
+ .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
+
+ .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
+
+ .. include:: snippets/CurrentOptimum.rst
+
+ .. include:: snippets/IndexOfOptimum.rst
+
+ .. include:: snippets/Innovation.rst
+
+ .. include:: snippets/InnovationAtCurrentState.rst
+
+ .. include:: snippets/OMA.rst
+
+ .. include:: snippets/OMB.rst
+
+ .. include:: snippets/SimulatedObservationAtBackground.rst
+
+ .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst
+
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
+
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
+
+See also
+++++++++
+
+References to other sections:
+ - :ref:`section_ref_algorithm_DerivativeFreeOptimization`
+ - :ref:`section_ref_algorithm_ParticleSwarmOptimization`
+
+Bibliographical references:
+ - [Chakraborty08]_
+ - [Price05]_
+ - [Storn97]_
Description
+++++++++++
-This algorithm realizes an estimation of the state of a dynamic system by
-minimization of a cost function :math:`J` by using a particle swarm. It is a
-method that does not use the derivatives of the cost function. It fall in the
-same category than the :ref:`section_ref_algorithm_DerivativeFreeOptimization`.
+This algorithm realizes an estimation of the state of a system by minimization
+of a cost function :math:`J` by using an evolutionary strategy of particle
+swarm. It is a method that does not use the derivatives of the cost function.
+It falls into the same category as the
+:ref:`section_ref_algorithm_DerivativeFreeOptimization` or the
+:ref:`section_ref_algorithm_DifferentialEvolution`.
This is an optimization method allowing for global minimum search of a general
error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
References to other sections:
- :ref:`section_ref_algorithm_DerivativeFreeOptimization`
+ - :ref:`section_ref_algorithm_DifferentialEvolution`
Bibliographical references:
- [WikipediaPSO]_
ref_algorithm_4DVAR
ref_algorithm_Blue
ref_algorithm_DerivativeFreeOptimization
+ ref_algorithm_DifferentialEvolution
ref_algorithm_EnsembleBlue
ref_algorithm_EnsembleKalmanFilter
ref_algorithm_ExtendedBlue
--- /dev/null
+.. index:: single: CrossOverProbability_CR
+
+CrossOverProbability_CR
+ This key is used to define the probability of recombination or crossover
+ during the differential evolution. This variable is usually noted as ``CR``
+ in the literature. The default value is 0.7, and it is recommended to change
+ it if necessary.
+
+ Example:
+ ``{"CrossOverProbability_CR":0.7}``
--- /dev/null
+.. index:: single: Minimizer
+
+Minimizer
+    This key allows choosing the optimization strategy for the minimizer. The
+    default choice is "BEST1BIN", and the possible ones, among the multiple
+ crossover and mutation strategies, are
+ "BEST1BIN",
+ "BEST1EXP",
+ "RAND1EXP",
+ "RANDTOBEST1EXP",
+ "CURRENTTOBEST1EXP",
+ "BEST2EXP",
+ "RAND2EXP",
+ "RANDTOBEST1BIN",
+ "CURRENTTOBEST1BIN",
+ "BEST2BIN",
+ "RAND2BIN",
+ "RAND1BIN".
+    It is strongly recommended to keep the default value.
+
+ Example:
+ ``{"Minimizer":"BEST1BIN"}``
--- /dev/null
+.. index:: single: MutationDifferentialWeight_F
+
+MutationDifferentialWeight_F
+    This key is used to define the differential weight in the mutation step.
+    This variable is usually noted as ``F`` in the literature. It can be constant,
+    if given as a single value, or vary randomly between the two bounds given as
+    a pair. The default value is (0.5, 1).
+
+ Example:
+ ``{"MutationDifferentialWeight_F":(0.5, 1)}``
--- /dev/null
+.. index:: single: PopulationSize
+
+PopulationSize
+ This key is used to define the (approximate) size of the population at each
+ generation. This size is slightly adjusted to take into account the number of
+ state variables to be optimized. The default value is 100, and it is
+ recommended to choose a population between 1 and about ten times the number
+ of state variables, the size being smaller as the number of variables
+ increases.
+
+ Example:
+ ``{"PopulationSize":100}``
.. [Cade03] Cade B. S., Noon B. R., *A Gentle Introduction to Quantile Regression for Ecologists*, Frontiers in Ecology and the Environment, 1(8), pp.412-420, 2003
+.. [Chakraborty08] Chakraborty U.K., *Advances in Differential Evolution*, Studies in Computational Intelligence, Vol.143, Springer, 2008
+
.. [Evensen94] Evensen G., *Sequential data assimilation with a nonlinear quasi-geostrophic model using monte carlo methods to forecast error statistics*, Journal of Geophysical Research, 99(C5), 10,143–10,162, 1994
.. [Evensen03] Evensen G., *The Ensemble Kalman Filter: theoretical formulation and practical implementation*, Seminar on Recent developments in data assimilation for atmosphere and ocean, ECMWF, 8 to 12 September 2003
.. [Powell09] Powell M. J. D., *The BOBYQA algorithm for bound constrained optimization without derivatives*, Cambridge University Technical Report DAMTP NA2009/06, 2009
+.. [Price05] Price K.V., Storn R., Lampinen J., *Differential Evolution: A Practical Approach to Global Optimization*, Springer, 2005
+
.. [Python] *Python programming language*, http://www.python.org/
.. [R] *The R Project for Statistical Computing*, http://www.r-project.org/
.. [SalomeMeca] *Salome_Meca et Code_Aster, Analyse des Structures et Thermomécanique pour les Etudes et la Recherche*, http://www.code-aster.org/
+.. [Storn97] Storn R., Price K., *Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces*, Journal of Global Optimization, 11(1), pp.341-359, 1997
+
.. [Tarantola87] Tarantola A., *Inverse Problem Theory: Methods for Data Fitting and Model Parameter Estimation*, Elsevier, 1987
.. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997
Algorithme de calcul "*DerivativeFreeOptimization*"
---------------------------------------------------
-.. warning::
-
- dans sa présente version, cet algorithme est expérimental, et reste donc
- susceptible de changements dans les prochaines versions.
-
Description
+++++++++++
-Cet algorithme réalise une estimation d'état d'un système par minimisation d'une
-fonctionnelle d'écart :math:`J` sans gradient. C'est une méthode qui n'utilise
-pas les dérivées de la fonctionnelle d'écart. Elle entre, par exemple, dans la
-même catégorie que l':ref:`section_ref_algorithm_ParticleSwarmOptimization`.
+Cet algorithme réalise une estimation d'état d'un système par minimisation
+d'une fonctionnelle d'écart :math:`J` sans gradient. C'est une méthode qui
+n'utilise pas les dérivées de la fonctionnelle d'écart. Elle entre, par
+exemple, dans la même catégorie que
+l':ref:`section_ref_algorithm_ParticleSwarmOptimization` ou
+l':ref:`section_ref_algorithm_DifferentialEvolution`.
C'est une méthode d'optimisation permettant la recherche du minimum global d'une
fonctionnelle d'erreur :math:`J` quelconque de type :math:`L^1`, :math:`L^2` ou
calculs ou du stockage coûteux. La valeur par défaut est une liste vide,
aucune de ces variables n'étant calculée et stockée par défaut. Les noms
possibles sont dans la liste suivante : ["BMA", "CostFunctionJ",
- "CostFunctionJb", "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
- "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum",
- "CurrentOptimum", "CurrentState", "IndexOfOptimum",
- "InnovationAtCurrentState", "OMA", "OMB",
+ "CostFunctionJAtCurrentOptimum", "CostFunctionJb",
+ "CostFunctionJbAtCurrentOptimum", "CostFunctionJo",
+ "CostFunctionJoAtCurrentOptimum", "CurrentOptimum", "CurrentState",
+ "IndexOfOptimum", "Innovation", "InnovationAtCurrentState", "OMA", "OMB",
"SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].
Les sorties conditionnelles de l'algorithme sont les suivantes:
+ .. include:: snippets/BMA.rst
+
.. include:: snippets/CostFunctionJAtCurrentOptimum.rst
.. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
.. include:: snippets/IndexOfOptimum.rst
+ .. include:: snippets/Innovation.rst
+
.. include:: snippets/InnovationAtCurrentState.rst
.. include:: snippets/OMA.rst
Références vers d'autres sections :
- :ref:`section_ref_algorithm_ParticleSwarmOptimization`
+ - :ref:`section_ref_algorithm_DifferentialEvolution`
Références bibliographiques :
- [Johnson08]_
--- /dev/null
+..
+ Copyright (C) 2008-2018 EDF R&D
+
+ This file is part of SALOME ADAO module.
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+
+ Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+.. index:: single: DifferentialEvolution
+.. _section_ref_algorithm_DifferentialEvolution:
+
+Algorithme de calcul "*DifferentialEvolution*"
+----------------------------------------------
+
+.. warning::
+
+ dans sa présente version, cet algorithme est expérimental, et reste donc
+ susceptible de changements dans les prochaines versions.
+
+Description
++++++++++++
+
+Cet algorithme réalise une estimation de l'état d'un système par minimisation
+d'une fonctionnelle d'écart :math:`J` en utilisant une méthode évolutionnaire
+d'évolution différentielle. C'est une méthode qui n'utilise pas les dérivées de
+la fonctionnelle d'écart. Elle entre dans la même catégorie que
+l':ref:`section_ref_algorithm_DerivativeFreeOptimization` ou
+l':ref:`section_ref_algorithm_ParticleSwarmOptimization`.
+
+C'est une méthode d'optimisation permettant la recherche du minimum global d'une
+fonctionnelle d'erreur :math:`J` quelconque de type :math:`L^1`, :math:`L^2` ou
+:math:`L^{\infty}`, avec ou sans pondérations. La fonctionnelle d'erreur par
+défaut est celle de moindres carrés pondérés augmentés, classiquement utilisée
+en assimilation de données.
+
+Commandes requises et optionnelles
+++++++++++++++++++++++++++++++++++
+
+Les commandes requises générales, disponibles dans l'interface en édition, sont
+les suivantes:
+
+ .. include:: snippets/Background.rst
+
+ .. include:: snippets/BackgroundError.rst
+
+ .. include:: snippets/Observation.rst
+
+ .. include:: snippets/ObservationError.rst
+
+ .. include:: snippets/ObservationOperator.rst
+
+Les commandes optionnelles générales, disponibles dans l'interface en édition,
+sont indiquées dans la :ref:`section_ref_assimilation_keywords`. De plus, les
+paramètres de la commande "*AlgorithmParameters*" permettent d'indiquer les
+options particulières, décrites ci-après, de l'algorithme. On se reportera à la
+:ref:`section_ref_options_Algorithm_Parameters` pour le bon usage de cette
+commande.
+
+Les options de l'algorithme sont les suivantes:
+
+ .. include:: snippets/Minimizer_DE.rst
+
+ .. include:: snippets/BoundsWithExtremes.rst
+
+ .. include:: snippets/CrossOverProbability_CR.rst
+
+ .. include:: snippets/MaximumNumberOfSteps.rst
+
+ .. include:: snippets/MaximumNumberOfFunctionEvaluations.rst
+
+ .. include:: snippets/MutationDifferentialWeight_F.rst
+
+ .. include:: snippets/PopulationSize.rst
+
+ .. include:: snippets/QualityCriterion.rst
+
+ .. include:: snippets/SetSeed.rst
+
+ StoreSupplementaryCalculations
+ .. index:: single: StoreSupplementaryCalculations
+
+ Cette liste indique les noms des variables supplémentaires qui peuvent être
+ disponibles à la fin de l'algorithme. Cela implique potentiellement des
+ calculs ou du stockage coûteux. La valeur par défaut est une liste vide,
+ aucune de ces variables n'étant calculée et stockée par défaut. Les noms
+ possibles sont dans la liste suivante : ["BMA", "CostFunctionJ",
+ "CostFunctionJAtCurrentOptimum", "CostFunctionJb",
+ "CostFunctionJbAtCurrentOptimum", "CostFunctionJo",
+ "CostFunctionJoAtCurrentOptimum", "CurrentOptimum", "CurrentState",
+ "IndexOfOptimum", "Innovation", "InnovationAtCurrentState", "OMA", "OMB",
+ "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
+ "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].
+
+ Exemple :
+ ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+
+Informations et variables disponibles à la fin de l'algorithme
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+En sortie, après exécution de l'algorithme, on dispose d'informations et de
+variables issues du calcul. La description des
+:ref:`section_ref_output_variables` indique la manière de les obtenir par la
+méthode nommée ``get`` de la variable "*ADD*" du post-processing. Les variables
+d'entrée, mises à disposition de l'utilisateur en sortie pour faciliter
+l'écriture des procédures de post-processing, sont décrites dans
+l':ref:`subsection_r_o_v_Inventaire`.
+
+Les sorties non conditionnelles de l'algorithme sont les suivantes:
+
+ .. include:: snippets/Analysis.rst
+
+ .. include:: snippets/CostFunctionJ.rst
+
+ .. include:: snippets/CostFunctionJb.rst
+
+ .. include:: snippets/CostFunctionJo.rst
+
+ .. include:: snippets/CurrentState.rst
+
+Les sorties conditionnelles de l'algorithme sont les suivantes:
+
+ .. include:: snippets/BMA.rst
+
+ .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
+
+ .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
+
+ .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
+
+ .. include:: snippets/CurrentOptimum.rst
+
+ .. include:: snippets/IndexOfOptimum.rst
+
+ .. include:: snippets/Innovation.rst
+
+ .. include:: snippets/InnovationAtCurrentState.rst
+
+ .. include:: snippets/OMA.rst
+
+ .. include:: snippets/OMB.rst
+
+ .. include:: snippets/SimulatedObservationAtBackground.rst
+
+ .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst
+
+ .. include:: snippets/SimulatedObservationAtCurrentState.rst
+
+ .. include:: snippets/SimulatedObservationAtOptimum.rst
+
+Voir aussi
+++++++++++
+
+Références vers d'autres sections :
+ - :ref:`section_ref_algorithm_DerivativeFreeOptimization`
+ - :ref:`section_ref_algorithm_ParticleSwarmOptimization`
+
+Références bibliographiques :
+ - [Chakraborty08]_
+ - [Price05]_
+ - [Storn97]_
Description
+++++++++++
-Cet algorithme réalise une estimation de l'état d'un système dynamique par
-minimisation d'une fonctionnelle d'écart :math:`J` en utilisant un essaim
-particulaire. C'est une méthode qui n'utilise pas les dérivées de la
+Cet algorithme réalise une estimation de l'état d'un système par minimisation
+d'une fonctionnelle d'écart :math:`J` en utilisant une méthode évolutionnaire
+d'essaim particulaire. C'est une méthode qui n'utilise pas les dérivées de la
fonctionnelle d'écart. Elle entre dans la même catégorie que
-l':ref:`section_ref_algorithm_DerivativeFreeOptimization`.
+l':ref:`section_ref_algorithm_DerivativeFreeOptimization` ou
+l':ref:`section_ref_algorithm_DifferentialEvolution`.
C'est une méthode d'optimisation permettant la recherche du minimum global d'une
fonctionnelle d'erreur :math:`J` quelconque de type :math:`L^1`, :math:`L^2` ou
disponibles à la fin de l'algorithme. Cela implique potentiellement des
calculs ou du stockage coûteux. La valeur par défaut est une liste vide,
aucune de ces variables n'étant calculée et stockée par défaut. Les noms
- possibles sont dans la liste suivante : ["BMA", "CostFunctionJ",
- "CostFunctionJb", "CostFunctionJo", "CurrentState", "OMA", "OMB",
- "Innovation", "SimulatedObservationAtBackground",
- "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].
+ possibles sont dans la liste suivante : ["BMA", "CostFunctionJ", "CostFunctionJb",
+ "CostFunctionJo", "CurrentState", "OMA", "OMB", "Innovation",
+ "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState",
+ "SimulatedObservationAtOptimum"].
Exemple :
``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
Références vers d'autres sections :
- :ref:`section_ref_algorithm_DerivativeFreeOptimization`
+ - :ref:`section_ref_algorithm_DifferentialEvolution`
Références bibliographiques :
- [WikipediaPSO]_
ref_algorithm_4DVAR
ref_algorithm_Blue
ref_algorithm_DerivativeFreeOptimization
+ ref_algorithm_DifferentialEvolution
ref_algorithm_EnsembleBlue
ref_algorithm_EnsembleKalmanFilter
ref_algorithm_ExtendedBlue
--- /dev/null
+.. index:: single: CrossOverProbability_CR
+
+CrossOverProbability_CR
+ Cette clé permet de définir la probabilité de recombinaison ou de croisement
+ lors de l'évolution différentielle. Cette variable est usuellement notée
+ ``CR`` dans la littérature. La valeur par défaut est 0.7, et il est conseillé
+ de la changer si nécessaire.
+
+ Exemple :
+ ``{"CrossOverProbability_CR":0.7}``
--- /dev/null
+.. index:: single: Minimizer
+
+Minimizer
+ Cette clé permet de changer la stratégie de minimisation pour l'optimiseur.
+    Le choix par défaut est "BEST1BIN", et les choix possibles, parmi les
+    multiples stratégies de croisement et de mutation, sont
+ "BEST1BIN",
+ "BEST1EXP",
+ "RAND1EXP",
+ "RANDTOBEST1EXP",
+ "CURRENTTOBEST1EXP",
+ "BEST2EXP",
+ "RAND2EXP",
+ "RANDTOBEST1BIN",
+ "CURRENTTOBEST1BIN",
+ "BEST2BIN",
+ "RAND2BIN",
+ "RAND1BIN".
+ Il est fortement conseillé de conserver la valeur par défaut.
+
+ Exemple :
+ ``{"Minimizer":"BEST1BIN"}``
--- /dev/null
+.. index:: single: MutationDifferentialWeight_F
+
+MutationDifferentialWeight_F
+ Cette clé permet de définir le poids différentiel dans l'étape de mutation.
+    Cette variable est usuellement notée ``F`` dans la littérature. Ce poids peut
+    être constant s'il est donné sous la forme d'une valeur unique, ou varier de
+    manière aléatoire entre les deux bornes données dans la paire. La valeur par défaut
+ est (0.5, 1).
+
+ Exemple :
+ ``{"MutationDifferentialWeight_F":(0.5, 1)}``
--- /dev/null
+.. index:: single: PopulationSize
+
+PopulationSize
+ Cette clé permet de définir la taille (approximative) de la population à
+ chaque génération. Cette taille est légèrement ajustée pour tenir compte du
+ nombre de variables d'état à optimiser. La valeur par défaut est 100. Il est
+ conseillé de choisir une population comprise entre 1 et une dizaine de fois
+    le nombre de variables d'état, la taille étant d'autant plus petite que le
+ nombre de variables augmente.
+
+ Exemple :
+ ``{"PopulationSize":100}``