From: Jean-Philippe ARGAUD
Date: Fri, 3 Aug 2018 19:07:36 +0000 (+0200)
Subject: Documentation evolution for DE and correction
X-Git-Tag: V9_2_0b_ok_ADAO~17
X-Git-Url: http://git.salome-platform.org/gitweb/?a=commitdiff_plain;h=d70d96f59ca2f9b05ad3d321943148ba83f3ce83;p=modules%2Fadao.git

Documentation evolution for DE and correction
---

diff --git a/doc/en/bibliography.rst b/doc/en/bibliography.rst
index 61cb1b9..40e982b 100644
--- a/doc/en/bibliography.rst
+++ b/doc/en/bibliography.rst
@@ -43,6 +43,8 @@ Bibliography
.. [Cade03] Cade B. S., Noon B. R., *A Gentle Introduction to Quantile Regression for Ecologists*, Frontiers in Ecology and the Environment, 1(8), pp.412-420, 2003

+.. [Chakraborty08] Chakraborty U.K., *Advances in differential evolution*, Studies in computational intelligence, Vol.143, Springer, 2008
+
.. [Evensen94] Evensen G., *Sequential data assimilation with a nonlinear quasi-geostrophic model using monte carlo methods to forecast error statistics*, Journal of Geophysical Research, 99(C5), 10,143–10,162, 1994

.. [Evensen03] Evensen G., *The Ensemble Kalman Filter: theoretical formulation and practical implementation*, Seminar on Recent developments in data assimilation for atmosphere and ocean, ECMWF, 8 to 12 September 2003
@@ -73,6 +75,8 @@ Bibliography
.. [Powell09] Powell M. J. D., *The BOBYQA algorithm for bound constrained optimization without derivatives*, Cambridge University Technical Report DAMTP NA2009/06, 2009

+.. [Price05] Price K.V., Storn R., Lampinen J., *Differential evolution: a practical approach to global optimization*, Springer, 2005
+
.. [Python] *Python programming language*, http://www.python.org/

.. [R] *The R Project for Statistical Computing*, http://www.r-project.org/
@@ -83,6 +87,8 @@ Bibliography
.. [SalomeMeca] *Salome_Meca and Code_Aster, Analysis of Structures and Thermomechanics for Studies & Research*, http://www.code-aster.org/

+.. [Storn97] Storn R., Price K., *Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces*, Journal of Global Optimization, 11(1), pp.341-359, 1997
+
.. [Tarantola87] Tarantola A., *Inverse Problem: Theory Methods for Data Fitting and Parameter Estimation*, Elsevier, 1987

.. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997
diff --git a/doc/en/ref_algorithm_DerivativeFreeOptimization.rst b/doc/en/ref_algorithm_DerivativeFreeOptimization.rst
index b369345..75d8fdf 100644
--- a/doc/en/ref_algorithm_DerivativeFreeOptimization.rst
+++ b/doc/en/ref_algorithm_DerivativeFreeOptimization.rst
@@ -27,18 +27,14 @@ Calculation algorithm "*DerivativeFreeOptimization*"
----------------------------------------------------

-.. warning::
-
-  in its present version, this algorithm is experimental, and so changes can be
-  required in forthcoming versions.
-
Description
+++++++++++

This algorithm realizes an estimation of the state of a system by minimization
of a cost function :math:`J` without gradient. It is a method that does not use
-the derivatives of the cost function. It fall, for example, in the same category
-than the :ref:`section_ref_algorithm_ParticleSwarmOptimization`.
+the derivatives of the cost function. It falls in the same category as the
+:ref:`section_ref_algorithm_ParticleSwarmOptimization` or the
+:ref:`section_ref_algorithm_DifferentialEvolution`.
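
As an illustrative aside (not part of the patch above), the following minimal
sketch shows the kind of gradient-free minimization meant here, applied to an
augmented weighted least squares error function of the kind used by default by
these algorithms. It assumes SciPy is available and uses the Nelder-Mead
simplex as a stand-in for the derivative-free minimizers of the algorithm; all
data and names are illustrative::

    import numpy as np
    from scipy.optimize import minimize

    # Toy assimilation ingredients (illustrative values only)
    xb = np.array([0.0, 1.0, 2.0])                 # background state
    B  = np.eye(3)                                 # background error covariance
    y  = np.array([0.5, 2.9, 4.1])                 # observation
    R  = 0.1 * np.eye(3)                           # observation error covariance
    H  = lambda x: np.array([x[0], x[1] + x[2], 2.0 * x[2]])  # observation operator

    def J(x):
        "Augmented weighted least squares cost function"
        db = x - xb
        do = y - H(x)
        return float(db @ np.linalg.solve(B, db) + do @ np.linalg.solve(R, do))

    # Gradient-free minimization (Nelder-Mead simplex), standing in for the
    # derivative-free minimizers proposed by the algorithm
    result = minimize(J, x0=xb, method="Nelder-Mead")
    print(result.x, result.fun)
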
This is an optimization method allowing for global minimum search of a general
error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
with or without weights. The default error function is the augmented weighted
least squares function, classically used in data assimilation.
@@ -92,10 +88,10 @@ The options of the algorithm are the following:
    calculations or memory consumptions. The default is a void list, none of
    these variables being calculated and stored by default. The possible names
    are in the following list: ["BMA", "CostFunctionJ",
-    "CostFunctionJb", "CostFunctionJo", "CostFunctionJAtCurrentOptimum",
-    "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum",
-    "CurrentOptimum", "CurrentState", "IndexOfOptimum",
-    "InnovationAtCurrentState", "OMA", "OMB",
+    "CostFunctionJAtCurrentOptimum", "CostFunctionJb",
+    "CostFunctionJbAtCurrentOptimum", "CostFunctionJo",
+    "CostFunctionJoAtCurrentOptimum", "CurrentOptimum", "CurrentState",
+    "IndexOfOptimum", "Innovation", "InnovationAtCurrentState", "OMA", "OMB",
    "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
    "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].

@@ -127,6 +123,8 @@ The unconditional outputs of the algorithm are the following:

The conditional outputs of the algorithm are the following:

+  .. include:: snippets/BMA.rst
+
  .. include:: snippets/CostFunctionJAtCurrentOptimum.rst

  .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
@@ -137,6 +135,8 @@ The conditional outputs of the algorithm are the following:

  .. include:: snippets/IndexOfOptimum.rst

+  .. include:: snippets/Innovation.rst
+
  .. include:: snippets/InnovationAtCurrentState.rst

  .. include:: snippets/OMA.rst
@@ -156,6 +156,7 @@ See also

References to other sections:
  - :ref:`section_ref_algorithm_ParticleSwarmOptimization`
+  - :ref:`section_ref_algorithm_DifferentialEvolution`

Bibliographical references:
  - [Johnson08]_
diff --git a/doc/en/ref_algorithm_DifferentialEvolution.rst b/doc/en/ref_algorithm_DifferentialEvolution.rst
new file mode 100644
index 0000000..5548abc
--- /dev/null
+++ b/doc/en/ref_algorithm_DifferentialEvolution.rst
@@ -0,0 +1,174 @@
+..
+   Copyright (C) 2008-2018 EDF R&D
+
+   This file is part of SALOME ADAO module.
+
+   This library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   This library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with this library; if not, write to the Free Software
+   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+
+   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+.. index:: single: DifferentialEvolution
+.. _section_ref_algorithm_DifferentialEvolution:
+
+Calculation algorithm "*DifferentialEvolution*"
+----------------------------------------------------
+
+.. warning::
+
+  in its present version, this algorithm is experimental, and so changes can be
+  required in forthcoming versions.
+
+Description
++++++++++++
+
+This algorithm realizes an estimation of the state of a system by minimization
+of a cost function :math:`J` by using an evolutionary strategy of differential
+evolution. It is a method that does not use the derivatives of the cost
+function. It falls in the same category as the
+:ref:`section_ref_algorithm_DerivativeFreeOptimization` or the
+:ref:`section_ref_algorithm_ParticleSwarmOptimization`.
+
+This is an optimization method allowing for global minimum search of a general
+error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
+with or without weights. The default error function is the augmented weighted
+least squares function, classically used in data assimilation.
+
+Optional and required commands
+++++++++++++++++++++++++++++++
+
+The general required commands, available in the editing user interface, are the
+following:
+
+  .. include:: snippets/Background.rst
+
+  .. include:: snippets/BackgroundError.rst
+
+  .. include:: snippets/Observation.rst
+
+  .. include:: snippets/ObservationError.rst
+
+  .. include:: snippets/ObservationOperator.rst
+
+The general optional commands, available in the editing user interface, are
+indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
+of the command "*AlgorithmParameters*" allow one to choose the specific options,
+described hereafter, of the algorithm. See
+:ref:`section_ref_options_Algorithm_Parameters` for the good use of this
+command.
+
+The options of the algorithm are the following:
+
+  .. include:: snippets/Minimizer_DE.rst
+
+  .. include:: snippets/BoundsWithExtremes.rst
+
+  .. include:: snippets/CrossOverProbability_CR.rst
+
+  .. include:: snippets/MaximumNumberOfSteps.rst
+
+  .. include:: snippets/MaximumNumberOfFunctionEvaluations.rst
+
+  .. include:: snippets/MutationDifferentialWeight_F.rst
+
+  .. include:: snippets/PopulationSize.rst
+
+  .. include:: snippets/QualityCriterion.rst
+
+  .. include:: snippets/SetSeed.rst
+
+  StoreSupplementaryCalculations
+    .. index:: single: StoreSupplementaryCalculations
+
+    This list indicates the names of the supplementary variables that can be
+    available at the end of the algorithm. It involves potentially costly
+    calculations or memory consumptions. The default is a void list, none of
+    these variables being calculated and stored by default. The possible names
+    are in the following list: ["BMA", "CostFunctionJ",
+    "CostFunctionJAtCurrentOptimum", "CostFunctionJb",
+    "CostFunctionJbAtCurrentOptimum", "CostFunctionJo",
+    "CostFunctionJoAtCurrentOptimum", "CurrentOptimum", "CurrentState",
+    "IndexOfOptimum", "Innovation", "InnovationAtCurrentState", "OMA", "OMB",
+    "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum",
+    "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"].
+
+    Example:
+    ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
+
+Information and variables available at the end of the algorithm
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+At the output, after executing the algorithm, there are variables and
+information originating from the calculation. The description of
+:ref:`section_ref_output_variables` shows the way to obtain them by the method
+named ``get`` of the variable "*ADD*" of the post-processing. The input
+variables, available to the user at the output in order to facilitate the
+writing of post-processing procedures, are described in the
+:ref:`subsection_r_o_v_Inventaire`.
+
+The unconditional outputs of the algorithm are the following:
+
+  .. include:: snippets/Analysis.rst
+
+  .. include:: snippets/CostFunctionJ.rst
+
+  .. include:: snippets/CostFunctionJb.rst
+
+  .. include:: snippets/CostFunctionJo.rst
+
+  .. include:: snippets/CurrentState.rst
+
+The conditional outputs of the algorithm are the following:
+
+  .. include:: snippets/BMA.rst
+
+  .. include:: snippets/CostFunctionJAtCurrentOptimum.rst
+
+  .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst
+
+  .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst
+
+  .. include:: snippets/CurrentOptimum.rst
+
+  .. include:: snippets/IndexOfOptimum.rst
+
+  .. include:: snippets/Innovation.rst
+
+  .. include:: snippets/InnovationAtCurrentState.rst
+
+  .. include:: snippets/OMA.rst
+
+  .. include:: snippets/OMB.rst
+
+  .. include:: snippets/SimulatedObservationAtBackground.rst
+
+  .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst
+
+  .. include:: snippets/SimulatedObservationAtCurrentState.rst
+
+  .. include:: snippets/SimulatedObservationAtOptimum.rst
+
+See also
+++++++++
+
+References to other sections:
+  - :ref:`section_ref_algorithm_DerivativeFreeOptimization`
+  - :ref:`section_ref_algorithm_ParticleSwarmOptimization`
+
+Bibliographical references:
+  - [Chakraborty08]_
+  - [Price05]_
+  - [Storn97]_
diff --git a/doc/en/ref_algorithm_ParticleSwarmOptimization.rst b/doc/en/ref_algorithm_ParticleSwarmOptimization.rst
index 3d0b406..3229547 100644
--- a/doc/en/ref_algorithm_ParticleSwarmOptimization.rst
+++ b/doc/en/ref_algorithm_ParticleSwarmOptimization.rst
@@ -30,10 +30,12 @@ Calculation algorithm "*ParticleSwarmOptimization*"
Description
+++++++++++

-This algorithm realizes an estimation of the state of a dynamic system by
-minimization of a cost function :math:`J` by using a particle swarm. It is a
-method that does not use the derivatives of the cost function. It fall in the
-same category than the :ref:`section_ref_algorithm_DerivativeFreeOptimization`.
+This algorithm realizes an estimation of the state of a system by minimization
+of a cost function :math:`J` by using an evolutionary strategy of particle
+swarm. It is a method that does not use the derivatives of the cost function.
+It falls in the same category as the
+:ref:`section_ref_algorithm_DerivativeFreeOptimization` or the
+:ref:`section_ref_algorithm_DifferentialEvolution`.

This is an optimization method allowing for global minimum search of a general
error function :math:`J` of type :math:`L^1`, :math:`L^2` or :math:`L^{\infty}`,
@@ -169,6 +171,7 @@ See also

References to other sections:
  - :ref:`section_ref_algorithm_DerivativeFreeOptimization`
+  - :ref:`section_ref_algorithm_DifferentialEvolution`

Bibliographical references:
  - [WikipediaPSO]_
diff --git a/doc/en/reference.rst b/doc/en/reference.rst
index 33ba4f9..14ac71c 100644
--- a/doc/en/reference.rst
+++ b/doc/en/reference.rst
@@ -98,6 +98,7 @@ The mathematical notations used afterward are explained in the section
   ref_algorithm_4DVAR
   ref_algorithm_Blue
   ref_algorithm_DerivativeFreeOptimization
+   ref_algorithm_DifferentialEvolution
   ref_algorithm_EnsembleBlue
   ref_algorithm_EnsembleKalmanFilter
   ref_algorithm_ExtendedBlue
diff --git a/doc/en/snippets/CrossOverProbability_CR.rst b/doc/en/snippets/CrossOverProbability_CR.rst
new file mode 100644
index 0000000..7b16227
--- /dev/null
+++ b/doc/en/snippets/CrossOverProbability_CR.rst
@@ -0,0 +1,10 @@
+.. index:: single: CrossOverProbability_CR
+
+CrossOverProbability_CR
+  This key is used to define the probability of recombination or crossover
+  during the differential evolution. This variable is usually noted as ``CR``
+  in the literature. The default value is 0.7, and it is recommended to change
+  it if necessary.
+
+  Example:
+  ``{"CrossOverProbability_CR":0.7}``
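
As an illustrative aside (not part of the patch above), the following sketch,
assuming only NumPy, shows how a crossover probability ``CR`` acts together
with a differential weight ``F`` in one "best1bin" step of differential
evolution, which is what the ``CrossOverProbability_CR``,
``MutationDifferentialWeight_F`` and ``Minimizer`` keys documented in this
patch control; all names and values are illustrative::

    import numpy as np

    np.random.seed(42)

    def best1bin_trial(population, costs, i, F=0.8, CR=0.7):
        # "best1" mutation: v = x_best + F * (x_r1 - x_r2), with r1 != r2
        n_pop, n_dim = population.shape
        best = int(np.argmin(costs))
        candidates = [k for k in range(n_pop) if k not in (i, best)]
        r1, r2 = np.random.choice(candidates, size=2, replace=False)
        mutant = population[best] + F * (population[r1] - population[r2])
        # "bin" (binomial) crossover: each component comes from the mutant with
        # probability CR, otherwise from the current individual; one random
        # component is forced from the mutant so the trial always differs
        take = np.random.rand(n_dim) < CR
        take[np.random.randint(n_dim)] = True
        return np.where(take, mutant, population[i])

    # Example: a population of 8 individuals in dimension 3, with a toy cost
    population = np.random.uniform(-5.0, 5.0, size=(8, 3))
    costs = np.sum(population**2, axis=1)
    trial = best1bin_trial(population, costs, i=0)
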
diff --git a/doc/en/snippets/Minimizer_DE.rst b/doc/en/snippets/Minimizer_DE.rst
new file mode 100644
index 0000000..69a46d7
--- /dev/null
+++ b/doc/en/snippets/Minimizer_DE.rst
@@ -0,0 +1,22 @@
+.. index:: single: Minimizer
+
+Minimizer
+  This key allows one to choose the optimization strategy for the minimizer.
+  The default choice is "BEST1BIN", and the possible ones, among the multiple
+  crossover and mutation strategies, are
+  "BEST1BIN",
+  "BEST1EXP",
+  "RAND1EXP",
+  "RANDTOBEST1EXP",
+  "CURRENTTOBEST1EXP",
+  "BEST2EXP",
+  "RAND2EXP",
+  "RANDTOBEST1BIN",
+  "CURRENTTOBEST1BIN",
+  "BEST2BIN",
+  "RAND2BIN",
+  "RAND1BIN".
+  It is strongly recommended to keep the default value.
+
+  Example:
+  ``{"Minimizer":"BEST1BIN"}``
diff --git a/doc/en/snippets/MutationDifferentialWeight_F.rst b/doc/en/snippets/MutationDifferentialWeight_F.rst
new file mode 100644
index 0000000..445507d
--- /dev/null
+++ b/doc/en/snippets/MutationDifferentialWeight_F.rst
@@ -0,0 +1,10 @@
+.. index:: single: MutationDifferentialWeight_F
+
+MutationDifferentialWeight_F
+  This key is used to define the differential weight in the mutation step.
+  This variable is usually noted as ``F`` in the literature. It can be constant
+  if it is given as a single value, or vary randomly between the two bounds of
+  the given pair. The default value is (0.5, 1).
+
+  Example:
+  ``{"MutationDifferentialWeight_F":(0.5, 1)}``
diff --git a/doc/en/snippets/PopulationSize.rst b/doc/en/snippets/PopulationSize.rst
new file mode 100644
index 0000000..7d267de
--- /dev/null
+++ b/doc/en/snippets/PopulationSize.rst
@@ -0,0 +1,12 @@
+.. index:: single: PopulationSize
+
+PopulationSize
+  This key is used to define the (approximate) size of the population at each
+  generation. This size is slightly adjusted to take into account the number of
+  state variables to be optimized. The default value is 100, and it is
+  recommended to choose a population between 1 and about ten times the number
+  of state variables, taking a relatively smaller population as the number of
+  variables increases.
+
+  Example:
+  ``{"PopulationSize":100}``
diff --git a/doc/fr/bibliography.rst b/doc/fr/bibliography.rst
index 9a15989..9f239d4 100644
--- a/doc/fr/bibliography.rst
+++ b/doc/fr/bibliography.rst
@@ -43,6 +43,8 @@ Bibliographie
.. [Cade03] Cade B. S., Noon B. R., *A Gentle Introduction to Quantile Regression for Ecologists*, Frontiers in Ecology and the Environment, 1(8), pp.412-420, 2003

+.. [Chakraborty08] Chakraborty U.K., *Advances in differential evolution*, Studies in computational intelligence, Vol.143, Springer, 2008
+
.. [Evensen94] Evensen G., *Sequential data assimilation with a nonlinear quasi-geostrophic model using monte carlo methods to forecast error statistics*, Journal of Geophysical Research, 99(C5), 10,143–10,162, 1994

.. [Evensen03] Evensen G., *The Ensemble Kalman Filter: theoretical formulation and practical implementation*, Seminar on Recent developments in data assimilation for atmosphere and ocean, ECMWF, 8 to 12 September 2003
@@ -73,6 +75,8 @@ Bibliographie
.. [Powell09] Powell M. J. D., *The BOBYQA algorithm for bound constrained optimization without derivatives*, Cambridge University Technical Report DAMTP NA2009/06, 2009

+.. [Price05] Price K.V., Storn R., Lampinen J., *Differential evolution: a practical approach to global optimization*, Springer, 2005
+
.. [Python] *Python programming language*, http://www.python.org/

..
[R] *The R Project for Statistical Computing*, http://www.r-project.org/ @@ -83,6 +87,8 @@ Bibliographie .. [SalomeMeca] *Salome_Meca et Code_Aster, Analyse des Structures et Thermomécanique pour les Etudes et la Recherche*, http://www.code-aster.org/ +.. [Storn97] Storn R., Price, K., *Differential Evolution – A Simple and Efficient Heuristic for global Optimization over Continuous Spaces*, Journal of Global Optimization, 11(1), pp.341-359, 1997 + .. [Tarantola87] Tarantola A., *Inverse Problem: Theory Methods for Data Fitting and Parameter Estimation*, Elsevier, 1987 .. [Talagrand97] Talagrand O., *Assimilation of Observations, an Introduction*, Journal of the Meteorological Society of Japan, 75(1B), pp.191-209, 1997 diff --git a/doc/fr/ref_algorithm_DerivativeFreeOptimization.rst b/doc/fr/ref_algorithm_DerivativeFreeOptimization.rst index 97bd2d2..490f082 100644 --- a/doc/fr/ref_algorithm_DerivativeFreeOptimization.rst +++ b/doc/fr/ref_algorithm_DerivativeFreeOptimization.rst @@ -27,18 +27,15 @@ Algorithme de calcul "*DerivativeFreeOptimization*" --------------------------------------------------- -.. warning:: - - dans sa présente version, cet algorithme est expérimental, et reste donc - susceptible de changements dans les prochaines versions. - Description +++++++++++ -Cet algorithme réalise une estimation d'état d'un système par minimisation d'une -fonctionnelle d'écart :math:`J` sans gradient. C'est une méthode qui n'utilise -pas les dérivées de la fonctionnelle d'écart. Elle entre, par exemple, dans la -même catégorie que l':ref:`section_ref_algorithm_ParticleSwarmOptimization`. +Cet algorithme réalise une estimation d'état d'un système par minimisation +d'une fonctionnelle d'écart :math:`J` sans gradient. C'est une méthode qui +n'utilise pas les dérivées de la fonctionnelle d'écart. Elle entre, par +exemple, dans la même catégorie que +l':ref:`section_ref_algorithm_ParticleSwarmOptimization` ou +l':ref:`section_ref_algorithm_DifferentialEvolution`. C'est une méthode d'optimisation permettant la recherche du minimum global d'une fonctionnelle d'erreur :math:`J` quelconque de type :math:`L^1`, :math:`L^2` ou @@ -93,10 +90,10 @@ Les options de l'algorithme sont les suivantes: calculs ou du stockage coûteux. La valeur par défaut est une liste vide, aucune de ces variables n'étant calculée et stockée par défaut. Les noms possibles sont dans la liste suivante : ["BMA", "CostFunctionJ", - "CostFunctionJb", "CostFunctionJo", "CostFunctionJAtCurrentOptimum", - "CostFunctionJbAtCurrentOptimum", "CostFunctionJoAtCurrentOptimum", - "CurrentOptimum", "CurrentState", "IndexOfOptimum", - "InnovationAtCurrentState", "OMA", "OMB", + "CostFunctionJAtCurrentOptimum", "CostFunctionJb", + "CostFunctionJbAtCurrentOptimum", "CostFunctionJo", + "CostFunctionJoAtCurrentOptimum", "CurrentOptimum", "CurrentState", + "IndexOfOptimum", "Innovation", "InnovationAtCurrentState", "OMA", "OMB", "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. @@ -128,6 +125,8 @@ Les sorties non conditionnelles de l'algorithme sont les suivantes: Les sorties conditionnelles de l'algorithme sont les suivantes: + .. include:: snippets/BMA.rst + .. include:: snippets/CostFunctionJAtCurrentOptimum.rst .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst @@ -138,6 +137,8 @@ Les sorties conditionnelles de l'algorithme sont les suivantes: .. include:: snippets/IndexOfOptimum.rst + .. include:: snippets/Innovation.rst + .. 
include:: snippets/InnovationAtCurrentState.rst .. include:: snippets/OMA.rst @@ -157,6 +158,7 @@ Voir aussi Références vers d'autres sections : - :ref:`section_ref_algorithm_ParticleSwarmOptimization` + - :ref:`section_ref_algorithm_DifferentialEvolution` Références bibliographiques : - [Johnson08]_ diff --git a/doc/fr/ref_algorithm_DifferentialEvolution.rst b/doc/fr/ref_algorithm_DifferentialEvolution.rst new file mode 100644 index 0000000..80dc429 --- /dev/null +++ b/doc/fr/ref_algorithm_DifferentialEvolution.rst @@ -0,0 +1,175 @@ +.. + Copyright (C) 2008-2018 EDF R&D + + This file is part of SALOME ADAO module. + + This library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with this library; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + + See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com + + Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D + +.. index:: single: DifferentialEvolution +.. _section_ref_algorithm_DifferentialEvolution: + +Algorithme de calcul "*DifferentialEvolution*" +---------------------------------------------- + +.. warning:: + + dans sa présente version, cet algorithme est expérimental, et reste donc + susceptible de changements dans les prochaines versions. + +Description ++++++++++++ + +Cet algorithme réalise une estimation de l'état d'un système par minimisation +d'une fonctionnelle d'écart :math:`J` en utilisant une méthode évolutionnaire +d'évolution différentielle. C'est une méthode qui n'utilise pas les dérivées de +la fonctionnelle d'écart. Elle entre dans la même catégorie que +l':ref:`section_ref_algorithm_DerivativeFreeOptimization` ou +l':ref:`section_ref_algorithm_ParticleSwarmOptimization`. + +C'est une méthode d'optimisation permettant la recherche du minimum global d'une +fonctionnelle d'erreur :math:`J` quelconque de type :math:`L^1`, :math:`L^2` ou +:math:`L^{\infty}`, avec ou sans pondérations. La fonctionnelle d'erreur par +défaut est celle de moindres carrés pondérés augmentés, classiquement utilisée +en assimilation de données. + +Commandes requises et optionnelles +++++++++++++++++++++++++++++++++++ + +Les commandes requises générales, disponibles dans l'interface en édition, sont +les suivantes: + + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst + +Les commandes optionnelles générales, disponibles dans l'interface en édition, +sont indiquées dans la :ref:`section_ref_assimilation_keywords`. De plus, les +paramètres de la commande "*AlgorithmParameters*" permettent d'indiquer les +options particulières, décrites ci-après, de l'algorithme. On se reportera à la +:ref:`section_ref_options_Algorithm_Parameters` pour le bon usage de cette +commande. + +Les options de l'algorithme sont les suivantes: + + .. 
include:: snippets/Minimizer_DE.rst + + .. include:: snippets/BoundsWithExtremes.rst + + .. include:: snippets/CrossOverProbability_CR.rst + + .. include:: snippets/MaximumNumberOfSteps.rst + + .. include:: snippets/MaximumNumberOfFunctionEvaluations.rst + + .. include:: snippets/MutationDifferentialWeight_F.rst + + .. include:: snippets/PopulationSize.rst + + .. include:: snippets/QualityCriterion.rst + + .. include:: snippets/SetSeed.rst + + StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + + Cette liste indique les noms des variables supplémentaires qui peuvent être + disponibles à la fin de l'algorithme. Cela implique potentiellement des + calculs ou du stockage coûteux. La valeur par défaut est une liste vide, + aucune de ces variables n'étant calculée et stockée par défaut. Les noms + possibles sont dans la liste suivante : ["BMA", "CostFunctionJ", + "CostFunctionJAtCurrentOptimum", "CostFunctionJb", + "CostFunctionJbAtCurrentOptimum", "CostFunctionJo", + "CostFunctionJoAtCurrentOptimum", "CurrentOptimum", "CurrentState", + "IndexOfOptimum", "Innovation", "InnovationAtCurrentState", "OMA", "OMB", + "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum", + "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. + + Exemple : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` + +Informations et variables disponibles à la fin de l'algorithme +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +En sortie, après exécution de l'algorithme, on dispose d'informations et de +variables issues du calcul. La description des +:ref:`section_ref_output_variables` indique la manière de les obtenir par la +méthode nommée ``get`` de la variable "*ADD*" du post-processing. Les variables +d'entrée, mises à disposition de l'utilisateur en sortie pour faciliter +l'écriture des procédures de post-processing, sont décrites dans +l':ref:`subsection_r_o_v_Inventaire`. + +Les sorties non conditionnelles de l'algorithme sont les suivantes: + + .. include:: snippets/Analysis.rst + + .. include:: snippets/CostFunctionJ.rst + + .. include:: snippets/CostFunctionJb.rst + + .. include:: snippets/CostFunctionJo.rst + + .. include:: snippets/CurrentState.rst + +Les sorties conditionnelles de l'algorithme sont les suivantes: + + .. include:: snippets/BMA.rst + + .. include:: snippets/CostFunctionJAtCurrentOptimum.rst + + .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst + + .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst + + .. include:: snippets/CurrentOptimum.rst + + .. include:: snippets/IndexOfOptimum.rst + + .. include:: snippets/Innovation.rst + + .. include:: snippets/InnovationAtCurrentState.rst + + .. include:: snippets/OMA.rst + + .. include:: snippets/OMB.rst + + .. include:: snippets/SimulatedObservationAtBackground.rst + + .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst + + .. include:: snippets/SimulatedObservationAtCurrentState.rst + + .. 
include:: snippets/SimulatedObservationAtOptimum.rst + +Voir aussi +++++++++++ + +Références vers d'autres sections : + - :ref:`section_ref_algorithm_DerivativeFreeOptimization` + - :ref:`section_ref_algorithm_ParticleSwarmOptimization` + +Références bibliographiques : + - [Chakraborty08]_ + - [Price05]_ + - [Storn97]_ diff --git a/doc/fr/ref_algorithm_ParticleSwarmOptimization.rst b/doc/fr/ref_algorithm_ParticleSwarmOptimization.rst index f9ca7d7..1f15025 100644 --- a/doc/fr/ref_algorithm_ParticleSwarmOptimization.rst +++ b/doc/fr/ref_algorithm_ParticleSwarmOptimization.rst @@ -30,11 +30,12 @@ Algorithme de calcul "*ParticleSwarmOptimization*" Description +++++++++++ -Cet algorithme réalise une estimation de l'état d'un système dynamique par -minimisation d'une fonctionnelle d'écart :math:`J` en utilisant un essaim -particulaire. C'est une méthode qui n'utilise pas les dérivées de la +Cet algorithme réalise une estimation de l'état d'un système par minimisation +d'une fonctionnelle d'écart :math:`J` en utilisant une méthode évolutionnaire +d'essaim particulaire. C'est une méthode qui n'utilise pas les dérivées de la fonctionnelle d'écart. Elle entre dans la même catégorie que -l':ref:`section_ref_algorithm_DerivativeFreeOptimization`. +l':ref:`section_ref_algorithm_DerivativeFreeOptimization` ou +l':ref:`section_ref_algorithm_DifferentialEvolution`. C'est une méthode d'optimisation permettant la recherche du minimum global d'une fonctionnelle d'erreur :math:`J` quelconque de type :math:`L^1`, :math:`L^2` ou @@ -121,10 +122,10 @@ Les options de l'algorithme sont les suivantes: disponibles à la fin de l'algorithme. Cela implique potentiellement des calculs ou du stockage coûteux. La valeur par défaut est une liste vide, aucune de ces variables n'étant calculée et stockée par défaut. Les noms - possibles sont dans la liste suivante : ["BMA", "CostFunctionJ", - "CostFunctionJb", "CostFunctionJo", "CurrentState", "OMA", "OMB", - "Innovation", "SimulatedObservationAtBackground", - "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. + possibles sont dans la liste suivante : ["BMA", "CostFunctionJ", "CostFunctionJb", + "CostFunctionJo", "CurrentState", "OMA", "OMB", "Innovation", + "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", + "SimulatedObservationAtOptimum"]. Exemple : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` @@ -173,6 +174,7 @@ Voir aussi Références vers d'autres sections : - :ref:`section_ref_algorithm_DerivativeFreeOptimization` + - :ref:`section_ref_algorithm_DifferentialEvolution` Références bibliographiques : - [WikipediaPSO]_ diff --git a/doc/fr/reference.rst b/doc/fr/reference.rst index d4480ca..f2b31ef 100644 --- a/doc/fr/reference.rst +++ b/doc/fr/reference.rst @@ -100,6 +100,7 @@ ADAO. Les notations mathématiques utilisées sont expliquées dans la section ref_algorithm_4DVAR ref_algorithm_Blue ref_algorithm_DerivativeFreeOptimization + ref_algorithm_DifferentialEvolution ref_algorithm_EnsembleBlue ref_algorithm_EnsembleKalmanFilter ref_algorithm_ExtendedBlue diff --git a/doc/fr/snippets/CrossOverProbability_CR.rst b/doc/fr/snippets/CrossOverProbability_CR.rst new file mode 100644 index 0000000..e815cb5 --- /dev/null +++ b/doc/fr/snippets/CrossOverProbability_CR.rst @@ -0,0 +1,10 @@ +.. index:: single: CrossOverProbability_CR + +CrossOverProbability_CR + Cette clé permet de définir la probabilité de recombinaison ou de croisement + lors de l'évolution différentielle. 
Cette variable est usuellement notée + ``CR`` dans la littérature. La valeur par défaut est 0.7, et il est conseillé + de la changer si nécessaire. + + Exemple : + ``{"CrossOverProbability_CR":0.7}`` diff --git a/doc/fr/snippets/Minimizer_DE.rst b/doc/fr/snippets/Minimizer_DE.rst new file mode 100644 index 0000000..616ca5d --- /dev/null +++ b/doc/fr/snippets/Minimizer_DE.rst @@ -0,0 +1,23 @@ +.. index:: single: Minimizer + +Minimizer + Cette clé permet de changer la stratégie de minimisation pour l'optimiseur. + Le choix par défaut est "BEST1BIN", et les choix possibles sont les + multiples variables pour les stratégies de croisement et mutation, décrites + par les clés + "BEST1BIN", + "BEST1EXP", + "RAND1EXP", + "RANDTOBEST1EXP", + "CURRENTTOBEST1EXP", + "BEST2EXP", + "RAND2EXP", + "RANDTOBEST1BIN", + "CURRENTTOBEST1BIN", + "BEST2BIN", + "RAND2BIN", + "RAND1BIN". + Il est fortement conseillé de conserver la valeur par défaut. + + Exemple : + ``{"Minimizer":"BEST1BIN"}`` diff --git a/doc/fr/snippets/MutationDifferentialWeight_F.rst b/doc/fr/snippets/MutationDifferentialWeight_F.rst new file mode 100644 index 0000000..2cd6a88 --- /dev/null +++ b/doc/fr/snippets/MutationDifferentialWeight_F.rst @@ -0,0 +1,11 @@ +.. index:: single: MutationDifferentialWeight_F + +MutationDifferentialWeight_F + Cette clé permet de définir le poids différentiel dans l'étape de mutation. + Cette variable est usuellement notée ``F`` dans la littérature. Il peut être + constant s'il est sous la forme d'une valeur unique, ou variable de manière + aléatoire dans les deux bornes données dans la paire. La valeur par défaut + est (0.5, 1). + + Exemple : + ``{"MutationDifferentialWeight_F":(0.5, 1)}`` diff --git a/doc/fr/snippets/PopulationSize.rst b/doc/fr/snippets/PopulationSize.rst new file mode 100644 index 0000000..de245ed --- /dev/null +++ b/doc/fr/snippets/PopulationSize.rst @@ -0,0 +1,12 @@ +.. index:: single: PopulationSize + +PopulationSize + Cette clé permet de définir la taille (approximative) de la population à + chaque génération. Cette taille est légèrement ajustée pour tenir compte du + nombre de variables d'état à optimiser. La valeur par défaut est 100. Il est + conseillé de choisir une population comprise entre 1 et une dizaine de fois + le nombre de variables d'états, la taille étant d'autant plus petite que le + nombre de variables augmente. + + Exemple : + ``{"PopulationSize":100}``
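
As a closing illustrative aside (not part of the patch above), the sketch below
shows how the options documented in this commit map onto a generic differential
evolution implementation, here SciPy's ``differential_evolution``. The
correspondences given in the comments are only indicative (in particular,
SciPy's ``popsize`` is a per-variable multiplier, unlike "*PopulationSize*"),
and the cost function is the same toy augmented weighted least squares
:math:`J` as in the earlier sketch::

    import numpy as np
    from scipy.optimize import differential_evolution

    # Same toy assimilation ingredients as before (illustrative values only)
    xb = np.array([0.0, 1.0, 2.0])                 # background state
    B  = np.eye(3)                                 # background error covariance
    y  = np.array([0.5, 2.9, 4.1])                 # observation
    R  = 0.1 * np.eye(3)                           # observation error covariance
    H  = lambda x: np.array([x[0], x[1] + x[2], 2.0 * x[2]])  # observation operator

    def J(x):
        "Augmented weighted least squares cost function"
        db = x - xb
        do = y - H(x)
        return float(db @ np.linalg.solve(B, db) + do @ np.linalg.solve(R, do))

    bounds = [(-5.0, 5.0)] * 3          # bounds given on the state variables
    result = differential_evolution(
        J, bounds,
        strategy="best1bin",            # cf. "Minimizer"
        mutation=(0.5, 1),              # cf. "MutationDifferentialWeight_F"
        recombination=0.7,              # cf. "CrossOverProbability_CR"
        popsize=15,                     # per-variable multiplier in SciPy
        maxiter=100,                    # cf. "MaximumNumberOfSteps"
        seed=123,                       # cf. "SetSeed"
    )
    print(result.x, result.fun)
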