..
   Copyright (C) 2008-2024 EDF R&D

   This file is part of SALOME ADAO module.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
.. index:: single: ParticleSwarmOptimization
.. index:: single: Particle Swarm (Optimization)
.. _section_ref_algorithm_ParticleSwarmOptimization:
Calculation algorithm "*ParticleSwarmOptimization*"
---------------------------------------------------

.. ------------------------------------ ..
.. include:: snippets/Header2Algo01.rst
This algorithm performs an estimation of the state of a system by
gradient-free minimization of a cost function :math:`J`, using an
evolutionary particle swarm strategy. It is a method that does not use the
derivatives of the cost function. It falls into the same category as the
:ref:`section_ref_algorithm_DerivativeFreeOptimization`, the
:ref:`section_ref_algorithm_DifferentialEvolution` or the
:ref:`section_ref_algorithm_TabuSearch`.
This is a mono-objective optimization method, allowing for global minimum
search of a general error function :math:`J` of type :math:`L^1`, :math:`L^2`
or :math:`L^{\infty}`, with or without weights, as described in the section
:ref:`section_theory_optimization`. The default error function is the
augmented weighted least squares function, classically used in data
assimilation.
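For illustration only (this is not part of the ADAO interface), the three
error function types mentioned above can be sketched in plain Python. The
observation operator ``H``, the data ``y`` and the weights below are
hypothetical examples, not values from ADAO:

```python
import numpy as np

def error_function(x, y, H, weights=None, norm="L2"):
    """Weighted error between observations y and simulation H(x).

    norm: "L1" (sum of absolute residuals), "L2" (sum of squared
    residuals) or "Linf" (largest absolute residual).
    """
    residual = np.asarray(y, dtype=float) - H(np.asarray(x, dtype=float))
    w = np.ones_like(residual) if weights is None else np.asarray(weights)
    if norm == "L1":
        return float(np.sum(w * np.abs(residual)))
    if norm == "L2":
        return float(np.sum(w * residual**2))
    if norm == "Linf":
        return float(np.max(w * np.abs(residual)))
    raise ValueError("unknown norm: %s" % norm)

# Hypothetical linear operator and data, for illustration only
H = lambda x: np.array([x[0] + x[1], 2.0 * x[0]])
y = np.array([3.0, 2.0])
```

Whatever the chosen norm, the function is evaluated point-wise only, which is
why a derivative-free method such as particle swarm can minimize it directly.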
It is based on the evolution of a population (called a "swarm") of states
(each state is called a "particle" or an "insect"). Various variants of this
algorithm exist. The following stable and robust formulations are proposed
here:
.. index::
    pair: Variant ; CanonicalPSO
    pair: Variant ; OGCR
    pair: Variant ; SPSO-2011
    pair: Variant ; AIS PSO
    pair: Variant ; SIS PSO
    pair: Variant ; PSIS PSO
    pair: Variant ; SPSO-2011-SIS
    pair: Variant ; SPSO-2011-PSIS
- "CanonicalPSO" (Canonical Particle Swarm Optimization, see
  [ZambranoBigiarini13]_), classical algorithm called "canonical" of particle
  swarm, robust and defining a reference for particle swarm algorithms,
- "OGCR" (Simple Particle Swarm Optimization), simplified algorithm of
  particle swarm, with no bounds on insects or velocities, not recommended
  because less robust, but sometimes a lot more efficient,
- "SPSO-2011" (Standard Particle Swarm Optimization 2011, see
  [ZambranoBigiarini13]_), 2011 reference algorithm of particle swarm, robust,
  efficient and defined as a reference for particle swarm algorithms. This
  algorithm is sometimes called ":math:`\omega`-PSO" or "Inertia PSO" because
  it incorporates a so-called inertia contribution, or also called "AIS" (for
  "Asynchronous Iteration Strategy") or "APSO" (for "Advanced Particle Swarm
  Optimization") because it incorporates evolutionary updating of the best
  elements, leading to intrinsically improved convergence of the algorithm,
- "SPSO-2011-SIS" (Standard Particle Swarm Optimization 2011 with Synchronous
  Iteration Strategy), very similar to the 2011 reference algorithm, and with
  a synchronous particle update, called "SIS",
- "SPSO-2011-PSIS" (Standard Particle Swarm Optimization 2011 with Parallel
  Synchronous Iteration Strategy), similar to the "SPSO-2011-SIS" algorithm,
  with synchronous updating and parallelization, known as "PSIS", of the
  insect swarm.
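For intuition only, the basic swarm evolution common to these variants can be
sketched as a didactic toy in plain Python. This is *not* the ADAO
implementation: the inertia and acceleration coefficients below are common
textbook values, not ADAO defaults, and the real variants add velocity
clamping, topology choices and the synchronous/asynchronous update strategies
described above:

```python
import numpy as np

def toy_pso(J, bounds, n_insects=40, n_generations=50, seed=0,
            omega=0.72, c1=1.49, c2=1.49):
    """Didactic particle swarm minimization of J over box bounds.

    bounds: sequence of (min, max) pairs, one per state variable.
    Returns the best state found and its cost.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = bounds.shape[0]
    span = bounds[:, 1] - bounds[:, 0]
    # Random initial positions and velocities of the swarm
    x = bounds[:, 0] + rng.random((n_insects, dim)) * span
    v = (rng.random((n_insects, dim)) - 0.5) * span
    p_best = x.copy()                          # each insect's best position
    p_cost = np.array([J(xi) for xi in x])     # and its cost
    g_best = p_best[np.argmin(p_cost)].copy()  # swarm-wide best position
    for _ in range(n_generations):
        r1, r2 = rng.random((2, n_insects, dim))
        # Inertia + cognitive (own best) + social (swarm best) contributions
        v = omega * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, bounds[:, 0], bounds[:, 1])
        cost = np.array([J(xi) for xi in x])
        improved = cost < p_cost
        p_best[improved], p_cost[improved] = x[improved], cost[improved]
        g_best = p_best[np.argmin(p_cost)].copy()
    return g_best, float(p_cost.min())

# Minimize a simple quadratic with minimum at (1, 1)
best, cost = toy_pso(lambda s: float(np.sum((s - 1.0) ** 2)),
                     bounds=[[-5.0, 5.0], [-5.0, 5.0]])
```

Note that the cost function is only ever *evaluated*, never differentiated,
which is the defining property of this family of methods.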
The following are a few practical suggestions for the effective use of these
algorithms:
- The recommended variant of this algorithm is the "SPSO-2011", even if the
  "CanonicalPSO" algorithm remains by default the more robust one. If the
  state evaluation can be carried out in parallel, the "SPSO-2011-PSIS"
  algorithm can be used, even if its convergence is sometimes a little less
  efficient.
- The recommended number of particles or insects usually varies between 40
  and 100 depending on the algorithm, more or less independently of the
  dimension of the state space. Usually, the best performances are obtained
  for populations of 70 to 500 particles. Even if the default value for this
  elementary parameter comes from extended knowledge on these algorithms, it
  is recommended to adapt it to the difficulty of the given problems.
- The recommended number of generations for population evolution is often
  around 50, but it can easily vary between 25 and 500.
- The maximum number of evaluations of the simulation function should usually
  be limited to between a few thousand and a few tens of thousands of times
  the dimension of the state space.
- The error functional usually decreases by levels (thus with a zero
  progression of the value of the functional at each generation while staying
  within a level), making it *not recommended* to stop on the criterion of
  decrease of the cost function. It is normally wiser to adapt the number of
  iterations or generations to accelerate the convergence of the algorithms.
- If the problem is constrained, it is necessary to define the bounds of the
  variables (by the variable "*Bounds*"). If the problem is totally
  unconstrained, it is essential to define increment bounds (by the variable
  "*BoxBounds*") to delimit the optimal search in a useful way. Similarly, if
  the problem is partially constrained, it is recommended (but not required)
  to define increment bounds. In case these increment bounds are not defined,
  the variable bounds will be used as increment bounds.
These suggestions are to be used as experimental indications, not as
requirements, because they are to be appreciated or adapted according to the
physics of each problem that is treated.
The count of the number of evaluations of the function to be simulated during
this algorithm is deterministic, namely the "*number of iterations or
generations*" multiplied by the "*number of individuals in the population*".
With the default values, it takes between ``40*50=2000`` and ``100*50=5000``
evaluations. It is for this reason that this algorithm is usually interesting
when the dimension of the state space is large, or when the non-linearities
of the simulation make the evaluation of the gradient of the functional by
numerical approximation complicated or invalid. But it is also necessary that
the calculation of the function to be simulated is not too costly, to avoid a
prohibitive optimization time length.
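The deterministic evaluation budget above can be checked with a short
computation. The parameter names in the helper below mirror the
"*NumberOfInsects*" and iteration keywords used in this documentation, but
the script itself is a plain illustration, not an ADAO call:

```python
# Deterministic evaluation budget: population size x number of generations
def evaluation_budget(number_of_insects, number_of_generations):
    return number_of_insects * number_of_generations

# With the default values quoted in the text:
low = evaluation_budget(40, 50)    # 2000 evaluations
high = evaluation_budget(100, 50)  # 5000 evaluations

# A rough feasibility check: if one simulation takes cost_per_eval seconds,
# a sequential optimization takes at least this many seconds overall.
def minimal_wall_time(number_of_insects, number_of_generations, cost_per_eval):
    return evaluation_budget(number_of_insects, number_of_generations) * cost_per_eval
```

Such a back-of-the-envelope estimate is a simple way to decide beforehand
whether the simulation is cheap enough for this family of algorithms.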
.. ------------------------------------ ..
.. include:: snippets/Header2Algo12.rst

.. include:: snippets/FeaturePropNonLocalOptimization.rst

.. include:: snippets/FeaturePropDerivativeFree.rst

.. include:: snippets/FeaturePropParallelAlgorithm.rst

.. include:: snippets/FeaturePropConvergenceOnNumbers.rst

.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst

.. include:: snippets/Background.rst

.. include:: snippets/BackgroundError.rst

.. include:: snippets/Observation.rst

.. include:: snippets/ObservationError.rst

.. include:: snippets/ObservationOperator.rst

.. ------------------------------------ ..
.. include:: snippets/Header2Algo03AdOp.rst

.. include:: snippets/BoundsWithNone.rst

.. include:: snippets/BoxBounds.rst

.. include:: snippets/CognitiveAcceleration.rst

.. include:: snippets/InertiaWeight.rst

.. include:: snippets/InitializationPoint.rst

.. include:: snippets/MaximumNumberOfFunctionEvaluations.rst

.. include:: snippets/MaximumNumberOfIterations_50.rst

.. include:: snippets/NumberOfInsects.rst

.. include:: snippets/QualityCriterion.rst

.. include:: snippets/SetSeed.rst

.. include:: snippets/SocialAcceleration.rst
StoreSupplementaryCalculations
  .. index:: single: StoreSupplementaryCalculations

  *List of names*. This list indicates the names of the supplementary
  variables, that can be available during or at the end of the algorithm, if
  they are initially required by the user. Their availability involves,
  potentially, costly calculations or memory consumptions. The default is
  then a void list, none of these variables being calculated and stored by
  default (excepted the unconditional variables). The possible names are in
  the following list (the detailed description of each named variable is
  given in the following part of this specific algorithmic documentation, in
  the sub-section "*Information and variables available at the end of the
  algorithm*"): [
  "Analysis",
  "BMA",
  "CostFunctionJ",
  "CostFunctionJb",
  "CostFunctionJo",
  "CurrentIterationNumber",
  "CurrentState",
  "Innovation",
  "InternalCostFunctionJ",
  "InternalCostFunctionJb",
  "InternalCostFunctionJo",
  "InternalStates",
  "OMA",
  "OMB",
  "SimulatedObservationAtBackground",
  "SimulatedObservationAtCurrentState",
  "SimulatedObservationAtOptimum",
  ].

  Example :
  ``{"StoreSupplementaryCalculations":["CurrentState", "Residu"]}``
.. include:: snippets/SwarmTopology.rst

.. include:: snippets/Variant_PSO.rst

.. include:: snippets/VelocityClampingFactor.rst

.. ------------------------------------ ..
.. include:: snippets/Header2Algo04.rst

.. include:: snippets/Analysis.rst

.. include:: snippets/CostFunctionJ.rst

.. include:: snippets/CostFunctionJb.rst

.. include:: snippets/CostFunctionJo.rst

.. ------------------------------------ ..
.. include:: snippets/Header2Algo05.rst

.. include:: snippets/Analysis.rst

.. include:: snippets/BMA.rst

.. include:: snippets/CostFunctionJ.rst

.. include:: snippets/CostFunctionJb.rst

.. include:: snippets/CostFunctionJo.rst

.. include:: snippets/CurrentIterationNumber.rst

.. include:: snippets/CurrentState.rst

.. include:: snippets/Innovation.rst

.. include:: snippets/InternalCostFunctionJ.rst

.. include:: snippets/InternalCostFunctionJb.rst

.. include:: snippets/InternalCostFunctionJo.rst

.. include:: snippets/InternalStates.rst

.. include:: snippets/OMA.rst

.. include:: snippets/OMB.rst

.. include:: snippets/SimulatedObservationAtBackground.rst

.. include:: snippets/SimulatedObservationAtCurrentState.rst

.. include:: snippets/SimulatedObservationAtOptimum.rst
.. ------------------------------------ ..
.. _section_ref_algorithm_ParticleSwarmOptimization_examples:
.. include:: snippets/Header2Algo09.rst

.. include:: scripts/simple_ParticleSwarmOptimization1.rst

.. literalinclude:: scripts/simple_ParticleSwarmOptimization1.py

.. include:: snippets/Header2Algo10.rst

.. literalinclude:: scripts/simple_ParticleSwarmOptimization1.res

.. include:: snippets/Header2Algo11.rst

.. _simple_ParticleSwarmOptimization1:
.. image:: scripts/simple_ParticleSwarmOptimization1.png

.. ------------------------------------ ..
.. include:: snippets/Header2Algo06.rst

- :ref:`section_ref_algorithm_DerivativeFreeOptimization`
- :ref:`section_ref_algorithm_DifferentialEvolution`
- :ref:`section_ref_algorithm_TabuSearch`

.. ------------------------------------ ..
.. include:: snippets/Header2Algo07.rst

- [ZambranoBigiarini13]_