.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelFree.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropDerivativeFree.rst
+.. include:: snippets/FeaturePropConvergenceOnNumbers.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelAlgorithm.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelAlgorithm.rst
+.. include:: snippets/FeaturePropConvergenceOnNumbers.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropDerivativeNeeded.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropDerivativeFree.rst
+.. include:: snippets/FeaturePropConvergenceOnNumbers.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelAlgorithm.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
--- /dev/null
+.. index:: single: Convergence on residue or number criteria
+
+- The methods proposed by this algorithm **achieve their convergence on one or
+ more residue or number criteria**. In practice, there may be several
+ convergence criteria active simultaneously.
+
+ The residue can be a conventional measure based on a gap (e.g.
+ "*calculation-measurement gap*"), or be a significant value for the algorithm
+ (e.g. "*nullity of gradient*").
+
+ The number is frequently a significant value for the algorithm, such as a
+ number of iterations or a number of evaluations, but it can also be, for
+ example, a number of generations for an evolutionary algorithm.
+
+  Convergence thresholds need to be carefully adjusted, either to reduce the
+  global calculation cost or to ensure that convergence is adapted to the
+  physical case encountered.
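To make the combination of criteria above concrete, here is a minimal, hypothetical sketch (the function `minimize_with_criteria` and all its names and parameters are invented for illustration and are not part of the documented module): an iterative descent stops on whichever is met first, a residue criterion (gradient norm under a threshold) or a number criterion (a cap on the number of evaluations).

```python
# Hypothetical illustration only: several convergence criteria are active at
# the same time, and the first one satisfied stops the iterations.
import numpy as np

def minimize_with_criteria(cost_grad, x0, residue_tol=1e-8, max_evals=200):
    """Fixed-step gradient descent stopping on a residue OR a number criterion."""
    x = np.asarray(x0, dtype=float)
    for n_eval in range(1, max_evals + 1):
        _cost, grad = cost_grad(x)
        if np.linalg.norm(grad) < residue_tol:   # residue criterion met
            return x, "residue", n_eval
        x = x - 0.1 * grad                       # descent step
    return x, "number", max_evals                # number criterion met

# Quadratic cost J(x) = ||x||^2 / 2 with gradient x: the minimum is 0.
result, reason, evals = minimize_with_criteria(
    lambda v: (0.5 * float(v @ v), v), [1.0, 2.0])
```

Here the residue criterion triggers first; lowering `max_evals` would make the number criterion stop the search instead, which is why such thresholds must be tuned to the physical case at hand.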
--- /dev/null
+.. index:: single: Convergence on number criteria
+
+- The methods proposed by this algorithm **achieve their convergence on one or
+  more number criteria**. In practice, there may be several convergence
+  criteria active simultaneously.
+
+ The number is frequently a significant value for the algorithm, such as a
+ number of iterations or a number of evaluations, but it can also be, for
+ example, a number of generations for an evolutionary algorithm.
+
+  Convergence thresholds need to be carefully adjusted, either to reduce the
+  global calculation cost or to ensure that convergence is adapted to the
+  physical case encountered.
--- /dev/null
+.. index:: single: Convergence on residue criteria
+
+- The methods proposed by this algorithm **achieve their convergence on one or
+ more residue criteria**. In practice, there may be several convergence
+ criteria active simultaneously.
+
+ The residue can be a conventional measure based on a gap (e.g.
+ "*calculation-measurement gap*"), or be a significant value for the algorithm
+ (e.g. "*nullity of gradient*").
+
+  Convergence thresholds need to be carefully adjusted, either to reduce the
+  global calculation cost or to ensure that convergence is adapted to the
+  physical case encountered.
--- /dev/null
+.. index:: single: Convergence on static criteria
+
+- The methods proposed by this algorithm **achieve their convergence on one or
+ more static criteria, fixed by some particular algorithmic properties**. In
+ practice, there may be several convergence criteria active simultaneously.
+
+  The most frequent algorithmic property is that of direct calculations, which
+  evaluate the converged solution without any controllable iteration. There is
+  no convergence threshold to be adjusted in this case.
- The methods proposed by this algorithm **have internal parallelism**, and can
therefore take advantage of computational distribution resources. The
- potential interaction between the internal parallelism of the methods and the
- parallelism that may be present in the user's observation or evolution
- operators must therefore be carefully tuned.
+  potential interaction between the internal parallelism of the methods and
+  the parallelism that may be present in the observation or evolution
+  operators embedding user codes must therefore be carefully tuned.
- The methods proposed by this algorithm **have no internal parallelism, but
use the numerical derivation of operator(s), which can be parallelized**. The
- potential interaction between the parallelism of the numerical derivation and
- the parallelism that may be present in the user's observation or evolution
- operators must therefore be carefully tuned.
+  potential interaction between the parallelism of the numerical derivation
+  and the parallelism that may be present in the observation or evolution
+  operators embedding user codes must therefore be carefully tuned.
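As a sketch of the idea above, the following hypothetical code (not taken from the module, all names invented for illustration) parallelizes the forward-difference numerical derivation of an operator `H`: each perturbed evaluation is independent, so the columns of the Jacobian can be computed concurrently. If `H` wraps a user code that is itself parallel, the worker count here and the operator's own parallelism compete for the same resources and must be tuned together.

```python
# Hypothetical illustration: distribute the perturbed evaluations used by a
# forward-difference numerical derivation of an operator H.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def numerical_jacobian(H, x, eps=1e-6, max_workers=4):
    """Forward-difference Jacobian of H at x, one perturbation per task."""
    x = np.asarray(x, dtype=float)
    Hx = np.asarray(H(x))
    def column(i):
        xp = x.copy()
        xp[i] += eps                       # perturb one component
        return (np.asarray(H(xp)) - Hx) / eps
    # The perturbed evaluations are independent: run them in a worker pool.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        cols = list(pool.map(column, range(x.size)))
    return np.column_stack(cols)

# For a linear operator H(x) = A x, the Jacobian is A itself.
A = np.array([[2.0, 0.0], [1.0, 3.0]])
J = numerical_jacobian(lambda v: A @ v, np.array([0.5, -1.0]))
```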
-.. index:: single: Absence of algorithmic parallelism
+.. index:: single: No algorithmic parallelism
-- The methods proposed by this algorithm **have no internal parallelism**, and
- therefore cannot take advantage of computer resources for distributing
- calculations. The methods are sequential, and any use of parallelism
- resources is therefore reserved for the user's observation or evolution
- operators.
+- The methods proposed by this algorithm **have no internal parallelism or
+ numerical derivation of operator(s)**, and therefore cannot take advantage of
+ computer resources for distributing calculations. The methods are sequential,
+ and any use of parallelism resources is therefore reserved for observation or
+ evolution operators, i.e. user codes.
-Some noteworthy properties of the implemented algorithm
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++
+Some noteworthy properties of the implemented methods
++++++++++++++++++++++++++++++++++++++++++++++++++++++
To complete the description, we summarize here a few notable properties of the
-algorithm or of its implementation. These properties may have an influence on
-how it is used or on its computational performance. For further information,
-please refer to the more comprehensive references given at the end of this
-algorithm description.
+algorithm methods or of their implementations. These properties may have an
+influence on how they are used or on their computational performance. For
+further information, please refer to the more comprehensive references given
+at the end of this algorithm description.
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelFree.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropDerivativeFree.rst
+.. include:: snippets/FeaturePropConvergenceOnNumbers.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelAlgorithm.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelDerivativesOnly.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelAlgorithm.rst
+.. include:: snippets/FeaturePropConvergenceOnNumbers.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropDerivativeNeeded.rst
+.. include:: snippets/FeaturePropConvergenceOnBoth.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropDerivativeFree.rst
+.. include:: snippets/FeaturePropConvergenceOnNumbers.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/FeaturePropParallelAlgorithm.rst
+.. include:: snippets/FeaturePropConvergenceOnStatic.rst
+
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
--- /dev/null
+.. index:: single: Convergence sur critère(s) de résidu ou de nombre
+
+- Les méthodes proposées par cet algorithme **atteignent leur convergence sur
+ un ou plusieurs critères de résidu ou de nombre**. En pratique, il peut y
+ avoir plusieurs critères de convergence actifs simultanément.
+
+ Le résidu peut être une mesure standard basée sur un écart ("*écart
+  calculs-mesures*" par exemple), ou une valeur remarquable liée à
+  l'algorithme ("*nullité d'un gradient*" par exemple).
+
+ Le nombre est fréquemment un élément remarquable lié à l'algorithme, comme un
+ nombre d'itérations ou un nombre d'évaluations, mais cela peut aussi être par
+ exemple un nombre de générations pour un algorithme évolutionnaire.
+
+ Il convient de régler soigneusement les seuils de convergence, pour limiter
+ le coût calcul global de l'algorithme, ou pour assurer une adaptation de la
+ convergence au cas physique traité.
--- /dev/null
+.. index:: single: Convergence sur critère(s) de nombre
+
+- Les méthodes proposées par cet algorithme **atteignent leur convergence sur
+ un ou plusieurs critères de nombre**. En pratique, il peut y avoir plusieurs
+ critères de convergence actifs simultanément.
+
+ Le nombre est fréquemment un élément remarquable lié à l'algorithme, comme un
+ nombre d'itérations ou un nombre d'évaluations, mais cela peut aussi être,
+ par exemple, un nombre de générations pour un algorithme évolutionnaire.
+
+ Il convient de régler soigneusement les seuils de convergence, pour limiter
+ le coût calcul global de l'algorithme, ou pour assurer une adaptation de la
+ convergence au cas physique traité.
--- /dev/null
+.. index:: single: Convergence sur critère(s) de résidu
+
+- Les méthodes proposées par cet algorithme **atteignent leur convergence sur
+ un ou plusieurs critères de résidu**. En pratique, il peut y avoir plusieurs
+ critères de convergence actifs simultanément.
+
+ Le résidu peut être une mesure standard basée sur un écart ("*écart
+  calculs-mesures*" par exemple), ou être une valeur remarquable liée à
+ l'algorithme ("*nullité d'un gradient*" par exemple).
+
+ Il convient de régler soigneusement les seuils de convergence, pour limiter
+ le coût calcul global de l'algorithme, ou pour assurer une adaptation de la
+ convergence au cas physique traité.
--- /dev/null
+.. index:: single: Convergence sur critère(s) statique(s)
+
+- Les méthodes proposées par cet algorithme **atteignent leur convergence sur
+ un ou plusieurs critères statiques, fixés par des propriétés algorithmiques
+ particulières**. En pratique, il peut y avoir plusieurs critères de
+ convergence actifs simultanément.
+
+ La propriété algorithmique la plus courante est celle des calculs directs,
+ qui évaluent la solution à convergence sans itération contrôlable. Il n'y a
+ aucun seuil de convergence à régler dans ce cas.
**recherche globale du minimum**, permettant en théorie d'atteindre un état
globalement optimal sur le domaine de recherche. Cette optimalité globale est
néanmoins obtenue "*à convergence*", ce qui signifie en temps long ou infini
- lors d'une optimisation itérative *à valeurs réelles* (par opposition *à
- valeurs entières*).
+ lors d'une optimisation itérative "*à valeurs réelles*" (par opposition à "*à
+ valeurs entières*").
interne**, et peuvent donc profiter de ressources informatiques de
répartition de calculs. L'interaction potentielle, entre le parallélisme
interne des méthodes, et le parallélisme éventuellement présent dans les
- opérateurs d'observation ou d'évolution de l'utilisateur, doit donc être
- soigneusement réglée.
+ opérateurs d'observation ou d'évolution intégrant les codes de l'utilisateur,
+ doit donc être soigneusement réglée.
interne, mais utilisent la dérivation numérique d'opérateur(s) qui est, elle,
parallélisable**. L'interaction potentielle, entre le parallélisme de la
dérivation numérique, et le parallélisme éventuellement présent dans les
- opérateurs d'observation ou d'évolution de l'utilisateur, doit donc être
- soigneusement réglée.
+ opérateurs d'observation ou d'évolution intégrant les codes de l'utilisateur,
+ doit donc être soigneusement réglée.
.. index:: single: Parallélisme algorithmique absent
- Les méthodes proposées par cet algorithme **ne présentent pas de parallélisme
- interne**, et ne peuvent donc profiter de ressources informatiques de
- répartition de calculs. Les méthodes sont séquentielles, et un usage éventuel
- des ressources du parallélisme est donc réservé aux opérateurs d'observation
- ou d'évolution de l'utilisateur.
+ interne ni de dérivation numérique d'opérateur(s)**, et ne peuvent donc
+ profiter de ressources informatiques de répartition de calculs. Les méthodes
+ sont séquentielles, et un usage éventuel des ressources du parallélisme est
+ donc réservé aux opérateurs d'observation ou d'évolution, donc aux codes de
+ l'utilisateur.
-Quelques propriétés notables de l'algorithme implémenté
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++
+Quelques propriétés notables des méthodes implémentées
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
Pour compléter la description on synthétise ici quelques propriétés notables,
-de l'algorithme ou de son implémentation. Ces propriétés peuvent avoir une
-influence sur la manière de l'utiliser ou sur ses performances de calcul. Pour
-de plus amples renseignements, on se reportera aux références plus complètes
-indiquées à la fin du descriptif de cet algorithme.
+des méthodes de l'algorithme ou de leurs implémentations. Ces propriétés
+peuvent avoir une influence sur la manière de les utiliser ou sur leurs
+performances de calcul. Pour de plus amples renseignements, on se reportera aux
+références plus complètes indiquées à la fin du descriptif de cet algorithme.
"NonLocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnBoth",
),
)
"NonLocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnBoth",
),
)
"LocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnStatic",
),
)
"NonLocalOptimization",
"DerivativeFree",
"ParallelFree",
+ "ConvergenceOnBoth",
),
)
features=(
"NonLocalOptimization",
"DerivativeFree",
+ "ConvergenceOnNumbers",
),
)
"LocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnStatic",
),
)
"LocalOptimization",
"DerivativeFree",
"ParallelAlgorithm",
+ "ConvergenceOnStatic",
),
)
"LocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnStatic",
),
)
"LocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnStatic",
),
)
"LocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnStatic",
),
)
"LocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnStatic",
),
)
"LocalOptimization",
"DerivativeNeeded",
"ParallelDerivativesOnly",
+ "ConvergenceOnBoth",
),
)
"NonLocalOptimization",
"DerivativeFree",
"ParallelAlgorithm",
+ "ConvergenceOnNumbers",
),
)
features=(
"LocalOptimization",
"DerivativeNeeded",
+ "ConvergenceOnBoth",
),
)
features=(
"NonLocalOptimization",
"DerivativeFree",
+ "ConvergenceOnNumbers",
),
)
"LocalOptimization",
"DerivativeFree",
"ParallelAlgorithm",
+ "ConvergenceOnStatic",
),
)
__msg += "\n%s%30s : %s"%(__prefix, "NLopt version", self.getNloptVersion())
__msg += "\n%s%30s : %s"%(__prefix, "MatplotLib version", self.getMatplotlibVersion())
__msg += "\n%s%30s : %s"%(__prefix, "GnuplotPy version", self.getGnuplotVersion())
- __msg += "\n%s%30s : %s"%(__prefix, "Sphinx version", self.getSphinxVersion())
+ __msg += "\n"
+ __msg += "\n%s%30s : %s"%(__prefix, "Pandas version", self.getPandasVersion())
__msg += "\n%s%30s : %s"%(__prefix, "Fmpy version", self.getFmpyVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "Sphinx version", self.getSphinxVersion())
return __msg
def getAllInformation(self, __prefix="", __title="Whole system information"):
return has_nlopt
has_nlopt = property(fget = _has_nlopt)
+ def _has_pandas(self):
+ try:
+ import pandas # noqa: F401
+ has_pandas = True
+ except ImportError:
+ has_pandas = False
+ return has_pandas
+ has_pandas = property(fget = _has_pandas)
+
def _has_sdf(self):
try:
import sdf # noqa: F401
return has_models
has_models = property(fget = _has_models)
- def _has_linkmod(self):
+ def _has_pst4mod(self):
try:
- import LinkMod # noqa: F401
- has_linkmod = True
+ import pst4mod # noqa: F401
+ has_pst4mod = True
except ImportError:
- has_linkmod = False
- return has_linkmod
- has_linkmod = property(fget = _has_linkmod)
+ has_pst4mod = False
+ return has_pst4mod
+ has_pst4mod = property(fget = _has_pst4mod)
# Versions
__version = "0.0.0"
return __version
+ def getNloptVersion(self):
+ "Retourne la version de nlopt disponible"
+ if self.has_nlopt:
+ import nlopt
+ __version = "%s.%s.%s"%(
+ nlopt.version_major(),
+ nlopt.version_minor(),
+ nlopt.version_bugfix(),
+ )
+ else:
+ __version = "0.0.0"
+ return __version
+
def getMatplotlibVersion(self):
"Retourne la version de matplotlib disponible"
if self.has_matplotlib:
__version = "0.0.0"
return __version
+ def getPandasVersion(self):
+ "Retourne la version de pandas disponible"
+ if self.has_pandas:
+ import pandas
+ __version = pandas.__version__
+ else:
+ __version = "0.0.0"
+ return __version
+
def getGnuplotVersion(self):
"Retourne la version de gnuplotpy disponible"
if self.has_gnuplot:
__version = "0.0"
return __version
- def getSphinxVersion(self):
- "Retourne la version de sphinx disponible"
- if self.has_sphinx:
- import sphinx
- __version = sphinx.__version__
- else:
- __version = "0.0.0"
- return __version
-
- def getNloptVersion(self):
- "Retourne la version de nlopt disponible"
- if self.has_nlopt:
- import nlopt
- __version = "%s.%s.%s"%(
- nlopt.version_major(),
- nlopt.version_minor(),
- nlopt.version_bugfix(),
- )
+ def getFmpyVersion(self):
+ "Retourne la version de fmpy disponible"
+ if self.has_fmpy:
+ import fmpy
+ __version = fmpy.__version__
else:
__version = "0.0.0"
return __version
__version = "0.0.0"
return __version
- def getFmpyVersion(self):
- "Retourne la version de fmpy disponible"
- if self.has_fmpy:
- import fmpy
- __version = fmpy.__version__
+ def getSphinxVersion(self):
+ "Retourne la version de sphinx disponible"
+ if self.has_sphinx:
+ import sphinx
+ __version = sphinx.__version__
else:
__version = "0.0.0"
return __version
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2024 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D