ADAO: A module for Data Assimilation and Optimization
=====================================================
-About
------
+About ADAO: A module for Data Assimilation and Optimization
+-----------------------------------------------------------
**The ADAO module provides data assimilation and optimization** features in a
Python or SALOME context (see http://www.salome-platform.org/). Briefly stated,
This mono-objective optimization algorithm is naturally written for a single
estimate, without any dynamic or iterative notion (there is no need in this
case for an incremental evolution operator, nor for an evolution error
-covariance). In ADAO, it can also be used on a succession of observations,
-placing the estimate in a recursive framework similar to a
+covariance). In the traditional framework of temporal or iterative data
+assimilation that ADAO deals with, it can also be used on a succession of
+observations, placing the estimate in a recursive framework similar to a
:ref:`section_ref_algorithm_KalmanFilter`. A standard "*3D-VAR*" estimate is
made at each observation step on the state predicted by the incremental
evolution model, knowing that the state error covariance remains the
background covariance initially provided by the user. To be explicit, unlike
Kalman-type filters, the state error covariance is not updated.
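
As a minimal sketch of this recursive use (a linear, illustrative setting in
which each analysis reduces to the classical gain formula; this is not the
ADAO user interface)::

    import numpy as np
    H = np.array([[1., 0.], [0., 1.], [1., 1.]])  # illustrative observation operator
    B = np.eye(2)                                 # background covariance, never updated
    R = 0.1 * np.eye(3)                           # observation error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain built once from the fixed B
    x = np.zeros((2, 1))                          # initial background state
    for y in [np.array([[1.], [2.], [3.2]]),      # succession of observation vectors
              np.array([[1.1], [2.1], [3.1]])]:
        x = x + K @ (y - H @ x)                   # one standard estimate per observation step
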
.. include:: snippets/Header2Algo01.rst
This algorithm realizes an estimation of the state of a system by minimization
-of a cost function :math:`J` without gradient. It is a method that does not use
-the derivatives of the cost function. It falls in the same category than the
-:ref:`section_ref_algorithm_ParticleSwarmOptimization`, the
-:ref:`section_ref_algorithm_DifferentialEvolution` or the
+without gradient of a cost function :math:`J`, using a simplex-type or
+similar approximation search method. It is a method that does not use the
+derivatives of the cost function. It falls in the same category as the
+:ref:`section_ref_algorithm_DifferentialEvolution`,
+:ref:`section_ref_algorithm_ParticleSwarmOptimization` or
:ref:`section_ref_algorithm_TabuSearch`.
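
For a simple sketch of such a gradient-free minimization outside of ADAO, a
simplex (Nelder-Mead) search from SciPy can be used on an illustrative cost
function::

    import numpy as np
    from scipy.optimize import minimize
    J = lambda x: np.sum((x - np.array([1., 2.]))**2)        # illustrative quadratic cost
    res = minimize(J, x0=np.zeros(2), method="Nelder-Mead")  # simplex search, no derivatives
    print(res.x)                                             # approximately [1., 2.]
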
This is a mono-objective optimization method allowing for global minimum search
.. include:: snippets/Header2Algo01.rst
This algorithm realizes an estimation of the state of a system by minimization
-of a cost function :math:`J` by using an evolutionary strategy of differential
-evolution. It is a method that does not use the derivatives of the cost
-function. It falls in the same category than the
-:ref:`section_ref_algorithm_DerivativeFreeOptimization`, the
-:ref:`section_ref_algorithm_ParticleSwarmOptimization` or the
+without gradient of a cost function :math:`J`, using an evolutionary strategy
+of differential evolution. It is a method that does not use the derivatives of
+the cost function. It falls in the same category as the
+:ref:`section_ref_algorithm_DerivativeFreeOptimization`,
+:ref:`section_ref_algorithm_ParticleSwarmOptimization` or
:ref:`section_ref_algorithm_TabuSearch`.
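
For a simple sketch of this kind of search outside of ADAO, SciPy provides a
differential evolution solver (illustrative cost function and bounds)::

    import numpy as np
    from scipy.optimize import differential_evolution
    J = lambda x: np.sum((x - 0.5)**2) + np.sum(np.sin(5. * x))      # illustrative multimodal cost
    res = differential_evolution(J, bounds=[(-2., 2.)] * 2, seed=1)  # evolutionary global search
    print(res.x, res.fun)
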
This is an optimization method allowing for global minimum search of a general
the options requested by the user. You can refer to the
:ref:`section_ref_sampling_requirements` for an illustration of sampling.
Beware of the size of the hypercube (and thus of the number of computations)
-that can be reached, it can grow quickly to be quite large.
+that can be reached; it can quickly grow to be quite large. The memory
+required is then the product of the size of an individual state
+:math:`\mathbf{y}` and the size of the hypercube.
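+
+As an order of magnitude, a state :math:`\mathbf{y}` of one million float64
+values sampled on a :math:`20\times20\times20` hypercube already requires
+tens of GiB::
+
+    state_bytes = 10**6 * 8                        # one state of 1e6 float64 values
+    hypercube_size = 20 * 20 * 20                  # number of sampled points
+    print(state_bytes * hypercube_size / 2.**30)   # about 60 GiB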
.. _mop_determination:
.. image:: images/mop_determination.png
.. include:: snippets/NameOfLocations.rst
+.. include:: snippets/ReduceMemoryUse.rst
+
.. include:: snippets/SampleAsExplicitHyperCube.rst
.. include:: snippets/SampleAsIndependantRandomVariables.rst
the simulation make the evaluation of the gradient of the functional by
numerical approximation complicated or invalid. But it is also necessary that
the calculation of the function to be simulated is not too costly to avoid a
-prohibitive optimization time.
+prohibitive optimization duration.
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/Header2Algo01.rst
This algorithm realizes an estimation of the state of a system by minimization
-of a cost function :math:`J` without gradient. It is a method that does not use
-the derivatives of the cost function. It falls in the same category than the
-:ref:`section_ref_algorithm_DerivativeFreeOptimization`, the
-:ref:`section_ref_algorithm_ParticleSwarmOptimization` or the
-:ref:`section_ref_algorithm_DifferentialEvolution`.
+without gradient of a cost function :math:`J`, using a Tabu list search method.
+It is a method that does not use the derivatives of the cost function. It falls
+in the same category as the
+:ref:`section_ref_algorithm_DerivativeFreeOptimization`,
+:ref:`section_ref_algorithm_DifferentialEvolution` or
+:ref:`section_ref_algorithm_ParticleSwarmOptimization`.
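+
+As a schematic illustration of the Tabu principle (an illustrative sketch,
+not the ADAO implementation), a search over a discrete neighborhood keeps a
+short-term memory of visited states and moves to the best non-forbidden
+neighbor, even if it is worse::
+
+    import numpy as np
+    J = lambda x: float(np.sum(x**2))                 # illustrative cost function
+    x = np.array([3., -2.]); best = x.copy()
+    tabu = []                                         # short-term memory of visited states
+    steps = [np.array([.5, 0.]), np.array([-.5, 0.]),
+             np.array([0., .5]), np.array([0., -.5])]
+    for _ in range(100):
+        moves = [x + s for s in steps
+                 if not any(np.allclose(x + s, t) for t in tabu)]
+        x = min(moves, key=J)                         # best non-tabu neighbor, even if worse
+        tabu = (tabu + [x])[-5:]                      # keep the 5 most recent states
+        if J(x) < J(best):
+            best = x.copy()
+    print(best)                                       # reaches the global minimum [0., 0.]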
This is a mono-objective optimization method allowing for global minimum search
of a general error function :math:`J` of type :math:`L^1`, :math:`L^2` or
be ordered in lines to be acquired record row by record row
("ColMajor=True").
- Without information or with a void list, all the values of all the
- variables are used, but one can also select only the ones of the variables
- that are indicated in the name list "ColNames". the variable names are
- always as header of columns.
+    Without information or with a void list for the variable names, all the
+    values of all the variables are used, but one can also select only those
+    of the variables that are indicated in the name list "ColNames". The
+    variable names always appear as the column headers.
Example of CSV file for "*Observation*" variable in "*DataFile*" ::
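
    # Illustrative content with assumed values: the variable names are given
    # as column headers, and values are then acquired row by row
    Alpha, Observation, Gamma
    0.1234, 5678.9, 0.001
    0.5678, 9012.3, 0.002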
.. warning::
It is recommended, before using them as an input of this type, to carefully
- check text or binary files . Various checks are carried out on loading, but
+ check text or binary files. Various checks are carried out on loading, but
the variety of potential errors is great. In practice, by respecting the
- requirements for naming variables and comments, text files from programs or
- spreadsheets are (most of the time) compatible.
+ requirements for naming variables and for comments, text files from
+ classical programs or spreadsheets are (most of the time) compatible.
:widths: 20, 10, 10
Python, 3.6.5, 3.11.7
- Numpy, 1.14.3, 1.26.2
- Scipy, 0.19.1, 1.11.4
- MatplotLib, 2.2.2, 3.8.2
+ Numpy, 1.14.3, 1.26.4
+ Scipy, 0.19.1, 1.12.0
+ MatplotLib, 2.2.2, 3.8.3
GnuplotPy, 1.8, 1.8
NLopt, 2.4.2, 2.7.1
--- /dev/null
+.. index:: single: ReduceMemoryUse
+
+ReduceMemoryUse
+ *Boolean value*. The variable leads to the activation, or not, of the memory
+ footprint reduction mode at runtime, at the cost of a potential increase in
+  calculation time. Results may differ beyond a certain precision (1.e-12 to
+  1.e-14), usually close to machine precision (1.e-16). The default is "False",
+ the choices are "True" or "False".
+
+ Example:
+ ``{"ReduceMemoryUse":False}``
This mono-objective optimization algorithm is naturally written for a single
estimate, without any dynamic or iterative notion (there is thus no need in
this case for an incremental evolution operator, nor for an evolution error
-covariance). In ADAO, it can also be used on a succession of observations,
-placing the estimate in a recursive framework similar to a
-:ref:`section_ref_algorithm_KalmanFilter`. A standard "*3D-VAR*" estimate is
-performed at each observation step on the state predicted by the incremental
-evolution model, knowing that the state error covariance remains the background
-covariance initially provided by the user. To be explicit, unlike Kalman-type
-filters, the state error covariance is not updated.
+covariance). In the traditional framework of temporal or iterative data
+assimilation that ADAO deals with, it can also be used on a succession of
+observations, placing the estimate in a recursive framework similar to a
+:ref:`section_ref_algorithm_KalmanFilter`. A standard "*3D-VAR*" estimate is
+performed at each observation step on the state predicted by the incremental
+evolution model, knowing that the state error covariance remains the background
+covariance initially provided by the user. To be explicit, unlike Kalman-type
+filters, the state error covariance is not updated.
An extension of the 3DVAR, coupling in parallel a 3DVAR method, for the
estimation of a single best state, with an ensemble Kalman filter
.. ------------------------------------ ..
.. include:: snippets/Header2Algo01.rst
-This algorithm performs an estimation of the state of a system by minimization
-of a cost functional :math:`J` without gradient. It is a method that does not
-use the derivatives of the cost functional. It falls, for
+This algorithm performs an estimation of the state of a system by minimization
+without gradient of a cost functional :math:`J`, using a simplex-type or
+similar approximation search method. It is a method that does not use the
+derivatives of the cost functional. It falls, for
example, in the same category as the
:ref:`section_ref_algorithm_ParticleSwarmOptimization`, the
:ref:`section_ref_algorithm_DifferentialEvolution` or
.. include:: snippets/Header2Algo01.rst
This algorithm performs an estimation of the state of a system by minimization
-of a cost functional :math:`J` by using an evolutionary method of differential
-evolution. It is a method that does not use the derivatives of the cost
-functional. It falls in the same category as the
+without gradient of a cost functional :math:`J`, using an evolutionary search
+method of differential evolution type. It is a method that does not use the
+derivatives of the cost functional. It falls in the same category as the
:ref:`section_ref_algorithm_DerivativeFreeOptimization`, the
:ref:`section_ref_algorithm_ParticleSwarmOptimization` or the
:ref:`section_ref_algorithm_TabuSearch`.
refer to the :ref:`section_ref_sampling_requirements` for an illustration of
sampling. Beware of the size of the hypercube (and thus of the number of
computations) that can be reached; it can quickly become
-large.
+large. The memory required is then the product of the size of an individual
+state :math:`\mathbf{y}` and the size of the hypercube.
.. _mop_determination:
.. image:: images/mop_determination.png
.. include:: snippets/NameOfLocations.rst
+.. include:: snippets/ReduceMemoryUse.rst
+
.. include:: snippets/SampleAsExplicitHyperCube.rst
.. include:: snippets/SampleAsIndependantRandomVariables.rst
.. include:: snippets/Header2Algo01.rst
This algorithm performs an estimation of the state of a system by minimization
-of a cost functional :math:`J` by using an evolutionary particle swarm method.
-It is a method that does not use the derivatives of the cost functional. It is
-based on the evolution of a population (called a "swarm") of states (each
-individual being called a "particle" or an insect).
-It falls in the same category as the
+without gradient of a cost functional :math:`J`, using an evolutionary
+particle swarm method. It is a method that does not use the derivatives of the
+cost functional. It falls in the same category as the
:ref:`section_ref_algorithm_DerivativeFreeOptimization`, the
:ref:`section_ref_algorithm_DifferentialEvolution` or the
:ref:`section_ref_algorithm_TabuSearch`.
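
As a schematic sketch of a canonical particle update (illustrative
coefficients and cost function, not the ADAO implementation)::

    import numpy as np
    rng = np.random.default_rng(1)
    J = lambda x: float(np.sum(x**2))          # illustrative cost function
    X = rng.uniform(-5., 5., (20, 2))          # swarm of 20 particles (states)
    V = np.zeros_like(X)                       # particle velocities
    P = X.copy()                               # best position seen by each particle
    g = P[np.argmin([J(p) for p in P])]        # best position seen by the swarm
    for _ in range(50):
        r1, r2 = rng.random((2, 20, 2))
        V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)  # inertia + cognitive + social
        X = X + V
        better = np.array([J(x) < J(p) for x, p in zip(X, P)])
        P[better] = X[better]
        g = P[np.argmin([J(p) for p in P])]
    print(g)                                   # close to the global minimum [0., 0.]
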
error functional by default is that of the augmented weighted least squares,
classically used in data assimilation.
-There are various variants of this algorithm. The following stable and robust
-formulations are proposed here:
+It is based on the evolution of a population (called a "swarm") of states
+(each individual being called a "particle" or an "insect"). There are various
+variants of this algorithm. The following stable and robust formulations are
+proposed here:
.. index::
pair: Variant ; CanonicalPSO
interesting when the dimension of the state space is large, or when the
non-linearities of the simulation make the evaluation of the gradient of the
functional by numerical approximation complicated or invalid. But it is also
-necessary that the computation of the function to be simulated is not too
-costly, to avoid a prohibitive optimization time.
+necessary that the computation of the function to be simulated is not too
+costly, to avoid a prohibitive optimization duration.
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. ------------------------------------ ..
.. include:: snippets/Header2Algo01.rst
-This algorithm performs a state estimation by minimization of a cost
-functional :math:`J` without gradient. It is a method that does not use the
-derivatives of the cost functional. It falls, for example, in the same
-category as the :ref:`section_ref_algorithm_DerivativeFreeOptimization`, the
+This algorithm performs an estimation of the state of a system by minimization
+without gradient of a cost functional :math:`J`, using a Tabu list search
+method. It is a method that does not use the derivatives of the cost
+functional. It falls, for example, in the same category as the
+:ref:`section_ref_algorithm_DerivativeFreeOptimization`, the
:ref:`section_ref_algorithm_ParticleSwarmOptimization` or the
:ref:`section_ref_algorithm_DifferentialEvolution`.
("ColMajor=True").
    Without specification or with a void list for the variable names, one
-    uses the values all the variables, but one can also select only those of
-    the variables indicated in the name list provided in "ColNames". The
-    variable names are always the column headers.
+    uses the values of all the variables, but one can also select only those
+    of the variables indicated in the name list provided in "ColNames". The
+    variable names always appear as the column headers.
Example of CSV file for the "*Observation*" variable in "*DataFile*" ::
:widths: 20, 10, 10
Python, 3.6.5, 3.11.7
- Numpy, 1.14.3, 1.26.2
- Scipy, 0.19.1, 1.11.4
- MatplotLib, 2.2.2, 3.8.2
+ Numpy, 1.14.3, 1.26.4
+ Scipy, 0.19.1, 1.12.0
+ MatplotLib, 2.2.2, 3.8.3
GnuplotPy, 1.8, 1.8
NLopt, 2.4.2, 2.7.1
--- /dev/null
+.. index:: single: ReduceMemoryUse
+
+ReduceMemoryUse
+  *Boolean value*. The variable leads to the activation, or not, of the memory
+  footprint reduction mode at runtime, at the cost of a potential increase in
+  calculation time. Results may differ beyond a certain precision (1.e-12 to
+  1.e-14), usually close to machine precision (1.e-16). The default is "False",
+  the choices are "True" or "False".
+
+  Example:
+  ``{"ReduceMemoryUse":False}``
SetDebug
  *Boolean value*. The variable leads to the activation, or not, of the
  debugging mode during the evaluation of the function or the operator. The
-  default value is "True", the choices are "True" or "False".
+  default value is "False", the choices are "True" or "False".
  Example:
  ``{"SetDebug":False}``
"""
__author__ = "Jean-Philippe ARGAUD"
-import math, numpy, scipy
+import math, numpy, scipy, copy
+from daCore.PlatformInfo import vfloat
from daCore.NumericObjects import ApplyBounds, ForceNumericBounds
-from daCore.PlatformInfo import PlatformInfo, vfloat
-mpr = PlatformInfo().MachinePrecision()
-mfp = PlatformInfo().MaximumPrecision()
# ==============================================================================
def c2ukf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q):
Cm = None
#
Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
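# Build the 2n+1 sigma points: the current state, plus symmetric deviations
# of size Gamma along the columns of the matrix square root of Pn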
- Xnp = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
+ Xnmu = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
nbSpts = 2*Xn.size+1
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
for point in range(nbSpts):
- Xnp[:,point] = ApplyBounds( Xnp[:,point], selfA._parameters["Bounds"] )
+ Xnmu[:,point] = ApplyBounds( Xnmu[:,point], selfA._parameters["Bounds"] )
#
- XEtnnp = []
+ XEnnmu = []
for point in range(nbSpts):
if selfA._parameters["EstimationOf"] == "State":
Mm = EM["Direct"].appliedControledFormTo
- XEtnnpi = numpy.asarray( Mm( (Xnp[:,point], Un) ) ).reshape((-1,1))
+ XEnnmui = numpy.asarray( Mm( (Xnmu[:,point], Un) ) ).reshape((-1,1))
if Cm is not None and Un is not None: # Caution: if Cm is also in M, it is a duplicate!
Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
- XEtnnpi = XEtnnpi + Cm @ Un
+ XEnnmui = XEnnmui + Cm @ Un
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
- XEtnnpi = ApplyBounds( XEtnnpi, selfA._parameters["Bounds"] )
+ XEnnmui = ApplyBounds( XEnnmui, selfA._parameters["Bounds"] )
elif selfA._parameters["EstimationOf"] == "Parameters":
# --- > By principle, M = Id, Q = 0
- XEtnnpi = Xnp[:,point]
- XEtnnp.append( numpy.ravel(XEtnnpi).reshape((-1,1)) )
- XEtnnp = numpy.concatenate( XEtnnp, axis=1 )
+ XEnnmui = Xnmu[:,point]
+ XEnnmu.append( numpy.ravel(XEnnmui).reshape((-1,1)) )
+ XEnnmu = numpy.concatenate( XEnnmu, axis=1 )
#
- Xncm = ( XEtnnp * Wm ).sum(axis=1)
+ Xhmn = ( XEnnmu * Wm ).sum(axis=1)
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
- Xncm = ApplyBounds( Xncm, selfA._parameters["Bounds"] )
+ Xhmn = ApplyBounds( Xhmn, selfA._parameters["Bounds"] )
#
- if selfA._parameters["EstimationOf"] == "State": Pnm = Q
- elif selfA._parameters["EstimationOf"] == "Parameters": Pnm = 0.
+ if selfA._parameters["EstimationOf"] == "State": Pmn = copy.copy(Q)
+ elif selfA._parameters["EstimationOf"] == "Parameters": Pmn = 0.
for point in range(nbSpts):
- Pnm += Wc[i] * ((XEtnnp[:,point]-Xncm).reshape((-1,1)) * (XEtnnp[:,point]-Xncm))
+ dXEnnmuXhmn = XEnnmu[:,point].flat-Xhmn
+        Pmn += Wc[point] * numpy.outer(dXEnnmuXhmn, dXEnnmuXhmn)
#
if selfA._parameters["EstimationOf"] == "Parameters" and selfA._parameters["Bounds"] is not None:
- Pnmdemi = selfA._parameters["Reconditioner"] * numpy.real(scipy.linalg.sqrtm(Pnm))
+ Pmndemi = selfA._parameters["Reconditioner"] * numpy.real(scipy.linalg.sqrtm(Pmn))
else:
- Pnmdemi = numpy.real(scipy.linalg.sqrtm(Pnm))
+ Pmndemi = numpy.real(scipy.linalg.sqrtm(Pmn))
#
- Xnnp = numpy.hstack([Xncm.reshape((-1,1)), Xncm.reshape((-1,1))+Gamma*Pnmdemi, Xncm.reshape((-1,1))-Gamma*Pnmdemi])
+ Xnnmu = numpy.hstack([Xhmn.reshape((-1,1)), Xhmn.reshape((-1,1))+Gamma*Pmndemi, Xhmn.reshape((-1,1))-Gamma*Pmndemi])
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
for point in range(nbSpts):
- Xnnp[:,point] = ApplyBounds( Xnnp[:,point], selfA._parameters["Bounds"] )
+ Xnnmu[:,point] = ApplyBounds( Xnnmu[:,point], selfA._parameters["Bounds"] )
#
Hm = HO["Direct"].appliedControledFormTo
- Ynnp = []
+ Ynnmu = []
for point in range(nbSpts):
if selfA._parameters["EstimationOf"] == "State":
- Ynnpi = Hm( (Xnnp[:,point], None) )
+ Ynnmui = Hm( (Xnnmu[:,point], None) )
elif selfA._parameters["EstimationOf"] == "Parameters":
- Ynnpi = Hm( (Xnnp[:,point], Un) )
- Ynnp.append( numpy.ravel(Ynnpi).reshape((-1,1)) )
- Ynnp = numpy.concatenate( Ynnp, axis=1 )
+ Ynnmui = Hm( (Xnnmu[:,point], Un) )
+ Ynnmu.append( numpy.ravel(Ynnmui).reshape((__p,1)) )
+ Ynnmu = numpy.concatenate( Ynnmu, axis=1 )
#
- Yncm = ( Ynnp * Wm ).sum(axis=1)
+ Yhmn = ( Ynnmu * Wm ).sum(axis=1)
#
- Pyyn = R
+ Pyyn = copy.copy(R)
Pxyn = 0.
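# Accumulate the innovation covariance Pyyn (starting from R) and the
# state/observation cross covariance Pxyn over the sigma points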
for point in range(nbSpts):
- Pyyn += Wc[i] * ((Ynnp[:,point]-Yncm).reshape((-1,1)) * (Ynnp[:,point]-Yncm))
- Pxyn += Wc[i] * ((Xnnp[:,point]-Xncm).reshape((-1,1)) * (Ynnp[:,point]-Yncm))
+ dYnnmuYhmn = Ynnmu[:,point].flat-Yhmn
+ dXnnmuXhmn = Xnnmu[:,point].flat-Xhmn
+            Pyyn += Wc[point] * numpy.outer(dYnnmuYhmn, dYnnmuYhmn)
+            Pxyn += Wc[point] * numpy.outer(dXnnmuXhmn, dYnnmuYhmn)
#
if hasattr(Y,"store"):
Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
else:
Ynpu = numpy.ravel( Y ).reshape((__p,1))
- _Innovation = Ynpu - Yncm.reshape((-1,1))
+ _Innovation = Ynpu - Yhmn.reshape((-1,1))
if selfA._parameters["EstimationOf"] == "Parameters":
if Cm is not None and Un is not None: # Caution: if Cm is also in H, it is a duplicate!
_Innovation = _Innovation - Cm @ Un
#
- Kn = Pxyn * Pyyn.I
- Xn = Xncm.reshape((-1,1)) + Kn * _Innovation
- Pn = Pnm - Kn * Pyyn * Kn.T
+ Kn = Pxyn @ Pyyn.I
+ Xn = Xhmn.reshape((-1,1)) + Kn @ _Innovation
+ Pn = Pmn - Kn @ (Pyyn @ Kn.T)
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
Xn = ApplyBounds( Xn, selfA._parameters["Bounds"] )
or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
- selfA.StoredVariables["ForecastState"].store( Xncm )
+ selfA.StoredVariables["ForecastState"].store( Xhmn )
if selfA._toStore("ForecastCovariance"):
- selfA.StoredVariables["ForecastCovariance"].store( Pnm )
+ selfA.StoredVariables["ForecastCovariance"].store( Pmn )
if selfA._toStore("BMA"):
- selfA.StoredVariables["BMA"].store( Xncm - Xa )
+ selfA.StoredVariables["BMA"].store( Xhmn - Xa )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
if selfA._toStore("SimulatedObservationAtCurrentState") \
or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yncm )
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yhmn )
# ---> others
if selfA._parameters["StoreInternalVariables"] \
or selfA._toStore("CostFunctionJ") \
import numpy, scipy, logging
import daCore.Persistence
from daCore.NumericObjects import FindIndexesFromNames
+from daCore.NumericObjects import InterpolationErrorByColumn
+from daCore.NumericObjects import SingularValuesEstimation
from daCore.PlatformInfo import vt
# ==============================================================================
#
# Initialisations
# ---------------
+ if numpy.array(EOS).size == 0:
+        raise ValueError("EnsembleOfSnapshots must not be void, but an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
if isinstance(EOS, (numpy.ndarray, numpy.matrix)):
__EOS = numpy.asarray(EOS)
elif isinstance(EOS, (list, tuple, daCore.Persistence.Persistence)):
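# each snapshot is flattened so that every snapshot becomes one column of __EOS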
__EOS = numpy.stack([numpy.ravel(_sn) for _sn in EOS], axis=1)
- # __EOS = numpy.asarray(EOS).T
else:
raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
__dimS, __nbmS = __EOS.shape
else:
selfA._parameters["EpsilonEIM"] = 1.e-2
#
- # Ne pas utiliser : scipy.linalg.svd
- __vs = scipy.linalg.svdvals( __EOS )
+ __sv, __svsq, __tisv, __qisv = SingularValuesEstimation( __EOS )
if vt(scipy.version.version) < vt("1.1.0"):
__rhoM = scipy.linalg.orth( __EOS )
- __rhoM = numpy.compress(__vs > selfA._parameters["EpsilonEIM"]*max(__vs), __rhoM, axis=1)
+ __rhoM = numpy.compress(__sv > selfA._parameters["EpsilonEIM"]*max(__sv), __rhoM, axis=1)
else:
__rhoM = scipy.linalg.orth( __EOS, selfA._parameters["EpsilonEIM"] )
__lVs, __svdM = __rhoM.shape
assert __lVs == __dimS, "Difference between lVs and dim(EOS)"
- __vs2 = __vs**2
- __qivs = 1. - __vs2[:__svdM].cumsum()/__vs2.sum()
__maxM = min(__maxM,__svdM)
#
if __LcCsts and len(__IncludedMagicPoints) > 0:
__errors = []
#
__M = 1 # Because the first one is already built
- __errors.append(__qivs[0])
+ if selfA._toStore("Residus"):
+ __eM, _ = InterpolationErrorByColumn(
+ __EOS, __Q, __I, __M,
+ __ErrorNorm = selfA._parameters["ErrorNorm"],
+ __LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints)
+ __errors.append(__eM)
#
# Loop
# ------
))
__Q = numpy.column_stack((__Q, __rhoM[:,__M]))
#
- __errors.append(__qivs[__M])
__I.append(__iM)
__mu.append(None) # Convention
+ if selfA._toStore("Residus"):
+ __eM, _ = InterpolationErrorByColumn(
+ __EOS, __Q, __I, __M+1,
+ __ErrorNorm = selfA._parameters["ErrorNorm"],
+ __LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints)
+ __errors.append(__eM)
#
__M = __M + 1
#
#--------------------------
- if __errors[-1] < selfA._parameters["EpsilonEIM"]:
+ if len(__errors)>0 and __errors[-1] < selfA._parameters["EpsilonEIM"]:
logging.debug("%s %s (%.1e)"%(selfA._name,"The convergence is obtained when reaching the required EIM tolerance",selfA._parameters["EpsilonEIM"]))
if __M >= __maxM:
logging.debug("%s %s (%i)"%(selfA._name,"The convergence is obtained when reaching the maximum number of RB dimension",__maxM))
if selfA._toStore("ExcludedPoints"):
selfA.StoredVariables["ExcludedPoints"].store( __ExcludedMagicPoints )
if selfA._toStore("SingularValues"):
- selfA.StoredVariables["SingularValues"].store( __vs )
+ selfA.StoredVariables["SingularValues"].store( __sv )
#
return __mu, __I, __Q, __errors
import numpy, logging
import daCore.Persistence
from daCore.NumericObjects import FindIndexesFromNames
+from daCore.NumericObjects import InterpolationErrorByColumn
# ==============================================================================
def EIM_offline(selfA, EOS = None, Verbose = False):
#
# Initialisations
# ---------------
+ if numpy.array(EOS).size == 0:
+        raise ValueError("EnsembleOfSnapshots must not be void, but an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
if isinstance(EOS, (numpy.ndarray, numpy.matrix)):
__EOS = numpy.asarray(EOS)
elif isinstance(EOS, (list, tuple, daCore.Persistence.Persistence)):
__EOS = numpy.stack([numpy.ravel(_sn) for _sn in EOS], axis=1)
- # __EOS = numpy.asarray(EOS).T
else:
raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
__dimS, __nbmS = __EOS.shape
logging.debug("%s Using a collection of %i snapshots of individual size of %i"%(selfA._name,__nbmS,__dimS))
#
- if selfA._parameters["ErrorNorm"] == "L2":
- MaxNormByColumn = MaxL2NormByColumn
- else:
- MaxNormByColumn = MaxLinfNormByColumn
- #
if selfA._parameters["Variant"] in ["EIM", "PositioningByEIM"]:
__LcCsts = False
else:
selfA._parameters["EpsilonEIM"] = selfA._parameters["ErrorNormTolerance"]
else:
selfA._parameters["EpsilonEIM"] = 1.e-2
+ if "ReduceMemoryUse" in selfA._parameters:
+ rmu = selfA._parameters["ReduceMemoryUse"]
+ else:
+ rmu = False
#
__mu = []
__I = []
__M = 0
__rhoM = numpy.empty(__dimS)
#
- __eM, __muM = MaxNormByColumn(__EOS, __LcCsts, __IncludedMagicPoints)
- __residuM = __EOS[:,__muM]
+ __eM, __muM, __residuM = InterpolationErrorByColumn(
+ __Differences = __EOS, __M = __M,
+ __ErrorNorm = selfA._parameters["ErrorNorm"],
+ __LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints,
+ __CDM = True, __RMU = rmu,
+ )
__errors.append(__eM)
#
# Loop
__Q = __rhoM.reshape((-1,1))
__I.append(__iM)
#
- __restrictedQi = numpy.tril( __Q[__I,:] )
- if __M > 1:
- __Qi_inv = numpy.linalg.inv(__restrictedQi)
- else:
- __Qi_inv = 1. / __restrictedQi
- #
- __restrictedEOSi = __EOS[__I,:]
- #
- if __M > 1:
- __interpolator = numpy.dot(__Q,numpy.dot(__Qi_inv,__restrictedEOSi))
- else:
- __interpolator = numpy.outer(__Q,numpy.outer(__Qi_inv,__restrictedEOSi))
- #
- __dataForNextIter = __EOS - __interpolator
- __eM, __muM = MaxNormByColumn(__dataForNextIter, __LcCsts, __IncludedMagicPoints)
+ __eM, __muM, __residuM = InterpolationErrorByColumn(
+ __Ensemble = __EOS, __Basis = __Q, __Points = __I, __M = __M,
+ __ErrorNorm = selfA._parameters["ErrorNorm"],
+ __LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints,
+ __CDM = True, __RMU = rmu, __FTL = True,
+ )
__errors.append(__eM)
- #
- __residuM = __dataForNextIter[:,__muM]
#
#--------------------------
if __errors[-1] < selfA._parameters["EpsilonEIM"]:
#
return __gMmu
-# ==============================================================================
-def MaxL2NormByColumn(Ensemble, LcCsts = False, IncludedPoints = []):
- if LcCsts and len(IncludedPoints) > 0:
- normes = numpy.linalg.norm(
- numpy.take(Ensemble, IncludedPoints, axis=0, mode='clip'),
- axis = 0,
- )
- else:
- normes = numpy.linalg.norm( Ensemble, axis = 0)
- nmax = numpy.max(normes)
- imax = numpy.argmax(normes)
- return nmax, imax
-
-def MaxLinfNormByColumn(Ensemble, LcCsts = False, IncludedPoints = []):
- if LcCsts and len(IncludedPoints) > 0:
- normes = numpy.linalg.norm(
- numpy.take(Ensemble, IncludedPoints, axis=0, mode='clip'),
- axis = 0, ord=numpy.inf,
- )
- else:
- normes = numpy.linalg.norm( Ensemble, axis = 0, ord=numpy.inf)
- nmax = numpy.max(normes)
- imax = numpy.argmax(normes)
- return nmax, imax
-
# ==============================================================================
if __name__ == "__main__":
print('\n AUTODIAGNOSTIC\n')
import daAlgorithms.Atoms.lbfgsb19hlt as optimiseur
elif vt("1.11.0") <= vt(scipy.version.version) <= vt("1.11.99"):
import daAlgorithms.Atoms.lbfgsb111hlt as optimiseur
+ elif vt("1.12.0") <= vt(scipy.version.version) <= vt("1.12.99"):
+ import daAlgorithms.Atoms.lbfgsb112hlt as optimiseur
else:
import scipy.optimize as optimiseur
Minimum, J_optimal, Informations = optimiseur.fmin_l_bfgs_b(
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2024 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+__doc__ = """
+ Empirical Interpolation Method EIM & lcEIM with User Defined Function
+"""
+__author__ = "Jean-Philippe ARGAUD"
+
+import numpy, scipy, logging
+import daCore.Persistence
+from daCore.NumericObjects import FindIndexesFromNames
+from daCore.NumericObjects import InterpolationErrorByColumn
+from daCore.NumericObjects import SingularValuesEstimation
+
+# ==============================================================================
+def UBFEIM_offline(selfA, EOS = None, Verbose = False):
+ """
+    Construction of the basis
+ """
+ #
+ # Initialisations
+ # ---------------
+ if numpy.array(EOS).size == 0:
+        raise ValueError("EnsembleOfSnapshots must not be void, but an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+ if isinstance(EOS, (numpy.ndarray, numpy.matrix)):
+ __EOS = numpy.asarray(EOS)
+ elif isinstance(EOS, (list, tuple, daCore.Persistence.Persistence)):
+ __EOS = numpy.stack([numpy.ravel(_sn) for _sn in EOS], axis=1)
+ else:
+ raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+ __dimS, __nbmS = __EOS.shape
+ logging.debug("%s Using a collection of %i snapshots of individual size of %i"%(selfA._name,__nbmS,__dimS))
+ #
+ if numpy.array(selfA._parameters["UserBasisFunctions"]).size == 0:
+        logging.debug("%s Using the snapshots in place of user defined basis functions, since the latter are not provided"%(selfA._name))
+ UBF = __EOS
+ else:
+ UBF = selfA._parameters["UserBasisFunctions"]
+ if isinstance(UBF, (numpy.ndarray, numpy.matrix)):
+ __UBF = numpy.asarray(UBF)
+ elif isinstance(UBF, (list, tuple, daCore.Persistence.Persistence)):
+ __UBF = numpy.stack([numpy.ravel(_sn) for _sn in UBF], axis=1)
+ else:
+ raise ValueError("UserBasisFunctions has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+    assert __EOS.shape[0] == __UBF.shape[0], "Individual snapshots and user defined basis functions have to be of the same size, which is false: %i =/= %i"%(__EOS.shape[0], __UBF.shape[0])
+ __dimS, __nbmS = __UBF.shape
+ logging.debug("%s Using a collection of %i user defined basis functions of individual size of %i"%(selfA._name,__nbmS,__dimS))
+ #
+ if selfA._parameters["Variant"] in ["UBFEIM", "PositioningByUBFEIM"]:
+ __LcCsts = False
+ else:
+ __LcCsts = True
+ if __LcCsts and "ExcludeLocations" in selfA._parameters:
+ __ExcludedMagicPoints = selfA._parameters["ExcludeLocations"]
+ else:
+ __ExcludedMagicPoints = ()
+ if __LcCsts and "NameOfLocations" in selfA._parameters:
+ if isinstance(selfA._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) and len(selfA._parameters["NameOfLocations"]) == __dimS:
+ __NameOfLocations = selfA._parameters["NameOfLocations"]
+ else:
+ __NameOfLocations = ()
+ else:
+ __NameOfLocations = ()
+ if __LcCsts and len(__ExcludedMagicPoints) > 0:
+ __ExcludedMagicPoints = FindIndexesFromNames( __NameOfLocations, __ExcludedMagicPoints )
+ __ExcludedMagicPoints = numpy.ravel(numpy.asarray(__ExcludedMagicPoints, dtype=int))
+ __IncludedMagicPoints = numpy.setdiff1d(
+ numpy.arange(__UBF.shape[0]),
+ __ExcludedMagicPoints,
+ assume_unique = True,
+ )
+ else:
+ __IncludedMagicPoints = []
+ #
+ if "MaximumNumberOfLocations" in selfA._parameters and "MaximumRBSize" in selfA._parameters:
+ selfA._parameters["MaximumRBSize"] = min(selfA._parameters["MaximumNumberOfLocations"],selfA._parameters["MaximumRBSize"])
+ elif "MaximumNumberOfLocations" in selfA._parameters:
+ selfA._parameters["MaximumRBSize"] = selfA._parameters["MaximumNumberOfLocations"]
+ elif "MaximumRBSize" in selfA._parameters:
+ pass
+ else:
+ selfA._parameters["MaximumRBSize"] = __nbmS
+ __maxM = min(selfA._parameters["MaximumRBSize"], __dimS, __nbmS)
+ if "ErrorNormTolerance" in selfA._parameters:
+ selfA._parameters["EpsilonEIM"] = selfA._parameters["ErrorNormTolerance"]
+ else:
+ selfA._parameters["EpsilonEIM"] = 1.e-2
+ #
+ __rhoM = __UBF
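+    # The user defined basis functions are used directly as the candidate
+    # reduced basis vectors, instead of being derived from the snapshots as
+    # in the standard EIM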
+ #
+ if __LcCsts and len(__IncludedMagicPoints) > 0:
+ __iM = numpy.argmax( numpy.abs(
+ numpy.take(__rhoM[:,0], __IncludedMagicPoints, mode='clip')
+ ))
+ else:
+ __iM = numpy.argmax( numpy.abs(
+ __rhoM[:,0]
+ ))
+ #
+ __mu = [None,] # Convention
+ __I = [__iM,]
+ __Q = __rhoM[:,0].reshape((-1,1))
+ __errors = []
+ #
+    __M = 1 # Because the first one is already built
+ if selfA._toStore("Residus"):
+ __eM, _ = InterpolationErrorByColumn(
+ __EOS, __Q, __I, __M,
+ __ErrorNorm = selfA._parameters["ErrorNorm"],
+ __LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints)
+ __errors.append(__eM)
+ #
+    # Loop
+ # ------
+ while __M < __maxM:
+ #
+ __restrictedQi = __Q[__I,:]
+ if __M > 1:
+ __Qi_inv = numpy.linalg.inv(__restrictedQi)
+ else:
+ __Qi_inv = 1. / __restrictedQi
+ #
+ __restrictedrhoMi = __rhoM[__I,__M].reshape((-1,1))
+ #
+ if __M > 1:
+ __interpolator = numpy.dot(__Q,numpy.dot(__Qi_inv,__restrictedrhoMi))
+ else:
+ __interpolator = numpy.outer(__Q,numpy.outer(__Qi_inv,__restrictedrhoMi))
+ #
+ __residuM = __rhoM[:,__M].reshape((-1,1)) - __interpolator
+ #
+ if __LcCsts and len(__IncludedMagicPoints) > 0:
+ __iM = numpy.argmax( numpy.abs(
+ numpy.take(__residuM, __IncludedMagicPoints, mode='clip')
+ ))
+ else:
+ __iM = numpy.argmax( numpy.abs(
+ __residuM
+ ))
+ __Q = numpy.column_stack((__Q, __rhoM[:,__M]))
+ #
+ __I.append(__iM)
+ __mu.append(None) # Convention
+ if selfA._toStore("Residus"):
+ __eM, _ = InterpolationErrorByColumn(
+ __EOS, __Q, __I, __M+1,
+ __ErrorNorm = selfA._parameters["ErrorNorm"],
+ __LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints)
+ __errors.append(__eM)
+ #
+ __M = __M + 1
+ #
+ #--------------------------
+ if len(__errors)>0 and __errors[-1] < selfA._parameters["EpsilonEIM"]:
+ logging.debug("%s %s (%.1e)"%(selfA._name,"The convergence is obtained when reaching the required EIM tolerance",selfA._parameters["EpsilonEIM"]))
+ if __M >= __maxM:
+ logging.debug("%s %s (%i)"%(selfA._name,"The convergence is obtained when reaching the maximum number of RB dimension",__maxM))
+    logging.debug("%s The RB of size %i has been correctly built"%(selfA._name,__Q.shape[1]))
+ logging.debug("%s There are %i points that have been excluded from the potential optimal points"%(selfA._name,len(__ExcludedMagicPoints)))
+ if hasattr(selfA, "StoredVariables"):
+ selfA.StoredVariables["OptimalPoints"].store( __I )
+ if selfA._toStore("ReducedBasis"):
+ selfA.StoredVariables["ReducedBasis"].store( __Q )
+ if selfA._toStore("Residus"):
+ selfA.StoredVariables["Residus"].store( __errors )
+ if selfA._toStore("ExcludedPoints"):
+ selfA.StoredVariables["ExcludedPoints"].store( __ExcludedMagicPoints )
+ #
+ return __mu, __I, __Q, __errors
+
+# ==============================================================================
+# UBFEIM_online == EIM_online
+# ==============================================================================
+if __name__ == "__main__":
+ print('\n AUTODIAGNOSTIC\n')
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2024 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+__doc__ = """
+ Unscented Kalman Filter
+"""
+__author__ = "Jean-Philippe ARGAUD"
+
+import math, numpy, scipy, copy
+from daCore.PlatformInfo import vfloat
+
+# ==============================================================================
+def ecwukf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q):
+ """
+ Unscented Kalman Filter
+ """
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ selfA._parameters["StoreInternalVariables"] = True
+ #
+ L = Xb.size
+ Alpha = selfA._parameters["Alpha"]
+ Beta = selfA._parameters["Beta"]
+ if selfA._parameters["Kappa"] == 0:
+ if selfA._parameters["EstimationOf"] == "State":
+ Kappa = 0
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+ Kappa = 3 - L
+ else:
+ Kappa = selfA._parameters["Kappa"]
+ Lambda = float( Alpha**2 ) * ( L + Kappa ) - L
+ Gamma = math.sqrt( L + Lambda )
+ #
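+    # Weights of the scaled unscented transform: Wm weights the sigma points
+    # for the mean and Wc for the covariances; the central point receives a
+    # specific weight, the 2L other points share 1./(2.*(L + Lambda))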
+ Ww = []
+ Ww.append( 0. )
+ for i in range(2*L):
+ Ww.append( 1. / (2.*(L + Lambda)) )
+ #
+ Wm = numpy.array( Ww )
+ Wm[0] = Lambda / (L + Lambda)
+ Wc = numpy.array( Ww )
+ Wc[0] = Lambda / (L + Lambda) + (1. - Alpha**2 + Beta)
+ #
+    # Observation duration and sizes
+ if hasattr(Y,"stepnumber"):
+ duration = Y.stepnumber()
+ __p = numpy.cumprod(Y.shape())[-1]
+ else:
+ duration = 2
+ __p = numpy.size(Y)
+ #
+    # Precompute the inverses of B and R
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
+ BI = B.getI()
+ RI = R.getI()
+ #
+ __n = Xb.size
+ nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
+ #
+ if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ Xn = Xb
+ if hasattr(B,"asfullmatrix"):
+ Pn = B.asfullmatrix(__n)
+ else:
+ Pn = B
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["Analysis"].store( Xb )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
+ elif selfA._parameters["nextStep"]:
+ Xn = selfA._getInternalState("Xn")
+ Pn = selfA._getInternalState("Pn")
+ #
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ XaMin = Xn
+ previousJMinimum = numpy.finfo(float).max
+ #
+ for step in range(duration-1):
+ #
+ if U is not None:
+ if hasattr(U,"store") and len(U)>1:
+ Un = numpy.ravel( U[step] ).reshape((-1,1))
+ elif hasattr(U,"store") and len(U)==1:
+ Un = numpy.ravel( U[0] ).reshape((-1,1))
+ else:
+ Un = numpy.ravel( U ).reshape((-1,1))
+ else:
+ Un = None
+ #
+ if CM is not None and "Tangent" in CM and U is not None:
+ Cm = CM["Tangent"].asMatrix(Xn)
+ else:
+ Cm = None
+ #
+ Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
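+        # Build the 2L+1 sigma points: the current state Xn, plus symmetric
+        # deviations of size Gamma along the matrix square root of Pn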
+ Xnmu = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
+ nbSpts = 2*Xn.size+1
+ #
+ XEnnmu = []
+ for point in range(nbSpts):
+ if selfA._parameters["EstimationOf"] == "State":
+ Mm = EM["Direct"].appliedControledFormTo
+ XEnnmui = numpy.asarray( Mm( (Xnmu[:,point], Un) ) ).reshape((-1,1))
+                if Cm is not None and Un is not None: # Caution: if Cm is also in M, it is a duplicate!
+ Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
+ XEnnmui = XEnnmui + Cm @ Un
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+                # --- > By principle, M = Id, Q = 0
+ XEnnmui = Xnmu[:,point]
+ XEnnmu.append( numpy.ravel(XEnnmui).reshape((-1,1)) )
+ XEnnmu = numpy.concatenate( XEnnmu, axis=1 )
+ #
+ Xhmn = ( XEnnmu * Wm ).sum(axis=1)
+ #
+ if selfA._parameters["EstimationOf"] == "State": Pmn = copy.copy(Q)
+ elif selfA._parameters["EstimationOf"] == "Parameters": Pmn = 0.
+ for point in range(nbSpts):
+ dXEnnmuXhmn = XEnnmu[:,point].flat-Xhmn
+            Pmn += Wc[point] * numpy.outer(dXEnnmuXhmn, dXEnnmuXhmn)
+ #
+ Pmndemi = numpy.real(scipy.linalg.sqrtm(Pmn))
+ Xnnmu = numpy.hstack([Xhmn.reshape((-1,1)), Xhmn.reshape((-1,1))+Gamma*Pmndemi, Xhmn.reshape((-1,1))-Gamma*Pmndemi])
+ #
+ Hm = HO["Direct"].appliedControledFormTo
+ Ynnmu = []
+ for point in range(nbSpts):
+ if selfA._parameters["EstimationOf"] == "State":
+ Ynnmui = Hm( (Xnnmu[:,point], None) )
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+ Ynnmui = Hm( (Xnnmu[:,point], Un) )
+ Ynnmu.append( numpy.ravel(Ynnmui).reshape((__p,1)) )
+ Ynnmu = numpy.concatenate( Ynnmu, axis=1 )
+ #
+ Yhmn = ( Ynnmu * Wm ).sum(axis=1)
+ #
+ Pyyn = copy.copy(R)
+ Pxyn = 0.
+ for point in range(nbSpts):
+ dYnnmuYhmn = Ynnmu[:,point].flat-Yhmn
+ dXnnmuXhmn = Xnnmu[:,point].flat-Xhmn
+            Pyyn += Wc[point] * numpy.outer(dYnnmuYhmn, dYnnmuYhmn)
+            Pxyn += Wc[point] * numpy.outer(dXnnmuXhmn, dYnnmuYhmn)
+ #
+ if hasattr(Y,"store"):
+ Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ else:
+ Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ _Innovation = Ynpu - Yhmn.reshape((-1,1))
+ if selfA._parameters["EstimationOf"] == "Parameters":
+            if Cm is not None and Un is not None: # Caution: if Cm is also in H, it is a duplicate!
+ _Innovation = _Innovation - Cm @ Un
+ #
+ Kn = Pxyn @ Pyyn.I
+ Xn = Xhmn.reshape((-1,1)) + Kn @ _Innovation
+ Pn = Pmn - Kn @ (Pyyn @ Kn.T)
+ #
+        Xa = Xn # Pointers
+ #--------------------------
+ selfA._setInternalState("Xn", Xn)
+ selfA._setInternalState("Pn", Pn)
+ #--------------------------
+ #
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+        # ---> with analysis
+ selfA.StoredVariables["Analysis"].store( Xa )
+ if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( Hm((Xa, Un)) )
+ if selfA._toStore("InnovationAtCurrentAnalysis"):
+ selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
+        # ---> with current state
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xn )
+ if selfA._toStore("ForecastState"):
+ selfA.StoredVariables["ForecastState"].store( Xhmn )
+ if selfA._toStore("ForecastCovariance"):
+ selfA.StoredVariables["ForecastCovariance"].store( Pmn )
+ if selfA._toStore("BMA"):
+ selfA.StoredVariables["BMA"].store( Xhmn - Xa )
+ if selfA._toStore("InnovationAtCurrentState"):
+ selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
+ if selfA._toStore("SimulatedObservationAtCurrentState") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yhmn )
+        # ---> others
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
+ Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
+ Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
+ J = Jb + Jo
+ selfA.StoredVariables["CostFunctionJb"].store( Jb )
+ selfA.StoredVariables["CostFunctionJo"].store( Jo )
+ selfA.StoredVariables["CostFunctionJ" ].store( J )
+ #
+ if selfA._toStore("IndexOfOptimum") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
+ if selfA._toStore("IndexOfOptimum"):
+ selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
+ if selfA._toStore("CurrentOptimum"):
+ selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
+ if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ if selfA._toStore("CostFunctionJAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
+ if selfA._parameters["EstimationOf"] == "Parameters" \
+ and J < previousJMinimum:
+ previousJMinimum = J
+ XaMin = Xa
+ if selfA._toStore("APosterioriCovariance"):
+ covarianceXaMin = selfA.StoredVariables["APosterioriCovariance"][-1]
+ #
+    # Additional final storage of the optimum in parameter estimation
+ # ----------------------------------------------------------------------
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["Analysis"].store( XaMin )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( covarianceXaMin )
+ if selfA._toStore("BMA"):
+ selfA.StoredVariables["BMA"].store( numpy.ravel(Xb) - numpy.ravel(XaMin) )
+ #
+ return 0
+
+# ==============================================================================
+if __name__ == "__main__":
+ print('\n AUTODIAGNOSTIC\n')
import daAlgorithms.Atoms.lbfgsb19hlt as optimiseur
elif vt("1.11.0") <= vt(scipy.version.version) <= vt("1.11.99"):
import daAlgorithms.Atoms.lbfgsb111hlt as optimiseur
+ elif vt("1.12.0") <= vt(scipy.version.version) <= vt("1.12.99"):
+ import daAlgorithms.Atoms.lbfgsb112hlt as optimiseur
else:
import scipy.optimize as optimiseur
Minimum, J_optimal, Informations = optimiseur.fmin_l_bfgs_b(
--- /dev/null
+# Modification of version 1.12.0
+"""
+Functions
+---------
+.. autosummary::
+ :toctree: generated/
+
+ fmin_l_bfgs_b
+
+"""
+
+## License for the Python wrapper
+## ==============================
+
+## Copyright (c) 2004 David M. Cooke <cookedm@physics.mcmaster.ca>
+
+## Permission is hereby granted, free of charge, to any person obtaining a
+## copy of this software and associated documentation files (the "Software"),
+## to deal in the Software without restriction, including without limitation
+## the rights to use, copy, modify, merge, publish, distribute, sublicense,
+## and/or sell copies of the Software, and to permit persons to whom the
+## Software is furnished to do so, subject to the following conditions:
+
+## The above copyright notice and this permission notice shall be included in
+## all copies or substantial portions of the Software.
+
+## THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+## IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+## FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+## AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+## LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+## FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+## DEALINGS IN THE SOFTWARE.
+
+## Modifications by Travis Oliphant and Enthought, Inc. for inclusion in SciPy
+
+import numpy as np
+from numpy import array, asarray, float64, zeros
+from scipy.optimize import _lbfgsb
+from scipy.optimize._optimize import (MemoizeJac, OptimizeResult, _call_callback_maybe_halt,
+ _wrap_callback, _check_unknown_options,
+ _prepare_scalar_function)
+from scipy.optimize._constraints import old_bound_to_new
+
+from scipy.sparse.linalg import LinearOperator
+
+__all__ = ['fmin_l_bfgs_b', 'LbfgsInvHessProduct']
+
+
+def fmin_l_bfgs_b(func, x0, fprime=None, args=(),
+ approx_grad=0,
+ bounds=None, m=10, factr=1e7, pgtol=1e-5,
+ epsilon=1e-8,
+ iprint=-1, maxfun=15000, maxiter=15000, disp=None,
+ callback=None, maxls=20):
+ """
+ Minimize a function func using the L-BFGS-B algorithm.
+
+ Parameters
+ ----------
+ func : callable f(x,*args)
+ Function to minimize.
+ x0 : ndarray
+ Initial guess.
+ fprime : callable fprime(x,*args), optional
+ The gradient of `func`. If None, then `func` returns the function
+ value and the gradient (``f, g = func(x, *args)``), unless
+ `approx_grad` is True in which case `func` returns only ``f``.
+ args : sequence, optional
+ Arguments to pass to `func` and `fprime`.
+ approx_grad : bool, optional
+ Whether to approximate the gradient numerically (in which case
+ `func` returns only the function value).
+ bounds : list, optional
+ ``(min, max)`` pairs for each element in ``x``, defining
+ the bounds on that parameter. Use None or +-inf for one of ``min`` or
+ ``max`` when there is no bound in that direction.
+ m : int, optional
+ The maximum number of variable metric corrections
+ used to define the limited memory matrix. (The limited memory BFGS
+ method does not store the full hessian but uses this many terms in an
+ approximation to it.)
+ factr : float, optional
+ The iteration stops when
+ ``(f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps``,
+ where ``eps`` is the machine precision, which is automatically
+ generated by the code. Typical values for `factr` are: 1e12 for
+ low accuracy; 1e7 for moderate accuracy; 10.0 for extremely
+ high accuracy. See Notes for relationship to `ftol`, which is exposed
+ (instead of `factr`) by the `scipy.optimize.minimize` interface to
+ L-BFGS-B.
+ pgtol : float, optional
+ The iteration will stop when
+ ``max{|proj g_i | i = 1, ..., n} <= pgtol``
+ where ``proj g_i`` is the i-th component of the projected gradient.
+ epsilon : float, optional
+ Step size used when `approx_grad` is True, for numerically
+ calculating the gradient
+ iprint : int, optional
+ Controls the frequency of output. ``iprint < 0`` means no output;
+ ``iprint = 0`` print only one line at the last iteration;
+ ``0 < iprint < 99`` print also f and ``|proj g|`` every iprint iterations;
+ ``iprint = 99`` print details of every iteration except n-vectors;
+ ``iprint = 100`` print also the changes of active set and final x;
+ ``iprint > 100`` print details of every iteration including x and g.
+ disp : int, optional
+ If zero, then no output. If a positive number, then this over-rides
+ `iprint` (i.e., `iprint` gets the value of `disp`).
+ maxfun : int, optional
+ Maximum number of function evaluations. Note that this function
+ may violate the limit because of evaluating gradients by numerical
+ differentiation.
+ maxiter : int, optional
+ Maximum number of iterations.
+ callback : callable, optional
+ Called after each iteration, as ``callback(xk)``, where ``xk`` is the
+ current parameter vector.
+ maxls : int, optional
+ Maximum number of line search steps (per iteration). Default is 20.
+
+ Returns
+ -------
+ x : array_like
+ Estimated position of the minimum.
+ f : float
+ Value of `func` at the minimum.
+ d : dict
+ Information dictionary.
+
+ * d['warnflag'] is
+
+ - 0 if converged,
+ - 1 if too many function evaluations or too many iterations,
+ - 2 if stopped for another reason, given in d['task']
+
+ * d['grad'] is the gradient at the minimum (should be 0 ish)
+ * d['funcalls'] is the number of function calls made.
+ * d['nit'] is the number of iterations.
+
+ See also
+ --------
+ minimize: Interface to minimization algorithms for multivariate
+ functions. See the 'L-BFGS-B' `method` in particular. Note that the
+ `ftol` option is made available via that interface, while `factr` is
+ provided via this interface, where `factr` is the factor multiplying
+ the default machine floating-point precision to arrive at `ftol`:
+ ``ftol = factr * numpy.finfo(float).eps``.
+
+ Notes
+ -----
+ License of L-BFGS-B (FORTRAN code):
+
+ The version included here (in fortran code) is 3.0
+ (released April 25, 2011). It was written by Ciyou Zhu, Richard Byrd,
+ and Jorge Nocedal <nocedal@ece.nwu.edu>. It carries the following
+ condition for use:
+
+ This software is freely available, but we expect that all publications
+ describing work using this software, or all commercial products using it,
+ quote at least one of the references given below. This software is released
+ under the BSD License.
+
+ References
+ ----------
+ * R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound
+ Constrained Optimization, (1995), SIAM Journal on Scientific and
+ Statistical Computing, 16, 5, pp. 1190-1208.
+ * C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778: L-BFGS-B,
+ FORTRAN routines for large scale bound constrained optimization (1997),
+ ACM Transactions on Mathematical Software, 23, 4, pp. 550 - 560.
+ * J.L. Morales and J. Nocedal. L-BFGS-B: Remark on Algorithm 778: L-BFGS-B,
+ FORTRAN routines for large scale bound constrained optimization (2011),
+ ACM Transactions on Mathematical Software, 38, 1.
+
+ """
+ # handle fprime/approx_grad
+ if approx_grad:
+ fun = func
+ jac = None
+ elif fprime is None:
+ fun = MemoizeJac(func)
+ jac = fun.derivative
+ else:
+ fun = func
+ jac = fprime
+
+ # build options
+ if disp is None:
+ disp = iprint
+ callback = _wrap_callback(callback)
+ opts = {'disp': disp,
+ 'iprint': iprint,
+ 'maxcor': m,
+ 'ftol': factr * np.finfo(float).eps,
+ 'gtol': pgtol,
+ 'eps': epsilon,
+ 'maxfun': maxfun,
+ 'maxiter': maxiter,
+ 'callback': callback,
+ 'maxls': maxls}
+
+ res = _minimize_lbfgsb(fun, x0, args=args, jac=jac, bounds=bounds,
+ **opts)
+ d = {'grad': res['jac'],
+ 'task': res['message'],
+ 'funcalls': res['nfev'],
+ 'nit': res['nit'],
+ 'warnflag': res['status']}
+ f = res['fun']
+ x = res['x']
+
+ return x, f, d
+
+
+def _minimize_lbfgsb(fun, x0, args=(), jac=None, bounds=None,
+ disp=None, maxcor=10, ftol=2.2204460492503131e-09,
+ gtol=1e-5, eps=1e-8, maxfun=15000, maxiter=15000,
+ iprint=-1, callback=None, maxls=20,
+ finite_diff_rel_step=None, **unknown_options):
+ """
+ Minimize a scalar function of one or more variables using the L-BFGS-B
+ algorithm.
+
+ Options
+ -------
+ disp : None or int
+ If `disp is None` (the default), then the supplied version of `iprint`
+ is used. If `disp is not None`, then it overrides the supplied version
+ of `iprint` with the behaviour you outlined.
+ maxcor : int
+ The maximum number of variable metric corrections used to
+ define the limited memory matrix. (The limited memory BFGS
+ method does not store the full hessian but uses this many terms
+ in an approximation to it.)
+ ftol : float
+ The iteration stops when ``(f^k -
+ f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol``.
+ gtol : float
+ The iteration will stop when ``max{|proj g_i | i = 1, ..., n}
+ <= gtol`` where ``proj g_i`` is the i-th component of the
+ projected gradient.
+ eps : float or ndarray
+ If `jac is None` the absolute step size used for numerical
+ approximation of the jacobian via forward differences.
+ maxfun : int
+ Maximum number of function evaluations. Note that this function
+ may violate the limit because of evaluating gradients by numerical
+ differentiation.
+ maxiter : int
+ Maximum number of iterations.
+ iprint : int, optional
+ Controls the frequency of output. ``iprint < 0`` means no output;
+ ``iprint = 0`` print only one line at the last iteration;
+ ``0 < iprint < 99`` print also f and ``|proj g|`` every iprint iterations;
+ ``iprint = 99`` print details of every iteration except n-vectors;
+ ``iprint = 100`` print also the changes of active set and final x;
+ ``iprint > 100`` print details of every iteration including x and g.
+ callback : callable, optional
+ Called after each iteration, as ``callback(xk)``, where ``xk`` is the
+ current parameter vector.
+ maxls : int, optional
+ Maximum number of line search steps (per iteration). Default is 20.
+ finite_diff_rel_step : None or array_like, optional
+ If `jac in ['2-point', '3-point', 'cs']` the relative step size to
+ use for numerical approximation of the jacobian. The absolute step
+ size is computed as ``h = rel_step * sign(x) * max(1, abs(x))``,
+ possibly adjusted to fit into the bounds. For ``method='3-point'``
+ the sign of `h` is ignored. If None (default) then step is selected
+ automatically.
+
+ Notes
+ -----
+ The option `ftol` is exposed via the `scipy.optimize.minimize` interface,
+ but calling `scipy.optimize.fmin_l_bfgs_b` directly exposes `factr`. The
+ relationship between the two is ``ftol = factr * numpy.finfo(float).eps``.
+ I.e., `factr` multiplies the default machine floating-point precision to
+ arrive at `ftol`.
+
+ """
+ _check_unknown_options(unknown_options)
+ m = maxcor
+ pgtol = gtol
+ factr = ftol / np.finfo(float).eps
+
+ x0 = asarray(x0).ravel()
+ n, = x0.shape
+
+ # historically old-style bounds were/are expected by lbfgsb.
+ # That's still the case but we'll deal with new-style from here on,
+ # it's easier
+ if bounds is None:
+ pass
+ elif len(bounds) != n:
+ raise ValueError('length of x0 != length of bounds')
+ else:
+ bounds = np.array(old_bound_to_new(bounds))
+
+    # check bounds (only when bounds were actually given)
+    if bounds is not None and (bounds[0] > bounds[1]).any():
+        raise ValueError(
+            "LBFGSB - one of the lower bounds is greater than an upper bound."
+        )
+
+    # initial vector must lie within the bounds. Otherwise ScalarFunction and
+    # approx_derivative will cause problems
+    if bounds is not None:
+        x0 = np.clip(x0, bounds[0], bounds[1])
+
+ if disp is not None:
+ if disp == 0:
+ iprint = -1
+ else:
+ iprint = disp
+
+ # _prepare_scalar_function can use bounds=None to represent no bounds
+ sf = _prepare_scalar_function(fun, x0, jac=jac, args=args, epsilon=eps,
+ bounds=bounds,
+ finite_diff_rel_step=finite_diff_rel_step)
+
+ func_and_grad = sf.fun_and_grad
+
+ fortran_int = _lbfgsb.types.intvar.dtype
+
+ nbd = zeros(n, fortran_int)
+ low_bnd = zeros(n, float64)
+ upper_bnd = zeros(n, float64)
+ bounds_map = {(-np.inf, np.inf): 0,
+ (1, np.inf): 1,
+ (1, 1): 2,
+ (-np.inf, 1): 3}
+
+ if bounds is not None:
+ for i in range(0, n):
+ l, u = bounds[0, i], bounds[1, i]
+ if not np.isinf(l):
+ low_bnd[i] = l
+ l = 1
+ if not np.isinf(u):
+ upper_bnd[i] = u
+ u = 1
+ nbd[i] = bounds_map[l, u]
+
+ if not maxls > 0:
+ raise ValueError('maxls must be positive.')
+
+ x = array(x0, float64)
+ f = array(0.0, float64)
+ g = zeros((n,), float64)
+ wa = zeros(2*m*n + 5*n + 11*m*m + 8*m, float64)
+ iwa = zeros(3*n, fortran_int)
+ task = zeros(1, 'S60')
+ csave = zeros(1, 'S60')
+ lsave = zeros(4, fortran_int)
+ isave = zeros(44, fortran_int)
+ dsave = zeros(29, float64)
+
+ task[:] = 'START'
+
+ n_iterations = 0
+
+ while 1:
+ # g may become float32 if a user provides a function that calculates
+ # the Jacobian in float32 (see gh-18730). The underlying Fortran code
+ # expects float64, so upcast it
+ g = g.astype(np.float64)
+ # x, f, g, wa, iwa, task, csave, lsave, isave, dsave = \
+ _lbfgsb.setulb(m, x, low_bnd, upper_bnd, nbd, f, g, factr,
+ pgtol, wa, iwa, task, iprint, csave, lsave,
+ isave, dsave, maxls)
+ task_str = task.tobytes()
+ if task_str.startswith(b'FG'):
+ # The minimization routine wants f and g at the current x.
+ # Note that interruptions due to maxfun are postponed
+ # until the completion of the current minimization iteration.
+ # Overwrite f and g:
+ f, g = func_and_grad(x)
+ if sf.nfev > maxfun:
+ task[:] = ('STOP: TOTAL NO. of f AND g EVALUATIONS '
+ 'EXCEEDS LIMIT')
+ elif task_str.startswith(b'NEW_X'):
+ # new iteration
+ n_iterations += 1
+
+ intermediate_result = OptimizeResult(x=x, fun=f)
+ if _call_callback_maybe_halt(callback, intermediate_result):
+ task[:] = 'STOP: CALLBACK REQUESTED HALT'
+ if n_iterations >= maxiter:
+ task[:] = 'STOP: TOTAL NO. of ITERATIONS REACHED LIMIT'
+ elif sf.nfev > maxfun:
+ task[:] = ('STOP: TOTAL NO. of f AND g EVALUATIONS '
+ 'EXCEEDS LIMIT')
+ else:
+ break
+
+ task_str = task.tobytes().strip(b'\x00').strip()
+ if task_str.startswith(b'CONV'):
+ warnflag = 0
+ elif sf.nfev > maxfun or n_iterations >= maxiter:
+ warnflag = 1
+ else:
+ warnflag = 2
+
+ # These two portions of the workspace are described in the mainlb
+ # subroutine in lbfgsb.f. See line 363.
+ s = wa[0: m*n].reshape(m, n)
+ y = wa[m*n: 2*m*n].reshape(m, n)
+
+ # See lbfgsb.f line 160 for this portion of the workspace.
+ # isave(31) = the total number of BFGS updates prior the current iteration;
+ n_bfgs_updates = isave[30]
+
+ n_corrs = min(n_bfgs_updates, maxcor)
+ hess_inv = LbfgsInvHessProduct(s[:n_corrs], y[:n_corrs])
+
+ task_str = task_str.decode()
+ return OptimizeResult(fun=f, jac=g, nfev=sf.nfev,
+ njev=sf.ngev,
+ nit=n_iterations, status=warnflag, message=task_str,
+ x=x, success=(warnflag == 0), hess_inv=hess_inv)
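
As a usage illustration of the options documented above, here is a minimal,
self-contained sketch going through the public ``scipy.optimize.minimize``
interface (the quadratic objective and the option values are arbitrary
examples, not ADAO settings)::

    import numpy as np
    from scipy.optimize import minimize

    # bound-constrained quadratic: the free minimum (1, 2.5) is pushed
    # onto the bound x[1] <= 2
    fun = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2
    res = minimize(fun, x0=[2.0, 0.0], method="L-BFGS-B",
                   bounds=[(0, None), (0, 2)],
                   options={"maxcor": 10, "ftol": 2.2e-09,
                            "gtol": 1e-5, "maxls": 20})
    print(res.x)         # approximately [1.0, 2.0]
    print(res.hess_inv)  # a LbfgsInvHessProduct linear operator
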
+
+
+class LbfgsInvHessProduct(LinearOperator):
+ """Linear operator for the L-BFGS approximate inverse Hessian.
+
+ This operator computes the product of a vector with the approximate inverse
+ of the Hessian of the objective function, using the L-BFGS limited
+ memory approximation to the inverse Hessian, accumulated during the
+ optimization.
+
+ Objects of this class implement the ``scipy.sparse.linalg.LinearOperator``
+ interface.
+
+ Parameters
+ ----------
+ sk : array_like, shape=(n_corr, n)
+ Array of `n_corr` most recent updates to the solution vector.
+ (See [1]).
+ yk : array_like, shape=(n_corr, n)
+ Array of `n_corr` most recent updates to the gradient. (See [1]).
+
+ References
+ ----------
+ .. [1] Nocedal, Jorge. "Updating quasi-Newton matrices with limited
+ storage." Mathematics of computation 35.151 (1980): 773-782.
+
+ """
+
+ def __init__(self, sk, yk):
+ """Construct the operator."""
+ if sk.shape != yk.shape or sk.ndim != 2:
+ raise ValueError('sk and yk must have matching shape, (n_corrs, n)')
+ n_corrs, n = sk.shape
+
+ super().__init__(dtype=np.float64, shape=(n, n))
+
+ self.sk = sk
+ self.yk = yk
+ self.n_corrs = n_corrs
+ self.rho = 1 / np.einsum('ij,ij->i', sk, yk)
+
+ def _matvec(self, x):
+ """Efficient matrix-vector multiply with the BFGS matrices.
+
+ This calculation is described in Section (4) of [1].
+
+ Parameters
+ ----------
+ x : ndarray
+ An array with shape (n,) or (n,1).
+
+ Returns
+ -------
+ y : ndarray
+ The matrix-vector product
+
+ """
+ s, y, n_corrs, rho = self.sk, self.yk, self.n_corrs, self.rho
+ q = np.array(x, dtype=self.dtype, copy=True)
+ if q.ndim == 2 and q.shape[1] == 1:
+ q = q.reshape(-1)
+
+ alpha = np.empty(n_corrs)
+
+ for i in range(n_corrs-1, -1, -1):
+ alpha[i] = rho[i] * np.dot(s[i], q)
+ q = q - alpha[i]*y[i]
+
+ r = q
+ for i in range(n_corrs):
+ beta = rho[i] * np.dot(y[i], r)
+ r = r + s[i] * (alpha[i] - beta)
+
+ return r
+
+ def todense(self):
+ """Return a dense array representation of this operator.
+
+ Returns
+ -------
+ arr : ndarray, shape=(n, n)
+ An array with the same shape and containing
+ the same data represented by this `LinearOperator`.
+
+ """
+ s, y, n_corrs, rho = self.sk, self.yk, self.n_corrs, self.rho
+ I = np.eye(*self.shape, dtype=self.dtype)
+ Hk = I
+
+ for i in range(n_corrs):
+ A1 = I - s[i][:, np.newaxis] * y[i][np.newaxis, :] * rho[i]
+ A2 = I - y[i][:, np.newaxis] * s[i][np.newaxis, :] * rho[i]
+
+ Hk = np.dot(A1, np.dot(Hk, A2)) + (rho[i] * s[i][:, np.newaxis] *
+ s[i][np.newaxis, :])
+ return Hk
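
The operator above can be exercised on its own; a small consistency check with
synthetic correction pairs (random vectors of the right shapes only, and
assuming ``LbfgsInvHessProduct`` is available in the current namespace)::

    import numpy as np

    rng = np.random.default_rng(0)
    n_corrs, n = 3, 5
    sk = rng.standard_normal((n_corrs, n))   # recent solution updates s_k
    yk = rng.standard_normal((n_corrs, n))   # recent gradient updates y_k

    H = LbfgsInvHessProduct(sk, yk)
    v = rng.standard_normal(n)
    # the implicit two-loop recursion and the dense form agree
    assert np.allclose(H.matvec(v), H.todense() @ v)
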
import daAlgorithms.Atoms.lbfgsb19hlt as optimiseur
elif vt("1.11.0") <= vt(scipy.version.version) <= vt("1.11.99"):
import daAlgorithms.Atoms.lbfgsb111hlt as optimiseur
+ elif vt("1.12.0") <= vt(scipy.version.version) <= vt("1.12.99"):
+ import daAlgorithms.Atoms.lbfgsb112hlt as optimiseur
else:
import scipy.optimize as optimiseur
Minimum, J_optimal, Informations = optimiseur.fmin_l_bfgs_b(
import daAlgorithms.Atoms.lbfgsb19hlt as optimiseur
elif vt("1.11.0") <= vt(scipy.version.version) <= vt("1.11.99"):
import daAlgorithms.Atoms.lbfgsb111hlt as optimiseur
+ elif vt("1.12.0") <= vt(scipy.version.version) <= vt("1.12.99"):
+ import daAlgorithms.Atoms.lbfgsb112hlt as optimiseur
else:
import scipy.optimize as optimiseur
Minimum, J_optimal, Informations = optimiseur.fmin_l_bfgs_b(
import daAlgorithms.Atoms.lbfgsb19hlt as optimiseur
elif vt("1.11.0") <= vt(scipy.version.version) <= vt("1.11.99"):
import daAlgorithms.Atoms.lbfgsb111hlt as optimiseur
+ elif vt("1.12.0") <= vt(scipy.version.version) <= vt("1.12.99"):
+ import daAlgorithms.Atoms.lbfgsb112hlt as optimiseur
else:
import scipy.optimize as optimiseur
Minimum, J_optimal, Informations = optimiseur.fmin_l_bfgs_b(
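
The three hunks above extend the same SciPy version gate: versions 1.12.x get
the frozen copy "lbfgsb112hlt" of the corresponding internals, while other
versions fall through to the stock ``scipy.optimize`` interface. The idiom is
an ordered version comparison; a standalone sketch of it, with
``packaging.version.parse`` standing in for ADAO's ``vt`` helper (an
assumption made here for self-containedness)::

    import scipy.version
    from packaging.version import parse as vt   # stand-in for ADAO's vt

    def pick_lbfgsb_module(version=scipy.version.version):
        "Name of the frozen L-BFGS-B wrapper matching a SciPy version."
        for series in ("1.11", "1.12"):
            if vt(series + ".0") <= vt(version) <= vt(series + ".99"):
                return "daAlgorithms.Atoms.lbfgsb%shlt" % series.replace(".", "")
        return "scipy.optimize"

    print(pick_lbfgsb_module("1.12.1"))  # daAlgorithms.Atoms.lbfgsb112hlt
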
"""
__author__ = "Jean-Philippe ARGAUD"
-import math, numpy, scipy
-from daCore.PlatformInfo import PlatformInfo, vfloat
-mpr = PlatformInfo().MachinePrecision()
-mfp = PlatformInfo().MaximumPrecision()
+import math, numpy, scipy, copy
+from daCore.PlatformInfo import vfloat
# ==============================================================================
def uskf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q):
Cm = None
#
Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
- Xnp = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
+ Xnmu = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
nbSpts = 2*Xn.size+1
#
- XEtnnp = []
+ XEnnmu = []
for point in range(nbSpts):
if selfA._parameters["EstimationOf"] == "State":
Mm = EM["Direct"].appliedControledFormTo
- XEtnnpi = numpy.asarray( Mm( (Xnp[:,point], Un) ) ).reshape((-1,1))
+ XEnnmui = numpy.asarray( Mm( (Xnmu[:,point], Un) ) ).reshape((-1,1))
if Cm is not None and Un is not None: # Attention : si Cm est aussi dans M, doublon !
Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
- XEtnnpi = XEtnnpi + Cm @ Un
+ XEnnmui = XEnnmui + Cm @ Un
elif selfA._parameters["EstimationOf"] == "Parameters":
# --- > Par principe, M = Id, Q = 0
- XEtnnpi = Xnp[:,point]
- XEtnnp.append( numpy.ravel(XEtnnpi).reshape((-1,1)) )
- XEtnnp = numpy.concatenate( XEtnnp, axis=1 )
+ XEnnmui = Xnmu[:,point]
+ XEnnmu.append( numpy.ravel(XEnnmui).reshape((-1,1)) )
+ XEnnmu = numpy.concatenate( XEnnmu, axis=1 )
#
- Xncm = ( XEtnnp * Wm ).sum(axis=1)
+ Xhmn = ( XEnnmu * Wm ).sum(axis=1)
#
- if selfA._parameters["EstimationOf"] == "State": Pnm = Q
- elif selfA._parameters["EstimationOf"] == "Parameters": Pnm = 0.
+ if selfA._parameters["EstimationOf"] == "State": Pmn = copy.copy(Q)
+ elif selfA._parameters["EstimationOf"] == "Parameters": Pmn = 0.
for point in range(nbSpts):
- Pnm += Wc[i] * ((XEtnnp[:,point]-Xncm).reshape((-1,1)) * (XEtnnp[:,point]-Xncm))
+ dXEnnmuXhmn = XEnnmu[:,point].flat-Xhmn
+        Pmn += Wc[point] * numpy.outer(dXEnnmuXhmn, dXEnnmuXhmn)
#
- Pnmdemi = numpy.real(scipy.linalg.sqrtm(Pnm))
- #
- Xnnp = numpy.hstack([Xncm.reshape((-1,1)), Xncm.reshape((-1,1))+Gamma*Pnmdemi, Xncm.reshape((-1,1))-Gamma*Pnmdemi])
+ Pmndemi = numpy.real(scipy.linalg.sqrtm(Pmn))
+ Xnnmu = numpy.hstack([Xhmn.reshape((-1,1)), Xhmn.reshape((-1,1))+Gamma*Pmndemi, Xhmn.reshape((-1,1))-Gamma*Pmndemi])
#
Hm = HO["Direct"].appliedControledFormTo
- Ynnp = []
+ Ynnmu = []
for point in range(nbSpts):
if selfA._parameters["EstimationOf"] == "State":
- Ynnpi = Hm( (Xnnp[:,point], None) )
+ Ynnmui = Hm( (Xnnmu[:,point], None) )
elif selfA._parameters["EstimationOf"] == "Parameters":
- Ynnpi = Hm( (Xnnp[:,point], Un) )
- Ynnp.append( numpy.ravel(Ynnpi).reshape((-1,1)) )
- Ynnp = numpy.concatenate( Ynnp, axis=1 )
+ Ynnmui = Hm( (Xnnmu[:,point], Un) )
+ Ynnmu.append( numpy.ravel(Ynnmui).reshape((__p,1)) )
+ Ynnmu = numpy.concatenate( Ynnmu, axis=1 )
#
- Yncm = ( Ynnp * Wm ).sum(axis=1)
+ Yhmn = ( Ynnmu * Wm ).sum(axis=1)
#
- Pyyn = R
+ Pyyn = copy.copy(R)
Pxyn = 0.
for point in range(nbSpts):
- Pyyn += Wc[i] * ((Ynnp[:,point]-Yncm).reshape((-1,1)) * (Ynnp[:,point]-Yncm))
- Pxyn += Wc[i] * ((Xnnp[:,point]-Xncm).reshape((-1,1)) * (Ynnp[:,point]-Yncm))
+ dYnnmuYhmn = Ynnmu[:,point].flat-Yhmn
+ dXnnmuXhmn = Xnnmu[:,point].flat-Xhmn
+        Pyyn += Wc[point] * numpy.outer(dYnnmuYhmn, dYnnmuYhmn)
+        Pxyn += Wc[point] * numpy.outer(dXnnmuXhmn, dYnnmuYhmn)
#
if hasattr(Y,"store"):
Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
else:
Ynpu = numpy.ravel( Y ).reshape((__p,1))
- _Innovation = Ynpu - Yncm.reshape((-1,1))
+ _Innovation = Ynpu - Yhmn.reshape((-1,1))
if selfA._parameters["EstimationOf"] == "Parameters":
if Cm is not None and Un is not None: # Attention : si Cm est aussi dans H, doublon !
_Innovation = _Innovation - Cm @ Un
#
- Kn = Pxyn * Pyyn.I
- Xn = Xncm.reshape((-1,1)) + Kn * _Innovation
- Pn = Pnm - Kn * Pyyn * Kn.T
+ Kn = Pxyn @ Pyyn.I
+ Xn = Xhmn.reshape((-1,1)) + Kn @ _Innovation
+ Pn = Pmn - Kn @ (Pyyn @ Kn.T)
#
Xa = Xn # Pointeurs
#--------------------------
or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
- selfA.StoredVariables["ForecastState"].store( Xncm )
+ selfA.StoredVariables["ForecastState"].store( Xhmn )
if selfA._toStore("ForecastCovariance"):
- selfA.StoredVariables["ForecastCovariance"].store( Pnm )
+ selfA.StoredVariables["ForecastCovariance"].store( Pmn )
if selfA._toStore("BMA"):
- selfA.StoredVariables["BMA"].store( Xncm - Xa )
+ selfA.StoredVariables["BMA"].store( Xhmn - Xa )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
if selfA._toStore("SimulatedObservationAtCurrentState") \
or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yncm )
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yhmn )
# ---> autres
if selfA._parameters["StoreInternalVariables"] \
or selfA._toStore("CostFunctionJ") \
import daAlgorithms.Atoms.lbfgsb19hlt as optimiseur
elif vt("1.11.0") <= vt(scipy.version.version) <= vt("1.11.99"):
import daAlgorithms.Atoms.lbfgsb111hlt as optimiseur
+ elif vt("1.12.0") <= vt(scipy.version.version) <= vt("1.12.99"):
+ import daAlgorithms.Atoms.lbfgsb112hlt as optimiseur
else:
import scipy.optimize as optimiseur
Minimum, J_optimal, Informations = optimiseur.fmin_l_bfgs_b(
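
In the "uskf" rewrite above, the sigma-point covariances are now accumulated
as explicit rank-one updates with ``numpy.outer``, one per sigma point,
instead of an ambiguous elementwise product. The pattern, reduced to a minimal
numerical sketch with generic uniform weights (illustrative values, not ADAO's
UKF weights)::

    import numpy as np

    rng = np.random.default_rng(1)
    n, nbSpts = 3, 7
    Xnmu = rng.standard_normal((n, nbSpts))  # sigma points, one per column
    Wm = np.full(nbSpts, 1.0 / nbSpts)       # mean weights (example values)
    Wc = np.full(nbSpts, 1.0 / nbSpts)       # covariance weights (example values)

    Xhmn = (Xnmu * Wm).sum(axis=1)           # weighted mean, as in uskf
    P = np.zeros((n, n))
    for point in range(nbSpts):
        d = Xnmu[:, point] - Xhmn
        P += Wc[point] * np.outer(d, d)      # one rank-one update per sigma point

    # with uniform weights this is exactly the biased empirical covariance
    assert np.allclose(P, np.cov(Xnmu, bias=True))
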
import numpy
from daCore import BasicObjects
-from daAlgorithms.Atoms import ecweim, ecwdeim, eosg
+from daAlgorithms.Atoms import ecweim, ecwdeim, ecwubfeim, eosg
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
"DEIM", "PositioningByDEIM",
"lcDEIM", "PositioningBylcDEIM",
],
+ listadv = [
+ "UBFEIM", "PositioningByUBFEIM",
+ "lcUBFEIM", "PositioningBylcUBFEIM",
+ ],
)
self.defineRequiredParameter(
name = "EnsembleOfSnapshots",
typecast = numpy.array,
message = "Ensemble de vecteurs d'état physique (snapshots), 1 état par colonne (Training Set)",
)
+ self.defineRequiredParameter(
+ name = "UserBasisFunctions",
+ default = [],
+ typecast = numpy.array,
+            message = "Ensemble de fonctions de base définies par l'utilisateur, 1 fonction de base par colonne",
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfLocations",
default = 1,
typecast = tuple,
message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]",
)
+ self.defineRequiredParameter(
+ name = "ReduceMemoryUse",
+ default = False,
+ typecast = bool,
+ message = "Réduction de l'empreinte mémoire lors de l'exécution au prix d'une augmentation du temps de calcul",
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
elif isinstance(HO, dict):
ecweim.EIM_offline(self, eosg.eosg(self, Xb, HO))
else:
- raise ValueError("Snapshots or Operator have to be given in order to launch the analysis")
+ raise ValueError("Snapshots or Operator have to be given in order to launch the EIM analysis")
#
#--------------------------
elif self._parameters["Variant"] in ["lcDEIM", "PositioningBylcDEIM", "DEIM", "PositioningByDEIM"]:
elif isinstance(HO, dict):
ecwdeim.DEIM_offline(self, eosg.eosg(self, Xb, HO))
else:
- raise ValueError("Snapshots or Operator have to be given in order to launch the analysis")
+ raise ValueError("Snapshots or Operator have to be given in order to launch the DEIM analysis")
+ #--------------------------
+ elif self._parameters["Variant"] in ["lcUBFEIM", "PositioningBylcUBFEIM", "UBFEIM", "PositioningByUBFEIM"]:
+ if len(self._parameters["EnsembleOfSnapshots"]) > 0:
+ if self._toStore("EnsembleOfSimulations"):
+ self.StoredVariables["EnsembleOfSimulations"].store( self._parameters["EnsembleOfSnapshots"] )
+ ecwubfeim.UBFEIM_offline(self, self._parameters["EnsembleOfSnapshots"])
+ elif isinstance(HO, dict):
+ ecwubfeim.UBFEIM_offline(self, eosg.eosg(self, Xb, HO))
+ else:
+ raise ValueError("Snapshots or Operator have to be given in order to launch the UBFEIM analysis")
#
#--------------------------
else:
else:
__Ensemble = __EOS
#
- __sv, __svsq = SingularValuesEstimation( __Ensemble )
- __tisv = __svsq / __svsq.sum()
- __qisv = 1. - __tisv.cumsum()
+ __sv, __svsq, __tisv, __qisv = SingularValuesEstimation( __Ensemble )
if self._parameters["MaximumNumberOfModes"] < len(__sv):
__sv = __sv[:self._parameters["MaximumNumberOfModes"]]
__tisv = __tisv[:self._parameters["MaximumNumberOfModes"]]
svalue = __sv[ns]
rvalue = __sv[ns] / __sv[0]
vsinfo = 100 * __tisv[ns]
- rsinfo = 100 * __qisv[ns]
+ rsinfo = max(100 * __qisv[ns],0.)
if __s:
msgs += (__marge + " %0"+str(__ordre)+"i | %22."+str(__p)+"e | %22."+str(__p)+"e | %2i%s , %4.1f%s\n")%(ns,svalue,rvalue,vsinfo,"%",rsinfo,"%")
if rsinfo > 10: cut1pd = ns+2 # 10%
msgs += __marge + "Representing more than 99.99%s of variance requires at least %i mode(s).\n"%("%",cut1pi)
#
if has_matplotlib and self._parameters["PlotAndSave"]:
+            # Evite les messages de debug de matplotlib
+ dL = logging.getLogger().getEffectiveLevel()
+ logging.getLogger().setLevel(logging.WARNING)
try:
msgs += ("\n")
msgs += (__marge + "Plot and save results in a file named \"%s\"\n"%str(self._parameters["ResultFile"]))
msgs += ("\n")
            msgs += (__marge + "Saving figure failed, please update your Matplotlib version.\n")
msgs += ("\n")
+ logging.getLogger().setLevel(dL)
#
msgs += ("\n")
msgs += (__marge + "%s\n"%("-"*75,))
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
from daCore import BasicObjects
-from daAlgorithms.Atoms import c2ukf, uskf
+from daAlgorithms.Atoms import ecwukf, c2ukf, uskf
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
"UKF",
"2UKF",
],
+ listadv = [
+ "UKF-Std",
+ "UKF-MSP",
+ ],
)
self.defineRequiredParameter(
name = "EstimationOf",
#--------------------------
# Default UKF
#--------------------------
- if self._parameters["Variant"] == "UKF":
- uskf.uskf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
+ if self._parameters["Variant"] in ["UKF", "UKF-Std"]:
+ ecwukf.ecwukf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
#--------------------------
# Default 2UKF
c2ukf.c2ukf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
#--------------------------
+ # UKF-MSP
+ elif self._parameters["Variant"] == "UKF-MSP":
+ uskf.uskf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
+ #
+ #--------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
#
# Mise à jour des paramètres internes avec le contenu de Parameters, en
        # reprenant les valeurs par défaut pour toutes celles non définies
- self.__setParameters(Parameters, reset=True)
+ self.__setParameters(Parameters, reset=True) # Copie
for k, v in self.__variable_names_not_public.items():
if k not in self._parameters: self.__setParameters( {k:v} )
logging.debug("%s %s vector %s is not set, but is not required."%(self._name,argname,symbol))
else:
if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- logging.debug("%s %s vector %s is required and set, and its size is %i."%(self._name,argname,symbol,numpy.array(argument).size))
+ logging.debug(
+ "%s %s vector %s is required and set, and its size is %i."%(
+ self._name,argname,symbol,numpy.array(argument).size))
elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s vector %s is optional and set, and its size is %i."%(self._name,argname,symbol,numpy.array(argument).size))
+ logging.debug(
+ "%s %s vector %s is optional and set, and its size is %i."%(
+ self._name,argname,symbol,numpy.array(argument).size))
else:
logging.debug(
"%s %s vector %s is set although neither required nor optional, and its size is %i."%(
__test_vvalue( Xb, "Xb", "Background or initial state" )
__test_vvalue( Y, "Y", "Observation" )
__test_vvalue( U, "U", "Control" )
-
+ #
# Corrections et compléments des covariances
def __test_cvalue(argument, variable, argname, symbol=None):
if symbol is None: symbol = variable
elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
logging.debug("%s %s error covariance matrix %s is optional and set."%(self._name,argname,symbol))
else:
- logging.debug("%s %s error covariance matrix %s is set although neither required nor optional."%(self._name,argname,symbol))
+ logging.debug(
+ "%s %s error covariance matrix %s is set although neither required nor optional."%(
+ self._name,argname,symbol))
return 0
__test_cvalue( B, "B", "Background" )
__test_cvalue( R, "R", "Observation" )
__test_cvalue( Q, "Q", "Evolution" )
-
+ #
# Corrections et compléments des opérateurs
def __test_ovalue(argument, variable, argname, symbol=None):
if symbol is None: symbol = variable
self._parameters[k] = self.setParameterValue(k)
else:
pass
- if hasattr(self._parameters[k],"__len__") and len(self._parameters[k]) > 100:
+ if hasattr(self._parameters[k],"size") and self._parameters[k].size > 100:
+ logging.debug("%s %s d'une taille totale de %s", self._name, self.__required_parameters[k]["message"], self._parameters[k].size)
+ elif hasattr(self._parameters[k],"__len__") and len(self._parameters[k]) > 100:
logging.debug("%s %s de longueur %s", self._name, self.__required_parameters[k]["message"], len(self._parameters[k]))
else:
logging.debug("%s %s : %s", self._name, self.__required_parameters[k]["message"], self._parameters[k])
if self._format == "text/csv" or Format.upper() == "CSV":
self._format = "text/csv"
self.__filestring = "".join(self.__header)
- if self.__filestring.count(",") > 1:
+ if self.__filestring.count(",") > 0:
self._delimiter = ","
- elif self.__filestring.count(";") > 1:
+ elif self.__filestring.count(";") > 0:
self._delimiter = ";"
else:
self._delimiter = None
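
The corrected delimiter detection above matters for the smallest valid files:
a two-column CSV header contains exactly one comma, which the previous
``count(",") > 1`` condition rejected. A quick check::

    header = "Time,Value\n"
    assert header.count(",") == 1               # the old "> 1" test failed here
    delimiter = "," if header.count(",") > 0 else None
    assert delimiter == ","
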
usecols = __usecols,
skiprows = self._skiprows,
converters = __converters,
+ ndmin = 1,
)
elif self._format in ["text/csv", "text/tab-separated-values"]:
__content = numpy.loadtxt(
skiprows = self._skiprows,
converters = __converters,
delimiter = self._delimiter,
+ ndmin = 1,
)
else:
            raise ValueError("Unknown file format \"%s\""%self._format)
else:
raise ValueError("Error in requested variant name: %s"%__Using)
#
- return __sv, __svsq
+ __tisv = __svsq / __svsq.sum()
+ __qisv = 1. - __svsq.cumsum() / __svsq.sum()
+ # Différence à 1.e-16 : __qisv = 1. - __tisv.cumsum()
+ #
+ return __sv, __svsq, __tisv, __qisv
+
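
The two quantities now returned in addition by ``SingularValuesEstimation``
are plain normalizations of the squared singular values; a short numerical
check with arbitrary values::

    import numpy as np

    svsq = np.array([9.0, 0.9, 0.1])         # squared singular values (example)
    tisv = svsq / svsq.sum()                 # share of information per mode
    qisv = 1. - svsq.cumsum() / svsq.sum()   # information left after each mode
    print(tisv)                              # [0.9  0.09 0.01]
    print(qisv)                              # [0.1  0.01 0.  ]
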
+# ==============================================================================
+def MaxL2NormByColumn(__Ensemble, __LcCsts = False, __IncludedPoints = []):
+ "Maximum des normes L2 calculées par colonne"
+ if __LcCsts and len(__IncludedPoints) > 0:
+ normes = numpy.linalg.norm(
+ numpy.take(__Ensemble, __IncludedPoints, axis=0, mode='clip'),
+ axis = 0,
+ )
+ else:
+ normes = numpy.linalg.norm( __Ensemble, axis = 0)
+ nmax = numpy.max(normes)
+ imax = numpy.argmax(normes)
+ return nmax, imax, normes
+
+def MaxLinfNormByColumn(__Ensemble, __LcCsts = False, __IncludedPoints = []):
+ "Maximum des normes Linf calculées par colonne"
+ if __LcCsts and len(__IncludedPoints) > 0:
+ normes = numpy.linalg.norm(
+ numpy.take(__Ensemble, __IncludedPoints, axis=0, mode='clip'),
+ axis = 0, ord=numpy.inf,
+ )
+ else:
+ normes = numpy.linalg.norm( __Ensemble, axis = 0, ord=numpy.inf)
+ nmax = numpy.max(normes)
+ imax = numpy.argmax(normes)
+ return nmax, imax, normes
+
+def InterpolationErrorByColumn(
+ __Ensemble = None, __Basis = None, __Points = None, __M = 2, # Usage 1
+ __Differences = None, # Usage 2
+ __ErrorNorm = None, # Commun
+ __LcCsts = False, __IncludedPoints = [], # Commun
+ __CDM = False, # ComputeMaxDifference # Commun
+ __RMU = False, # ReduceMemoryUse # Commun
+ __FTL = False, # ForceTril # Commun
+ ):
+ "Analyse des normes d'erreurs d'interpolation calculées par colonne"
+ if __ErrorNorm == "L2":
+ NormByColumn = MaxL2NormByColumn
+ else:
+ NormByColumn = MaxLinfNormByColumn
+ #
+ if __Differences is None and not __RMU: # Usage 1
+ if __FTL:
+ rBasis = numpy.tril( __Basis[__Points,:] )
+ else:
+ rBasis = __Basis[__Points,:]
+ rEnsemble = __Ensemble[__Points,:]
+ #
+ if __M > 1:
+ rBasis_inv = numpy.linalg.inv(rBasis)
+ Interpolator = numpy.dot(__Basis,numpy.dot(rBasis_inv,rEnsemble))
+ else:
+ rBasis_inv = 1. / rBasis
+ Interpolator = numpy.outer(__Basis,numpy.outer(rBasis_inv,rEnsemble))
+ #
+ differences = __Ensemble - Interpolator
+ #
+ error, nbr, _ = NormByColumn(differences, __LcCsts, __IncludedPoints)
+ #
+ if __CDM:
+ maxDifference = differences[:,nbr]
+ #
+ elif __Differences is None and __RMU: # Usage 1
+ if __FTL:
+ rBasis = numpy.tril( __Basis[__Points,:] )
+ else:
+ rBasis = __Basis[__Points,:]
+ rEnsemble = __Ensemble[__Points,:]
+ #
+ if __M > 1:
+ rBasis_inv = numpy.linalg.inv(rBasis)
+ rCoordinates = numpy.dot(rBasis_inv,rEnsemble)
+ else:
+ rBasis_inv = 1. / rBasis
+ rCoordinates = numpy.outer(rBasis_inv,rEnsemble)
+ #
+ error = 0.
+ nbr = -1
+ for iCol in range(__Ensemble.shape[1]):
+ if __M > 1:
+ iDifference = __Ensemble[:,iCol] - numpy.dot(__Basis, rCoordinates[:,iCol])
+ else:
+ iDifference = __Ensemble[:,iCol] - numpy.ravel(numpy.outer(__Basis, rCoordinates[:,iCol]))
+ #
+ normDifference, _, _ = NormByColumn(iDifference, __LcCsts, __IncludedPoints)
+ #
+ if normDifference > error:
+ error = normDifference
+ nbr = iCol
+ #
+ if __CDM:
+ maxDifference = __Ensemble[:,nbr] - numpy.dot(__Basis, rCoordinates[:,nbr])
+ #
+ else: # Usage 2
+ differences = __Differences
+ #
+ error, nbr, _ = NormByColumn(differences, __LcCsts, __IncludedPoints)
+ #
+ if __CDM:
+ # faire cette variable intermédiaire coûte cher
+ maxDifference = differences[:,nbr]
+ #
+ if __CDM:
+ return error, nbr, maxDifference
+ else:
+ return error, nbr
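
The ``__RMU`` (ReduceMemoryUse) branch above avoids allocating the full
``differences`` matrix, which has the size of the whole ensemble, by
recomputing one column of residuals at a time. A condensed sketch of the two
equivalent paths for the ``__M > 1`` case, on synthetic data and with
``numpy.linalg.solve`` standing in for the explicit inverse used above::

    import numpy as np

    rng = np.random.default_rng(2)
    nstate, nmodes, nsnap = 50, 4, 20
    Basis = rng.standard_normal((nstate, nmodes))
    Ensemble = rng.standard_normal((nstate, nsnap))
    Points = np.arange(nmodes)               # interpolation (magic) points

    rCoord = np.linalg.solve(Basis[Points, :], Ensemble[Points, :])

    # full-matrix path: one (nstate, nsnap) temporary
    err_full = np.linalg.norm(Ensemble - Basis @ rCoord, axis=0).max()

    # reduced-memory path: one (nstate,) column at a time
    err_rmu = max(np.linalg.norm(Ensemble[:, j] - Basis @ rCoord[:, j])
                  for j in range(nsnap))

    assert np.isclose(err_full, err_rmu)
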
# ==============================================================================
def EnsemblePerturbationWithGivenCovariance(
# ---------------------------------------------------------
def means(self):
"""
- Renvoie la série, contenant à chaque pas, la valeur moyenne des données
+ Renvoie la série contenant, à chaque pas, la valeur moyenne des données
au pas. Il faut que le type de base soit compatible avec les types
élémentaires numpy.
"""
def stds(self, ddof=0):
"""
- Renvoie la série, contenant à chaque pas, l'écart-type des données
+ Renvoie la série contenant, à chaque pas, l'écart-type des données
au pas. Il faut que le type de base soit compatible avec les types
élémentaires numpy.
def sums(self):
"""
- Renvoie la série, contenant à chaque pas, la somme des données au pas.
+ Renvoie la série contenant, à chaque pas, la somme des données au pas.
Il faut que le type de base soit compatible avec les types élémentaires
numpy.
"""
def mins(self):
"""
- Renvoie la série, contenant à chaque pas, le minimum des données au pas.
+ Renvoie la série contenant, à chaque pas, le minimum des données au pas.
Il faut que le type de base soit compatible avec les types élémentaires
numpy.
"""
def maxs(self):
"""
- Renvoie la série, contenant à chaque pas, la maximum des données au pas.
+        Renvoie la série contenant, à chaque pas, le maximum des données au pas.
Il faut que le type de base soit compatible avec les types élémentaires
numpy.
"""
def powers(self, x2):
"""
- Renvoie la série, contenant à chaque pas, la puissance "**x2" au pas.
+ Renvoie la série contenant, à chaque pas, la puissance "**x2" au pas.
Il faut que le type de base soit compatible avec les types élémentaires
numpy.
"""
"""
Norm (_ord : voir numpy.linalg.norm)
- Renvoie la série, contenant à chaque pas, la norme des données au pas.
+ Renvoie la série contenant, à chaque pas, la norme des données au pas.
Il faut que le type de base soit compatible avec les types élémentaires
numpy.
"""
except Exception:
raise TypeError("Base type is incompatible with numpy")
+ def traces(self, offset=0):
+ """
+ Trace
+
+ Renvoie la série contenant, à chaque pas, la trace (avec l'offset) des
+ données au pas. Il faut que le type de base soit compatible avec les
+ types élémentaires numpy.
+ """
+ try:
+ return [numpy.trace(item, offset, dtype=mfp) for item in self.__values]
+ except Exception:
+ raise TypeError("Base type is incompatible with numpy")
+
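
The added ``traces`` method simply maps ``numpy.trace``, including its
``offset`` argument, over the stored series; for reference::

    import numpy as np

    A = np.arange(9).reshape(3, 3)
    print(np.trace(A))             # 0 + 4 + 8 = 12
    print(np.trace(A, offset=1))   # 1 + 5 = 6
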
def maes(self, _predictor=None):
"""
Mean Absolute Error (MAE)
mae(dX) = 1/n sum(dX_i)
- Renvoie la série, contenant à chaque pas, la MAE des données au pas.
+ Renvoie la série contenant, à chaque pas, la MAE des données au pas.
Il faut que le type de base soit compatible avec les types élémentaires
numpy. C'est réservé aux variables d'écarts ou d'incréments si le
prédicteur est None, sinon c'est appliqué à l'écart entre les données
Mean-Square Error (MSE) ou Mean-Square Deviation (MSD)
mse(dX) = 1/n sum(dX_i**2)
- Renvoie la série, contenant à chaque pas, la MSE des données au pas. Il
+ Renvoie la série contenant, à chaque pas, la MSE des données au pas. Il
faut que le type de base soit compatible avec les types élémentaires
numpy. C'est réservé aux variables d'écarts ou d'incréments si le
prédicteur est None, sinon c'est appliqué à l'écart entre les données
Root-Mean-Square Error (RMSE) ou Root-Mean-Square Deviation (RMSD)
rmse(dX) = sqrt( 1/n sum(dX_i**2) ) = sqrt( mse(dX) )
- Renvoie la série, contenant à chaque pas, la RMSE des données au pas.
+ Renvoie la série contenant, à chaque pas, la RMSE des données au pas.
Il faut que le type de base soit compatible avec les types élémentaires
numpy. C'est réservé aux variables d'écarts ou d'incréments si le
prédicteur est None, sinon c'est appliqué à l'écart entre les données
raise ImportError("The Gnuplot module is required to plot the object.")
#
# Vérification et compléments sur les paramètres d'entrée
+ if ltitle is None: ltitle = ""
+ __geometry = str(geometry)
+ __sizespec = (__geometry.split('+')[0]).replace('x',',')
+ #
if persist:
- Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist -geometry '+geometry
- else:
- Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -geometry '+geometry
- if ltitle is None:
- ltitle = ""
+ Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist '
+ #
self.__g = Gnuplot.Gnuplot() # persist=1
- self.__g('set terminal '+Gnuplot.GnuplotOpts.default_term)
+ self.__g('set terminal '+Gnuplot.GnuplotOpts.default_term+' size '+__sizespec)
self.__g('set style data lines')
self.__g('set grid')
self.__g('set autoscale')
i = -1
for index in indexes:
self.__g('set title "'+str(title)+' (pas '+str(index)+')"')
- if isinstance(steps,list) or isinstance(steps,numpy.ndarray):
+ if isinstance(steps, (list, numpy.ndarray)):
Steps = list(steps)
else:
Steps = list(range(len(self.__values[index])))
raise ImportError("The Gnuplot module is required to plot the object.")
#
# Vérification et compléments sur les paramètres d'entrée
- if persist:
- Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist -geometry '+geometry
- else:
- Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -geometry '+geometry
- if ltitle is None:
- ltitle = ""
- if isinstance(steps,list) or isinstance(steps, numpy.ndarray):
+ if ltitle is None: ltitle = ""
+ if isinstance(steps, (list, numpy.ndarray)):
Steps = list(steps)
else:
Steps = list(range(len(self.__values[0])))
+ __geometry = str(geometry)
+ __sizespec = (__geometry.split('+')[0]).replace('x',',')
+ #
+ if persist:
+ Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist '
+ #
self.__g = Gnuplot.Gnuplot() # persist=1
- self.__g('set terminal '+Gnuplot.GnuplotOpts.default_term)
+ self.__g('set terminal '+Gnuplot.GnuplotOpts.default_term+' size '+__sizespec)
self.__g('set style data lines')
self.__g('set grid')
self.__g('set autoscale')
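
The geometry handling rewritten above turns an X11-style geometry string into
the ``size`` clause of a Gnuplot ``set terminal`` command; the string
manipulation itself is one line (the terminal name below is only an example)::

    geometry = "600x400+0+0"                    # X11 geometry: WxH+Xoff+Yoff
    sizespec = geometry.split('+')[0].replace('x', ',')
    print('set terminal x11 size ' + sizespec)  # set terminal x11 size 600,400
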