pair: Variant ; 3DVAR-Incr
pair: Variant ; 3DVAR-PSAS
-- "3DVAR" (3D Variational analysis, see [Lorenc86]_, [LeDimet86]_, [Talagrand97]_), original classical algorithm and very robust, which operates in the model space,
-- "3DVAR-VAN" (3D Variational Analysis with No inversion of B, see [Lorenc88]_), similar algorithm, which operates in the model space, but avoiding inversion of the covariance matrix B,
-- "3DVAR-Incr" (Incremental 3DVAR, see [Courtier94]_), cheaper algorithm than the previous ones, but involving an approximation of non-linear operators,
-- "3DVAR-PSAS" (Physical-space Statistical Analysis Scheme for 3DVAR, see [Courtier97]_, [Cohn98]_), algorithm sometimes cheaper because it operates in the observation space, but involving an approximation of non-linear operators and not allowing to take into account bounds.
+- "3DVAR" (3D Variational analysis, see [Lorenc86]_, [LeDimet86]_, [Talagrand97]_), original classical algorithm, extremely robust, which operates in the model space,
+- "3DVAR-VAN" (3D Variational Analysis with No inversion of B, see [Lorenc88]_), similar algorithm, which operates in the model space, avoiding inversion of the covariance matrix B (except in the case where there are bounds.),
+- "3DVAR-Incr" (Incremental 3DVAR, see [Courtier94]_), cheaper algorithm than the previous ones, involving an approximation of non-linear operators,
+- "3DVAR-PSAS" (Physical-space Statistical Analysis Scheme for 3DVAR, see [Courtier97]_, [Cohn98]_), algorithm sometimes cheaper because it operates in the observation space, involving an approximation of non-linear operators, not allowing to take into account bounds.
It is highly recommended to use the original "3DVAR". The "3DVAR" and
-"3DVAR-Incr" algorithms (and not the others) allow modification of the
-initialization point for the minimization, but it is not recommended.
+"3DVAR-Incr" algorithms (and not the others) explicitly allow the modification
+of the initial point of their minimization, even if it is not recommended.
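+
+As a minimal sketch, the variant can be selected through the "*Variant*" entry
+of the algorithm parameters, here using the textual interface (the vectors
+``Xb`` and ``Yobs`` and the operator ``H`` are assumed to be defined by the
+user beforehand)::
+
+    from adao import adaoBuilder
+    case = adaoBuilder.New()
+    case.set( 'AlgorithmParameters',
+        Algorithm  = '3DVAR',
+        Parameters = { "Variant" : "3DVAR-Incr" },
+        )
+    case.set( 'Background',          Vector = Xb )
+    case.set( 'Observation',         Vector = Yobs )
+    case.set( 'ObservationOperator', OneFunction = H )
+    case.execute()
+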
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
"ForecastState",
"IndexOfOptimum",
"Innovation",
+ "InnovationAtCurrentAnalysis",
"InnovationAtCurrentState",
"JacobianMatrixAtBackground",
"JacobianMatrixAtOptimum",
.. include:: snippets/Innovation.rst
+.. include:: snippets/InnovationAtCurrentAnalysis.rst
+
.. include:: snippets/InnovationAtCurrentState.rst
.. include:: snippets/JacobianMatrixAtBackground.rst
pair: Variant ; 3DVAR-Incr
pair: Variant ; 3DVAR-PSAS
-- "3DVAR" (3D Variational analysis, voir [Lorenc86]_, [LeDimet86]_, [Talagrand97]_), algorithme classique d'origine, très robuste, opérant dans l'espace du modèle,
-- "3DVAR-VAN" (3D Variational Analysis with No inversion of B, voir [Lorenc88]_), algorithme similaire, opérant dans l'espace du modèle, mais permettant d'éviter l'inversion de la matrice de covariance B,
-- "3DVAR-Incr" (Incremental 3DVAR, voir [Courtier94]_), algorithme plus économique que les précédents, mais impliquant une approximation des opérateurs non-linéaires,
-- "3DVAR-PSAS" (Physical-space Statistical Analysis Scheme for 3DVAR, voir [Courtier97]_, [Cohn98]_), algorithme parfois plus économique car opérant dans l'espace des observations, mais impliquant une approximation des opérateurs non-linéaires et ne permettant pas la prise en compte de bornes.
+- "3DVAR" (3D Variational analysis, voir [Lorenc86]_, [LeDimet86]_, [Talagrand97]_), algorithme classique d'origine, extrêmement robuste, opérant dans l'espace du modèle,
+- "3DVAR-VAN" (3D Variational Analysis with No inversion of B, voir [Lorenc88]_), algorithme similaire, opérant dans l'espace du modèle, permettant d'éviter l'inversion de la matrice de covariance B (sauf dans le cas où il y des bornes),
+- "3DVAR-Incr" (Incremental 3DVAR, voir [Courtier94]_), algorithme plus économique que les précédents, impliquant une approximation des opérateurs non-linéaires,
+- "3DVAR-PSAS" (Physical-space Statistical Analysis Scheme for 3DVAR, voir [Courtier97]_, [Cohn98]_), algorithme parfois plus économique car opérant dans l'espace des observations, impliquant une approximation des opérateurs non-linéaires, ne permettant pas la prise en compte de bornes.
On recommande fortement d'utiliser le "3DVAR" d'origine. Les algorithmes
-"3DVAR" et "3DVAR-Incr" (et pas les autres) permettent la modification du point
-initial de leur minimisation, mais ce n'est pas recommandé.
+"3DVAR" et "3DVAR-Incr" (et pas les autres) permettent explicitement la
+modification du point initial de leur minimisation, même si ce n'est pas
+recommandé.
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
"ForecastState",
"IndexOfOptimum",
"Innovation",
+ "InnovationAtCurrentAnalysis",
"InnovationAtCurrentState",
"JacobianMatrixAtBackground",
"JacobianMatrixAtOptimum",
.. include:: snippets/Innovation.rst
+.. include:: snippets/InnovationAtCurrentAnalysis.rst
+
.. include:: snippets/InnovationAtCurrentState.rst
.. include:: snippets/JacobianMatrixAtBackground.rst
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
-from daCore import BasicObjects, NumericObjects
-import numpy, scipy.optimize, scipy.version
+import numpy
+from daCore import BasicObjects
+from daAlgorithms.Atoms import std4dvar
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
)
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "EM", "R", "B" ),
- optional = ("U", "CM"),
+ optional = ("U", "CM", "Q"),
)
self.setAttributes(tags=(
"DataAssimilation",
#--------------------------
# Default 4DVAR
if self._parameters["Variant"] in ["4DVAR", "4DVAR-Std"]:
- NumericObjects.std4dvar(self, Xb, Y, U, HO, EM, CM, R, B, Q)
+ std4dvar.std4dvar(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
#--------------------------
else:
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, logging
-from daCore import BasicObjects, PlatformInfo
import numpy
+from daCore import BasicObjects, PlatformInfo
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
-if sys.version_info.major > 2:
- unicode = str
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
Y must be in the image of F. If it is not given, we take Y = F(X).\n""" + __precision
#
if len(self._parameters["ResultTitle"]) > 0:
- __rt = unicode(self._parameters["ResultTitle"])
+ __rt = str(self._parameters["ResultTitle"])
msgs = u"\n"
msgs += __marge + "====" + "="*len(__rt) + "====\n"
msgs += __marge + " " + __rt + "\n"
msgs += "\n" + __marge + __entete
msgs += "\n" + __marge + "-"*__nbtirets
#
- Normalisation= -1
- #
# ----------
for i,amplitude in enumerate(Perturbations):
dX = amplitude * dX0
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2022 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2022 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+__doc__ = """
+ Constrained (2UKF) Unscented Kalman Filter
+"""
+__author__ = "Jean-Philippe ARGAUD"
+
+import math, numpy, scipy, scipy.linalg
+from daCore.NumericObjects import ForceNumericBounds
+from daCore.NumericObjects import ApplyBounds
+from daCore.PlatformInfo import PlatformInfo
+mpr = PlatformInfo().MachinePrecision()
+mfp = PlatformInfo().MaximumPrecision()
+
+# ==============================================================================
+def c2ukf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q):
+ """
+ Constrained (2UKF) Unscented Kalman Filter
+ """
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ selfA._parameters["StoreInternalVariables"] = True
+ selfA._parameters["Bounds"] = ForceNumericBounds( selfA._parameters["Bounds"] )
+ #
+ L = Xb.size
+ Alpha = selfA._parameters["Alpha"]
+ Beta = selfA._parameters["Beta"]
+ if selfA._parameters["Kappa"] == 0:
+ if selfA._parameters["EstimationOf"] == "State":
+ Kappa = 0
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+ Kappa = 3 - L
+ else:
+ Kappa = selfA._parameters["Kappa"]
+ Lambda = float( Alpha**2 ) * ( L + Kappa ) - L
+ Gamma = math.sqrt( L + Lambda )
+ #
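+    # Weights of the unscented transform: Wm for the means, Wc for the
+    # covariances; the central sigma point carries the specific weights
+    # Wm[0] and Wc[0], the 2L other points share the weight 1/(2*(L+Lambda))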
+ Ww = []
+ Ww.append( 0. )
+ for i in range(2*L):
+ Ww.append( 1. / (2.*(L + Lambda)) )
+ #
+ Wm = numpy.array( Ww )
+ Wm[0] = Lambda / (L + Lambda)
+ Wc = numpy.array( Ww )
+ Wc[0] = Lambda / (L + Lambda) + (1. - Alpha**2 + Beta)
+ #
+    # Operators
+ Hm = HO["Direct"].appliedControledFormTo
+ #
+ if selfA._parameters["EstimationOf"] == "State":
+ Mm = EM["Direct"].appliedControledFormTo
+ #
+ if CM is not None and "Tangent" in CM and U is not None:
+ Cm = CM["Tangent"].asMatrix(Xb)
+ else:
+ Cm = None
+ #
+    # Observation duration and sizes
+ if hasattr(Y,"stepnumber"):
+ duration = Y.stepnumber()
+ __p = numpy.cumprod(Y.shape())[-1]
+ else:
+ duration = 2
+ __p = numpy.array(Y).size
+ #
+    # Precompute the inverses of B and R
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
+ BI = B.getI()
+ RI = R.getI()
+ #
+ __n = Xb.size
+ nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
+ #
+ if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ Xn = Xb
+ if hasattr(B,"asfullmatrix"):
+ Pn = B.asfullmatrix(__n)
+ else:
+ Pn = B
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["Analysis"].store( Xb )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
+ elif selfA._parameters["nextStep"]:
+ Xn = selfA._getInternalState("Xn")
+ Pn = selfA._getInternalState("Pn")
+ #
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ XaMin = Xn
+ previousJMinimum = numpy.finfo(float).max
+ #
+ for step in range(duration-1):
+ if hasattr(Y,"store"):
+ Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ else:
+ Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ #
+ if U is not None:
+ if hasattr(U,"store") and len(U)>1:
+ Un = numpy.ravel( U[step] ).reshape((-1,1))
+ elif hasattr(U,"store") and len(U)==1:
+ Un = numpy.ravel( U[0] ).reshape((-1,1))
+ else:
+ Un = numpy.ravel( U ).reshape((-1,1))
+ else:
+ Un = None
+ #
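+        # Generation of the 2L+1 sigma points around the current state,
+        # spread along the columns of the matrix square root of Pn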
+ Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
+ Xnp = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
+ nbSpts = 2*Xn.size+1
+ #
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ for point in range(nbSpts):
+ Xnp[:,point] = ApplyBounds( Xnp[:,point], selfA._parameters["Bounds"] )
+ #
+ XEtnnp = []
+ for point in range(nbSpts):
+ if selfA._parameters["EstimationOf"] == "State":
+ XEtnnpi = numpy.asarray( Mm( (Xnp[:,point], Un) ) ).reshape((-1,1))
+                if Cm is not None and Un is not None: # Warning: if Cm is also included in M, it is applied twice!
+ Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
+ XEtnnpi = XEtnnpi + Cm @ Un
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ XEtnnpi = ApplyBounds( XEtnnpi, selfA._parameters["Bounds"] )
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+                # --- > By principle, M = Id, Q = 0
+ XEtnnpi = Xnp[:,point]
+ XEtnnp.append( numpy.ravel(XEtnnpi).reshape((-1,1)) )
+ XEtnnp = numpy.concatenate( XEtnnp, axis=1 )
+ #
+ Xncm = ( XEtnnp * Wm ).sum(axis=1)
+ #
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ Xncm = ApplyBounds( Xncm, selfA._parameters["Bounds"] )
+ #
+ if selfA._parameters["EstimationOf"] == "State": Pnm = Q
+ elif selfA._parameters["EstimationOf"] == "Parameters": Pnm = 0.
+ for point in range(nbSpts):
+            Pnm += Wc[point] * ((XEtnnp[:,point]-Xncm).reshape((-1,1)) * (XEtnnp[:,point]-Xncm))
+ #
+ if selfA._parameters["EstimationOf"] == "Parameters" and selfA._parameters["Bounds"] is not None:
+ Pnmdemi = selfA._parameters["Reconditioner"] * numpy.real(scipy.linalg.sqrtm(Pnm))
+ else:
+ Pnmdemi = numpy.real(scipy.linalg.sqrtm(Pnm))
+ #
+ Xnnp = numpy.hstack([Xncm.reshape((-1,1)), Xncm.reshape((-1,1))+Gamma*Pnmdemi, Xncm.reshape((-1,1))-Gamma*Pnmdemi])
+ #
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ for point in range(nbSpts):
+ Xnnp[:,point] = ApplyBounds( Xnnp[:,point], selfA._parameters["Bounds"] )
+ #
+ Ynnp = []
+ for point in range(nbSpts):
+ if selfA._parameters["EstimationOf"] == "State":
+ Ynnpi = Hm( (Xnnp[:,point], None) )
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+ Ynnpi = Hm( (Xnnp[:,point], Un) )
+ Ynnp.append( numpy.ravel(Ynnpi).reshape((-1,1)) )
+ Ynnp = numpy.concatenate( Ynnp, axis=1 )
+ #
+ Yncm = ( Ynnp * Wm ).sum(axis=1)
+ #
+ Pyyn = R
+ Pxyn = 0.
+ for point in range(nbSpts):
+            Pyyn += Wc[point] * ((Ynnp[:,point]-Yncm).reshape((-1,1)) * (Ynnp[:,point]-Yncm))
+            Pxyn += Wc[point] * ((Xnnp[:,point]-Xncm).reshape((-1,1)) * (Ynnp[:,point]-Yncm))
+ #
+ _Innovation = Ynpu - Yncm.reshape((-1,1))
+ if selfA._parameters["EstimationOf"] == "Parameters":
+            if Cm is not None and Un is not None: # Warning: if Cm is also included in H, it is applied twice!
+ _Innovation = _Innovation - Cm @ Un
+ #
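+        # Kalman gain from the cross and innovation covariances, then
+        # update of the state and of the a posteriori covariance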
+ Kn = Pxyn * Pyyn.I
+ Xn = Xncm.reshape((-1,1)) + Kn * _Innovation
+ Pn = Pnm - Kn * Pyyn * Kn.T
+ #
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ Xn = ApplyBounds( Xn, selfA._parameters["Bounds"] )
+ #
+        Xa = Xn # Pointer assignment, not a copy
+ #--------------------------
+ selfA._setInternalState("Xn", Xn)
+ selfA._setInternalState("Pn", Pn)
+ #--------------------------
+ #
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+        # ---> with analysis
+ selfA.StoredVariables["Analysis"].store( Xa )
+ if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( Hm((Xa, Un)) )
+ if selfA._toStore("InnovationAtCurrentAnalysis"):
+ selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
+        # ---> with current state
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xn )
+ if selfA._toStore("ForecastState"):
+ selfA.StoredVariables["ForecastState"].store( Xncm )
+ if selfA._toStore("ForecastCovariance"):
+ selfA.StoredVariables["ForecastCovariance"].store( Pnm )
+ if selfA._toStore("BMA"):
+ selfA.StoredVariables["BMA"].store( Xncm - Xa )
+ if selfA._toStore("InnovationAtCurrentState"):
+ selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
+ if selfA._toStore("SimulatedObservationAtCurrentState") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yncm )
+        # ---> others
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
+ Jb = float( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
+ Jo = float( 0.5 * _Innovation.T * (RI * _Innovation) )
+ J = Jb + Jo
+ selfA.StoredVariables["CostFunctionJb"].store( Jb )
+ selfA.StoredVariables["CostFunctionJo"].store( Jo )
+ selfA.StoredVariables["CostFunctionJ" ].store( J )
+ #
+ if selfA._toStore("IndexOfOptimum") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
+ if selfA._toStore("IndexOfOptimum"):
+ selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
+ if selfA._toStore("CurrentOptimum"):
+ selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
+ if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ if selfA._toStore("CostFunctionJAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
+ if selfA._parameters["EstimationOf"] == "Parameters" \
+ and J < previousJMinimum:
+ previousJMinimum = J
+ XaMin = Xa
+ if selfA._toStore("APosterioriCovariance"):
+ covarianceXaMin = selfA.StoredVariables["APosterioriCovariance"][-1]
+ #
+    # Additional final storage of the optimum in parameter estimation
+    # ----------------------------------------------------------------
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["Analysis"].store( XaMin )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( covarianceXaMin )
+ if selfA._toStore("BMA"):
+ selfA.StoredVariables["BMA"].store( numpy.ravel(Xb) - numpy.ravel(XaMin) )
+ #
+ return 0
+
+# ==============================================================================
+if __name__ == "__main__":
+ print('\n AUTODIAGNOSTIC\n')
--- /dev/null
+# Modified from version 1.1.0
+"""
+Functions
+---------
+.. autosummary::
+ :toctree: generated/
+
+ fmin_l_bfgs_b
+
+"""
+
+## License for the Python wrapper
+## ==============================
+
+## Copyright (c) 2004 David M. Cooke <cookedm@physics.mcmaster.ca>
+
+## Permission is hereby granted, free of charge, to any person obtaining a
+## copy of this software and associated documentation files (the "Software"),
+## to deal in the Software without restriction, including without limitation
+## the rights to use, copy, modify, merge, publish, distribute, sublicense,
+## and/or sell copies of the Software, and to permit persons to whom the
+## Software is furnished to do so, subject to the following conditions:
+
+## The above copyright notice and this permission notice shall be included in
+## all copies or substantial portions of the Software.
+
+## THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+## IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+## FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+## AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+## LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+## FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+## DEALINGS IN THE SOFTWARE.
+
+## Modifications by Travis Oliphant and Enthought, Inc. for inclusion in SciPy
+
+import numpy as np
+from numpy import array, asarray, float64, int32, zeros
+from scipy.optimize import _lbfgsb
+from scipy.optimize.optimize import (approx_fprime, MemoizeJac, OptimizeResult,
+ _check_unknown_options, wrap_function,
+ _approx_fprime_helper)
+from scipy.sparse.linalg import LinearOperator
+
+__all__ = ['fmin_l_bfgs_b', 'LbfgsInvHessProduct']
+
+
+def fmin_l_bfgs_b(func, x0, fprime=None, args=(),
+ approx_grad=0,
+ bounds=None, m=10, factr=1e7, pgtol=1e-5,
+ epsilon=1e-8,
+ iprint=-1, maxfun=15000, maxiter=15000, disp=None,
+ callback=None, maxls=20):
+ """
+ Minimize a function func using the L-BFGS-B algorithm.
+
+ Parameters
+ ----------
+ func : callable f(x,*args)
+ Function to minimise.
+ x0 : ndarray
+ Initial guess.
+ fprime : callable fprime(x,*args), optional
+ The gradient of `func`. If None, then `func` returns the function
+ value and the gradient (``f, g = func(x, *args)``), unless
+ `approx_grad` is True in which case `func` returns only ``f``.
+ args : sequence, optional
+ Arguments to pass to `func` and `fprime`.
+ approx_grad : bool, optional
+ Whether to approximate the gradient numerically (in which case
+ `func` returns only the function value).
+ bounds : list, optional
+ ``(min, max)`` pairs for each element in ``x``, defining
+ the bounds on that parameter. Use None or +-inf for one of ``min`` or
+ ``max`` when there is no bound in that direction.
+ m : int, optional
+ The maximum number of variable metric corrections
+ used to define the limited memory matrix. (The limited memory BFGS
+ method does not store the full hessian but uses this many terms in an
+ approximation to it.)
+ factr : float, optional
+ The iteration stops when
+ ``(f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps``,
+ where ``eps`` is the machine precision, which is automatically
+ generated by the code. Typical values for `factr` are: 1e12 for
+ low accuracy; 1e7 for moderate accuracy; 10.0 for extremely
+ high accuracy. See Notes for relationship to `ftol`, which is exposed
+ (instead of `factr`) by the `scipy.optimize.minimize` interface to
+ L-BFGS-B.
+ pgtol : float, optional
+ The iteration will stop when
+ ``max{|proj g_i | i = 1, ..., n} <= pgtol``
+ where ``pg_i`` is the i-th component of the projected gradient.
+ epsilon : float, optional
+ Step size used when `approx_grad` is True, for numerically
+ calculating the gradient
+ iprint : int, optional
+ Controls the frequency of output. ``iprint < 0`` means no output;
+ ``iprint = 0`` print only one line at the last iteration;
+ ``0 < iprint < 99`` print also f and ``|proj g|`` every iprint iterations;
+ ``iprint = 99`` print details of every iteration except n-vectors;
+ ``iprint = 100`` print also the changes of active set and final x;
+ ``iprint > 100`` print details of every iteration including x and g.
+ disp : int, optional
+ If zero, then no output. If a positive number, then this over-rides
+ `iprint` (i.e., `iprint` gets the value of `disp`).
+ maxfun : int, optional
+ Maximum number of function evaluations.
+ maxiter : int, optional
+ Maximum number of iterations.
+ callback : callable, optional
+ Called after each iteration, as ``callback(xk)``, where ``xk`` is the
+ current parameter vector.
+ maxls : int, optional
+ Maximum number of line search steps (per iteration). Default is 20.
+
+ Returns
+ -------
+ x : array_like
+ Estimated position of the minimum.
+ f : float
+ Value of `func` at the minimum.
+ d : dict
+ Information dictionary.
+
+ * d['warnflag'] is
+
+ - 0 if converged,
+ - 1 if too many function evaluations or too many iterations,
+ - 2 if stopped for another reason, given in d['task']
+
+    * d['grad'] is the gradient at the minimum (which should be close to 0)
+ * d['funcalls'] is the number of function calls made.
+ * d['nit'] is the number of iterations.
+
+ See also
+ --------
+ minimize: Interface to minimization algorithms for multivariate
+ functions. See the 'L-BFGS-B' `method` in particular. Note that the
+ `ftol` option is made available via that interface, while `factr` is
+ provided via this interface, where `factr` is the factor multiplying
+ the default machine floating-point precision to arrive at `ftol`:
+ ``ftol = factr * numpy.finfo(float).eps``.
+
+ Notes
+ -----
+ License of L-BFGS-B (FORTRAN code):
+
+ The version included here (in fortran code) is 3.0
+ (released April 25, 2011). It was written by Ciyou Zhu, Richard Byrd,
+ and Jorge Nocedal <nocedal@ece.nwu.edu>. It carries the following
+ condition for use:
+
+ This software is freely available, but we expect that all publications
+ describing work using this software, or all commercial products using it,
+ quote at least one of the references given below. This software is released
+ under the BSD License.
+
+ References
+ ----------
+ * R. H. Byrd, P. Lu and J. Nocedal. A Limited Memory Algorithm for Bound
+ Constrained Optimization, (1995), SIAM Journal on Scientific and
+ Statistical Computing, 16, 5, pp. 1190-1208.
+ * C. Zhu, R. H. Byrd and J. Nocedal. L-BFGS-B: Algorithm 778: L-BFGS-B,
+ FORTRAN routines for large scale bound constrained optimization (1997),
+ ACM Transactions on Mathematical Software, 23, 4, pp. 550 - 560.
+ * J.L. Morales and J. Nocedal. L-BFGS-B: Remark on Algorithm 778: L-BFGS-B,
+ FORTRAN routines for large scale bound constrained optimization (2011),
+ ACM Transactions on Mathematical Software, 38, 1.
+
+ """
+ # handle fprime/approx_grad
+ if approx_grad:
+ fun = func
+ jac = None
+ elif fprime is None:
+ fun = MemoizeJac(func)
+ jac = fun.derivative
+ else:
+ fun = func
+ jac = fprime
+
+ # build options
+ if disp is None:
+ disp = iprint
+ opts = {'disp': disp,
+ 'iprint': iprint,
+ 'maxcor': m,
+ 'ftol': factr * np.finfo(float).eps,
+ 'gtol': pgtol,
+ 'eps': epsilon,
+ 'maxfun': maxfun,
+ 'maxiter': maxiter,
+ 'callback': callback,
+ 'maxls': maxls}
+
+ res = _minimize_lbfgsb(fun, x0, args=args, jac=jac, bounds=bounds,
+ **opts)
+ d = {'grad': res['jac'],
+ 'task': res['message'],
+ 'funcalls': res['nfev'],
+ 'nit': res['nit'],
+ 'warnflag': res['status']}
+ f = res['fun']
+ x = res['x']
+
+ return x, f, d
+
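+# A minimal usage sketch (relying on SciPy's standard Rosenbrock helpers):
+#
+#   from scipy.optimize import rosen, rosen_der
+#   x, f, d = fmin_l_bfgs_b(rosen, x0=[1.3, 0.7], fprime=rosen_der,
+#                           bounds=[(0., 2.), (0., 2.)])
+#   assert d['warnflag'] == 0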
+
+def _minimize_lbfgsb(fun, x0, args=(), jac=None, bounds=None,
+ disp=None, maxcor=10, ftol=2.2204460492503131e-09,
+ gtol=1e-5, eps=1e-8, maxfun=15000, maxiter=15000,
+ iprint=-1, callback=None, maxls=20, **unknown_options):
+ """
+ Minimize a scalar function of one or more variables using the L-BFGS-B
+ algorithm.
+
+ Options
+ -------
+ disp : bool
+ Set to True to print convergence messages.
+ maxcor : int
+ The maximum number of variable metric corrections used to
+ define the limited memory matrix. (The limited memory BFGS
+ method does not store the full hessian but uses this many terms
+ in an approximation to it.)
+ ftol : float
+ The iteration stops when ``(f^k -
+ f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= ftol``.
+ gtol : float
+ The iteration will stop when ``max{|proj g_i | i = 1, ..., n}
+ <= gtol`` where ``pg_i`` is the i-th component of the
+ projected gradient.
+ eps : float
+ Step size used for numerical approximation of the jacobian.
+ disp : int
+ Set to True to print convergence messages.
+ maxfun : int
+ Maximum number of function evaluations.
+ maxiter : int
+ Maximum number of iterations.
+ maxls : int, optional
+ Maximum number of line search steps (per iteration). Default is 20.
+
+ Notes
+ -----
+ The option `ftol` is exposed via the `scipy.optimize.minimize` interface,
+ but calling `scipy.optimize.fmin_l_bfgs_b` directly exposes `factr`. The
+ relationship between the two is ``ftol = factr * numpy.finfo(float).eps``.
+ I.e., `factr` multiplies the default machine floating-point precision to
+ arrive at `ftol`.
+
+ """
+ _check_unknown_options(unknown_options)
+ m = maxcor
+ epsilon = eps
+ pgtol = gtol
+ factr = ftol / np.finfo(float).eps
+
+ x0 = asarray(x0).ravel()
+ n, = x0.shape
+
+ if bounds is None:
+ bounds = [(None, None)] * n
+ if len(bounds) != n:
+ raise ValueError('length of x0 != length of bounds')
+ # unbounded variables must use None, not +-inf, for optimizer to work properly
+ bounds = [(None if l == -np.inf else l, None if u == np.inf else u) for l, u in bounds]
+
+ if disp is not None:
+ if disp == 0:
+ iprint = -1
+ else:
+ iprint = disp
+
+ n_function_evals, fun = wrap_function(fun, ())
+ if jac is None:
+ def func_and_grad(x):
+ f = fun(x, *args)
+ g = _approx_fprime_helper(x, fun, epsilon, args=args, f0=f)
+ return f, g
+ else:
+ def func_and_grad(x):
+ f = fun(x, *args)
+ g = jac(x, *args)
+ return f, g
+
+ nbd = zeros(n, int32)
+ low_bnd = zeros(n, float64)
+ upper_bnd = zeros(n, float64)
+ bounds_map = {(None, None): 0,
+ (1, None): 1,
+ (1, 1): 2,
+ (None, 1): 3}
+ for i in range(0, n):
+ l, u = bounds[i]
+ if l is not None:
+ low_bnd[i] = l
+ l = 1
+ if u is not None:
+ upper_bnd[i] = u
+ u = 1
+ nbd[i] = bounds_map[l, u]
+
+ if not maxls > 0:
+ raise ValueError('maxls must be positive.')
+
+ x = array(x0, float64)
+ f = array(0.0, float64)
+ g = zeros((n,), float64)
+ wa = zeros(2*m*n + 5*n + 11*m*m + 8*m, float64)
+ iwa = zeros(3*n, int32)
+ task = zeros(1, 'S60')
+ csave = zeros(1, 'S60')
+ lsave = zeros(4, int32)
+ isave = zeros(44, int32)
+ dsave = zeros(29, float64)
+
+ task[:] = 'START'
+
+ n_iterations = 0
+
+ while 1:
+ # x, f, g, wa, iwa, task, csave, lsave, isave, dsave = \
+ _lbfgsb.setulb(m, x, low_bnd, upper_bnd, nbd, f, g, factr,
+ pgtol, wa, iwa, task, iprint, csave, lsave,
+ isave, dsave, maxls)
+        task_str = task.tobytes()
+ if task_str.startswith(b'FG'):
+ # The minimization routine wants f and g at the current x.
+ # Note that interruptions due to maxfun are postponed
+ # until the completion of the current minimization iteration.
+ # Overwrite f and g:
+ f, g = func_and_grad(x)
+ if n_function_evals[0] > maxfun:
+ task[:] = ('STOP: TOTAL NO. of f AND g EVALUATIONS '
+ 'EXCEEDS LIMIT')
+ elif task_str.startswith(b'NEW_X'):
+ # new iteration
+ n_iterations += 1
+ if callback is not None:
+ callback(np.copy(x))
+
+ if n_iterations >= maxiter:
+ task[:] = 'STOP: TOTAL NO. of ITERATIONS REACHED LIMIT'
+ elif n_function_evals[0] > maxfun:
+ task[:] = ('STOP: TOTAL NO. of f AND g EVALUATIONS '
+ 'EXCEEDS LIMIT')
+ else:
+ break
+
+    task_str = task.tobytes().strip(b'\x00').strip()
+ if task_str.startswith(b'CONV'):
+ warnflag = 0
+ elif n_function_evals[0] > maxfun or n_iterations >= maxiter:
+ warnflag = 1
+ else:
+ warnflag = 2
+
+ # These two portions of the workspace are described in the mainlb
+ # subroutine in lbfgsb.f. See line 363.
+ s = wa[0: m*n].reshape(m, n)
+ y = wa[m*n: 2*m*n].reshape(m, n)
+
+ # See lbfgsb.f line 160 for this portion of the workspace.
+ # isave(31) = the total number of BFGS updates prior the current iteration;
+ n_bfgs_updates = isave[30]
+
+ n_corrs = min(n_bfgs_updates, maxcor)
+ hess_inv = LbfgsInvHessProduct(s[:n_corrs], y[:n_corrs])
+
+ return OptimizeResult(fun=f, jac=g, nfev=n_function_evals[0],
+ nit=n_iterations, status=warnflag, message=task_str,
+ x=x, success=(warnflag == 0), hess_inv=hess_inv)
+
+
+class LbfgsInvHessProduct(LinearOperator):
+ """Linear operator for the L-BFGS approximate inverse Hessian.
+
+ This operator computes the product of a vector with the approximate inverse
+ of the Hessian of the objective function, using the L-BFGS limited
+ memory approximation to the inverse Hessian, accumulated during the
+ optimization.
+
+ Objects of this class implement the ``scipy.sparse.linalg.LinearOperator``
+ interface.
+
+ Parameters
+ ----------
+ sk : array_like, shape=(n_corr, n)
+ Array of `n_corr` most recent updates to the solution vector.
+ (See [1]).
+ yk : array_like, shape=(n_corr, n)
+ Array of `n_corr` most recent updates to the gradient. (See [1]).
+
+ References
+ ----------
+ .. [1] Nocedal, Jorge. "Updating quasi-Newton matrices with limited
+ storage." Mathematics of computation 35.151 (1980): 773-782.
+
+ """
+ def __init__(self, sk, yk):
+ """Construct the operator."""
+ if sk.shape != yk.shape or sk.ndim != 2:
+ raise ValueError('sk and yk must have matching shape, (n_corrs, n)')
+ n_corrs, n = sk.shape
+
+ super(LbfgsInvHessProduct, self).__init__(
+ dtype=np.float64, shape=(n, n))
+
+ self.sk = sk
+ self.yk = yk
+ self.n_corrs = n_corrs
+ self.rho = 1 / np.einsum('ij,ij->i', sk, yk)
+
+ def _matvec(self, x):
+ """Efficient matrix-vector multiply with the BFGS matrices.
+
+ This calculation is described in Section (4) of [1].
+
+ Parameters
+ ----------
+ x : ndarray
+ An array with shape (n,) or (n,1).
+
+ Returns
+ -------
+ y : ndarray
+ The matrix-vector product
+
+ """
+ s, y, n_corrs, rho = self.sk, self.yk, self.n_corrs, self.rho
+ q = np.array(x, dtype=self.dtype, copy=True)
+ if q.ndim == 2 and q.shape[1] == 1:
+ q = q.reshape(-1)
+
+ alpha = np.zeros(n_corrs)
+
+ for i in range(n_corrs-1, -1, -1):
+ alpha[i] = rho[i] * np.dot(s[i], q)
+ q = q - alpha[i]*y[i]
+
+ r = q
+ for i in range(n_corrs):
+ beta = rho[i] * np.dot(y[i], r)
+ r = r + s[i] * (alpha[i] - beta)
+
+ return r
+
+ def todense(self):
+ """Return a dense array representation of this operator.
+
+ Returns
+ -------
+ arr : ndarray, shape=(n, n)
+ An array with the same shape and containing
+ the same data represented by this `LinearOperator`.
+
+ """
+ s, y, n_corrs, rho = self.sk, self.yk, self.n_corrs, self.rho
+ I = np.eye(*self.shape, dtype=self.dtype)
+ Hk = I
+
+ for i in range(n_corrs):
+ A1 = I - s[i][:, np.newaxis] * y[i][np.newaxis, :] * rho[i]
+ A2 = I - y[i][:, np.newaxis] * s[i][np.newaxis, :] * rho[i]
+
+ Hk = np.dot(A1, np.dot(Hk, A2)) + (rho[i] * s[i][:, np.newaxis] *
+ s[i][np.newaxis, :])
+ return Hk
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2022 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+__doc__ = """
+ MMQR
+"""
+__author__ = "Jean-Philippe ARGAUD"
+
+import sys, math, numpy
+from daCore.PlatformInfo import PlatformInfo
+mpr = PlatformInfo().MachinePrecision()
+mfp = PlatformInfo().MaximumPrecision()
+
+# ==============================================================================
+def mmqr(
+ func = None,
+ x0 = None,
+ fprime = None,
+ bounds = None,
+ quantile = 0.5,
+ maxfun = 15000,
+ toler = 1.e-06,
+ y = None,
+ ):
+ """
+    Computer implementation of the MMQR algorithm, based on the publication:
+ David R. Hunter, Kenneth Lange, "Quantile Regression via an MM Algorithm",
+ Journal of Computational and Graphical Statistics, 9, 1, pp.60-77, 2000.
+ """
+ #
+    # Retrieval of initial data and information
+    # ------------------------------------------
+ variables = numpy.ravel( x0 )
+ mesures = numpy.ravel( y )
+ increment = sys.float_info[0]
+ p = variables.size
+ n = mesures.size
+ quantile = float(quantile)
+ #
+    # Computation of the MM parameters
+    # --------------------------------
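+    # (epsilon is the perturbation that smooths the absolute value of the
+    # residuals near zero, so that the quantile criterion can be majorized
+    # by a differentiable surrogate, following Hunter & Lange, 2000)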
+ tn = float(toler) / n
+ e0 = -tn / math.log(tn)
+ epsilon = (e0-tn)/(1+math.log(e0))
+ #
+    # Initialization computations
+    # ---------------------------
+ residus = mesures - numpy.ravel( func( variables ) )
+ poids = 1./(epsilon+numpy.abs(residus))
+ veps = 1. - 2. * quantile - residus * poids
+ lastsurrogate = -numpy.sum(residus*veps) - (1.-2.*quantile)*numpy.sum(residus)
+ iteration = 0
+ #
+    # Iterative search
+    # ----------------
+ while (increment > toler) and (iteration < maxfun) :
+ iteration += 1
+ #
+ Derivees = numpy.array(fprime(variables))
+ Derivees = Derivees.reshape(n,p) # ADAO & check shape
+ DeriveesT = Derivees.transpose()
+        M = numpy.dot( DeriveesT , (poids.reshape((-1,1)) * Derivees) )
+ SM = numpy.transpose(numpy.dot( DeriveesT , veps ))
+ step = - numpy.linalg.lstsq( M, SM, rcond=-1 )[0]
+ #
+ variables = variables + step
+ if bounds is not None:
+            # Warning: avoid an infinite loop if an interval is too small
+            while( (variables < numpy.ravel(numpy.asarray(bounds)[:,0])).any() or (variables > numpy.ravel(numpy.asarray(bounds)[:,1])).any() ):
+ step = step/2.
+ variables = variables - step
+ residus = mesures - numpy.ravel( func(variables) )
+ surrogate = numpy.sum(residus**2 * poids) + (4.*quantile-2.) * numpy.sum(residus)
+ #
+ while ( (surrogate > lastsurrogate) and ( max(list(numpy.abs(step))) > 1.e-16 ) ) :
+ step = step/2.
+ variables = variables - step
+ residus = mesures - numpy.ravel( func(variables) )
+ surrogate = numpy.sum(residus**2 * poids) + (4.*quantile-2.) * numpy.sum(residus)
+ #
+ increment = lastsurrogate-surrogate
+ poids = 1./(epsilon+numpy.abs(residus))
+ veps = 1. - 2. * quantile - residus * poids
+ lastsurrogate = -numpy.sum(residus * veps) - (1.-2.*quantile)*numpy.sum(residus)
+ #
+    # Discrepancy measure
+    # -------------------
+ Ecart = quantile * numpy.sum(residus) - numpy.sum( residus[residus<0] )
+ #
+ return variables, Ecart, [n,p,iteration,increment,0]
+
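+# A minimal usage sketch on a hypothetical linear model (the names below are
+# illustrative only; the calling algorithm normally supplies the func/fprime
+# pair built from its observation operator):
+#
+#   H = numpy.array([[1.], [2.], [3.]])
+#   xq, ecart, infos = mmqr(
+#       func     = lambda x: H @ x,
+#       x0       = [0.],
+#       fprime   = lambda x: H,
+#       quantile = 0.5,
+#       y        = numpy.array([1.1, 2.3, 2.9]),
+#       )
+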
+# ==============================================================================
+if __name__ == "__main__":
+ print('\n AUTODIAGNOSTIC\n')
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2022 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+__doc__ = """
+ Unscented Kalman Filter
+"""
+__author__ = "Jean-Philippe ARGAUD"
+
+import math, numpy, scipy, scipy.linalg
+from daCore.PlatformInfo import PlatformInfo
+mpr = PlatformInfo().MachinePrecision()
+mfp = PlatformInfo().MaximumPrecision()
+
+# ==============================================================================
+def uskf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q):
+ """
+ Unscented Kalman Filter
+ """
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ selfA._parameters["StoreInternalVariables"] = True
+ #
+ L = Xb.size
+ Alpha = selfA._parameters["Alpha"]
+ Beta = selfA._parameters["Beta"]
+ if selfA._parameters["Kappa"] == 0:
+ if selfA._parameters["EstimationOf"] == "State":
+ Kappa = 0
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+ Kappa = 3 - L
+ else:
+ Kappa = selfA._parameters["Kappa"]
+ Lambda = float( Alpha**2 ) * ( L + Kappa ) - L
+ Gamma = math.sqrt( L + Lambda )
+ #
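+    # Weights of the unscented transform: Wm for the means, Wc for the
+    # covariances; the central sigma point carries the specific weights
+    # Wm[0] and Wc[0], the 2L other points share the weight 1/(2*(L+Lambda))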
+ Ww = []
+ Ww.append( 0. )
+ for i in range(2*L):
+ Ww.append( 1. / (2.*(L + Lambda)) )
+ #
+ Wm = numpy.array( Ww )
+ Wm[0] = Lambda / (L + Lambda)
+ Wc = numpy.array( Ww )
+ Wc[0] = Lambda / (L + Lambda) + (1. - Alpha**2 + Beta)
+ #
+    # Operators
+ Hm = HO["Direct"].appliedControledFormTo
+ #
+ if selfA._parameters["EstimationOf"] == "State":
+ Mm = EM["Direct"].appliedControledFormTo
+ #
+ if CM is not None and "Tangent" in CM and U is not None:
+ Cm = CM["Tangent"].asMatrix(Xb)
+ else:
+ Cm = None
+ #
+    # Observation duration and sizes
+ if hasattr(Y,"stepnumber"):
+ duration = Y.stepnumber()
+ __p = numpy.cumprod(Y.shape())[-1]
+ else:
+ duration = 2
+ __p = numpy.array(Y).size
+ #
+    # Precompute the inverses of B and R
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
+ BI = B.getI()
+ RI = R.getI()
+ #
+ __n = Xb.size
+ nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
+ #
+ if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ Xn = Xb
+ if hasattr(B,"asfullmatrix"):
+ Pn = B.asfullmatrix(__n)
+ else:
+ Pn = B
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["Analysis"].store( Xb )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
+ elif selfA._parameters["nextStep"]:
+ Xn = selfA._getInternalState("Xn")
+ Pn = selfA._getInternalState("Pn")
+ #
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ XaMin = Xn
+ previousJMinimum = numpy.finfo(float).max
+ #
+ for step in range(duration-1):
+ if hasattr(Y,"store"):
+ Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ else:
+ Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ #
+ if U is not None:
+ if hasattr(U,"store") and len(U)>1:
+ Un = numpy.ravel( U[step] ).reshape((-1,1))
+ elif hasattr(U,"store") and len(U)==1:
+ Un = numpy.ravel( U[0] ).reshape((-1,1))
+ else:
+ Un = numpy.ravel( U ).reshape((-1,1))
+ else:
+ Un = None
+ #
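+        # Generation of the 2L+1 sigma points around the current state,
+        # spread along the columns of the matrix square root of Pn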
+ Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
+ Xnp = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
+ nbSpts = 2*Xn.size+1
+ #
+ XEtnnp = []
+ for point in range(nbSpts):
+ if selfA._parameters["EstimationOf"] == "State":
+ XEtnnpi = numpy.asarray( Mm( (Xnp[:,point], Un) ) ).reshape((-1,1))
+                if Cm is not None and Un is not None: # Warning: if Cm is also included in M, it is applied twice!
+ Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
+ XEtnnpi = XEtnnpi + Cm @ Un
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+                # --- > By principle, M = Id, Q = 0
+ XEtnnpi = Xnp[:,point]
+ XEtnnp.append( numpy.ravel(XEtnnpi).reshape((-1,1)) )
+ XEtnnp = numpy.concatenate( XEtnnp, axis=1 )
+ #
+ Xncm = ( XEtnnp * Wm ).sum(axis=1)
+ #
+ if selfA._parameters["EstimationOf"] == "State": Pnm = Q
+ elif selfA._parameters["EstimationOf"] == "Parameters": Pnm = 0.
+ for point in range(nbSpts):
+            Pnm += Wc[point] * ((XEtnnp[:,point]-Xncm).reshape((-1,1)) * (XEtnnp[:,point]-Xncm))
+ #
+ Pnmdemi = numpy.real(scipy.linalg.sqrtm(Pnm))
+ #
+ Xnnp = numpy.hstack([Xncm.reshape((-1,1)), Xncm.reshape((-1,1))+Gamma*Pnmdemi, Xncm.reshape((-1,1))-Gamma*Pnmdemi])
+ #
+ Ynnp = []
+ for point in range(nbSpts):
+ if selfA._parameters["EstimationOf"] == "State":
+ Ynnpi = Hm( (Xnnp[:,point], None) )
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+ Ynnpi = Hm( (Xnnp[:,point], Un) )
+ Ynnp.append( numpy.ravel(Ynnpi).reshape((-1,1)) )
+ Ynnp = numpy.concatenate( Ynnp, axis=1 )
+ #
+ Yncm = ( Ynnp * Wm ).sum(axis=1)
+ #
+ Pyyn = R
+ Pxyn = 0.
+ for point in range(nbSpts):
+            Pyyn += Wc[point] * ((Ynnp[:,point]-Yncm).reshape((-1,1)) * (Ynnp[:,point]-Yncm))
+            Pxyn += Wc[point] * ((Xnnp[:,point]-Xncm).reshape((-1,1)) * (Ynnp[:,point]-Yncm))
+ #
+ _Innovation = Ynpu - Yncm.reshape((-1,1))
+ if selfA._parameters["EstimationOf"] == "Parameters":
+            if Cm is not None and Un is not None: # Warning: if Cm is also included in H, it is applied twice!
+ _Innovation = _Innovation - Cm @ Un
+ #
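+        # Kalman gain from the cross and innovation covariances, then
+        # update of the state and of the a posteriori covariance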
+ Kn = Pxyn * Pyyn.I
+ Xn = Xncm.reshape((-1,1)) + Kn * _Innovation
+ Pn = Pnm - Kn * Pyyn * Kn.T
+ #
+        Xa = Xn # Pointer assignment, not a copy
+ #--------------------------
+ selfA._setInternalState("Xn", Xn)
+ selfA._setInternalState("Pn", Pn)
+ #--------------------------
+ #
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+        # ---> with analysis
+ selfA.StoredVariables["Analysis"].store( Xa )
+ if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( Hm((Xa, Un)) )
+ if selfA._toStore("InnovationAtCurrentAnalysis"):
+ selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
+        # ---> with current state
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xn )
+ if selfA._toStore("ForecastState"):
+ selfA.StoredVariables["ForecastState"].store( Xncm )
+ if selfA._toStore("ForecastCovariance"):
+ selfA.StoredVariables["ForecastCovariance"].store( Pnm )
+ if selfA._toStore("BMA"):
+ selfA.StoredVariables["BMA"].store( Xncm - Xa )
+ if selfA._toStore("InnovationAtCurrentState"):
+ selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
+ if selfA._toStore("SimulatedObservationAtCurrentState") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yncm )
+        # ---> others
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
+ Jb = float( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
+ Jo = float( 0.5 * _Innovation.T * (RI * _Innovation) )
+ J = Jb + Jo
+ selfA.StoredVariables["CostFunctionJb"].store( Jb )
+ selfA.StoredVariables["CostFunctionJo"].store( Jo )
+ selfA.StoredVariables["CostFunctionJ" ].store( J )
+ #
+ if selfA._toStore("IndexOfOptimum") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
+ if selfA._toStore("IndexOfOptimum"):
+ selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
+ if selfA._toStore("CurrentOptimum"):
+ selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
+ if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ if selfA._toStore("CostFunctionJAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
+ if selfA._parameters["EstimationOf"] == "Parameters" \
+ and J < previousJMinimum:
+ previousJMinimum = J
+ XaMin = Xa
+ if selfA._toStore("APosterioriCovariance"):
+ covarianceXaMin = selfA.StoredVariables["APosterioriCovariance"][-1]
+ #
+    # Additional final storage of the optimum in parameter estimation
+    # ----------------------------------------------------------------
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["Analysis"].store( XaMin )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( covarianceXaMin )
+ if selfA._toStore("BMA"):
+ selfA.StoredVariables["BMA"].store( numpy.ravel(Xb) - numpy.ravel(XaMin) )
+ #
+ return 0
+
+# ==============================================================================
+if __name__ == "__main__":
+ print('\n AUTODIAGNOSTIC\n')
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
+import logging, numpy, scipy.optimize
from daCore import BasicObjects
-import numpy, scipy.optimize
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
-from daCore import BasicObjects
import numpy
+from daCore import BasicObjects
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, logging
+import logging
from daCore import BasicObjects, PlatformInfo
import numpy, copy
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
-if sys.version_info.major > 2:
- unicode = str
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
__marge = 5*u" "
_p = self._parameters["NumberOfPrintedDigits"]
if len(self._parameters["ResultTitle"]) > 0:
- __rt = unicode(self._parameters["ResultTitle"])
+ __rt = str(self._parameters["ResultTitle"])
msgs = u"\n"
msgs += __marge + "====" + "="*len(__rt) + "====\n"
msgs += __marge + " " + __rt + "\n"
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, logging
+import math, numpy
from daCore import BasicObjects, PlatformInfo
-import numpy, math
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
-if sys.version_info.major > 2:
- unicode = str
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
NormeX = numpy.linalg.norm( X )
NormeFX = numpy.linalg.norm( FX )
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( numpy.ravel(Xn) )
+ self.StoredVariables["CurrentState"].store( numpy.ravel(X) )
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(FX) )
#
We take dX0 = Normal(0,X) and dX = Alpha*dX0. F is the computational code.\n""" + __precision
#
if len(self._parameters["ResultTitle"]) > 0:
- __rt = unicode(self._parameters["ResultTitle"])
+ __rt = str(self._parameters["ResultTitle"])
msgs = u"\n"
msgs += __marge + "====" + "="*len(__rt) + "====\n"
msgs += __marge + " " + __rt + "\n"
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, logging
import numpy
from daCore import BasicObjects
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, logging
+import math, numpy
from daCore import BasicObjects, PlatformInfo
-import numpy, math
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
-if sys.version_info.major > 2:
- unicode = str
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
We take dX0 = Normal(0,X) and dX = Alpha*dX0. F is the computational code.\n""" + __precision
#
if len(self._parameters["ResultTitle"]) > 0:
- __rt = unicode(self._parameters["ResultTitle"])
+ __rt = str(self._parameters["ResultTitle"])
msgs = u"\n"
msgs += __marge + "====" + "="*len(__rt) + "====\n"
msgs += __marge + " " + __rt + "\n"
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, logging
-from daCore import BasicObjects, PlatformInfo
-import numpy, copy
+import logging, numpy
+from daCore import BasicObjects
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
-from daCore import BasicObjects
import numpy
+from daCore import BasicObjects
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, logging
+import logging
from daCore import BasicObjects, PlatformInfo
import numpy, copy
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
-if sys.version_info.major > 2:
- unicode = str
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
__marge = 5*u" "
_p = self._parameters["NumberOfPrintedDigits"]
if len(self._parameters["ResultTitle"]) > 0:
- __rt = unicode(self._parameters["ResultTitle"])
+ __rt = str(self._parameters["ResultTitle"])
msgs = u"\n"
msgs += __marge + "====" + "="*len(__rt) + "====\n"
msgs += __marge + " " + __rt + "\n"
msgs += (" -----------------------------\n")
msgs += (" Characteristics of input vector X, internally converted:\n")
msgs += (" Type...............: %s\n")%type( Xn )
- msgs += (" Lenght of vector...: %i\n")%max(numpy.matrix( Xn ).shape)
+ msgs += (" Lenght of vector...: %i\n")%max(numpy.asarray( Xn ).shape)
msgs += (" Minimum value......: %."+str(_p)+"e\n")%numpy.min( Xn )
msgs += (" Maximum value......: %."+str(_p)+"e\n")%numpy.max( Xn )
msgs += (" Mean of vector.....: %."+str(_p)+"e\n")%numpy.mean( Xn, dtype=mfp )
msgs = ("===> Information after evaluation:\n")
msgs += ("\n Characteristics of simulated output vector Y=H(X), to compare to others:\n")
msgs += (" Type...............: %s\n")%type( Yn )
- msgs += (" Lenght of vector...: %i\n")%max(numpy.matrix( Yn ).shape)
+ msgs += (" Lenght of vector...: %i\n")%max(numpy.asarray( Yn ).shape)
msgs += (" Minimum value......: %."+str(_p)+"e\n")%numpy.min( Yn )
msgs += (" Maximum value......: %."+str(_p)+"e\n")%numpy.max( Yn )
msgs += (" Mean of vector.....: %."+str(_p)+"e\n")%numpy.mean( Yn, dtype=mfp )
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
-from daCore import BasicObjects, NumericObjects
import numpy
+from daCore import BasicObjects, NumericObjects
+from daAlgorithms.Atoms import mmqr
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
# Minimisation de la fonctionnelle
# --------------------------------
if self._parameters["Minimizer"] == "MMQR":
- Minimum, J_optimal, Informations = NumericObjects.mmqr(
+ Minimum, J_optimal, Informations = mmqr.mmqr(
func = CostFunction,
x0 = Xini,
fprime = GradientOfCostFunction,
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import copy, logging, itertools
-import numpy
+import numpy, logging, itertools
from daCore import BasicObjects
from daCore.PlatformInfo import PlatformInfo
mfp = PlatformInfo().MaximumPrecision()
raise ValueError("For dimension %i, the distribution name \"%s\" is not allowed, please choose in ['normal'(mean,std),'lognormal'(mean,sigma),'uniform'(low,high),'weibull'(shape)]"%(i,dim[0]))
else:
distribution = getattr(numpy.random,str(dim[0]),'normal')
- parameters = dim[1]
coordinatesList.append(distribution(*dim[1], size=max(1,int(dim[2]))))
sampleList = itertools.product(*coordinatesList)
else:
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
-from daCore import BasicObjects
import numpy
+from daCore import BasicObjects
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, logging
+import numpy
from daCore import BasicObjects, PlatformInfo
-import numpy, math
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
-if sys.version_info.major > 2:
- unicode = str
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
We take dX0 = Normal(0,X) and dX = Alpha*dX0. F is the computational code.\n""" + __precision
#
if len(self._parameters["ResultTitle"]) > 0:
- __rt = unicode(self._parameters["ResultTitle"])
+ __rt = str(self._parameters["ResultTitle"])
msgs = u"\n"
msgs += __marge + "====" + "="*len(__rt) + "====\n"
msgs += __marge + " " + __rt + "\n"
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
-from daCore import BasicObjects, NumericObjects
+from daCore import BasicObjects
+from daAlgorithms.Atoms import c2ukf, uskf
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
"2UKF",
],
)
- self.defineRequiredParameter(
- name = "ConstrainedBy",
- default = "EstimateProjection",
- typecast = str,
- message = "Prise en compte des contraintes",
- listval = ["EstimateProjection"],
- )
self.defineRequiredParameter(
name = "EstimationOf",
default = "State",
message = "Estimation d'etat ou de parametres",
listval = ["State", "Parameters"],
)
+ self.defineRequiredParameter(
+ name = "ConstrainedBy",
+ default = "EstimateProjection",
+ typecast = str,
+ message = "Prise en compte des contraintes",
+ listval = ["EstimateProjection"],
+ )
self.defineRequiredParameter(
name = "Alpha",
default = 1.,
# Default UKF
#--------------------------
if self._parameters["Variant"] == "UKF":
- NumericObjects.uskf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
+ uskf.uskf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
#--------------------------
# Default 2UKF
elif self._parameters["Variant"] == "2UKF":
- NumericObjects.c2ukf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
+ c2ukf.c2ukf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
#--------------------------
else:
else:
raise ValueError("the variable named '%s' is not allowed."%str(Concept))
except Exception as e:
- if isinstance(e, SyntaxError): msg = "at %s: %s"%(e.offset, e.text)
+ if isinstance(e, SyntaxError): msg = " at %s: %s"%(e.offset, e.text)
else: msg = ""
raise ValueError(("during settings, the following error occurs:\n"+\
"\n%s%s\n\nSee also the potential messages, "+\
else:
return False
def is_not_local_file(self):
- if not os.path.isfile(os.path.realpath(self.__url)):
- return True
- else:
- return False
+ return not self.is_local_file()
def raise_error_if_not_local_file(self):
- if not os.path.isfile(os.path.realpath(self.__url)):
+ if self.is_not_local_file():
raise ValueError("The name or the url of the file object doesn't seem to exist. The given name is:\n \"%s\""%str(self.__url))
else:
return False
else:
return False
def is_not_local_dir(self):
- if not os.path.isdir(self.__url):
- return True
- else:
- return False
+ return not self.is_local_dir()
def raise_error_if_not_local_dir(self):
- if not os.path.isdir(self.__url):
+ if self.is_not_local_dir():
raise ValueError("The name or the url of the directory object doesn't seem to exist. The given name is:\n \"%s\""%str(self.__url))
else:
return False