.. [Johnson08] Johnson S. G., *The NLopt nonlinear-optimization package*, http://ab-initio.mit.edu/nlopt
+.. [Julier95] Julier S., Uhlmann J., Durrant-Whyte H., *A new approach for filtering nonlinear systems*, in: Proceedings of the 1995 American Control Conference, IEEE, 1995
+
+.. [Julier00] Julier S., Uhlmann J., Durrant-Whyte H., *A new method for the nonlinear transformation of means and covariances in filters and estimators*, IEEE Trans. Automat. Control., 45, pp.477–482, 2000
+
+.. [Julier07] Julier S., Laviola J., *On Kalman filtering with nonlinear equality constraints*, IEEE Trans. Signal Process., 55(6), pp.2774–2784, 2007
+
.. [Kalnay03] Kalnay E., *Atmospheric Modeling, Data Assimilation and Predictability*, Cambridge University Press, 2003
.. [Koenker00] Koenker R., Hallock K. F., *Quantile Regression: an Introduction*, 2000, http://www.econ.uiuc.edu/~roger/research/intro/intro.html
.. [Nelder65] Nelder J. A., Mead R., *A simplex method for function minimization*, The Computer Journal, 7, pp.308-313, 1965
-.. [NumPy20] Harris C.R. et al., *Array programming with NumPy*, Nature 585, pp.357–362, 2020, https://numpy.org/
+.. [NumPy20] Harris C. R. et al., *Array programming with NumPy*, Nature 585, pp.357–362, 2020, https://numpy.org/
+
+.. [Papakonstantinou22] Papakonstantinou K. G., Amir M., Warn G. P., *A Scaled Spherical Simplex Filter (S3F) with a decreased n+2 sigma points set size and equivalent 2n+1 Unscented Kalman Filter (UKF) accuracy*, Mechanical Systems and Signal Processing, 163, 107433, 2022
.. [Powell64] Powell M. J. D., *An efficient method for finding the minimum of a function of several variables without calculating derivatives*, Computer Journal, 7(2), pp.155-162, 1964
.. [Tikhonov77] Tikhonov A. N., Arsenin V. Y., *Solution of Ill-posed Problems*, Winston & Sons, 1977
+.. [Wan00] Wan E. A., van der Merwe R., *The Unscented Kalman Filter for Nonlinear Estimation*, in: Adaptive Systems for Signal Processing, Communications, and Control Symposium, IEEE, 2000
+
.. [Welch06] Welch G., Bishop G., *An Introduction to the Kalman Filter*, University of North Carolina at Chapel Hill, Department of Computer Science, TR 95-041, 2006, http://www.cs.unc.edu/~welch/media/pdf/kalman_intro.pdf
.. [WikipediaDA] Wikipedia, *Data assimilation*, http://en.wikipedia.org/wiki/Data_assimilation
For all the formulas, with :math:`\mathbf{x}` the current verification point,
one takes :math:`\mathbf{dx}_0=Normal(0,\mathbf{x})` and
:math:`\mathbf{dx}=\alpha_0*\mathbf{dx}_0` with :math:`\alpha_0` a user scaling
of the initial perturbation, defaulting to 1. :math:`F` is the calculation
-code.
+code (given here by the user by using the observation operator command
+"*ObservationOperator*").
"Taylor" residue
****************
"ExcludedPoints",
"OptimalPoints",
"ReducedBasis",
+ "ReducedBasisMus",
"Residus",
"SingularValues",
].
.. include:: snippets/ReducedBasis.rst
+.. include:: snippets/ReducedBasisMus.rst
+
.. include:: snippets/Residus.rst
.. include:: snippets/SingularValues.rst
.. include:: snippets/Header2Algo01.rst
This algorithm realizes an estimation of the state of a system by minimization
-of a cost function :math:`J` by using an evolutionary strategy of particle
-swarm. It is a method that does not use the derivatives of the cost function.
-It is based on the evolution of a population (called a "swarm") of states (each
-state is called a "particle" or an "insect"). It falls in the same category
-than the
+without gradient of a cost function :math:`J` by using a particle swarm
+evolutionary strategy. It is a method that does not use the derivatives of the
+cost function. It falls in the same category as the
:ref:`section_ref_algorithm_DerivativeFreeOptimization`, the
:ref:`section_ref_algorithm_DifferentialEvolution` or the
:ref:`section_ref_algorithm_TabuSearch`.
-This is a mono-objective optimization method, allowing for global minimum search
-of a general error function :math:`J` of type :math:`L^1`, :math:`L^2` or
-:math:`L^{\infty}`, with or without weights, as described in the section for
+This is a mono-objective optimization method, allowing for global minimum
+search of a general error function :math:`J` of type :math:`L^1`, :math:`L^2`
+or :math:`L^{\infty}`, with or without weights, as described in the section for
:ref:`section_theory_optimization`. The default error function is the augmented
weighted least squares function, classically used in data assimilation.
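+
+For the record, and up to a conventional multiplicative factor of 1/2, this
+default augmented weighted least squares function can be written, with
+:math:`\mathbf{x}^b` the background, :math:`\mathbf{y}^o` the observations,
+and :math:`\mathbf{B}` and :math:`\mathbf{R}` the associated error
+covariances:
+
+.. math:: J(\mathbf{x})=(\mathbf{x}-\mathbf{x}^b)^T.\mathbf{B}^{-1}.(\mathbf{x}-\mathbf{x}^b)+(\mathbf{y}^o-H(\mathbf{x}))^T.\mathbf{R}^{-1}.(\mathbf{y}^o-H(\mathbf{x}))
+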
-There exists various variants of this algorithm. The following stable and
-robust formulations are proposed here:
+It is based on the evolution of a population (called a "swarm") of states (each
+state is called a "particle" or an "insect"). There exist various variants of
+this algorithm. The following stable and robust formulations are proposed here:
.. index::
pair: Variant ; CanonicalPSO
pair: Variant ; SPSO-2011
pair: Variant ; AIS PSO
pair: Variant ; APSO
+ pair: Variant ; SPSO-2011-SIS
+ pair: Variant ; SPSO-2011-PSIS
- "CanonicalPSO" (Canonical Particle Swarm Optimization, see
[ZambranoBigiarini13]_), classical algorithm called "canonical" of particle
"Asynchronous Iteration Strategy") or "APSO" (for "Advanced Particle Swarm
Optimization") because it incorporates evolutionary updating of the best
elements, leading to intrinsically improved convergence of the algorithm.
+- "SPSO-2011-SIS" (Standard Particle Swarm Optimisation 2011 with Synchronous
+ Iteration Strategy), very similar to the 2011 reference algorithm, and with
+ a synchronous particle update, called "SIS",
+- "SPSO-2011-PSIS" (Standard Particle Swarm Optimisation 2011 with Parallel
+ Synchronous Iteration Strategy), similar to the "SPSO-2011-SIS" algorithm
+ with synchronous updating and parallelization, known as "PSIS", of the
+ particles.
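+
+As a minimal TUI sketch of selecting one of these variants (the data and the
+identity observation operator are purely illustrative):
+::
+
+    from adao import adaoBuilder
+    case = adaoBuilder.New()
+    case.setAlgorithmParameters(
+        Algorithm = "ParticleSwarmOptimization",
+        Parameters = {"Variant": "SPSO-2011", "MaximumNumberOfIterations": 50},
+    )
+    def simulation( x ):            # illustrative identity operator
+        return x
+    case.setBackground(Vector = [0., 0., 0.])
+    case.setBackgroundError(ScalarSparseMatrix = 1.)
+    case.setObservation(Vector = [0.5, 1.5, 2.5])
+    case.setObservationError(ScalarSparseMatrix = 1.)
+    case.setObservationOperator(OneFunction = simulation)
+    case.execute()
+    print(case.get("Analysis")[-1])
+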
The following are a few practical suggestions for the effective use of these
algorithms:
- The recommended variant of this algorithm is the "SPSO-2011" even if the
- "CanonicalPSO" algorithm remains by default the more robust one.
+ "CanonicalPSO" algorithm remains by default the more robust one. If the state
+ evaluation can be carried out in parallel, the "SPSO-2011-PSIS" algorithm can
+ be used, even if its convergence is sometimes a little less efficient.
- The number of particles or insects usually recommended varies between 40 and
100 depending on the algorithm, more or less independently of the dimension
of the state space. Usually, the best performances are obtained for
.. include:: snippets/Header2Algo01.rst
This algorithm realizes an estimation of the state of a dynamic system by a
-"unscented" Kalman Filter, avoiding to have to perform the tangent and adjoint
-operators for the observation and evolution operators, as in the simple or
-extended Kalman filter.
+Kalman Filter using an "unscented" transform and a sampling by "sigma" points,
+avoiding the need to compute the tangent and adjoint operators for the
+observation and evolution operators, as in the simple or extended Kalman
+filters.
It applies to non-linear observation and incremental evolution (process)
operators with excellent robustness and performance qualities. It can be
to evaluate on small systems. One can verify the linearity of the operators
with the help of the :ref:`section_ref_algorithm_LinearityTest`.
+There exist various variants of this algorithm. The following stable and
+robust formulations are proposed here:
+
.. index::
pair: Variant ; UKF
+ pair: Variant ; S3F
+ pair: Variant ; CUKF
+ pair: Variant ; CS3F
pair: Variant ; 2UKF
-A difference is made between the "unscented" Kalman filter taking into account
-bounds on the states (the variant named "2UKF", which is recommended and used
-by default), and the canonical "unscented" Kalman filter conducted without any
-constraint (the variant named "UKF", which is not recommended).
+- "UKF" (Unscented Kalman Filter, see [Julier95]_, [Julier00]_, [Wan00]_),
+ original and reference canonical algorithm, highly robust and efficient,
+- "CUKF", also named "2UKF" (Constrained Unscented Kalman Filter, see
+ [Julier07]_), inequality or boundary constrained version of the algorithm
+ "UKF",
+- "S3F" (Scaled Spherical Simplex Filter, see [Papakonstantinou22]_),
+ improved algorithm, reducing the number of sampling (sigma) points to achieve
+ the same quality as the canonical "UKF" variant,
+- "CS3F" (Constrained Scaled Spherical Simplex Filter), inequality or boundary
+ constrained version of the algorithm "S3F".
+
+The following are a few practical suggestions for the effective use of these
+algorithms:
+
+- The recommended variant of this algorithm is the "S3F" even if the canonical
+ "UKF" algorithm remains by default the more robust one.
+- When there are no defined bounds, the constraint-aware versions of the
+ algorithms are identical to the unconstrained versions. This is not the case
+ if constraints are defined, even if the bounds are very wide.
+- An essential difference between the algorithms is the number of sampling
+  "sigma" points used, depending on the dimension :math:`n` of the state
+  space. The canonical "UKF" algorithm uses :math:`2n+1` points, the "S3F"
+  algorithm uses :math:`n+2`. This means that "UKF" requires about twice as
+  many evaluations of the function to be simulated as "S3F".
+- The evaluations of the function to be simulated are algorithmically
+ independent at each filtering stage (evolution or observation) and can
+ therefore be parallelized or distributed if the function to be simulated
+ supports this.
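+
+As a simple order-of-magnitude illustration of that difference (plain Python,
+counting only the sampling points required at each analysis step):
+::
+
+    for n in (2, 10, 100):          # state space dimension
+        print("n = %3i : UKF uses %3i sigma points, S3F uses %3i"
+              % (n, 2 * n + 1, n + 2))
+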
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. ------------------------------------ ..
.. include:: snippets/Header2Algo07.rst
+- [Julier95]_
+- [Julier00]_
+- [Julier07]_
+- [Papakonstantinou22]_
+- [Wan00]_
- [WikipediaUKF]_
AmplitudeOfInitialDirection
*Real value*. This key indicates the scaling of the initial perturbation
built as a vector used for the directional derivative around the nominal
- checking point. The default is 1, that means no scaling.
+  checking point. The default is 1, which means no scaling. It can be useful
+  to modify this value, in particular to decrease it when the largest
+  perturbations go outside the domain of definition of the function.
Example:
``{"AmplitudeOfInitialDirection":0.5}``
lower bounds for every state variable being optimized. Bounds have to be
given by a list of list of pairs of lower/upper bounds for each variable,
with extreme values every time there is no bound (``None`` is not allowed
- when there is no bound).
+ when there is no bound). If the list is empty, there are no bounds.
Example:
``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}``
given by a list of list of pairs of lower/upper bounds for each variable,
with a value of ``None`` each time there is no bound. The bounds can always
be specified, but they are taken into account only by the constrained
- optimizers.
+ optimizers. If the list is empty, there are no bounds.
Example:
``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
:header: "Tool", "Minimal version", "Reached version"
:widths: 20, 10, 10
- Python, 3.6.5, 3.11.7
+ Python, 3.6.5, 3.12.2
Numpy, 1.14.3, 1.26.4
Scipy, 0.19.1, 1.12.0
MatplotLib, 2.2.2, 3.8.3
*List of integer series*. Each element is a series, containing the indices of
ideal positions or optimal points where a measurement is required, determined
by the optimal search, ordered by decreasing preference and in the same order
- as the reduced basis vectors found iteratively.
+ as the vectors iteratively found to form the reduced basis.
Example :
``op = ADD.get("OptimalPoints")[-1]``
--- /dev/null
+.. index:: single: ReducedBasisMus
+
+ReducedBasisMus
+ *List of integer series*. Each element is a series, containing the indices of
+ the :math:`\mu` parameters characterizing a state, in the order chosen during
+ the iterative search process for vectors of the reduced basis.
+
+ Example :
+ ``op = ADD.get("ReducedBasisMus")[-1]``
.. index::
single: Variant
pair: Variant ; UKF
+ pair: Variant ; CUKF
pair: Variant ; 2UKF
+ pair: Variant ; S3F
+ pair: Variant ; CS3F
Variant
*Predefined name*. This key allows to choose one of the possible variants for
- the main algorithm. The default variant is the constrained version "2UKF" of
- the original algorithm "UKF", and the possible choices are
+ the main algorithm. The default variant is the constrained version
+ "CUKF/2UKF" of the original algorithm "UKF", and the possible choices are
"UKF" (Unscented Kalman Filter),
- "2UKF" (Constrained Unscented Kalman Filter).
+  "CUKF" or "2UKF" (Constrained Unscented Kalman Filter),
+ "S3F" (Scaled Spherical Simplex Filter),
+ "CS3F" (Constrained Scaled Spherical Simplex Filter).
It is highly recommended to keep the default value.
Example :
The available commands are:
+.. index:: single: set
+
+**set** (*Concept,...*)
+ This command allows to have an equivalent syntax for all the commands of
+    this section. Its first argument is the name of the concept to be defined
+ (for example "*Background*" or "*ObservationOperator*"), on which the
+ following arguments, which are the same as in the individual previous
+ commands, are applied. When using this command, it is required to name the
+ arguments (for example "*Vector=...*").
+
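+For example, the two following writings are meant to be equivalent (a minimal
+sketch, with purely illustrative values):
+::
+
+    from adao import adaoBuilder
+    case = adaoBuilder.New()
+    # The generic syntax through the "set" command:
+    case.set("Background", Vector = [0., 1., 2.])
+    # is meant to be equivalent to the individual command:
+    # case.setBackground(Vector = [0., 1., 2.])
+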
.. index:: single: Background
.. index:: single: setBackground
they can be given through the variable "*ExtraArguments*" as a named
parameters dictionary.
-.. index:: single: set
-
-**set** (*Concept,...*)
- This command allows to have an equivalent syntax for all the commands of
- these section. Its first argument is the name of the concept to be defined
- (for example "*Background*" or "*ObservationOperator*"), on which the
- following arguments, which are the same as in the individual previous
- commands, are applied. When using this command, it is required to name the
- arguments (for example "*Vector=...*").
-
Setting the calculation, outputs, etc.
++++++++++++++++++++++++++++++++++++++
one the commands establishing the current calculation case. Some formats
are only available as input or as output.
-In addition, simple information about the case study as defined by the user can
-be obtained by using the Python "*print*" command directly on the case, at any
-stage during its design. For example:
+Obtain information on the case, the computation or the system
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+It is easy to obtain **aggregate information on the study case** as defined by
+the user, by using Python's "*print*" command directly on the case, at any
+stage during its construction. For example:
.. literalinclude:: scripts/tui_example_07.py
:language: python
.. literalinclude:: scripts/tui_example_07.res
+.. index:: single: callinfo
+
+**Synthetic information on the number of calls to operator computations** can
+be dynamically obtained with the "**callinfo()**" command. These operator
+computations are those defined by the user in an ADAO case, for the observation
+and evolution operators. It is used after the calculation has been performed in
+the ADAO case, bearing in mind that the result of this command is simply empty
+when no calculation has been performed:
+::
+
+ from adao import adaoBuilder
+ case = adaoBuilder.New()
+ ...
+ case.execute()
+ print(case.callinfo())
+
+.. index:: single: sysinfo
+
+Synthetic **system information** can be obtained with the "**sysinfo()**"
+command, present in every calculation case. It dynamically returns system
+information and details of Python modules useful for ADAO. It is used as
+follows:
+::
+
+ from adao import adaoBuilder
+ case = adaoBuilder.New()
+ print(case.sysinfo())
+
.. _subsection_tui_advanced:
More advanced examples of ADAO TUI calculation case
.. [Johnson08] Johnson S. G., *The NLopt nonlinear-optimization package*, http://ab-initio.mit.edu/nlopt
+.. [Julier95] Julier S., Uhlmann J., Durrant-Whyte H., *A new approach for filtering nonlinear systems*, in: Proceedings of the 1995 American Control Conference, IEEE, 1995
+
+.. [Julier00] Julier S., Uhlmann J., Durrant-Whyte H., *A new method for the nonlinear transformation of means and covariances in filters and estimators*, IEEE Trans. Automat. Control., 45, pp.477–482, 2000
+
+.. [Julier07] Julier S., Laviola J., *On Kalman filtering with nonlinear equality constraints*, IEEE Trans. Signal Process., 55(6), pp.2774–2784, 2007
+
.. [Kalnay03] Kalnay E., *Atmospheric Modeling, Data Assimilation and Predictability*, Cambridge University Press, 2003
.. [Koenker00] Koenker R., Hallock K. F., *Quantile Regression: an Introduction*, 2000, http://www.econ.uiuc.edu/~roger/research/intro/intro.html
.. [Nelder65] Nelder J. A., Mead R., *A simplex method for function minimization*, The Computer Journal, 7, pp.308-313, 1965
-.. [NumPy20] Harris C.R. et al., *Array programming with NumPy*, Nature 585, pp.357–362, 2020, https://numpy.org/
+.. [NumPy20] Harris C. R. et al., *Array programming with NumPy*, Nature 585, pp.357–362, 2020, https://numpy.org/
+
+.. [Papakonstantinou22] Papakonstantinou K. G., Amir M., Warn G. P., *A Scaled Spherical Simplex Filter (S3F) with a decreased n+2 sigma points set size and equivalent 2n+1 Unscented Kalman Filter (UKF) accuracy*, Mechanical Systems and Signal Processing, 163, 107433, 2022
.. [Powell64] Powell M. J. D., *An efficient method for finding the minimum of a function of several variables without calculating derivatives*, Computer Journal, 7(2), pp.155-162, 1964
.. [Tikhonov77] Tikhonov A. N., Arsenin V. Y., *Solution of Ill-posed Problems*, Winston & Sons, 1977
+.. [Wan00] Wan E. A., van der Merwe R., *The Unscented Kalman Filter for Nonlinear Estimation*, in: Adaptive Systems for Signal Processing, Communications, and Control Symposium, IEEE, 2000
+
.. [Welch06] Welch G., Bishop G., *An Introduction to the Kalman Filter*, University of North Carolina at Chapel Hill, Department of Computer Science, TR 95-041, 2006, http://www.cs.unc.edu/~welch/media/pdf/kalman_intro.pdf
.. [WikipediaDA] Wikipedia, *Data assimilation*, http://en.wikipedia.org/wiki/Data_assimilation
Pour toutes les formules, avec :math:`\mathbf{x}` le point courant de
vérification, on prend :math:`\mathbf{dx}_0=Normal(0,\mathbf{x})` et
:math:`\mathbf{dx}=\alpha_0*\mathbf{dx}_0` avec :math:`\alpha_0` un paramètre
-utilisateur de mise à l'échelle, par défaut à 1. :math:`F` est l'opérateur ou
-le code de calcul (qui est ici acquis par la commande d'opérateur d'observation
+utilisateur de mise à l'échelle de l'amplitude initiale, par défaut à 1.
+:math:`F` est l'opérateur ou le code de calcul (qui est ici donné par
+l'utilisateur à l'aide de la commande de l'opérateur d'observation
"*ObservationOperator*").
Résidu "Taylor"
"ExcludedPoints",
"OptimalPoints",
"ReducedBasis",
+ "ReducedBasisMus",
"Residus",
"SingularValues",
].
.. include:: snippets/ReducedBasis.rst
+.. include:: snippets/ReducedBasisMus.rst
+
.. include:: snippets/Residus.rst
.. include:: snippets/SingularValues.rst
pair: Variant ; SPSO-2011
pair: Variant ; AIS PSO
pair: Variant ; APSO
+ pair: Variant ; SPSO-2011-SIS
+ pair: Variant ; SPSO-2011-PSIS
- "CanonicalPSO" (Canonical Particule Swarm Optimisation, voir
[ZambranoBigiarini13]_), algorithme classique dit "canonique" d'essaim
d'inertie, ou encore appelé "AIS" (pour "Asynchronous Iteration Strategy") ou
"APSO" (pour "Advanced Particle Swarm Optimisation") car il intègre la mise à
jour évolutive des meilleurs éléments, conduisant à une convergence
- intrinsèquement améliorée de l'algorithme.
+ intrinsèquement améliorée de l'algorithme,
- "SPSO-2011-SIS" (Standard Particle Swarm Optimisation 2011 with Synchronous
Iteration Strategy), très similaire à l'algorithme de référence 2011 et avec
- une mise à jour synchrone, appelée "SIS", des particules.
+ une mise à jour synchrone, appelée "SIS", des particules,
- "SPSO-2011-PSIS" (Standard Particle Swarm Optimisation 2011 with Parallel
Synchronous Iteration Strategy), similaire à l'algorithme "SPSO-2011-SIS"
avec mise à jour synchrone et parallélisation, appelée "PSIS", des
.. include:: snippets/Header2Algo01.rst
Cet algorithme réalise une estimation de l'état d'un système dynamique par un
-filtre de Kalman "unscented", permettant d'éviter de devoir calculer les
-opérateurs tangent ou adjoint pour les opérateurs d'observation ou d'évolution,
-comme dans les filtres de Kalman simple ou étendu.
+filtre de Kalman utilisant une transformation "unscented" et un échantillonnage
+par points "sigma", permettant d'éviter de devoir calculer les opérateurs tangent
+ou adjoint pour les opérateurs d'observation ou d'évolution, comme dans les
+filtres de Kalman simple ou étendu.
Il s'applique aux cas d'opérateurs d'observation et d'évolution incrémentale
(processus) non-linéaires et présente d'excellentes qualités de robustesse et
coûteux en évaluation sur de petits systèmes. On peut vérifier la linéarité des
opérateurs à l'aide de l':ref:`section_ref_algorithm_LinearityTest`.
+Il existe diverses variantes de cet algorithme. On propose ici les formulations
+stables et robustes suivantes :
+
.. index::
pair: Variant ; UKF
+ pair: Variant ; S3F
+ pair: Variant ; CUKF
+ pair: Variant ; CS3F
pair: Variant ; 2UKF
-On fait une différence entre le filtre de Kalman "unscented" tenant compte de
-bornes sur les états (la variante nommée "2UKF", qui est recommandée et qui est
-utilisée par défaut), et le filtre de Kalman "unscented" canonique conduit sans
-aucune contrainte (la variante nommée "UKF", qui n'est pas recommandée).
+- "UKF" (Unscented Kalman Filter, voir [Julier95]_, [Julier00]_, [Wan00]_),
+ algorithme canonique d'origine et de référence, très robuste et performant,
+- "CUKF", aussi nommée "2UKF" (Constrained Unscented Kalman Filter, voir
+ [Julier07]_), version avec contraintes d'inégalités ou de bornes de
+ l'algorithme "UKF",
+- "S3F" (Scaled Spherical Simplex Filter, voir [Papakonstantinou22]_),
+  algorithme amélioré, réduisant le nombre de points d'échantillonnage (sigma)
+ pour avoir la même qualité que la variante "UKF" canonique,
+- "CS3F" (Constrained Scaled Spherical Simplex Filter), version avec
+ contraintes d'inégalités ou de bornes de l'algorithme "S3F".
+
+Voici quelques suggestions pratiques pour une utilisation efficace de ces
+algorithmes :
+
+- La variante recommandée de cet algorithme est le "S3F" même si l'algorithme
+ canonique "UKF" reste par défaut le plus robuste.
+- Lorsqu'il n'y a aucune borne de définie, les versions avec prise en compte
+ des contraintes des algorithmes sont identiques aux versions sans
+  contraintes. Ce n'est pas le cas s'il y a des contraintes définies, même si
+  les bornes sont très larges.
+- Une différence essentielle entre les algorithmes est le nombre de points
+  d'échantillonnage "sigma" utilisés en fonction de la dimension :math:`n` de
+ l'espace des états. L'algorithme canonique "UKF" en utilise :math:`2n+1`,
+ l'algorithme "S3F" en utilise :math:`n+2`. Cela signifie qu'il faut de
+ l'ordre de deux fois plus d'évaluations de la fonction à simuler pour l'une
+  que pour l'autre.
+- Les évaluations de la fonction à simuler sont algorithmiquement indépendantes
+ à chaque étape du filtrage (évolution ou observation) et peuvent donc être
+ parallélisées ou distribuées dans le cas où la fonction à simuler le
+ supporte.
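+
+À titre d'illustration simple, en ordre de grandeur, de cette différence
+(Python standard, en ne comptant que les points d'échantillonnage requis à
+chaque étape d'analyse) :
+::
+
+    for n in (2, 10, 100):          # dimension de l'espace des états
+        print("n = %3i : UKF utilise %3i points sigma, S3F en utilise %3i"
+              % (n, 2 * n + 1, n + 2))
+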
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. ------------------------------------ ..
.. include:: snippets/Header2Algo07.rst
+- [Julier95]_
+- [Julier00]_
+- [Julier07]_
+- [Papakonstantinou22]_
+- [Wan00]_
- [WikipediaUKF]_
*Valeur réelle*. Cette clé indique la mise à l'échelle de la perturbation
initiale construite comme un vecteur utilisé pour la dérivée directionnelle
autour du point nominal de vérification. La valeur par défaut est de 1, ce
- qui signifie qu'il n'y a aucune mise à l'échelle.
+ qui signifie qu'il n'y a aucune mise à l'échelle. Il est utile de modifier
+ cette valeur, et en particulier de la diminuer dans le cas où les
+ perturbations les plus grandes sortent du domaine de définition de la
+ fonction.
Exemple :
``{"AmplitudeOfInitialDirection":0.5}``
bornes doivent être données par une liste de liste de paires de bornes
inférieure/supérieure pour chaque variable, avec une valeur extrême chaque
fois qu'il n'y a pas de borne (``None`` n'est pas une valeur autorisée
- lorsqu'il n'y a pas de borne).
+ lorsqu'il n'y a pas de borne). Si la liste est vide, cela équivaut à une
+ absence de bornes.
Exemple :
``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}``
bornes doivent être données par une liste de liste de paires de bornes
inférieure/supérieure pour chaque variable, avec une valeur ``None`` chaque
fois qu'il n'y a pas de borne. Les bornes peuvent toujours être spécifiées,
- mais seuls les optimiseurs sous contraintes les prennent en compte.
+ mais seuls les optimiseurs sous contraintes les prennent en compte. Si la
+ liste est vide, cela équivaut à une absence de bornes.
Exemple :
``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
:header: "Outil", "Version minimale", "Version atteinte"
:widths: 20, 10, 10
- Python, 3.6.5, 3.11.7
+ Python, 3.6.5, 3.12.2
Numpy, 1.14.3, 1.26.4
Scipy, 0.19.1, 1.12.0
MatplotLib, 2.2.2, 3.8.3
*Liste de série d'entiers*. Chaque élément est une série, contenant les
indices des positions idéales ou points optimaux auxquels une mesure est
requise, déterminés par la recherche optimale, rangés par ordre de préférence
- décroissante et dans le même ordre que les vecteurs de base réduite trouvés
- itérativement.
+ décroissante et dans le même ordre que les vecteurs trouvés itérativement
+ pour constituer la base réduite.
Exemple :
``op = ADD.get("OptimalPoints")[-1]``
--- /dev/null
+.. index:: single: ReducedBasisMus
+
+ReducedBasisMus
+ *Liste de série d'entiers*. Chaque élément est une série, contenant les
+ indices des paramètres :math:`\mu` caractérisant un état, dans l'ordre choisi
+ lors de la recherche itérative des vecteurs de la base réduite.
+
+ Exemple :
+ ``op = ADD.get("ReducedBasisMus")[-1]``
.. index::
single: Variant
pair: Variant ; UKF
+ pair: Variant ; CUKF
pair: Variant ; 2UKF
+ pair: Variant ; S3F
+ pair: Variant ; CS3F
Variant
*Nom prédéfini*. Cette clé permet de choisir l'une des variantes possibles
pour l'algorithme principal. La variante par défaut est la version contrainte
- "2UKF" de l'algorithme original "UKF", et les choix possibles sont
+ "CUKF/2UKF" de l'algorithme original "UKF", et les choix possibles sont
"UKF" (Unscented Kalman Filter),
- "2UKF" (Constrained Unscented Kalman Filter).
+ "CUKF" ou "2UKF" (Constrained Unscented Kalman Filter),
+ "S3F" (Scaled Spherical Simplex Filter),
+ "CS3F" (Constrained Scaled Spherical Simplex Filter).
Il est fortement recommandé de conserver la valeur par défaut.
Exemple :
Les commandes disponibles sont les suivantes :
+.. index:: single: set
+
+**set** (*Concept,...*)
+ Cette commande permet de disposer d'une syntaxe équivalente pour toutes les
+ commandes de ce paragraphe. Son premier argument est le nom du concept à
+ définir (par exemple "*Background*" ou "*ObservationOperator*"), sur lequel
+ s'applique ensuite les arguments qui suivent, qui sont les mêmes que dans
+ les commandes individuelles précédentes. Lors de l'usage de cette commande,
+ il est indispensable de nommer les arguments (par exemple "*Vector=...*").
+
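+Par exemple, les deux écritures suivantes sont censées être équivalentes
+(esquisse minimale, avec des valeurs purement illustratives) :
+::
+
+    from adao import adaoBuilder
+    case = adaoBuilder.New()
+    # La syntaxe générique via la commande "set" :
+    case.set("Background", Vector = [0., 1., 2.])
+    # est censée être équivalente à la commande individuelle :
+    # case.setBackground(Vector = [0., 1., 2.])
+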
.. index:: single: Background
.. index:: single: setBackground
fournis par la variable "*ExtraArguments*" sous la forme d'un dictionnaire
de paramètres nommés.
-.. index:: single: set
-
-**set** (*Concept,...*)
- Cette commande permet de disposer d'une syntaxe équivalente pour toutes les
- commandes de ce paragraphe. Son premier argument est le nom du concept à
- définir (par exemple "*Background*" ou "*ObservationOperator*"), sur lequel
- s'applique ensuite les arguments qui suivent, qui sont les mêmes que dans
- les commandes individuelles précédentes. Lors de l'usage de cette commande,
- il est indispensable de nommer les arguments (par exemple "*Vector=...*").
-
Paramétrer le calcul, les sorties, etc.
+++++++++++++++++++++++++++++++++++++++
autre les commandes établissant le cas de calcul en cours. Certains
formats ne sont disponibles qu'en entrée ou qu'en sortie.
-De plus, on peut obtenir une information simple sur le cas d'étude tel que
-défini par l'utilisateur en utilisant directement la commande "*print*" de Python
-sur le cas, à toute étape lors de sa construction. Par exemple :
+Obtenir des informations sur le cas, le calcul ou le système
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+On peut obtenir de manière simple une **information agrégée sur le cas
+d'étude** tel que défini par l'utilisateur, en utilisant directement la
+commande "*print*" de Python sur le cas, à n'importe quelle étape lors de sa
+construction. Par exemple :
.. literalinclude:: scripts/tui_example_07.py
:language: python
.. literalinclude:: scripts/tui_example_07.res
+.. index:: single: callinfo
+
+Une **information synthétique sur le nombre d'appels aux calculs d'opérateurs**
+peut être dynamiquement obtenue par la commande "**callinfo()**". Ces calculs
+d'opérateurs sont ceux définis par l'utilisateur dans un cas ADAO, pour les
+opérateurs d'observation et d'évolution. Elle s'utilise après l'exécution du
+calcul dans le cas ADAO, sachant que le résultat de cette commande est
+simplement vide lorsqu'aucun calcul n'a été effectué :
+::
+
+ from adao import adaoBuilder
+ case = adaoBuilder.New()
+ ...
+ case.execute()
+ print(case.callinfo())
+
+.. index:: single: sysinfo
+
+Une **information synthétique sur le système** peut être obtenue par la
+commande "**sysinfo()**", présente dans chaque cas de calcul ADAO. Elle
+retourne dynamiquement des informations système et des détails sur les modules
+Python utiles pour ADAO. Elle s'utilise de la manière suivante :
+::
+
+ from adao import adaoBuilder
+ case = adaoBuilder.New()
+ print(case.sysinfo())
+
.. _subsection_tui_advanced:
Exemples plus avancés de cas de calcul TUI ADAO
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
+
def __init__(self):
BasicObjects.Algorithm.__init__(self, "3DVAR")
self.defineRequiredParameter(
"3DVAR-VAN",
"3DVAR-Incr",
"3DVAR-PSAS",
- ],
+ ],
listadv = [
"OneCorrection",
"3DVAR-Std",
"Incr3DVAR",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "Minimizer",
default = "LBFGSB",
"TNC",
"CG",
"BFGS",
- ],
+ ],
listadv = [
"NCG",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "Parameters",
typecast = str,
message = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfIterations",
default = 15000,
message = "Nombre maximal de pas d'optimisation",
minval = -1,
oldname = "MaximumNumberOfSteps",
- )
+ )
self.defineRequiredParameter(
name = "CostDecrementTolerance",
default = 1.e-7,
typecast = float,
message = "Diminution relative minimale du coût lors de l'arrêt",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "ProjectedGradientTolerance",
default = -1,
typecast = float,
message = "Maximum des composantes du gradient projeté lors de l'arrêt",
minval = -1,
- )
+ )
self.defineRequiredParameter(
name = "GradientNormTolerance",
default = 1.e-05,
typecast = float,
message = "Maximum des composantes du gradient lors de l'arrêt",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
"SimulationQuantiles",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "Quantiles",
default = [],
message = "Liste des valeurs de quantiles",
minval = 0.,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfSamplesForQuantiles",
default = 100,
typecast = int,
message = "Nombre d'échantillons simulés pour le calcul des quantiles",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "SimulationForQuantiles",
default = "Linear",
typecast = str,
message = "Type de simulation en estimation des quantiles",
listval = ["Linear", "NonLinear"]
- )
- self.defineRequiredParameter( # Pas de type
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des paires de bornes",
- )
- self.defineRequiredParameter( # Pas de type
+ )
+ self.defineRequiredParameter( # Pas de type
name = "StateBoundsForQuantiles",
message = "Liste des paires de bornes pour les états utilisés en estimation des quantiles",
- )
+ )
self.defineRequiredParameter(
name = "InitializationPoint",
typecast = numpy.ravel,
message = "État initial imposé (par défaut, c'est l'ébauche si None)",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
optional = ("U", "EM", "CM", "Q"),
- )
- self.setAttributes(tags=(
- "DataAssimilation",
- "NonLinear",
- "Variational",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "NonLinear",
+ "Variational",
+ ),
+ features=(
+ "NonLocalOptimization",
+ "DerivativeNeeded",
+ "ParallelDerivativesOnly",
+ ),
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] in ["3DVAR", "3DVAR-Std"]:
+ # --------------------------
+ if self._parameters["Variant"] in ["3DVAR", "3DVAR-Std"]:
NumericObjects.multiXOsteps(self, Xb, Y, U, HO, EM, CM, R, B, Q, std3dvar.std3dvar)
#
elif self._parameters["Variant"] == "3DVAR-VAN":
elif self._parameters["Variant"] == "3DVAR-PSAS":
NumericObjects.multiXOsteps(self, Xb, Y, U, HO, EM, CM, R, B, Q, psas3dvar.psas3dvar)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "OneCorrection":
std3dvar.std3dvar(self, Xb, Y, U, HO, CM, R, B)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
+
def __init__(self):
BasicObjects.Algorithm.__init__(self, "4DVAR")
self.defineRequiredParameter(
typecast = str,
message = "Prise en compte des contraintes",
listval = ["EstimateProjection"],
- )
+ )
self.defineRequiredParameter(
name = "Variant",
default = "4DVAR",
message = "Variant ou formulation de la méthode",
listval = [
"4DVAR",
- ],
+ ],
listadv = [
"4DVAR-Std",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "State",
typecast = str,
message = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "Minimizer",
default = "LBFGSB",
"TNC",
"CG",
"BFGS",
- ],
+ ],
listadv = [
"NCG",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfIterations",
default = 15000,
message = "Nombre maximal de pas d'optimisation",
minval = -1,
oldname = "MaximumNumberOfSteps",
- )
+ )
self.defineRequiredParameter(
name = "CostDecrementTolerance",
default = 1.e-7,
typecast = float,
message = "Diminution relative minimale du coût lors de l'arrêt",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "ProjectedGradientTolerance",
default = -1,
typecast = float,
message = "Maximum des composantes du gradient projeté lors de l'arrêt",
minval = -1,
- )
+ )
self.defineRequiredParameter(
name = "GradientNormTolerance",
default = 1.e-05,
typecast = float,
message = "Maximum des composantes du gradient lors de l'arrêt",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"CurrentOptimum",
"CurrentState",
"IndexOfOptimum",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des valeurs de bornes",
- )
+ )
self.defineRequiredParameter(
name = "InitializationPoint",
typecast = numpy.ravel,
message = "État initial imposé (par défaut, c'est l'ébauche si None)",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "EM", "R", "B"),
optional = ("U", "CM", "Q"),
- )
- self.setAttributes(tags=(
- "DataAssimilation",
- "NonLinear",
- "Variational",
- "Dynamic",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "NonLinear",
+ "Variational",
+ "Dynamic",
+ ),
+ features=(
+ "NonLocalOptimization",
+ "DerivativeNeeded",
+ "ParallelDerivativesOnly",
+ ),
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- # Default 4DVAR
- if self._parameters["Variant"] in ["4DVAR", "4DVAR-Std"]:
+ # --------------------------
+ if self._parameters["Variant"] in ["4DVAR", "4DVAR-Std"]:
std4dvar.std4dvar(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
typecast = str,
message = "Formule de résidu utilisée",
listval = ["ScalarProduct"],
- )
+ )
self.defineRequiredParameter(
name = "AmplitudeOfInitialDirection",
default = 1.,
typecast = float,
message = "Amplitude de la direction initiale de la dérivée directionnelle autour du point nominal",
- )
+ )
self.defineRequiredParameter(
name = "EpsilonMinimumExponent",
default = -8,
message = "Exposant minimal en puissance de 10 pour le multiplicateur d'incrément",
minval = -20,
maxval = 0,
- )
+ )
self.defineRequiredParameter(
name = "InitialDirection",
default = [],
typecast = list,
message = "Direction initiale de la dérivée directionnelle autour du point nominal",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"CurrentState",
"Residu",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
- mandatory= ("Xb", "HO" ),
+ mandatory= ("Xb", "HO"),
optional = ("Y", ),
- )
+ )
self.setAttributes(tags=(
"Checking",
- ))
+ ))
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
Ht = HO["Tangent"].appliedInXTo
Ha = HO["Adjoint"].appliedInXTo
#
- X0 = numpy.ravel( Xb ).reshape((-1,1))
+ X0 = numpy.ravel( Xb ).reshape((-1, 1))
#
# ----------
__p = self._parameters["NumberOfPrintedDigits"]
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the quality of an adjoint operator associated\n")
msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( X0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( X0 )
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
msgs += (__flech + "Numerical quality indicators:\n")
msgs += (__marge + "-----------------------------\n")
msgs += ("\n")
#
if self._parameters["ResiduFormula"] == "ScalarProduct":
- msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
+ msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"]) # noqa: E501
msgs += (__marge + "difference of two scalar products:\n")
msgs += ("\n")
msgs += (__marge + " R(Alpha) = | < TangentF_X(dX) , Y > - < dX , AdjointF_X(Y) > |\n")
msgs += (__marge + "operator. If it is given, Y must be in the image of F. If it is not given,\n")
msgs += (__marge + "one takes Y = F(X).\n")
#
- __entete = str.rstrip(" i Alpha " + \
- str.center("||X||",2+__p+7) + \
- str.center("||Y||",2+__p+7) + \
- str.center("||dX||",2+__p+7) + \
- str.center("R(Alpha)",2+__p+7))
+ __entete = str.rstrip(
+ " i Alpha " + \
+ str.center("||X||", 2 + __p + 7) + \
+ str.center("||Y||", 2 + __p + 7) + \
+ str.center("||dX||", 2 + __p + 7) + \
+ str.center("R(Alpha)", 2 + __p + 7)
+ )
__nbtirets = len(__entete) + 2
#
msgs += ("\n")
msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
- print(msgs) # 1
+ print(msgs) # 1
#
- Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
+ Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"], 1) ]
Perturbations.reverse()
#
NormeX = numpy.linalg.norm( X0 )
if Y is None:
- Yn = numpy.ravel( Hm( X0 ) ).reshape((-1,1))
+ Yn = numpy.ravel( Hm( X0 ) ).reshape((-1, 1))
else:
- Yn = numpy.ravel( Y ).reshape((-1,1))
+ Yn = numpy.ravel( Y ).reshape((-1, 1))
NormeY = numpy.linalg.norm( Yn )
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 )
self._parameters["InitialDirection"],
self._parameters["AmplitudeOfInitialDirection"],
X0,
- )
+ )
#
# Boucle sur les perturbations
# ----------------------------
- msgs = ("") # 2
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs = ("") # 2
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += "\n" + __marge + __entete
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += ("\n")
- __pf = " %"+str(__p+7)+"."+str(__p)+"e"
- __ms = " %2i %5.0e"+(__pf*4)+"\n"
- for i,amplitude in enumerate(Perturbations):
+ __pf = " %" + str(__p + 7) + "." + str(__p) + "e"
+ __ms = " %2i %5.0e" + (__pf * 4) + "\n"
+ for ip, amplitude in enumerate(Perturbations):
dX = amplitude * dX0
NormedX = numpy.linalg.norm( dX )
#
if self._parameters["ResiduFormula"] == "ScalarProduct":
- TangentFXdX = numpy.ravel( Ht( (X0,dX) ) )
- AdjointFXY = numpy.ravel( Ha( (X0,Yn) ) )
+ TangentFXdX = numpy.ravel( Ht( (X0, dX) ) )
+ AdjointFXY = numpy.ravel( Ha( (X0, Yn) ) )
#
Residu = abs(vfloat(numpy.dot( TangentFXdX, Yn ) - numpy.dot( dX, AdjointFXY )))
#
self.StoredVariables["Residu"].store( Residu )
- ttsep = __ms%(i,amplitude,NormeX,NormeY,NormedX,Residu)
+ ttsep = __ms%(ip, amplitude, NormeX, NormeY, NormedX, Residu)
msgs += __marge + ttsep
#
- msgs += (__marge + "-"*__nbtirets + "\n\n")
- msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name,self._parameters["ResiduFormula"]))
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 2
+ msgs += (__marge + "-" * __nbtirets + "\n\n")
+ msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name, self._parameters["ResiduFormula"])) # noqa: E501
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 2
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
+++ /dev/null
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2008-2024 EDF R&D
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-#
-# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
-#
-# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-
-__doc__ = """
- Constrained (2UKF) Unscented Kalman Filter
-"""
-__author__ = "Jean-Philippe ARGAUD"
-
-import math, numpy, scipy, copy
-from daCore.PlatformInfo import vfloat
-from daCore.NumericObjects import ApplyBounds, ForceNumericBounds
-
-# ==============================================================================
-def c2ukf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q):
- """
- Constrained (2UKF) Unscented Kalman Filter
- """
- if selfA._parameters["EstimationOf"] == "Parameters":
- selfA._parameters["StoreInternalVariables"] = True
- selfA._parameters["Bounds"] = ForceNumericBounds( selfA._parameters["Bounds"] )
- #
- L = Xb.size
- Alpha = selfA._parameters["Alpha"]
- Beta = selfA._parameters["Beta"]
- if selfA._parameters["Kappa"] == 0:
- if selfA._parameters["EstimationOf"] == "State":
- Kappa = 0
- elif selfA._parameters["EstimationOf"] == "Parameters":
- Kappa = 3 - L
- else:
- Kappa = selfA._parameters["Kappa"]
- Lambda = float( Alpha**2 ) * ( L + Kappa ) - L
- Gamma = math.sqrt( L + Lambda )
- #
- Ww = []
- Ww.append( 0. )
- for i in range(2*L):
- Ww.append( 1. / (2.*(L + Lambda)) )
- #
- Wm = numpy.array( Ww )
- Wm[0] = Lambda / (L + Lambda)
- Wc = numpy.array( Ww )
- Wc[0] = Lambda / (L + Lambda) + (1. - Alpha**2 + Beta)
- #
- # Durée d'observation et tailles
- if hasattr(Y,"stepnumber"):
- duration = Y.stepnumber()
- __p = numpy.cumprod(Y.shape())[-1]
- else:
- duration = 2
- __p = numpy.size(Y)
- #
- # Précalcul des inversions de B et R
- if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
- BI = B.getI()
- RI = R.getI()
- #
- __n = Xb.size
- nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
- #
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
- Xn = Xb
- if hasattr(B,"asfullmatrix"):
- Pn = B.asfullmatrix(__n)
- else:
- Pn = B
- selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
- selfA.StoredVariables["Analysis"].store( Xb )
- if selfA._toStore("APosterioriCovariance"):
- selfA.StoredVariables["APosterioriCovariance"].store( Pn )
- elif selfA._parameters["nextStep"]:
- Xn = selfA._getInternalState("Xn")
- Pn = selfA._getInternalState("Pn")
- #
- if selfA._parameters["EstimationOf"] == "Parameters":
- XaMin = Xn
- previousJMinimum = numpy.finfo(float).max
- #
- for step in range(duration-1):
- #
- if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
- else:
- Un = numpy.ravel( U ).reshape((-1,1))
- else:
- Un = None
- #
- if CM is not None and "Tangent" in CM and U is not None:
- Cm = CM["Tangent"].asMatrix(Xn)
- else:
- Cm = None
- #
- Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
- Xnmu = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
- nbSpts = 2*Xn.size+1
- #
- if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
- for point in range(nbSpts):
- Xnmu[:,point] = ApplyBounds( Xnmu[:,point], selfA._parameters["Bounds"] )
- #
- XEnnmu = []
- for point in range(nbSpts):
- if selfA._parameters["EstimationOf"] == "State":
- Mm = EM["Direct"].appliedControledFormTo
- XEnnmui = numpy.asarray( Mm( (Xnmu[:,point], Un) ) ).reshape((-1,1))
- if Cm is not None and Un is not None: # Attention : si Cm est aussi dans M, doublon !
- Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
- XEnnmui = XEnnmui + Cm @ Un
- if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
- XEnnmui = ApplyBounds( XEnnmui, selfA._parameters["Bounds"] )
- elif selfA._parameters["EstimationOf"] == "Parameters":
- # --- > Par principe, M = Id, Q = 0
- XEnnmui = Xnmu[:,point]
- XEnnmu.append( numpy.ravel(XEnnmui).reshape((-1,1)) )
- XEnnmu = numpy.concatenate( XEnnmu, axis=1 )
- #
- Xhmn = ( XEnnmu * Wm ).sum(axis=1)
- #
- if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
- Xhmn = ApplyBounds( Xhmn, selfA._parameters["Bounds"] )
- #
- if selfA._parameters["EstimationOf"] == "State": Pmn = copy.copy(Q)
- elif selfA._parameters["EstimationOf"] == "Parameters": Pmn = 0.
- for point in range(nbSpts):
- dXEnnmuXhmn = XEnnmu[:,point].flat-Xhmn
- Pmn += Wc[i] * numpy.outer(dXEnnmuXhmn, dXEnnmuXhmn)
- #
- if selfA._parameters["EstimationOf"] == "Parameters" and selfA._parameters["Bounds"] is not None:
- Pmndemi = selfA._parameters["Reconditioner"] * numpy.real(scipy.linalg.sqrtm(Pmn))
- else:
- Pmndemi = numpy.real(scipy.linalg.sqrtm(Pmn))
- #
- Xnnmu = numpy.hstack([Xhmn.reshape((-1,1)), Xhmn.reshape((-1,1))+Gamma*Pmndemi, Xhmn.reshape((-1,1))-Gamma*Pmndemi])
- #
- if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
- for point in range(nbSpts):
- Xnnmu[:,point] = ApplyBounds( Xnnmu[:,point], selfA._parameters["Bounds"] )
- #
- Hm = HO["Direct"].appliedControledFormTo
- Ynnmu = []
- for point in range(nbSpts):
- if selfA._parameters["EstimationOf"] == "State":
- Ynnmui = Hm( (Xnnmu[:,point], None) )
- elif selfA._parameters["EstimationOf"] == "Parameters":
- Ynnmui = Hm( (Xnnmu[:,point], Un) )
- Ynnmu.append( numpy.ravel(Ynnmui).reshape((__p,1)) )
- Ynnmu = numpy.concatenate( Ynnmu, axis=1 )
- #
- Yhmn = ( Ynnmu * Wm ).sum(axis=1)
- #
- Pyyn = copy.copy(R)
- Pxyn = 0.
- for point in range(nbSpts):
- dYnnmuYhmn = Ynnmu[:,point].flat-Yhmn
- dXnnmuXhmn = Xnnmu[:,point].flat-Xhmn
- Pyyn += Wc[i] * numpy.outer(dYnnmuYhmn, dYnnmuYhmn)
- Pxyn += Wc[i] * numpy.outer(dXnnmuXhmn, dYnnmuYhmn)
- #
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
- else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
- _Innovation = Ynpu - Yhmn.reshape((-1,1))
- if selfA._parameters["EstimationOf"] == "Parameters":
- if Cm is not None and Un is not None: # Attention : si Cm est aussi dans H, doublon !
- _Innovation = _Innovation - Cm @ Un
- #
- Kn = Pxyn @ Pyyn.I
- Xn = Xhmn.reshape((-1,1)) + Kn @ _Innovation
- Pn = Pmn - Kn @ (Pyyn @ Kn.T)
- #
- if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
- Xn = ApplyBounds( Xn, selfA._parameters["Bounds"] )
- #
- Xa = Xn # Pointeurs
- #--------------------------
- selfA._setInternalState("Xn", Xn)
- selfA._setInternalState("Pn", Pn)
- #--------------------------
- #
- selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
- # ---> avec analysis
- selfA.StoredVariables["Analysis"].store( Xa )
- if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
- selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( Hm((Xa, Un)) )
- if selfA._toStore("InnovationAtCurrentAnalysis"):
- selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
- # ---> avec current state
- if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
- selfA.StoredVariables["CurrentState"].store( Xn )
- if selfA._toStore("ForecastState"):
- selfA.StoredVariables["ForecastState"].store( Xhmn )
- if selfA._toStore("ForecastCovariance"):
- selfA.StoredVariables["ForecastCovariance"].store( Pmn )
- if selfA._toStore("BMA"):
- selfA.StoredVariables["BMA"].store( Xhmn - Xa )
- if selfA._toStore("InnovationAtCurrentState"):
- selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
- if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yhmn )
- # ---> autres
- if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
- Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
- Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
- J = Jb + Jo
- selfA.StoredVariables["CostFunctionJb"].store( Jb )
- selfA.StoredVariables["CostFunctionJo"].store( Jo )
- selfA.StoredVariables["CostFunctionJ" ].store( J )
- #
- if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
- if selfA._toStore("IndexOfOptimum"):
- selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
- if selfA._toStore("CurrentOptimum"):
- selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
- if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
- if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
- if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
- if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
- if selfA._toStore("APosterioriCovariance"):
- selfA.StoredVariables["APosterioriCovariance"].store( Pn )
- if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
- previousJMinimum = J
- XaMin = Xa
- if selfA._toStore("APosterioriCovariance"):
- covarianceXaMin = selfA.StoredVariables["APosterioriCovariance"][-1]
- #
- # Stockage final supplémentaire de l'optimum en estimation de paramètres
- # ----------------------------------------------------------------------
- if selfA._parameters["EstimationOf"] == "Parameters":
- selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
- selfA.StoredVariables["Analysis"].store( XaMin )
- if selfA._toStore("APosterioriCovariance"):
- selfA.StoredVariables["APosterioriCovariance"].store( covarianceXaMin )
- if selfA._toStore("BMA"):
- selfA.StoredVariables["BMA"].store( numpy.ravel(Xb) - numpy.ravel(XaMin) )
- #
- return 0
-
-# ==============================================================================
-if __name__ == "__main__":
- print('\n AUTODIAGNOSTIC\n')
selfA._parameters["Bounds"] = ForceNumericBounds( selfA._parameters["Bounds"] )
#
    # Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
    # Precomputation of the inversions of B and R
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
- (__p > __n):
- if isinstance(B,numpy.ndarray):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
+ (__p > __n):
+ if isinstance(B, numpy.ndarray):
BI = numpy.linalg.inv(B)
else:
BI = B.getI()
#
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
Xn = Xb
Pn = B
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(__n) )
else:
selfA.StoredVariables["APosterioriCovariance"].store( B )
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
Pn = selfA._getInternalState("Pn")
- if hasattr(Pn,"asfullmatrix"):
+ if hasattr(Pn, "asfullmatrix"):
Pn = Pn.asfullmatrix(Xn.size)
#
if selfA._parameters["EstimationOf"] == "Parameters":
XaMin = Xn
previousJMinimum = numpy.finfo(float).max
#
- for step in range(duration-1):
+ for step in range(duration - 1):
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
Xn = ApplyBounds( Xn, selfA._parameters["Bounds"] )
#
- if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
+ if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
Mt = EM["Tangent"].asMatrix(Xn)
- Mt = Mt.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Mt = Mt.reshape(Xn.size, Xn.size) # ADAO & check shape
Ma = EM["Adjoint"].asMatrix(Xn)
- Ma = Ma.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Ma = Ma.reshape(Xn.size, Xn.size) # ADAO & check shape
M = EM["Direct"].appliedControledFormTo
- Xn_predicted = numpy.ravel( M( (Xn, Un) ) ).reshape((__n,1))
-            if CM is not None and "Tangent" in CM and Un is not None: # Caution: if Cm is also in M, it is counted twice!
+ Xn_predicted = numpy.ravel( M( (Xn, Un) ) ).reshape((__n, 1))
+            if CM is not None and "Tangent" in CM and Un is not None:  # Caution: if Cm is also in M, it is counted twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn_predicted = Xn_predicted + Cm @ Un
Pn_predicted = Q + Mt @ (Pn @ Ma)
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
            # ---> By principle, M = Id, Q = 0
Xn_predicted = Xn
Pn_predicted = Pn
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
Xn_predicted = ApplyBounds( Xn_predicted, selfA._parameters["Bounds"] )
#
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
Ht = HO["Tangent"].asMatrix(Xn_predicted)
- Ht = Ht.reshape(Ynpu.size,Xn.size) # ADAO & check shape
+ Ht = Ht.reshape(Ynpu.size, Xn.size) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xn_predicted)
- Ha = Ha.reshape(Xn.size,Ynpu.size) # ADAO & check shape
+ Ha = Ha.reshape(Xn.size, Ynpu.size) # ADAO & check shape
H = HO["Direct"].appliedControledFormTo
#
if selfA._parameters["EstimationOf"] == "State":
- HX_predicted = numpy.ravel( H( (Xn_predicted, None) ) ).reshape((__p,1))
+ HX_predicted = numpy.ravel( H( (Xn_predicted, None) ) ).reshape((__p, 1))
_Innovation = Ynpu - HX_predicted
elif selfA._parameters["EstimationOf"] == "Parameters":
- HX_predicted = numpy.ravel( H( (Xn_predicted, Un) ) ).reshape((__p,1))
+ HX_predicted = numpy.ravel( H( (Xn_predicted, Un) ) ).reshape((__p, 1))
_Innovation = Ynpu - HX_predicted
-            if CM is not None and "Tangent" in CM and Un is not None: # Caution: if Cm is also in H, it is counted twice!
+            if CM is not None and "Tangent" in CM and Un is not None:  # Caution: if Cm is also in H, it is counted twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
_Innovation = _Innovation - Cm @ Un
#
if Ynpu.size <= Xn.size:
_HNHt = numpy.dot(Ht, Pn_predicted @ Ha)
_A = R + _HNHt
- _u = numpy.linalg.solve( _A , _Innovation )
- Xn = Xn_predicted + (Pn_predicted @ (Ha @ _u)).reshape((-1,1))
+ _u = numpy.linalg.solve( _A, _Innovation )
+ Xn = Xn_predicted + (Pn_predicted @ (Ha @ _u)).reshape((-1, 1))
Kn = Pn_predicted @ (Ha @ numpy.linalg.inv(_A))
else:
_HtRH = numpy.dot(Ha, RI @ Ht)
_A = numpy.linalg.inv(Pn_predicted) + _HtRH
- _u = numpy.linalg.solve( _A , numpy.dot(Ha, RI @ _Innovation) )
- Xn = Xn_predicted + _u.reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.dot(Ha, RI @ _Innovation) )
+ Xn = Xn_predicted + _u.reshape((-1, 1))
Kn = numpy.linalg.inv(_A) @ (Ha @ RI.asfullmatrix(Ynpu.size))
#
Pn = Pn_predicted - Kn @ (Ht @ Pn_predicted)
-        Pn = (Pn + Pn.T) * 0.5 # Symmetry
-        Pn = Pn + mpr*numpy.trace( Pn ) * numpy.identity(Xn.size) # Positivity
+        Pn = (Pn + Pn.T) * 0.5  # Symmetry
+        Pn = Pn + mpr * numpy.trace( Pn ) * numpy.identity(Xn.size)  # Positivity
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
Xn = ApplyBounds( Xn, selfA._parameters["Bounds"] )
#
-        Xa = Xn # Pointers
-        #--------------------------
+        Xa = Xn  # Pointers
+        # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("Pn", Pn)
- #--------------------------
+ # --------------------------
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
        # ---> with analysis
selfA.StoredVariables["Analysis"].store( Xa )
if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
- selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( H((Xa, Un)) )
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( H((Xa, None)) )
if selfA._toStore("InnovationAtCurrentAnalysis"):
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
        # ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( Xn_predicted )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HX_predicted )
        # ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T @ (BI @ (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T @ (RI @ _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( Pn )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
selfA._parameters["Bounds"] = ForceNumericBounds( selfA._parameters["Bounds"] )
#
    # Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
    # Precomputation of the inversions of B and R
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
- (__p > __n):
- if isinstance(B,numpy.ndarray):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
+ (__p > __n):
+ if isinstance(B, numpy.ndarray):
BI = numpy.linalg.inv(B)
else:
BI = B.getI()
RI = R.getI()
if __p > __n:
-        QI = Q.getI() # Q non-zero
+        QI = Q.getI()  # Q non-zero
#
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
Xn = Xb
Pn = B
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(__n) )
else:
selfA.StoredVariables["APosterioriCovariance"].store( B )
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
Pn = selfA._getInternalState("Pn")
- if hasattr(Pn,"asfullmatrix"):
+ if hasattr(Pn, "asfullmatrix"):
Pn = Pn.asfullmatrix(Xn.size)
#
if selfA._parameters["EstimationOf"] == "Parameters":
XaMin = Xn
previousJMinimum = numpy.finfo(float).max
#
- for step in range(duration-1):
+ for step in range(duration - 1):
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
Xn = ApplyBounds( Xn, selfA._parameters["Bounds"] )
#
- if selfA._parameters["EstimationOf"] == "State": # Forecast
+ if selfA._parameters["EstimationOf"] == "State": # Forecast
Mt = EM["Tangent"].asMatrix(Xn)
- Mt = Mt.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Mt = Mt.reshape(Xn.size, Xn.size) # ADAO & check shape
Ma = EM["Adjoint"].asMatrix(Xn)
- Ma = Ma.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Ma = Ma.reshape(Xn.size, Xn.size) # ADAO & check shape
M = EM["Direct"].appliedControledFormTo
- Xn_predicted = numpy.ravel( M( (Xn, Un) ) ).reshape((__n,1))
-            if CM is not None and "Tangent" in CM and Un is not None: # Caution: if Cm is also in M, it is counted twice!
+ Xn_predicted = numpy.ravel( M( (Xn, Un) ) ).reshape((__n, 1))
+            if CM is not None and "Tangent" in CM and Un is not None:  # Caution: if Cm is also in M, it is counted twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn_predicted = Xn_predicted + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
            # ---> By principle, M = Id, Q = 0
Mt = Ma = 1.
Q = QI = 0.
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
Xn_predicted = ApplyBounds( Xn_predicted, selfA._parameters["Bounds"] )
#
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
Ht = HO["Tangent"].asMatrix(Xn)
- Ht = Ht.reshape(Ynpu.size,Xn.size) # ADAO & check shape
+ Ht = Ht.reshape(Ynpu.size, Xn.size) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xn)
- Ha = Ha.reshape(Xn.size,Ynpu.size) # ADAO & check shape
+ Ha = Ha.reshape(Xn.size, Ynpu.size) # ADAO & check shape
H = HO["Direct"].appliedControledFormTo
#
if selfA._parameters["EstimationOf"] == "State":
- HX_predicted = numpy.ravel( H( (Xn_predicted, None) ) ).reshape((__p,1))
+ HX_predicted = numpy.ravel( H( (Xn_predicted, None) ) ).reshape((__p, 1))
_Innovation = Ynpu - HX_predicted
elif selfA._parameters["EstimationOf"] == "Parameters":
- HX_predicted = numpy.ravel( H( (Xn_predicted, Un) ) ).reshape((__p,1))
+ HX_predicted = numpy.ravel( H( (Xn_predicted, Un) ) ).reshape((__p, 1))
_Innovation = Ynpu - HX_predicted
-            if CM is not None and "Tangent" in CM and Un is not None: # Caution: if Cm is also in H, it is counted twice!
+            if CM is not None and "Tangent" in CM and Un is not None:  # Caution: if Cm is also in H, it is counted twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
_Innovation = _Innovation - Cm @ Un
#
Htstar = Ht @ Mt
        Hastar = Ma @ Ha
if Ynpu.size <= Xn.size:
_HNHt = numpy.dot(Ht, Q @ Ha) + numpy.dot(Htstar, Pn @ Hastar)
_A = R + _HNHt
- _u = numpy.linalg.solve( _A , _Innovation )
- Xs = Xn + (Pn @ (Hastar @ _u)).reshape((-1,1))
+ _u = numpy.linalg.solve( _A, _Innovation )
+ Xs = Xn + (Pn @ (Hastar @ _u)).reshape((-1, 1))
Ks = Pn @ (Hastar @ numpy.linalg.inv(_A))
else:
_HtRH = numpy.dot(Ha, QI @ Ht) + numpy.dot(Hastar, RI @ Htstar)
_A = numpy.linalg.inv(Pn) + _HtRH
- _u = numpy.linalg.solve( _A , numpy.dot(Hastar, RI @ _Innovation) )
- Xs = Xn + _u.reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.dot(Hastar, RI @ _Innovation) )
+ Xs = Xn + _u.reshape((-1, 1))
Ks = numpy.linalg.inv(_A) @ (Hastar @ RI.asfullmatrix(Ynpu.size))
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
#
if selfA._parameters["EstimationOf"] == "State":
Mt = EM["Tangent"].asMatrix(Xs)
- Mt = Mt.reshape(Xs.size,Xs.size) # ADAO & check shape
+ Mt = Mt.reshape(Xs.size, Xs.size) # ADAO & check shape
Ma = EM["Adjoint"].asMatrix(Xs)
- Ma = Ma.reshape(Xs.size,Xs.size) # ADAO & check shape
+ Ma = Ma.reshape(Xs.size, Xs.size) # ADAO & check shape
M = EM["Direct"].appliedControledFormTo
- Xn = numpy.ravel( M( (Xs, Un) ) ).reshape((__n,1))
- if CM is not None and "Tangent" in CM and Un is not None: # Attention : si Cm est aussi dans M, doublon !
+ Xn = numpy.ravel( M( (Xs, Un) ) ).reshape((__n, 1))
+ if CM is not None and "Tangent" in CM and Un is not None: # Attention : si Cm est aussi dans M, doublon !
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn = Xn + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
            # ---> By principle, M = Id, Q = 0
Mt = Ma = 1.
Xn = Xs
#
- Pn = Mt @ (Pn_predicted @ Ma)
-        Pn = (Pn + Pn.T) * 0.5 # Symmetry
-        Pn = Pn + mpr*numpy.trace( Pn ) * numpy.identity(Xn.size) # Positivity
+        Pn = Mt @ (Pn_predicted @ Ma)
+        Pn = (Pn + Pn.T) * 0.5  # Symmetry
+        Pn = Pn + mpr * numpy.trace( Pn ) * numpy.identity(Xn.size)  # Positivity
#
if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
Xn = ApplyBounds( Xn, selfA._parameters["Bounds"] )
#
-        Xa = Xn # Pointers
-        #--------------------------
+        Xa = Xn  # Pointers
+        # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("Pn", Pn)
- #--------------------------
+ # --------------------------
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
        # ---> with analysis
selfA.StoredVariables["Analysis"].store( Xa )
if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
- selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( H((Xa, Un)) )
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( H((Xa, None)) )
if selfA._toStore("InnovationAtCurrentAnalysis"):
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
        # ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( Xn_predicted )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HX_predicted )
        # ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T @ (BI @ (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T @ (RI @ _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( Pn )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2024 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+__doc__ = """
+ Constrained Unscented Kalman Filter
+"""
+__author__ = "Jean-Philippe ARGAUD"
+
+import numpy, scipy, copy
+from daCore.NumericObjects import GenerateWeightsAndSigmaPoints
+from daCore.PlatformInfo import PlatformInfo, vfloat
+from daCore.NumericObjects import ApplyBounds, ForceNumericBounds
+mpr = PlatformInfo().MachinePrecision()
+
+# ==============================================================================
+def ecw2ukf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="UKF"):
+ """
+ Constrained Unscented Kalman Filter
+ """
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ selfA._parameters["StoreInternalVariables"] = True
+ selfA._parameters["Bounds"] = ForceNumericBounds( selfA._parameters["Bounds"] )
+ #
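+    # Unscented weights for the mean (Wm) and the covariance (Wc), with the standardized sigma points (SC)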
+ wsp = GenerateWeightsAndSigmaPoints(
+ Nn = Xb.size,
+ EO = selfA._parameters["EstimationOf"],
+ VariantM = VariantM,
+ Alpha = selfA._parameters["Alpha"],
+ Beta = selfA._parameters["Beta"],
+ Kappa = selfA._parameters["Kappa"],
+ )
+ Wm, Wc, SC = wsp.get()
+ #
+    # Observation duration and sizes
+ if hasattr(Y, "stepnumber"):
+ duration = Y.stepnumber()
+ __p = numpy.cumprod(Y.shape())[-1]
+ else:
+ duration = 2
+ __p = numpy.size(Y)
+ #
+    # Precomputation of the inversions of B and R
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
+ BI = B.getI()
+ RI = R.getI()
+ #
+ __n = Xb.size
+ nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
+ #
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
+ Xn = Xb
+ if hasattr(B, "asfullmatrix"):
+ Pn = B.asfullmatrix(__n)
+ else:
+ Pn = B
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["Analysis"].store( Xb )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
+ elif selfA._parameters["nextStep"]:
+ Xn = selfA._getInternalState("Xn")
+ Pn = selfA._getInternalState("Pn")
+ #
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ XaMin = Xn
+ previousJMinimum = numpy.finfo(float).max
+ #
+ for step in range(duration - 1):
+ #
+ if U is not None:
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
+ else:
+ Un = numpy.ravel( U ).reshape((-1, 1))
+ else:
+ Un = None
+ #
+ Hm = HO["Direct"].appliedControledFormTo
+ if selfA._parameters["EstimationOf"] == "State":
+ Mm = EM["Direct"].appliedControledFormTo
+ if CM is not None and "Tangent" in CM and U is not None:
+ Cm = CM["Tangent"].asMatrix(Xn)
+ else:
+ Cm = None
+ #
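+        # Sigma points are drawn around the current state, using a matrix square root of the covariance Pn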
+ # Pndemi = numpy.real(scipy.linalg.cholesky(Pn))
+ Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
+ Xnmu = Xn + Pndemi @ SC
+ nbSpts = SC.shape[1]
+ #
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ for point in range(nbSpts):
+ Xnmu[:, point] = ApplyBounds( Xnmu[:, point], selfA._parameters["Bounds"] )
+ #
+ if selfA._parameters["EstimationOf"] == "State":
+ XEnnmu = Mm( [(Xnmu[:, point].reshape((-1, 1)), Un) for point in range(nbSpts)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
+            if Cm is not None and Un is not None:  # Caution: if Cm is also in M, it is counted twice!
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
+ XEnnmu = XEnnmu + Cm @ Un
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+            # ---> By principle, M = Id, Q = 0
+ XEnnmu = numpy.array( Xnmu )
+ #
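+        # Forecast state: weighted mean of the propagated sigma points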
+ Xhmn = ( XEnnmu * Wm ).sum(axis=1)
+ #
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ Xhmn = ApplyBounds( Xhmn, selfA._parameters["Bounds"] )
+ #
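+        # Forecast error covariance: model noise Q, if any, plus the weighted spread of the sigma points around Xhmn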
+ if selfA._parameters["EstimationOf"] == "State":
+ Pmn = copy.copy(Q)
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+ Pmn = 0.
+ for point in range(nbSpts):
+ dXEnnmuXhmn = XEnnmu[:, point].flat - Xhmn
+ Pmn += Wc[point] * numpy.outer(dXEnnmuXhmn, dXEnnmuXhmn)
+ #
+ if selfA._parameters["EstimationOf"] == "Parameters" and selfA._parameters["Bounds"] is not None:
+ Pmndemi = selfA._parameters["Reconditioner"] * numpy.real(scipy.linalg.sqrtm(Pmn))
+ else:
+ Pmndemi = numpy.real(scipy.linalg.sqrtm(Pmn))
+ #
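+        # New sigma points are drawn around the forecast state before applying the observation operator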
+ Xnnmu = Xhmn.reshape((-1, 1)) + Pmndemi @ SC
+ #
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ for point in range(nbSpts):
+ Xnnmu[:, point] = ApplyBounds( Xnnmu[:, point], selfA._parameters["Bounds"] )
+ #
+ Ynnmu = Hm( [(Xnnmu[:, point], None) for point in range(nbSpts)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
+ #
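+        # Observed forecast: weighted mean of the observed sigma points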
+ Yhmn = ( Ynnmu * Wm ).sum(axis=1)
+ #
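+        # Innovation covariance Pyyn and state-observation cross-covariance Pxyn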
+ Pyyn = copy.copy(R)
+ Pxyn = 0.
+ for point in range(nbSpts):
+ dYnnmuYhmn = Ynnmu[:, point].flat - Yhmn
+ dXnnmuXhmn = Xnnmu[:, point].flat - Xhmn
+ Pyyn += Wc[point] * numpy.outer(dYnnmuYhmn, dYnnmuYhmn)
+ Pxyn += Wc[point] * numpy.outer(dXnnmuXhmn, dYnnmuYhmn)
+ #
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
+ else:
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
+ _Innovation = Ynpu - Yhmn.reshape((-1, 1))
+ if selfA._parameters["EstimationOf"] == "Parameters":
+            if Cm is not None and Un is not None:  # Caution: if Cm is also in H, it is counted twice!
+ _Innovation = _Innovation - Cm @ Un
+ #
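+        # Kalman gain, then unscented update of the state and of the covariance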
+ Kn = Pxyn @ scipy.linalg.inv(Pyyn)
+ Xn = Xhmn.reshape((-1, 1)) + Kn @ _Innovation
+ Pn = Pmn - Kn @ (Pyyn @ Kn.T)
+ #
+ if selfA._parameters["Bounds"] is not None and selfA._parameters["ConstrainedBy"] == "EstimateProjection":
+ Xn = ApplyBounds( Xn, selfA._parameters["Bounds"] )
+ #
+        Xa = Xn  # Pointers
+ # --------------------------
+ selfA._setInternalState("Xn", Xn)
+ selfA._setInternalState("Pn", Pn)
+ # --------------------------
+ #
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+        # ---> with analysis
+ selfA.StoredVariables["Analysis"].store( Xa )
+ if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( Hm((Xa, None)) )
+ if selfA._toStore("InnovationAtCurrentAnalysis"):
+ selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
+        # ---> with current state
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xn )
+ if selfA._toStore("ForecastState"):
+ selfA.StoredVariables["ForecastState"].store( Xhmn )
+ if selfA._toStore("ForecastCovariance"):
+ selfA.StoredVariables["ForecastCovariance"].store( Pmn )
+ if selfA._toStore("BMA"):
+ selfA.StoredVariables["BMA"].store( Xhmn - Xa )
+ if selfA._toStore("InnovationAtCurrentState"):
+ selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
+ if selfA._toStore("SimulatedObservationAtCurrentState") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yhmn )
+        # ---> others
+ if selfA._parameters["StoreInternalVariables"] \
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
+ Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
+ Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
+ J = Jb + Jo
+ selfA.StoredVariables["CostFunctionJb"].store( Jb )
+ selfA.StoredVariables["CostFunctionJo"].store( Jo )
+ selfA.StoredVariables["CostFunctionJ" ].store( J )
+ #
+ if selfA._toStore("IndexOfOptimum") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
+ if selfA._toStore("IndexOfOptimum"):
+ selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
+ if selfA._toStore("CurrentOptimum"):
+ selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
+ if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
+ if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
+ if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
+ if selfA._toStore("CostFunctionJAtCurrentOptimum"):
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
+ if selfA._parameters["EstimationOf"] == "Parameters" \
+ and J < previousJMinimum:
+ previousJMinimum = J
+ XaMin = Xa
+ if selfA._toStore("APosterioriCovariance"):
+ covarianceXaMin = selfA.StoredVariables["APosterioriCovariance"][-1]
+ #
+    # Additional final storage of the optimum for parameter estimation
+ # ----------------------------------------------------------------------
+ if selfA._parameters["EstimationOf"] == "Parameters":
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["Analysis"].store( XaMin )
+ if selfA._toStore("APosterioriCovariance"):
+ selfA.StoredVariables["APosterioriCovariance"].store( covarianceXaMin )
+ if selfA._toStore("BMA"):
+ selfA.StoredVariables["BMA"].store( numpy.ravel(Xb) - numpy.ravel(XaMin) )
+ #
+ return 0
+
+# ==============================================================================
+if __name__ == "__main__":
+ print('\n AUTODIAGNOSTIC\n')
"""
__author__ = "Jean-Philippe ARGAUD"
-import numpy, logging, copy, math
+import numpy, logging
from daCore.NumericObjects import ApplyBounds, VariablesAndIncrementsBounds
from daCore.NumericObjects import GenerateRandomPointInHyperSphere
from daCore.NumericObjects import GetNeighborhoodTopology
Xini,
selfA._name,
0.5,
- )
- #
+ )
+
def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.asarray( x ).reshape((-1,1))
- _HX = numpy.asarray( Hm( _X ) ).reshape((-1,1))
+ _X = numpy.asarray( x ).reshape((-1, 1))
+ _HX = numpy.asarray( Hm( _X ) ).reshape((-1, 1))
_Innovation = Y - _HX
#
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "DA"]:
if BI is None or RI is None:
raise ValueError("Background and Observation error covariance matrices has to be properly defined!")
Jb = 0.5 * (_X - Xb).T @ (BI @ (_X - Xb))
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = 0.5 * _Innovation.T @ _Innovation
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = numpy.sum( numpy.abs(_Innovation) )
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = numpy.max( numpy.abs(_Innovation) )
#
J = vfloat( Jb ) + vfloat( Jo )
#
return J, vfloat( Jb ), vfloat( Jo )
- #
+
def KeepRunningCondition(__step, __nbfct):
if __step >= selfA._parameters["MaximumNumberOfIterations"]:
- logging.debug("%s Stopping search because the number %i of evolving iterations is exceeding the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"]))
+ logging.debug("%s Stopping search because the number %i of evolving iterations is exceeding the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"])) # noqa: E501
return False
elif __nbfct >= selfA._parameters["MaximumNumberOfFunctionEvaluations"]:
- logging.debug("%s Stopping search because the number %i of function evaluations is exceeding the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"]))
+ logging.debug("%s Stopping search because the number %i of function evaluations is exceeding the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"])) # noqa: E501
return False
else:
return True
    # Internal parameters
# -------------------
__nbI = selfA._parameters["NumberOfInsects"]
-    __nbP = len(Xini) # Dimension or number of parameters
+    __nbP = len(Xini)  # Dimension or number of parameters
#
__iw = float( selfA._parameters["InertiaWeight"] )
__sa = float( selfA._parameters["SocialAcceleration"] )
__ca = float( selfA._parameters["CognitiveAcceleration"] )
__vc = float( selfA._parameters["VelocityClampingFactor"] )
- logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca)))
+ logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca))) # noqa: E501
logging.debug("%s Social acceleration (recall to the best insect value of the group) = %s"%(selfA._name, str(__sa)))
logging.debug("%s Inertial weight = %s"%(selfA._name, str(__iw)))
logging.debug("%s Velocity clamping factor = %s"%(selfA._name, str(__vc)))
LimitPlace = Bounds
LimitSpeed = BoxBounds
#
-    nbfct = 1 # Number of evaluations
-    JXini, JbXini, JoXini = CostFunction(Xini,selfA._parameters["QualityCriterion"])
+    nbfct = 1  # Number of evaluations
+    JXini, JbXini, JoXini = CostFunction(Xini, selfA._parameters["QualityCriterion"])
#
-    Swarm = numpy.zeros((__nbI,4,__nbP)) # 4 because of (x,v,gbest,lbest)
- for __p in range(__nbP) :
- Swarm[:,0,__p] = rand( low=LimitPlace[__p,0], high=LimitPlace[__p,1], size=__nbI) # Position
- Swarm[:,1,__p] = rand( low=LimitSpeed[__p,0], high=LimitSpeed[__p,1], size=__nbI) # Velocity
- logging.debug("%s Initialisation of the swarm with %i insects of size %i "%(selfA._name,Swarm.shape[0],Swarm.shape[2]))
+    Swarm = numpy.zeros((__nbI, 4, __nbP))  # 4 because of (x,v,gbest,lbest)
+ for __p in range(__nbP):
+ Swarm[:, 0, __p] = rand( low=LimitPlace[__p, 0], high=LimitPlace[__p, 1], size=__nbI) # Position
+ Swarm[:, 1, __p] = rand( low=LimitSpeed[__p, 0], high=LimitSpeed[__p, 1], size=__nbI) # Velocity
+ logging.debug("%s Initialisation of the swarm with %i insects of size %i "%(selfA._name, Swarm.shape[0], Swarm.shape[2])) # noqa: E501
#
__nbh = GetNeighborhoodTopology( selfA._parameters["SwarmTopology"], list(range(__nbI)) )
#
-    qSwarm = JXini * numpy.ones((__nbI,6)) # Qualities (J, Jb, Jo) per insect + per neighborhood
+    qSwarm = JXini * numpy.ones((__nbI, 6))  # Qualities (J, Jb, Jo) per insect + per neighborhood
for __i in range(__nbI):
nbfct += 1
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],selfA._parameters["QualityCriterion"])
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], selfA._parameters["QualityCriterion"])
if JTest < JXini:
- Swarm[__i,2,:] = Swarm[__i,0,:] # xBest
- qSwarm[__i,:3] = (JTest, JbTest, JoTest)
+ Swarm[__i, 2, :] = Swarm[__i, 0, :] # xBest
+ qSwarm[__i, :3 ] = (JTest, JbTest, JoTest)
else:
- Swarm[__i,2,:] = Xini # xBest
- qSwarm[__i,:3] = (JXini, JbXini, JoXini)
+ Swarm[__i, 2, :] = Xini # xBest
+ qSwarm[__i, :3 ] = (JXini, JbXini, JoXini)
logging.debug("%s Initialisation of the best previous insects"%selfA._name)
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
for __i in range(__nbI):
- Swarm[__i,3,:] = xBest # lBest
- qSwarm[__i,3:] = qSwarm[iBest,:3]
+ Swarm[__i, 3, :] = xBest # lBest
+ qSwarm[__i, 3: ] = qSwarm[iBest, :3]
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
#
for __i in range(__nbI):
__rct = rand(size=__nbP)
__rst = rand(size=__nbP)
- __xPoint = Swarm[__i,0,:]
+ __xPoint = Swarm[__i, 0, :]
# Points
- __pPoint = __xPoint + __ca * __rct * (Swarm[__i,2,:] - __xPoint)
- __lPoint = __xPoint + __sa * __rst * (Swarm[__i,3,:] - __xPoint)
+ __pPoint = __xPoint + __ca * __rct * (Swarm[__i, 2, :] - __xPoint)
+ __lPoint = __xPoint + __sa * __rst * (Swarm[__i, 3, :] - __xPoint)
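+        # Gravity center of the current point, its own best known point and the neighborhood best point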
__gPoint = (__xPoint + __pPoint + __lPoint) / 3
__radius = numpy.linalg.norm(__gPoint - __xPoint)
__rPoint = GenerateRandomPointInHyperSphere( __gPoint, __radius )
        # Velocity update
- __value = __iw * Swarm[__i,1,:] + __rPoint - __xPoint
- Swarm[__i,1,:] = ApplyBounds( __value, LimitSpeed )
+ __value = __iw * Swarm[__i, 1, :] + __rPoint - __xPoint
+ Swarm[__i, 1, :] = ApplyBounds( __value, LimitSpeed )
        # Position update
- __value = __xPoint + Swarm[__i,1,:]
- Swarm[__i,0,:] = ApplyBounds( __value, LimitPlace )
+ __value = __xPoint + Swarm[__i, 1, :]
+ Swarm[__i, 0, :] = ApplyBounds( __value, LimitPlace )
#
nbfct += 1
        # Evaluate
- JTest, JbTest, JoTest = CostFunction(__xPoint,selfA._parameters["QualityCriterion"])
+ JTest, JbTest, JoTest = CostFunction(__xPoint, selfA._parameters["QualityCriterion"])
        # Update lbest
- if JTest < qSwarm[__i,0]:
- Swarm[__i,2,:] = Swarm[__i,0,:]
- qSwarm[__i,:3] = (JTest, JbTest, JoTest)
+ if JTest < qSwarm[__i, 0]:
+ Swarm[__i, 2, :] = Swarm[__i, 0, :]
+ qSwarm[__i, :3 ] = (JTest, JbTest, JoTest)
#
for __i in range(__nbI):
        # Update gbest
- __im = numpy.argmin( [qSwarm[__v,0] for __v in __nbh[__i]] )
- __il = __nbh[__i][__im] # Best in NB
- if qSwarm[__il,0] < qSwarm[__i,3]:
- Swarm[__i,3,:] = Swarm[__il,2,:] # lBest
- qSwarm[__i,3:] = qSwarm[__il,:3]
+ __im = numpy.argmin( [qSwarm[__v, 0] for __v in __nbh[__i]] )
+ __il = __nbh[__i][__im] # Best in NB
+ if qSwarm[__il, 0] < qSwarm[__i, 3]:
+ Swarm[__i, 3, :] = Swarm[__il, 2, :] # lBest
+ qSwarm[__i, 3: ] = qSwarm[__il, :3]
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
if selfA._toStore("SimulatedObservationAtCurrentState"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( xBest ) )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
-    logging.debug("%s Step %i: insect %i is the best one with J = %.7f"%(selfA._name,step,iBest,qSwarm[iBest,0]))
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
+    logging.debug("%s Step %i: insect %i is the best one with J = %.7f"%(selfA._name, step, iBest, qSwarm[iBest, 0])) # noqa: E501
#
    # Obtaining the analysis
# ----------------------
    # Additional calculations and/or storage
# ---------------------------------------
if selfA._toStore("OMA") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("SimulatedObservationAtOptimum"):
HXa = Hm(Xa)
if selfA._toStore("Innovation") or \
- selfA._toStore("OMB") or \
- selfA._toStore("SimulatedObservationAtBackground"):
+ selfA._toStore("OMB") or \
+ selfA._toStore("SimulatedObservationAtBackground"):
HXb = Hm(Xb)
Innovation = Y - HXb
if selfA._toStore("Innovation"):
# Initialisations
# ---------------
Hm = HO["Tangent"].asMatrix(Xb)
- Hm = Hm.reshape(Y.size,Xb.size) # ADAO & check shape
+ Hm = Hm.reshape(Y.size, Xb.size) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xb)
- Ha = Ha.reshape(Xb.size,Y.size) # ADAO & check shape
+ Ha = Ha.reshape(Xb.size, Y.size) # ADAO & check shape
#
if HO["AppliedInX"] is not None and "HXb" in HO["AppliedInX"]:
HXb = numpy.asarray(HO["AppliedInX"]["HXb"])
else:
HXb = Hm @ Xb
- HXb = HXb.reshape((-1,1))
+ HXb = HXb.reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size,HXb.size))
+ raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size, HXb.size)) # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape,HXb.shape))
+ raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape, HXb.shape)) # noqa: E501
#
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("MahalanobisConsistency") or \
- (Y.size > Xb.size):
- if isinstance(B,numpy.ndarray):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("MahalanobisConsistency") or \
+ (Y.size > Xb.size):
+ if isinstance(B, numpy.ndarray):
BI = numpy.linalg.inv(B)
else:
BI = B.getI()
#
Innovation = Y - HXb
if selfA._parameters["EstimationOf"] == "Parameters":
- if CM is not None and "Tangent" in CM and U is not None: # Attention : si Cm est aussi dans H, doublon !
+ if CM is not None and "Tangent" in CM and U is not None: # Attention : si Cm est aussi dans H, doublon !
Cm = CM["Tangent"].asMatrix(Xb)
- Cm = Cm.reshape(Xb.size,U.size) # ADAO & check shape
- Innovation = Innovation - (Cm @ U).reshape((-1,1))
+ Cm = Cm.reshape(Xb.size, U.size) # ADAO & check shape
+ Innovation = Innovation - (Cm @ U).reshape((-1, 1))
#
    # Computation of the analysis
# -------------------
if Y.size <= Xb.size:
_HNHt = numpy.dot(Hm, B @ Ha)
_A = R + _HNHt
- _u = numpy.linalg.solve( _A , numpy.ravel(Innovation) )
- Xa = Xb + (B @ numpy.ravel(Ha @ _u)).reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.ravel(Innovation) )
+ Xa = Xb + (B @ numpy.ravel(Ha @ _u)).reshape((-1, 1))
else:
_HtRH = numpy.dot(Ha, RI @ Hm)
_A = BI + _HtRH
- _u = numpy.linalg.solve( _A , numpy.ravel(numpy.dot(Ha, RI @ numpy.ravel(Innovation))) )
- Xa = Xb + _u.reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.ravel(numpy.dot(Ha, RI @ numpy.ravel(Innovation))) )
+ Xa = Xb + _u.reshape((-1, 1))
#
- if __storeState: selfA._setInternalState("Xn", Xa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
#
    # Computation of the cost function
# --------------------------
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("OMA") or \
- selfA._toStore("InnovationAtCurrentAnalysis") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("MahalanobisConsistency") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum") or \
- selfA._toStore("SimulatedObservationAtCurrentState") or \
- selfA._toStore("SimulatedObservationAtOptimum") or \
- selfA._toStore("SimulationQuantiles"):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("OMA") or \
+ selfA._toStore("InnovationAtCurrentAnalysis") or \
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("MahalanobisConsistency") or \
+ selfA._toStore("SimulatedObservationAtCurrentOptimum") or \
+ selfA._toStore("SimulatedObservationAtCurrentState") or \
+ selfA._toStore("SimulatedObservationAtOptimum") or \
+ selfA._toStore("SimulationQuantiles"):
HXa = Hm @ Xa
- oma = Y - HXa.reshape((-1,1))
+ oma = Y - HXa.reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("MahalanobisConsistency"):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("MahalanobisConsistency"):
Jb = vfloat( 0.5 * (Xa - Xb).T @ (BI @ (Xa - Xb)) )
Jo = vfloat( 0.5 * oma.T * (RI * oma) )
J = Jb + Jo
    # Computation of the analysis covariance
# ---------------------------------
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles"):
- if (Y.size <= Xb.size): K = B * Ha * (R + numpy.dot(Hm, B * Ha)).I
- elif (Y.size > Xb.size): K = (BI + numpy.dot(Ha, RI * Hm)).I * Ha * RI
+ selfA._toStore("SimulationQuantiles"):
+ if (Y.size <= Xb.size):
+ K = B * Ha * (R + numpy.dot(Hm, B * Ha)).I
+ elif (Y.size > Xb.size):
+ K = (BI + numpy.dot(Ha, RI * Hm)).I * Ha * RI
A = B - K * Hm * B
-        A = (A + A.T) * 0.5 # Symmetry
-        A = A + mpr*numpy.trace( A ) * numpy.identity(Xa.size) # Positivity
+        A = (A + A.T) * 0.5  # Symmetry
+        A = A + mpr * numpy.trace( A ) * numpy.identity(Xa.size)  # Positivity
if min(A.shape) != max(A.shape):
-            raise ValueError("The %s a posteriori covariance matrix A is of shape %s, whereas it has to be a square matrix. There is an error in the observation operator, please check it."%(selfA._name,str(A.shape)))
+            raise ValueError("The %s a posteriori covariance matrix A is of shape %s, whereas it has to be a square matrix. There is an error in the observation operator, please check it."%(selfA._name, str(A.shape))) # noqa: E501
if (numpy.diag(A) < 0).any():
- raise ValueError("The %s a posteriori covariance matrix A has at least one negative value %.2e on its diagonal. There is an error in the observation operator or in the covariances, please check them."%(selfA._name,min(numpy.diag(A))))
-        if logging.getLogger().level < logging.WARNING: # The check is only done in debug mode
+ raise ValueError("The %s a posteriori covariance matrix A has at least one negative value %.2e on its diagonal. There is an error in the observation operator or in the covariances, please check them."%(selfA._name, min(numpy.diag(A)))) # noqa: E501
+        if logging.getLogger().level < logging.WARNING:  # The check is only done in debug mode
try:
numpy.linalg.cholesky( A )
- except:
- raise ValueError("The %s a posteriori covariance matrix A is not symmetric positive-definite. Please check your a priori covariances and your observation operator."%(selfA._name,))
+ except Exception:
+ raise ValueError("The %s a posteriori covariance matrix A is not symmetric positive-definite. Please check your a priori covariances and your observation operator."%(selfA._name,)) # noqa: E501
selfA.StoredVariables["APosterioriCovariance"].store( A )
#
    # Additional calculations and/or storage
TraceR = R.trace(Y.size)
selfA.StoredVariables["SigmaObs2"].store( vfloat( Innovation.T @ oma ) / TraceR )
if selfA._toStore("SigmaBck2"):
- selfA.StoredVariables["SigmaBck2"].store( vfloat( (Innovation.T @ (Hm @ (numpy.ravel(Xa) - numpy.ravel(Xb))))/(Hm * (B * Hm.T)).trace() ) )
+ selfA.StoredVariables["SigmaBck2"].store( vfloat( (Innovation.T @ (Hm @ (numpy.ravel(Xa) - numpy.ravel(Xb)))) / (Hm * (B * Hm.T)).trace() ) ) # noqa: E501
if selfA._toStore("MahalanobisConsistency"):
- selfA.StoredVariables["MahalanobisConsistency"].store( float( 2.*J/Innovation.size ) )
+ selfA.StoredVariables["MahalanobisConsistency"].store( float( 2. * J / Innovation.size ) )
if selfA._toStore("SimulationQuantiles"):
H = HO["Direct"].appliedTo
QuantilesEstimations(selfA, A, Xa, HXa, H, Hm)
# Initialisations
# ---------------
if numpy.array(EOS).size == 0:
- raise ValueError("EnsembleOfSnapshots has not to be void, but an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+        raise ValueError("EnsembleOfSnapshots must not be empty: it has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")  # noqa: E501
if isinstance(EOS, (numpy.ndarray, numpy.matrix)):
__EOS = numpy.asarray(EOS)
elif isinstance(EOS, (list, tuple, daCore.Persistence.Persistence)):
__EOS = numpy.stack([numpy.ravel(_sn) for _sn in EOS], axis=1)
else:
- raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+ raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).") # noqa: E501
__dimS, __nbmS = __EOS.shape
- logging.debug("%s Using a collection of %i snapshots of individual size of %i"%(selfA._name,__nbmS,__dimS))
+    logging.debug("%s Using a collection of %i snapshots of individual size %i"%(selfA._name, __nbmS, __dimS))
#
if selfA._parameters["Variant"] in ["DEIM", "PositioningByDEIM"]:
__LcCsts = False
else:
__ExcludedMagicPoints = ()
if __LcCsts and "NameOfLocations" in selfA._parameters:
- if isinstance(selfA._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) and len(selfA._parameters["NameOfLocations"]) == __dimS:
+ if isinstance(selfA._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) and len(selfA._parameters["NameOfLocations"]) == __dimS: # noqa: E501
__NameOfLocations = selfA._parameters["NameOfLocations"]
else:
__NameOfLocations = ()
numpy.arange(__EOS.shape[0]),
__ExcludedMagicPoints,
assume_unique = True,
- )
+ )
else:
__IncludedMagicPoints = []
#
if "MaximumNumberOfLocations" in selfA._parameters and "MaximumRBSize" in selfA._parameters:
- selfA._parameters["MaximumRBSize"] = min(selfA._parameters["MaximumNumberOfLocations"],selfA._parameters["MaximumRBSize"])
+ selfA._parameters["MaximumRBSize"] = min(selfA._parameters["MaximumNumberOfLocations"], selfA._parameters["MaximumRBSize"]) # noqa: E501
elif "MaximumNumberOfLocations" in selfA._parameters:
selfA._parameters["MaximumRBSize"] = selfA._parameters["MaximumNumberOfLocations"]
elif "MaximumRBSize" in selfA._parameters:
__sv, __svsq, __tisv, __qisv = SingularValuesEstimation( __EOS )
if vt(scipy.version.version) < vt("1.1.0"):
__rhoM = scipy.linalg.orth( __EOS )
- __rhoM = numpy.compress(__sv > selfA._parameters["EpsilonEIM"]*max(__sv), __rhoM, axis=1)
+ __rhoM = numpy.compress(__sv > selfA._parameters["EpsilonEIM"] * max(__sv), __rhoM, axis=1)
else:
__rhoM = scipy.linalg.orth( __EOS, selfA._parameters["EpsilonEIM"] )
__lVs, __svdM = __rhoM.shape
assert __lVs == __dimS, "Différence entre lVs et dim(EOS)"
- __maxM = min(__maxM,__svdM)
+ __maxM = min(__maxM, __svdM)
#
if __LcCsts and len(__IncludedMagicPoints) > 0:
__iM = numpy.argmax( numpy.abs(
- numpy.take(__rhoM[:,0], __IncludedMagicPoints, mode='clip')
- ))
+ numpy.take(__rhoM[:, 0], __IncludedMagicPoints, mode='clip')
+ ))
else:
__iM = numpy.argmax( numpy.abs(
- __rhoM[:,0]
- ))
+ __rhoM[:, 0]
+ ))
#
- __mu = [None,] # Convention
+ __mu = [None,] # Convention
__I = [__iM,]
- __Q = __rhoM[:,0].reshape((-1,1))
+ __Q = __rhoM[:, 0].reshape((-1, 1))
__errors = []
#
- __M = 1 # Car le premier est déjà construit
+    __M = 1  # Because the first one is already built
if selfA._toStore("Residus"):
__eM, _ = InterpolationErrorByColumn(
__EOS, __Q, __I, __M,
# ------
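+    # Greedy loop: at each step, the next orthonormal mode is interpolated
+    # on the current magic points, and the location of maximal residual is
+    # appended to the set of optimal points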
while __M < __maxM:
#
- __restrictedQi = __Q[__I,:]
+ __restrictedQi = __Q[__I, :]
if __M > 1:
__Qi_inv = numpy.linalg.inv(__restrictedQi)
else:
__Qi_inv = 1. / __restrictedQi
#
- __restrictedrhoMi = __rhoM[__I,__M].reshape((-1,1))
+ __restrictedrhoMi = __rhoM[__I, __M].reshape((-1, 1))
#
if __M > 1:
- __interpolator = numpy.dot(__Q,numpy.dot(__Qi_inv,__restrictedrhoMi))
+ __interpolator = numpy.dot(__Q, numpy.dot(__Qi_inv, __restrictedrhoMi))
else:
- __interpolator = numpy.outer(__Q,numpy.outer(__Qi_inv,__restrictedrhoMi))
+ __interpolator = numpy.outer(__Q, numpy.outer(__Qi_inv, __restrictedrhoMi))
#
- __residuM = __rhoM[:,__M].reshape((-1,1)) - __interpolator
+ __residuM = __rhoM[:, __M].reshape((-1, 1)) - __interpolator
#
if __LcCsts and len(__IncludedMagicPoints) > 0:
__iM = numpy.argmax( numpy.abs(
numpy.take(__residuM, __IncludedMagicPoints, mode='clip')
- ))
+ ))
else:
__iM = numpy.argmax( numpy.abs(
__residuM
- ))
- __Q = numpy.column_stack((__Q, __rhoM[:,__M]))
+ ))
+ __Q = numpy.column_stack((__Q, __rhoM[:, __M]))
#
__I.append(__iM)
- __mu.append(None) # Convention
+ __mu.append(None) # Convention
if selfA._toStore("Residus"):
__eM, _ = InterpolationErrorByColumn(
- __EOS, __Q, __I, __M+1,
+ __EOS, __Q, __I, __M + 1,
__ErrorNorm = selfA._parameters["ErrorNorm"],
__LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints)
__errors.append(__eM)
#
__M = __M + 1
#
- #--------------------------
- if len(__errors)>0 and __errors[-1] < selfA._parameters["EpsilonEIM"]:
- logging.debug("%s %s (%.1e)"%(selfA._name,"The convergence is obtained when reaching the required EIM tolerance",selfA._parameters["EpsilonEIM"]))
+ # --------------------------
+ if len(__errors) > 0 and __errors[-1] < selfA._parameters["EpsilonEIM"]:
+        logging.debug("%s %s (%.1e)"%(selfA._name, "Convergence is obtained on reaching the required EIM tolerance", selfA._parameters["EpsilonEIM"]))  # noqa: E501
if __M >= __maxM:
- logging.debug("%s %s (%i)"%(selfA._name,"The convergence is obtained when reaching the maximum number of RB dimension",__maxM))
- logging.debug("%s The RB of size %i has been correctly build"%(selfA._name,__Q.shape[1]))
- logging.debug("%s There are %i points that have been excluded from the potential optimal points"%(selfA._name,len(__ExcludedMagicPoints)))
+        logging.debug("%s %s (%i)"%(selfA._name, "The search is stopped on reaching the maximum RB dimension", __maxM))  # noqa: E501
+    logging.debug("%s The RB of size %i has been correctly built"%(selfA._name, __Q.shape[1]))
+ logging.debug("%s There are %i points that have been excluded from the potential optimal points"%(selfA._name, len(__ExcludedMagicPoints))) # noqa: E501
if hasattr(selfA, "StoredVariables"):
selfA.StoredVariables["OptimalPoints"].store( __I )
+ if selfA._toStore("ReducedBasisMus"):
+ selfA.StoredVariables["ReducedBasisMus"].store( __mu )
if selfA._toStore("ReducedBasis"):
selfA.StoredVariables["ReducedBasis"].store( __Q )
if selfA._toStore("Residus"):
# Initialisations
# ---------------
if numpy.array(EOS).size == 0:
- raise ValueError("EnsembleOfSnapshots has not to be void, but an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+        raise ValueError("EnsembleOfSnapshots must not be empty: it has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")  # noqa: E501
if isinstance(EOS, (numpy.ndarray, numpy.matrix)):
__EOS = numpy.asarray(EOS)
elif isinstance(EOS, (list, tuple, daCore.Persistence.Persistence)):
__EOS = numpy.stack([numpy.ravel(_sn) for _sn in EOS], axis=1)
else:
- raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+ raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).") # noqa: E501
__dimS, __nbmS = __EOS.shape
- logging.debug("%s Using a collection of %i snapshots of individual size of %i"%(selfA._name,__nbmS,__dimS))
+    logging.debug("%s Using a collection of %i snapshots of individual size %i"%(selfA._name, __nbmS, __dimS))
#
if selfA._parameters["Variant"] in ["EIM", "PositioningByEIM"]:
__LcCsts = False
else:
__ExcludedMagicPoints = ()
if __LcCsts and "NameOfLocations" in selfA._parameters:
- if isinstance(selfA._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) and len(selfA._parameters["NameOfLocations"]) == __dimS:
+ if isinstance(selfA._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) and len(selfA._parameters["NameOfLocations"]) == __dimS: # noqa: E501
__NameOfLocations = selfA._parameters["NameOfLocations"]
else:
__NameOfLocations = ()
numpy.arange(__EOS.shape[0]),
__ExcludedMagicPoints,
assume_unique = True,
- )
+ )
else:
__IncludedMagicPoints = []
#
if "MaximumNumberOfLocations" in selfA._parameters and "MaximumRBSize" in selfA._parameters:
- selfA._parameters["MaximumRBSize"] = min(selfA._parameters["MaximumNumberOfLocations"],selfA._parameters["MaximumRBSize"])
+ selfA._parameters["MaximumRBSize"] = min(selfA._parameters["MaximumNumberOfLocations"], selfA._parameters["MaximumRBSize"]) # noqa: E501
elif "MaximumNumberOfLocations" in selfA._parameters:
selfA._parameters["MaximumRBSize"] = selfA._parameters["MaximumNumberOfLocations"]
elif "MaximumRBSize" in selfA._parameters:
#
__mu = []
__I = []
- __Q = numpy.empty(__dimS).reshape((-1,1))
+ __Q = numpy.empty(__dimS).reshape((-1, 1))
__errors = []
#
__M = 0
__ErrorNorm = selfA._parameters["ErrorNorm"],
__LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints,
__CDM = True, __RMU = rmu,
- )
+ )
__errors.append(__eM)
#
# Boucle
if __M > 1:
__Q = numpy.column_stack((__Q, __rhoM))
else:
- __Q = __rhoM.reshape((-1,1))
+ __Q = __rhoM.reshape((-1, 1))
__I.append(__iM)
#
__eM, __muM, __residuM = InterpolationErrorByColumn(
__ErrorNorm = selfA._parameters["ErrorNorm"],
__LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints,
__CDM = True, __RMU = rmu, __FTL = True,
- )
+ )
__errors.append(__eM)
#
- #--------------------------
+ # --------------------------
if __errors[-1] < selfA._parameters["EpsilonEIM"]:
- logging.debug("%s %s (%.1e)"%(selfA._name,"The convergence is obtained when reaching the required EIM tolerance",selfA._parameters["EpsilonEIM"]))
+        logging.debug("%s %s (%.1e)"%(selfA._name, "Convergence is obtained on reaching the required EIM tolerance", selfA._parameters["EpsilonEIM"]))  # noqa: E501
if __M >= __maxM:
- logging.debug("%s %s (%i)"%(selfA._name,"The convergence is obtained when reaching the maximum number of RB dimension",__maxM))
- logging.debug("%s The RB of size %i has been correctly build"%(selfA._name,__Q.shape[1]))
- logging.debug("%s There are %i points that have been excluded from the potential optimal points"%(selfA._name,len(__ExcludedMagicPoints)))
+        logging.debug("%s %s (%i)"%(selfA._name, "The search is stopped on reaching the maximum RB dimension", __maxM))  # noqa: E501
+    logging.debug("%s The RB of size %i has been correctly built"%(selfA._name, __Q.shape[1]))
+ logging.debug("%s There are %i points that have been excluded from the potential optimal points"%(selfA._name, len(__ExcludedMagicPoints))) # noqa: E501
if hasattr(selfA, "StoredVariables"):
selfA.StoredVariables["OptimalPoints"].store( __I )
+ if selfA._toStore("ReducedBasisMus"):
+ selfA.StoredVariables["ReducedBasisMus"].store( __mu )
if selfA._toStore("ReducedBasis"):
selfA.StoredVariables["ReducedBasis"].store( __Q )
if selfA._toStore("Residus"):
return __mu, __I, __Q, __errors
# ==============================================================================
-def EIM_online(selfA, QEIM, gJmu = None, mPoints = None, mu = None, PseudoInverse = True, rbDimension = None, Verbose = False):
+def EIM_online(selfA, QEIM, gJmu = None, mPoints = None, mu = None,
+ PseudoInverse = True, rbDimension = None, Verbose = False):
"""
Reconstruction du champ complet
"""
if gJmu is None and mu is None:
- raise ValueError("Either measurements or parameters has to be given as a list, both can not be None simultaneously.")
+        raise ValueError("Either measurements or parameters have to be given as a list; they cannot both be None simultaneously.")  # noqa: E501
if mPoints is None:
raise ValueError("List of optimal locations for measurements has to be given.")
if gJmu is not None:
if len(gJmu) > len(mPoints):
- raise ValueError("The number of measurements (%i) has to be less or equal to the number of optimal locations (%i)."%(len(gJmu),len(mPoints)))
+            raise ValueError("The number of measurements (%i) has to be less than or equal to the number of optimal locations (%i)."%(len(gJmu), len(mPoints)))  # noqa: E501
if len(gJmu) > QEIM.shape[1]:
- raise ValueError("The number of measurements (%i) in optimal locations has to be less or equal to the dimension of the RB (%i)."%(len(gJmu),QEIM.shape[1]))
+            raise ValueError("The number of measurements (%i) in optimal locations has to be less than or equal to the dimension of the RB (%i)."%(len(gJmu), QEIM.shape[1]))  # noqa: E501
__gJmu = numpy.ravel(gJmu)
if mu is not None:
# __gJmu = H(mu)
rbDimension = min(QEIM.shape[1], rbDimension)
else:
rbDimension = QEIM.shape[1]
- __rbDim = min(QEIM.shape[1],len(mPoints),len(gJmu),rbDimension) # Modulation
- #--------------------------
+ __rbDim = min(QEIM.shape[1], len(mPoints), len(gJmu), rbDimension) # Modulation
+ # --------------------------
#
# Restriction aux mesures
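+    # Solve the small system Q[mPoints] gamma = gJmu restricted to the
+    # measurement points (the pseudo-inverse giving a least squares fit)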
if PseudoInverse:
- __QJinv = numpy.linalg.pinv( QEIM[mPoints,0:__rbDim] )
+ __QJinv = numpy.linalg.pinv( QEIM[mPoints, 0:__rbDim] )
__gammaMu = numpy.dot( __QJinv, __gJmu[0:__rbDim])
else:
- __gammaMu = numpy.linalg.solve( QEIM[mPoints,0:__rbDim], __gJmu[0:__rbDim] )
+ __gammaMu = numpy.linalg.solve( QEIM[mPoints, 0:__rbDim], __gJmu[0:__rbDim] )
#
# Interpolation du champ complet
- __gMmu = numpy.dot( QEIM[:,0:__rbDim], __gammaMu )
+ __gMmu = numpy.dot( QEIM[:, 0:__rbDim], __gammaMu )
#
- #--------------------------
- logging.debug("%s The full field of size %i has been correctly build"%(selfA._name,__gMmu.size))
+ # --------------------------
+    logging.debug("%s The full field of size %i has been correctly built"%(selfA._name, __gMmu.size))
if hasattr(selfA, "StoredVariables"):
selfA.StoredVariables["Analysis"].store( __gMmu )
if selfA._toStore("ReducedCoordinates"):
# Initialisations
# ---------------
Hm = HO["Tangent"].asMatrix(Xb)
- Hm = Hm.reshape(Y.size,Xb.size) # ADAO & check shape
+ Hm = Hm.reshape(Y.size, Xb.size) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xb)
- Ha = Ha.reshape(Xb.size,Y.size) # ADAO & check shape
+ Ha = Ha.reshape(Xb.size, Y.size) # ADAO & check shape
H = HO["Direct"].appliedTo
#
if HO["AppliedInX"] is not None and "HXb" in HO["AppliedInX"]:
HXb = numpy.asarray(H( Xb, HO["AppliedInX"]["HXb"]))
else:
HXb = numpy.asarray(H( Xb ))
- HXb = HXb.reshape((-1,1))
+ HXb = HXb.reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size,HXb.size))
+        raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different; they have to be identical."%(Y.size, HXb.size))  # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape,HXb.shape))
+        raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different; they have to be identical."%(Y.shape, HXb.shape))  # noqa: E501
#
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("MahalanobisConsistency") or \
- (Y.size > Xb.size):
- if isinstance(B,numpy.ndarray):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("MahalanobisConsistency") or \
+ (Y.size > Xb.size):
+ if isinstance(B, numpy.ndarray):
BI = numpy.linalg.inv(B)
else:
BI = B.getI()
#
Innovation = Y - HXb
if selfA._parameters["EstimationOf"] == "Parameters":
- if CM is not None and "Tangent" in CM and U is not None: # Attention : si Cm est aussi dans H, doublon !
+        if CM is not None and "Tangent" in CM and U is not None:  # Warning: if Cm is also included in H, it is counted twice!
Cm = CM["Tangent"].asMatrix(Xb)
- Cm = Cm.reshape(Xb.size,U.size) # ADAO & check shape
- Innovation = Innovation - (Cm @ U).reshape((-1,1))
+ Cm = Cm.reshape(Xb.size, U.size) # ADAO & check shape
+ Innovation = Innovation - (Cm @ U).reshape((-1, 1))
#
# Calcul de l'analyse
# -------------------
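+    # BLUE analysis Xa = Xb + K (Y - H(Xb)), computed by a direct linear
+    # solve in whichever dual form leads to the smaller system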
if Y.size <= Xb.size:
_HNHt = numpy.dot(Hm, B @ Ha)
_A = R + _HNHt
- _u = numpy.linalg.solve( _A , numpy.ravel(Innovation) )
- Xa = Xb + (B @ numpy.ravel(Ha @ _u)).reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.ravel(Innovation) )
+ Xa = Xb + (B @ numpy.ravel(Ha @ _u)).reshape((-1, 1))
else:
_HtRH = numpy.dot(Ha, RI @ Hm)
_A = BI + _HtRH
- _u = numpy.linalg.solve( _A , numpy.ravel(numpy.dot(Ha, RI @ numpy.ravel(Innovation))) )
- Xa = Xb + _u.reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.ravel(numpy.dot(Ha, RI @ numpy.ravel(Innovation))) )
+ Xa = Xb + _u.reshape((-1, 1))
#
- if __storeState: selfA._setInternalState("Xn", Xa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
#
# Calcul de la fonction coût
# --------------------------
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("OMA") or \
- selfA._toStore("InnovationAtCurrentAnalysis") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("MahalanobisConsistency") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum") or \
- selfA._toStore("SimulatedObservationAtCurrentState") or \
- selfA._toStore("SimulatedObservationAtOptimum") or \
- selfA._toStore("SimulationQuantiles"):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("OMA") or \
+ selfA._toStore("InnovationAtCurrentAnalysis") or \
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("MahalanobisConsistency") or \
+ selfA._toStore("SimulatedObservationAtCurrentOptimum") or \
+ selfA._toStore("SimulatedObservationAtCurrentState") or \
+ selfA._toStore("SimulatedObservationAtOptimum") or \
+ selfA._toStore("SimulationQuantiles"):
HXa = H( Xa )
- oma = Y - HXa.reshape((-1,1))
+ oma = Y - numpy.asarray(HXa).reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("MahalanobisConsistency"):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("MahalanobisConsistency"):
Jb = vfloat( 0.5 * (Xa - Xb).T @ (BI @ (Xa - Xb)) )
Jo = vfloat( 0.5 * oma.T * (RI * oma) )
J = Jb + Jo
# Calcul de la covariance d'analyse
# ---------------------------------
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles"):
- if (Y.size <= Xb.size): K = B * Ha * (R + numpy.dot(Hm, B * Ha)).I
- elif (Y.size > Xb.size): K = (BI + numpy.dot(Ha, RI * Hm)).I * Ha * RI
+ selfA._toStore("SimulationQuantiles"):
+ if (Y.size <= Xb.size):
+ K = B * Ha * (R + numpy.dot(Hm, B * Ha)).I
+ elif (Y.size > Xb.size):
+ K = (BI + numpy.dot(Ha, RI * Hm)).I * Ha * RI
A = B - K * Hm * B
- A = (A + A.T) * 0.5 # Symétrie
- A = A + mpr*numpy.trace( A ) * numpy.identity(Xa.size) # Positivité
+        A = (A + A.T) * 0.5  # Symmetry
+        A = A + mpr * numpy.trace( A ) * numpy.identity(Xa.size)  # Positivity
if min(A.shape) != max(A.shape):
- raise ValueError("The %s a posteriori covariance matrix A is of shape %s, despites it has to be a squared matrix. There is an error in the observation operator, please check it."%(selfA._name,str(A.shape)))
+            raise ValueError("The %s a posteriori covariance matrix A is of shape %s, whereas it has to be a square matrix. There is an error in the observation operator, please check it."%(selfA._name, str(A.shape)))  # noqa: E501
if (numpy.diag(A) < 0).any():
- raise ValueError("The %s a posteriori covariance matrix A has at least one negative value %.2e on its diagonal. There is an error in the observation operator or in the covariances, please check them."%(selfA._name,min(numpy.diag(A))))
- if logging.getLogger().level < logging.WARNING: # La vérification n'a lieu qu'en debug
+ raise ValueError("The %s a posteriori covariance matrix A has at least one negative value %.2e on its diagonal. There is an error in the observation operator or in the covariances, please check them."%(selfA._name, min(numpy.diag(A)))) # noqa: E501
+        if logging.getLogger().level < logging.WARNING:  # The check is only done in debug mode
try:
numpy.linalg.cholesky( A )
- except:
- raise ValueError("The %s a posteriori covariance matrix A is not symmetric positive-definite. Please check your a priori covariances and your observation operator."%(selfA._name,))
+ except Exception:
+ raise ValueError("The %s a posteriori covariance matrix A is not symmetric positive-definite. Please check your a priori covariances and your observation operator."%(selfA._name,)) # noqa: E501
selfA.StoredVariables["APosterioriCovariance"].store( A )
#
# Calculs et/ou stockages supplémentaires
TraceR = R.trace(Y.size)
selfA.StoredVariables["SigmaObs2"].store( vfloat( Innovation.T @ oma ) / TraceR )
if selfA._toStore("SigmaBck2"):
- selfA.StoredVariables["SigmaBck2"].store( vfloat( (Innovation.T @ (Hm @ (numpy.ravel(Xa) - numpy.ravel(Xb))))/(Hm * (B * Hm.T)).trace() ) )
+ selfA.StoredVariables["SigmaBck2"].store( vfloat( (Innovation.T @ (Hm @ (numpy.ravel(Xa) - numpy.ravel(Xb)))) / (Hm * (B * Hm.T)).trace() ) ) # noqa: E501
if selfA._toStore("MahalanobisConsistency"):
- selfA.StoredVariables["MahalanobisConsistency"].store( float( 2.*J/Innovation.size ) )
+ selfA.StoredVariables["MahalanobisConsistency"].store( float( 2. * J / Innovation.size ) )
if selfA._toStore("SimulationQuantiles"):
HtM = HO["Tangent"].asMatrix(Xa)
- HtM = HtM.reshape(Y.size,Xa.size) # ADAO & check shape
+ HtM = HtM.reshape(Y.size, Xa.size) # ADAO & check shape
QuantilesEstimations(selfA, A, Xa, HXa, H, HtM)
if selfA._toStore("SimulatedObservationAtBackground"):
selfA.StoredVariables["SimulatedObservationAtBackground"].store( HXb )
# Initialisations
# ---------------
Hm = HO["Tangent"].asMatrix(Xb)
- Hm = Hm.reshape(Y.size,-1) # ADAO & check shape
+ Hm = Hm.reshape(Y.size, -1) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xb)
- Ha = Ha.reshape(-1,Y.size) # ADAO & check shape
+ Ha = Ha.reshape(-1, Y.size) # ADAO & check shape
#
if R is None:
RI = 1.
# Calcul de l'analyse
# -------------------
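+    # Without background term, the analysis is the weighted least squares
+    # (Gauss-Markov) estimate Xa = (Ht R^-1 H)^-1 Ht R^-1 Y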
K = (Ha * (RI * Hm)).I * Ha * RI
- Xa = K * Y
+ Xa = K * Y
#
- if __storeState: selfA._setInternalState("Xn", Xa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
#
# Calcul de la fonction coût
# --------------------------
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("OMA") or \
- selfA._toStore("InnovationAtCurrentAnalysis") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum") or \
- selfA._toStore("SimulatedObservationAtCurrentState") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("OMA") or \
+ selfA._toStore("InnovationAtCurrentAnalysis") or \
+ selfA._toStore("SimulatedObservationAtCurrentOptimum") or \
+ selfA._toStore("SimulatedObservationAtCurrentState") or \
+ selfA._toStore("SimulatedObservationAtOptimum"):
HXa = Hm @ Xa
- oma = Y - HXa.reshape((-1,1))
+ oma = Y - HXa.reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum"):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum"):
Jb = 0.
Jo = vfloat( 0.5 * oma.T * (RI * oma) )
J = Jb + Jo
HXb = numpy.asarray(Hm( Xb, HO["AppliedInX"]["HXb"] ))
else:
HXb = numpy.asarray(Hm( Xb ))
- HXb = HXb.reshape((-1,1))
+ HXb = HXb.reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size,HXb.size))
+        raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different; they have to be identical."%(Y.size, HXb.size))  # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape,HXb.shape))
+        raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different; they have to be identical."%(Y.shape, HXb.shape))  # noqa: E501
#
RI = R.getI()
if selfA._parameters["Minimizer"] == "LM":
#
# Définition de la fonction-coût
# ------------------------------
+
def CostFunction(x):
- _X = numpy.asarray(x).reshape((-1,1))
+ _X = numpy.asarray(x).reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CurrentState") or \
- selfA._toStore("CurrentOptimum"):
+ selfA._toStore("CurrentState") or \
+ selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentState"].store( _X )
- _HX = numpy.asarray(Hm( _X )).reshape((-1,1))
+ _HX = numpy.asarray(Hm( _X )).reshape((-1, 1))
_Innovation = Y - _HX
if selfA._toStore("SimulatedObservationAtCurrentState") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( _HX )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
selfA.StoredVariables["CostFunctionJo"].store( Jo )
selfA.StoredVariables["CostFunctionJ" ].store( J )
if selfA._toStore("IndexOfOptimum") or \
- selfA._toStore("CurrentOptimum") or \
- selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("CurrentOptimum") or \
+ selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["CurrentState"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
return J
- #
+
def GradientOfCostFunction(x):
- _X = numpy.asarray(x).reshape((-1,1))
- _HX = numpy.asarray(Hm( _X )).reshape((-1,1))
+ _X = numpy.asarray(x).reshape((-1, 1))
+ _HX = numpy.asarray(Hm( _X )).reshape((-1, 1))
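+        # Gradient of Jo = 0.5 (Y-HX)' R^-1 (Y-HX) is -Ht R^-1 (Y - HX)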
GradJb = 0.
GradJo = - Ha( (_X, RI * (Y - _HX)) )
GradJ = numpy.ravel( GradJb ) + numpy.ravel( GradJo )
return GradJ
- #
+
def CostFunctionLM(x):
- _X = numpy.ravel( x ).reshape((-1,1))
- _HX = Hm( _X ).reshape((-1,1))
+ _X = numpy.ravel( x ).reshape((-1, 1))
+ _HX = Hm( _X ).reshape((-1, 1))
_Innovation = Y - _HX
Jb = 0.
Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
J = Jb + Jo
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CurrentState"):
+ selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( _X )
selfA.StoredVariables["CostFunctionJb"].store( Jb )
selfA.StoredVariables["CostFunctionJo"].store( Jo )
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
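+        # leastsq minimizes the sum of squares ||f(x)||^2, hence the
+        # innovation weighted by RdemiI (inverse square root of R) is
+        # returned rather than the scalar cost J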
- return numpy.ravel( RdemiI*_Innovation )
- #
+ return numpy.ravel( RdemiI * _Innovation )
+
def GradientOfCostFunctionLM(x):
- _X = x.reshape((-1,1))
- return - RdemiI*HO["Tangent"].asMatrix( _X )
+ _X = x.reshape((-1, 1))
+ return - RdemiI * HO["Tangent"].asMatrix( _X )
#
# Minimisation de la fonctionnelle
# --------------------------------
nbPreviousSteps = selfA.StoredVariables["CostFunctionJ"].stepnumber()
#
if selfA._parameters["Minimizer"] == "LBFGSB":
- if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
+ if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
import daAlgorithms.Atoms.lbfgsb14hlt as optimiseur
elif vt("1.5.0") <= vt(scipy.version.version) <= vt("1.7.99"):
import daAlgorithms.Atoms.lbfgsb17hlt as optimiseur
fprime = GradientOfCostFunction,
args = (),
bounds = selfA._parameters["Bounds"],
- maxfun = selfA._parameters["MaximumNumberOfIterations"]-1,
- factr = selfA._parameters["CostDecrementTolerance"]*1.e14,
+ maxfun = selfA._parameters["MaximumNumberOfIterations"] - 1,
+ factr = selfA._parameters["CostDecrementTolerance"] * 1.e14,
pgtol = selfA._parameters["ProjectedGradientTolerance"],
iprint = selfA._parameters["optiprint"],
- )
+ )
# nfeval = Informations['funcalls']
# rc = Informations['warnflag']
elif selfA._parameters["Minimizer"] == "TNC":
pgtol = selfA._parameters["ProjectedGradientTolerance"],
ftol = selfA._parameters["CostDecrementTolerance"],
messages = selfA._parameters["optmessages"],
- )
+ )
elif selfA._parameters["Minimizer"] == "CG":
Minimum, fopt, nfeval, grad_calls, rc = scipy.optimize.fmin_cg(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "NCG":
Minimum, fopt, nfeval, grad_calls, hcalls, rc = scipy.optimize.fmin_ncg(
f = CostFunction,
avextol = selfA._parameters["CostDecrementTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "BFGS":
Minimum, fopt, gopt, Hopt, nfeval, grad_calls, rc = scipy.optimize.fmin_bfgs(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "LM":
Minimum, cov_x, infodict, mesg, rc = scipy.optimize.leastsq(
func = CostFunctionLM,
maxfev = selfA._parameters["MaximumNumberOfIterations"],
gtol = selfA._parameters["GradientNormTolerance"],
full_output = True,
- )
+ )
# nfeval = infodict['nfev']
else:
raise ValueError("Error in minimizer name: %s is unkown"%selfA._parameters["Minimizer"])
Minimum = selfA.StoredVariables["CurrentState"][IndexMin]
#
Xa = Minimum
- if __storeState: selfA._setInternalState("Xn", Xa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
#
if selfA._toStore("OMA") or \
- selfA._toStore("InnovationAtCurrentAnalysis") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("InnovationAtCurrentAnalysis") or \
+ selfA._toStore("SimulatedObservationAtOptimum"):
if selfA._toStore("SimulatedObservationAtCurrentState"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin]
elif selfA._toStore("SimulatedObservationAtCurrentOptimum"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"][-1]
else:
HXa = Hm( Xa )
- oma = Y - HXa.reshape((-1,1))
+ oma = Y - numpy.asarray(HXa).reshape((-1, 1))
#
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if selfA._toStore("Innovation") or \
- selfA._toStore("OMB"):
+ selfA._toStore("OMB"):
Innovation = Y - HXb
if selfA._toStore("Innovation"):
selfA.StoredVariables["Innovation"].store( Innovation )
"""
__author__ = "Jean-Philippe ARGAUD"
-import numpy, logging, copy
+import numpy, logging
from daCore.NumericObjects import ApplyBounds, VariablesAndIncrementsBounds
from daCore.PlatformInfo import vfloat
from numpy.random import uniform as rand
selfA._parameters["BoxBounds"],
Xini,
selfA._name,
- 0.5, # Similaire au SPSO-2011
- )
- #
+        0.5,  # Similar to SPSO-2011
+ )
+
def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.asarray( x ).reshape((-1,1))
- _HX = numpy.asarray( Hm( _X ) ).reshape((-1,1))
+ _X = numpy.asarray( x ).reshape((-1, 1))
+ _HX = numpy.asarray( Hm( _X ) ).reshape((-1, 1))
_Innovation = Y - _HX
#
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "DA"]:
if BI is None or RI is None:
raise ValueError("Background and Observation error covariance matrices has to be properly defined!")
Jb = 0.5 * (_X - Xb).T @ (BI @ (_X - Xb))
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = 0.5 * _Innovation.T @ _Innovation
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = numpy.sum( numpy.abs(_Innovation) )
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = numpy.max( numpy.abs(_Innovation) )
#
J = vfloat( Jb ) + vfloat( Jo )
#
return J, vfloat( Jb ), vfloat( Jo )
- #
+
def KeepRunningCondition(__step, __nbfct):
if __step >= selfA._parameters["MaximumNumberOfIterations"]:
- logging.debug("%s Stopping search because the number %i of evolving iterations is exceeding the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"]))
+ logging.debug("%s Stopping search because the number %i of evolving iterations is exceeding the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"])) # noqa: E501
return False
elif __nbfct >= selfA._parameters["MaximumNumberOfFunctionEvaluations"]:
- logging.debug("%s Stopping search because the number %i of function evaluations is exceeding the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"]))
+ logging.debug("%s Stopping search because the number %i of function evaluations is exceeding the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"])) # noqa: E501
return False
else:
return True
# Paramètres internes
# -------------------
__nbI = selfA._parameters["NumberOfInsects"]
- __nbP = len(Xini) # Dimension ou nombre de paramètres
+    __nbP = len(Xini)  # Dimension, i.e. number of parameters
#
__iw = float( selfA._parameters["InertiaWeight"] )
__sa = float( selfA._parameters["SocialAcceleration"] )
__ca = float( selfA._parameters["CognitiveAcceleration"] )
__vc = float( selfA._parameters["VelocityClampingFactor"] )
- logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca)))
+ logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca))) # noqa: E501
logging.debug("%s Social acceleration (recall to the best insect value of the group) = %s"%(selfA._name, str(__sa)))
logging.debug("%s Inertial weight = %s"%(selfA._name, str(__iw)))
logging.debug("%s Velocity clamping factor = %s"%(selfA._name, str(__vc)))
LimitPlace = Bounds
LimitSpeed = BoxBounds
#
- nbfct = 1 # Nb d'évaluations
- JXini, JbXini, JoXini = CostFunction(Xini,selfA._parameters["QualityCriterion"])
+    nbfct = 1  # Number of evaluations
+ JXini, JbXini, JoXini = CostFunction(Xini, selfA._parameters["QualityCriterion"])
#
- Swarm = numpy.zeros((__nbI,4,__nbP)) # 4 car (x,v,xbest,lbest)
- for __p in range(__nbP) :
- Swarm[:,0,__p] = rand( low=LimitPlace[__p,0], high=LimitPlace[__p,1], size=__nbI) # Position
- Swarm[:,1,__p] = rand( low=LimitSpeed[__p,0], high=LimitSpeed[__p,1], size=__nbI) # Velocity
- logging.debug("%s Initialisation of the swarm with %i insects of size %i "%(selfA._name,Swarm.shape[0],Swarm.shape[2]))
+    Swarm = numpy.zeros((__nbI, 4, __nbP))  # 4 for (x,v,xbest,lbest)
+ for __p in range(__nbP):
+ Swarm[:, 0, __p] = rand( low=LimitPlace[__p, 0], high=LimitPlace[__p, 1], size=__nbI) # Position
+ Swarm[:, 1, __p] = rand( low=LimitSpeed[__p, 0], high=LimitSpeed[__p, 1], size=__nbI) # Velocity
+    logging.debug("%s Initialisation of the swarm with %i insects of size %i"%(selfA._name, Swarm.shape[0], Swarm.shape[2]))  # noqa: E501
#
- qSwarm = JXini * numpy.ones((__nbI,3)) # Qualité (J, Jb, Jo) par insecte
+    qSwarm = JXini * numpy.ones((__nbI, 3))  # Quality (J, Jb, Jo) per insect
for __i in range(__nbI):
nbfct += 1
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],selfA._parameters["QualityCriterion"])
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], selfA._parameters["QualityCriterion"])
if JTest < JXini:
- Swarm[__i,2,:] = Swarm[__i,0,:] # xBest
- qSwarm[__i,:] = (JTest, JbTest, JoTest)
+ Swarm[__i, 2, :] = Swarm[__i, 0, :] # xBest
+ qSwarm[__i, :] = (JTest, JbTest, JoTest)
else:
- Swarm[__i,2,:] = Xini # xBest
- qSwarm[__i,:] = (JXini, JbXini, JoXini)
+ Swarm[__i, 2, :] = Xini # xBest
+ qSwarm[__i, :] = (JXini, JbXini, JoXini)
logging.debug("%s Initialisation of the best previous insects"%selfA._name)
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
#
rct = rand(size=__nbP)
rst = rand(size=__nbP)
# Vitesse
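+            # PSO velocity update: inertia term, cognitive recall of the
+            # insect own best position, social recall of the swarm best one:
+            #   v <- iw*v + ca*r1*(xbest_i - x) + sa*r2*(xbest_swarm - x)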
- __velins = __iw * Swarm[__i,1,:] \
- + __ca * rct * (Swarm[__i,2,:] - Swarm[__i,0,:]) \
- + __sa * rst * (Swarm[iBest,2,:] - Swarm[__i,0,:])
- Swarm[__i,1,:] = ApplyBounds( __velins, LimitSpeed )
+ __velins = __iw * Swarm[__i, 1, :] \
+ + __ca * rct * (Swarm[__i, 2, :] - Swarm[__i, 0, :]) \
+ + __sa * rst * (Swarm[iBest, 2, :] - Swarm[__i, 0, :])
+ Swarm[__i, 1, :] = ApplyBounds( __velins, LimitSpeed )
# Position
- __velins = Swarm[__i,0,:] + Swarm[__i,1,:]
- Swarm[__i,0,:] = ApplyBounds( __velins, LimitPlace )
+ __velins = Swarm[__i, 0, :] + Swarm[__i, 1, :]
+ Swarm[__i, 0, :] = ApplyBounds( __velins, LimitPlace )
#
nbfct += 1
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],selfA._parameters["QualityCriterion"])
- if JTest < qSwarm[__i,0]:
- Swarm[__i,2,:] = Swarm[__i,0,:] # xBest
- qSwarm[__i,:] = (JTest, JbTest, JoTest)
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], selfA._parameters["QualityCriterion"])
+ if JTest < qSwarm[__i, 0]:
+ Swarm[__i, 2, :] = Swarm[__i, 0, :] # xBest
+ qSwarm[__i, :] = (JTest, JbTest, JoTest)
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
if selfA._toStore("SimulatedObservationAtCurrentState"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( xBest ) )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
- logging.debug("%s Step %i: insect %i is the better one with J =%.7f"%(selfA._name,step,iBest,qSwarm[iBest,0]))
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
+        logging.debug("%s Step %i: insect %i is the best one with J = %.7f"%(selfA._name, step, iBest, qSwarm[iBest, 0]))  # noqa: E501
#
# Obtention de l'analyse
# ----------------------
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if selfA._toStore("OMA") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("SimulatedObservationAtOptimum"):
HXa = Hm(Xa)
if selfA._toStore("Innovation") or \
- selfA._toStore("OMB") or \
- selfA._toStore("SimulatedObservationAtBackground"):
+ selfA._toStore("OMB") or \
+ selfA._toStore("SimulatedObservationAtBackground"):
HXb = Hm(Xb)
Innovation = Y - HXb
if selfA._toStore("Innovation"):
"""
__author__ = "Jean-Philippe ARGAUD"
-import numpy, logging, copy
+import numpy, logging
from daCore.NumericObjects import VariablesAndIncrementsBounds
from daCore.PlatformInfo import vfloat
from numpy.random import uniform as rand
selfA._parameters["BoxBounds"],
Xini,
selfA._name,
- 0.5, # Similaire au SPSO-2011
- )
- #
+        0.5,  # Similar to SPSO-2011
+ )
+
def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.asarray( x ).reshape((-1,1))
- _HX = numpy.asarray( Hm( _X ) ).reshape((-1,1))
+ _X = numpy.asarray( x ).reshape((-1, 1))
+ _HX = numpy.asarray( Hm( _X ) ).reshape((-1, 1))
_Innovation = Y - _HX
#
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "DA"]:
if BI is None or RI is None:
raise ValueError("Background and Observation error covariance matrices has to be properly defined!")
Jb = 0.5 * (_X - Xb).T @ (BI @ (_X - Xb))
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = 0.5 * _Innovation.T @ _Innovation
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = numpy.sum( numpy.abs(_Innovation) )
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = numpy.max( numpy.abs(_Innovation) )
#
J = vfloat( Jb ) + vfloat( Jo )
#
return J, vfloat( Jb ), vfloat( Jo )
- #
+
def KeepRunningCondition(__step, __nbfct):
if __step >= selfA._parameters["MaximumNumberOfIterations"]:
- logging.debug("%s Stopping search because the number %i of evolving iterations is exceeding the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"]))
+ logging.debug("%s Stopping search because the number %i of evolving iterations is exceeding the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"])) # noqa: E501
return False
elif __nbfct >= selfA._parameters["MaximumNumberOfFunctionEvaluations"]:
- logging.debug("%s Stopping search because the number %i of function evaluations is exceeding the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"]))
+ logging.debug("%s Stopping search because the number %i of function evaluations is exceeding the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"])) # noqa: E501
return False
else:
return True
# Paramètres internes
# -------------------
__nbI = selfA._parameters["NumberOfInsects"]
- __nbP = len(Xini) # Dimension ou nombre de paramètres
+    __nbP = len(Xini)  # Dimension, i.e. number of parameters
#
__iw = float( selfA._parameters["InertiaWeight"] )
__sa = float( selfA._parameters["SocialAcceleration"] )
__ca = float( selfA._parameters["CognitiveAcceleration"] )
__vc = float( selfA._parameters["VelocityClampingFactor"] )
- logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca)))
+ logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca))) # noqa: E501
logging.debug("%s Social acceleration (recall to the best insect value of the group) = %s"%(selfA._name, str(__sa)))
logging.debug("%s Inertial weight = %s"%(selfA._name, str(__iw)))
logging.debug("%s Velocity clamping factor = %s"%(selfA._name, str(__vc)))
LimitPlace = Bounds
LimitSpeed = BoxBounds
#
- nbfct = 1 # Nb d'évaluations
- JXini, JbXini, JoXini = CostFunction(Xini,selfA._parameters["QualityCriterion"])
+    nbfct = 1  # Number of evaluations
+ JXini, JbXini, JoXini = CostFunction(Xini, selfA._parameters["QualityCriterion"])
#
- Swarm = numpy.zeros((__nbI,3,__nbP)) # 3 car (x,v,xbest)
- for __p in range(__nbP) :
- Swarm[:,0,__p] = rand( low=LimitPlace[__p,0], high=LimitPlace[__p,1], size=__nbI) # Position
- Swarm[:,1,__p] = rand( low=LimitSpeed[__p,0], high=LimitSpeed[__p,1], size=__nbI) # Velocity
- logging.debug("%s Initialisation of the swarm with %i insects of size %i "%(selfA._name,Swarm.shape[0],Swarm.shape[2]))
+    Swarm = numpy.zeros((__nbI, 3, __nbP))  # 3 for (x,v,xbest)
+ for __p in range(__nbP):
+ Swarm[:, 0, __p] = rand( low=LimitPlace[__p, 0], high=LimitPlace[__p, 1], size=__nbI) # Position
+ Swarm[:, 1, __p] = rand( low=LimitSpeed[__p, 0], high=LimitSpeed[__p, 1], size=__nbI) # Velocity
+    logging.debug("%s Initialisation of the swarm with %i insects of size %i"%(selfA._name, Swarm.shape[0], Swarm.shape[2]))  # noqa: E501
#
- qSwarm = JXini * numpy.ones((__nbI,3)) # Qualité (J, Jb, Jo) par insecte
+    qSwarm = JXini * numpy.ones((__nbI, 3))  # Quality (J, Jb, Jo) per insect
for __i in range(__nbI):
nbfct += 1
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],selfA._parameters["QualityCriterion"])
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], selfA._parameters["QualityCriterion"])
if JTest < JXini:
- Swarm[__i,2,:] = Swarm[__i,0,:] # xBest
- qSwarm[__i,:] = (JTest, JbTest, JoTest)
+ Swarm[__i, 2, :] = Swarm[__i, 0, :] # xBest
+ qSwarm[__i, :] = (JTest, JbTest, JoTest)
else:
- Swarm[__i,2,:] = Xini # xBest
- qSwarm[__i,:] = (JXini, JbXini, JoXini)
+ Swarm[__i, 2, :] = Xini # xBest
+ qSwarm[__i, :] = (JXini, JbXini, JoXini)
logging.debug("%s Initialisation of the best previous insects"%selfA._name)
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
#
for __i in range(__nbI):
for __p in range(__nbP):
# Vitesse
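+            # Same PSO velocity update as above, applied component by
+            # component with an independent random draw per coordinate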
- Swarm[__i,1,__p] = __iw * Swarm[__i,1,__p] \
- + __ca * rand() * (Swarm[__i,2,__p] - Swarm[__i,0,__p]) \
- + __sa * rand() * (Swarm[iBest,2,__p] - Swarm[__i,0,__p])
+ Swarm[__i, 1, __p] = __iw * Swarm[__i, 1, __p] \
+ + __ca * rand() * (Swarm[__i, 2, __p] - Swarm[__i, 0, __p]) \
+ + __sa * rand() * (Swarm[iBest, 2, __p] - Swarm[__i, 0, __p])
# Position
- Swarm[__i,0,__p] = Swarm[__i,0,__p] + Swarm[__i,1,__p]
+ Swarm[__i, 0, __p] = Swarm[__i, 0, __p] + Swarm[__i, 1, __p]
#
nbfct += 1
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],selfA._parameters["QualityCriterion"])
- if JTest < qSwarm[__i,0]:
- Swarm[__i,2,:] = Swarm[__i,0,:] # xBest
- qSwarm[__i,:] = (JTest, JbTest, JoTest)
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], selfA._parameters["QualityCriterion"])
+ if JTest < qSwarm[__i, 0]:
+ Swarm[__i, 2, :] = Swarm[__i, 0, :] # xBest
+ qSwarm[__i, :] = (JTest, JbTest, JoTest)
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
if selfA._toStore("SimulatedObservationAtCurrentState"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( xBest ) )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
- logging.debug("%s Step %i: insect %i is the better one with J =%.7f"%(selfA._name,step,iBest,qSwarm[iBest,0]))
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
+        logging.debug("%s Step %i: insect %i is the best one with J = %.7f"%(selfA._name, step, iBest, qSwarm[iBest, 0]))  # noqa: E501
#
# Obtention de l'analyse
# ----------------------
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if selfA._toStore("OMA") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("SimulatedObservationAtOptimum"):
HXa = Hm(Xa)
if selfA._toStore("Innovation") or \
- selfA._toStore("OMB") or \
- selfA._toStore("SimulatedObservationAtBackground"):
+ selfA._toStore("OMB") or \
+ selfA._toStore("SimulatedObservationAtBackground"):
HXb = Hm(Xb)
Innovation = Y - HXb
if selfA._toStore("Innovation"):
"""
__author__ = "Jean-Philippe ARGAUD"
-import numpy, logging, copy, math
+import numpy, logging
from daCore.NumericObjects import ApplyBounds, VariablesAndIncrementsBounds
from daCore.NumericObjects import GenerateRandomPointInHyperSphere
from daCore.NumericObjects import GetNeighborhoodTopology
Xini,
selfA._name,
0.5,
- )
- #
+ )
+
def CostFunction(x, hm, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.asarray( x ).reshape((-1,1))
- _HX = numpy.asarray( hm ).reshape((-1,1))
+ _X = numpy.asarray( x ).reshape((-1, 1))
+ _HX = numpy.asarray( hm ).reshape((-1, 1))
_Innovation = Y - _HX
#
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "DA"]:
if BI is None or RI is None:
raise ValueError("Background and Observation error covariance matrices have to be properly defined!")
Jb = 0.5 * (_X - Xb).T @ (BI @ (_X - Xb))
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = 0.5 * _Innovation.T @ _Innovation
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = numpy.sum( numpy.abs(_Innovation) )
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = numpy.max( numpy.abs(_Innovation) )
#
J = vfloat( Jb ) + vfloat( Jo )
#
return J, vfloat( Jb ), vfloat( Jo )
- #
+
def KeepRunningCondition(__step, __nbfct):
if __step >= selfA._parameters["MaximumNumberOfIterations"]:
- logging.debug("%s Stopping search because the number %i of evolving iterations is exceeding the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"]))
+            logging.debug("%s Stopping search because the number %i of evolving iterations exceeds the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"]))  # noqa: E501
return False
elif __nbfct >= selfA._parameters["MaximumNumberOfFunctionEvaluations"]:
- logging.debug("%s Stopping search because the number %i of function evaluations is exceeding the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"]))
+            logging.debug("%s Stopping search because the number %i of function evaluations exceeds the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"]))  # noqa: E501
return False
else:
return True
# Internal parameters
# -------------------
__nbI = selfA._parameters["NumberOfInsects"]
- __nbP = len(Xini) # Dimension ou nombre de paramètres
+    __nbP = len(Xini)  # Dimension, i.e. number of parameters
#
__iw = float( selfA._parameters["InertiaWeight"] )
__sa = float( selfA._parameters["SocialAcceleration"] )
__ca = float( selfA._parameters["CognitiveAcceleration"] )
__vc = float( selfA._parameters["VelocityClampingFactor"] )
- logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca)))
+    logging.debug("%s Cognitive acceleration (attraction towards the best value previously found by the insect) = %s"%(selfA._name, str(__ca)))  # noqa: E501
logging.debug("%s Social acceleration (attraction towards the best value found in the insect's group) = %s"%(selfA._name, str(__sa)))
logging.debug("%s Inertial weight = %s"%(selfA._name, str(__iw)))
logging.debug("%s Velocity clamping factor = %s"%(selfA._name, str(__vc)))
LimitPlace = Bounds
LimitSpeed = BoxBounds
#
- nbfct = 1 # Nb d'évaluations
+    nbfct = 1  # Number of evaluations
HX = Hm( Xini )
- JXini, JbXini, JoXini = CostFunction(Xini,HX,selfA._parameters["QualityCriterion"])
+ JXini, JbXini, JoXini = CostFunction(Xini, HX, selfA._parameters["QualityCriterion"])
#
- Swarm = numpy.zeros((__nbI,4,__nbP)) # 4 car (x,v,gbest,lbest)
- for __p in range(__nbP) :
- Swarm[:,0,__p] = rand( low=LimitPlace[__p,0], high=LimitPlace[__p,1], size=__nbI) # Position
- Swarm[:,1,__p] = rand( low=LimitSpeed[__p,0], high=LimitSpeed[__p,1], size=__nbI) # Velocity
- logging.debug("%s Initialisation of the swarm with %i insects of size %i "%(selfA._name,Swarm.shape[0],Swarm.shape[2]))
+    Swarm = numpy.zeros((__nbI, 4, __nbP))  # 4 slots: (x, v, gbest, lbest)
+ for __p in range(__nbP):
+ Swarm[:, 0, __p] = rand( low=LimitPlace[__p, 0], high=LimitPlace[__p, 1], size=__nbI) # Position
+ Swarm[:, 1, __p] = rand( low=LimitSpeed[__p, 0], high=LimitSpeed[__p, 1], size=__nbI) # Velocity
+    logging.debug("%s Initialisation of the swarm with %i insects of size %i"%(selfA._name, Swarm.shape[0], Swarm.shape[2]))  # noqa: E501
#
__nbh = GetNeighborhoodTopology( selfA._parameters["SwarmTopology"], list(range(__nbI)) )
#
- qSwarm = JXini * numpy.ones((__nbI,6)) # Qualités (J, Jb, Jo) par insecte + par voisinage
+    qSwarm = JXini * numpy.ones((__nbI, 6))  # Qualities (J, Jb, Jo) per insect + per neighborhood
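+    # All initial insect positions are evaluated through the operator in a
+    # single serial call, each element of the series being one swarm state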
__EOS = Hm(
- numpy.vsplit(Swarm[:,0,:], __nbI),
+ numpy.vsplit(Swarm[:, 0, :], __nbI),
argsAsSerie = True,
returnSerieAsArrayMatrix = False,
- )
+ )
for __i in range(__nbI):
nbfct += 1
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],__EOS[__i],selfA._parameters["QualityCriterion"])
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], __EOS[__i], selfA._parameters["QualityCriterion"])
if JTest < JXini:
- Swarm[__i,2,:] = Swarm[__i,0,:] # xBest
- qSwarm[__i,:3] = (JTest, JbTest, JoTest)
+ Swarm[__i, 2, :] = Swarm[__i, 0, :] # xBest
+            qSwarm[__i, :3] = (JTest, JbTest, JoTest)
else:
- Swarm[__i,2,:] = Xini # xBest
- qSwarm[__i,:3] = (JXini, JbXini, JoXini)
+ Swarm[__i, 2, :] = Xini # xBest
+            qSwarm[__i, :3] = (JXini, JbXini, JoXini)
logging.debug("%s Initialisation of the best previous insects"%selfA._name)
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
for __i in range(__nbI):
- Swarm[__i,3,:] = xBest # lBest
- qSwarm[__i,3:] = qSwarm[iBest,:3]
+ Swarm[__i, 3, :] = xBest # lBest
+        qSwarm[__i, 3:] = qSwarm[iBest, :3]
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
#
step += 1
#
__EOS = Hm(
- numpy.vsplit(Swarm[:,0,:], __nbI),
+ numpy.vsplit(Swarm[:, 0, :], __nbI),
argsAsSerie = True,
returnSerieAsArrayMatrix = False,
- )
+ )
for __i in range(__nbI):
# Evaluate
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],__EOS[__i],selfA._parameters["QualityCriterion"])
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], __EOS[__i], selfA._parameters["QualityCriterion"])
# Update lbest
- if JTest < qSwarm[__i,0]:
- Swarm[__i,2,:] = Swarm[__i,0,:]
- qSwarm[__i,:3] = (JTest, JbTest, JoTest)
+ if JTest < qSwarm[__i, 0]:
+ Swarm[__i, 2, :] = Swarm[__i, 0, :]
+            qSwarm[__i, :3] = (JTest, JbTest, JoTest)
#
for __i in range(__nbI):
# Update gbest
- __im = numpy.argmin( [qSwarm[__v,0] for __v in __nbh[__i]] )
- __il = __nbh[__i][__im] # Best in NB
- if qSwarm[__il,0] < qSwarm[__i,3]:
- Swarm[__i,3,:] = Swarm[__il,2,:]
- qSwarm[__i,3:] = qSwarm[__il,:3]
+ __im = numpy.argmin( [qSwarm[__v, 0] for __v in __nbh[__i]] )
+        __il = __nbh[__i][__im]  # Best in the neighborhood
+ if qSwarm[__il, 0] < qSwarm[__i, 3]:
+ Swarm[__i, 3, :] = Swarm[__il, 2, :]
+            qSwarm[__i, 3:] = qSwarm[__il, :3]
#
- for __i in range(__nbI-1,0-1,-1):
+ for __i in range(__nbI - 1, 0 - 1, -1):
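+        # SPSO-2011 like move: each insect is attracted towards a random
+        # point of the hypersphere centered on the isobarycenter of its
+        # current position, its personal best and its neighborhood best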
__rct = rand(size=__nbP)
__rst = rand(size=__nbP)
- __xPoint = Swarm[__i,0,:]
+ __xPoint = Swarm[__i, 0, :]
# Points
- __pPoint = __xPoint + __ca * __rct * (Swarm[__i,2,:] - __xPoint)
- __lPoint = __xPoint + __sa * __rst * (Swarm[__i,3,:] - __xPoint)
+ __pPoint = __xPoint + __ca * __rct * (Swarm[__i, 2, :] - __xPoint)
+ __lPoint = __xPoint + __sa * __rst * (Swarm[__i, 3, :] - __xPoint)
__gPoint = (__xPoint + __pPoint + __lPoint) / 3
__radius = numpy.linalg.norm(__gPoint - __xPoint)
__rPoint = GenerateRandomPointInHyperSphere( __gPoint, __radius )
# Update velocity
- __value = __iw * Swarm[__i,1,:] + __rPoint - __xPoint
- Swarm[__i,1,:] = ApplyBounds( __value, LimitSpeed )
+ __value = __iw * Swarm[__i, 1, :] + __rPoint - __xPoint
+ Swarm[__i, 1, :] = ApplyBounds( __value, LimitSpeed )
# Update position
- __value = __xPoint + Swarm[__i,1,:]
- Swarm[__i,0,:] = ApplyBounds( __value, LimitPlace )
+ __value = __xPoint + Swarm[__i, 1, :]
+ Swarm[__i, 0, :] = ApplyBounds( __value, LimitPlace )
#
nbfct += 1
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
if selfA._toStore("SimulatedObservationAtCurrentState"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( xBest ) )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
- logging.debug("%s Step %i: insect %i is the better one with J =%.7f"%(selfA._name,step,iBest,qSwarm[iBest,0]))
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
+    logging.debug("%s Step %i: insect %i is the best one with J = %.7f"%(selfA._name, step, iBest, qSwarm[iBest, 0]))  # noqa: E501
#
# Retrieving the analysis
# -----------------------
# Additional calculations and/or storage
# --------------------------------------
if selfA._toStore("OMA") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("SimulatedObservationAtOptimum"):
HXa = Hm(Xa)
if selfA._toStore("Innovation") or \
- selfA._toStore("OMB") or \
- selfA._toStore("SimulatedObservationAtBackground"):
+ selfA._toStore("OMB") or \
+ selfA._toStore("SimulatedObservationAtBackground"):
HXb = Hm(Xb)
Innovation = Y - HXb
if selfA._toStore("Innovation"):
"""
__author__ = "Jean-Philippe ARGAUD"
-import numpy, logging, copy, math
+import numpy, logging
from daCore.NumericObjects import ApplyBounds, VariablesAndIncrementsBounds
from daCore.NumericObjects import GenerateRandomPointInHyperSphere
from daCore.NumericObjects import GetNeighborhoodTopology
Xini,
selfA._name,
0.5,
- )
- #
+ )
+
def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.asarray( x ).reshape((-1,1))
- _HX = numpy.asarray( Hm( _X ) ).reshape((-1,1))
+ _X = numpy.asarray( x ).reshape((-1, 1))
+ _HX = numpy.asarray( Hm( _X ) ).reshape((-1, 1))
_Innovation = Y - _HX
#
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "DA"]:
if BI is None or RI is None:
raise ValueError("Background and Observation error covariance matrices have to be properly defined!")
Jb = 0.5 * (_X - Xb).T @ (BI @ (_X - Xb))
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = 0.5 * _Innovation.T @ (RI @ _Innovation)
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = 0.5 * _Innovation.T @ _Innovation
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = numpy.sum( numpy.abs(_Innovation) )
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = numpy.max( numpy.abs(_Innovation) )
#
J = vfloat( Jb ) + vfloat( Jo )
#
return J, vfloat( Jb ), vfloat( Jo )
- #
+
def KeepRunningCondition(__step, __nbfct):
if __step >= selfA._parameters["MaximumNumberOfIterations"]:
- logging.debug("%s Stopping search because the number %i of evolving iterations is exceeding the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"]))
+            logging.debug("%s Stopping search because the number %i of evolving iterations exceeds the maximum %i."%(selfA._name, __step, selfA._parameters["MaximumNumberOfIterations"]))  # noqa: E501
return False
elif __nbfct >= selfA._parameters["MaximumNumberOfFunctionEvaluations"]:
- logging.debug("%s Stopping search because the number %i of function evaluations is exceeding the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"]))
+            logging.debug("%s Stopping search because the number %i of function evaluations exceeds the maximum %i."%(selfA._name, __nbfct, selfA._parameters["MaximumNumberOfFunctionEvaluations"]))  # noqa: E501
return False
else:
return True
# Internal parameters
# -------------------
__nbI = selfA._parameters["NumberOfInsects"]
- __nbP = len(Xini) # Dimension ou nombre de paramètres
+    __nbP = len(Xini)  # Dimension, i.e. number of parameters
#
__iw = float( selfA._parameters["InertiaWeight"] )
__sa = float( selfA._parameters["SocialAcceleration"] )
__ca = float( selfA._parameters["CognitiveAcceleration"] )
__vc = float( selfA._parameters["VelocityClampingFactor"] )
- logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca)))
+ logging.debug("%s Cognitive acceleration (recall to the best previously known value of the insect) = %s"%(selfA._name, str(__ca))) # noqa: E501
logging.debug("%s Social acceleration (attraction towards the best value found in the insect's group) = %s"%(selfA._name, str(__sa)))
logging.debug("%s Inertial weight = %s"%(selfA._name, str(__iw)))
logging.debug("%s Velocity clamping factor = %s"%(selfA._name, str(__vc)))
LimitPlace = Bounds
LimitSpeed = BoxBounds
#
- nbfct = 1 # Nb d'évaluations
- JXini, JbXini, JoXini = CostFunction(Xini,selfA._parameters["QualityCriterion"])
+ nbfct = 1 # Nb d'évaluations
+ JXini, JbXini, JoXini = CostFunction(Xini, selfA._parameters["QualityCriterion"])
#
- Swarm = numpy.zeros((__nbI,4,__nbP)) # 4 car (x,v,gbest,lbest)
- for __p in range(__nbP) :
- Swarm[:,0,__p] = rand( low=LimitPlace[__p,0], high=LimitPlace[__p,1], size=__nbI) # Position
- Swarm[:,1,__p] = rand( low=LimitSpeed[__p,0], high=LimitSpeed[__p,1], size=__nbI) # Velocity
- logging.debug("%s Initialisation of the swarm with %i insects of size %i "%(selfA._name,Swarm.shape[0],Swarm.shape[2]))
+    Swarm = numpy.zeros((__nbI, 4, __nbP))  # 4 slots: (x, v, gbest, lbest)
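+    # Initial positions are drawn uniformly inside "Bounds" and initial
+    # velocities inside "BoxBounds", independently for each component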
+ for __p in range(__nbP):
+ Swarm[:, 0, __p] = rand( low=LimitPlace[__p, 0], high=LimitPlace[__p, 1], size=__nbI) # Position
+ Swarm[:, 1, __p] = rand( low=LimitSpeed[__p, 0], high=LimitSpeed[__p, 1], size=__nbI) # Velocity
+    logging.debug("%s Initialisation of the swarm with %i insects of size %i"%(selfA._name, Swarm.shape[0], Swarm.shape[2]))  # noqa: E501
#
__nbh = GetNeighborhoodTopology( selfA._parameters["SwarmTopology"], list(range(__nbI)) )
#
- qSwarm = JXini * numpy.ones((__nbI,6)) # Qualités (J, Jb, Jo) par insecte + par voisinage
+    qSwarm = JXini * numpy.ones((__nbI, 6))  # Qualities (J, Jb, Jo) per insect + per neighborhood
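+    # Columns 0-2 of qSwarm hold (J, Jb, Jo) of each insect personal best,
+    # columns 3-5 the same quantities for its neighborhood best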
for __i in range(__nbI):
nbfct += 1
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],selfA._parameters["QualityCriterion"])
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], selfA._parameters["QualityCriterion"])
if JTest < JXini:
- Swarm[__i,2,:] = Swarm[__i,0,:] # xBest
- qSwarm[__i,:3] = (JTest, JbTest, JoTest)
+ Swarm[__i, 2, :] = Swarm[__i, 0, :] # xBest
+            qSwarm[__i, :3] = (JTest, JbTest, JoTest)
else:
- Swarm[__i,2,:] = Xini # xBest
- qSwarm[__i,:3] = (JXini, JbXini, JoXini)
+ Swarm[__i, 2, :] = Xini # xBest
+            qSwarm[__i, :3] = (JXini, JbXini, JoXini)
logging.debug("%s Initialisation of the best previous insects"%selfA._name)
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
for __i in range(__nbI):
- Swarm[__i,3,:] = xBest # lBest
- qSwarm[__i,3:] = qSwarm[iBest,:3]
+ Swarm[__i, 3, :] = xBest # lBest
+ qSwarm[__i, 3:] = qSwarm[iBest, :3]
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
#
#
for __i in range(__nbI):
# Evaluate
- JTest, JbTest, JoTest = CostFunction(Swarm[__i,0,:],selfA._parameters["QualityCriterion"])
+ JTest, JbTest, JoTest = CostFunction(Swarm[__i, 0, :], selfA._parameters["QualityCriterion"])
# Update lbest
- if JTest < qSwarm[__i,0]:
- Swarm[__i,2,:] = Swarm[__i,0,:]
- qSwarm[__i,:3] = (JTest, JbTest, JoTest)
+ if JTest < qSwarm[__i, 0]:
+ Swarm[__i, 2, :] = Swarm[__i, 0, :]
+ qSwarm[__i, :3] = (JTest, JbTest, JoTest)
#
for __i in range(__nbI):
# Update gbest
- __im = numpy.argmin( [qSwarm[__v,0] for __v in __nbh[__i]] )
- __il = __nbh[__i][__im] # Best in NB
- if qSwarm[__il,0] < qSwarm[__i,3]:
- Swarm[__i,3,:] = Swarm[__il,2,:]
- qSwarm[__i,3:] = qSwarm[__il,:3]
+ __im = numpy.argmin( [qSwarm[__v, 0] for __v in __nbh[__i]] )
+            __il = __nbh[__i][__im]  # Best in the neighborhood
+ if qSwarm[__il, 0] < qSwarm[__i, 3]:
+ Swarm[__i, 3, :] = Swarm[__il, 2, :]
+                qSwarm[__i, 3:] = qSwarm[__il, :3]
#
- for __i in range(__nbI-1,0-1,-1):
+ for __i in range(__nbI - 1, 0 - 1, -1):
__rct = rand(size=__nbP)
__rst = rand(size=__nbP)
- __xPoint = Swarm[__i,0,:]
+ __xPoint = Swarm[__i, 0, :]
# Points
- __pPoint = __xPoint + __ca * __rct * (Swarm[__i,2,:] - __xPoint)
- __lPoint = __xPoint + __sa * __rst * (Swarm[__i,3,:] - __xPoint)
+ __pPoint = __xPoint + __ca * __rct * (Swarm[__i, 2, :] - __xPoint)
+ __lPoint = __xPoint + __sa * __rst * (Swarm[__i, 3, :] - __xPoint)
__gPoint = (__xPoint + __pPoint + __lPoint) / 3
__radius = numpy.linalg.norm(__gPoint - __xPoint)
__rPoint = GenerateRandomPointInHyperSphere( __gPoint, __radius )
# Update velocity
- __value = __iw * Swarm[__i,1,:] + __rPoint - __xPoint
- Swarm[__i,1,:] = ApplyBounds( __value, LimitSpeed )
+ __value = __iw * Swarm[__i, 1, :] + __rPoint - __xPoint
+ Swarm[__i, 1, :] = ApplyBounds( __value, LimitSpeed )
# Update position
- __value = __xPoint + Swarm[__i,1,:]
- Swarm[__i,0,:] = ApplyBounds( __value, LimitPlace )
+ __value = __xPoint + Swarm[__i, 1, :]
+ Swarm[__i, 0, :] = ApplyBounds( __value, LimitPlace )
#
nbfct += 1
#
- iBest = numpy.argmin(qSwarm[:,0])
- xBest = Swarm[iBest,2,:]
+ iBest = numpy.argmin(qSwarm[:, 0])
+ xBest = Swarm[iBest, 2, :]
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( xBest )
if selfA._toStore("SimulatedObservationAtCurrentState"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( xBest ) )
- selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest,0] )
- selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest,1] )
- selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest,2] )
+ selfA.StoredVariables["CostFunctionJ" ].store( qSwarm[iBest, 0] )
+ selfA.StoredVariables["CostFunctionJb"].store( qSwarm[iBest, 1] )
+ selfA.StoredVariables["CostFunctionJo"].store( qSwarm[iBest, 2] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalStates"):
- selfA.StoredVariables["InternalStates"].store( Swarm[:,0,:].T )
+ selfA.StoredVariables["InternalStates"].store( Swarm[:, 0, :].T )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJ"):
- selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:,0] )
+ selfA.StoredVariables["InternalCostFunctionJ"].store( qSwarm[:, 0] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJb"):
- selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:,1] )
+ selfA.StoredVariables["InternalCostFunctionJb"].store( qSwarm[:, 1] )
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("InternalCostFunctionJo"):
- selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:,2] )
- logging.debug("%s Step %i: insect %i is the better one with J =%.7f"%(selfA._name,step,iBest,qSwarm[iBest,0]))
+ selfA.StoredVariables["InternalCostFunctionJo"].store( qSwarm[:, 2] )
+    logging.debug("%s Step %i: insect %i is the best one with J = %.7f"%(selfA._name, step, iBest, qSwarm[iBest, 0]))  # noqa: E501
#
# Retrieving the analysis
# -----------------------
# Additional calculations and/or storage
# --------------------------------------
if selfA._toStore("OMA") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("SimulatedObservationAtOptimum"):
HXa = Hm(Xa)
if selfA._toStore("Innovation") or \
- selfA._toStore("OMB") or \
- selfA._toStore("SimulatedObservationAtBackground"):
+ selfA._toStore("OMB") or \
+ selfA._toStore("SimulatedObservationAtBackground"):
HXb = Hm(Xb)
Innovation = Y - HXb
if selfA._toStore("Innovation"):
# Initializations
# ---------------
Hm = HO["Tangent"].asMatrix(Xb)
- Hm = Hm.reshape(Y.size,Xb.size) # ADAO & check shape
+ Hm = Hm.reshape(Y.size, Xb.size) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xb)
- Ha = Ha.reshape(Xb.size,Y.size) # ADAO & check shape
+ Ha = Ha.reshape(Xb.size, Y.size) # ADAO & check shape
#
HXb = Hm @ Xb
- HXb = HXb.reshape((-1,1))
+ HXb = HXb.reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size,HXb.size))
+        raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different; they have to be identical."%(Y.size, HXb.size))  # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape,HXb.shape))
+        raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different; they have to be identical."%(Y.shape, HXb.shape))  # noqa: E501
#
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
- (Y.size > Xb.size):
- if isinstance(B,numpy.ndarray):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
+ (Y.size > Xb.size):
+ if isinstance(B, numpy.ndarray):
BI = numpy.linalg.inv(B)
else:
BI = B.getI()
#
Innovation = Y - HXb
if selfA._parameters["EstimationOf"] == "Parameters":
- if CM is not None and "Tangent" in CM and U is not None: # Attention : si Cm est aussi dans H, doublon !
+        if CM is not None and "Tangent" in CM and U is not None:  # Caution: if Cm is also embedded in H, it is counted twice!
Cm = CM["Tangent"].asMatrix(Xb)
- Cm = Cm.reshape(Xb.size,U.size) # ADAO & check shape
- Innovation = Innovation - (Cm @ U).reshape((-1,1))
+ Cm = Cm.reshape(Xb.size, U.size) # ADAO & check shape
+ Innovation = Innovation - (Cm @ U).reshape((-1, 1))
#
# Computation of the analysis
# ----------------------------
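+    # The gain is computed in observation space (R + H B H^T) when there are
+    # fewer observations than state variables, and in state space
+    # (B^{-1} + H^T R^{-1} H) otherwise: both forms are algebraically
+    # equivalent but lead to linear systems of different sizes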
if Y.size <= Xb.size:
_HNHt = numpy.dot(Hm, B @ Ha)
_A = R + _HNHt
- _u = numpy.linalg.solve( _A , Innovation )
- Xa = Xb + (B @ (Ha @ _u)).reshape((-1,1))
+ _u = numpy.linalg.solve( _A, Innovation )
+ Xa = Xb + (B @ (Ha @ _u)).reshape((-1, 1))
K = B @ (Ha @ numpy.linalg.inv(_A))
else:
_HtRH = numpy.dot(Ha, RI @ Hm)
_A = BI + _HtRH
- _u = numpy.linalg.solve( _A , numpy.dot(Ha, RI @ Innovation) )
- Xa = Xb + _u.reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.dot(Ha, RI @ Innovation) )
+ Xa = Xb + _u.reshape((-1, 1))
K = numpy.linalg.inv(_A) @ (Ha @ RI.asfullmatrix(Y.size))
#
Pa = B - K @ (Hm @ B)
- Pa = (Pa + Pa.T) * 0.5 # Symétrie
- Pa = Pa + mpr*numpy.trace( Pa ) * numpy.identity(Xa.size) # Positivité
+    Pa = (Pa + Pa.T) * 0.5  # Symmetry
+    Pa = Pa + mpr * numpy.trace( Pa ) * numpy.identity(Xa.size)  # Positivity
#
- if __storeState: selfA._setInternalState("Xn", Xa)
- if __storeState: selfA._setInternalState("Pn", Pa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ selfA._setInternalState("Pn", Pa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( Innovation )
# ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xa )
if selfA._toStore("BMA"):
selfA.StoredVariables["BMA"].store( numpy.ravel(Xb) - numpy.ravel(Xa) )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( Innovation )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HXb )
# ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T @ (BI @ (Xa - Xb)) )
Jo = vfloat( 0.5 * Innovation.T @ (RI @ Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][:] )
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( Pa )
#
"""
__author__ = "Jean-Philippe ARGAUD"
-import numpy, scipy, logging
+import numpy, logging
import daCore.Persistence
from daCore.NumericObjects import FindIndexesFromNames
from daCore.NumericObjects import InterpolationErrorByColumn
-from daCore.NumericObjects import SingularValuesEstimation
# ==============================================================================
def UBFEIM_offline(selfA, EOS = None, Verbose = False):
# Initializations
# ---------------
if numpy.array(EOS).size == 0:
- raise ValueError("EnsembleOfSnapshots has not to be void, but an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+        raise ValueError("EnsembleOfSnapshots must not be empty; it has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")  # noqa: E501
if isinstance(EOS, (numpy.ndarray, numpy.matrix)):
__EOS = numpy.asarray(EOS)
elif isinstance(EOS, (list, tuple, daCore.Persistence.Persistence)):
__EOS = numpy.stack([numpy.ravel(_sn) for _sn in EOS], axis=1)
else:
- raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+ raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).") # noqa: E501
__dimS, __nbmS = __EOS.shape
- logging.debug("%s Using a collection of %i snapshots of individual size of %i"%(selfA._name,__nbmS,__dimS))
+    logging.debug("%s Using a collection of %i snapshots of individual size %i"%(selfA._name, __nbmS, __dimS))  # noqa: E501
#
if numpy.array(selfA._parameters["UserBasisFunctions"]).size == 0:
- logging.debug("%s Using the snapshots in place of user defined basis functions, the latter being not provided"%(selfA._name))
+        logging.debug("%s Using the snapshots in place of user defined basis functions, the latter not being provided"%(selfA._name))  # noqa: E501
UBF = __EOS
else:
UBF = selfA._parameters["UserBasisFunctions"]
elif isinstance(UBF, (list, tuple, daCore.Persistence.Persistence)):
__UBF = numpy.stack([numpy.ravel(_sn) for _sn in UBF], axis=1)
else:
- raise ValueError("UserBasisFunctions has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
- assert __EOS.shape[0] == __UBF.shape[0], "Individual snapshot and user defined basis function has to be of the same size, which is false: %i =/= %i"%(__EOS.shape[0], __UBF.shape[0])
+ raise ValueError("UserBasisFunctions has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).") # noqa: E501
+    assert __EOS.shape[0] == __UBF.shape[0], "Individual snapshots and user defined basis functions have to be of the same size, which is false: %i =/= %i"%(__EOS.shape[0], __UBF.shape[0])  # noqa: E501
__dimS, __nbmS = __UBF.shape
- logging.debug("%s Using a collection of %i user defined basis functions of individual size of %i"%(selfA._name,__nbmS,__dimS))
+    logging.debug("%s Using a collection of %i user defined basis functions of individual size %i"%(selfA._name, __nbmS, __dimS))  # noqa: E501
#
if selfA._parameters["Variant"] in ["UBFEIM", "PositioningByUBFEIM"]:
__LcCsts = False
else:
__ExcludedMagicPoints = ()
if __LcCsts and "NameOfLocations" in selfA._parameters:
- if isinstance(selfA._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) and len(selfA._parameters["NameOfLocations"]) == __dimS:
+ if isinstance(selfA._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) and len(selfA._parameters["NameOfLocations"]) == __dimS: # noqa: E501
__NameOfLocations = selfA._parameters["NameOfLocations"]
else:
__NameOfLocations = ()
numpy.arange(__UBF.shape[0]),
__ExcludedMagicPoints,
assume_unique = True,
- )
+ )
else:
__IncludedMagicPoints = []
#
if "MaximumNumberOfLocations" in selfA._parameters and "MaximumRBSize" in selfA._parameters:
- selfA._parameters["MaximumRBSize"] = min(selfA._parameters["MaximumNumberOfLocations"],selfA._parameters["MaximumRBSize"])
+ selfA._parameters["MaximumRBSize"] = min(selfA._parameters["MaximumNumberOfLocations"], selfA._parameters["MaximumRBSize"]) # noqa: E501
elif "MaximumNumberOfLocations" in selfA._parameters:
selfA._parameters["MaximumRBSize"] = selfA._parameters["MaximumNumberOfLocations"]
elif "MaximumRBSize" in selfA._parameters:
#
if __LcCsts and len(__IncludedMagicPoints) > 0:
__iM = numpy.argmax( numpy.abs(
- numpy.take(__rhoM[:,0], __IncludedMagicPoints, mode='clip')
- ))
+ numpy.take(__rhoM[:, 0], __IncludedMagicPoints, mode='clip')
+ ))
else:
__iM = numpy.argmax( numpy.abs(
- __rhoM[:,0]
- ))
+ __rhoM[:, 0]
+ ))
#
- __mu = [None,] # Convention
+ __mu = [None,] # Convention
__I = [__iM,]
- __Q = __rhoM[:,0].reshape((-1,1))
+ __Q = __rhoM[:, 0].reshape((-1, 1))
__errors = []
#
- __M = 1 # Car le premier est déjà construit
+    __M = 1  # Because the first one is already built
if selfA._toStore("Residus"):
__eM, _ = InterpolationErrorByColumn(
__EOS, __Q, __I, __M,
# ------
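+    # Greedy EIM enrichment: each pass interpolates the next candidate on
+    # the current magic points __I, and the location of the largest residual
+    # becomes the new magic point that enriches the basis __Q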
while __M < __maxM:
#
- __restrictedQi = __Q[__I,:]
+ __restrictedQi = __Q[__I, :]
if __M > 1:
__Qi_inv = numpy.linalg.inv(__restrictedQi)
else:
__Qi_inv = 1. / __restrictedQi
#
- __restrictedrhoMi = __rhoM[__I,__M].reshape((-1,1))
+ __restrictedrhoMi = __rhoM[__I, __M].reshape((-1, 1))
#
if __M > 1:
- __interpolator = numpy.dot(__Q,numpy.dot(__Qi_inv,__restrictedrhoMi))
+ __interpolator = numpy.dot(__Q, numpy.dot(__Qi_inv, __restrictedrhoMi))
else:
- __interpolator = numpy.outer(__Q,numpy.outer(__Qi_inv,__restrictedrhoMi))
+ __interpolator = numpy.outer(__Q, numpy.outer(__Qi_inv, __restrictedrhoMi))
#
- __residuM = __rhoM[:,__M].reshape((-1,1)) - __interpolator
+ __residuM = __rhoM[:, __M].reshape((-1, 1)) - __interpolator
#
if __LcCsts and len(__IncludedMagicPoints) > 0:
__iM = numpy.argmax( numpy.abs(
numpy.take(__residuM, __IncludedMagicPoints, mode='clip')
- ))
+ ))
else:
__iM = numpy.argmax( numpy.abs(
__residuM
- ))
- __Q = numpy.column_stack((__Q, __rhoM[:,__M]))
+ ))
+ __Q = numpy.column_stack((__Q, __rhoM[:, __M]))
#
__I.append(__iM)
- __mu.append(None) # Convention
+ __mu.append(None) # Convention
if selfA._toStore("Residus"):
__eM, _ = InterpolationErrorByColumn(
- __EOS, __Q, __I, __M+1,
+ __EOS, __Q, __I, __M + 1,
__ErrorNorm = selfA._parameters["ErrorNorm"],
__LcCsts = __LcCsts, __IncludedPoints = __IncludedMagicPoints)
__errors.append(__eM)
#
__M = __M + 1
#
- #--------------------------
- if len(__errors)>0 and __errors[-1] < selfA._parameters["EpsilonEIM"]:
- logging.debug("%s %s (%.1e)"%(selfA._name,"The convergence is obtained when reaching the required EIM tolerance",selfA._parameters["EpsilonEIM"]))
+ # --------------------------
+ if len(__errors) > 0 and __errors[-1] < selfA._parameters["EpsilonEIM"]:
+ logging.debug("%s %s (%.1e)"%(selfA._name, "The convergence is obtained when reaching the required EIM tolerance", selfA._parameters["EpsilonEIM"])) # noqa: E501
if __M >= __maxM:
- logging.debug("%s %s (%i)"%(selfA._name,"The convergence is obtained when reaching the maximum number of RB dimension",__maxM))
- logging.debug("%s The RB of size %i has been correctly build"%(selfA._name,__Q.shape[1]))
- logging.debug("%s There are %i points that have been excluded from the potential optimal points"%(selfA._name,len(__ExcludedMagicPoints)))
+        logging.debug("%s %s (%i)"%(selfA._name, "The convergence is obtained when reaching the maximum RB dimension", __maxM))  # noqa: E501
+    logging.debug("%s The RB of size %i has been correctly built"%(selfA._name, __Q.shape[1]))
+ logging.debug("%s There are %i points that have been excluded from the potential optimal points"%(selfA._name, len(__ExcludedMagicPoints))) # noqa: E501
if hasattr(selfA, "StoredVariables"):
selfA.StoredVariables["OptimalPoints"].store( __I )
+ if selfA._toStore("ReducedBasisMus"):
+ selfA.StoredVariables["ReducedBasisMus"].store( __mu )
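+        # With user-provided basis functions, no parameter value is attached
+        # to the basis elements, hence the "None" placeholders in __mu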
if selfA._toStore("ReducedBasis"):
selfA.StoredVariables["ReducedBasis"].store( __Q )
if selfA._toStore("Residus"):
"""
__author__ = "Jean-Philippe ARGAUD"
-import math, numpy, scipy, copy
-from daCore.PlatformInfo import vfloat
+import numpy, scipy, copy
+from daCore.NumericObjects import GenerateWeightsAndSigmaPoints
+from daCore.PlatformInfo import PlatformInfo, vfloat
+mpr = PlatformInfo().MachinePrecision()
# ==============================================================================
-def ecwukf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q):
+def ecwukf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="UKF"):
"""
Unscented Kalman Filter
"""
if selfA._parameters["EstimationOf"] == "Parameters":
selfA._parameters["StoreInternalVariables"] = True
#
- L = Xb.size
- Alpha = selfA._parameters["Alpha"]
- Beta = selfA._parameters["Beta"]
- if selfA._parameters["Kappa"] == 0:
- if selfA._parameters["EstimationOf"] == "State":
- Kappa = 0
- elif selfA._parameters["EstimationOf"] == "Parameters":
- Kappa = 3 - L
- else:
- Kappa = selfA._parameters["Kappa"]
- Lambda = float( Alpha**2 ) * ( L + Kappa ) - L
- Gamma = math.sqrt( L + Lambda )
- #
- Ww = []
- Ww.append( 0. )
- for i in range(2*L):
- Ww.append( 1. / (2.*(L + Lambda)) )
- #
- Wm = numpy.array( Ww )
- Wm[0] = Lambda / (L + Lambda)
- Wc = numpy.array( Ww )
- Wc[0] = Lambda / (L + Lambda) + (1. - Alpha**2 + Beta)
+ wsp = GenerateWeightsAndSigmaPoints(
+ Nn = Xb.size,
+ EO = selfA._parameters["EstimationOf"],
+ VariantM = VariantM,
+ Alpha = selfA._parameters["Alpha"],
+ Beta = selfA._parameters["Beta"],
+ Kappa = selfA._parameters["Kappa"],
+ )
+ Wm, Wc, SC = wsp.get()
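+    # Wm holds the mean weights, Wc the covariance weights, and the columns
+    # of SC the canonical sigma point directions, the actual sigma points
+    # being recovered below as Xn + sqrt(Pn) @ SC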
#
# Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
# Pre-computation of the inverses of B and R
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
BI = B.getI()
RI = R.getI()
#
__n = Xb.size
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
Xn = Xb
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
Pn = B.asfullmatrix(__n)
else:
Pn = B
XaMin = Xn
previousJMinimum = numpy.finfo(float).max
#
- for step in range(duration-1):
+ for step in range(duration - 1):
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
+ Hm = HO["Direct"].appliedControledFormTo
+ if selfA._parameters["EstimationOf"] == "State":
+ Mm = EM["Direct"].appliedControledFormTo
if CM is not None and "Tangent" in CM and U is not None:
Cm = CM["Tangent"].asMatrix(Xn)
else:
Cm = None
#
+ # Pndemi = numpy.real(scipy.linalg.cholesky(Pn))
Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
- Xnmu = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
- nbSpts = 2*Xn.size+1
+ Xnmu = Xn + Pndemi @ SC
+ nbSpts = SC.shape[1]
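+        # The number of sigma points depends on the chosen VariantM, for
+        # instance 2n+1 for the plain "UKF" parametrization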
#
- XEnnmu = []
- for point in range(nbSpts):
- if selfA._parameters["EstimationOf"] == "State":
- Mm = EM["Direct"].appliedControledFormTo
- XEnnmui = numpy.asarray( Mm( (Xnmu[:,point], Un) ) ).reshape((-1,1))
- if Cm is not None and Un is not None: # Attention : si Cm est aussi dans M, doublon !
- Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
- XEnnmui = XEnnmui + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters":
- # --- > Par principe, M = Id, Q = 0
- XEnnmui = Xnmu[:,point]
- XEnnmu.append( numpy.ravel(XEnnmui).reshape((-1,1)) )
- XEnnmu = numpy.concatenate( XEnnmu, axis=1 )
+ if selfA._parameters["EstimationOf"] == "State":
+ XEnnmu = Mm( [(Xnmu[:, point].reshape((-1, 1)), Un) for point in range(nbSpts)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
+            if Cm is not None and Un is not None:  # Caution: if Cm is also embedded in M, it is counted twice!
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
+ XEnnmu = XEnnmu + Cm @ Un
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+            # ---> By principle, M = Id and Q = 0
+ XEnnmu = numpy.array( Xnmu )
#
Xhmn = ( XEnnmu * Wm ).sum(axis=1)
#
- if selfA._parameters["EstimationOf"] == "State": Pmn = copy.copy(Q)
- elif selfA._parameters["EstimationOf"] == "Parameters": Pmn = 0.
+ if selfA._parameters["EstimationOf"] == "State":
+ Pmn = copy.copy(Q)
+ elif selfA._parameters["EstimationOf"] == "Parameters":
+ Pmn = 0.
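+        # Predicted covariance: Pmn = Q + sum over the sigma points of
+        # Wc[point] * outer(dX, dX), with Q dropped for parameter estimation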
for point in range(nbSpts):
- dXEnnmuXhmn = XEnnmu[:,point].flat-Xhmn
- Pmn += Wc[i] * numpy.outer(dXEnnmuXhmn, dXEnnmuXhmn)
+ dXEnnmuXhmn = XEnnmu[:, point].flat - Xhmn
+ Pmn += Wc[point] * numpy.outer(dXEnnmuXhmn, dXEnnmuXhmn)
#
+ # Pmndemi = numpy.real(scipy.linalg.cholesky(Pmn))
Pmndemi = numpy.real(scipy.linalg.sqrtm(Pmn))
- Xnnmu = numpy.hstack([Xhmn.reshape((-1,1)), Xhmn.reshape((-1,1))+Gamma*Pmndemi, Xhmn.reshape((-1,1))-Gamma*Pmndemi])
+ Xnnmu = Xhmn.reshape((-1, 1)) + Pmndemi @ SC
#
- Hm = HO["Direct"].appliedControledFormTo
- Ynnmu = []
- for point in range(nbSpts):
- if selfA._parameters["EstimationOf"] == "State":
- Ynnmui = Hm( (Xnnmu[:,point], None) )
- elif selfA._parameters["EstimationOf"] == "Parameters":
- Ynnmui = Hm( (Xnnmu[:,point], Un) )
- Ynnmu.append( numpy.ravel(Ynnmui).reshape((__p,1)) )
- Ynnmu = numpy.concatenate( Ynnmu, axis=1 )
+ Ynnmu = Hm( [(Xnnmu[:, point], None) for point in range(nbSpts)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
#
Yhmn = ( Ynnmu * Wm ).sum(axis=1)
#
Pyyn = copy.copy(R)
Pxyn = 0.
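+        # Innovation covariance Pyyn = R + weighted observation anomalies,
+        # and state/observation cross covariance Pxyn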
for point in range(nbSpts):
- dYnnmuYhmn = Ynnmu[:,point].flat-Yhmn
- dXnnmuXhmn = Xnnmu[:,point].flat-Xhmn
- Pyyn += Wc[i] * numpy.outer(dYnnmuYhmn, dYnnmuYhmn)
- Pxyn += Wc[i] * numpy.outer(dXnnmuXhmn, dYnnmuYhmn)
+ dYnnmuYhmn = Ynnmu[:, point].flat - Yhmn
+ dXnnmuXhmn = Xnnmu[:, point].flat - Xhmn
+ Pyyn += Wc[point] * numpy.outer(dYnnmuYhmn, dYnnmuYhmn)
+ Pxyn += Wc[point] * numpy.outer(dXnnmuXhmn, dYnnmuYhmn)
#
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
- _Innovation = Ynpu - Yhmn.reshape((-1,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
+ _Innovation = Ynpu - Yhmn.reshape((-1, 1))
if selfA._parameters["EstimationOf"] == "Parameters":
- if Cm is not None and Un is not None: # Attention : si Cm est aussi dans H, doublon !
+            if Cm is not None and Un is not None:  # Caution: if Cm is also embedded in H, it is counted twice!
_Innovation = _Innovation - Cm @ Un
#
- Kn = Pxyn @ Pyyn.I
- Xn = Xhmn.reshape((-1,1)) + Kn @ _Innovation
+ Kn = Pxyn @ scipy.linalg.inv(Pyyn)
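+        # scipy.linalg.inv replaces the former numpy.matrix ".I" attribute,
+        # Pyyn being handled here as a plain ndarray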
+ Xn = Xhmn.reshape((-1, 1)) + Kn @ _Innovation
Pn = Pmn - Kn @ (Pyyn @ Kn.T)
#
- Xa = Xn # Pointeurs
- #--------------------------
+        Xa = Xn  # Pointers
+ # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("Pn", Pn)
- #--------------------------
+ # --------------------------
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
# ---> with analysis
selfA.StoredVariables["Analysis"].store( Xa )
if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
- selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( Hm((Xa, Un)) )
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( Hm((Xa, None)) )
if selfA._toStore("InnovationAtCurrentAnalysis"):
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
# ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( Xhmn )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yhmn )
# ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( Pn )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
#
# Observation duration and sizes
LagL = selfA._parameters["SmootherLagL"]
- if (not hasattr(Y,"store")) or (not hasattr(Y,"stepnumber")):
+ if (not hasattr(Y, "store")) or (not hasattr(Y, "stepnumber")):
raise ValueError("Fixed-lag smoother requires a series of observations")
if Y.stepnumber() < LagL:
raise ValueError("Fixed-lag smoother requires a series of observations longer than the lag L")
__n = Xb.size
__m = selfA._parameters["NumberOfMembers"]
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(__n) )
else:
selfA.StoredVariables["APosterioriCovariance"].store( B )
else:
raise ValueError("VariantM has to be chosen in the authorized methods list.")
if LagL > 0:
- EL = selfB.StoredVariables["CurrentEnsembleState"][LagL-1]
+ EL = selfB.StoredVariables["CurrentEnsembleState"][LagL - 1]
else:
- EL = EnsembleOfBackgroundPerturbations( Xb, None, __m ) # Cf. etkf
+ EL = EnsembleOfBackgroundPerturbations( Xb, None, __m ) # Cf. etkf
selfA._parameters["SetSeed"] = numpy.random.set_state(__seed)
#
- for step in range(LagL,duration-1):
+ for step in range(LagL, duration - 1):
#
- sEL = selfB.StoredVariables["CurrentEnsembleState"][step+1-LagL:step+1]
+ sEL = selfB.StoredVariables["CurrentEnsembleState"][step + 1 - LagL:step + 1]
sEL.append(None)
#
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
- #--------------------------
+ # --------------------------
if VariantM == "EnKS16-KalmanFilterFormula":
- if selfA._parameters["EstimationOf"] == "State": # Forecast
- EL = M( [(EL[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
+ if selfA._parameters["EstimationOf"] == "State": # Forecast
+ EL = M( [(EL[:, i], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
EL = EnsemblePerturbationWithGivenCovariance( EL, Q )
- EZ = H( [(EL[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
- if Cm is not None and Un is not None: # Attention : si Cm est aussi dans M, doublon !
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ EZ = H( [(EL[:, i], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
+                if Cm is not None and Un is not None:  # Caution: if Cm is also embedded in M, it is counted twice!
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
EZ = EZ + Cm @ Un
elif selfA._parameters["EstimationOf"] == "Parameters":
# ---> By principle, M = Id and Q = 0
- EZ = H( [(EL[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
+ EZ = H( [(EL[:, i], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
#
- vEm = EL.mean(axis=1, dtype=mfp).astype('float').reshape((__n,1))
- vZm = EZ.mean(axis=1, dtype=mfp).astype('float').reshape((__p,1))
+ vEm = EL.mean(axis=1, dtype=mfp).astype('float').reshape((__n, 1))
+ vZm = EZ.mean(axis=1, dtype=mfp).astype('float').reshape((__p, 1))
#
- mS = RIdemi @ EnsembleOfAnomalies( EZ, vZm, 1./math.sqrt(__m-1) )
- mS = mS.reshape((-1,__m)) # Pour dimension 1
+ mS = RIdemi @ EnsembleOfAnomalies( EZ, vZm, 1. / math.sqrt(__m - 1) )
+            mS = mS.reshape((-1, __m))  # For the one-dimensional case
delta = RIdemi @ ( Ynpu - vZm )
mT = numpy.linalg.inv( numpy.identity(__m) + mS.T @ mS )
vw = mT @ mS.T @ delta
#
Tdemi = numpy.real(scipy.linalg.sqrtm(mT))
mU = numpy.identity(__m)
- wTU = (vw.reshape((__m,1)) + math.sqrt(__m-1) * Tdemi @ mU)
+ wTU = (vw.reshape((__m, 1)) + math.sqrt(__m - 1) * Tdemi @ mU)
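+            # The columns of wTU are ETKF-like analysis weights that
+            # recombine the forecast anomalies into the analysis ensemble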
#
- EX = EnsembleOfAnomalies( EL, vEm, 1./math.sqrt(__m-1) )
+ EX = EnsembleOfAnomalies( EL, vEm, 1. / math.sqrt(__m - 1) )
EL = vEm + EX @ wTU
#
sEL[LagL] = EL
- for irl in range(LagL): # Lissage des L précédentes analysis
- vEm = sEL[irl].mean(axis=1, dtype=mfp).astype('float').reshape((__n,1))
- EX = EnsembleOfAnomalies( sEL[irl], vEm, 1./math.sqrt(__m-1) )
+            for irl in range(LagL):  # Smoothing of the L previous analyses
+ vEm = sEL[irl].mean(axis=1, dtype=mfp).astype('float').reshape((__n, 1))
+ EX = EnsembleOfAnomalies( sEL[irl], vEm, 1. / math.sqrt(__m - 1) )
sEL[irl] = vEm + EX @ wTU
#
# Preservation of the order-0 retrospective analysis before rotation
- Xa = sEL[0].mean(axis=1, dtype=mfp).astype('float').reshape((__n,1))
+ Xa = sEL[0].mean(axis=1, dtype=mfp).astype('float').reshape((__n, 1))
if selfA._toStore("APosterioriCovariance"):
EXn = sEL[0]
#
for irl in range(LagL):
- sEL[irl] = sEL[irl+1]
+ sEL[irl] = sEL[irl + 1]
sEL[LagL] = None
- #--------------------------
+ # --------------------------
else:
raise ValueError("VariantM has to be chosen in the authorized methods list.")
#
# Storage of the last analyses, still incompletely updated
for irl in range(LagL):
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
- Xa = sEL[irl].mean(axis=1, dtype=mfp).astype('float').reshape((__n,1))
+ Xa = sEL[irl].mean(axis=1, dtype=mfp).astype('float').reshape((__n, 1))
selfA.StoredVariables["Analysis"].store( Xa )
#
return 0
selfA._parameters["SampleAsIndependantRandomVariables"],
Xb,
selfA._parameters["SetSeed"],
- )
+ )
#
- if hasattr(sampleList,"__len__") and len(sampleList) == 0:
- if outputEOX: return numpy.array([[]]), numpy.array([[]])
- else: return numpy.array([[]])
+ if hasattr(sampleList, "__len__") and len(sampleList) == 0:
+ if outputEOX:
+ return numpy.array([[]]), numpy.array([[]])
+ else:
+ return numpy.array([[]])
#
if outputEOX or selfA._toStore("EnsembleOfStates"):
EOX = numpy.stack(tuple(copy.copy(sampleList)), axis=1)
CUR_LEVEL = logging.getLogger().getEffectiveLevel()
logging.getLogger().setLevel(logging.DEBUG)
print("===> Beginning of evaluation, activating debug\n")
- print(" %s\n"%("-"*75,))
+ print(" %s\n"%("-" * 75,))
#
Hm = HO["Direct"].appliedTo
if assumeNoFailure:
sampleList,
argsAsSerie = True,
returnSerieAsArrayMatrix = True,
- )
+ )
else:
try:
EOS = Hm(
sampleList,
argsAsSerie = True,
returnSerieAsArrayMatrix = True,
- )
- except: # Sequential retry on computation error
+ )
+ except Exception:  # Sequential retry on computation error
EOS, __s = [], 1
for state in sampleList:
if numpy.any(numpy.isin((None, numpy.nan), state)):
- EOS.append( () ) # Empty result
+ EOS.append( () )  # Empty result
else:
try:
EOS.append( Hm(state) )
__s = numpy.asarray(EOS[-1]).size
- except:
- EOS.append( () ) # Empty result
+ except Exception:
+ EOS.append( () )  # Empty result
for i, resultat in enumerate(EOS):
- if len(resultat) == 0: # Empty result
- EOS[i] = numpy.nan*numpy.ones(__s)
+ if len(resultat) == 0:  # Empty result
+ EOS[i] = numpy.nan * numpy.ones(__s)
EOS = numpy.stack(EOS, axis=1)
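# Snapshots are stacked as columns; states whose evaluation failed are
# kept as columns of NaN.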
#
- if len(EOS.shape) > 2 and EOS.shape[2]==1: # Update (RaJ) if Hm output is transposed
+ if len(EOS.shape) > 2 and EOS.shape[2] == 1:  # Update (RaJ) if Hm output is transposed
EOS = EOS.squeeze( axis = 2 )
#
if selfA._parameters["SetDebug"]:
- print("\n %s\n"%("-"*75,))
+ print("\n %s\n"%("-" * 75,))
print("===> End evaluation, deactivating debug if necessary\n")
logging.getLogger().setLevel(CUR_LEVEL)
# ----------
#
if selfA._toStore("EnsembleOfStates"):
if EOX.shape[1] != EOS.shape[1]:
- raise ValueError("Numbers of states (=%i) and snapshots (=%i) have to be the same!"%(EOX.shape[1], EOS.shape[1]))
+ raise ValueError("Numbers of states (=%i) and snapshots (=%i) have to be the same!"%(EOX.shape[1], EOS.shape[1])) # noqa: E501
selfA.StoredVariables["EnsembleOfStates"].store( EOX )
if selfA._toStore("EnsembleOfSimulations"):
selfA.StoredVariables["EnsembleOfSimulations"].store( EOS )
#
if outputEOX:
if EOX.shape[1] != EOS.shape[1]:
- raise ValueError("Numbers of states (=%i) and snapshots (=%i) have to be the same!"%(EOX.shape[1], EOS.shape[1]))
+ raise ValueError("Numbers of states (=%i) and snapshots (=%i) have to be the same!"%(EOX.shape[1], EOS.shape[1])) # noqa: E501
return EOX, EOS
else:
return EOS
from daCore.PlatformInfo import vfloat
# ==============================================================================
-def etkf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q,
- VariantM="KalmanFilterFormula",
- Hybrid=None,
- ):
+def etkf( selfA, Xb, Y, U, HO, EM, CM, R, B, Q,
+ VariantM="KalmanFilterFormula",
+ Hybrid=None,
+ ):
"""
Ensemble-Transform Kalman Filter
"""
Cm = None
#
# Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
# Precomputation of the inverses of B and R
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
BI = B.getI()
RI = R.getI()
elif VariantM != "KalmanFilterFormula":
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
previousJMinimum = numpy.finfo(float).max
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
Xn = EnsembleOfBackgroundPerturbations( Xb, None, __m )
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(__n) )
else:
selfA.StoredVariables["APosterioriCovariance"].store( B )
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
#
- for step in range(duration-1):
+ for step in range(duration - 1):
numpy.random.set_state(selfA._getInternalState("seed"))
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
if selfA._parameters["InflationType"] == "MultiplicativeOnBackgroundAnomalies":
- Xn = CovarianceInflation( Xn,
+ Xn = CovarianceInflation(
+ Xn,
selfA._parameters["InflationType"],
selfA._parameters["InflationFactor"],
- )
+ )
#
- if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
- EMX = M( [(Xn[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
+ if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
+ EMX = M( [(Xn[:, i], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
Xn_predicted = EnsemblePerturbationWithGivenCovariance( EMX, Q )
- HX_predicted = H( [(Xn_predicted[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
- if Cm is not None and Un is not None: # Warning: if Cm is also included in M, it is applied twice!
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ HX_predicted = H( [(Xn_predicted[:, i], None) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
+ if Cm is not None and Un is not None:  # Warning: if Cm is also included in M, it is applied twice!
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn_predicted = Xn_predicted + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
# --- > By principle, M = Id, Q = 0
Xn_predicted = EMX = Xn
- HX_predicted = H( [(Xn_predicted[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
+ HX_predicted = H( [(Xn_predicted[:, i], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
#
# Mean of forecast and observation of forecast
Xfm = EnsembleMean( Xn_predicted )
EaX = EnsembleOfAnomalies( Xn_predicted, Xfm )
Hfm = EnsembleMean( HX_predicted )
EaHX = EnsembleOfAnomalies( HX_predicted, Hfm )
#
- #--------------------------
+ # --------------------------
if VariantM == "KalmanFilterFormula":
- mS = RIdemi * EaHX / math.sqrt(__m-1)
- mS = mS.reshape((-1,__m)) # For dimension 1
+ mS = RIdemi * EaHX / math.sqrt(__m - 1)
+ mS = mS.reshape((-1, __m))  # For dimension 1
delta = RIdemi * ( Ynpu - Hfm )
mT = numpy.linalg.inv( numpy.identity(__m) + mS.T @ mS )
vw = mT @ mS.T @ delta
Tdemi = numpy.real(scipy.linalg.sqrtm(mT))
mU = numpy.identity(__m)
#
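# Deterministic square-root (ETKF) update: the analysis ensemble is the
# forecast mean plus the anomalies recombined through the mean weights vw
# and the symmetric square root Tdemi of the transform matrix mT.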
- EaX = EaX / math.sqrt(__m-1)
- Xn = Xfm + EaX @ ( vw.reshape((__m,1)) + math.sqrt(__m-1) * Tdemi @ mU )
- #--------------------------
+ EaX = EaX / math.sqrt(__m - 1)
+ Xn = Xfm + EaX @ ( vw.reshape((__m, 1)) + math.sqrt(__m - 1) * Tdemi @ mU )
+ # --------------------------
elif VariantM == "Variational":
- HXfm = H((Xfm[:,None], Un)) # Possibly Hfm
+ HXfm = H((Xfm[:, None], Un))  # Possibly Hfm
+
def CostFunction(w):
- _A = Ynpu - HXfm.reshape((__p,1)) - (EaHX @ w).reshape((__p,1))
+ _A = Ynpu - HXfm.reshape((__p, 1)) - (EaHX @ w).reshape((__p, 1))
_Jo = 0.5 * _A.T @ (RI * _A)
- _Jb = 0.5 * (__m-1) * w.T @ w
+ _Jb = 0.5 * (__m - 1) * w.T @ w
_J = _Jo + _Jb
return vfloat(_J)
+
def GradientOfCostFunction(w):
- _A = Ynpu - HXfm.reshape((__p,1)) - (EaHX @ w).reshape((__p,1))
+ _A = Ynpu - HXfm.reshape((__p, 1)) - (EaHX @ w).reshape((__p, 1))
_GardJo = - EaHX.T @ (RI * _A)
- _GradJb = (__m-1) * w.reshape((__m,1))
+ _GradJb = (__m - 1) * w.reshape((__m, 1))
_GradJ = _GardJo + _GradJb
return numpy.ravel(_GradJ)
+
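# The analysis weights are obtained by minimizing J(w) in ensemble space
# with a nonlinear conjugate-gradient method, starting from w = 0.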
vw = scipy.optimize.fmin_cg(
f = CostFunction,
x0 = numpy.zeros(__m),
fprime = GradientOfCostFunction,
args = (),
disp = False,
- )
+ )
#
- Hto = EaHX.T @ (RI * EaHX).reshape((-1,__m))
- Htb = (__m-1) * numpy.identity(__m)
+ Hto = EaHX.T @ (RI * EaHX).reshape((-1, __m))
+ Htb = (__m - 1) * numpy.identity(__m)
Hta = Hto + Htb
#
Pta = numpy.linalg.inv( Hta )
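# Pta, the inverse Hessian of J, estimates the analysis error covariance
# in ensemble space; EWa converts it into posterior anomaly weights.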
- EWa = numpy.real(scipy.linalg.sqrtm((__m-1)*Pta)) # Imaginary part ~= 10^-18
+ EWa = numpy.real(scipy.linalg.sqrtm((__m - 1) * Pta))  # Imaginary part ~= 10^-18
#
- Xn = Xfm + EaX @ (vw[:,None] + EWa)
- #--------------------------
- elif VariantM == "FiniteSize11": # Boc2011 gauge
- HXfm = H((Xfm[:,None], Un)) # Possibly Hfm
+ Xn = Xfm + EaX @ (vw[:, None] + EWa)
+ # --------------------------
+ elif VariantM == "FiniteSize11":  # Boc2011 gauge
+ HXfm = H((Xfm[:, None], Un))  # Possibly Hfm
+
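# Finite-size variant: the quadratic background term of J is replaced by
# a logarithmic one (the "Boc2011" gauge) that accounts for the sampling
# error of the ensemble statistics.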
def CostFunction(w):
- _A = Ynpu - HXfm.reshape((__p,1)) - (EaHX @ w).reshape((__p,1))
+ _A = Ynpu - HXfm.reshape((__p, 1)) - (EaHX @ w).reshape((__p, 1))
_Jo = 0.5 * _A.T @ (RI * _A)
- _Jb = 0.5 * __m * math.log(1 + 1/__m + w.T @ w)
+ _Jb = 0.5 * __m * math.log(1 + 1 / __m + w.T @ w)
_J = _Jo + _Jb
return vfloat(_J)
+
def GradientOfCostFunction(w):
- _A = Ynpu - HXfm.reshape((__p,1)) - (EaHX @ w).reshape((__p,1))
+ _A = Ynpu - HXfm.reshape((__p, 1)) - (EaHX @ w).reshape((__p, 1))
_GardJo = - EaHX.T @ (RI * _A)
- _GradJb = __m * w.reshape((__m,1)) / (1 + 1/__m + w.T @ w)
+ _GradJb = __m * w.reshape((__m, 1)) / (1 + 1 / __m + w.T @ w)
_GradJ = _GardJo + _GradJb
return numpy.ravel(_GradJ)
+
vw = scipy.optimize.fmin_cg(
f = CostFunction,
x0 = numpy.zeros(__m),
fprime = GradientOfCostFunction,
args = (),
disp = False,
- )
+ )
#
- Hto = EaHX.T @ (RI * EaHX).reshape((-1,__m))
+ Hto = EaHX.T @ (RI * EaHX).reshape((-1, __m))
Htb = __m * \
- ( (1 + 1/__m + vw.T @ vw) * numpy.identity(__m) - 2 * vw @ vw.T ) \
- / (1 + 1/__m + vw.T @ vw)**2
+ ( (1 + 1 / __m + vw.T @ vw) * numpy.identity(__m) - 2 * vw @ vw.T ) \
+ / (1 + 1 / __m + vw.T @ vw)**2
Hta = Hto + Htb
#
Pta = numpy.linalg.inv( Hta )
- EWa = numpy.real(scipy.linalg.sqrtm((__m-1)*Pta)) # Imaginary part ~= 10^-18
+ EWa = numpy.real(scipy.linalg.sqrtm((__m - 1) * Pta))  # Imaginary part ~= 10^-18
#
- Xn = Xfm + EaX @ (vw.reshape((__m,1)) + EWa)
- #--------------------------
- elif VariantM == "FiniteSize15": # Boc2015 gauge
- HXfm = H((Xfm[:,None], Un)) # Possibly Hfm
+ Xn = Xfm + EaX @ (vw.reshape((__m, 1)) + EWa)
+ # --------------------------
+ elif VariantM == "FiniteSize15":  # Boc2015 gauge
+ HXfm = H((Xfm[:, None], Un))  # Possibly Hfm
+
def CostFunction(w):
- _A = Ynpu - HXfm.reshape((__p,1)) - (EaHX @ w).reshape((__p,1))
+ _A = Ynpu - HXfm.reshape((__p, 1)) - (EaHX @ w).reshape((__p, 1))
_Jo = 0.5 * _A.T @ (RI * _A)
- _Jb = 0.5 * (__m+1) * math.log(1 + 1/__m + w.T @ w)
+ _Jb = 0.5 * (__m + 1) * math.log(1 + 1 / __m + w.T @ w)
_J = _Jo + _Jb
return vfloat(_J)
+
def GradientOfCostFunction(w):
- _A = Ynpu - HXfm.reshape((__p,1)) - (EaHX @ w).reshape((__p,1))
+ _A = Ynpu - HXfm.reshape((__p, 1)) - (EaHX @ w).reshape((__p, 1))
_GardJo = - EaHX.T @ (RI * _A)
- _GradJb = (__m+1) * w.reshape((__m,1)) / (1 + 1/__m + w.T @ w)
+ _GradJb = (__m + 1) * w.reshape((__m, 1)) / (1 + 1 / __m + w.T @ w)
_GradJ = _GardJo + _GradJb
return numpy.ravel(_GradJ)
+
vw = scipy.optimize.fmin_cg(
f = CostFunction,
x0 = numpy.zeros(__m),
fprime = GradientOfCostFunction,
args = (),
disp = False,
- )
+ )
#
- Hto = EaHX.T @ (RI * EaHX).reshape((-1,__m))
- Htb = (__m+1) * \
- ( (1 + 1/__m + vw.T @ vw) * numpy.identity(__m) - 2 * vw @ vw.T ) \
- / (1 + 1/__m + vw.T @ vw)**2
+ Hto = EaHX.T @ (RI * EaHX).reshape((-1, __m))
+ Htb = (__m + 1) * \
+ ( (1 + 1 / __m + vw.T @ vw) * numpy.identity(__m) - 2 * vw @ vw.T ) \
+ / (1 + 1 / __m + vw.T @ vw)**2
Hta = Hto + Htb
#
Pta = numpy.linalg.inv( Hta )
- EWa = numpy.real(scipy.linalg.sqrtm((__m-1)*Pta)) # Imaginary part ~= 10^-18
+ EWa = numpy.real(scipy.linalg.sqrtm((__m - 1) * Pta))  # Imaginary part ~= 10^-18
#
- Xn = Xfm + EaX @ (vw.reshape((__m,1)) + EWa)
- #--------------------------
- elif VariantM == "FiniteSize16": # Boc2016 gauge
- HXfm = H((Xfm[:,None], Un)) # Possibly Hfm
+ Xn = Xfm + EaX @ (vw.reshape((__m, 1)) + EWa)
+ # --------------------------
+ elif VariantM == "FiniteSize16":  # Boc2016 gauge
+ HXfm = H((Xfm[:, None], Un))  # Possibly Hfm
+
def CostFunction(w):
- _A = Ynpu - HXfm.reshape((__p,1)) - (EaHX @ w).reshape((__p,1))
+ _A = Ynpu - HXfm.reshape((__p, 1)) - (EaHX @ w).reshape((__p, 1))
_Jo = 0.5 * _A.T @ (RI * _A)
- _Jb = 0.5 * (__m+1) * math.log(1 + 1/__m + w.T @ w / (__m-1))
+ _Jb = 0.5 * (__m + 1) * math.log(1 + 1 / __m + w.T @ w / (__m - 1))
_J = _Jo + _Jb
return vfloat(_J)
+
def GradientOfCostFunction(w):
- _A = Ynpu - HXfm.reshape((__p,1)) - (EaHX @ w).reshape((__p,1))
+ _A = Ynpu - HXfm.reshape((__p, 1)) - (EaHX @ w).reshape((__p, 1))
_GardJo = - EaHX.T @ (RI * _A)
- _GradJb = ((__m+1) / (__m-1)) * w.reshape((__m,1)) / (1 + 1/__m + w.T @ w / (__m-1))
+ _GradJb = ((__m + 1) / (__m - 1)) * w.reshape((__m, 1)) / (1 + 1 / __m + w.T @ w / (__m - 1))
_GradJ = _GardJo + _GradJb
return numpy.ravel(_GradJ)
+
vw = scipy.optimize.fmin_cg(
f = CostFunction,
x0 = numpy.zeros(__m),
fprime = GradientOfCostFunction,
args = (),
disp = False,
- )
+ )
#
- Hto = EaHX.T @ (RI * EaHX).reshape((-1,__m))
- Htb = ((__m+1) / (__m-1)) * \
- ( (1 + 1/__m + vw.T @ vw / (__m-1)) * numpy.identity(__m) - 2 * vw @ vw.T / (__m-1) ) \
- / (1 + 1/__m + vw.T @ vw / (__m-1))**2
+ Hto = EaHX.T @ (RI * EaHX).reshape((-1, __m))
+ Htb = ((__m + 1) / (__m - 1)) * \
+ ( (1 + 1 / __m + vw.T @ vw / (__m - 1)) * numpy.identity(__m) - 2 * vw @ vw.T / (__m - 1) ) \
+ / (1 + 1 / __m + vw.T @ vw / (__m - 1))**2
Hta = Hto + Htb
#
Pta = numpy.linalg.inv( Hta )
- EWa = numpy.real(scipy.linalg.sqrtm((__m-1)*Pta)) # Imaginary part ~= 10^-18
+ EWa = numpy.real(scipy.linalg.sqrtm((__m - 1) * Pta))  # Imaginary part ~= 10^-18
#
- Xn = Xfm + EaX @ (vw[:,None] + EWa)
- #--------------------------
+ Xn = Xfm + EaX @ (vw[:, None] + EWa)
+ # --------------------------
else:
raise ValueError("VariantM has to be chosen in the authorized methods list.")
#
if selfA._parameters["InflationType"] == "MultiplicativeOnAnalysisAnomalies":
- Xn = CovarianceInflation( Xn,
+ Xn = CovarianceInflation(
+ Xn,
selfA._parameters["InflationType"],
selfA._parameters["InflationFactor"],
- )
+ )
#
if Hybrid == "E3DVAR":
Xn = Apply3DVarRecentringOnEnsemble(Xn, EMX, Ynpu, HO, R, B, selfA._parameters)
#
Xa = EnsembleMean( Xn )
- #--------------------------
+ # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("seed", numpy.random.get_state())
- #--------------------------
+ # --------------------------
#
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("APosterioriCovariance") \
- or selfA._toStore("InnovationAtCurrentAnalysis") \
- or selfA._toStore("SimulatedObservationAtCurrentAnalysis") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- _HXa = numpy.ravel( H((Xa, Un)) ).reshape((-1,1))
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("APosterioriCovariance") \
+ or selfA._toStore("InnovationAtCurrentAnalysis") \
+ or selfA._toStore("SimulatedObservationAtCurrentAnalysis") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ _HXa = numpy.ravel( H((Xa, None)) ).reshape((-1, 1))
_Innovation = Ynpu - _HXa
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
# ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( EMX )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( - HX_predicted + Ynpu )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HX_predicted )
# ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( EnsembleErrorCovariance(Xn) )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
selfA._parameters["StoreInternalVariables"] = True
#
# Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
# Precomputation of the inverses of B and R
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
- (__p > __n):
- if isinstance(B,numpy.ndarray):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
+ (__p > __n):
+ if isinstance(B, numpy.ndarray):
BI = numpy.linalg.inv(B)
else:
BI = B.getI()
#
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
Xn = Xb
Pn = B
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(__n) )
else:
selfA.StoredVariables["APosterioriCovariance"].store( B )
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
Pn = selfA._getInternalState("Pn")
- if hasattr(Pn,"asfullmatrix"):
+ if hasattr(Pn, "asfullmatrix"):
Pn = Pn.asfullmatrix(Xn.size)
#
if selfA._parameters["EstimationOf"] == "Parameters":
XaMin = Xn
previousJMinimum = numpy.finfo(float).max
#
- for step in range(duration-1):
+ for step in range(duration - 1):
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
- if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
+ if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
Mt = EM["Tangent"].asMatrix(Xn)
- Mt = Mt.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Mt = Mt.reshape(Xn.size, Xn.size) # ADAO & check shape
Ma = EM["Adjoint"].asMatrix(Xn)
- Ma = Ma.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Ma = Ma.reshape(Xn.size, Xn.size) # ADAO & check shape
M = EM["Direct"].appliedControledFormTo
- Xn_predicted = numpy.ravel( M( (Xn, Un) ) ).reshape((__n,1))
- if CM is not None and "Tangent" in CM and Un is not None: # Warning: if Cm is also included in M, it is applied twice!
+ Xn_predicted = numpy.ravel( M( (Xn, Un) ) ).reshape((__n, 1))
+ if CM is not None and "Tangent" in CM and Un is not None:  # Warning: if Cm is also included in M, it is applied twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn_predicted = Xn_predicted + Cm @ Un
Pn_predicted = Q + Mt @ (Pn @ Ma)
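# Extended Kalman covariance forecast: P_f = M P_a M^T + Q, with Mt and
# Ma the tangent and adjoint of the evolution model at Xn.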
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
# --- > By principle, M = Id, Q = 0
Xn_predicted = Xn
Pn_predicted = Pn
#
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
Ht = HO["Tangent"].asMatrix(Xn_predicted)
- Ht = Ht.reshape(Ynpu.size,Xn.size) # ADAO & check shape
+ Ht = Ht.reshape(Ynpu.size, Xn.size) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xn_predicted)
- Ha = Ha.reshape(Xn.size,Ynpu.size) # ADAO & check shape
+ Ha = Ha.reshape(Xn.size, Ynpu.size) # ADAO & check shape
H = HO["Direct"].appliedControledFormTo
#
if selfA._parameters["EstimationOf"] == "State":
- HX_predicted = numpy.ravel( H( (Xn_predicted, None) ) ).reshape((__p,1))
+ HX_predicted = numpy.ravel( H( (Xn_predicted, None) ) ).reshape((__p, 1))
_Innovation = Ynpu - HX_predicted
elif selfA._parameters["EstimationOf"] == "Parameters":
- HX_predicted = numpy.ravel( H( (Xn_predicted, Un) ) ).reshape((__p,1))
+ HX_predicted = numpy.ravel( H( (Xn_predicted, Un) ) ).reshape((__p, 1))
_Innovation = Ynpu - HX_predicted
- if CM is not None and "Tangent" in CM and Un is not None: # Warning: if Cm is also included in H, it is applied twice!
+ if CM is not None and "Tangent" in CM and Un is not None:  # Warning: if Cm is also included in H, it is applied twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
_Innovation = _Innovation - Cm @ Un
#
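# The gain is computed in observation space when p <= n (primal form),
# and through the information (inverse covariance) form otherwise.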
if Ynpu.size <= Xn.size:
_HNHt = numpy.dot(Ht, Pn_predicted @ Ha)
_A = R + _HNHt
- _u = numpy.linalg.solve( _A , _Innovation )
- Xn = Xn_predicted + (Pn_predicted @ (Ha @ _u)).reshape((-1,1))
+ _u = numpy.linalg.solve( _A, _Innovation )
+ Xn = Xn_predicted + (Pn_predicted @ (Ha @ _u)).reshape((-1, 1))
Kn = Pn_predicted @ (Ha @ numpy.linalg.inv(_A))
else:
_HtRH = numpy.dot(Ha, RI @ Ht)
_A = numpy.linalg.inv(Pn_predicted) + _HtRH
- _u = numpy.linalg.solve( _A , numpy.dot(Ha, RI @ _Innovation) )
- Xn = Xn_predicted + _u.reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.dot(Ha, RI @ _Innovation) )
+ Xn = Xn_predicted + _u.reshape((-1, 1))
Kn = numpy.linalg.inv(_A) @ (Ha @ RI.asfullmatrix(Ynpu.size))
#
Pn = Pn_predicted - Kn @ (Ht @ Pn_predicted)
- Pn = (Pn + Pn.T) * 0.5 # Symmetry
- Pn = Pn + mpr*numpy.trace( Pn ) * numpy.identity(Xn.size) # Positivity
+ Pn = (Pn + Pn.T) * 0.5  # Symmetry
+ Pn = Pn + mpr * numpy.trace( Pn ) * numpy.identity(Xn.size)  # Positivity
#
- Xa = Xn # Pointers
- #--------------------------
+ Xa = Xn  # Pointers
+ # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("Pn", Pn)
- #--------------------------
+ # --------------------------
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
# ---> with analysis
selfA.StoredVariables["Analysis"].store( Xa )
if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
- selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( H((Xa, Un)) )
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( H((Xa, None)) )
if selfA._toStore("InnovationAtCurrentAnalysis"):
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
# ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( Xn_predicted )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HX_predicted )
# ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T @ (BI @ (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T @ (RI @ _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( Pn )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
selfA._parameters["StoreInternalVariables"] = True
#
# Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
# Precomputation of the inverses of B and R
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CostFunctionJ") or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
- (__p > __n):
- if isinstance(B,numpy.ndarray):
+ selfA._toStore("CostFunctionJ" ) or selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJb") or selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJo") or selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("CurrentOptimum") or selfA._toStore("APosterioriCovariance") or \
+ (__p > __n):
+ if isinstance(B, numpy.ndarray):
BI = numpy.linalg.inv(B)
else:
BI = B.getI()
RI = R.getI()
if __p > __n:
- QI = Q.getI() # Non-null Q
+ QI = Q.getI()  # Non-null Q
#
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
Xn = Xb
Pn = B
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(__n) )
else:
selfA.StoredVariables["APosterioriCovariance"].store( B )
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
Pn = selfA._getInternalState("Pn")
- if hasattr(Pn,"asfullmatrix"):
+ if hasattr(Pn, "asfullmatrix"):
Pn = Pn.asfullmatrix(Xn.size)
#
if selfA._parameters["EstimationOf"] == "Parameters":
XaMin = Xn
previousJMinimum = numpy.finfo(float).max
#
- for step in range(duration-1):
+ for step in range(duration - 1):
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
- if selfA._parameters["EstimationOf"] == "State": # Forecast
+ if selfA._parameters["EstimationOf"] == "State": # Forecast
Mt = EM["Tangent"].asMatrix(Xn)
- Mt = Mt.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Mt = Mt.reshape(Xn.size, Xn.size) # ADAO & check shape
Ma = EM["Adjoint"].asMatrix(Xn)
- Ma = Ma.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Ma = Ma.reshape(Xn.size, Xn.size) # ADAO & check shape
M = EM["Direct"].appliedControledFormTo
- Xn_predicted = numpy.ravel( M( (Xn, Un) ) ).reshape((__n,1))
- if CM is not None and "Tangent" in CM and Un is not None: # Warning: if Cm is also included in M, it is applied twice!
+ Xn_predicted = numpy.ravel( M( (Xn, Un) ) ).reshape((__n, 1))
+ if CM is not None and "Tangent" in CM and Un is not None:  # Warning: if Cm is also included in M, it is applied twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn_predicted = Xn_predicted + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
# --- > By principle, M = Id, Q = 0
Mt = Ma = 1.
Q = QI = 0.
Xn_predicted = Xn
#
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
Ht = HO["Tangent"].asMatrix(Xn)
- Ht = Ht.reshape(Ynpu.size,Xn.size) # ADAO & check shape
+ Ht = Ht.reshape(Ynpu.size, Xn.size) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xn)
- Ha = Ha.reshape(Xn.size,Ynpu.size) # ADAO & check shape
+ Ha = Ha.reshape(Xn.size, Ynpu.size) # ADAO & check shape
H = HO["Direct"].appliedControledFormTo
#
if selfA._parameters["EstimationOf"] == "State":
- HX_predicted = numpy.ravel( H( (Xn_predicted, None) ) ).reshape((__p,1))
+ HX_predicted = numpy.ravel( H( (Xn_predicted, None) ) ).reshape((__p, 1))
_Innovation = Ynpu - HX_predicted
elif selfA._parameters["EstimationOf"] == "Parameters":
- HX_predicted = numpy.ravel( H( (Xn_predicted, Un) ) ).reshape((__p,1))
+ HX_predicted = numpy.ravel( H( (Xn_predicted, Un) ) ).reshape((__p, 1))
_Innovation = Ynpu - HX_predicted
- if CM is not None and "Tangent" in CM and Un is not None: # Warning: if Cm is also included in H, it is applied twice!
+ if CM is not None and "Tangent" in CM and Un is not None:  # Warning: if Cm is also included in H, it is applied twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
_Innovation = _Innovation - Cm @ Un
#
Htstar = Ht @ Mt
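# The update uses the composed operator Htstar = Ht @ Mt: the current
# state is first corrected into Xs, then propagated by the model.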
if Ynpu.size <= Xn.size:
_HNHt = numpy.dot(Ht, Q @ Ha) + numpy.dot(Htstar, Pn @ Hastar)
_A = R + _HNHt
- _u = numpy.linalg.solve( _A , _Innovation )
- Xs = Xn + (Pn @ (Hastar @ _u)).reshape((-1,1))
+ _u = numpy.linalg.solve( _A, _Innovation )
+ Xs = Xn + (Pn @ (Hastar @ _u)).reshape((-1, 1))
Ks = Pn @ (Hastar @ numpy.linalg.inv(_A))
else:
_HtRH = numpy.dot(Ha, QI @ Ht) + numpy.dot(Hastar, RI @ Htstar)
_A = numpy.linalg.inv(Pn) + _HtRH
- _u = numpy.linalg.solve( _A , numpy.dot(Hastar, RI @ _Innovation) )
- Xs = Xn + _u.reshape((-1,1))
+ _u = numpy.linalg.solve( _A, numpy.dot(Hastar, RI @ _Innovation) )
+ Xs = Xn + _u.reshape((-1, 1))
Ks = numpy.linalg.inv(_A) @ (Hastar @ RI.asfullmatrix(Ynpu.size))
#
Pn_predicted = Pn - Ks @ (Htstar @ Pn)
#
if selfA._parameters["EstimationOf"] == "State":
Mt = EM["Tangent"].asMatrix(Xs)
- Mt = Mt.reshape(Xs.size,Xs.size) # ADAO & check shape
+ Mt = Mt.reshape(Xs.size, Xs.size) # ADAO & check shape
Ma = EM["Adjoint"].asMatrix(Xs)
- Ma = Ma.reshape(Xs.size,Xs.size) # ADAO & check shape
+ Ma = Ma.reshape(Xs.size, Xs.size) # ADAO & check shape
M = EM["Direct"].appliedControledFormTo
- Xn = numpy.ravel( M( (Xs, Un) ) ).reshape((__n,1))
- if CM is not None and "Tangent" in CM and Un is not None: # Warning: if Cm is also included in M, it is applied twice!
+ Xn = numpy.ravel( M( (Xs, Un) ) ).reshape((__n, 1))
+ if CM is not None and "Tangent" in CM and Un is not None:  # Warning: if Cm is also included in M, it is applied twice!
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn = Xn + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
# --- > By principle, M = Id, Q = 0
Mt = Ma = 1.
Xn = Xs
#
- Pn = Mt @ (Pn_predicted @ Ma)
- Pn = (Pn + Pn.T) * 0.5 # Symmetry
- Pn = Pn + mpr*numpy.trace( Pn ) * numpy.identity(Xn.size) # Positivity
+ Pn = Mt @ (Pn_predicted @ Ma)
+ Pn = (Pn + Pn.T) * 0.5  # Symmetry
+ Pn = Pn + mpr * numpy.trace( Pn ) * numpy.identity(Xn.size)  # Positivity
#
- Xa = Xn # Pointers
- #--------------------------
+ Xa = Xn  # Pointers
+ # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("Pn", Pn)
- #--------------------------
+ # --------------------------
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
# ---> with analysis
selfA.StoredVariables["Analysis"].store( Xa )
if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
- selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( H((Xa, Un)) )
+ selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( H((Xa, None)) )
if selfA._toStore("InnovationAtCurrentAnalysis"):
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
# ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( Xn_predicted )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HX_predicted )
# ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T @ (BI @ (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T @ (RI @ _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( Pn )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
__author__ = "Jean-Philippe ARGAUD"
import math, numpy, scipy, scipy.optimize, scipy.version
-from daCore.NumericObjects import EnsembleOfBackgroundPerturbations
-from daCore.NumericObjects import EnsembleOfAnomalies
from daCore.NumericObjects import CovarianceInflation
-from daCore.NumericObjects import EnsembleMean
from daCore.NumericObjects import EnsembleErrorCovariance
+from daCore.NumericObjects import EnsembleMean
+from daCore.NumericObjects import EnsembleOfAnomalies
+from daCore.NumericObjects import EnsembleOfBackgroundPerturbations
from daCore.PlatformInfo import PlatformInfo, vfloat
mpr = PlatformInfo().MachinePrecision()
mfp = PlatformInfo().MaximumPrecision()
# ==============================================================================
def ienkf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="IEnKF12",
- BnotT=False, _epsilon=1.e-3, _e=1.e-7, _jmax=15000):
+ BnotT=False, _epsilon=1.e-3, _e=1.e-7, _jmax=15000):
"""
Iterative Ensemble Kalman Filter
"""
if selfA._parameters["EstimationOf"] == "State":
M = EM["Direct"].appliedControledFormTo
#
- if CM is not None and "Tangent" in CM and U is not None:
- Cm = CM["Tangent"].asMatrix(Xb)
- else:
- Cm = None
- #
# Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
# Precomputation of the inverses of B and R
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
BI = B.getI()
RI = R.getI()
#
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
previousJMinimum = numpy.finfo(float).max
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
- if hasattr(B,"asfullmatrix"): Pn = B.asfullmatrix(__n)
- else: Pn = B
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
+ if hasattr(B, "asfullmatrix"):
+ Pn = B.asfullmatrix(__n)
+ else:
+ Pn = B
Xn = EnsembleOfBackgroundPerturbations( Xb, Pn, __m )
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
- selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(__n) )
- else:
- selfA.StoredVariables["APosterioriCovariance"].store( B )
+ selfA.StoredVariables["APosterioriCovariance"].store( Pn )
selfA._setInternalState("seed", numpy.random.get_state())
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
#
- for step in range(duration-1):
+ for step in range(duration - 1):
numpy.random.set_state(selfA._getInternalState("seed"))
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
if selfA._parameters["InflationType"] == "MultiplicativeOnBackgroundAnomalies":
- Xn = CovarianceInflation( Xn,
+ Xn = CovarianceInflation(
+ Xn,
selfA._parameters["InflationType"],
selfA._parameters["InflationFactor"],
- )
+ )
#
- #--------------------------
+ # --------------------------
if VariantM == "IEnKF12":
Xfm = numpy.ravel(Xn.mean(axis=1, dtype=mfp).astype('float'))
- EaX = EnsembleOfAnomalies( Xn ) / math.sqrt(__m-1)
+ EaX = EnsembleOfAnomalies( Xn ) / math.sqrt(__m - 1)
__j = 0
Deltaw = 1
if not BnotT:
Ta = numpy.identity(__m)
vw = numpy.zeros(__m)
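# Gauss-Newton iterations on the ensemble-space weights vw, stopped when
# the increment Deltaw becomes small or _jmax iterations are reached.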
while numpy.linalg.norm(Deltaw) >= _e and __j <= _jmax:
- vx1 = (Xfm + EaX @ vw).reshape((__n,1))
+ vx1 = (Xfm + EaX @ vw).reshape((__n, 1))
#
if BnotT:
E1 = vx1 + _epsilon * EaX
else:
- E1 = vx1 + math.sqrt(__m-1) * EaX @ Ta
+ E1 = vx1 + math.sqrt(__m - 1) * EaX @ Ta
#
- if selfA._parameters["EstimationOf"] == "State": # Forecast + Q
- E2 = M( [(E1[:,i,numpy.newaxis], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
+ if selfA._parameters["EstimationOf"] == "State": # Forecast + Q
+ E2 = M( [(E1[:, i, numpy.newaxis], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
elif selfA._parameters["EstimationOf"] == "Parameters":
# --- > By principle, M = Id
E2 = Xn
- vx2 = E2.mean(axis=1, dtype=mfp).astype('float').reshape((__n,1))
- vy1 = H((vx2, Un)).reshape((__p,1))
+ vx2 = E2.mean(axis=1, dtype=mfp).astype('float').reshape((__n, 1))
+ vy1 = H((vx2, Un)).reshape((__p, 1))
#
- HE2 = H( [(E2[:,i,numpy.newaxis], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
- vy2 = HE2.mean(axis=1, dtype=mfp).astype('float').reshape((__p,1))
+ HE2 = H( [(E2[:, i, numpy.newaxis], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
+ vy2 = HE2.mean(axis=1, dtype=mfp).astype('float').reshape((__p, 1))
#
if BnotT:
EaY = (HE2 - vy2) / _epsilon
else:
- EaY = ( (HE2 - vy2) @ numpy.linalg.inv(Ta) ) / math.sqrt(__m-1)
+ EaY = ( (HE2 - vy2) @ numpy.linalg.inv(Ta) ) / math.sqrt(__m - 1)
#
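# Newton step: GradJ is the gradient and mH the Gauss-Newton
# approximation of the Hessian of the weight-space cost function.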
- GradJ = numpy.ravel(vw[:,None] - EaY.transpose() @ (RI * ( Ynpu - vy1 )))
- mH = numpy.identity(__m) + EaY.transpose() @ (RI * EaY).reshape((-1,__m))
- Deltaw = - numpy.linalg.solve(mH,GradJ)
+ GradJ = numpy.ravel(vw[:, None] - EaY.transpose() @ (RI * ( Ynpu - vy1 )))
+ mH = numpy.identity(__m) + EaY.transpose() @ (RI * EaY).reshape((-1, __m))
+ Deltaw = - numpy.linalg.solve(mH, GradJ)
#
vw = vw + Deltaw
#
#
if BnotT:
Ta = numpy.real(scipy.linalg.sqrtm(numpy.linalg.inv( mH )))
- A2 = math.sqrt(__m-1) * A2 @ Ta / _epsilon
+ A2 = math.sqrt(__m - 1) * A2 @ Ta / _epsilon
#
Xn = vx2 + A2
- #--------------------------
+ # --------------------------
else:
raise ValueError("VariantM has to be chosen in the authorized methods list.")
#
if selfA._parameters["InflationType"] == "MultiplicativeOnAnalysisAnomalies":
- Xn = CovarianceInflation( Xn,
+ Xn = CovarianceInflation(
+ Xn,
selfA._parameters["InflationType"],
selfA._parameters["InflationFactor"],
- )
+ )
#
Xa = EnsembleMean( Xn )
- #--------------------------
+ # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("seed", numpy.random.get_state())
- #--------------------------
+ # --------------------------
#
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("APosterioriCovariance") \
- or selfA._toStore("InnovationAtCurrentAnalysis") \
- or selfA._toStore("SimulatedObservationAtCurrentAnalysis") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- _HXa = numpy.ravel( H((Xa, Un)) ).reshape((-1,1))
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("APosterioriCovariance") \
+ or selfA._toStore("InnovationAtCurrentAnalysis") \
+ or selfA._toStore("SimulatedObservationAtCurrentAnalysis") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ _HXa = numpy.ravel( H((Xa, Un)) ).reshape((-1, 1))
_Innovation = Ynpu - _HXa
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
# ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( E2 )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( - HE2 + Ynpu )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HE2 )
# ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( EnsembleErrorCovariance(Xn) )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
HXb = numpy.asarray(Hm( Xb, HO["AppliedInX"]["HXb"] ))
else:
HXb = numpy.asarray(Hm( Xb ))
- HXb = HXb.reshape((-1,1))
+ HXb = HXb.reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different; they have to be identical."%(Y.size,HXb.size))
+ raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different; they have to be identical."%(Y.size, HXb.size)) # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different; they have to be identical."%(Y.shape,HXb.shape))
+ raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different; they have to be identical."%(Y.shape, HXb.shape)) # noqa: E501
#
if selfA._toStore("JacobianMatrixAtBackground"):
HtMb = HO["Tangent"].asMatrix(ValueForMethodForm = Xb)
- HtMb = HtMb.reshape(Y.size,Xb.size) # ADAO & check shape
+ HtMb = HtMb.reshape(Y.size, Xb.size) # ADAO & check shape
selfA.StoredVariables["JacobianMatrixAtBackground"].store( HtMb )
#
BI = B.getI()
# Outer Loop
# ----------
iOuter = 0
- J = 1./mpr
- DeltaJ = 1./mpr
- Xr = numpy.asarray(selfA._parameters["InitializationPoint"]).reshape((-1,1))
- while abs(DeltaJ) >= selfA._parameters["CostDecrementTolerance"] and iOuter <= selfA._parameters["MaximumNumberOfIterations"]:
+ J = 1. / mpr
+ DeltaJ = 1. / mpr
+ Xr = numpy.asarray(selfA._parameters["InitializationPoint"]).reshape((-1, 1))
+ while abs(DeltaJ) >= selfA._parameters["CostDecrementTolerance"] and iOuter <= selfA._parameters["MaximumNumberOfIterations"]: # noqa: E501
#
# Inner Loop
# ----------
Ht = HO["Tangent"].asMatrix(Xr)
- Ht = Ht.reshape(Y.size,Xr.size) # ADAO & check shape
+ Ht = Ht.reshape(Y.size, Xr.size) # ADAO & check shape
#
        # Definition of the cost function
        # -------------------------------
+
def CostFunction(dx):
- _dX = numpy.asarray(dx).reshape((-1,1))
+ _dX = numpy.asarray(dx).reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CurrentState") or \
- selfA._toStore("CurrentOptimum"):
+ selfA._toStore("CurrentState") or \
+ selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentState"].store( Xb + _dX )
- _HdX = (Ht @ _dX).reshape((-1,1))
+ _HdX = (Ht @ _dX).reshape((-1, 1))
_dInnovation = Innovation - _HdX
if selfA._toStore("SimulatedObservationAtCurrentState") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HXb + _HdX )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _dInnovation )
selfA.StoredVariables["CostFunctionJo"].store( Jo )
selfA.StoredVariables["CostFunctionJ" ].store( J )
if selfA._toStore("IndexOfOptimum") or \
- selfA._toStore("CurrentOptimum") or \
- selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("CurrentOptimum") or \
+ selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["CurrentState"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
return J
- #
+
def GradientOfCostFunction(dx):
_dX = numpy.ravel( dx )
- _HdX = (Ht @ _dX).reshape((-1,1))
+ _HdX = (Ht @ _dX).reshape((-1, 1))
_dInnovation = Innovation - _HdX
GradJb = BI @ _dX
GradJo = - Ht.T @ (RI * _dInnovation)
#
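            # Illustrative note (not from the original code): this matches the
            # incremental cost evaluated in CostFunction above,
            #   J(dx) = 0.5 * dx.T @ BI @ dx
            #         + 0.5 * (Innovation - Ht @ dx).T @ RI @ (Innovation - Ht @ dx),
            # so the gradient returned to the minimizer is
            #   GradJ = numpy.ravel(GradJb) + numpy.ravel(GradJo),
            # as in the other GradientOfCostFunction variants of this changeset.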
if selfA._parameters["Minimizer"] == "LBFGSB":
# Minimum, J_optimal, Informations = scipy.optimize.fmin_l_bfgs_b(
- if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
+ if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
import daAlgorithms.Atoms.lbfgsb14hlt as optimiseur
elif vt("1.5.0") <= vt(scipy.version.version) <= vt("1.7.99"):
import daAlgorithms.Atoms.lbfgsb17hlt as optimiseur
fprime = GradientOfCostFunction,
args = (),
bounds = RecentredBounds(selfA._parameters["Bounds"], Xb),
- maxfun = selfA._parameters["MaximumNumberOfIterations"]-1,
- factr = selfA._parameters["CostDecrementTolerance"]*1.e14,
+ maxfun = selfA._parameters["MaximumNumberOfIterations"] - 1,
+ factr = selfA._parameters["CostDecrementTolerance"] * 1.e14,
pgtol = selfA._parameters["ProjectedGradientTolerance"],
iprint = selfA._parameters["optiprint"],
- )
+ )
# nfeval = Informations['funcalls']
# rc = Informations['warnflag']
elif selfA._parameters["Minimizer"] == "TNC":
pgtol = selfA._parameters["ProjectedGradientTolerance"],
ftol = selfA._parameters["CostDecrementTolerance"],
messages = selfA._parameters["optmessages"],
- )
+ )
elif selfA._parameters["Minimizer"] == "CG":
Minimum, fopt, nfeval, grad_calls, rc = scipy.optimize.fmin_cg(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "NCG":
Minimum, fopt, nfeval, grad_calls, hcalls, rc = scipy.optimize.fmin_ncg(
f = CostFunction,
avextol = selfA._parameters["CostDecrementTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "BFGS":
Minimum, fopt, gopt, Hopt, nfeval, grad_calls, rc = scipy.optimize.fmin_bfgs(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
else:
raise ValueError("Error in minimizer name: %s is unkown"%selfA._parameters["Minimizer"])
#
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
Minimum = selfA.StoredVariables["CurrentState"][IndexMin]
else:
- Minimum = Xb + Minimum.reshape((-1,1))
+ Minimum = Xb + Minimum.reshape((-1, 1))
#
Xr = Minimum
DeltaJ = selfA.StoredVariables["CostFunctionJ" ][-1] - J
iOuter = selfA.StoredVariables["CurrentIterationNumber"][-1]
#
Xa = Xr
- if __storeState: selfA._setInternalState("Xn", Xa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
#
if selfA._toStore("OMA") or \
- selfA._toStore("InnovationAtCurrentAnalysis") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("InnovationAtCurrentAnalysis") or \
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("SimulatedObservationAtOptimum"):
if selfA._toStore("SimulatedObservationAtCurrentState"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin]
elif selfA._toStore("SimulatedObservationAtCurrentOptimum"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"][-1]
else:
HXa = Hm( Xa )
- oma = Y - HXa.reshape((-1,1))
+ oma = Y - numpy.asarray(HXa).reshape((-1, 1))
#
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("JacobianMatrixAtOptimum") or \
- selfA._toStore("KalmanGainAtOptimum"):
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("JacobianMatrixAtOptimum") or \
+ selfA._toStore("KalmanGainAtOptimum"):
HtM = HO["Tangent"].asMatrix(ValueForMethodForm = Xa)
- HtM = HtM.reshape(Y.size,Xa.size) # ADAO & check shape
+ HtM = HtM.reshape(Y.size, Xa.size) # ADAO & check shape
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("KalmanGainAtOptimum"):
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("KalmanGainAtOptimum"):
HaM = HO["Adjoint"].asMatrix(ValueForMethodForm = Xa)
- HaM = HaM.reshape(Xa.size,Y.size) # ADAO & check shape
+ HaM = HaM.reshape(Xa.size, Y.size) # ADAO & check shape
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles"):
+ selfA._toStore("SimulationQuantiles"):
A = HessienneEstimation(selfA, Xa.size, HaM, HtM, BI, RI)
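        # Illustrative note (not from the original code): A approximates the
        # a posteriori covariance as the inverse Hessian of J at the analysis,
        # i.e. A ~ (B^-1 + H^T R^-1 H)^-1.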
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( A )
if selfA._toStore("JacobianMatrixAtOptimum"):
selfA.StoredVariables["JacobianMatrixAtOptimum"].store( HtM )
if selfA._toStore("KalmanGainAtOptimum"):
- if (Y.size <= Xb.size): KG = B * HaM * (R + numpy.dot(HtM, B * HaM)).I
- elif (Y.size > Xb.size): KG = (BI + numpy.dot(HaM, RI * HtM)).I * HaM * RI
+ if (Y.size <= Xb.size):
+ KG = B * HaM * (R + numpy.dot(HtM, B * HaM)).I
+ elif (Y.size > Xb.size):
+ KG = (BI + numpy.dot(HaM, RI * HtM)).I * HaM * RI
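        # Illustrative note (not from the original code): the two branches are
        # algebraically equivalent Kalman gain forms (Sherman-Morrison-Woodbury);
        # the size test selects the smaller inversion, in observation space or
        # in state space.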
selfA.StoredVariables["KalmanGainAtOptimum"].store( KG )
#
    # Additional computations and/or storage
    # --------------------------------------
if selfA._toStore("Innovation") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("MahalanobisConsistency") or \
- selfA._toStore("OMB"):
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("MahalanobisConsistency") or \
+ selfA._toStore("OMB"):
Innovation = Y - HXb
if selfA._toStore("Innovation"):
selfA.StoredVariables["Innovation"].store( Innovation )
TraceR = R.trace(Y.size)
selfA.StoredVariables["SigmaObs2"].store( vfloat( (Innovation.T @ oma) ) / TraceR )
if selfA._toStore("MahalanobisConsistency"):
- selfA.StoredVariables["MahalanobisConsistency"].store( float( 2.*MinJ/Innovation.size ) )
+ selfA.StoredVariables["MahalanobisConsistency"].store( float( 2. * MinJ / Innovation.size ) )
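        # Illustrative note (not from the original code): 2*Jmin/p is the
        # classical chi-square consistency diagnostic; values close to 1
        # indicate B and R statistics consistent with the innovation.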
if selfA._toStore("SimulationQuantiles"):
QuantilesEstimations(selfA, A, Xa, HXa, Hm, HtM)
if selfA._toStore("SimulatedObservationAtBackground"):
# Modification of version 1.11.0
+# flake8: noqa
"""
Functions
---------
# Modification of version 1.12.0
+# flake8: noqa
"""
Functions
---------
# Modification of version 1.4.1
+# flake8: noqa
"""
Functions
---------
# Modification of version 1.7.1
+# flake8: noqa
+
"""
Functions
---------
# Modification of version 1.8.1
+# flake8: noqa
"""
Functions
---------
# Modification of versions 1.9.1 and 1.10.1
+# flake8: noqa
"""
Functions
---------
mfp = PlatformInfo().MaximumPrecision()
# ==============================================================================
-def mlef(selfA, Xb, Y, U, HO, EM, CM, R, B, Q,
- VariantM="MLEF13", BnotT=False, _epsilon=1.e-3, _e=1.e-7, _jmax=15000,
- Hybrid=None,
- ):
+def mlef( selfA, Xb, Y, U, HO, EM, CM, R, B, Q,
+ VariantM="MLEF13", BnotT=False, _epsilon=1.e-3, _e=1.e-7, _jmax=15000,
+ Hybrid=None,
+ ):
"""
Maximum Likelihood Ensemble Filter (MLEF)
"""
Cm = None
#
    # Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
    # Precomputation of the inverses of B and R
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
BI = B.getI()
RI = R.getI()
#
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
previousJMinimum = numpy.finfo(float).max
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
Xn = EnsembleOfBackgroundPerturbations( Xb, None, __m )
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(__n) )
else:
selfA.StoredVariables["APosterioriCovariance"].store( B )
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
#
- for step in range(duration-1):
+ for step in range(duration - 1):
numpy.random.set_state(selfA._getInternalState("seed"))
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
if selfA._parameters["InflationType"] == "MultiplicativeOnBackgroundAnomalies":
- Xn = CovarianceInflation( Xn,
+ Xn = CovarianceInflation(
+ Xn,
selfA._parameters["InflationType"],
selfA._parameters["InflationFactor"],
- )
+ )
#
- if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
- EMX = M( [(Xn[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
+ if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
+ EMX = M( [(Xn[:, i], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
Xn_predicted = EnsemblePerturbationWithGivenCovariance( EMX, Q )
-        if Cm is not None and Un is not None: # Warning: if Cm is also in M, it is duplicated!
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+        if Cm is not None and Un is not None:  # Warning: if Cm is also in M, it is duplicated!
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn_predicted = Xn_predicted + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
            # --- > By principle, M = Id, Q = 0
Xn_predicted = EMX = Xn
#
- #--------------------------
+ # --------------------------
if VariantM == "MLEF13":
Xfm = numpy.ravel(Xn_predicted.mean(axis=1, dtype=mfp).astype('float'))
- EaX = EnsembleOfAnomalies( Xn_predicted, Xfm, 1./math.sqrt(__m-1) )
+ EaX = EnsembleOfAnomalies( Xn_predicted, Xfm, 1. / math.sqrt(__m - 1) )
Ua = numpy.identity(__m)
__j = 0
Deltaw = 1
Ta = numpy.identity(__m)
vw = numpy.zeros(__m)
while numpy.linalg.norm(Deltaw) >= _e and __j <= _jmax:
- vx1 = (Xfm + EaX @ vw).reshape((__n,1))
+ vx1 = (Xfm + EaX @ vw).reshape((__n, 1))
#
if BnotT:
E1 = vx1 + _epsilon * EaX
else:
- E1 = vx1 + math.sqrt(__m-1) * EaX @ Ta
+ E1 = vx1 + math.sqrt(__m - 1) * EaX @ Ta
#
- HE2 = H( [(E1[:,i,numpy.newaxis], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
- vy2 = HE2.mean(axis=1, dtype=mfp).astype('float').reshape((__p,1))
+ HE2 = H( [(E1[:, i, numpy.newaxis], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
+ vy2 = HE2.mean(axis=1, dtype=mfp).astype('float').reshape((__p, 1))
#
if BnotT:
EaY = (HE2 - vy2) / _epsilon
else:
- EaY = ( (HE2 - vy2) @ numpy.linalg.inv(Ta) ) / math.sqrt(__m-1)
+ EaY = ( (HE2 - vy2) @ numpy.linalg.inv(Ta) ) / math.sqrt(__m - 1)
#
- GradJ = numpy.ravel(vw[:,None] - EaY.transpose() @ (RI * ( Ynpu - vy2 )))
- mH = numpy.identity(__m) + EaY.transpose() @ (RI * EaY).reshape((-1,__m))
- Deltaw = - numpy.linalg.solve(mH,GradJ)
+ GradJ = numpy.ravel(vw[:, None] - EaY.transpose() @ (RI * ( Ynpu - vy2 )))
+ mH = numpy.identity(__m) + EaY.transpose() @ (RI * EaY).reshape((-1, __m))
+ Deltaw = - numpy.linalg.solve(mH, GradJ)
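                # Illustrative note (not from the original code): a Gauss-Newton
                # step in ensemble-weight space, solving
                #   (I_m + EaY.T @ RI @ EaY) @ Deltaw = -GradJ.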
#
vw = vw + Deltaw
#
if BnotT:
Ta = numpy.real(scipy.linalg.sqrtm(numpy.linalg.inv( mH )))
#
- Xn = vx1 + math.sqrt(__m-1) * EaX @ Ta @ Ua
- #--------------------------
+ Xn = vx1 + math.sqrt(__m - 1) * EaX @ Ta @ Ua
+ # --------------------------
else:
raise ValueError("VariantM has to be chosen in the authorized methods list.")
#
if selfA._parameters["InflationType"] == "MultiplicativeOnAnalysisAnomalies":
- Xn = CovarianceInflation( Xn,
+ Xn = CovarianceInflation(
+ Xn,
selfA._parameters["InflationType"],
selfA._parameters["InflationFactor"],
- )
+ )
#
if Hybrid == "E3DVAR":
Xn = Apply3DVarRecentringOnEnsemble(Xn, EMX, Ynpu, HO, R, B, selfA._parameters)
#
Xa = EnsembleMean( Xn )
- #--------------------------
+ # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("seed", numpy.random.get_state())
- #--------------------------
+ # --------------------------
#
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("APosterioriCovariance") \
- or selfA._toStore("InnovationAtCurrentAnalysis") \
- or selfA._toStore("SimulatedObservationAtCurrentAnalysis") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- _HXa = numpy.ravel( H((Xa, Un)) ).reshape((-1,1))
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("APosterioriCovariance") \
+ or selfA._toStore("InnovationAtCurrentAnalysis") \
+ or selfA._toStore("SimulatedObservationAtCurrentAnalysis") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ _HXa = numpy.ravel( H((Xa, Un)) ).reshape((-1, 1))
_Innovation = Ynpu - _HXa
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
        # ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( EMX )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( - HE2 + Ynpu )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HE2 )
        # ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( EnsembleErrorCovariance(Xn) )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
mfp = PlatformInfo().MaximumPrecision()
# ==============================================================================
-def mmqr(
- func = None,
- x0 = None,
- fprime = None,
- bounds = None,
- quantile = 0.5,
- maxfun = 15000,
- toler = 1.e-06,
- y = None,
- ):
+def mmqr( func = None,
+ x0 = None,
+ fprime = None,
+ bounds = None,
+ quantile = 0.5,
+ maxfun = 15000,
+ toler = 1.e-06,
+ y = None,
+ ):
"""
    Computer implementation of the MMQR algorithm, based on the publication:
David R. Hunter, Kenneth Lange, "Quantile Regression via an MM Algorithm",
# ---------------------------
tn = float(toler) / n
e0 = -tn / math.log(tn)
- epsilon = (e0-tn)/(1+math.log(e0))
+ epsilon = (e0 - tn) / (1 + math.log(e0))
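    # Illustrative note (not from the original code): epsilon is the smoothing
    # parameter of the perturbed quantile objective in Hunter & Lange's MM
    # algorithm; it keeps the weights 1/(epsilon + |residual|) finite at zero.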
#
    # Initialization computations
    # ---------------------------
residus = mesures - numpy.ravel( func( variables ) )
- poids = 1./(epsilon+numpy.abs(residus))
+ poids = 1. / (epsilon + numpy.abs(residus))
veps = 1. - 2. * quantile - residus * poids
- lastsurrogate = -numpy.sum(residus*veps) - (1.-2.*quantile)*numpy.sum(residus)
+ lastsurrogate = - numpy.sum(residus * veps) - (1. - 2. * quantile) * numpy.sum(residus)
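    # Illustrative note (not from the original code): "surrogate" is the MM
    # majorizer of the quantile check-loss; each iteration minimizes it by the
    # weighted least-squares step below, halving the step until it decreases.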
iteration = 0
#
    # Iterative search
    # ----------------
- while (increment > toler) and (iteration < maxfun) :
+ while (increment > toler) and (iteration < maxfun):
iteration += 1
#
Derivees = numpy.array(fprime(variables))
- Derivees = Derivees.reshape(n,p) # ADAO & check shape
+ Derivees = Derivees.reshape(n, p) # ADAO & check shape
DeriveesT = Derivees.transpose()
- M = numpy.dot( DeriveesT , (numpy.array(p*[poids,]).T * Derivees) )
- SM = numpy.transpose(numpy.dot( DeriveesT , veps ))
+ M = numpy.dot( DeriveesT, (numpy.array(p * [poids,]).T * Derivees) )
+ SM = numpy.transpose(numpy.dot( DeriveesT, veps ))
step = - numpy.linalg.lstsq( M, SM, rcond=-1 )[0]
#
variables = variables + step
if bounds is not None:
        # Warning: avoid an infinite loop if an interval is too small
- while( (variables < numpy.ravel(numpy.asarray(bounds)[:,0])).any() or (variables > numpy.ravel(numpy.asarray(bounds)[:,1])).any() ):
- step = step/2.
+ while ( (variables < numpy.ravel(numpy.asarray(bounds)[:, 0])).any() or (variables > numpy.ravel(numpy.asarray(bounds)[:, 1])).any() ): # noqa: E501
+ step = step / 2.
variables = variables - step
residus = mesures - numpy.ravel( func(variables) )
- surrogate = numpy.sum(residus**2 * poids) + (4.*quantile-2.) * numpy.sum(residus)
+ surrogate = numpy.sum(residus**2 * poids) + (4. * quantile - 2.) * numpy.sum(residus)
#
- while ( (surrogate > lastsurrogate) and ( max(list(numpy.abs(step))) > 1.e-16 ) ) :
- step = step/2.
+ while ( (surrogate > lastsurrogate) and ( max(list(numpy.abs(step))) > 1.e-16 ) ):
+ step = step / 2.
variables = variables - step
residus = mesures - numpy.ravel( func(variables) )
- surrogate = numpy.sum(residus**2 * poids) + (4.*quantile-2.) * numpy.sum(residus)
+ surrogate = numpy.sum(residus**2 * poids) + (4. * quantile - 2.) * numpy.sum(residus)
#
- increment = abs(lastsurrogate-surrogate)
- poids = 1./(epsilon+numpy.abs(residus))
+ increment = abs(lastsurrogate - surrogate)
+ poids = 1. / (epsilon + numpy.abs(residus))
veps = 1. - 2. * quantile - residus * poids
- lastsurrogate = -numpy.sum(residus * veps) - (1.-2.*quantile)*numpy.sum(residus)
+ lastsurrogate = -numpy.sum(residus * veps) - (1. - 2. * quantile) * numpy.sum(residus)
#
    # Discrepancy measure
    # -------------------
- Ecart = quantile * numpy.sum(residus) - numpy.sum( residus[residus<0] )
+ Ecart = quantile * numpy.sum(residus) - numpy.sum( residus[residus < 0] )
#
- return variables, Ecart, [n,p,iteration,increment,0]
+ return variables, Ecart, [n, p, iteration, increment, 0]
# ==============================================================================
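Illustrative sketch (not part of the original changeset): a minimal standalone
use of the mmqr routine defined above, as a median regression on a hypothetical
linear model. It assumes n is the number of measurements and p the number of
parameters, consistent with the Jacobian reshape above; the data and model are
invented for illustration.

    import numpy
    A = numpy.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.]])  # design matrix (n=4, p=2)
    y = numpy.array([0.9, 2.1, 2.9, 4.2])                      # hypothetical measurements
    coeffs, ecart, infos = mmqr(
        func     = lambda x: A @ numpy.ravel(x),  # model predictions
        x0       = numpy.zeros(2),                # initial coefficients
        fprime   = lambda x: A,                   # Jacobian, reshaped to (n, p) internally
        quantile = 0.5,                           # median regression
        y        = y,
    )
    # infos holds [n, p, iteration, increment, 0], per the return statement above.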
if __name__ == "__main__":
HXb = numpy.asarray(Hm( Xb, HO["AppliedInX"]["HXb"] ))
else:
HXb = numpy.asarray(Hm( Xb ))
- HXb = HXb.reshape((-1,1))
+ HXb = HXb.reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size,HXb.size))
+ raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size, HXb.size)) # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape,HXb.shape))
+ raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape, HXb.shape)) # noqa: E501
#
Ht = HO["Tangent"].asMatrix(Xb)
- Ht = Ht.reshape(Y.size,Xb.size) # ADAO & check shape
+ Ht = Ht.reshape(Y.size, Xb.size) # ADAO & check shape
BHT = B * Ht.T
HBHTpR = R + Ht * BHT
Innovation = Y - HXb
#
    # Definition of the cost function
    # -------------------------------
+
def CostFunction(w):
- _W = numpy.asarray(w).reshape((-1,1))
+ _W = numpy.asarray(w).reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CurrentState") or \
- selfA._toStore("CurrentOptimum"):
+ selfA._toStore("CurrentState") or \
+ selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentState"].store( Xb + BHT @ _W )
if selfA._toStore("SimulatedObservationAtCurrentState") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( Xb + BHT @ _W ) )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( Innovation )
selfA.StoredVariables["CostFunctionJo"].store( Jo )
selfA.StoredVariables["CostFunctionJ" ].store( J )
if selfA._toStore("IndexOfOptimum") or \
- selfA._toStore("CurrentOptimum") or \
- selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("CurrentOptimum") or \
+ selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["CurrentState"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
return J
- #
+
def GradientOfCostFunction(w):
- _W = numpy.asarray(w).reshape((-1,1))
+ _W = numpy.asarray(w).reshape((-1, 1))
GradJb = HBHTpR @ _W
GradJo = - Innovation
GradJ = numpy.ravel( GradJb ) + numpy.ravel( GradJo )
nbPreviousSteps = selfA.StoredVariables["CostFunctionJ"].stepnumber()
#
if selfA._parameters["Minimizer"] == "LBFGSB":
- if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
+ if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
import daAlgorithms.Atoms.lbfgsb14hlt as optimiseur
elif vt("1.5.0") <= vt(scipy.version.version) <= vt("1.7.99"):
import daAlgorithms.Atoms.lbfgsb17hlt as optimiseur
x0 = Xini,
fprime = GradientOfCostFunction,
args = (),
- maxfun = selfA._parameters["MaximumNumberOfIterations"]-1,
- factr = selfA._parameters["CostDecrementTolerance"]*1.e14,
+ maxfun = selfA._parameters["MaximumNumberOfIterations"] - 1,
+ factr = selfA._parameters["CostDecrementTolerance"] * 1.e14,
pgtol = selfA._parameters["ProjectedGradientTolerance"],
iprint = selfA._parameters["optiprint"],
- )
+ )
# nfeval = Informations['funcalls']
# rc = Informations['warnflag']
elif selfA._parameters["Minimizer"] == "TNC":
pgtol = selfA._parameters["ProjectedGradientTolerance"],
ftol = selfA._parameters["CostDecrementTolerance"],
messages = selfA._parameters["optmessages"],
- )
+ )
elif selfA._parameters["Minimizer"] == "CG":
Minimum, fopt, nfeval, grad_calls, rc = scipy.optimize.fmin_cg(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "NCG":
Minimum, fopt, nfeval, grad_calls, hcalls, rc = scipy.optimize.fmin_ncg(
f = CostFunction,
avextol = selfA._parameters["CostDecrementTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "BFGS":
Minimum, fopt, gopt, Hopt, nfeval, grad_calls, rc = scipy.optimize.fmin_bfgs(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
else:
raise ValueError("Error in minimizer name: %s is unkown"%selfA._parameters["Minimizer"])
#
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
Minimum = selfA.StoredVariables["CurrentState"][IndexMin]
else:
- Minimum = Xb + BHT @ Minimum.reshape((-1,1))
+ Minimum = Xb + BHT @ Minimum.reshape((-1, 1))
#
Xa = Minimum
- if __storeState: selfA._setInternalState("Xn", Xa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
#
if selfA._toStore("OMA") or \
- selfA._toStore("InnovationAtCurrentAnalysis") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("InnovationAtCurrentAnalysis") or \
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("SimulatedObservationAtOptimum"):
if selfA._toStore("SimulatedObservationAtCurrentState"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin]
elif selfA._toStore("SimulatedObservationAtCurrentOptimum"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"][-1]
else:
HXa = Hm( Xa )
- oma = Y - HXa.reshape((-1,1))
+ oma = Y - numpy.asarray(HXa).reshape((-1, 1))
#
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("JacobianMatrixAtOptimum") or \
- selfA._toStore("KalmanGainAtOptimum"):
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("JacobianMatrixAtOptimum") or \
+ selfA._toStore("KalmanGainAtOptimum"):
HtM = HO["Tangent"].asMatrix(ValueForMethodForm = Xa)
- HtM = HtM.reshape(Y.size,Xa.size) # ADAO & check shape
+ HtM = HtM.reshape(Y.size, Xa.size) # ADAO & check shape
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("KalmanGainAtOptimum"):
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("KalmanGainAtOptimum"):
HaM = HO["Adjoint"].asMatrix(ValueForMethodForm = Xa)
- HaM = HaM.reshape(Xa.size,Y.size) # ADAO & check shape
+ HaM = HaM.reshape(Xa.size, Y.size) # ADAO & check shape
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles"):
+ selfA._toStore("SimulationQuantiles"):
BI = B.getI()
RI = R.getI()
A = HessienneEstimation(selfA, Xa.size, HaM, HtM, BI, RI)
if selfA._toStore("JacobianMatrixAtOptimum"):
selfA.StoredVariables["JacobianMatrixAtOptimum"].store( HtM )
if selfA._toStore("KalmanGainAtOptimum"):
- if (Y.size <= Xb.size): KG = B * HaM * (R + numpy.dot(HtM, B * HaM)).I
- elif (Y.size > Xb.size): KG = (BI + numpy.dot(HaM, RI * HtM)).I * HaM * RI
+ if (Y.size <= Xb.size):
+ KG = B * HaM * (R + numpy.dot(HtM, B * HaM)).I
+ elif (Y.size > Xb.size):
+ KG = (BI + numpy.dot(HaM, RI * HtM)).I * HaM * RI
selfA.StoredVariables["KalmanGainAtOptimum"].store( KG )
#
    # Additional computations and/or storage
    # --------------------------------------
if selfA._toStore("Innovation") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("MahalanobisConsistency") or \
- selfA._toStore("OMB"):
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("MahalanobisConsistency") or \
+ selfA._toStore("OMB"):
Innovation = Y - HXb
if selfA._toStore("Innovation"):
selfA.StoredVariables["Innovation"].store( Innovation )
TraceR = R.trace(Y.size)
selfA.StoredVariables["SigmaObs2"].store( vfloat( (Innovation.T @ oma) ) / TraceR )
if selfA._toStore("MahalanobisConsistency"):
- selfA.StoredVariables["MahalanobisConsistency"].store( float( 2.*MinJ/Innovation.size ) )
+ selfA.StoredVariables["MahalanobisConsistency"].store( float( 2. * MinJ / Innovation.size ) )
if selfA._toStore("SimulationQuantiles"):
QuantilesEstimations(selfA, A, Xa, HXa, Hm, HtM)
if selfA._toStore("SimulatedObservationAtBackground"):
mfp = PlatformInfo().MaximumPrecision()
# ==============================================================================
-def senkf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q,
- VariantM="KalmanFilterFormula16",
- Hybrid=None,
- ):
+def senkf( selfA, Xb, Y, U, HO, EM, CM, R, B, Q,
+ VariantM="KalmanFilterFormula16",
+ Hybrid=None,
+ ):
"""
Stochastic EnKF
"""
Cm = None
#
    # Observation duration and sizes
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
__p = numpy.cumprod(Y.shape())[-1]
else:
#
    # Precomputation of the inverses of B and R
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
BI = B.getI()
RI = R.getI()
#
nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
previousJMinimum = numpy.finfo(float).max
#
- if hasattr(R,"asfullmatrix"): Rn = R.asfullmatrix(__p)
- else: Rn = R
+ if hasattr(R, "asfullmatrix"):
+ Rn = R.asfullmatrix(__p)
+ else:
+ Rn = R
#
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
- if hasattr(B,"asfullmatrix"): Pn = B.asfullmatrix(__n)
- else: Pn = B
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
+ if hasattr(B, "asfullmatrix"):
+ Pn = B.asfullmatrix(__n)
+ else:
+ Pn = B
Xn = EnsembleOfBackgroundPerturbations( Xb, Pn, __m )
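        # Illustrative note (not from the original code): the initial ensemble
        # is drawn as perturbations of the background Xb with covariance Pn,
        # the full-matrix form of B.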
selfA.StoredVariables["Analysis"].store( Xb )
if selfA._toStore("APosterioriCovariance"):
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
#
- for step in range(duration-1):
+ for step in range(duration - 1):
numpy.random.set_state(selfA._getInternalState("seed"))
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.ravel( Y[step + 1] ).reshape((__p, 1))
else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
+ Ynpu = numpy.ravel( Y ).reshape((__p, 1))
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
if selfA._parameters["InflationType"] == "MultiplicativeOnBackgroundAnomalies":
- Xn = CovarianceInflation( Xn,
+ Xn = CovarianceInflation(
+ Xn,
selfA._parameters["InflationType"],
selfA._parameters["InflationFactor"],
- )
+ )
#
- if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
- EMX = M( [(Xn[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
+ if selfA._parameters["EstimationOf"] == "State": # Forecast + Q and observation of forecast
+ EMX = M( [(Xn[:, i], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
Xn_predicted = EnsemblePerturbationWithGivenCovariance( EMX, Q )
- HX_predicted = H( [(Xn_predicted[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
-        if Cm is not None and Un is not None: # Warning: if Cm is also in M, it is duplicated!
- Cm = Cm.reshape(__n,Un.size) # ADAO & check shape
+ HX_predicted = H( [(Xn_predicted[:, i], None) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
+        if Cm is not None and Un is not None:  # Warning: if Cm is also in M, it is duplicated!
+ Cm = Cm.reshape(__n, Un.size) # ADAO & check shape
Xn_predicted = Xn_predicted + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
+ elif selfA._parameters["EstimationOf"] == "Parameters": # Observation of forecast
            # --- > By principle, M = Id, Q = 0
Xn_predicted = EMX = Xn
- HX_predicted = H( [(Xn_predicted[:,i], Un) for i in range(__m)],
- argsAsSerie = True,
- returnSerieAsArrayMatrix = True )
+ HX_predicted = H( [(Xn_predicted[:, i], Un) for i in range(__m)],
+ argsAsSerie = True,
+ returnSerieAsArrayMatrix = True )
#
# Mean of forecast and observation of forecast
Xfm = EnsembleMean( Xn_predicted )
Hfm = EnsembleMean( HX_predicted )
#
- #--------------------------
+ # --------------------------
if VariantM == "KalmanFilterFormula05":
PfHT, HPfHT = 0., 0.
for i in range(__m):
- Exfi = Xn_predicted[:,i].reshape((__n,1)) - Xfm
- Eyfi = HX_predicted[:,i].reshape((__p,1)) - Hfm
+ Exfi = Xn_predicted[:, i].reshape((__n, 1)) - Xfm
+ Eyfi = HX_predicted[:, i].reshape((__p, 1)) - Hfm
PfHT += Exfi * Eyfi.T
HPfHT += Eyfi * Eyfi.T
- PfHT = (1./(__m-1)) * PfHT
- HPfHT = (1./(__m-1)) * HPfHT
+ PfHT = (1. / (__m - 1)) * PfHT
+ HPfHT = (1. / (__m - 1)) * HPfHT
Kn = PfHT * ( R + HPfHT ).I
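            # Illustrative note (not from the original code): the classical gain
            # K = Pf H^T (H Pf H^T + R)^-1, with Pf H^T and H Pf H^T estimated
            # from the ensemble anomalies accumulated above.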
del PfHT, HPfHT
#
for i in range(__m):
ri = numpy.random.multivariate_normal(numpy.zeros(__p), Rn)
- Xn[:,i] = numpy.ravel(Xn_predicted[:,i]) + Kn @ (numpy.ravel(Ynpu) + ri - HX_predicted[:,i])
- #--------------------------
+ Xn[:, i] = numpy.ravel(Xn_predicted[:, i]) + Kn @ (numpy.ravel(Ynpu) + ri - HX_predicted[:, i])
+ # --------------------------
elif VariantM == "KalmanFilterFormula16":
EpY = EnsembleOfCenteredPerturbations(Ynpu, Rn, __m)
- EpYm = EpY.mean(axis=1, dtype=mfp).astype('float').reshape((__p,1))
+ EpYm = EpY.mean(axis=1, dtype=mfp).astype('float').reshape((__p, 1))
#
- EaX = EnsembleOfAnomalies( Xn_predicted ) / math.sqrt(__m-1)
- EaY = (HX_predicted - Hfm - EpY + EpYm) / math.sqrt(__m-1)
+ EaX = EnsembleOfAnomalies( Xn_predicted ) / math.sqrt(__m - 1)
+ EaY = (HX_predicted - Hfm - EpY + EpYm) / math.sqrt(__m - 1)
#
Kn = EaX @ EaY.T @ numpy.linalg.inv( EaY @ EaY.T)
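            # Illustrative note (not from the original code): with the
            # observation-perturbation anomalies folded into EaY, this solves
            #   Kn @ (EaY @ EaY.T) = EaX @ EaY.T,
            # the ensemble least-squares form of the stochastic EnKF gain.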
#
for i in range(__m):
- Xn[:,i] = numpy.ravel(Xn_predicted[:,i]) + Kn @ (numpy.ravel(EpY[:,i]) - HX_predicted[:,i])
- #--------------------------
+ Xn[:, i] = numpy.ravel(Xn_predicted[:, i]) + Kn @ (numpy.ravel(EpY[:, i]) - HX_predicted[:, i])
+ # --------------------------
else:
raise ValueError("VariantM has to be chosen in the authorized methods list.")
#
if selfA._parameters["InflationType"] == "MultiplicativeOnAnalysisAnomalies":
- Xn = CovarianceInflation( Xn,
+ Xn = CovarianceInflation(
+ Xn,
selfA._parameters["InflationType"],
selfA._parameters["InflationFactor"],
- )
+ )
#
if Hybrid == "E3DVAR":
Xn = Apply3DVarRecentringOnEnsemble(Xn, EMX, Ynpu, HO, R, B, selfA._parameters)
#
Xa = EnsembleMean( Xn )
- #--------------------------
+ # --------------------------
selfA._setInternalState("Xn", Xn)
selfA._setInternalState("seed", numpy.random.get_state())
- #--------------------------
+ # --------------------------
#
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("APosterioriCovariance") \
- or selfA._toStore("InnovationAtCurrentAnalysis") \
- or selfA._toStore("SimulatedObservationAtCurrentAnalysis") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- _HXa = numpy.ravel( H((Xa, Un)) ).reshape((-1,1))
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("APosterioriCovariance") \
+ or selfA._toStore("InnovationAtCurrentAnalysis") \
+ or selfA._toStore("SimulatedObservationAtCurrentAnalysis") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ _HXa = numpy.ravel( H((Xa, None)) ).reshape((-1, 1))
_Innovation = Ynpu - _HXa
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
        # ---> with current state
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
+ or selfA._toStore("CurrentState"):
selfA.StoredVariables["CurrentState"].store( Xn )
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( EMX )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( - HX_predicted + Ynpu )
if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HX_predicted )
        # ---> others
if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
+ or selfA._toStore("CostFunctionJ") \
+ or selfA._toStore("CostFunctionJb") \
+ or selfA._toStore("CostFunctionJo") \
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("APosterioriCovariance"):
Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
J = Jb + Jo
selfA.StoredVariables["CostFunctionJ" ].store( J )
#
if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ or selfA._toStore("CurrentOptimum") \
+ or selfA._toStore("CostFunctionJAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
+ or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
+ or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( EnsembleErrorCovariance(Xn) )
if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
+ and J < previousJMinimum:
previousJMinimum = J
XaMin = Xa
if selfA._toStore("APosterioriCovariance"):
HXb = numpy.asarray(Hm( Xb, HO["AppliedInX"]["HXb"] ))
else:
HXb = numpy.asarray(Hm( Xb ))
- HXb = HXb.reshape((-1,1))
+ HXb = HXb.reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size,HXb.size))
+ raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size, HXb.size)) # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape,HXb.shape))
+ raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape, HXb.shape)) # noqa: E501
#
if selfA._toStore("JacobianMatrixAtBackground"):
HtMb = HO["Tangent"].asMatrix(Xb)
- HtMb = HtMb.reshape(Y.size,Xb.size) # ADAO & check shape
+ HtMb = HtMb.reshape(Y.size, Xb.size) # ADAO & check shape
selfA.StoredVariables["JacobianMatrixAtBackground"].store( HtMb )
#
BI = B.getI()
#
    # Definition of the cost function
    # -------------------------------
+
def CostFunction(x):
- _X = numpy.asarray(x).reshape((-1,1))
+ _X = numpy.asarray(x).reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CurrentState") or \
- selfA._toStore("CurrentOptimum"):
+ selfA._toStore("CurrentState") or \
+ selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentState"].store( _X )
- _HX = numpy.asarray(Hm( _X )).reshape((-1,1))
+ _HX = numpy.asarray(Hm( _X )).reshape((-1, 1))
_Innovation = Y - _HX
if selfA._toStore("SimulatedObservationAtCurrentState") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( _HX )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
selfA.StoredVariables["CostFunctionJo"].store( Jo )
selfA.StoredVariables["CostFunctionJ" ].store( J )
if selfA._toStore("IndexOfOptimum") or \
- selfA._toStore("CurrentOptimum") or \
- selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("CurrentOptimum") or \
+ selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["CurrentState"][IndexMin] )
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
return J
- #
+
def GradientOfCostFunction(x):
- _X = numpy.asarray(x).reshape((-1,1))
- _HX = numpy.asarray(Hm( _X )).reshape((-1,1))
+ _X = numpy.asarray(x).reshape((-1, 1))
+ _HX = numpy.asarray(Hm( _X )).reshape((-1, 1))
GradJb = BI * (_X - Xb)
GradJo = - Ha( (_X, RI * (Y - _HX)) )
GradJ = numpy.ravel( GradJb ) + numpy.ravel( GradJo )
nbPreviousSteps = selfA.StoredVariables["CostFunctionJ"].stepnumber()
#
if selfA._parameters["Minimizer"] == "LBFGSB":
- if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
+ if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
import daAlgorithms.Atoms.lbfgsb14hlt as optimiseur
elif vt("1.5.0") <= vt(scipy.version.version) <= vt("1.7.99"):
import daAlgorithms.Atoms.lbfgsb17hlt as optimiseur
fprime = GradientOfCostFunction,
args = (),
bounds = selfA._parameters["Bounds"],
- maxfun = selfA._parameters["MaximumNumberOfIterations"]-1,
- factr = selfA._parameters["CostDecrementTolerance"]*1.e14,
+ maxfun = selfA._parameters["MaximumNumberOfIterations"] - 1,
+ factr = selfA._parameters["CostDecrementTolerance"] * 1.e14,
pgtol = selfA._parameters["ProjectedGradientTolerance"],
iprint = selfA._parameters["optiprint"],
- )
+ )
# nfeval = Informations['funcalls']
# rc = Informations['warnflag']
elif selfA._parameters["Minimizer"] == "TNC":
pgtol = selfA._parameters["ProjectedGradientTolerance"],
ftol = selfA._parameters["CostDecrementTolerance"],
messages = selfA._parameters["optmessages"],
- )
+ )
elif selfA._parameters["Minimizer"] == "CG":
Minimum, fopt, nfeval, grad_calls, rc = scipy.optimize.fmin_cg(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "NCG":
Minimum, fopt, nfeval, grad_calls, hcalls, rc = scipy.optimize.fmin_ncg(
f = CostFunction,
avextol = selfA._parameters["CostDecrementTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "BFGS":
Minimum, fopt, gopt, Hopt, nfeval, grad_calls, rc = scipy.optimize.fmin_bfgs(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
else:
raise ValueError("Error in minimizer name: %s is unkown"%selfA._parameters["Minimizer"])
#
Minimum = selfA.StoredVariables["CurrentState"][IndexMin]
#
Xa = Minimum
- if __storeState: selfA._setInternalState("Xn", Xa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
#
if selfA._toStore("OMA") or \
- selfA._toStore("InnovationAtCurrentAnalysis") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("InnovationAtCurrentAnalysis") or \
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("SimulatedObservationAtOptimum"):
if selfA._toStore("SimulatedObservationAtCurrentState"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin]
elif selfA._toStore("SimulatedObservationAtCurrentOptimum"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"][-1]
else:
HXa = Hm( Xa )
- oma = Y - HXa.reshape((-1,1))
+ oma = Y - numpy.asarray(HXa).reshape((-1, 1))
#
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("JacobianMatrixAtOptimum") or \
- selfA._toStore("KalmanGainAtOptimum"):
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("JacobianMatrixAtOptimum") or \
+ selfA._toStore("KalmanGainAtOptimum"):
HtM = HO["Tangent"].asMatrix(ValueForMethodForm = Xa)
- HtM = HtM.reshape(Y.size,Xa.size) # ADAO & check shape
+ HtM = HtM.reshape(Y.size, Xa.size) # ADAO & check shape
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("KalmanGainAtOptimum"):
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("KalmanGainAtOptimum"):
HaM = HO["Adjoint"].asMatrix(ValueForMethodForm = Xa)
- HaM = HaM.reshape(Xa.size,Y.size) # ADAO & check shape
+ HaM = HaM.reshape(Xa.size, Y.size) # ADAO & check shape
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles"):
+ selfA._toStore("SimulationQuantiles"):
A = HessienneEstimation(selfA, Xa.size, HaM, HtM, BI, RI)
if selfA._toStore("APosterioriCovariance"):
selfA.StoredVariables["APosterioriCovariance"].store( A )
if selfA._toStore("JacobianMatrixAtOptimum"):
selfA.StoredVariables["JacobianMatrixAtOptimum"].store( HtM )
if selfA._toStore("KalmanGainAtOptimum"):
- if (Y.size <= Xb.size): KG = B * HaM * (R + numpy.dot(HtM, B * HaM)).I
- elif (Y.size > Xb.size): KG = (BI + numpy.dot(HaM, RI * HtM)).I * HaM * RI
+ if (Y.size <= Xb.size):
+ KG = B * HaM * (R + numpy.dot(HtM, B * HaM)).I
+ elif (Y.size > Xb.size):
+ KG = (BI + numpy.dot(HaM, RI * HtM)).I * HaM * RI
selfA.StoredVariables["KalmanGainAtOptimum"].store( KG )
#
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if selfA._toStore("Innovation") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("MahalanobisConsistency") or \
- selfA._toStore("OMB"):
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("MahalanobisConsistency") or \
+ selfA._toStore("OMB"):
Innovation = Y - HXb
if selfA._toStore("Innovation"):
selfA.StoredVariables["Innovation"].store( Innovation )
TraceR = R.trace(Y.size)
selfA.StoredVariables["SigmaObs2"].store( vfloat( (Innovation.T @ oma) ) / TraceR )
if selfA._toStore("MahalanobisConsistency"):
- selfA.StoredVariables["MahalanobisConsistency"].store( float( 2.*MinJ/Innovation.size ) )
+ selfA.StoredVariables["MahalanobisConsistency"].store( float( 2. * MinJ / Innovation.size ) )
if selfA._toStore("SimulationQuantiles"):
QuantilesEstimations(selfA, A, Xa, HXa, Hm, HtM)
if selfA._toStore("SimulatedObservationAtBackground"):
Cm = CM["Tangent"].asMatrix(Xb)
else:
Cm = None
- #
+
def Un(_step):
if U is not None:
- if hasattr(U,"store") and 1<=_step<len(U) :
- _Un = numpy.ravel( U[_step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- _Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and 1 <= _step < len(U):
+ _Un = numpy.ravel( U[_step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ _Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- _Un = numpy.ravel( U ).reshape((-1,1))
+ _Un = numpy.ravel( U ).reshape((-1, 1))
else:
_Un = None
return _Un
- def CmUn(_xn,_un):
- if Cm is not None and _un is not None: # Attention : si Cm est aussi dans M, doublon !
- _Cm = Cm.reshape(_xn.size,_un.size) # ADAO & check shape
- _CmUn = (_Cm @ _un).reshape((-1,1))
+
+ def CmUn(_xn, _un):
+ if Cm is not None and _un is not None: # Attention : si Cm est aussi dans M, doublon !
+ _Cm = Cm.reshape(_xn.size, _un.size) # ADAO & check shape
+ _CmUn = (_Cm @ _un).reshape((-1, 1))
else:
_CmUn = 0.
return _CmUn
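
# Hedged illustration (not part of the algorithm, toy values only): the
# Un(_step) helper above selects the control for a given time step from a
# stored series, a single stored value, or a constant vector, and
# CmUn(_xn, _un) adds the linearized control contribution Cm @ un to the
# evolved state. For a hypothetical 2-state system with a scalar control:
#
#   >>> import numpy
#   >>> Cm  = numpy.array([[0.5], [0.1]])         # control-to-state operator
#   >>> _xn = numpy.zeros((2, 1))                 # current state
#   >>> _un = numpy.array([[2.]])                 # control at this step
#   >>> Cm.reshape(_xn.size, _un.size) @ _un      # term added to M(x, u)
#   array([[1. ],
#          [0.2]])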
# avec l'observation du pas 1.
#
# Nombre de pas identique au nombre de pas d'observations
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
else:
duration = 2
#
# Définition de la fonction-coût
# ------------------------------
- selfA.DirectCalculation = [None,] # Le pas 0 n'est pas observé
- selfA.DirectInnovation = [None,] # Le pas 0 n'est pas observé
+ selfA.DirectCalculation = [None,] # Le pas 0 n'est pas observé
+ selfA.DirectInnovation = [None,] # Le pas 0 n'est pas observé
+
def CostFunction(x):
- _X = numpy.asarray(x).reshape((-1,1))
+ _X = numpy.asarray(x).reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CurrentState") or \
- selfA._toStore("CurrentOptimum"):
+ selfA._toStore("CurrentState") or \
+ selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentState"].store( _X )
Jb = vfloat( 0.5 * (_X - Xb).T * (BI * (_X - Xb)) )
selfA.DirectCalculation = [None,]
selfA.DirectInnovation = [None,]
Jo = 0.
_Xn = _X
- for step in range(0,duration-1):
- if hasattr(Y,"store"):
- _Ynpu = numpy.ravel( Y[step+1] ).reshape((-1,1))
+ for step in range(0, duration - 1):
+ if hasattr(Y, "store"):
+ _Ynpu = numpy.ravel( Y[step + 1] ).reshape((-1, 1))
else:
- _Ynpu = numpy.ravel( Y ).reshape((-1,1))
+ _Ynpu = numpy.ravel( Y ).reshape((-1, 1))
_Un = Un(step)
#
# Etape d'évolution
if selfA._parameters["EstimationOf"] == "State":
- _Xn = Mm( (_Xn, _Un) ).reshape((-1,1)) + CmUn(_Xn, _Un)
+ _Xn = Mm( (_Xn, _Un) ).reshape((-1, 1)) + CmUn(_Xn, _Un)
elif selfA._parameters["EstimationOf"] == "Parameters":
pass
#
#
# Etape de différence aux observations
if selfA._parameters["EstimationOf"] == "State":
- _YmHMX = _Ynpu - numpy.ravel( Hm( (_Xn, None) ) ).reshape((-1,1))
+ _YmHMX = _Ynpu - numpy.ravel( Hm( (_Xn, None) ) ).reshape((-1, 1))
elif selfA._parameters["EstimationOf"] == "Parameters":
- _YmHMX = _Ynpu - numpy.ravel( Hm( (_Xn, _Un) ) ).reshape((-1,1)) - CmUn(_Xn, _Un)
+ _YmHMX = _Ynpu - numpy.ravel( Hm( (_Xn, None) ) ).reshape((-1, 1)) - CmUn(_Xn, _Un)
#
# Stockage de l'état
selfA.DirectCalculation.append( _Xn )
selfA.StoredVariables["CostFunctionJo"].store( Jo )
selfA.StoredVariables["CostFunctionJ" ].store( J )
if selfA._toStore("IndexOfOptimum") or \
- selfA._toStore("CurrentOptimum") or \
- selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJoAtCurrentOptimum"):
+ selfA._toStore("CurrentOptimum") or \
+ selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJoAtCurrentOptimum"):
IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["CurrentState"][IndexMin] )
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
return J
- #
+
def GradientOfCostFunction(x):
- _X = numpy.asarray(x).reshape((-1,1))
+ _X = numpy.asarray(x).reshape((-1, 1))
GradJb = BI * (_X - Xb)
GradJo = 0.
- for step in range(duration-1,0,-1):
+ for step in range(duration - 1, 0, -1):
# Étape de récupération du dernier stockage de l'évolution
_Xn = selfA.DirectCalculation.pop()
# Étape de récupération du dernier stockage de l'innovation
_YmHMX = selfA.DirectInnovation.pop()
# Calcul des adjoints
Ha = HO["Adjoint"].asMatrix(ValueForMethodForm = _Xn)
- Ha = Ha.reshape(_Xn.size,_YmHMX.size) # ADAO & check shape
+ Ha = Ha.reshape(_Xn.size, _YmHMX.size) # ADAO & check shape
Ma = EM["Adjoint"].asMatrix(ValueForMethodForm = _Xn)
- Ma = Ma.reshape(_Xn.size,_Xn.size) # ADAO & check shape
+ Ma = Ma.reshape(_Xn.size, _Xn.size) # ADAO & check shape
# Calcul du gradient par état adjoint
- GradJo = GradJo + Ha * (RI * _YmHMX) # Équivaut pour Ha linéaire à : Ha( (_Xn, RI * _YmHMX) )
- GradJo = Ma * GradJo # Équivaut pour Ma linéaire à : Ma( (_Xn, GradJo) )
+ GradJo = GradJo + Ha * (RI * _YmHMX) # Équivaut pour Ha linéaire à : Ha( (_Xn, RI * _YmHMX) )
+ GradJo = Ma * GradJo # Équivaut pour Ma linéaire à : Ma( (_Xn, GradJo) )
GradJ = numpy.ravel( GradJb ) - numpy.ravel( GradJo )
return GradJ
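
# Hedged aside (illustrative only, toy linear model, hypothetical names):
# the backward loop above is the classical adjoint sweep of 4D-Var, where
# each stored innovation is pulled back by Ha (adjoint of the observation
# operator) weighted by RI, then propagated backward by Ma (adjoint of the
# evolution model). For J(x) = 1/2 ||y - H M x||^2 the adjoint gradient
# M.T @ H.T @ (H M x - y) can be checked against a finite difference:
#
#   >>> import numpy
#   >>> M = numpy.array([[1., 0.1], [0., 1.]])    # toy evolution model
#   >>> H = numpy.array([[1., 0.]])               # toy observation operator
#   >>> y = numpy.array([2.]) ; x = numpy.array([1., 1.])
#   >>> J = lambda x: 0.5 * numpy.sum((H @ (M @ x) - y)**2)
#   >>> g = M.T @ H.T @ (H @ (M @ x) - y)         # adjoint gradient
#   >>> e = 1.e-6 ; d = numpy.array([1., 0.])
#   >>> bool(abs((J(x + e*d) - J(x - e*d)) / (2.*e) - g[0]) < 1.e-8)
#   True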
#
nbPreviousSteps = selfA.StoredVariables["CostFunctionJ"].stepnumber()
#
if selfA._parameters["Minimizer"] == "LBFGSB":
- if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
+ if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
import daAlgorithms.Atoms.lbfgsb14hlt as optimiseur
elif vt("1.5.0") <= vt(scipy.version.version) <= vt("1.7.99"):
import daAlgorithms.Atoms.lbfgsb17hlt as optimiseur
fprime = GradientOfCostFunction,
args = (),
bounds = selfA._parameters["Bounds"],
- maxfun = selfA._parameters["MaximumNumberOfIterations"]-1,
- factr = selfA._parameters["CostDecrementTolerance"]*1.e14,
+ maxfun = selfA._parameters["MaximumNumberOfIterations"] - 1,
+ factr = selfA._parameters["CostDecrementTolerance"] * 1.e14,
pgtol = selfA._parameters["ProjectedGradientTolerance"],
iprint = selfA._parameters["optiprint"],
- )
+ )
# nfeval = Informations['funcalls']
# rc = Informations['warnflag']
elif selfA._parameters["Minimizer"] == "TNC":
pgtol = selfA._parameters["ProjectedGradientTolerance"],
ftol = selfA._parameters["CostDecrementTolerance"],
messages = selfA._parameters["optmessages"],
- )
+ )
elif selfA._parameters["Minimizer"] == "CG":
Minimum, fopt, nfeval, grad_calls, rc = scipy.optimize.fmin_cg(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "NCG":
Minimum, fopt, nfeval, grad_calls, hcalls, rc = scipy.optimize.fmin_ncg(
f = CostFunction,
avextol = selfA._parameters["CostDecrementTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "BFGS":
Minimum, fopt, gopt, Hopt, nfeval, grad_calls, rc = scipy.optimize.fmin_bfgs(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
else:
raise ValueError("Error in minimizer name: %s is unkown"%selfA._parameters["Minimizer"])
#
+++ /dev/null
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2008-2024 EDF R&D
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
-#
-# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
-#
-# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-
-__doc__ = """
- Unscented Kalman Filter
-"""
-__author__ = "Jean-Philippe ARGAUD"
-
-import math, numpy, scipy, copy
-from daCore.PlatformInfo import vfloat
-
-# ==============================================================================
-def uskf(selfA, Xb, Y, U, HO, EM, CM, R, B, Q):
- """
- Unscented Kalman Filter
- """
- if selfA._parameters["EstimationOf"] == "Parameters":
- selfA._parameters["StoreInternalVariables"] = True
- #
- L = Xb.size
- Alpha = selfA._parameters["Alpha"]
- Beta = selfA._parameters["Beta"]
- if selfA._parameters["Kappa"] == 0:
- if selfA._parameters["EstimationOf"] == "State":
- Kappa = 0
- elif selfA._parameters["EstimationOf"] == "Parameters":
- Kappa = 3 - L
- else:
- Kappa = selfA._parameters["Kappa"]
- Lambda = float( Alpha**2 ) * ( L + Kappa ) - L
- Gamma = math.sqrt( L + Lambda )
- #
- Ww = []
- Ww.append( 0. )
- for i in range(2*L):
- Ww.append( 1. / (2.*(L + Lambda)) )
- #
- Wm = numpy.array( Ww )
- Wm[0] = Lambda / (L + Lambda)
- Wc = numpy.array( Ww )
- Wc[0] = Lambda / (L + Lambda) + (1. - Alpha**2 + Beta)
- #
- # Durée d'observation et tailles
- if hasattr(Y,"stepnumber"):
- duration = Y.stepnumber()
- __p = numpy.cumprod(Y.shape())[-1]
- else:
- duration = 2
- __p = numpy.size(Y)
- #
- # Précalcul des inversions de B et R
- if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
- BI = B.getI()
- RI = R.getI()
- #
- __n = Xb.size
- nbPreviousSteps = len(selfA.StoredVariables["Analysis"])
- #
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
- Xn = Xb
- if hasattr(B,"asfullmatrix"):
- Pn = B.asfullmatrix(__n)
- else:
- Pn = B
- selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
- selfA.StoredVariables["Analysis"].store( Xb )
- if selfA._toStore("APosterioriCovariance"):
- selfA.StoredVariables["APosterioriCovariance"].store( Pn )
- elif selfA._parameters["nextStep"]:
- Xn = selfA._getInternalState("Xn")
- Pn = selfA._getInternalState("Pn")
- #
- if selfA._parameters["EstimationOf"] == "Parameters":
- XaMin = Xn
- previousJMinimum = numpy.finfo(float).max
- #
- for step in range(duration-1):
- #
- if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
- else:
- Un = numpy.ravel( U ).reshape((-1,1))
- else:
- Un = None
- #
- if CM is not None and "Tangent" in CM and U is not None:
- Cm = CM["Tangent"].asMatrix(Xn)
- else:
- Cm = None
- #
- Pndemi = numpy.real(scipy.linalg.sqrtm(Pn))
- Xnmu = numpy.hstack([Xn, Xn+Gamma*Pndemi, Xn-Gamma*Pndemi])
- nbSpts = 2*Xn.size+1
- #
- XEnnmu = []
- for point in range(nbSpts):
- if selfA._parameters["EstimationOf"] == "State":
- Mm = EM["Direct"].appliedControledFormTo
- XEnnmui = numpy.asarray( Mm( (Xnmu[:,point], Un) ) ).reshape((-1,1))
- if Cm is not None and Un is not None: # Attention : si Cm est aussi dans M, doublon !
- Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
- XEnnmui = XEnnmui + Cm @ Un
- elif selfA._parameters["EstimationOf"] == "Parameters":
- # --- > Par principe, M = Id, Q = 0
- XEnnmui = Xnmu[:,point]
- XEnnmu.append( numpy.ravel(XEnnmui).reshape((-1,1)) )
- XEnnmu = numpy.concatenate( XEnnmu, axis=1 )
- #
- Xhmn = ( XEnnmu * Wm ).sum(axis=1)
- #
- if selfA._parameters["EstimationOf"] == "State": Pmn = copy.copy(Q)
- elif selfA._parameters["EstimationOf"] == "Parameters": Pmn = 0.
- for point in range(nbSpts):
- dXEnnmuXhmn = XEnnmu[:,point].flat-Xhmn
- Pmn += Wc[point] * numpy.outer(dXEnnmuXhmn, dXEnnmuXhmn)
- #
- Pmndemi = numpy.real(scipy.linalg.sqrtm(Pmn))
- Xnnmu = numpy.hstack([Xhmn.reshape((-1,1)), Xhmn.reshape((-1,1))+Gamma*Pmndemi, Xhmn.reshape((-1,1))-Gamma*Pmndemi])
- #
- Hm = HO["Direct"].appliedControledFormTo
- Ynnmu = []
- for point in range(nbSpts):
- if selfA._parameters["EstimationOf"] == "State":
- Ynnmui = Hm( (Xnnmu[:,point], None) )
- elif selfA._parameters["EstimationOf"] == "Parameters":
- Ynnmui = Hm( (Xnnmu[:,point], Un) )
- Ynnmu.append( numpy.ravel(Ynnmui).reshape((__p,1)) )
- Ynnmu = numpy.concatenate( Ynnmu, axis=1 )
- #
- Yhmn = ( Ynnmu * Wm ).sum(axis=1)
- #
- Pyyn = copy.copy(R)
- Pxyn = 0.
- for point in range(nbSpts):
- dYnnmuYhmn = Ynnmu[:,point].flat-Yhmn
- dXnnmuXhmn = Xnnmu[:,point].flat-Xhmn
- Pyyn += Wc[point] * numpy.outer(dYnnmuYhmn, dYnnmuYhmn)
- Pxyn += Wc[point] * numpy.outer(dXnnmuXhmn, dYnnmuYhmn)
- #
- if hasattr(Y,"store"):
- Ynpu = numpy.ravel( Y[step+1] ).reshape((__p,1))
- else:
- Ynpu = numpy.ravel( Y ).reshape((__p,1))
- _Innovation = Ynpu - Yhmn.reshape((-1,1))
- if selfA._parameters["EstimationOf"] == "Parameters":
- if Cm is not None and Un is not None: # Attention : si Cm est aussi dans H, doublon !
- _Innovation = _Innovation - Cm @ Un
- #
- Kn = Pxyn @ Pyyn.I
- Xn = Xhmn.reshape((-1,1)) + Kn @ _Innovation
- Pn = Pmn - Kn @ (Pyyn @ Kn.T)
- #
- Xa = Xn # Pointeurs
- #--------------------------
- selfA._setInternalState("Xn", Xn)
- selfA._setInternalState("Pn", Pn)
- #--------------------------
- #
- selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
- # ---> avec analysis
- selfA.StoredVariables["Analysis"].store( Xa )
- if selfA._toStore("SimulatedObservationAtCurrentAnalysis"):
- selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"].store( Hm((Xa, Un)) )
- if selfA._toStore("InnovationAtCurrentAnalysis"):
- selfA.StoredVariables["InnovationAtCurrentAnalysis"].store( _Innovation )
- # ---> avec current state
- if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CurrentState"):
- selfA.StoredVariables["CurrentState"].store( Xn )
- if selfA._toStore("ForecastState"):
- selfA.StoredVariables["ForecastState"].store( Xhmn )
- if selfA._toStore("ForecastCovariance"):
- selfA.StoredVariables["ForecastCovariance"].store( Pmn )
- if selfA._toStore("BMA"):
- selfA.StoredVariables["BMA"].store( Xhmn - Xa )
- if selfA._toStore("InnovationAtCurrentState"):
- selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
- if selfA._toStore("SimulatedObservationAtCurrentState") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Yhmn )
- # ---> autres
- if selfA._parameters["StoreInternalVariables"] \
- or selfA._toStore("CostFunctionJ") \
- or selfA._toStore("CostFunctionJb") \
- or selfA._toStore("CostFunctionJo") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("APosterioriCovariance"):
- Jb = vfloat( 0.5 * (Xa - Xb).T * (BI * (Xa - Xb)) )
- Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
- J = Jb + Jo
- selfA.StoredVariables["CostFunctionJb"].store( Jb )
- selfA.StoredVariables["CostFunctionJo"].store( Jo )
- selfA.StoredVariables["CostFunctionJ" ].store( J )
- #
- if selfA._toStore("IndexOfOptimum") \
- or selfA._toStore("CurrentOptimum") \
- or selfA._toStore("CostFunctionJAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJbAtCurrentOptimum") \
- or selfA._toStore("CostFunctionJoAtCurrentOptimum") \
- or selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
- if selfA._toStore("IndexOfOptimum"):
- selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
- if selfA._toStore("CurrentOptimum"):
- selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["Analysis"][IndexMin] )
- if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentAnalysis"][IndexMin] )
- if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
- if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
- if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
- if selfA._toStore("APosterioriCovariance"):
- selfA.StoredVariables["APosterioriCovariance"].store( Pn )
- if selfA._parameters["EstimationOf"] == "Parameters" \
- and J < previousJMinimum:
- previousJMinimum = J
- XaMin = Xa
- if selfA._toStore("APosterioriCovariance"):
- covarianceXaMin = selfA.StoredVariables["APosterioriCovariance"][-1]
- #
- # Stockage final supplémentaire de l'optimum en estimation de paramètres
- # ----------------------------------------------------------------------
- if selfA._parameters["EstimationOf"] == "Parameters":
- selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["Analysis"]) )
- selfA.StoredVariables["Analysis"].store( XaMin )
- if selfA._toStore("APosterioriCovariance"):
- selfA.StoredVariables["APosterioriCovariance"].store( covarianceXaMin )
- if selfA._toStore("BMA"):
- selfA.StoredVariables["BMA"].store( numpy.ravel(Xb) - numpy.ravel(XaMin) )
- #
- return 0
-
-# ==============================================================================
-if __name__ == "__main__":
- print('\n AUTODIAGNOSTIC\n')
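
# Hedged numeric check of the scaled sigma-point weights computed by the
# uskf atom above (toy dimension, names are illustrative), following the
# scaled unscented transform of [Wan00]: the mean weights must sum to one.
#
#   >>> L, Alpha, Beta, Kappa = 3, 0.5, 2., 0
#   >>> Lambda = Alpha**2 * (L + Kappa) - L
#   >>> Wm0 = Lambda / (L + Lambda)               # central mean weight
#   >>> Wc0 = Wm0 + (1. - Alpha**2 + Beta)        # central covariance weight
#   >>> Wi  = 1. / (2. * (L + Lambda))            # weight of each sigma point
#   >>> round(Wm0 + 2 * L * Wi, 12)
#   1.0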
HXb = numpy.asarray(Hm( Xb, HO["AppliedInX"]["HXb"] ))
else:
HXb = numpy.asarray(Hm( Xb ))
- HXb = HXb.reshape((-1,1))
+ HXb = HXb.reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different, they have to be identical."%(Y.size,HXb.size))
+ raise ValueError("The size %i of observations Y and %i of observed calculation H(X) are different; they have to be identical."%(Y.size, HXb.size))  # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different, they have to be identical."%(Y.shape,HXb.shape))
+ raise ValueError("The shapes %s of observations Y and %s of observed calculation H(X) are different; they have to be identical."%(Y.shape, HXb.shape))  # noqa: E501
#
if selfA._toStore("JacobianMatrixAtBackground"):
HtMb = HO["Tangent"].asMatrix(ValueForMethodForm = Xb)
- HtMb = HtMb.reshape(Y.size,Xb.size) # ADAO & check shape
+ HtMb = HtMb.reshape(Y.size, Xb.size) # ADAO & check shape
selfA.StoredVariables["JacobianMatrixAtBackground"].store( HtMb )
#
BT = B.getT()
#
# Définition de la fonction-coût
# ------------------------------
+
def CostFunction(v):
- _V = numpy.asarray(v).reshape((-1,1))
- _X = Xb + (B @ _V).reshape((-1,1))
+ _V = numpy.asarray(v).reshape((-1, 1))
+ _X = Xb + (B @ _V).reshape((-1, 1))
if selfA._parameters["StoreInternalVariables"] or \
- selfA._toStore("CurrentState") or \
- selfA._toStore("CurrentOptimum"):
+ selfA._toStore("CurrentState") or \
+ selfA._toStore("CurrentOptimum"):
selfA.StoredVariables["CurrentState"].store( _X )
- _HX = numpy.asarray(Hm( _X )).reshape((-1,1))
+ _HX = numpy.asarray(Hm( _X )).reshape((-1, 1))
_Innovation = Y - _HX
if selfA._toStore("SimulatedObservationAtCurrentState") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( _HX )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
selfA.StoredVariables["CostFunctionJo"].store( Jo )
selfA.StoredVariables["CostFunctionJ" ].store( J )
if selfA._toStore("IndexOfOptimum") or \
- selfA._toStore("CurrentOptimum") or \
- selfA._toStore("CostFunctionJAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
- selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
- selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
+ selfA._toStore("CurrentOptimum") or \
+ selfA._toStore("CostFunctionJAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJbAtCurrentOptimum") or \
+ selfA._toStore("CostFunctionJoAtCurrentOptimum") or \
+ selfA._toStore("SimulatedObservationAtCurrentOptimum"):
+ IndexMin = numpy.argmin( selfA.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps # noqa: E501
if selfA._toStore("IndexOfOptimum"):
selfA.StoredVariables["IndexOfOptimum"].store( IndexMin )
if selfA._toStore("CurrentOptimum"):
- selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["CurrentState"][IndexMin] )
+ selfA.StoredVariables["CurrentOptimum"].store( selfA.StoredVariables["CurrentState"][IndexMin] ) # noqa: E501
if selfA._toStore("SimulatedObservationAtCurrentOptimum"):
- selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] )
+ selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJbAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJoAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] )
+ selfA.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( selfA.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
if selfA._toStore("CostFunctionJAtCurrentOptimum"):
- selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] )
+ selfA.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( selfA.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
return J
- #
+
def GradientOfCostFunction(v):
- _V = numpy.asarray(v).reshape((-1,1))
- _X = Xb + (B @ _V).reshape((-1,1))
- _HX = numpy.asarray(Hm( _X )).reshape((-1,1))
+ _V = numpy.asarray(v).reshape((-1, 1))
+ _X = Xb + (B @ _V).reshape((-1, 1))
+ _HX = numpy.asarray(Hm( _X )).reshape((-1, 1))
GradJb = BT * _V
GradJo = - BT * Ha( (_X, RI * (Y - _HX)) )
GradJ = numpy.ravel( GradJb ) + numpy.ravel( GradJo )
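
# Hedged note on the preconditioning used above (toy values, hypothetical
# names): with the change of variables x = Xb + B v made in CostFunction
# and GradientOfCostFunction, and B symmetric, the background misfit
# 1/2 (x - Xb)^T B^{-1} (x - Xb) reduces to 1/2 v^T B v, so the cost
# evaluation itself needs no inverse of B:
#
#   >>> import numpy
#   >>> B  = numpy.array([[2., 0.5], [0.5, 1.]])
#   >>> v  = numpy.array([0.3, -0.2])
#   >>> dx = B @ v                                # dx = x - Xb
#   >>> a  = 0.5 * dx @ numpy.linalg.inv(B) @ dx
#   >>> b  = 0.5 * v @ B @ v
#   >>> bool(abs(a - b) < 1.e-12)
#   True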
nbPreviousSteps = selfA.StoredVariables["CostFunctionJ"].stepnumber()
#
if selfA._parameters["Minimizer"] == "LBFGSB":
- if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
+ if vt("0.19") <= vt(scipy.version.version) <= vt("1.4.99"):
import daAlgorithms.Atoms.lbfgsb14hlt as optimiseur
elif vt("1.5.0") <= vt(scipy.version.version) <= vt("1.7.99"):
import daAlgorithms.Atoms.lbfgsb17hlt as optimiseur
fprime = GradientOfCostFunction,
args = (),
bounds = RecentredBounds(selfA._parameters["Bounds"], Xb, BI),
- maxfun = selfA._parameters["MaximumNumberOfIterations"]-1,
- factr = selfA._parameters["CostDecrementTolerance"]*1.e14,
+ maxfun = selfA._parameters["MaximumNumberOfIterations"] - 1,
+ factr = selfA._parameters["CostDecrementTolerance"] * 1.e14,
pgtol = selfA._parameters["ProjectedGradientTolerance"],
iprint = selfA._parameters["optiprint"],
- )
+ )
# nfeval = Informations['funcalls']
# rc = Informations['warnflag']
elif selfA._parameters["Minimizer"] == "TNC":
pgtol = selfA._parameters["ProjectedGradientTolerance"],
ftol = selfA._parameters["CostDecrementTolerance"],
messages = selfA._parameters["optmessages"],
- )
+ )
elif selfA._parameters["Minimizer"] == "CG":
Minimum, fopt, nfeval, grad_calls, rc = scipy.optimize.fmin_cg(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "NCG":
Minimum, fopt, nfeval, grad_calls, hcalls, rc = scipy.optimize.fmin_ncg(
f = CostFunction,
avextol = selfA._parameters["CostDecrementTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
elif selfA._parameters["Minimizer"] == "BFGS":
Minimum, fopt, gopt, Hopt, nfeval, grad_calls, rc = scipy.optimize.fmin_bfgs(
f = CostFunction,
gtol = selfA._parameters["GradientNormTolerance"],
disp = selfA._parameters["optdisp"],
full_output = True,
- )
+ )
else:
raise ValueError("Error in minimizer name: %s is unkown"%selfA._parameters["Minimizer"])
#
if selfA._parameters["StoreInternalVariables"] or selfA._toStore("CurrentState"):
Minimum = selfA.StoredVariables["CurrentState"][IndexMin]
else:
- Minimum = Xb + B * Minimum.reshape((-1,1)) # Pas @
+ Minimum = Xb + B * Minimum.reshape((-1, 1)) # Pas de @
#
Xa = Minimum
- if __storeState: selfA._setInternalState("Xn", Xa)
- #--------------------------
+ if __storeState:
+ selfA._setInternalState("Xn", Xa)
+ # --------------------------
#
selfA.StoredVariables["Analysis"].store( Xa )
#
if selfA._toStore("OMA") or \
- selfA._toStore("InnovationAtCurrentAnalysis") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("SimulatedObservationAtOptimum"):
+ selfA._toStore("InnovationAtCurrentAnalysis") or \
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("SimulatedObservationAtOptimum"):
if selfA._toStore("SimulatedObservationAtCurrentState"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin]
elif selfA._toStore("SimulatedObservationAtCurrentOptimum"):
HXa = selfA.StoredVariables["SimulatedObservationAtCurrentOptimum"][-1]
else:
HXa = Hm( Xa )
- oma = Y - HXa.reshape((-1,1))
+ oma = Y - numpy.asarray(HXa).reshape((-1, 1))
#
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("JacobianMatrixAtOptimum") or \
- selfA._toStore("KalmanGainAtOptimum"):
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("JacobianMatrixAtOptimum") or \
+ selfA._toStore("KalmanGainAtOptimum"):
HtM = HO["Tangent"].asMatrix(ValueForMethodForm = Xa)
- HtM = HtM.reshape(Y.size,Xa.size) # ADAO & check shape
+ HtM = HtM.reshape(Y.size, Xa.size) # ADAO & check shape
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles") or \
- selfA._toStore("KalmanGainAtOptimum"):
+ selfA._toStore("SimulationQuantiles") or \
+ selfA._toStore("KalmanGainAtOptimum"):
HaM = HO["Adjoint"].asMatrix(ValueForMethodForm = Xa)
- HaM = HaM.reshape(Xa.size,Y.size) # ADAO & check shape
+ HaM = HaM.reshape(Xa.size, Y.size) # ADAO & check shape
if selfA._toStore("APosterioriCovariance") or \
- selfA._toStore("SimulationQuantiles"):
+ selfA._toStore("SimulationQuantiles"):
BI = B.getI()
A = HessienneEstimation(selfA, Xa.size, HaM, HtM, BI, RI)
if selfA._toStore("APosterioriCovariance"):
if selfA._toStore("JacobianMatrixAtOptimum"):
selfA.StoredVariables["JacobianMatrixAtOptimum"].store( HtM )
if selfA._toStore("KalmanGainAtOptimum"):
- if (Y.size <= Xb.size): KG = B * HaM * (R + numpy.dot(HtM, B * HaM)).I
- elif (Y.size > Xb.size): KG = (BI + numpy.dot(HaM, RI * HtM)).I * HaM * RI
+ if (Y.size <= Xb.size):
+ KG = B * HaM * (R + numpy.dot(HtM, B * HaM)).I
+ elif (Y.size > Xb.size):
+ KG = (BI + numpy.dot(HaM, RI * HtM)).I * HaM * RI
selfA.StoredVariables["KalmanGainAtOptimum"].store( KG )
#
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if selfA._toStore("Innovation") or \
- selfA._toStore("SigmaObs2") or \
- selfA._toStore("MahalanobisConsistency") or \
- selfA._toStore("OMB"):
+ selfA._toStore("SigmaObs2") or \
+ selfA._toStore("MahalanobisConsistency") or \
+ selfA._toStore("OMB"):
Innovation = Y - HXb
if selfA._toStore("Innovation"):
selfA.StoredVariables["Innovation"].store( Innovation )
TraceR = R.trace(Y.size)
selfA.StoredVariables["SigmaObs2"].store( vfloat( (Innovation.T @ oma) ) / TraceR )
if selfA._toStore("MahalanobisConsistency"):
- selfA.StoredVariables["MahalanobisConsistency"].store( float( 2.*MinJ/Innovation.size ) )
+ selfA.StoredVariables["MahalanobisConsistency"].store( float( 2. * MinJ / Innovation.size ) )
if selfA._toStore("SimulationQuantiles"):
QuantilesEstimations(selfA, A, Xa, HXa, Hm, HtM)
if selfA._toStore("SimulatedObservationAtBackground"):
message = "Variant ou formulation de la méthode",
listval = [
"Blue",
- ],
+ ],
listadv = [
"OneCorrection",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "Parameters",
typecast = str,
message = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
"SimulationQuantiles",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "Quantiles",
default = [],
message = "Liste des valeurs de quantiles",
minval = 0.,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfSamplesForQuantiles",
default = 100,
typecast = int,
message = "Nombre d'échantillons simulés pour le calcul des quantiles",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "SimulationForQuantiles",
default = "Linear",
typecast = str,
message = "Type de simulation en estimation des quantiles",
listval = ["Linear", "NonLinear"]
- )
- self.defineRequiredParameter( # Pas de type
+ )
+ self.defineRequiredParameter( # Pas de type
name = "StateBoundsForQuantiles",
message = "Liste des paires de bornes pour les états utilisés en estimation des quantiles",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
optional = ("U", "EM", "CM", "Q"),
- )
- self.setAttributes(tags=(
- "DataAssimilation",
- "Linear",
- "Filter",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "Linear",
+ "Filter",
+ ),
+ features=(
+ "LocalOptimization",
+ "DerivativeNeeded",
+ "ParallelDerivativesOnly",
+ ),
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] == "Blue":
+ # --------------------------
+ if self._parameters["Variant"] == "Blue":
NumericObjects.multiXOsteps(self, Xb, Y, U, HO, EM, CM, R, B, Q, ecwblue.ecwblue)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "OneCorrection":
ecwblue.ecwblue(self, Xb, Y, U, HO, CM, R, B)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
default = True,
typecast = bool,
message = "Calcule et affiche un résumé à chaque évaluation élémentaire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "NumberOfRepetition",
default = 1,
typecast = int,
message = "Nombre de fois où l'exécution de la fonction est répétée",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
listval = [
"CurrentState",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "HO"),
optional = ("U"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- Hm = HO["Direct"].appliedControledFormTo
+ FunctionToTest = HO["Direct"].appliedControledFormTo
#
X0 = copy.copy( Xb )
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.ravel( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.ravel( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.ravel( U[-1] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.ravel( U[0] ).reshape((-1, 1))
else:
- Un = numpy.ravel( U ).reshape((-1,1))
+ Un = numpy.ravel( U ).reshape((-1, 1))
else:
Un = None
#
__p = self._parameters["NumberOfPrintedDigits"]
__r = self._parameters["NumberOfRepetition"]
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the (repetition of the) launch of some\n")
msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( X0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( X0 )
msgs += ("\n")
if Un is None:
msgs += (__marge + "Characteristics of control parameter U, internally converted: None\n")
msgs += (__marge + "Characteristics of control parameter U, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( Un )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Un ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Un )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Un )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Un, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Un, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Un )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( Un )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( Un )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( Un, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( Un, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( Un )
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
#
if self._parameters["SetDebug"]:
CUR_LEVEL = logging.getLogger().getEffectiveLevel()
msgs += (__flech + "Beginning of repeated evaluation, without activating debug\n")
else:
msgs += (__flech + "Beginning of evaluation, without activating debug\n")
- print(msgs) # 1
+ print(msgs) # 1
#
# ----------
HO["Direct"].disableAvoidingRedundancy()
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 )
if __s:
- msgs = (__marge + "%s\n"%("-"*75,)) # 2-1
+ msgs = (__marge + "%s\n"%("-" * 75,)) # 2-1
if __r > 1:
msgs += ("\n")
- msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i+1,__r))
+ msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i + 1, __r))
msgs += ("\n")
msgs += (__flech + "Launching operator sequential evaluation\n")
- print(msgs) # 2-1
+ print(msgs) # 2-1
#
- Yn = Hm( (X0, Un) )
+ Yn = FunctionToTest( (X0, Un) )
#
if __s:
- msgs = ("\n") # 2-2
+ msgs = ("\n") # 2-2
msgs += (__flech + "End of operator sequential evaluation\n")
msgs += ("\n")
msgs += (__flech + "Information after evaluation:\n")
msgs += (__marge + "Characteristics of simulated output vector Y=F((X,U)), to compare to others:\n")
msgs += (__marge + " Type...............: %s\n")%type( Yn )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Yn ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Yn )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Yn )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Yn, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Yn, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Yn )
- print(msgs) # 2-2
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( Yn )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( Yn )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( Yn, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( Yn, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( Yn )
+ print(msgs) # 2-2
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(Yn) )
#
Ys.append( copy.copy( numpy.ravel(
Yn
- ) ) )
+ ) ) )
# ----------
HO["Direct"].enableAvoidingRedundancy()
# ----------
#
- msgs = (__marge + "%s\n\n"%("-"*75,)) # 3
+ msgs = (__marge + "%s\n\n"%("-" * 75,)) # 3
if self._parameters["SetDebug"]:
if __r > 1:
msgs += (__flech + "End of repeated evaluation, deactivating debug if necessary\n")
else:
msgs += (__flech + "End of evaluation, without deactivating debug\n")
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
if __r > 1:
msgs += ("\n")
msgs += (__flech + "Launching statistical summary calculation for %i states\n"%__r)
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
msgs += ("\n")
- msgs += (__flech + "Statistical analysis of the outputs obtained through sequential repeated evaluations\n")
+ msgs += (__flech + "Statistical analysis of the outputs obtained through sequential repeated evaluations\n") # noqa: E501
msgs += ("\n")
- msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr) # noqa: E501
msgs += ("\n")
Yy = numpy.array( Ys )
msgs += (__marge + "Number of evaluations...........................: %i\n")%len( Ys )
msgs += ("\n")
msgs += (__marge + "Characteristics of the whole set of outputs Y:\n")
msgs += (__marge + " Size of each of the outputs...................: %i\n")%Ys[0].size
- msgs += (__marge + " Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
- msgs += (__marge + " Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
- msgs += (__marge + " Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
- msgs += (__marge + " Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += (__marge + " Minimum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.min( Yy ) # noqa: E501
+ msgs += (__marge + " Maximum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.max( Yy ) # noqa: E501
+ msgs += (__marge + " Mean of vector of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.mean( Yy, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.std( Yy, dtype=mfp ) # noqa: E501
msgs += ("\n")
Ym = numpy.mean( numpy.array( Ys ), axis=0, dtype=mfp )
msgs += (__marge + "Characteristics of the vector Ym, mean of the outputs Y:\n")
msgs += (__marge + " Size of the mean of the outputs...............: %i\n")%Ym.size
- msgs += (__marge + " Minimum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.min( Ym )
- msgs += (__marge + " Maximum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.max( Ym )
- msgs += (__marge + " Mean of the mean of the outputs...............: %."+str(__p)+"e\n")%numpy.mean( Ym, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the outputs.....: %."+str(__p)+"e\n")%numpy.std( Ym, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the outputs......: %." + str(__p) + "e\n")%numpy.min( Ym ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the outputs......: %." + str(__p) + "e\n")%numpy.max( Ym ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the outputs...............: %." + str(__p) + "e\n")%numpy.mean( Ym, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the outputs.....: %." + str(__p) + "e\n")%numpy.std( Ym, dtype=mfp ) # noqa: E501
msgs += ("\n")
Ye = numpy.mean( numpy.array( Ys ) - Ym, axis=0, dtype=mfp )
- msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n")
+ msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n") # noqa: E501
msgs += (__marge + " Size of the mean of the differences...........: %i\n")%Ye.size
- msgs += (__marge + " Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Ye )
- msgs += (__marge + " Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Ye )
- msgs += (__marge + " Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Ye, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Ye, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.min( Ye ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.max( Ye ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the differences...........: %." + str(__p) + "e\n")%numpy.mean( Ye, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the differences.: %." + str(__p) + "e\n")%numpy.std( Ye, dtype=mfp ) # noqa: E501
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
msgs += ("\n")
msgs += (__marge + "End of the \"%s\" verification\n\n"%self._name)
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 3
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 3
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
import numpy, logging, scipy.optimize
from daCore import BasicObjects, PlatformInfo
+from daCore.NumericObjects import ApplyBounds, ForceNumericBounds
from daCore.PlatformInfo import vfloat
# ==============================================================================
"POWELL",
"SIMPLEX",
"SUBPLEX",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfIterations",
default = 15000,
message = "Nombre maximal de pas d'optimisation",
minval = -1,
oldname = "MaximumNumberOfSteps",
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfFunctionEvaluations",
default = 15000,
typecast = int,
message = "Nombre maximal d'évaluations de la fonction",
minval = -1,
- )
+ )
self.defineRequiredParameter(
name = "StateVariationTolerance",
default = 1.e-4,
typecast = float,
message = "Variation relative maximale de l'état lors de l'arrêt",
- )
+ )
self.defineRequiredParameter(
name = "CostDecrementTolerance",
default = 1.e-7,
typecast = float,
message = "Diminution relative minimale du cout lors de l'arrêt",
- )
+ )
self.defineRequiredParameter(
name = "QualityCriterion",
default = "AugmentedWeightedLeastSquares",
"LeastSquares", "LS", "L2",
"AbsoluteValue", "L1",
"MaximumError", "ME", "Linf",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des valeurs de bornes",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
- )
- self.setAttributes(tags=(
- "Optimization",
- "NonLinear",
- "MetaHeuristic",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "Optimization",
+ "NonLinear",
+ "MetaHeuristic",
+ ),
+ features=(
+ "NonLocalOptimization",
+ "DerivativeFree",
+ "ParallelFree",
+ ),
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
if not PlatformInfo.has_nlopt and not self._parameters["Minimizer"] in ["COBYLA", "POWELL", "SIMPLEX"]:
- logging.warning("%s Minimization by SIMPLEX is forced because %s is unavailable (COBYLA, POWELL are also available)"%(self._name,self._parameters["Minimizer"]))
+ logging.warning(
+ "%s Minimization by SIMPLEX is forced because %s "%(self._name, self._parameters["Minimizer"]) + \
+ "is unavailable (COBYLA, POWELL are also available)")
self._parameters["Minimizer"] = "SIMPLEX"
#
Hm = HO["Direct"].appliedTo
#
BI = B.getI()
RI = R.getI()
- #
+
def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.ravel( x ).reshape((-1,1))
- _HX = numpy.ravel( Hm( _X ) ).reshape((-1,1))
+ _X = numpy.ravel( x ).reshape((-1, 1))
+ _HX = numpy.ravel( Hm( _X ) ).reshape((-1, 1))
_Innovation = Y - _HX
self.StoredVariables["CurrentState"].store( _X )
if self._toStore("SimulatedObservationAtCurrentState") or \
- self._toStore("SimulatedObservationAtCurrentOptimum"):
+ self._toStore("SimulatedObservationAtCurrentOptimum"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( _HX )
if self._toStore("InnovationAtCurrentState"):
self.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
#
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "DA"]:
if BI is None or RI is None:
raise ValueError("Background and Observation error covariance matrices has to be properly defined!")
Jb = vfloat(0.5 * (_X - Xb).T @ (BI @ (_X - Xb)))
Jo = vfloat(0.5 * _Innovation.T @ (RI @ _Innovation))
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = vfloat(0.5 * _Innovation.T @ (RI @ _Innovation))
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = vfloat(0.5 * _Innovation.T @ _Innovation)
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = vfloat(numpy.sum( numpy.abs(_Innovation) ))
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = vfloat(numpy.max( numpy.abs(_Innovation) ))
#
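# Hedged recap of the selectable quality measures above (toy innovation,
# hypothetical names). "AWLS" and "WLS" additionally weight the misfits by
# the inverse covariances BI and/or RI, exactly as coded in the branches:
#
#   >>> import numpy
#   >>> d = numpy.array([1., -2.])            # toy innovation Y - H(X)
#   >>> float(0.5 * d @ d)                    # "LeastSquares", "LS", "L2"
#   2.5
#   >>> float(numpy.sum(numpy.abs(d)))        # "AbsoluteValue", "L1"
#   3.0
#   >>> float(numpy.max(numpy.abs(d)))        # "MaximumError", "ME", "Linf"
#   2.0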
self.StoredVariables["CostFunctionJo"].store( Jo )
self.StoredVariables["CostFunctionJ" ].store( J )
if self._toStore("IndexOfOptimum") or \
- self._toStore("CurrentOptimum") or \
- self._toStore("CostFunctionJAtCurrentOptimum") or \
- self._toStore("CostFunctionJbAtCurrentOptimum") or \
- self._toStore("CostFunctionJoAtCurrentOptimum") or \
- self._toStore("SimulatedObservationAtCurrentOptimum"):
+ self._toStore("CurrentOptimum") or \
+ self._toStore("CostFunctionJAtCurrentOptimum") or \
+ self._toStore("CostFunctionJbAtCurrentOptimum") or \
+ self._toStore("CostFunctionJoAtCurrentOptimum") or \
+ self._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( self.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if self._toStore("IndexOfOptimum"):
self.StoredVariables["IndexOfOptimum"].store( IndexMin )
if self._toStore("CurrentOptimum"):
- self.StoredVariables["CurrentOptimum"].store( self.StoredVariables["CurrentState"][IndexMin] )
+ self.StoredVariables["CurrentOptimum"].store(
+ self.StoredVariables["CurrentState"][IndexMin] )
if self._toStore("SimulatedObservationAtCurrentOptimum"):
- self.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( self.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] )
+ self.StoredVariables["SimulatedObservationAtCurrentOptimum"].store(
+ self.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin]
+ )
if self._toStore("CostFunctionJAtCurrentOptimum"):
- self.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( self.StoredVariables["CostFunctionJ" ][IndexMin] )
+ self.StoredVariables["CostFunctionJAtCurrentOptimum" ].store(
+ self.StoredVariables["CostFunctionJ" ][IndexMin] )
if self._toStore("CostFunctionJbAtCurrentOptimum"):
- self.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( self.StoredVariables["CostFunctionJb"][IndexMin] )
+ self.StoredVariables["CostFunctionJbAtCurrentOptimum"].store(
+ self.StoredVariables["CostFunctionJb"][IndexMin] )
if self._toStore("CostFunctionJoAtCurrentOptimum"):
- self.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( self.StoredVariables["CostFunctionJo"][IndexMin] )
+ self.StoredVariables["CostFunctionJoAtCurrentOptimum"].store(
+ self.StoredVariables["CostFunctionJo"][IndexMin] )
return J
#
Xini = numpy.ravel(Xb)
if len(Xini) < 2 and self._parameters["Minimizer"] == "NEWUOA":
- raise ValueError("The minimizer %s can not be used when the optimisation state dimension is 1. Please choose another minimizer."%self._parameters["Minimizer"])
+ raise ValueError(
+ "The minimizer %s "%self._parameters["Minimizer"] + \
+ "cannot be used when the optimisation state dimension " + \
+ "is 1. Please choose another minimizer.")
#
# Minimisation de la fonctionnelle
# --------------------------------
func = CostFunction,
x0 = Xini,
args = (self._parameters["QualityCriterion"],),
- maxiter = self._parameters["MaximumNumberOfIterations"]-1,
+ maxiter = self._parameters["MaximumNumberOfIterations"] - 1,
maxfun = self._parameters["MaximumNumberOfFunctionEvaluations"],
xtol = self._parameters["StateVariationTolerance"],
ftol = self._parameters["CostDecrementTolerance"],
full_output = True,
disp = self._parameters["optdisp"],
- )
+ )
elif self._parameters["Minimizer"] == "COBYLA" and not PlatformInfo.has_nlopt:
def make_constraints(bounds):
constraints = []
- for (i,(a,b)) in enumerate(bounds):
- lower = lambda x: x[i] - a
- upper = lambda x: b - x[i]
+ for (i, (a, b)) in enumerate(bounds):
+ lower = lambda x, i=i, a=a: x[i] - a  # noqa: E731
+ upper = lambda x, i=i, b=b: b - x[i]  # noqa: E731
constraints = constraints + [lower] + [upper]
return constraints
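
# Hedged usage sketch for make_constraints (hypothetical values): each
# (lower, upper) pair is turned into two inequality constraints g(x) >= 0,
# the form expected by scipy.optimize.fmin_cobyla. The default arguments
# (i=i, a=a, b=b) above are needed so that each lambda keeps its own bound
# instead of the last loop values:
#
#   >>> cons = make_constraints([(0., 1.), (-1., 2.)])
#   >>> x = [0.5, 0.0]
#   >>> [float(c(x)) for c in cons]           # all >= 0: x is feasible
#   [0.5, 0.5, 1.0, 2.0]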
if self._parameters["Bounds"] is None:
raise ValueError("Bounds have to be given for all axes as a list of lower/upper pairs!")
+ self._parameters["Bounds"] = ForceNumericBounds( self._parameters["Bounds"] )
+ Xini = ApplyBounds( Xini, self._parameters["Bounds"] )
Minimum = scipy.optimize.fmin_cobyla(
func = CostFunction,
x0 = Xini,
cons = make_constraints( self._parameters["Bounds"] ),
args = (self._parameters["QualityCriterion"],),
- consargs = (), # To avoid extra-args
+ consargs = (), # To avoid extra-args
maxfun = self._parameters["MaximumNumberOfFunctionEvaluations"],
rhobeg = 1.0,
rhoend = self._parameters["StateVariationTolerance"],
- catol = 2.*self._parameters["StateVariationTolerance"],
+ catol = 2. * self._parameters["StateVariationTolerance"],
disp = self._parameters["optdisp"],
- )
+ )
elif self._parameters["Minimizer"] == "COBYLA" and PlatformInfo.has_nlopt:
import nlopt
opt = nlopt.opt(nlopt.LN_COBYLA, Xini.size)
+
def _f(_Xx, Grad):
# DFO, so no gradient
return CostFunction(_Xx, self._parameters["QualityCriterion"])
opt.set_min_objective(_f)
+ self._parameters["Bounds"] = ForceNumericBounds( self._parameters["Bounds"] )
+ Xini = ApplyBounds( Xini, self._parameters["Bounds"] )
if self._parameters["Bounds"] is not None:
- lub = numpy.array(self._parameters["Bounds"],dtype=float).reshape((Xini.size,2))
- lb = lub[:,0] ; lb[numpy.isnan(lb)] = -float('inf')
- ub = lub[:,1] ; ub[numpy.isnan(ub)] = +float('inf')
+ lub = numpy.array(self._parameters["Bounds"], dtype=float).reshape((Xini.size, 2))
+ lb = lub[:, 0]; lb[numpy.isnan(lb)] = -float('inf') # noqa: E702
+ ub = lub[:, 1]; ub[numpy.isnan(ub)] = +float('inf') # noqa: E702
if self._parameters["optdisp"]:
- print("%s: upper bounds %s"%(opt.get_algorithm_name(),ub))
- print("%s: lower bounds %s"%(opt.get_algorithm_name(),lb))
+ print("%s: upper bounds %s"%(opt.get_algorithm_name(), ub))
+ print("%s: lower bounds %s"%(opt.get_algorithm_name(), lb))
opt.set_upper_bounds(ub)
opt.set_lower_bounds(lb)
opt.set_ftol_rel(self._parameters["CostDecrementTolerance"])
- opt.set_xtol_rel(2.*self._parameters["StateVariationTolerance"])
+ opt.set_xtol_rel(2. * self._parameters["StateVariationTolerance"])
opt.set_maxeval(self._parameters["MaximumNumberOfFunctionEvaluations"])
Minimum = opt.optimize( Xini )
if self._parameters["optdisp"]:
- print("%s: optimal state: %s"%(opt.get_algorithm_name(),Minimum))
- print("%s: minimum of J: %s"%(opt.get_algorithm_name(),opt.last_optimum_value()))
- print("%s: return code: %i"%(opt.get_algorithm_name(),opt.last_optimize_result()))
+ print("%s: optimal state: %s"%(opt.get_algorithm_name(), Minimum))
+ print("%s: minimum of J: %s"%(opt.get_algorithm_name(), opt.last_optimum_value()))
+ print("%s: return code: %i"%(opt.get_algorithm_name(), opt.last_optimize_result()))
elif self._parameters["Minimizer"] == "SIMPLEX" and not PlatformInfo.has_nlopt:
Minimum, J_optimal, niter, nfeval, rc = scipy.optimize.fmin(
func = CostFunction,
x0 = Xini,
args = (self._parameters["QualityCriterion"],),
- maxiter = self._parameters["MaximumNumberOfIterations"]-1,
+ maxiter = self._parameters["MaximumNumberOfIterations"] - 1,
maxfun = self._parameters["MaximumNumberOfFunctionEvaluations"],
xtol = self._parameters["StateVariationTolerance"],
ftol = self._parameters["CostDecrementTolerance"],
full_output = True,
disp = self._parameters["optdisp"],
- )
+ )
elif self._parameters["Minimizer"] == "SIMPLEX" and PlatformInfo.has_nlopt:
import nlopt
opt = nlopt.opt(nlopt.LN_NELDERMEAD, Xini.size)
+
def _f(_Xx, Grad):
# DFO, so no gradient
return CostFunction(_Xx, self._parameters["QualityCriterion"])
opt.set_min_objective(_f)
+ self._parameters["Bounds"] = ForceNumericBounds( self._parameters["Bounds"] )
+ Xini = ApplyBounds( Xini, self._parameters["Bounds"] )
if self._parameters["Bounds"] is not None:
- lub = numpy.array(self._parameters["Bounds"],dtype=float).reshape((Xini.size,2))
- lb = lub[:,0] ; lb[numpy.isnan(lb)] = -float('inf')
- ub = lub[:,1] ; ub[numpy.isnan(ub)] = +float('inf')
+ lub = numpy.array(self._parameters["Bounds"], dtype=float).reshape((Xini.size, 2))
+ lb = lub[:, 0]; lb[numpy.isnan(lb)] = -float('inf') # noqa: E702
+ ub = lub[:, 1]; ub[numpy.isnan(ub)] = +float('inf') # noqa: E702
if self._parameters["optdisp"]:
- print("%s: upper bounds %s"%(opt.get_algorithm_name(),ub))
- print("%s: lower bounds %s"%(opt.get_algorithm_name(),lb))
+ print("%s: upper bounds %s"%(opt.get_algorithm_name(), ub))
+ print("%s: lower bounds %s"%(opt.get_algorithm_name(), lb))
opt.set_upper_bounds(ub)
opt.set_lower_bounds(lb)
opt.set_ftol_rel(self._parameters["CostDecrementTolerance"])
- opt.set_xtol_rel(2.*self._parameters["StateVariationTolerance"])
+ opt.set_xtol_rel(2. * self._parameters["StateVariationTolerance"])
opt.set_maxeval(self._parameters["MaximumNumberOfFunctionEvaluations"])
Minimum = opt.optimize( Xini )
if self._parameters["optdisp"]:
- print("%s: optimal state: %s"%(opt.get_algorithm_name(),Minimum))
- print("%s: minimum of J: %s"%(opt.get_algorithm_name(),opt.last_optimum_value()))
- print("%s: return code: %i"%(opt.get_algorithm_name(),opt.last_optimize_result()))
+ print("%s: optimal state: %s"%(opt.get_algorithm_name(), Minimum))
+ print("%s: minimum of J: %s"%(opt.get_algorithm_name(), opt.last_optimum_value()))
+ print("%s: return code: %i"%(opt.get_algorithm_name(), opt.last_optimize_result()))
elif self._parameters["Minimizer"] == "BOBYQA" and PlatformInfo.has_nlopt:
import nlopt
opt = nlopt.opt(nlopt.LN_BOBYQA, Xini.size)
+
def _f(_Xx, Grad):
# DFO, so no gradient
return CostFunction(_Xx, self._parameters["QualityCriterion"])
opt.set_min_objective(_f)
+ self._parameters["Bounds"] = ForceNumericBounds( self._parameters["Bounds"] )
+ Xini = ApplyBounds( Xini, self._parameters["Bounds"] )
if self._parameters["Bounds"] is not None:
- lub = numpy.array(self._parameters["Bounds"],dtype=float).reshape((Xini.size,2))
- lb = lub[:,0] ; lb[numpy.isnan(lb)] = -float('inf')
- ub = lub[:,1] ; ub[numpy.isnan(ub)] = +float('inf')
+ lub = numpy.array(self._parameters["Bounds"], dtype=float).reshape((Xini.size, 2))
+ lb = lub[:, 0]; lb[numpy.isnan(lb)] = -float('inf') # noqa: E702
+ ub = lub[:, 1]; ub[numpy.isnan(ub)] = +float('inf') # noqa: E702
if self._parameters["optdisp"]:
- print("%s: upper bounds %s"%(opt.get_algorithm_name(),ub))
- print("%s: lower bounds %s"%(opt.get_algorithm_name(),lb))
+ print("%s: upper bounds %s"%(opt.get_algorithm_name(), ub))
+ print("%s: lower bounds %s"%(opt.get_algorithm_name(), lb))
opt.set_upper_bounds(ub)
opt.set_lower_bounds(lb)
opt.set_ftol_rel(self._parameters["CostDecrementTolerance"])
- opt.set_xtol_rel(2.*self._parameters["StateVariationTolerance"])
+ opt.set_xtol_rel(2. * self._parameters["StateVariationTolerance"])
opt.set_maxeval(self._parameters["MaximumNumberOfFunctionEvaluations"])
Minimum = opt.optimize( Xini )
if self._parameters["optdisp"]:
- print("%s: optimal state: %s"%(opt.get_algorithm_name(),Minimum))
- print("%s: minimum of J: %s"%(opt.get_algorithm_name(),opt.last_optimum_value()))
- print("%s: return code: %i"%(opt.get_algorithm_name(),opt.last_optimize_result()))
+ print("%s: optimal state: %s"%(opt.get_algorithm_name(), Minimum))
+ print("%s: minimum of J: %s"%(opt.get_algorithm_name(), opt.last_optimum_value()))
+ print("%s: return code: %i"%(opt.get_algorithm_name(), opt.last_optimize_result()))
elif self._parameters["Minimizer"] == "NEWUOA" and PlatformInfo.has_nlopt:
import nlopt
opt = nlopt.opt(nlopt.LN_NEWUOA, Xini.size)
+
def _f(_Xx, Grad):
# DFO, so no gradient
return CostFunction(_Xx, self._parameters["QualityCriterion"])
opt.set_min_objective(_f)
+ self._parameters["Bounds"] = ForceNumericBounds( self._parameters["Bounds"] )
+ Xini = ApplyBounds( Xini, self._parameters["Bounds"] )
if self._parameters["Bounds"] is not None:
- lub = numpy.array(self._parameters["Bounds"],dtype=float).reshape((Xini.size,2))
- lb = lub[:,0] ; lb[numpy.isnan(lb)] = -float('inf')
- ub = lub[:,1] ; ub[numpy.isnan(ub)] = +float('inf')
+ lub = numpy.array(self._parameters["Bounds"], dtype=float).reshape((Xini.size, 2))
+ lb = lub[:, 0]; lb[numpy.isnan(lb)] = -float('inf') # noqa: E702
+ ub = lub[:, 1]; ub[numpy.isnan(ub)] = +float('inf') # noqa: E702
if self._parameters["optdisp"]:
- print("%s: upper bounds %s"%(opt.get_algorithm_name(),ub))
- print("%s: lower bounds %s"%(opt.get_algorithm_name(),lb))
+ print("%s: upper bounds %s"%(opt.get_algorithm_name(), ub))
+ print("%s: lower bounds %s"%(opt.get_algorithm_name(), lb))
opt.set_upper_bounds(ub)
opt.set_lower_bounds(lb)
opt.set_ftol_rel(self._parameters["CostDecrementTolerance"])
- opt.set_xtol_rel(2.*self._parameters["StateVariationTolerance"])
+ opt.set_xtol_rel(2. * self._parameters["StateVariationTolerance"])
opt.set_maxeval(self._parameters["MaximumNumberOfFunctionEvaluations"])
Minimum = opt.optimize( Xini )
if self._parameters["optdisp"]:
- print("%s: optimal state: %s"%(opt.get_algorithm_name(),Minimum))
- print("%s: minimum of J: %s"%(opt.get_algorithm_name(),opt.last_optimum_value()))
- print("%s: return code: %i"%(opt.get_algorithm_name(),opt.last_optimize_result()))
+ print("%s: optimal state: %s"%(opt.get_algorithm_name(), Minimum))
+ print("%s: minimum of J: %s"%(opt.get_algorithm_name(), opt.last_optimum_value()))
+ print("%s: return code: %i"%(opt.get_algorithm_name(), opt.last_optimize_result()))
elif self._parameters["Minimizer"] == "SUBPLEX" and PlatformInfo.has_nlopt:
import nlopt
opt = nlopt.opt(nlopt.LN_SBPLX, Xini.size)
+
def _f(_Xx, Grad):
# DFO, so no gradient
return CostFunction(_Xx, self._parameters["QualityCriterion"])
opt.set_min_objective(_f)
+ self._parameters["Bounds"] = ForceNumericBounds( self._parameters["Bounds"] )
+ Xini = ApplyBounds( Xini, self._parameters["Bounds"] )
if self._parameters["Bounds"] is not None:
- lub = numpy.array(self._parameters["Bounds"],dtype=float).reshape((Xini.size,2))
- lb = lub[:,0] ; lb[numpy.isnan(lb)] = -float('inf')
- ub = lub[:,1] ; ub[numpy.isnan(ub)] = +float('inf')
+ lub = numpy.array(self._parameters["Bounds"], dtype=float).reshape((Xini.size, 2))
+ lb = lub[:, 0]; lb[numpy.isnan(lb)] = -float('inf') # noqa: E702
+ ub = lub[:, 1]; ub[numpy.isnan(ub)] = +float('inf') # noqa: E702
if self._parameters["optdisp"]:
- print("%s: upper bounds %s"%(opt.get_algorithm_name(),ub))
- print("%s: lower bounds %s"%(opt.get_algorithm_name(),lb))
+ print("%s: upper bounds %s"%(opt.get_algorithm_name(), ub))
+ print("%s: lower bounds %s"%(opt.get_algorithm_name(), lb))
opt.set_upper_bounds(ub)
opt.set_lower_bounds(lb)
opt.set_ftol_rel(self._parameters["CostDecrementTolerance"])
- opt.set_xtol_rel(2.*self._parameters["StateVariationTolerance"])
+ opt.set_xtol_rel(2. * self._parameters["StateVariationTolerance"])
opt.set_maxeval(self._parameters["MaximumNumberOfFunctionEvaluations"])
Minimum = opt.optimize( Xini )
if self._parameters["optdisp"]:
- print("%s: optimal state: %s"%(opt.get_algorithm_name(),Minimum))
- print("%s: minimum of J: %s"%(opt.get_algorithm_name(),opt.last_optimum_value()))
- print("%s: return code: %i"%(opt.get_algorithm_name(),opt.last_optimize_result()))
+ print("%s: optimal state: %s"%(opt.get_algorithm_name(), Minimum))
+ print("%s: minimum of J: %s"%(opt.get_algorithm_name(), opt.last_optimum_value()))
+ print("%s: return code: %i"%(opt.get_algorithm_name(), opt.last_optimize_result()))
else:
raise ValueError("Error in minimizer name: %s is unkown"%self._parameters["Minimizer"])
#
IndexMin = numpy.argmin( self.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
- MinJ = self.StoredVariables["CostFunctionJ"][IndexMin]
Minimum = self.StoredVariables["CurrentState"][IndexMin]
#
# Obtention de l'analyse
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if self._toStore("OMA") or \
- self._toStore("SimulatedObservationAtOptimum"):
+ self._toStore("SimulatedObservationAtOptimum"):
if self._toStore("SimulatedObservationAtCurrentState"):
HXa = self.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin]
elif self._toStore("SimulatedObservationAtCurrentOptimum"):
HXa = self.StoredVariables["SimulatedObservationAtCurrentOptimum"][-1]
else:
HXa = Hm(Xa)
- HXa = HXa.reshape((-1,1))
+ HXa = HXa.reshape((-1, 1))
if self._toStore("Innovation") or \
- self._toStore("OMB") or \
- self._toStore("SimulatedObservationAtBackground"):
- HXb = Hm(Xb).reshape((-1,1))
+ self._toStore("OMB") or \
+ self._toStore("SimulatedObservationAtBackground"):
+ HXb = Hm(Xb).reshape((-1, 1))
Innovation = Y - HXb
if self._toStore("Innovation"):
self.StoredVariables["Innovation"].store( Innovation )
if self._toStore("SimulatedObservationAtOptimum"):
self.StoredVariables["SimulatedObservationAtOptimum"].store( HXa )
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
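
# Illustrative sketch: the COBYLA constraint factory above binds each loop
# variable as a lambda default ("i=i, a=a") because Python closures capture
# variables late; without the defaults, every constraint would test only the
# last axis after the loop ends. The toy bounds below are assumptions.
bounds = [(0., 1.), (2., 3.)]
constraints = []
for i, (a, b) in enumerate(bounds):
    constraints.append(lambda x, i=i, a=a: x[i] - a)  # enforces x[i] >= a
    constraints.append(lambda x, i=i, b=b: b - x[i])  # enforces x[i] <= b
assert all(g([0.5, 2.5]) >= 0 for g in constraints)   # each axis keeps its own bounds
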
# ==============================================================================
"RAND2EXP",
"RANDTOBEST1BIN",
"RANDTOBEST1EXP",
- ],
+ ],
listadv = [
"CURRENTTOBEST1EXP",
"CURRENTTOBEST1BIN",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfIterations",
default = 15000,
message = "Nombre maximal de générations",
minval = 0,
oldname = "MaximumNumberOfSteps",
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfFunctionEvaluations",
default = 15000,
typecast = int,
message = "Nombre maximal d'évaluations de la fonction",
minval = -1,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "PopulationSize",
default = 100,
typecast = int,
message = "Taille approximative de la population à chaque génération",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "MutationDifferentialWeight_F",
default = (0.5, 1),
message = "Poids différentiel de mutation, constant ou aléatoire dans l'intervalle, noté F",
minval = 0.,
maxval = 2.,
- )
+ )
self.defineRequiredParameter(
name = "CrossOverProbability_CR",
default = 0.7,
message = "Probabilité de recombinaison ou de croisement, notée CR",
minval = 0.,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "QualityCriterion",
default = "AugmentedWeightedLeastSquares",
"LeastSquares", "LS", "L2",
"AbsoluteValue", "L1",
"MaximumError", "ME", "Linf",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des valeurs de bornes",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
- )
- self.setAttributes(tags=(
- "Optimization",
- "NonLinear",
- "MetaHeuristic",
- "Population",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "Optimization",
+ "NonLinear",
+ "MetaHeuristic",
+ "Population",
+ ),
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
len_X = numpy.asarray(Xb).size
- popsize = round(self._parameters["PopulationSize"]/len_X)
- maxiter = min(self._parameters["MaximumNumberOfIterations"],round(self._parameters["MaximumNumberOfFunctionEvaluations"]/(popsize*len_X) - 1))
- logging.debug("%s Nombre maximal de générations = %i, taille de la population à chaque génération = %i"%(self._name, maxiter, popsize*len_X))
+ popsize = round(self._parameters["PopulationSize"] / len_X)
+ maxiter = min(self._parameters["MaximumNumberOfIterations"], round(self._parameters["MaximumNumberOfFunctionEvaluations"] / (popsize * len_X) - 1)) # noqa: E501
+ logging.debug("%s Nombre maximal de générations = %i, taille de la population à chaque génération = %i"%(self._name, maxiter, popsize * len_X)) # noqa: E501
#
Hm = HO["Direct"].appliedTo
#
BI = B.getI()
RI = R.getI()
- #
+
def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.ravel( x ).reshape((-1,1))
- _HX = numpy.ravel( Hm( _X ) ).reshape((-1,1))
+ _X = numpy.ravel( x ).reshape((-1, 1))
+ _HX = numpy.ravel( Hm( _X ) ).reshape((-1, 1))
_Innovation = Y - _HX
self.StoredVariables["CurrentState"].store( _X )
if self._toStore("SimulatedObservationAtCurrentState") or \
- self._toStore("SimulatedObservationAtCurrentOptimum"):
+ self._toStore("SimulatedObservationAtCurrentOptimum"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( _HX )
if self._toStore("InnovationAtCurrentState"):
self.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
#
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "DA"]:
if BI is None or RI is None:
raise ValueError("Background and Observation error covariance matrices has to be properly defined!")
Jb = vfloat(0.5 * (_X - Xb).T @ (BI @ (_X - Xb)))
Jo = vfloat(0.5 * _Innovation.T @ (RI @ _Innovation))
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = vfloat(0.5 * _Innovation.T @ (RI @ _Innovation))
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = vfloat(0.5 * _Innovation.T @ _Innovation)
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = vfloat(numpy.sum( numpy.abs(_Innovation) ))
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = vfloat(numpy.max( numpy.abs(_Innovation) ))
#
self.StoredVariables["CostFunctionJo"].store( Jo )
self.StoredVariables["CostFunctionJ" ].store( J )
if self._toStore("IndexOfOptimum") or \
- self._toStore("CurrentOptimum") or \
- self._toStore("CostFunctionJAtCurrentOptimum") or \
- self._toStore("CostFunctionJbAtCurrentOptimum") or \
- self._toStore("CostFunctionJoAtCurrentOptimum") or \
- self._toStore("SimulatedObservationAtCurrentOptimum"):
+ self._toStore("CurrentOptimum") or \
+ self._toStore("CostFunctionJAtCurrentOptimum") or \
+ self._toStore("CostFunctionJbAtCurrentOptimum") or \
+ self._toStore("CostFunctionJoAtCurrentOptimum") or \
+ self._toStore("SimulatedObservationAtCurrentOptimum"):
IndexMin = numpy.argmin( self.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
if self._toStore("IndexOfOptimum"):
self.StoredVariables["IndexOfOptimum"].store( IndexMin )
if self._toStore("CurrentOptimum"):
self.StoredVariables["CurrentOptimum"].store( self.StoredVariables["CurrentState"][IndexMin] )
if self._toStore("SimulatedObservationAtCurrentOptimum"):
- self.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( self.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] )
+ self.StoredVariables["SimulatedObservationAtCurrentOptimum"].store( self.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin] ) # noqa: E501
if self._toStore("CostFunctionJAtCurrentOptimum"):
- self.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( self.StoredVariables["CostFunctionJ" ][IndexMin] )
+ self.StoredVariables["CostFunctionJAtCurrentOptimum" ].store( self.StoredVariables["CostFunctionJ" ][IndexMin] ) # noqa: E501
if self._toStore("CostFunctionJbAtCurrentOptimum"):
- self.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( self.StoredVariables["CostFunctionJb"][IndexMin] )
+ self.StoredVariables["CostFunctionJbAtCurrentOptimum"].store( self.StoredVariables["CostFunctionJb"][IndexMin] ) # noqa: E501
if self._toStore("CostFunctionJoAtCurrentOptimum"):
- self.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( self.StoredVariables["CostFunctionJo"][IndexMin] )
+ self.StoredVariables["CostFunctionJoAtCurrentOptimum"].store( self.StoredVariables["CostFunctionJo"][IndexMin] ) # noqa: E501
return J
#
Xini = numpy.ravel(Xb)
# --------------------------------
nbPreviousSteps = self.StoredVariables["CostFunctionJ"].stepnumber()
#
- optResults = scipy.optimize.differential_evolution(
+ scipy.optimize.differential_evolution(
CostFunction,
self._parameters["Bounds"],
strategy = str(self._parameters["Minimizer"]).lower(),
mutation = self._parameters["MutationDifferentialWeight_F"],
recombination = self._parameters["CrossOverProbability_CR"],
disp = self._parameters["optdisp"],
- )
+ x0 = Xini,
+ )
#
IndexMin = numpy.argmin( self.StoredVariables["CostFunctionJ"][nbPreviousSteps:] ) + nbPreviousSteps
- MinJ = self.StoredVariables["CostFunctionJ"][IndexMin]
Minimum = self.StoredVariables["CurrentState"][IndexMin]
#
# Obtention de l'analyse
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if self._toStore("OMA") or \
- self._toStore("SimulatedObservationAtOptimum"):
+ self._toStore("SimulatedObservationAtOptimum"):
if self._toStore("SimulatedObservationAtCurrentState"):
HXa = self.StoredVariables["SimulatedObservationAtCurrentState"][IndexMin]
elif self._toStore("SimulatedObservationAtCurrentOptimum"):
HXa = self.StoredVariables["SimulatedObservationAtCurrentOptimum"][-1]
else:
HXa = Hm(Xa)
- HXa = HXa.reshape((-1,1))
+ HXa = HXa.reshape((-1, 1))
if self._toStore("Innovation") or \
- self._toStore("OMB") or \
- self._toStore("SimulatedObservationAtBackground"):
- HXb = Hm(Xb).reshape((-1,1))
+ self._toStore("OMB") or \
+ self._toStore("SimulatedObservationAtBackground"):
+ HXb = Hm(Xb).reshape((-1, 1))
Innovation = Y - HXb
if self._toStore("Innovation"):
self.StoredVariables["Innovation"].store( Innovation )
if self._toStore("SimulatedObservationAtOptimum"):
self.StoredVariables["SimulatedObservationAtOptimum"].store( HXa )
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
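
# Illustrative sketch of the "AugmentedWeightedLeastSquares" criterion used by
# CostFunction above: J(x) = Jb + Jo, a background misfit weighted by B^-1 plus
# an observation misfit weighted by R^-1. The toy H, Xb, Y, B, R are assumptions.
import numpy

def awls_cost(x, Xb, Y, BI, RI, H):
    _X = numpy.ravel(x).reshape((-1, 1))
    _Innovation = Y - H(_X)
    Jb = float(0.5 * (_X - Xb).T @ (BI @ (_X - Xb)))      # background term
    Jo = float(0.5 * _Innovation.T @ (RI @ _Innovation))  # observation term
    return Jb + Jo

Xb = numpy.array([[1.], [2.]])
Y  = numpy.array([[3.]])
BI = numpy.linalg.inv(numpy.eye(2))           # B = I (toy)
RI = numpy.linalg.inv(numpy.array([[0.5]]))   # R = 0.5 (toy)
H  = lambda x: x.sum(axis=0, keepdims=True)   # linear toy observation operator
print(awls_cost([1., 2.], Xb, Y, BI, RI, H))  # 0.0 at the background state
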
# ==============================================================================
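
# Illustrative sketch of the scipy.optimize.differential_evolution call pattern
# used above: strategy name lowercased, F given as a dither interval, CR as a
# scalar, and x0 as the initial state (x0 requires SciPy >= 1.7). The quadratic
# objective is an assumption.
import numpy, scipy.optimize

def J(x):
    return float(numpy.sum((numpy.asarray(x) - 0.3)**2))

res = scipy.optimize.differential_evolution(
    J,
    bounds = [(-1., 1.), (-1., 1.)],
    strategy = "best1bin",        # e.g. Minimizer "BEST1BIN" lowercased
    mutation = (0.5, 1),          # MutationDifferentialWeight_F
    recombination = 0.7,          # CrossOverProbability_CR
    x0 = numpy.zeros(2),          # Xini, as in the call above
    disp = False,
)
print(res.x)  # close to [0.3, 0.3]
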
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtBackground",
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
- )
- self.setAttributes(tags=(
- "DataAssimilation",
- "NonLinear",
- "Filter",
- "Ensemble",
- "Reduction",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "NonLinear",
+ "Filter",
+ "Ensemble",
+ "Reduction",
+ ),
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
# de la diagonale de R
# --------------------------------------------------------------------
DiagonaleR = R.diag(Y.size)
- EnsembleY = numpy.zeros([Y.size,nb_ens])
+ EnsembleY = numpy.zeros([Y.size, nb_ens])
for npar in range(DiagonaleR.size):
- bruit = numpy.random.normal(0,DiagonaleR[npar],nb_ens)
- EnsembleY[npar,:] = Y[npar] + bruit
+ bruit = numpy.random.normal(0, DiagonaleR[npar], nb_ens)
+ EnsembleY[npar, :] = Y[npar] + bruit
#
# Initialisation des opérateurs d'observation et de la matrice gain
# -----------------------------------------------------------------
Xbm = Xb.mean()
Hm = HO["Tangent"].asMatrix(Xbm)
- Hm = Hm.reshape(Y.size,Xbm.size) # ADAO & check shape
+ Hm = Hm.reshape(Y.size, Xbm.size) # ADAO & check shape
Ha = HO["Adjoint"].asMatrix(Xbm)
- Ha = Ha.reshape(Xbm.size,Y.size) # ADAO & check shape
+ Ha = Ha.reshape(Xbm.size, Y.size) # ADAO & check shape
#
# Calcul de la matrice de gain dans l'espace le plus petit et de l'analyse
# ------------------------------------------------------------------------
HXb = Hm @ Xb[iens]
if self._toStore("SimulatedObservationAtBackground"):
self.StoredVariables["SimulatedObservationAtBackground"].store( HXb )
- Innovation = numpy.ravel(EnsembleY[:,iens]) - numpy.ravel(HXb)
+ Innovation = numpy.ravel(EnsembleY[:, iens]) - numpy.ravel(HXb)
if self._toStore("Innovation"):
self.StoredVariables["Innovation"].store( Innovation )
Xa = Xb[iens] + K @ Innovation
if self._toStore("SimulatedObservationAtOptimum"):
self.StoredVariables["SimulatedObservationAtOptimum"].store( Hm @ numpy.ravel(Xa) )
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
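
# Illustrative sketch of the EnsembleY construction above: each observation
# component is perturbed member-by-member with Gaussian noise drawn from the
# diagonal of R. Note that numpy.random.normal takes a standard deviation as
# scale, so DiagonaleR is assumed here to hold standard deviations; Y and the
# toy values are assumptions.
import numpy
Y = numpy.array([1., 2., 3.])
DiagonaleR = numpy.array([0.1, 0.2, 0.3])
nb_ens = 5
EnsembleY = numpy.zeros((Y.size, nb_ens))
for npar in range(DiagonaleR.size):
    bruit = numpy.random.normal(0, DiagonaleR[npar], nb_ens)
    EnsembleY[npar, :] = Y[npar] + bruit
print(EnsembleY.shape)  # (3, 5): one perturbed copy of Y per ensemble member
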
"IEnKF",
"E3DVAR",
"EnKS",
- ],
+ ],
listadv = [
"StochasticEnKF",
"EnKF-05",
"E3DVAR-EnKF",
"E3DVAR-ETKF",
"E3DVAR-MLEF",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "NumberOfMembers",
default = 100,
typecast = int,
message = "Nombre de membres dans l'ensemble",
minval = 2,
- )
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "State",
typecast = str,
-            message  = "Estimation d'etat ou de parametres",
+            message  = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "InflationType",
default = "MultiplicativeOnAnalysisAnomalies",
listval = [
"MultiplicativeOnAnalysisAnomalies",
"MultiplicativeOnBackgroundAnomalies",
- ],
+ ],
listadv = [
"MultiplicativeOnAnalysisCovariance",
"MultiplicativeOnBackgroundCovariance",
"AdditiveOnBackgroundCovariance",
"HybridOnBackgroundCovariance",
"Relaxation",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "InflationFactor",
default = 1.,
typecast = float,
message = "Facteur d'inflation",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "SmootherLagL",
default = 0,
typecast = int,
message = "Nombre d'intervalles de temps de lissage dans le passé",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "HybridCovarianceEquilibrium",
default = 0.5,
typecast = float,
- message = "Facteur d'équilibre entre la covariance statique et la covariance d'ensemble en hybride variationnel",
+ message = "Facteur d'équilibre entre la covariance statique et la covariance d'ensemble en hybride variationnel", # noqa: E501
minval = 0.,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "HybridMaximumNumberOfIterations",
default = 15000,
typecast = int,
message = "Nombre maximal de pas d'optimisation en hybride variationnel",
minval = -1,
- )
+ )
self.defineRequiredParameter(
name = "HybridCostDecrementTolerance",
default = 1.e-7,
typecast = float,
message = "Diminution relative minimale du coût lors de l'arrêt en hybride variationnel",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentAnalysis",
"SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState",
- ],
+ ],
listadv = [
"CurrentEnsembleState",
- ],
- )
+ ],
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
optional = ("U", "EM", "CM", "Q"),
- )
- self.setAttributes(tags=(
- "DataAssimilation",
- "NonLinear",
- "Filter",
- "Ensemble",
- "Dynamic",
- "Reduction",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "NonLinear",
+ "Filter",
+ "Ensemble",
+ "Dynamic",
+ "Reduction",
+ ),
+ features=(
+ "LocalOptimization",
+ "ParallelAlgorithm",
+ ),
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
+ # --------------------------
# Default EnKF = EnKF-16 = StochasticEnKF
- if self._parameters["Variant"] == "EnKF-05":
+ if self._parameters["Variant"] == "EnKF-05":
senkf.senkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="KalmanFilterFormula05")
#
elif self._parameters["Variant"] in ["EnKF-16", "StochasticEnKF", "EnKF"]:
senkf.senkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="KalmanFilterFormula16")
#
- #--------------------------
+ # --------------------------
# Default ETKF = ETKF-KFF
elif self._parameters["Variant"] in ["ETKF-KFF", "ETKF"]:
etkf.etkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="KalmanFilterFormula")
elif self._parameters["Variant"] == "ETKF-VAR":
etkf.etkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="Variational")
#
- #--------------------------
+ # --------------------------
# Default ETKF-N = ETKF-N-16
elif self._parameters["Variant"] == "ETKF-N-11":
etkf.etkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="FiniteSize11")
elif self._parameters["Variant"] in ["ETKF-N-16", "ETKF-N"]:
etkf.etkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="FiniteSize16")
#
- #--------------------------
+ # --------------------------
# Default MLEF = MLEF-T
elif self._parameters["Variant"] in ["MLEF-T", "MLEF"]:
mlef.mlef(self, Xb, Y, U, HO, EM, CM, R, B, Q, BnotT=False)
elif self._parameters["Variant"] == "MLEF-B":
mlef.mlef(self, Xb, Y, U, HO, EM, CM, R, B, Q, BnotT=True)
#
- #--------------------------
+ # --------------------------
# Default IEnKF = IEnKF-T
elif self._parameters["Variant"] in ["IEnKF-T", "IEnKF"]:
ienkf.ienkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, BnotT=False)
elif self._parameters["Variant"] in ["IEnKF-B", "IEKF"]:
ienkf.ienkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, BnotT=True)
#
- #--------------------------
+ # --------------------------
# Default EnKS = EnKS-KFF
elif self._parameters["Variant"] in ["EnKS-KFF", "EnKS"]:
enks.enks(self, Xb, Y, U, HO, EM, CM, R, B, Q, VariantM="EnKS16-KalmanFilterFormula")
#
- #--------------------------
+ # --------------------------
# Default E3DVAR = E3DVAR-ETKF
elif self._parameters["Variant"] == "E3DVAR-EnKF":
senkf.senkf(self, Xb, Y, U, HO, EM, CM, R, B, Q, Hybrid="E3DVAR")
elif self._parameters["Variant"] == "E3DVAR-MLEF":
mlef.mlef(self, Xb, Y, U, HO, EM, CM, R, B, Q, Hybrid="E3DVAR")
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
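
# Illustrative sketch of the "MultiplicativeOnAnalysisAnomalies" inflation
# selected through InflationType/InflationFactor above: anomalies around the
# ensemble mean are rescaled while the mean itself is preserved. The toy
# ensemble is an assumption, not the senkf/etkf internals.
import numpy
Xens = numpy.random.randn(4, 10)           # 4 state components, 10 members
factor = 1.05                              # InflationFactor
Xmean = Xens.mean(axis=1, keepdims=True)
Xinfl = Xmean + factor * (Xens - Xmean)    # inflate the anomalies only
assert numpy.allclose(Xinfl.mean(axis=1, keepdims=True), Xmean)
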
default = [],
typecast = tuple,
message = "Points de calcul définis par une liste de n-uplet",
- )
+ )
self.defineRequiredParameter(
name = "SampleAsExplicitHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages explicites de chaque variable comme une liste",
- )
+ message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages explicites de chaque variable comme une liste", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxStepHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages implicites de chaque variable par un triplet [min,max,step]",
- )
+ message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages implicites de chaque variable par un triplet [min,max,step]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxLatinHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube Latin dont on donne les bornes de chaque variable par une paire [min,max], suivi du nombre de points demandés",
- )
+ message = "Points de calcul définis par un hyper-cube Latin dont on donne les bornes de chaque variable par une paire [min,max], suivi du nombre de points demandés", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxSobolSequence",
default = [],
typecast = tuple,
- message = "Points de calcul définis par une séquence de Sobol dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre minimal de points demandés]",
- )
+ message = "Points de calcul définis par une séquence de Sobol dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre minimal de points demandés]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsIndependantRandomVariables",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]",
- )
+ message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = ["EnsembleOfSimulations",],
listval = [
"EnsembleOfSimulations",
"EnsembleOfStates",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "HO"),
optional = (),
+ )
+ self.setAttributes(
+ tags=(
+ "Reduction",
+ "Checking",
)
- self.setAttributes(tags=(
- "Reduction",
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
+ # --------------------------
eosg.eosg(self, Xb, HO)
- #--------------------------
+ # --------------------------
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
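
# Illustrative sketch of two of the sampling specifications handled above:
# a [min, max, step] hyper-cube and a Latin hyper-cube over [min, max] bounds
# (scipy.stats.qmc requires SciPy >= 1.7). Bounds, steps and point counts are
# assumptions.
import itertools, numpy
from scipy.stats import qmc

# SampleAsMinMaxStepHyperCube-like grid: one [min, max, step] triple per axis
axes = [numpy.arange(0., 1.01, 0.5), numpy.arange(10., 30.01, 10.)]
grid = numpy.array(list(itertools.product(*axes)))
print(grid.shape)  # (9, 2): full cartesian product of the per-axis samples

# SampleAsMinMaxLatinHyperCube-like design: [min, max] per axis + point count
sampler = qmc.LatinHypercube(d=2)
points = qmc.scale(sampler.random(n=8), [0., 10.], [1., 30.])
print(points.shape)  # (8, 2)
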
message = "Variant ou formulation de la méthode",
listval = [
"ExtendedBlue",
- ],
+ ],
listadv = [
"OneCorrection",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "Parameters",
typecast = str,
message = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
"SimulationQuantiles",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "Quantiles",
default = [],
message = "Liste des valeurs de quantiles",
minval = 0.,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfSamplesForQuantiles",
default = 100,
typecast = int,
message = "Nombre d'échantillons simulés pour le calcul des quantiles",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "SimulationForQuantiles",
default = "Linear",
typecast = str,
message = "Type de simulation en estimation des quantiles",
listval = ["Linear", "NonLinear"]
- )
- self.defineRequiredParameter( # Pas de type
+ )
+ self.defineRequiredParameter( # Pas de type
name = "StateBoundsForQuantiles",
message = "Liste des paires de bornes pour les états utilisés en estimation des quantiles",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
optional = ("U", "EM", "CM", "Q"),
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "NonLinear",
+ "Filter",
)
- self.setAttributes(tags=(
- "DataAssimilation",
- "NonLinear",
- "Filter",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] == "ExtendedBlue":
+ # --------------------------
+ if self._parameters["Variant"] == "ExtendedBlue":
NumericObjects.multiXOsteps(self, Xb, Y, U, HO, EM, CM, R, B, Q, ecwexblue.ecwexblue)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "OneCorrection":
ecwexblue.ecwexblue(self, Xb, Y, U, HO, CM, R, B)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
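
# Illustrative sketch of sampling-based quantile estimation as parameterized
# above (Quantiles, NumberOfSamplesForQuantiles, SimulationForQuantiles set to
# "NonLinear"): simulate perturbed states, propagate them through the
# observation operator, then read off empirical quantiles. Gaussian sampling
# around Xa and the toy operator are assumptions.
import numpy
Xa = numpy.array([1., 2.])
A  = 0.01 * numpy.eye(2)                  # analysis covariance (toy)
H  = lambda x: numpy.atleast_1d(x @ x)    # nonlinear toy observation operator
nech = 100                                # NumberOfSamplesForQuantiles
YfQ = numpy.array([H(numpy.random.multivariate_normal(Xa, A)) for _ in range(nech)])
for q in [0.25, 0.75]:                    # Quantiles
    print(q, numpy.percentile(YfQ, 100. * q, axis=0))
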
listval = [
"EKF",
"CEKF",
- ],
+ ],
listadv = [
"EKS",
"CEKS",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "ConstrainedBy",
default = "EstimateProjection",
typecast = str,
message = "Prise en compte des contraintes",
listval = ["EstimateProjection"],
- )
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "State",
typecast = str,
-            message  = "Estimation d'etat ou de parametres",
+            message  = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentAnalysis",
"SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des valeurs de bornes",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
optional = ("U", "EM", "CM", "Q"),
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "NonLinear",
+ "Filter",
+ "Dynamic",
)
- self.setAttributes(tags=(
- "DataAssimilation",
- "NonLinear",
- "Filter",
- "Dynamic",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] == "EKF":
+ # --------------------------
+ if self._parameters["Variant"] == "EKF":
exkf.exkf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "CEKF":
cekf.cekf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "EKS":
exks.exks(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "CEKS":
ceks.ceks(self, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
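
# Illustrative sketch of one extended-Kalman update of the kind dispatched
# above: predict with the evolution model, then correct with the observation
# operator linearized at the forecast. All operators and matrices are toy
# assumptions, not the exkf/cekf internals.
import numpy
M  = lambda x: 0.9 * x                        # evolution model (linear toy)
Mt = 0.9 * numpy.eye(2)                       # its tangent
H  = lambda x: numpy.array([x[0] + x[1]])     # observation operator
Ht = numpy.array([[1., 1.]])                  # its tangent at the forecast
Q  = 0.01 * numpy.eye(2)
R  = numpy.array([[0.1]])
x, P = numpy.array([1., 0.]), numpy.eye(2)
y  = numpy.array([1.2])
xf = M(x)                                     # forecast state
Pf = Mt @ P @ Mt.T + Q                        # forecast covariance
K  = Pf @ Ht.T @ numpy.linalg.inv(Ht @ Pf @ Ht.T + R)  # Kalman gain
xa = xf + K @ (y - H(xf))                     # analysis state
Pa = (numpy.eye(2) - K @ Ht) @ Pf             # analysis covariance
print(xa, numpy.diag(Pa))
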
default = True,
typecast = bool,
message = "Calcule et affiche un résumé à chaque évaluation élémentaire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "NumberOfRepetition",
default = 1,
typecast = int,
message = "Nombre de fois où l'exécution de la fonction est répétée",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
listval = [
"CurrentState",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
__p = self._parameters["NumberOfPrintedDigits"]
__r = self._parameters["NumberOfRepetition"]
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the (repetition of the) launch of some\n")
msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( X0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( X0 )
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
#
if self._parameters["SetDebug"]:
CUR_LEVEL = logging.getLogger().getEffectiveLevel()
msgs += (__flech + "Beginning of repeated evaluation, without activating debug\n")
else:
msgs += (__flech + "Beginning of evaluation, without activating debug\n")
- print(msgs) # 1
+ print(msgs) # 1
#
# ----------
HO["Direct"].disableAvoidingRedundancy()
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 )
if __s:
- msgs = (__marge + "%s\n"%("-"*75,)) # 2-1
+ msgs = (__marge + "%s\n"%("-" * 75,)) # 2-1
if __r > 1:
msgs += ("\n")
- msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i+1,__r))
+ msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i + 1, __r))
msgs += ("\n")
msgs += (__flech + "Launching operator sequential evaluation\n")
- print(msgs) # 2-1
+ print(msgs) # 2-1
#
Yn = Hm( X0 )
#
if __s:
- msgs = ("\n") # 2-2
+ msgs = ("\n") # 2-2
msgs += (__flech + "End of operator sequential evaluation\n")
msgs += ("\n")
msgs += (__flech + "Information after evaluation:\n")
msgs += (__marge + "Characteristics of simulated output vector Y=F(X), to compare to others:\n")
msgs += (__marge + " Type...............: %s\n")%type( Yn )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Yn ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Yn )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Yn )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Yn, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Yn, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Yn )
- print(msgs) # 2-2
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( Yn )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( Yn )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( Yn, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( Yn, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( Yn )
+ print(msgs) # 2-2
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(Yn) )
#
Ys.append( copy.copy( numpy.ravel(
Yn
- ) ) )
+ ) ) )
# ----------
HO["Direct"].enableAvoidingRedundancy()
# ----------
#
- msgs = (__marge + "%s\n\n"%("-"*75,)) # 3
+ msgs = (__marge + "%s\n\n"%("-" * 75,)) # 3
if self._parameters["SetDebug"]:
if __r > 1:
msgs += (__flech + "End of repeated evaluation, deactivating debug if necessary\n")
else:
msgs += (__flech + "End of evaluation, without deactivating debug\n")
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
if __r > 1:
msgs += ("\n")
msgs += (__flech + "Launching statistical summary calculation for %i states\n"%__r)
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
msgs += ("\n")
msgs += (__flech + "Statistical analysis of the outputs obtained through sequential repeated evaluations\n")
msgs += ("\n")
msgs += ("\n")
msgs += (__marge + "Characteristics of the whole set of outputs Y:\n")
msgs += (__marge + " Size of each of the outputs...................: %i\n")%Ys[0].size
- msgs += (__marge + " Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
- msgs += (__marge + " Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
- msgs += (__marge + " Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
- msgs += (__marge + " Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += (__marge + " Minimum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.min( Yy ) # noqa: E501
+ msgs += (__marge + " Maximum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.max( Yy ) # noqa: E501
+ msgs += (__marge + " Mean of vector of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.mean( Yy, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.std( Yy, dtype=mfp ) # noqa: E501
msgs += ("\n")
Ym = numpy.mean( numpy.array( Ys ), axis=0, dtype=mfp )
msgs += (__marge + "Characteristics of the vector Ym, mean of the outputs Y:\n")
msgs += (__marge + " Size of the mean of the outputs...............: %i\n")%Ym.size
- msgs += (__marge + " Minimum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.min( Ym )
- msgs += (__marge + " Maximum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.max( Ym )
- msgs += (__marge + " Mean of the mean of the outputs...............: %."+str(__p)+"e\n")%numpy.mean( Ym, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the outputs.....: %."+str(__p)+"e\n")%numpy.std( Ym, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the outputs......: %." + str(__p) + "e\n")%numpy.min( Ym ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the outputs......: %." + str(__p) + "e\n")%numpy.max( Ym ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the outputs...............: %." + str(__p) + "e\n")%numpy.mean( Ym, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the outputs.....: %." + str(__p) + "e\n")%numpy.std( Ym, dtype=mfp ) # noqa: E501
msgs += ("\n")
Ye = numpy.mean( numpy.array( Ys ) - Ym, axis=0, dtype=mfp )
- msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n")
+ msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n") # noqa: E501
msgs += (__marge + " Size of the mean of the differences...........: %i\n")%Ye.size
- msgs += (__marge + " Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Ye )
- msgs += (__marge + " Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Ye )
- msgs += (__marge + " Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Ye, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Ye, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.min( Ye ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.max( Ye ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the differences...........: %." + str(__p) + "e\n")%numpy.mean( Ye, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the differences.: %." + str(__p) + "e\n")%numpy.std( Ye, dtype=mfp ) # noqa: E501
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
msgs += ("\n")
msgs += (__marge + "End of the \"%s\" verification\n\n"%self._name)
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 3
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 3
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
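
# Illustrative sketch of the statistical summary computed above for repeated
# evaluations: the mean output Ym and the mean of the differences Ye, which
# should be ~0 for a deterministic, repeatable operator. The toy operator and
# the local mfp stand-in are assumptions.
import numpy
mfp = numpy.float64                                    # stand-in for the module's mfp
Hm = lambda x: numpy.asarray(x)**2                     # toy operator
X0 = numpy.array([1., 2., 3.])
Ys = [numpy.ravel(Hm(X0)) for _ in range(5)]           # NumberOfRepetition = 5
Ym = numpy.mean(numpy.array(Ys), axis=0, dtype=mfp)    # mean of the outputs
Ye = numpy.mean(numpy.array(Ys) - Ym, axis=0, dtype=mfp)
print(numpy.std(Ye))  # 0 for a deterministic operator
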
typecast = str,
message = "Formule de résidu utilisée",
listval = ["Norm", "TaylorOnNorm", "Taylor"],
- )
+ )
self.defineRequiredParameter(
name = "EpsilonMinimumExponent",
default = -8,
message = "Exposant minimal en puissance de 10 pour le multiplicateur d'incrément",
minval = -20,
maxval = 0,
- )
+ )
self.defineRequiredParameter(
name = "InitialDirection",
default = [],
typecast = list,
message = "Direction initiale de la dérivée directionnelle autour du point nominal",
- )
+ )
self.defineRequiredParameter(
name = "AmplitudeOfInitialDirection",
default = 1.,
typecast = float,
message = "Amplitude de la direction initiale de la dérivée directionnelle autour du point nominal",
- )
+ )
self.defineRequiredParameter(
name = "AmplitudeOfTangentPerturbation",
default = 1.e-2,
message = "Amplitude de la perturbation pour le calcul de la forme tangente",
minval = 1.e-10,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "ResultLabel",
default = "",
typecast = str,
message = "Label de la courbe tracée dans la figure",
- )
+ )
self.defineRequiredParameter(
name = "ResultFile",
- default = self._name+"_result_file",
+ default = self._name + "_result_file",
typecast = str,
message = "Nom de base (hors extension) des fichiers de sauvegarde des résultats",
- )
+ )
self.defineRequiredParameter(
name = "PlotAndSave",
default = False,
typecast = bool,
message = "Trace et sauve les résultats",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"CurrentState",
"Residu",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
if self._parameters["ResiduFormula"] in ["Taylor", "TaylorOnNorm"]:
Ht = HO["Tangent"].appliedInXTo
#
- X0 = numpy.ravel( Xb ).reshape((-1,1))
+ X0 = numpy.ravel( Xb ).reshape((-1, 1))
#
# ----------
__p = self._parameters["NumberOfPrintedDigits"]
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the numerical stability of the gradient of some\n")
msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( X0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( X0 )
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
msgs += (__flech + "Numerical quality indicators:\n")
msgs += (__marge + "-----------------------------\n")
msgs += ("\n")
- msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
+ msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"]) # noqa: E501
msgs += (__marge + "following ratio or comparison:\n")
msgs += ("\n")
#
msgs += (__marge + "and constant, it means that F is linear and that the residue decreases\n")
msgs += (__marge + "from the error made in the calculation of the GradientF_X term.\n")
#
- __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
+ __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )" # noqa: E501
#
if self._parameters["ResiduFormula"] == "TaylorOnNorm":
msgs += (__marge + " || F(X+Alpha*dX) - F(X) - Alpha * GradientF_X(dX) ||\n")
msgs += (__marge + "the calculation of the gradient is correct until the residue is of the\n")
msgs += (__marge + "order of magnitude of ||F(X)||.\n")
#
- __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
+ __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )" # noqa: E501
#
if self._parameters["ResiduFormula"] == "Norm":
msgs += (__marge + " || F(X+Alpha*dX) - F(X) ||\n")
msgs += (__marge + "which must remain constant until the accuracy of the calculation is\n")
msgs += (__marge + "reached.\n")
#
- __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
+ __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )" # noqa: E501
#
msgs += ("\n")
msgs += (__marge + "We take dX0 = Normal(0,X) and dX = Alpha*dX0. F is the calculation code.\n")
msgs += (__marge + "with a differential increment of value %.2e.\n"%HO["DifferentialIncrement"])
msgs += ("\n")
msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
- print(msgs) # 1
+ print(msgs) # 1
#
- Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
+ Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"], 1) ]
Perturbations.reverse()
#
- FX = numpy.ravel( Hm( X0 ) ).reshape((-1,1))
+ FX = numpy.ravel( Hm( X0 ) ).reshape((-1, 1))
NormeX = numpy.linalg.norm( X0 )
NormeFX = numpy.linalg.norm( FX )
- if NormeFX < mpr: NormeFX = mpr
+ if NormeFX < mpr:
+ NormeFX = mpr
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 )
if self._toStore("SimulatedObservationAtCurrentState"):
self._parameters["InitialDirection"],
self._parameters["AmplitudeOfInitialDirection"],
X0,
- )
+ )
#
if self._parameters["ResiduFormula"] in ["Taylor", "TaylorOnNorm"]:
dX1 = float(self._parameters["AmplitudeOfTangentPerturbation"]) * dX0
GradFxdX = Ht( (X0, dX1) )
- GradFxdX = numpy.ravel( GradFxdX ).reshape((-1,1))
- GradFxdX = float(1./self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
+ GradFxdX = numpy.ravel( GradFxdX ).reshape((-1, 1))
+ GradFxdX = float(1. / self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
#
# Boucle sur les perturbations
# ----------------------------
__nbtirets = len(__entete) + 2
- msgs = ("") # 2
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs = ("") # 2
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += "\n" + __marge + __entete
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += ("\n")
#
NormesdX = []
NormesdFXsAm = []
NormesdFXGdX = []
#
- for i,amplitude in enumerate(Perturbations):
- dX = amplitude * dX0.reshape((-1,1))
+ for ip, amplitude in enumerate(Perturbations):
+ dX = amplitude * dX0.reshape((-1, 1))
#
- FX_plus_dX = Hm( X0 + dX )
- FX_plus_dX = numpy.ravel( FX_plus_dX ).reshape((-1,1))
+ X_plus_dX = X0 + dX
+ FX_plus_dX = Hm( X_plus_dX )
+ FX_plus_dX = numpy.ravel( FX_plus_dX ).reshape((-1, 1))
#
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( numpy.ravel(X0 + dX) )
+ self.StoredVariables["CurrentState"].store( X_plus_dX )
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(FX_plus_dX) )
#
NormedX = numpy.linalg.norm( dX )
NormeFXdX = numpy.linalg.norm( FX_plus_dX )
NormedFX = numpy.linalg.norm( FX_plus_dX - FX )
- NormedFXsdX = NormedFX/NormedX
+ NormedFXsdX = NormedFX / NormedX
# Residu Taylor
if self._parameters["ResiduFormula"] in ["Taylor", "TaylorOnNorm"]:
NormedFXGdX = numpy.linalg.norm( FX_plus_dX - FX - amplitude * GradFxdX )
            # Norm residue
- NormedFXsAm = NormedFX/amplitude
+ NormedFXsAm = NormedFX / amplitude
#
- # if numpy.abs(NormedFX) < 1.e-20:
- # break
+ # if numpy.abs(NormedFX) < 1.e-20:
+ # break
#
NormesdX.append( NormedX )
NormesFXdX.append( NormeFXdX )
if self._parameters["ResiduFormula"] == "Taylor":
Residu = NormedFXGdX / NormeFX
elif self._parameters["ResiduFormula"] == "TaylorOnNorm":
- Residu = NormedFXGdX / (amplitude*amplitude)
+ Residu = NormedFXGdX / (amplitude * amplitude)
elif self._parameters["ResiduFormula"] == "Norm":
Residu = NormedFXsAm
#
self.StoredVariables["Residu"].store( Residu )
- ttsep = " %2i %5.0e %9.3e %9.3e %9.3e %9.3e %9.3e | %9.3e | %9.3e %4.0f\n"%(i,amplitude,NormeX,NormeFX,NormeFXdX,NormedX,NormedFX,NormedFXsdX,Residu,math.log10(max(1.e-99,Residu)))
+ ttsep = " %2i %5.0e %9.3e %9.3e %9.3e %9.3e %9.3e | %9.3e | %9.3e %4.0f\n"%(ip, amplitude, NormeX, NormeFX, NormeFXdX, NormedX, NormedFX, NormedFXsdX, Residu, math.log10(max(1.e-99, Residu))) # noqa: E501
msgs += __marge + ttsep
#
- msgs += (__marge + "-"*__nbtirets + "\n\n")
- msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name,self._parameters["ResiduFormula"]))
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 2
+ msgs += (__marge + "-" * __nbtirets + "\n\n")
+ msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name, self._parameters["ResiduFormula"])) # noqa: E501
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 2
#
if self._parameters["PlotAndSave"]:
- f = open(str(self._parameters["ResultFile"])+".txt",'a')
+ f = open(str(self._parameters["ResultFile"]) + ".txt", 'a')
f.write(msgs)
f.close()
#
Residus = self.StoredVariables["Residu"][-len(Perturbations):]
if self._parameters["ResiduFormula"] in ["Taylor", "TaylorOnNorm"]:
- PerturbationsCarre = [ 10**(2*i) for i in range(-len(NormesdFXGdX)+1,1) ]
+ PerturbationsCarre = [ 10**(2 * i) for i in range(-len(NormesdFXGdX) + 1, 1) ]
PerturbationsCarre.reverse()
dessiner(
Perturbations,
label = self._parameters["ResultLabel"],
logX = True,
logY = True,
- filename = str(self._parameters["ResultFile"])+".ps",
+ filename = str(self._parameters["ResultFile"]) + ".ps",
YRef = PerturbationsCarre,
normdY0 = numpy.log10( NormesdFX[0] ),
- )
+ )
elif self._parameters["ResiduFormula"] == "Norm":
dessiner(
Perturbations,
label = self._parameters["ResultLabel"],
logX = True,
logY = True,
- filename = str(self._parameters["ResultFile"])+".ps",
- )
+ filename = str(self._parameters["ResultFile"]) + ".ps",
+ )
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
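
A minimal standalone sketch of the residue convergence tabulated above (illustrative toy operator, not part of this patch): assuming a smooth F and a good enough tangent, the "Taylor" residue R(Alpha) = ||F(X+Alpha*dX0) - F(X) - Alpha*GradF(X).dX0|| / ||F(X)|| decreases like Alpha**2, so the log10(R) column gains about 2 per decade::

    import numpy
    numpy.random.seed(1234)

    def F(x):
        # toy nonlinear operator standing in for the user calculation code
        return numpy.array([x[0]**2 + x[1], numpy.sin(x[0]) * x[1]])

    def tangent_times(x, dx, eps=1.e-8):
        # centered finite differences approximating GradF(x).dx
        return (F(x + eps * dx) - F(x - eps * dx)) / (2. * eps)

    X0  = numpy.array([1., 2.])
    dX0 = numpy.random.normal(0., 1., X0.shape)
    FX, GdX = F(X0), tangent_times(X0, dX0)
    for alpha in [10.**(-i) for i in range(0, 9)]:
        R = numpy.linalg.norm(F(X0 + alpha * dX0) - FX - alpha * GdX) / numpy.linalg.norm(FX)
        print("Alpha = %8.0e   R = %9.3e   log10(R) = %5.1f"%(alpha, R, numpy.log10(max(R, 1.e-99))))
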
logY = False,
filename = "",
pause = False,
- YRef = None, # Vecteur de reference a comparer a Y
- recalYRef = True, # Decalage du point 0 de YRef a Y[0]
- normdY0 = 0., # Norme de DeltaY[0]
- ):
+                 YRef = None,            # Reference vector to compare to Y
+                 recalYRef = True,       # Shift of YRef's point 0 onto Y[0]
+                 normdY0 = 0.):          # Norm of DeltaY[0]
import Gnuplot
__gnuplot = Gnuplot
- __g = __gnuplot.Gnuplot(persist=1) # persist=1
+ __g = __gnuplot.Gnuplot(persist=1) # persist=1
# __g('set terminal '+__gnuplot.GnuplotOpts.default_term)
__g('set style data lines')
__g('set grid')
__g('set autoscale')
- __g('set title "'+titre+'"')
+ __g('set title "' + titre + '"')
# __g('set range [] reverse')
- # __g('set yrange [0:2]')
+ # __g('set yrange [0:2]')
#
if logX:
steps = numpy.log10( X )
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "PrintAllValuesFor",
default = [],
"Background",
"CheckingPoint",
"Observation",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "ShowInformationOnlyFor",
default = ["Background", "CheckingPoint", "Observation"],
"Background",
"CheckingPoint",
"Observation",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.requireInputArguments(
mandatory= (),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
_p = self._parameters["NumberOfPrintedDigits"]
numpy.set_printoptions(precision=_p)
- #
+
def __buildPrintableVectorProperties( __name, __vector ):
- if __vector is None: return ""
- if len(__vector) == 0: return ""
- if hasattr(__vector,"name") and __name != __vector.name(): return ""
- if __name not in self._parameters["ShowInformationOnlyFor"]: return ""
+ if __vector is None:
+ return ""
+ if len(__vector) == 0:
+ return ""
+ if hasattr(__vector, "name") and __name != __vector.name():
+ return ""
+ if __name not in self._parameters["ShowInformationOnlyFor"]:
+ return ""
#
- if hasattr(__vector,"mins"):
- __title = "Information for %svector series:"%(str(__name)+" ",)
+ if hasattr(__vector, "mins"):
+ __title = "Information for %svector series:"%(str(__name) + " ",)
else:
- __title = "Information for %svector:"%(str(__name)+" ",)
+ __title = "Information for %svector:"%(str(__name) + " ",)
msgs = "\n"
- msgs += ("===> "+__title+"\n")
- msgs += (" "+("-"*len(__title))+"\n")
+ msgs += ("===> " + __title + "\n")
+ msgs += (" " + ("-" * len(__title)) + "\n")
msgs += (" Main characteristics of the vector:\n")
- if hasattr(__vector,"basetype"):
+ if hasattr(__vector, "basetype"):
msgs += (" Python base type..........: %s\n")%( __vector.basetype(), )
msgs += (" Shape of data.............: %s\n")%( __vector.shape(), )
else:
msgs += (" Shape of serie of vectors.: %s\n")%( __vector.shape, )
try:
msgs += (" Number of data............: %s\n")%( len(__vector), )
- except: pass
- if hasattr(__vector,"mins"):
+ except Exception:
+ pass
+ if hasattr(__vector, "mins"):
msgs += (" Serie of minimum values...: %s\n")%numpy.array(__vector.mins())
else:
- msgs += (" Minimum of vector.........: %12."+str(_p)+"e\n")%__vector.min()
- if hasattr(__vector,"means"):
+ msgs += (" Minimum of vector.........: %12." + str(_p) + "e\n")%__vector.min()
+ if hasattr(__vector, "means"):
msgs += (" Serie of mean values......: %s\n")%numpy.array(__vector.means())
else:
- msgs += (" Mean of vector............: %12."+str(_p)+"e\n")%__vector.mean()
- if hasattr(__vector,"maxs"):
+ msgs += (" Mean of vector............: %12." + str(_p) + "e\n")%__vector.mean()
+ if hasattr(__vector, "maxs"):
msgs += (" Serie of maximum values...: %s\n")%numpy.array(__vector.maxs())
else:
- msgs += (" Maximum of vector.........: %12."+str(_p)+"e\n")%__vector.max()
+ msgs += (" Maximum of vector.........: %12." + str(_p) + "e\n")%__vector.max()
if self._parameters["SetDebug"] or __name in self._parameters["PrintAllValuesFor"]:
msgs += ("\n")
msgs += (" Printing all values :\n")
msgs += ("%s"%(__vector,))
print(msgs)
return msgs
- #----------
- __buildPrintableVectorProperties( "Background", Xb )
+ #
+ __buildPrintableVectorProperties( "Background", Xb )
__buildPrintableVectorProperties( "CheckingPoint", Xb )
- __buildPrintableVectorProperties( "Observation", Y )
+ __buildPrintableVectorProperties( "Observation", Y )
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
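
A condensed, self-contained version of the per-vector summary built above (plain numpy arrays instead of ADAO's internal vector objects, names illustrative)::

    import numpy

    def vector_summary(name, v):
        v = numpy.ravel(v)
        msgs  = "===> Information for %s vector:\n"%name
        msgs += "     Minimum of vector.........: %12.5e\n"%v.min()
        msgs += "     Mean of vector............: %12.5e\n"%v.mean()
        msgs += "     Maximum of vector.........: %12.5e"%v.max()
        return msgs

    print(vector_summary("Background", numpy.array([1., 2., 3.])))
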
default = [],
typecast = numpy.array,
message = "Base réduite, 1 vecteur par colonne",
- )
+ )
self.defineRequiredParameter(
name = "OptimalLocations",
default = [],
typecast = tuple,
- message = "Liste des indices ou noms de positions optimales de mesure selon l'ordre interne d'un vecteur de base",
- )
+ message = "Liste des indices ou noms de positions optimales de mesure selon l'ordre interne d'un vecteur de base", # noqa: E501
+ )
self.defineRequiredParameter(
name = "ObservationsAlreadyRestrictedOnOptimalLocations",
default = True,
typecast = bool,
message = "Stockage des mesures restreintes a priori aux positions optimales de mesure ou non",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
listval = [
"Analysis",
"ReducedCoordinates",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Y",),
optional = (),
+ )
+ self.setAttributes(
+ tags=(
+ "Reduction",
+ "Interpolation",
)
- self.setAttributes(tags=(
- "Reduction",
- "Interpolation",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
+ # --------------------------
__rb = self._parameters["ReducedBasis"]
__ip = self._parameters["OptimalLocations"]
if len(__ip) != __rb.shape[1]:
- raise ValueError("The number of optimal measurement locations (%i) and the dimension of the RB (%i) has to be the same."%(len(__ip),__rb.shape[1]))
+            raise ValueError("The number of optimal measurement locations (%i) and the dimension of the RB (%i) have to be the same."%(len(__ip), __rb.shape[1]))  # noqa: E501
#
        # Number of steps identical to the number of observation steps
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
else:
duration = 2
#
- for step in range(0,duration-1):
+ for step in range(0, duration - 1):
#
            # The loop over the measurements performs one interpolation per set
            # of measurements, without any link between two successive sets of
            # measurements.
            # Important: the observations are given on all the possible points,
            # or already restricted to the optimal measurement points, but they
            # are only used at the optimal points
- if hasattr(Y,"store"):
- _Ynpu = numpy.ravel( Y[step+1] ).reshape((-1,1))
+ if hasattr(Y, "store"):
+ _Ynpu = numpy.ravel( Y[step + 1] ).reshape((-1, 1))
else:
- _Ynpu = numpy.ravel( Y ).reshape((-1,1))
+ _Ynpu = numpy.ravel( Y ).reshape((-1, 1))
if self._parameters["ObservationsAlreadyRestrictedOnOptimalLocations"]:
__rm = _Ynpu
else:
#
# Interpolation
ecweim.EIM_online(self, __rb, __rm, __ip)
- #--------------------------
+ # --------------------------
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
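
In its plain form, the EIM_online call above amounts to solving a small square interpolation system at the optimal locations and lifting the coefficients back through the reduced basis. A hedged sketch under that assumption (illustrative data, not the ecweim implementation)::

    import numpy
    numpy.random.seed(0)

    n, m = 100, 4                              # full field size, reduced basis size
    rb = numpy.random.rand(n, m)               # reduced basis, 1 vector per column
    ip = [3, 27, 58, 90]                       # the m optimal measurement locations
    x_true = rb @ numpy.array([1., -0.5, 0.2, 0.7])
    y = x_true[ip]                             # measures restricted to those points

    c = numpy.linalg.solve(rb[ip, :], y)       # m x m system at the optimal points
    x_hat = rb @ c                             # full field reconstruction
    print(numpy.linalg.norm(x_hat - x_true))   # ~0: the state lies in the basis span
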
import numpy, math
from daCore import BasicObjects, PlatformInfo
+from daCore.PlatformInfo import vfloat
+from daAlgorithms.Atoms import ecweim
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
-from daCore.PlatformInfo import vfloat
-from daAlgorithms.Atoms import ecweim, eosg
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
default = [],
typecast = numpy.array,
message = "Base réduite, 1 vecteur par colonne",
- )
+ )
self.defineRequiredParameter(
name = "MeasurementLocations",
default = [],
typecast = tuple,
- message = "Liste des indices ou noms de positions optimales de mesure selon l'ordre interne d'un vecteur de base",
- )
+ message = "Liste des indices ou noms de positions optimales de mesure selon l'ordre interne d'un vecteur de base", # noqa: E501
+ )
self.defineRequiredParameter(
name = "EnsembleOfSnapshots",
default = [],
typecast = numpy.array,
message = "Ensemble de vecteurs d'état physique (snapshots), 1 état par colonne (Test Set)",
- )
+ )
self.defineRequiredParameter(
name = "ErrorNorm",
default = "L2",
typecast = str,
message = "Norme d'erreur utilisée pour le critère d'optimalité des positions",
listval = ["L2", "Linf"]
- )
+ )
self.defineRequiredParameter(
name = "ShowElementarySummary",
default = True,
typecast = bool,
message = "Calcule et affiche un résumé à chaque évaluation élémentaire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.requireInputArguments(
mandatory= (),
optional = (),
+ )
+ self.setAttributes(
+ tags=(
+ "Reduction",
+ "Interpolation",
)
- self.setAttributes(tags=(
- "Reduction",
- "Interpolation",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
__fdim, __nsn = __eos.shape
#
if __fdim != __rdim:
- raise ValueError("The dimension of each snapshot (%i) has to be equal to the dimension of each reduced basis vector."%(__fdim,__rdim))
+ raise ValueError("The dimension of each snapshot (%i) has to be equal to the dimension of each reduced basis vector (%i)."%(__fdim, __rdim)) # noqa: E501
if __fdim < len(__ip):
- raise ValueError("The dimension of each snapshot (%i) has to be greater or equal to the number of optimal measurement locations (%i)."%(__fdim,len(__ip)))
+ raise ValueError("The dimension of each snapshot (%i) has to be greater or equal to the number of optimal measurement locations (%i)."%(__fdim, len(__ip))) # noqa: E501
#
- #--------------------------
+ # --------------------------
__s = self._parameters["ShowElementarySummary"]
__p = self._parameters["NumberOfPrintedDigits"]
__r = __nsn
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- __ordre = int(math.log10(__nsn))+1
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ __ordre = int(math.log10(__nsn)) + 1
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the quality of the interpolation of states,\n")
msgs += ("\n")
msgs += (__marge + "Warning: in order to be coherent, this test has to use the same norm\n")
msgs += (__marge + "than the one used to build the reduced basis. The user chosen norm in\n")
- msgs += (__marge + "this test is presently \"%s\". Check the RB building one.\n"%(self._parameters["ErrorNorm"],))
+            msgs += (__marge + "this test is presently \"%s\". Check the one used to build the RB.\n"%(self._parameters["ErrorNorm"],))  # noqa: E501
msgs += ("\n")
msgs += (__flech + "Information before launching:\n")
msgs += (__marge + "-----------------------------\n")
msgs += (__marge + " Number of measures locations...: %i\n")%len(__ip)
msgs += (__marge + " Number of snapshots to test....: %i\n")%__nsn
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
#
st = "Normalized interpolation error test using \"%s\" norm for all given states:"%self._parameters["ErrorNorm"]
msgs += (__flech + "%s\n"%st)
- msgs += (__marge + "%s\n"%("-"*len(st),))
+ msgs += (__marge + "%s\n"%("-" * len(st),))
msgs += ("\n")
Ns, Es = [], []
for ns in range(__nsn):
- __rm = __eos[__ip,ns]
- __im = ecweim.EIM_online(self, __rb, __eos[__ip,ns], __ip)
+ # __rm = __eos[__ip, ns]
+ __im = ecweim.EIM_online(self, __rb, __eos[__ip, ns], __ip)
#
- if self._parameters["ErrorNorm"] == "L2":
- __norms = numpy.linalg.norm( __eos[:,ns] )
- __ecart = vfloat(numpy.linalg.norm( __eos[:,ns] - __im ) / __norms )
+ if self._parameters["ErrorNorm"] == "L2":
+ __norms = numpy.linalg.norm( __eos[:, ns] )
+ __ecart = vfloat(numpy.linalg.norm( __eos[:, ns] - __im ) / __norms )
else:
- __norms = numpy.linalg.norm( __eos[:,ns], ord=numpy.inf )
- __ecart = vfloat(numpy.linalg.norm( __eos[:,ns] - __im, ord=numpy.inf ) / __norms )
+ __norms = numpy.linalg.norm( __eos[:, ns], ord=numpy.inf )
+ __ecart = vfloat(numpy.linalg.norm( __eos[:, ns] - __im, ord=numpy.inf ) / __norms )
Ns.append( __norms )
Es.append( __ecart )
if __s:
- msgs += (__marge + "State %0"+str(__ordre)+"i: error of %."+str(__p)+"e for a state norm of %."+str(__p)+"e (= %3i%s)\n")%(ns,__ecart,__norms,100*__ecart/__norms,"%")
+ msgs += (__marge + "State %0" + str(__ordre) + "i: error of %." + str(__p) + "e for a state norm of %." + str(__p) + "e (= %3i%s)\n")%(ns, __ecart, __norms, 100 * __ecart / __norms, "%") # noqa: E501
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
if __r > 1:
msgs += ("\n")
msgs += (__marge + "Number of evaluations...........................: %i\n")%len( Es )
msgs += ("\n")
msgs += (__marge + "Characteristics of the whole set of error outputs Es:\n")
- msgs += (__marge + " Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
- msgs += (__marge + " Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
- msgs += (__marge + " Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
- msgs += (__marge + " Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += (__marge + " Minimum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.min( Yy ) # noqa: E501
+ msgs += (__marge + " Maximum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.max( Yy ) # noqa: E501
+ msgs += (__marge + " Mean of vector of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.mean( Yy, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.std( Yy, dtype=mfp ) # noqa: E501
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
msgs += ("\n")
msgs += (__marge + "End of the \"%s\" verification\n\n"%self._name)
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 3
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 1
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
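
The two norms offered for the test above lead to the following normalized errors on a toy state (a tiny sketch with illustrative values)::

    import numpy
    x  = numpy.array([1., 2., -3., 4.])                  # reference state
    xi = x + numpy.array([0.01, -0.02, 0., 0.03])        # its interpolation
    e2   = numpy.linalg.norm(xi - x) / numpy.linalg.norm(x)
    einf = numpy.linalg.norm(xi - x, ord=numpy.inf) / numpy.linalg.norm(x, ord=numpy.inf)
    print("L2: %.5e   Linf: %.5e"%(e2, einf))
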
message = "Variant ou formulation de la méthode",
listval = [
"KalmanFilter",
- ],
+ ],
listadv = [
"OneCorrection",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "State",
typecast = str,
message = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentAnalysis",
"SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
optional = ("U", "EM", "CM", "Q"),
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "Linear",
+ "Filter",
+ "Dynamic",
)
- self.setAttributes(tags=(
- "DataAssimilation",
- "Linear",
- "Filter",
- "Dynamic",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] == "KalmanFilter":
+ # --------------------------
+ if self._parameters["Variant"] == "KalmanFilter":
NumericObjects.multiXOsteps(self, Xb, Y, U, HO, EM, CM, R, B, Q, ecwstdkf.ecwstdkf, True)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "OneCorrection":
ecwstdkf.ecwstdkf(self, Xb, Y, U, HO, CM, R, B)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
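
As a reminder of what one correction step does in the linear Gaussian case, here is the textbook Kalman/BLUE update (a sketch with illustrative names, not the ecwstdkf internals)::

    import numpy

    def one_correction(xb, Pb, y, H, R):
        # gain K = Pb H^T (H Pb H^T + R)^-1, then correct by the innovation
        K  = Pb @ H.T @ numpy.linalg.inv(H @ Pb @ H.T + R)
        xa = xb + K @ (y - H @ xb)
        Pa = (numpy.eye(xb.size) - K @ H) @ Pb
        return xa, Pa

    xb = numpy.array([0., 1.])                 # background state
    Pb = numpy.eye(2)                          # background covariance B
    H  = numpy.array([[1., 0.]])               # linear observation operator
    R  = numpy.array([[0.1]])                  # observation error covariance
    xa, Pa = one_correction(xb, Pb, numpy.array([0.5]), H, R)
    print(xa, Pa)
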
message = "Variant ou formulation de la méthode",
listval = [
"LinearLeastSquares",
- ],
+ ],
listadv = [
"OneCorrection",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "Parameters",
typecast = str,
message = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Y", "HO"),
            optional = ("R",),
+ )
+ self.setAttributes(
+ tags=(
+ "Optimization",
+ "Linear",
+ "Variational",
)
- self.setAttributes(tags=(
- "Optimization",
- "Linear",
- "Variational",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] == "LinearLeastSquares":
+ # --------------------------
+ if self._parameters["Variant"] == "LinearLeastSquares":
NumericObjects.multiXOsteps(self, Xb, Y, U, HO, EM, CM, R, B, Q, ecwlls.ecwlls)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "OneCorrection":
ecwlls.ecwlls(self, Xb, Y, U, HO, CM, R, B)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
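
For the linear variant dispatched above, the minimizer of J(x) = 0.5 * (y - H x)^T R^(-1) (y - H x) is given in closed form by the normal equations; a small self-contained check with illustrative H and noise-free y::

    import numpy
    numpy.random.seed(2)
    H = numpy.random.rand(8, 3)                      # linear observation operator
    Ri = numpy.eye(8)                                # R^(-1), here the identity
    x_true = numpy.array([1., -2., 0.5])
    y = H @ x_true                                   # noise-free observations
    x_hat = numpy.linalg.solve(H.T @ Ri @ H, H.T @ Ri @ y)
    print(numpy.allclose(x_hat, x_true))             # True on noise-free data
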
typecast = str,
message = "Formule de résidu utilisée",
listval = ["CenteredDL", "Taylor", "NominalTaylor", "NominalTaylorRMS"],
- )
+ )
self.defineRequiredParameter(
name = "EpsilonMinimumExponent",
default = -8,
message = "Exposant minimal en puissance de 10 pour le multiplicateur d'incrément",
minval = -20,
maxval = 0,
- )
+ )
self.defineRequiredParameter(
name = "InitialDirection",
default = [],
typecast = list,
message = "Direction initiale de la dérivée directionnelle autour du point nominal",
- )
+ )
self.defineRequiredParameter(
name = "AmplitudeOfInitialDirection",
default = 1.,
typecast = float,
message = "Amplitude de la direction initiale de la dérivée directionnelle autour du point nominal",
- )
+ )
self.defineRequiredParameter(
name = "AmplitudeOfTangentPerturbation",
default = 1.e-2,
message = "Amplitude de la perturbation pour le calcul de la forme tangente",
minval = 1.e-10,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"CurrentState",
"Residu",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
- #
+
def RMS(V1, V2):
import math
return math.sqrt( ((numpy.ravel(V2) - numpy.ravel(V1))**2).sum() / float(numpy.ravel(V1).size) )
if self._parameters["ResiduFormula"] in ["Taylor", "NominalTaylor", "NominalTaylorRMS"]:
Ht = HO["Tangent"].appliedInXTo
#
- X0 = numpy.ravel( Xb ).reshape((-1,1))
+ X0 = numpy.ravel( Xb ).reshape((-1, 1))
#
# ----------
__p = self._parameters["NumberOfPrintedDigits"]
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the linearity property of some given\n")
msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( X0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( X0 )
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
msgs += (__flech + "Numerical quality indicators:\n")
msgs += (__marge + "-----------------------------\n")
msgs += ("\n")
- msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
+ msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"]) # noqa: E501
msgs += (__marge + "following ratio or comparison:\n")
msgs += ("\n")
#
#
msgs += ("\n")
msgs += (__marge + "We take dX0 = Normal(0,X) and dX = Alpha*dX0. F is the calculation code.\n")
- if (self._parameters["ResiduFormula"] == "Taylor") and ("DifferentialIncrement" in HO and HO["DifferentialIncrement"] is not None):
+ if (self._parameters["ResiduFormula"] == "Taylor") and ("DifferentialIncrement" in HO and HO["DifferentialIncrement"] is not None): # noqa: E501
msgs += ("\n")
msgs += (__marge + "Reminder: gradient operator is obtained internally by finite differences,\n")
msgs += (__marge + "with a differential increment of value %.2e.\n"%HO["DifferentialIncrement"])
msgs += ("\n")
msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
- print(msgs) # 1
+ print(msgs) # 1
#
- Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
+ Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"], 1) ]
Perturbations.reverse()
#
- FX = numpy.ravel( Hm( X0 ) ).reshape((-1,1))
+ FX = numpy.ravel( Hm( X0 ) ).reshape((-1, 1))
NormeX = numpy.linalg.norm( X0 )
NormeFX = numpy.linalg.norm( FX )
- if NormeFX < mpr: NormeFX = mpr
+ if NormeFX < mpr:
+ NormeFX = mpr
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 )
if self._toStore("SimulatedObservationAtCurrentState"):
self._parameters["InitialDirection"],
self._parameters["AmplitudeOfInitialDirection"],
X0,
- )
+ )
#
if self._parameters["ResiduFormula"] == "Taylor":
dX1 = float(self._parameters["AmplitudeOfTangentPerturbation"]) * dX0
GradFxdX = Ht( (X0, dX1) )
- GradFxdX = numpy.ravel( GradFxdX ).reshape((-1,1))
- GradFxdX = float(1./self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
+ GradFxdX = numpy.ravel( GradFxdX ).reshape((-1, 1))
+ GradFxdX = float(1. / self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
#
        # Loop over the perturbations
        # ---------------------------
__nbtirets = len(__entete) + 2
- msgs = ("") # 2
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs = ("") # 2
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += "\n" + __marge + __entete
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += ("\n")
#
- for i,amplitude in enumerate(Perturbations):
- dX = amplitude * dX0.reshape((-1,1))
+ for ip, amplitude in enumerate(Perturbations):
+ dX = amplitude * dX0.reshape((-1, 1))
#
if self._parameters["ResiduFormula"] == "CenteredDL":
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 + dX )
self.StoredVariables["CurrentState"].store( X0 - dX )
#
- FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
- FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1, 1))
+ FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1, 1))
#
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX_plus_dX )
Residu = numpy.linalg.norm( FX_plus_dX + FX_moins_dX - 2 * FX ) / NormeFX
#
self.StoredVariables["Residu"].store( Residu )
- ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %4.0f\n"%(i,amplitude,NormeX,NormeFX,Residu,math.log10(max(1.e-99,Residu)))
+ ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %4.0f\n"%(ip, amplitude, NormeX, NormeFX, Residu, math.log10(max(1.e-99, Residu))) # noqa: E501
msgs += __marge + ttsep
#
if self._parameters["ResiduFormula"] == "Taylor":
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 + dX )
#
- FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1, 1))
#
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX_plus_dX )
Residu = numpy.linalg.norm( FX_plus_dX - FX - amplitude * GradFxdX ) / NormeFX
#
self.StoredVariables["Residu"].store( Residu )
- ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %4.0f\n"%(i,amplitude,NormeX,NormeFX,Residu,math.log10(max(1.e-99,Residu)))
+ ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %4.0f\n"%(ip, amplitude, NormeX, NormeFX, Residu, math.log10(max(1.e-99, Residu))) # noqa: E501
msgs += __marge + ttsep
#
if self._parameters["ResiduFormula"] == "NominalTaylor":
self.StoredVariables["CurrentState"].store( X0 - dX )
self.StoredVariables["CurrentState"].store( dX )
#
- FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
- FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1,1))
- FdX = numpy.ravel( Hm( dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1, 1))
+ FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1, 1))
+ FdX = numpy.ravel( Hm( dX ) ).reshape((-1, 1))
#
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX_plus_dX )
Residu = max(
numpy.linalg.norm( FX_plus_dX - amplitude * FdX ) / NormeFX,
numpy.linalg.norm( FX_moins_dX + amplitude * FdX ) / NormeFX,
- )
+ )
#
self.StoredVariables["Residu"].store( Residu )
- ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %5i %s\n"%(i,amplitude,NormeX,NormeFX,Residu,100.*abs(Residu-1.),"%")
+ ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %5i %s\n"%(ip, amplitude, NormeX, NormeFX, Residu, 100. * abs(Residu - 1.), "%") # noqa: E501
msgs += __marge + ttsep
#
if self._parameters["ResiduFormula"] == "NominalTaylorRMS":
self.StoredVariables["CurrentState"].store( X0 - dX )
self.StoredVariables["CurrentState"].store( dX )
#
- FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
- FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1,1))
- FdX = numpy.ravel( Hm( dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1, 1))
+ FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1, 1))
+ FdX = numpy.ravel( Hm( dX ) ).reshape((-1, 1))
#
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX_plus_dX )
Residu = max(
RMS( FX, FX_plus_dX - amplitude * FdX ) / NormeFX,
RMS( FX, FX_moins_dX + amplitude * FdX ) / NormeFX,
- )
+ )
#
self.StoredVariables["Residu"].store( Residu )
- ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %5i %s\n"%(i,amplitude,NormeX,NormeFX,Residu,100.*Residu,"%")
+ ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %5i %s\n"%(ip, amplitude, NormeX, NormeFX, Residu, 100. * Residu, "%") # noqa: E501
msgs += __marge + ttsep
#
- msgs += (__marge + "-"*__nbtirets + "\n\n")
- msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name,self._parameters["ResiduFormula"]))
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 2
+ msgs += (__marge + "-" * __nbtirets + "\n\n")
+ msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name, self._parameters["ResiduFormula"])) # noqa: E501
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 2
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
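
A standalone sketch of the "CenteredDL" residue tabulated above, on toy operators (illustrative names): for a linear F it stays at machine precision, while for a smooth nonlinear F it decreases like Alpha**2::

    import numpy
    numpy.random.seed(3)
    A = numpy.random.rand(4, 4)

    def F_linear(x):
        return A @ x

    def F_nonlinear(x):
        return A @ x + 0.1 * x**2

    X0, dX0 = numpy.ones(4), numpy.random.normal(0., 1., 4)
    for F in (F_linear, F_nonlinear):
        FX = F(X0)
        for alpha in (1.e-1, 1.e-2, 1.e-3):
            dX = alpha * dX0
            R = numpy.linalg.norm(F(X0 + dX) + F(X0 - dX) - 2 * FX) / numpy.linalg.norm(FX)
            print("%12s   Alpha = %7.0e   R = %9.3e"%(F.__name__, alpha, R))
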
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = ["JacobianMatrixAtCurrentState",],
"CurrentState",
"JacobianMatrixAtCurrentState",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
CUR_LEVEL = logging.getLogger().getEffectiveLevel()
logging.getLogger().setLevel(logging.DEBUG)
print("===> Beginning of evaluation, activating debug\n")
- print(" %s\n"%("-"*75,))
+ print(" %s\n"%("-" * 75,))
#
# ----------
Ht = HO["Tangent"].asMatrix( Xb )
- Ht = Ht.reshape(Y.size,Xb.size) # ADAO & check shape
+ Ht = Ht.reshape(Y.size, Xb.size) # ADAO & check shape
# ----------
#
if self._parameters["SetDebug"]:
- print("\n %s\n"%("-"*75,))
+ print("\n %s\n"%("-" * 75,))
print("===> End evaluation, deactivating debug if necessary\n")
logging.getLogger().setLevel(CUR_LEVEL)
#
HXb = HO["AppliedInX"]["HXb"]
else:
HXb = Ht @ Xb
- HXb = numpy.ravel( HXb ).reshape((-1,1))
+ HXb = numpy.ravel( HXb ).reshape((-1, 1))
if Y.size != HXb.size:
- raise ValueError("The size %i of observations Yobs and %i of observed calculation F(X) are different, they have to be identical."%(Y.size,HXb.size))
+            raise ValueError("The size %i of observations Yobs and %i of observed calculation F(X) are different; they have to be identical."%(Y.size, HXb.size))  # noqa: E501
if max(Y.shape) != max(HXb.shape):
- raise ValueError("The shapes %s of observations Yobs and %s of observed calculation F(X) are different, they have to be identical."%(Y.shape,HXb.shape))
+            raise ValueError("The shapes %s of observations Yobs and %s of observed calculation F(X) are different; they have to be identical."%(Y.shape, HXb.shape))  # noqa: E501
self.StoredVariables["SimulatedObservationAtCurrentState"].store( HXb )
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
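
When no analytical tangent is provided, a tangent matrix with the shape checked above, (Y.size, Xb.size), can be approximated column by column by finite differences of the direct operator. A minimal sketch of that construction (illustrative operator and increment)::

    import numpy

    def tangent_as_matrix(F, x, increment=1.e-6):
        # one forward-difference column per component of x
        fx = numpy.ravel(F(x))
        J = numpy.empty((fx.size, x.size))
        for j in range(x.size):
            xp = numpy.array(x, dtype=float)
            xp[j] += increment
            J[:, j] = (numpy.ravel(F(xp)) - fx) / increment
        return J

    def F(x):
        return numpy.array([x[0] * x[1], x[0]**2, numpy.exp(x[1])])

    print(tangent_as_matrix(F, numpy.array([1., 2.])))
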
typecast = str,
message = "Variant ou formulation de la méthode",
listval = [
- "EIM", "PositioningByEIM",
- "lcEIM", "PositioningBylcEIM",
- "DEIM", "PositioningByDEIM",
+ "EIM", "PositioningByEIM",
+ "lcEIM", "PositioningBylcEIM",
+ "DEIM", "PositioningByDEIM",
"lcDEIM", "PositioningBylcDEIM",
- ],
+ ],
listadv = [
- "UBFEIM", "PositioningByUBFEIM",
+ "UBFEIM", "PositioningByUBFEIM",
"lcUBFEIM", "PositioningBylcUBFEIM",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "EnsembleOfSnapshots",
default = [],
typecast = numpy.array,
message = "Ensemble de vecteurs d'état physique (snapshots), 1 état par colonne (Training Set)",
- )
+ )
self.defineRequiredParameter(
name = "UserBasisFunctions",
default = [],
typecast = numpy.array,
message = "Ensemble de fonctions de base définis par l'utilisateur, 1 fonction de base par colonne",
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfLocations",
default = 1,
typecast = int,
message = "Nombre maximal de positions",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ExcludeLocations",
default = [],
typecast = tuple,
message = "Liste des indices ou noms de positions exclues selon l'ordre interne d'un snapshot",
- )
+ )
self.defineRequiredParameter(
name = "NameOfLocations",
default = [],
typecast = tuple,
message = "Liste des noms de positions selon l'ordre interne d'un snapshot",
- )
+ )
self.defineRequiredParameter(
name = "ErrorNorm",
default = "L2",
typecast = str,
message = "Norme d'erreur utilisée pour le critère d'optimalité des positions",
listval = ["L2", "Linf"]
- )
+ )
self.defineRequiredParameter(
name = "ErrorNormTolerance",
default = 1.e-7,
typecast = float,
message = "Valeur limite inférieure du critère d'optimalité forçant l'arrêt",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "SampleAsnUplet",
default = [],
typecast = tuple,
message = "Points de calcul définis par une liste de n-uplet",
- )
+ )
self.defineRequiredParameter(
name = "SampleAsExplicitHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages explicites de chaque variable comme une liste",
- )
+ message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages explicites de chaque variable comme une liste", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxStepHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages implicites de chaque variable par un triplet [min,max,step]",
- )
+ message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages implicites de chaque variable par un triplet [min,max,step]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxLatinHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube Latin dont on donne les bornes de chaque variable par une paire [min,max], suivi du nombre de points demandés",
- )
+ message = "Points de calcul définis par un hyper-cube Latin dont on donne les bornes de chaque variable par une paire [min,max], suivi du nombre de points demandés", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxSobolSequence",
default = [],
typecast = tuple,
- message = "Points de calcul définis par une séquence de Sobol dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre minimal de points demandés]",
- )
+ message = "Points de calcul définis par une séquence de Sobol dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre minimal de points demandés]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsIndependantRandomVariables",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]",
- )
+ message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "ReduceMemoryUse",
default = False,
typecast = bool,
- message = "Réduction de l'empreinte mémoire lors de l'exécution au prix d'une augmentation du temps de calcul",
- )
+ message = "Réduction de l'empreinte mémoire lors de l'exécution au prix d'une augmentation du temps de calcul", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"ExcludedPoints",
"OptimalPoints",
"ReducedBasis",
+ "ReducedBasisMus",
"Residus",
"SingularValues",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.requireInputArguments(
mandatory= (),
optional = ("Xb", "HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Reduction",
+ "Checking",
)
- self.setAttributes(tags=(
- "Reduction",
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] in ["lcEIM", "PositioningBylcEIM", "EIM", "PositioningByEIM"]:
+ # --------------------------
+ if self._parameters["Variant"] in ["lcEIM", "PositioningBylcEIM", "EIM", "PositioningByEIM"]:
if len(self._parameters["EnsembleOfSnapshots"]) > 0:
if self._toStore("EnsembleOfSimulations"):
self.StoredVariables["EnsembleOfSimulations"].store( self._parameters["EnsembleOfSnapshots"] )
else:
raise ValueError("Snapshots or Operator have to be given in order to launch the EIM analysis")
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] in ["lcDEIM", "PositioningBylcDEIM", "DEIM", "PositioningByDEIM"]:
if len(self._parameters["EnsembleOfSnapshots"]) > 0:
if self._toStore("EnsembleOfSimulations"):
ecwdeim.DEIM_offline(self, eosg.eosg(self, Xb, HO))
else:
raise ValueError("Snapshots or Operator have to be given in order to launch the DEIM analysis")
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] in ["lcUBFEIM", "PositioningBylcUBFEIM", "UBFEIM", "PositioningByUBFEIM"]:
if len(self._parameters["EnsembleOfSnapshots"]) > 0:
if self._toStore("EnsembleOfSimulations"):
else:
raise ValueError("Snapshots or Operator have to be given in order to launch the UBFEIM analysis")
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
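
A bare-bones sketch of the greedy selection behind the plain "EIM"/"PositioningByEIM" variant above (ignoring the lc/DEIM/UBFEIM flavours, ExcludeLocations and the stopping tolerance, and assuming the residuals stay nonzero): each pass keeps the location where the current interpolation residual is largest::

    import numpy
    numpy.random.seed(4)

    def eim_positioning(S, m):
        # S: ensemble of snapshots, 1 state per column; returns (basis, locations)
        r = S[:, int(numpy.argmax(numpy.linalg.norm(S, axis=0)))]
        basis, points = [], []
        for _ in range(m):
            p = int(numpy.argmax(numpy.abs(r)))      # next optimal location
            points.append(p)
            basis.append(r / r[p])
            B = numpy.column_stack(basis)
            # residuals of all snapshots interpolated at the current locations
            Res = S - B @ numpy.linalg.solve(B[points, :], S[points, :])
            r = Res[:, int(numpy.argmax(numpy.linalg.norm(Res, axis=0)))]
        return B, points

    B, points = eim_positioning(numpy.random.rand(50, 20), 5)
    print(points)
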
message = "Variant ou formulation de la méthode",
listval = [
"NonLinearLeastSquares",
- ],
+ ],
listadv = [
"OneCorrection",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "Minimizer",
default = "LBFGSB",
"CG",
"BFGS",
"LM",
- ],
+ ],
listadv = [
"NCG",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "Parameters",
typecast = str,
message = "Estimation d'état ou de paramètres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfIterations",
default = 15000,
message = "Nombre maximal de pas d'optimisation",
minval = -1,
oldname = "MaximumNumberOfSteps",
- )
+ )
self.defineRequiredParameter(
name = "CostDecrementTolerance",
default = 1.e-7,
typecast = float,
message = "Diminution relative minimale du coût lors de l'arrêt",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "ProjectedGradientTolerance",
default = -1,
typecast = float,
message = "Maximum des composantes du gradient projeté lors de l'arrêt",
minval = -1,
- )
+ )
self.defineRequiredParameter(
name = "GradientNormTolerance",
default = 1.e-05,
typecast = float,
message = "Maximum des composantes du gradient lors de l'arrêt",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+        self.defineRequiredParameter(  # No typecast
name = "Bounds",
message = "Liste des paires de bornes",
- )
+ )
self.defineRequiredParameter(
name = "InitializationPoint",
typecast = numpy.ravel,
message = "État initial imposé (par défaut, c'est l'ébauche si None)",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R"),
optional = ("U", "EM", "CM", "Q"),
+ )
+ self.setAttributes(
+ tags=(
+ "Optimization",
+ "NonLinear",
+ "Variational",
)
- self.setAttributes(tags=(
- "Optimization",
- "NonLinear",
- "Variational",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] == "NonLinearLeastSquares":
+ # --------------------------
+ if self._parameters["Variant"] == "NonLinearLeastSquares":
NumericObjects.multiXOsteps(self, Xb, Y, U, HO, EM, CM, R, B, Q, ecwnlls.ecwnlls)
#
- #--------------------------
+ # --------------------------
elif self._parameters["Variant"] == "OneCorrection":
ecwnlls.ecwnlls(self, Xb, Y, U, HO, CM, R, B)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
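
The default "LBFGSB" minimizer above matches the bound-constrained quasi-Newton method exposed by SciPy; a hedged sketch of the cost it minimizes, J(x) = 0.5 * (y - H(x))^T R^(-1) (y - H(x)), assuming SciPy is available (toy operator and data)::

    import numpy
    from scipy.optimize import minimize

    def H(x):
        # toy nonlinear observation operator
        return numpy.array([numpy.exp(x[0]), x[0] + x[1]])

    y  = numpy.array([numpy.e, 3.])            # consistent with x = (1, 2)
    Ri = numpy.eye(2)                          # R^(-1)

    def J(x):
        d = y - H(x)
        return 0.5 * d @ Ri @ d

    res = minimize(J, x0=numpy.zeros(2), method="L-BFGS-B",
                   bounds=[(-10., 10.), (-10., 10.)])
    print(res.x)                               # close to [1., 2.]
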
default = True,
typecast = bool,
message = "Calcule et affiche un résumé à chaque évaluation élémentaire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "NumberOfRepetition",
default = 1,
typecast = int,
message = "Nombre de fois où l'exécution de la fonction est répétée",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"InnovationAtCurrentState",
"OMB",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
if len(self._parameters["StoreSupplementaryCalculations"]) > 0:
BI = B.getI()
RI = R.getI()
- def CostFunction(x,HmX):
+
+ def CostFunction(x, HmX):
_X = numpy.ravel( x )
_HX = numpy.ravel( HmX )
_X0 = numpy.ravel( X0 )
_Y0 = numpy.ravel( Y0 )
- Jb = vfloat( 0.5 * (_X - _X0).T * (BI * (_X - _X0)) )
+ Jb = vfloat( 0.5 * (_X - _X0).T * (BI * (_X - _X0)) ) # noqa: E222
Jo = vfloat( 0.5 * (_Y0 - _HX).T * (RI * (_Y0 - _HX)) )
J = Jb + Jo
self.StoredVariables["CostFunctionJb"].store( Jb )
__p = self._parameters["NumberOfPrintedDigits"]
__r = self._parameters["NumberOfRepetition"]
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the (repetition of the) launch of some\n")
msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( X0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( X0 )
msgs += ("\n")
msgs += (__marge + "Characteristics of input vector of observations Yobs, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( Y0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Y0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Y0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Y0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Y0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Y0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Y0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( Y0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( Y0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( Y0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( Y0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( Y0 )
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
#
if self._parameters["SetDebug"]:
CUR_LEVEL = logging.getLogger().getEffectiveLevel()
msgs += (__flech + "Beginning of repeated evaluation, without activating debug\n")
else:
msgs += (__flech + "Beginning of evaluation, without activating debug\n")
- print(msgs) # 1
+ print(msgs) # 1
#
# ----------
HO["Direct"].disableAvoidingRedundancy()
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 )
if __s:
- msgs = (__marge + "%s\n"%("-"*75,)) # 2-1
+ msgs = (__marge + "%s\n"%("-" * 75,)) # 2-1
if __r > 1:
msgs += ("\n")
- msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i+1,__r))
+ msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i + 1, __r))
msgs += ("\n")
msgs += (__flech + "Launching operator sequential evaluation\n")
- print(msgs) # 2-1
+ print(msgs) # 2-1
#
Yn = Hm( X0 )
#
if _Y0.size != Yn.size:
- raise ValueError("The size %i of observations Y and %i of observed calculation F(X) are different, they have to be identical."%(Y0.size,Yn.size))
+            raise ValueError("The size %i of observations Y and %i of observed calculation F(X) are different; they have to be identical."%(Y0.size, Yn.size))  # noqa: E501
#
Dn = _Y0 - numpy.ravel( Yn )
#
if self._toStore("CostFunctionJ"):
Js.append( J )
if __s:
- msgs = ("\n") # 2-2
+ msgs = ("\n") # 2-2
msgs += (__flech + "End of operator sequential evaluation\n")
msgs += ("\n")
msgs += (__flech + "Information after evaluation:\n")
msgs += (__marge + "Characteristics of simulated output vector Y=F(X), to compare to others:\n")
msgs += (__marge + " Type...............: %s\n")%type( Yn )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Yn ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Yn )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Yn )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Yn, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Yn, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Yn )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( Yn )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( Yn )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( Yn, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( Yn, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( Yn )
msgs += ("\n")
- msgs += (__marge + "Characteristics of OMB differences between observations Yobs and simulated output vector Y=F(X):\n")
+ msgs += (__marge + "Characteristics of OMB differences between observations Yobs and simulated output vector Y=F(X):\n") # noqa: E501
msgs += (__marge + " Type...............: %s\n")%type( Dn )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Dn ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Dn )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Dn )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Dn, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Dn, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Dn )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( Dn )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( Dn )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( Dn, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( Dn, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( Dn )
if len(self._parameters["StoreSupplementaryCalculations"]) > 0:
if self._toStore("CostFunctionJ"):
msgs += ("\n")
- msgs += (__marge + " Cost function J....: %."+str(__p)+"e\n")%J
- msgs += (__marge + " Cost function Jb...: %."+str(__p)+"e\n")%Jb
- msgs += (__marge + " Cost function Jo...: %."+str(__p)+"e\n")%Jo
- msgs += (__marge + " (Remark: the Jb background part of the cost function J is zero by hypothesis)\n")
- print(msgs) # 2-2
+ msgs += (__marge + " Cost function J....: %." + str(__p) + "e\n")%J
+ msgs += (__marge + " Cost function Jb...: %." + str(__p) + "e\n")%Jb
+ msgs += (__marge + " Cost function Jo...: %." + str(__p) + "e\n")%Jo
+ msgs += (__marge + " (Remark: the Jb background part of the cost function J is zero by hypothesis)\n") # noqa: E501
+ print(msgs) # 2-2
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(Yn) )
if self._toStore("Innovation"):
#
Ys.append( copy.copy( numpy.ravel(
Yn
- ) ) )
+ ) ) )
Ds.append( copy.copy( numpy.ravel(
Dn
- ) ) )
+ ) ) )
# ----------
HO["Direct"].enableAvoidingRedundancy()
# ----------
#
- msgs = (__marge + "%s\n\n"%("-"*75,)) # 3
+ msgs = (__marge + "%s\n\n"%("-" * 75,)) # 3
if self._parameters["SetDebug"]:
if __r > 1:
msgs += (__flech + "End of repeated evaluation, deactivating debug if necessary\n")
else:
msgs += (__flech + "End of evaluation, without deactivating debug\n")
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
if __r > 1:
msgs += ("\n")
msgs += (__flech + "Launching statistical summary calculation for %i states\n"%__r)
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
msgs += ("\n")
- msgs += (__flech + "Statistical analysis of the outputs obtained through sequential repeated evaluations\n")
+ msgs += (__flech + "Statistical analysis of the outputs obtained through sequential repeated evaluations\n") # noqa: E501
msgs += ("\n")
- msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr) # noqa: E501
msgs += ("\n")
Yy = numpy.array( Ys )
msgs += (__marge + "Number of evaluations...........................: %i\n")%len( Ys )
msgs += ("\n")
msgs += (__marge + "Characteristics of the whole set of outputs Y:\n")
msgs += (__marge + " Size of each of the outputs...................: %i\n")%Ys[0].size
- msgs += (__marge + " Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
- msgs += (__marge + " Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
- msgs += (__marge + " Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
- msgs += (__marge + " Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += (__marge + " Minimum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.min( Yy ) # noqa: E501
+ msgs += (__marge + " Maximum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.max( Yy ) # noqa: E501
+ msgs += (__marge + " Mean of vector of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.mean( Yy, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.std( Yy, dtype=mfp ) # noqa: E501
msgs += ("\n")
Ym = numpy.mean( numpy.array( Ys ), axis=0, dtype=mfp )
msgs += (__marge + "Characteristics of the vector Ym, mean of the outputs Y:\n")
msgs += (__marge + " Size of the mean of the outputs...............: %i\n")%Ym.size
- msgs += (__marge + " Minimum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.min( Ym )
- msgs += (__marge + " Maximum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.max( Ym )
- msgs += (__marge + " Mean of the mean of the outputs...............: %."+str(__p)+"e\n")%numpy.mean( Ym, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the outputs.....: %."+str(__p)+"e\n")%numpy.std( Ym, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the outputs......: %." + str(__p) + "e\n")%numpy.min( Ym ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the outputs......: %." + str(__p) + "e\n")%numpy.max( Ym ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the outputs...............: %." + str(__p) + "e\n")%numpy.mean( Ym, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the outputs.....: %." + str(__p) + "e\n")%numpy.std( Ym, dtype=mfp ) # noqa: E501
msgs += ("\n")
Ye = numpy.mean( numpy.array( Ys ) - Ym, axis=0, dtype=mfp )
- msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n")
+ msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n") # noqa: E501
msgs += (__marge + " Size of the mean of the differences...........: %i\n")%Ye.size
- msgs += (__marge + " Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Ye )
- msgs += (__marge + " Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Ye )
- msgs += (__marge + " Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Ye, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Ye, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.min( Ye ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.max( Ye ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the differences...........: %." + str(__p) + "e\n")%numpy.mean( Ye, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the differences.: %." + str(__p) + "e\n")%numpy.std( Ye, dtype=mfp ) # noqa: E501
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
msgs += ("\n")
- msgs += (__flech + "Statistical analysis of the OMB differences obtained through sequential repeated evaluations\n")
+ msgs += (__flech + "Statistical analysis of the OMB differences obtained through sequential repeated evaluations\n") # noqa: E501
msgs += ("\n")
- msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr) # noqa: E501
msgs += ("\n")
Dy = numpy.array( Ds )
msgs += (__marge + "Number of evaluations...........................: %i\n")%len( Ds )
msgs += ("\n")
msgs += (__marge + "Characteristics of the whole set of OMB differences:\n")
msgs += (__marge + " Size of each of the outputs...................: %i\n")%Ds[0].size
- msgs += (__marge + " Minimum value of the whole set of differences.: %."+str(__p)+"e\n")%numpy.min( Dy )
- msgs += (__marge + " Maximum value of the whole set of differences.: %."+str(__p)+"e\n")%numpy.max( Dy )
- msgs += (__marge + " Mean of vector of the whole set of differences: %."+str(__p)+"e\n")%numpy.mean( Dy, dtype=mfp )
- msgs += (__marge + " Standard error of the whole set of differences: %."+str(__p)+"e\n")%numpy.std( Dy, dtype=mfp )
+ msgs += (__marge + " Minimum value of the whole set of differences.: %." + str(__p) + "e\n")%numpy.min( Dy ) # noqa: E501
+ msgs += (__marge + " Maximum value of the whole set of differences.: %." + str(__p) + "e\n")%numpy.max( Dy ) # noqa: E501
+ msgs += (__marge + " Mean of vector of the whole set of differences: %." + str(__p) + "e\n")%numpy.mean( Dy, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the whole set of differences: %." + str(__p) + "e\n")%numpy.std( Dy, dtype=mfp ) # noqa: E501
msgs += ("\n")
Dm = numpy.mean( numpy.array( Ds ), axis=0, dtype=mfp )
msgs += (__marge + "Characteristics of the vector Dm, mean of the OMB differences:\n")
msgs += (__marge + " Size of the mean of the differences...........: %i\n")%Dm.size
- msgs += (__marge + " Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Dm )
- msgs += (__marge + " Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Dm )
- msgs += (__marge + " Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Dm, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Dm, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.min( Dm ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.max( Dm ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the differences...........: %." + str(__p) + "e\n")%numpy.mean( Dm, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the differences.: %." + str(__p) + "e\n")%numpy.std( Dm, dtype=mfp ) # noqa: E501
msgs += ("\n")
De = numpy.mean( numpy.array( Ds ) - Dm, axis=0, dtype=mfp )
- msgs += (__marge + "Characteristics of the mean of the differences between the OMB differences and their mean Dm:\n")
+ msgs += (__marge + "Characteristics of the mean of the differences between the OMB differences and their mean Dm:\n") # noqa: E501
msgs += (__marge + " Size of the mean of the differences...........: %i\n")%De.size
- msgs += (__marge + " Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( De )
- msgs += (__marge + " Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( De )
- msgs += (__marge + " Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( De, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( De, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.min( De ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.max( De ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the differences...........: %." + str(__p) + "e\n")%numpy.mean( De, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the differences.: %." + str(__p) + "e\n")%numpy.std( De, dtype=mfp ) # noqa: E501
#
if self._toStore("CostFunctionJ"):
msgs += ("\n")
Jj = numpy.array( Js )
- msgs += (__marge + "%s\n\n"%("-"*75,))
- msgs += (__flech + "Statistical analysis of the cost function J values obtained through sequential repeated evaluations\n")
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
+ msgs += (__flech + "Statistical analysis of the cost function J values obtained through sequential repeated evaluations\n") # noqa: E501
msgs += ("\n")
msgs += (__marge + "Number of evaluations...........................: %i\n")%len( Js )
msgs += ("\n")
msgs += (__marge + "Characteristics of the whole set of data assimilation cost function J values:\n")
- msgs += (__marge + " Minimum value of the whole set of J...........: %."+str(__p)+"e\n")%numpy.min( Jj )
- msgs += (__marge + " Maximum value of the whole set of J...........: %."+str(__p)+"e\n")%numpy.max( Jj )
- msgs += (__marge + " Mean of vector of the whole set of J..........: %."+str(__p)+"e\n")%numpy.mean( Jj, dtype=mfp )
- msgs += (__marge + " Standard error of the whole set of J..........: %."+str(__p)+"e\n")%numpy.std( Jj, dtype=mfp )
- msgs += (__marge + " (Remark: variations of the cost function J only come from the observation part Jo of J)\n")
+ msgs += (__marge + " Minimum value of the whole set of J...........: %." + str(__p) + "e\n")%numpy.min( Jj ) # noqa: E501
+ msgs += (__marge + " Maximum value of the whole set of J...........: %." + str(__p) + "e\n")%numpy.max( Jj ) # noqa: E501
+ msgs += (__marge + " Mean of vector of the whole set of J..........: %." + str(__p) + "e\n")%numpy.mean( Jj, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the whole set of J..........: %." + str(__p) + "e\n")%numpy.std( Jj, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " (Remark: variations of the cost function J only come from the observation part Jo of J)\n") # noqa: E501
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
msgs += ("\n")
msgs += (__marge + "End of the \"%s\" verification\n\n"%self._name)
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 3
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 3
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
def __init__(self):
BasicObjects.Algorithm.__init__(self, "OBSERVERTEST")
- self.setAttributes(tags=(
- "Checking",
- ))
+ self.setAttributes(
+ tags=(
+ "Checking",
+ )
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
print(" only activated on selected ones by explicit association.")
print("")
#
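+ # Small synthetic vectors, stored below in every storable variable, so that each
+ # observer explicitly associated with one of them is triggered during the test.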
- __Xa = 1.+numpy.arange(3.)
+ __Xa = 1. + numpy.arange(3.)
__Xb = numpy.zeros(3)
- __YY = 1.+numpy.arange(5.)
+ __YY = 1. + numpy.arange(5.)
#
# Activation des observers sur toutes les variables stockables
# ------------------------------------------------------------
self.StoredVariables["SigmaObs2"].store( 1. )
self.StoredVariables["SigmaBck2"].store( 1. )
self.StoredVariables["MahalanobisConsistency"].store( 1. )
- self.StoredVariables["SimulationQuantiles"].store( numpy.array((__YY,__YY,__YY)) )
+ self.StoredVariables["SimulationQuantiles"].store( numpy.array((__YY, __YY, __YY)) )
self.StoredVariables["SimulatedObservationAtBackground"].store( __YY )
self.StoredVariables["SimulatedObservationAtCurrentState"].store( __YY )
self.StoredVariables["SimulatedObservationAtOptimum"].store( __YY )
default = True,
typecast = bool,
message = "Calcule et affiche un résumé à chaque évaluation élémentaire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "NumberOfRepetition",
default = 1,
typecast = int,
message = "Nombre de fois où l'exécution de la fonction est répétée",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
listval = [
"CurrentState",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
__p = self._parameters["NumberOfPrintedDigits"]
__r = self._parameters["NumberOfRepetition"]
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the (repetition of the) launch of some\n")
msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( X0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( X0 )
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
#
if self._parameters["SetDebug"]:
CUR_LEVEL = logging.getLogger().getEffectiveLevel()
else:
msgs += (__flech + "Beginning of evaluation, without activating debug\n")
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 1
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 1
#
# ----------
HO["Direct"].disableAvoidingRedundancy()
# ----------
Ys = []
Xs = []
- msgs = (__marge + "Appending the input vector to the agument set to be evaluated in parallel\n") # 2-1
+ msgs = (__marge + "Appending the input vector to the agument set to be evaluated in parallel\n") # 2-1
for i in range(__r):
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 )
if __s:
# msgs += ("\n")
if __r > 1:
- msgs += (__marge + " Appending step number %i on a total of %i\n"%(i+1,__r))
+ msgs += (__marge + " Appending step number %i on a total of %i\n"%(i + 1, __r))
#
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
msgs += (__flech + "Launching operator parallel evaluation for %i states\n"%__r)
- print(msgs) # 2-1
+ print(msgs) # 2-1
#
Ys = Hm( Xs, argsAsSerie = True )
#
- msgs = ("\n") # 2-2
+ msgs = ("\n") # 2-2
msgs += (__flech + "End of operator parallel evaluation for %i states\n"%__r)
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 2-2
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 2-2
#
# ----------
HO["Direct"].enableAvoidingRedundancy()
# ----------
#
- msgs = ("") # 3
+ msgs = ("") # 3
if self._parameters["SetDebug"]:
if __r > 1:
msgs += (__flech + "End of repeated evaluation, deactivating debug if necessary\n")
for i in range(self._parameters["NumberOfRepetition"]):
if __s:
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
if self._parameters["NumberOfRepetition"] > 1:
- msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i+1,self._parameters["NumberOfRepetition"]))
+ msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i + 1, self._parameters["NumberOfRepetition"])) # noqa: E501
#
Yn = Ys[i]
if __s:
msgs += (__marge + "Characteristics of simulated output vector Y=F(X), to compare to others:\n")
msgs += (__marge + " Type...............: %s\n")%type( Yn )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Yn ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Yn )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Yn )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Yn, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Yn, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Yn )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( Yn )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( Yn )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( Yn, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error.....: %." + str(__p) + "e\n")%numpy.std( Yn, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( Yn )
#
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(Yn) )
#
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
if __r > 1:
msgs += ("\n")
msgs += (__flech + "Launching statistical summary calculation for %i states\n"%__r)
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
msgs += ("\n")
- msgs += (__flech + "Statistical analysis of the outputs obtained through parallel repeated evaluations\n")
+ msgs += (__flech + "Statistical analysis of the outputs obtained through parallel repeated evaluations\n") # noqa: E501
msgs += ("\n")
- msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr) # noqa: E501
msgs += ("\n")
Yy = numpy.array( Ys )
msgs += (__marge + "Number of evaluations...........................: %i\n")%len( Ys )
msgs += ("\n")
msgs += (__marge + "Characteristics of the whole set of outputs Y:\n")
msgs += (__marge + " Size of each of the outputs...................: %i\n")%Ys[0].size
- msgs += (__marge + " Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
- msgs += (__marge + " Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
- msgs += (__marge + " Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
- msgs += (__marge + " Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += (__marge + " Minimum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.min( Yy ) # noqa: E501
+ msgs += (__marge + " Maximum value of the whole set of outputs.....: %." + str(__p) + "e\n")%numpy.max( Yy ) # noqa: E501
+ msgs += (__marge + " Mean of vector of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.mean( Yy, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the whole set of outputs....: %." + str(__p) + "e\n")%numpy.std( Yy, dtype=mfp ) # noqa: E501
msgs += ("\n")
Ym = numpy.mean( numpy.array( Ys ), axis=0, dtype=mfp )
msgs += (__marge + "Characteristics of the vector Ym, mean of the outputs Y:\n")
msgs += (__marge + " Size of the mean of the outputs...............: %i\n")%Ym.size
- msgs += (__marge + " Minimum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.min( Ym )
- msgs += (__marge + " Maximum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.max( Ym )
- msgs += (__marge + " Mean of the mean of the outputs...............: %."+str(__p)+"e\n")%numpy.mean( Ym, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the outputs.....: %."+str(__p)+"e\n")%numpy.std( Ym, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the outputs......: %." + str(__p) + "e\n")%numpy.min( Ym ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the outputs......: %." + str(__p) + "e\n")%numpy.max( Ym ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the outputs...............: %." + str(__p) + "e\n")%numpy.mean( Ym, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the outputs.....: %." + str(__p) + "e\n")%numpy.std( Ym, dtype=mfp ) # noqa: E501
msgs += ("\n")
Ye = numpy.mean( numpy.array( Ys ) - Ym, axis=0, dtype=mfp )
- msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n")
+ msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n") # noqa: E501
msgs += (__marge + " Size of the mean of the differences...........: %i\n")%Ye.size
- msgs += (__marge + " Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Ye )
- msgs += (__marge + " Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Ye )
- msgs += (__marge + " Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Ye, dtype=mfp )
- msgs += (__marge + " Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Ye, dtype=mfp )
+ msgs += (__marge + " Minimum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.min( Ye ) # noqa: E501
+ msgs += (__marge + " Maximum value of the mean of the differences..: %." + str(__p) + "e\n")%numpy.max( Ye ) # noqa: E501
+ msgs += (__marge + " Mean of the mean of the differences...........: %." + str(__p) + "e\n")%numpy.mean( Ye, dtype=mfp ) # noqa: E501
+ msgs += (__marge + " Standard error of the mean of the differences.: %." + str(__p) + "e\n")%numpy.std( Ye, dtype=mfp ) # noqa: E501
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
#
msgs += ("\n")
msgs += (__marge + "End of the \"%s\" verification\n\n"%self._name)
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 3
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 3
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import numpy, logging, copy
+import numpy
from daCore import BasicObjects
from daAlgorithms.Atoms import ecwnpso, ecwopso, ecwapso, ecwspso, ecwpspso
"SPSO-2011-AIS",
"SPSO-2011-SIS",
"SPSO-2011-PSIS",
- ],
+ ],
listadv = [
"PSO",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfIterations",
default = 50,
message = "Nombre maximal de pas d'optimisation",
minval = 0,
oldname = "MaximumNumberOfSteps",
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfFunctionEvaluations",
default = 15000,
typecast = int,
message = "Nombre maximal d'évaluations de la fonction",
minval = -1,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfInsects",
default = 40,
typecast = int,
message = "Nombre d'insectes dans l'essaim",
minval = -1,
- )
+ )
self.defineRequiredParameter(
name = "SwarmTopology",
default = "FullyConnectedNeighborhood",
"RingNeighborhoodWithRadius2", "RingNeighbourhoodWithRadius2",
"AdaptativeRandomWith3Neighbors", "AdaptativeRandomWith3Neighbours", "abest",
"AdaptativeRandomWith5Neighbors", "AdaptativeRandomWith5Neighbours",
- ],
+ ],
listadv = [
"VonNeumannNeighborhood", "VonNeumannNeighbourhood",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "InertiaWeight",
- default = 0.72135, # 1/(2*ln(2))
+ default = 0.72135, # 1/(2*ln(2))
typecast = float,
- message = "Part de la vitesse de l'essaim qui est imposée à l'insecte, ou poids de l'inertie (entre 0 et 1)",
+ message = "Part de la vitesse de l'essaim qui est imposée à l'insecte, ou poids de l'inertie (entre 0 et 1)", # noqa: E501
minval = 0.,
maxval = 1.,
oldname = "SwarmVelocity",
- )
+ )
self.defineRequiredParameter(
name = "CognitiveAcceleration",
- default = 1.19315, # 1/2+ln(2)
+ default = 1.19315, # 1/2+ln(2)
typecast = float,
message = "Taux de rappel à la meilleure position de l'insecte précédemment connue (positif)",
minval = 0.,
- )
+ )
self.defineRequiredParameter(
name = "SocialAcceleration",
- default = 1.19315, # 1/2+ln(2)
+ default = 1.19315, # 1/2+ln(2)
typecast = float,
message = "Taux de rappel au meilleur insecte du groupe local (positif)",
minval = 0.,
oldname = "GroupRecallRate",
- )
+ )
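+ # Note: the defaults above, w = 1/(2*ln(2)) ~ 0.72135 for the inertia weight and
+ # c1 = c2 = 1/2 + ln(2) ~ 1.19315 for the cognitive and social accelerations, are
+ # the values recommended by the Standard PSO 2011 formulation, consistent with the
+ # SPSO-2011 variants listed above.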
self.defineRequiredParameter(
name = "VelocityClampingFactor",
default = 0.3,
message = "Facteur de réduction de l'amplitude de variation des vitesses (entre 0 et 1)",
minval = 0.0001,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "QualityCriterion",
default = "AugmentedWeightedLeastSquares",
"LeastSquares", "LS", "L2",
"AbsoluteValue", "L1",
"MaximumError", "ME", "Linf",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtBackground",
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des paires de bornes",
- )
- self.defineRequiredParameter( # Pas de type
+ )
+ self.defineRequiredParameter( # Pas de type
name = "BoxBounds",
message = "Liste des paires de bornes d'incréments",
- )
+ )
self.defineRequiredParameter(
name = "InitializationPoint",
typecast = numpy.ravel,
message = "État initial imposé (par défaut, c'est l'ébauche si None)",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
- )
- self.setAttributes(tags=(
- "Optimization",
- "NonLinear",
- "MetaHeuristic",
- "Population",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "Optimization",
+ "NonLinear",
+ "MetaHeuristic",
+ "Population",
+ )
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- if self._parameters["Variant"] in ["CanonicalPSO", "PSO"]:
+ # --------------------------
+ if self._parameters["Variant"] in ["CanonicalPSO", "PSO"]:
ecwnpso.ecwnpso(self, Xb, Y, HO, R, B)
#
elif self._parameters["Variant"] in ["OGCR"]:
elif self._parameters["Variant"] in ["SPSO-2011-PSIS"]:
ecwpspso.ecwpspso(self, Xb, Y, HO, R, B)
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
message = "Quantile pour la régression de quantile",
minval = 0.,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "Minimizer",
default = "MMQR",
typecast = str,
message = "Minimiseur utilisé",
listval = ["MMQR",],
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfIterations",
default = 15000,
message = "Nombre maximal de pas d'optimisation",
minval = 1,
oldname = "MaximumNumberOfSteps",
- )
+ )
self.defineRequiredParameter(
name = "CostDecrementTolerance",
default = 1.e-6,
typecast = float,
message = "Maximum de variation de la fonction d'estimation lors de l'arrêt",
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtBackground",
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des valeurs de bornes",
- )
+ )
self.defineRequiredParameter(
name = "InitializationPoint",
typecast = numpy.ravel,
message = "État initial imposé (par défaut, c'est l'ébauche si None)",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO" ),
+ )
+ self.setAttributes(
+ tags=(
+ "Optimization",
+ "Risk",
+ "Variational",
)
- self.setAttributes(tags=(
- "Optimization",
- "Risk",
- "Variational",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
self._parameters["Bounds"] = NumericObjects.ForceNumericBounds( self._parameters["Bounds"] )
#
Hm = HO["Direct"].appliedTo
- #
+
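+ # Wrapper used by the MMQR solver: it stores the current state and its simulated
+ # observations, accumulates J = Jb + Jo (with Jb = 0 here), and returns H(X).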
def CostFunction(x):
- _X = numpy.asarray(x).reshape((-1,1))
+ _X = numpy.asarray(x).reshape((-1, 1))
if self._parameters["StoreInternalVariables"] or \
- self._toStore("CurrentState"):
+ self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( _X )
- _HX = numpy.asarray(Hm( _X )).reshape((-1,1))
+ _HX = numpy.asarray(Hm( _X )).reshape((-1, 1))
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( _HX )
Jb = 0.
self.StoredVariables["CostFunctionJo"].store( Jo )
self.StoredVariables["CostFunctionJ" ].store( J )
return _HX
- #
+
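+ # Gradient surrogate for MMQR: the tangent linear operator of H evaluated at X.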
def GradientOfCostFunction(x):
- _X = numpy.asarray(x).reshape((-1,1))
+ _X = numpy.asarray(x).reshape((-1, 1))
Hg = HO["Tangent"].asMatrix( _X )
return Hg
#
maxfun = self._parameters["MaximumNumberOfIterations"],
toler = self._parameters["CostDecrementTolerance"],
y = Y,
- )
+ )
else:
raise ValueError("Error in minimizer name: %s is unkown"%self._parameters["Minimizer"])
#
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if self._toStore("OMA") or \
- self._toStore("SimulatedObservationAtOptimum"):
- HXa = Hm(Xa).reshape((-1,1))
+ self._toStore("SimulatedObservationAtOptimum"):
+ HXa = Hm(Xa).reshape((-1, 1))
if self._toStore("Innovation") or \
- self._toStore("OMB") or \
- self._toStore("SimulatedObservationAtBackground"):
- HXb = Hm(Xb).reshape((-1,1))
+ self._toStore("OMB") or \
+ self._toStore("SimulatedObservationAtBackground"):
+ HXb = Hm(Xb).reshape((-1, 1))
Innovation = Y - HXb
if self._toStore("Innovation"):
self.StoredVariables["Innovation"].store( Innovation )
if self._toStore("SimulatedObservationAtOptimum"):
self.StoredVariables["SimulatedObservationAtOptimum"].store( HXa )
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
import math, numpy, logging
+import daCore
from daCore import BasicObjects, PlatformInfo
-mpr = PlatformInfo.PlatformInfo().MachinePrecision()
-mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
-from daCore.PlatformInfo import vfloat, has_matplotlib
from daCore.NumericObjects import FindIndexesFromNames, SingularValuesEstimation
from daAlgorithms.Atoms import eosg
+mpr = PlatformInfo.PlatformInfo().MachinePrecision()
+mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
default = [],
typecast = numpy.array,
message = "Ensemble de vecteurs d'état physique (snapshots), 1 état par colonne (Training Set)",
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfLocations",
default = 1,
typecast = int,
message = "Nombre maximal de positions",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ExcludeLocations",
default = [],
typecast = tuple,
message = "Liste des indices ou noms de positions exclues selon l'ordre interne d'un snapshot",
- )
+ )
self.defineRequiredParameter(
name = "NameOfLocations",
default = [],
typecast = tuple,
message = "Liste des noms de positions selon l'ordre interne d'un snapshot",
- )
+ )
self.defineRequiredParameter(
name = "SampleAsnUplet",
default = [],
typecast = tuple,
message = "Points de calcul définis par une liste de n-uplet",
- )
+ )
self.defineRequiredParameter(
name = "SampleAsExplicitHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages explicites de chaque variable comme une liste",
- )
+ message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages explicites de chaque variable comme une liste", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxStepHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages implicites de chaque variable par un triplet [min,max,step]",
- )
+ message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages implicites de chaque variable par un triplet [min,max,step]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxLatinHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube Latin dont on donne les bornes de chaque variable par une paire [min,max], suivi du nombre de points demandés",
- )
+ message = "Points de calcul définis par un hyper-cube Latin dont on donne les bornes de chaque variable par une paire [min,max], suivi du nombre de points demandés", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxSobolSequence",
default = [],
typecast = tuple,
- message = "Points de calcul définis par une séquence de Sobol dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre minimal de points demandés]",
- )
+ message = "Points de calcul définis par une séquence de Sobol dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre minimal de points demandés]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsIndependantRandomVariables",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]",
- )
+ message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"EnsembleOfStates",
"Residus",
"SingularValues",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "MaximumNumberOfModes",
default = 15000,
typecast = int,
message = "Nombre maximal de modes pour l'analyse",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ShowElementarySummary",
default = True,
typecast = bool,
message = "Calcule et affiche un résumé à chaque évaluation élémentaire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "ResultFile",
- default = self._name+"_result_file",
+ default = self._name + "_result_file",
typecast = str,
message = "Nom de base (hors extension) des fichiers de sauvegarde des résultats",
- )
+ )
self.defineRequiredParameter(
name = "PlotAndSave",
default = False,
typecast = bool,
message = "Trace et sauve les résultats",
- )
+ )
self.requireInputArguments(
mandatory= (),
optional = ("Xb", "HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Reduction",
+ "Checking",
)
- self.setAttributes(tags=(
- "Reduction",
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
elif isinstance(EOS, (list, tuple, daCore.Persistence.Persistence)):
__EOS = numpy.stack([numpy.ravel(_sn) for _sn in EOS], axis=1)
else:
- raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).")
+ raise ValueError("EnsembleOfSnapshots has to be an array/matrix (each column being a vector) or a list/tuple (each element being a vector).") # noqa: E501
__dimS, __nbmS = __EOS.shape
- logging.debug("%s Using a collection of %i snapshots of individual size of %i"%(self._name,__nbmS,__dimS))
+ logging.debug("%s Using a collection of %i snapshots of individual size of %i"%(self._name, __nbmS, __dimS))
#
__fdim, __nsn = __EOS.shape
#
- #--------------------------
+ # --------------------------
__s = self._parameters["ShowElementarySummary"]
__p = self._parameters["NumberOfPrintedDigits"]
- __r = __nsn
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- __ordre = int(math.log10(__nsn))+1
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ __ordre = int(math.log10(__nsn)) + 1
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the characteristics of the collection of\n")
else:
__ExcludedPoints = ()
if "NameOfLocations" in self._parameters:
- if isinstance(self._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) and len(self._parameters["NameOfLocations"]) == __dimS:
+ if isinstance(self._parameters["NameOfLocations"], (list, numpy.ndarray, tuple)) \
+ and len(self._parameters["NameOfLocations"]) == __dimS:
__NameOfLocations = self._parameters["NameOfLocations"]
else:
__NameOfLocations = ()
numpy.arange(__EOS.shape[0]),
__ExcludedPoints,
assume_unique = True,
- )
+ )
else:
__IncludedPoints = []
if len(__IncludedPoints) > 0:
if self._toStore("Residus"):
self.StoredVariables["Residus"].store( __qisv )
#
- nbsv = min(5,self._parameters["MaximumNumberOfModes"])
+ nbsv = min(5, self._parameters["MaximumNumberOfModes"])
msgs += ("\n")
msgs += (__flech + "Summary of the %i first singular values:\n"%nbsv)
msgs += (__marge + "---------------------------------------\n")
msgs += ("\n")
msgs += (__marge + "Singular values σ:\n")
for i in range(nbsv):
- msgs += __marge + (" σ[%i] = %."+str(__p)+"e\n")%(i+1,__sv[i])
+ msgs += __marge + (" σ[%i] = %." + str(__p) + "e\n")%(i + 1, __sv[i])
msgs += ("\n")
msgs += (__marge + "Singular values σ divided by the first one σ[1]:\n")
for i in range(nbsv):
- msgs += __marge + (" σ[%i] / σ[1] = %."+str(__p)+"e\n")%(i+1,__sv[i]/__sv[0])
+ msgs += __marge + (" σ[%i] / σ[1] = %." + str(__p) + "e\n")%(i + 1, __sv[i] / __sv[0])
#
if __s:
msgs += ("\n")
msgs += (__flech + "Ordered singular values and remaining variance:\n")
msgs += (__marge + "-----------------------------------------------\n")
- __entete = (" %"+str(__ordre)+"s | %22s | %22s | Variance: part, remaining")%("i","Singular value σ","σ[i]/σ[1]")
+ __entete = (" %" + str(__ordre) + "s | %22s | %22s | Variance: part, remaining")%("i", "Singular value σ", "σ[i]/σ[1]") # noqa: E501
#
__nbtirets = len(__entete) + 2
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += "\n" + __marge + __entete
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += ("\n")
#
cut1pd, cut1pc, cut1pm, cut1pi = 1, 1, 1, 1
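+ # The cut1p* counters record how many modes are needed to bring the remaining
+ # variance below 10%, 1%, 1‰ and 0.1‰ respectively, as reported in the summary below.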
svalue = __sv[ns]
rvalue = __sv[ns] / __sv[0]
vsinfo = 100 * __tisv[ns]
- rsinfo = max(100 * __qisv[ns],0.)
+ rsinfo = max(100 * __qisv[ns], 0.)
if __s:
- msgs += (__marge + " %0"+str(__ordre)+"i | %22."+str(__p)+"e | %22."+str(__p)+"e | %2i%s , %4.1f%s\n")%(ns,svalue,rvalue,vsinfo,"%",rsinfo,"%")
- if rsinfo > 10: cut1pd = ns+2 # 10%
- if rsinfo > 1: cut1pc = ns+2 # 1%
- if rsinfo > 0.1: cut1pm = ns+2 # 1‰
- if rsinfo > 0.01: cut1pi = ns+2 # 0.1‰
+ msgs += (__marge + " %0" + str(__ordre) + "i | %22." + str(__p) + "e | %22." + str(__p) + "e | %2i%s , %4.1f%s\n")%(ns, svalue, rvalue, vsinfo, "%", rsinfo, "%") # noqa: E501
+ if rsinfo > 10:
+ cut1pd = ns + 2 # 10%
+ if rsinfo > 1:
+ cut1pc = ns + 2 # 1%
+ if rsinfo > 0.1:
+ cut1pm = ns + 2 # 1‰
+ if rsinfo > 0.01:
+ cut1pi = ns + 2 # 0.1‰
#
if __s:
- msgs += __marge + "-"*__nbtirets + "\n"
+ msgs += __marge + "-" * __nbtirets + "\n"
msgs += ("\n")
msgs += (__flech + "Summary of variance cut-off:\n")
msgs += (__marge + "----------------------------\n")
if cut1pd > 0:
- msgs += __marge + "Representing more than 90%s of variance requires at least %i mode(s).\n"%("%",cut1pd)
+ msgs += __marge + "Representing more than 90%s of variance requires at least %i mode(s).\n"%("%", cut1pd)
if cut1pc > 0:
- msgs += __marge + "Representing more than 99%s of variance requires at least %i mode(s).\n"%("%",cut1pc)
+ msgs += __marge + "Representing more than 99%s of variance requires at least %i mode(s).\n"%("%", cut1pc)
if cut1pm > 0:
- msgs += __marge + "Representing more than 99.9%s of variance requires at least %i mode(s).\n"%("%",cut1pm)
+ msgs += __marge + "Representing more than 99.9%s of variance requires at least %i mode(s).\n"%("%", cut1pm)
if cut1pi > 0:
- msgs += __marge + "Representing more than 99.99%s of variance requires at least %i mode(s).\n"%("%",cut1pi)
+ msgs += __marge + "Representing more than 99.99%s of variance requires at least %i mode(s).\n"%("%", cut1pi)
#
- if has_matplotlib and self._parameters["PlotAndSave"]:
+ if PlatformInfo.has_matplotlib and self._parameters["PlotAndSave"]:
# Evite les message debug de matplotlib
dL = logging.getLogger().getEffectiveLevel()
logging.getLogger().setLevel(logging.WARNING)
msgs += (__marge + "Plot and save results in a file named \"%s\"\n"%str(self._parameters["ResultFile"]))
#
import matplotlib.pyplot as plt
- from matplotlib import ticker
- fig = plt.figure(figsize=(10,15))
+ fig = plt.figure(figsize=(10, 15))
plt.tight_layout()
if len(self._parameters["ResultTitle"]) > 0:
fig.suptitle(self._parameters["ResultTitle"])
else:
fig.suptitle("Singular values analysis on an ensemble of %i snapshots\n"%__nsn)
# ----
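+ # Three stacked panels: remaining variance to be explained (linear and log
+ # scales), then singular values on a linear axis, then on log-log axes.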
- ax = fig.add_subplot(3,1,1)
+ ax = fig.add_subplot(3, 1, 1)
ax.set_xlabel("Singular values index, numbered from 1 (first %i ones)"%len(__qisv))
ax.set_ylabel("Remaining variance to be explained (%, linear scale)", color="tab:blue")
ax.grid(True, which='both', color="tab:blue")
- ax.set_xlim(1,1+len(__qisv))
- ax.set_ylim(0,100)
- ax.plot(range(1,1+len(__qisv)), 100 * __qisv, linewidth=2, color="b", label="On linear scale")
+ ax.set_xlim(1, 1 + len(__qisv))
+ ax.set_ylim(0, 100)
+ ax.plot(range(1, 1 + len(__qisv)), 100 * __qisv, linewidth=2, color="b", label="On linear scale")
ax.tick_params(axis='y', labelcolor="tab:blue")
ax.yaxis.set_major_formatter('{x:.0f}%')
#
rg = ax.twinx()
rg.set_ylabel("Remaining variance to be explained (%, log scale)", color="tab:red")
rg.grid(True, which='both', color="tab:red")
- rg.set_xlim(1,1+len(__qisv))
+ rg.set_xlim(1, 1 + len(__qisv))
rg.set_yscale("log")
- rg.plot(range(1,1+len(__qisv)), 100 * __qisv, linewidth=2, color="r", label="On log10 scale")
- rg.set_ylim(rg.get_ylim()[0],101)
+ rg.plot(range(1, 1 + len(__qisv)), 100 * __qisv, linewidth=2, color="r", label="On log10 scale")
+ rg.set_ylim(rg.get_ylim()[0], 101)
rg.tick_params(axis='y', labelcolor="tab:red")
# ----
- ax = fig.add_subplot(3,1,2)
+ ax = fig.add_subplot(3, 1, 2)
ax.set_ylabel("Singular values")
- ax.set_xlim(1,1+len(__sv))
- ax.plot(range(1,1+len(__sv)), __sv, linewidth=2)
+ ax.set_xlim(1, 1 + len(__sv))
+ ax.plot(range(1, 1 + len(__sv)), __sv, linewidth=2)
ax.grid(True)
# ----
- ax = fig.add_subplot(3,1,3)
+ ax = fig.add_subplot(3, 1, 3)
ax.set_ylabel("Singular values (log scale)")
ax.grid(True, which='both')
- ax.set_xlim(1,1+len(__sv))
+ ax.set_xlim(1, 1 + len(__sv))
ax.set_xscale("log")
ax.set_yscale("log")
- ax.plot(range(1,1+len(__sv)), __sv, linewidth=2)
+ ax.plot(range(1, 1 + len(__sv)), __sv, linewidth=2)
# ----
plt.savefig(str(self._parameters["ResultFile"]))
plt.close(fig)
- except:
+ except Exception:
msgs += ("\n")
msgs += (__marge + "Saving figure fail, please update your Matplolib version.\n")
msgs += ("\n")
logging.getLogger().setLevel(dL)
#
msgs += ("\n")
- msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += (__marge + "%s\n"%("-" * 75,))
msgs += ("\n")
msgs += (__marge + "End of the \"%s\" verification\n\n"%self._name)
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 3
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 3
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import numpy, logging, copy
+import numpy, copy
from daCore import BasicObjects, NumericObjects, PlatformInfo
-from daCore.PlatformInfo import PlatformInfo, vfloat
+from daCore.PlatformInfo import vfloat
from daAlgorithms.Atoms import eosg
-mfp = PlatformInfo().MaximumPrecision()
+mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
default = [],
typecast = numpy.array,
message = "Ensemble de vecteurs d'état physique (snapshots), 1 état par colonne",
- )
+ )
self.defineRequiredParameter(
name = "SampleAsnUplet",
default = [],
typecast = tuple,
message = "Points de calcul définis par une liste de n-uplet",
- )
+ )
self.defineRequiredParameter(
name = "SampleAsExplicitHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages explicites de chaque variable comme une liste",
- )
+ message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages explicites de chaque variable comme une liste", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxStepHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages implicites de chaque variable par un triplet [min,max,step]",
- )
+ message = "Points de calcul définis par un hyper-cube dont on donne la liste des échantillonnages implicites de chaque variable par un triplet [min,max,step]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxLatinHyperCube",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube Latin dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre de points demandés]",
- )
+ message = "Points de calcul définis par un hyper-cube Latin dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre de points demandés]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsMinMaxSobolSequence",
default = [],
typecast = tuple,
- message = "Points de calcul définis par une séquence de Sobol dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre minimal de points demandés]",
- )
+ message = "Points de calcul définis par une séquence de Sobol dont on donne les bornes de chaque variable par une paire [min,max], suivi de la paire [dimension, nombre minimal de points demandés]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "SampleAsIndependantRandomVariables",
default = [],
typecast = tuple,
- message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]",
- )
+ message = "Points de calcul définis par un hyper-cube dont les points sur chaque axe proviennent de l'échantillonnage indépendant de la variable selon la spécification ['distribution',[parametres],nombre]", # noqa: E501
+ )
self.defineRequiredParameter(
name = "QualityCriterion",
default = "AugmentedWeightedLeastSquares",
listval = [
"DA",
"AugmentedWeightedLeastSquares", "AWLS",
- "WeightedLeastSquares","WLS",
+ "WeightedLeastSquares", "WLS",
"LeastSquares", "LS", "L2",
"AbsoluteValue", "L1",
"MaximumError", "ME", "Linf",
- ],
+ ],
listadv = [
"AugmentedPonderatedLeastSquares", "APLS",
"PonderatedLeastSquares", "PLS",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "SetDebug",
default = False,
typecast = bool,
message = "Activation du mode debug lors de l'exécution",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"Innovation",
"InnovationAtCurrentState",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "R", "B"),
optional = ("HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- if hasattr(Y,"store"):
- Yb = numpy.asarray( Y[-1] ).reshape((-1,1)) # Y en Vector ou VectorSerie
+ if hasattr(Y, "store"):
+ Yb = numpy.asarray( Y[-1] ).reshape((-1, 1)) # Y en Vector ou VectorSerie
else:
- Yb = numpy.asarray( Y ).reshape((-1,1)) # Y en Vector ou VectorSerie
+ Yb = numpy.asarray( Y ).reshape((-1, 1)) # Y en Vector ou VectorSerie
BI = B.getI()
RI = R.getI()
+
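+ # Quality of a sampled state x given its precomputed simulation HmX: "AWLS"/"DA"
+ # is the classical DA cost J(x) = 0.5*(x-Xb)'*BI*(x-Xb) + 0.5*d'*RI*d with the
+ # innovation d = Yb - H(x); "WLS" keeps only the observation term, while "L2",
+ # "L1" and "Linf" are unweighted norms of d.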
def CostFunction(x, HmX, QualityMeasure="AugmentedWeightedLeastSquares"):
if numpy.any(numpy.isnan(HmX)):
_X = numpy.nan
_HX = numpy.nan
Jb, Jo, J = numpy.nan, numpy.nan, numpy.nan
else:
- _X = numpy.asarray( x ).reshape((-1,1))
- _HX = numpy.asarray( HmX ).reshape((-1,1))
+ _X = numpy.asarray( x ).reshape((-1, 1))
+ _HX = numpy.asarray( HmX ).reshape((-1, 1))
_Innovation = Yb - _HX
assert Yb.size == _HX.size
assert Yb.size == _Innovation.size
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","AugmentedPonderatedLeastSquares","APLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "AugmentedPonderatedLeastSquares", "APLS", "DA"]: # noqa: E501
if BI is None or RI is None:
- raise ValueError("Background and Observation error covariance matrix has to be properly defined!")
- Jb = vfloat( 0.5 * (_X - Xb).T * (BI * (_X - Xb)) )
- Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
- elif QualityMeasure in ["WeightedLeastSquares","WLS","PonderatedLeastSquares","PLS"]:
+ raise ValueError("Background and Observation error covariance matrix has to be properly defined!") # noqa: E501
+ Jb = vfloat( 0.5 * (_X - Xb).T * (BI * (_X - Xb)) )
+ Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS", "PonderatedLeastSquares", "PLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = vfloat( 0.5 * _Innovation.T @ _Innovation )
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = vfloat( numpy.sum( numpy.abs(_Innovation), dtype=mfp ) )
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = vfloat(numpy.max( numpy.abs(_Innovation) ))
#
self._parameters["SampleAsIndependantRandomVariables"],
Xb,
self._parameters["SetSeed"],
- )
- if hasattr(sampleList,"__len__") and len(sampleList) == 0:
+ )
+ if hasattr(sampleList, "__len__") and len(sampleList) == 0:
EOX = numpy.array([[]])
else:
EOX = numpy.stack(tuple(copy.copy(sampleList)), axis=1)
EOS = self._parameters["EnsembleOfSnapshots"]
if EOX.shape[1] != EOS.shape[1]:
- raise ValueError("Numbers of states (=%i) and snapshots (=%i) has to be the same!"%(EOX.shape[1], EOS.shape[1]))
+ raise ValueError("Numbers of states (=%i) and snapshots (=%i) has to be the same!"%(EOX.shape[1], EOS.shape[1])) # noqa: E501
#
if self._toStore("EnsembleOfStates"):
self.StoredVariables["EnsembleOfStates"].store( EOX )
EOX, EOS = eosg.eosg(self, Xb, HO, True, False)
#
for i in range(EOS.shape[1]):
- J, Jb, Jo = CostFunction( EOX[:,i], EOS[:,i], self._parameters["QualityCriterion"])
+ J, Jb, Jo = CostFunction( EOX[:, i], EOS[:, i], self._parameters["QualityCriterion"])
# ----------
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
message = "Nombre maximal de pas d'optimisation",
minval = 1,
oldname = "MaximumNumberOfSteps",
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "LengthOfTabuList",
default = 50,
typecast = int,
message = "Longueur de la liste tabou",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "NumberOfElementaryPerturbations",
default = 1,
typecast = int,
message = "Nombre de perturbations élémentaires pour choisir une perturbation d'état",
minval = 1,
- )
+ )
self.defineRequiredParameter(
name = "NoiseDistribution",
default = "Uniform",
typecast = str,
message = "Distribution pour générer les perturbations d'état",
- listval = ["Gaussian","Uniform"],
- )
+ listval = ["Gaussian", "Uniform"],
+ )
self.defineRequiredParameter(
name = "QualityCriterion",
default = "AugmentedWeightedLeastSquares",
"LeastSquares", "LS", "L2",
"AbsoluteValue", "L1",
"MaximumError", "ME", "Linf",
- ],
- )
+ ],
+ )
self.defineRequiredParameter(
name = "NoiseHalfRange",
default = [],
typecast = numpy.ravel,
message = "Demi-amplitude des perturbations uniformes centrées d'état pour chaque composante de l'état",
- )
+ )
self.defineRequiredParameter(
name = "StandardDeviation",
default = [],
typecast = numpy.ravel,
message = "Ecart-type des perturbations gaussiennes d'état pour chaque composante de l'état",
- )
+ )
self.defineRequiredParameter(
name = "NoiseAddingProbability",
default = 1.,
message = "Probabilité de perturbation d'une composante de l'état",
minval = 0.,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtBackground",
"SimulatedObservationAtCurrentState",
"SimulatedObservationAtOptimum",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des valeurs de bornes",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
+ )
+ self.setAttributes(
+ tags=(
+ "Optimization",
+ "NonLinear",
+ "MetaHeuristic",
)
- self.setAttributes(tags=(
- "Optimization",
- "NonLinear",
- "MetaHeuristic",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
if self._parameters["NoiseDistribution"] == "Uniform":
- nrange = self._parameters["NoiseHalfRange"] # Vecteur
+ nrange = self._parameters["NoiseHalfRange"] # Vecteur
if nrange.size != Xb.size:
- raise ValueError("Noise generation by Uniform distribution requires range for all variable increments. The actual noise half range vector is:\n%s"%nrange)
+ raise ValueError("Noise generation by Uniform distribution requires range for all variable increments. The actual noise half range vector is:\n%s"%nrange) # noqa: E501
elif self._parameters["NoiseDistribution"] == "Gaussian":
- sigma = numpy.ravel(self._parameters["StandardDeviation"]) # Vecteur
+ sigma = numpy.ravel(self._parameters["StandardDeviation"]) # Vecteur
if sigma.size != Xb.size:
- raise ValueError("Noise generation by Gaussian distribution requires standard deviation for all variable increments. The actual standard deviation vector is:\n%s"%sigma)
+ raise ValueError("Noise generation by Gaussian distribution requires standard deviation for all variable increments. The actual standard deviation vector is:\n%s"%sigma) # noqa: E501
#
Hm = HO["Direct"].appliedTo
#
BI = B.getI()
RI = R.getI()
- #
+
def Tweak( x, NoiseDistribution, NoiseAddingProbability ):
- _X = numpy.array( x, dtype=float, copy=True ).ravel().reshape((-1,1))
+ _X = numpy.array( x, dtype=float, copy=True ).ravel().reshape((-1, 1))
if NoiseDistribution == "Uniform":
for i in range(_X.size):
if NoiseAddingProbability >= numpy.random.uniform():
_X[i] += _increment
#
return _X
- #
+
def StateInList( x, _TL ):
_X = numpy.ravel( x )
_xInList = False
for state in _TL:
- if numpy.all(numpy.abs( _X - numpy.ravel(state) ) <= 1e-16*numpy.abs(_X)):
+ if numpy.all(numpy.abs( _X - numpy.ravel(state) ) <= 1e-16 * numpy.abs(_X)):
_xInList = True
# if _xInList: import sys ; sys.exit()
return _xInList
- #
+
def CostFunction(x, QualityMeasure="AugmentedWeightedLeastSquares"):
- _X = numpy.ravel( x ).reshape((-1,1))
- _HX = numpy.ravel( Hm( _X ) ).reshape((-1,1))
+ _X = numpy.ravel( x ).reshape((-1, 1))
+ _HX = numpy.ravel( Hm( _X ) ).reshape((-1, 1))
_Innovation = Y - _HX
#
- if QualityMeasure in ["AugmentedWeightedLeastSquares","AWLS","DA"]:
+ if QualityMeasure in ["AugmentedWeightedLeastSquares", "AWLS", "DA"]:
if BI is None or RI is None:
raise ValueError("Background and Observation error covariance matrices has to be properly defined!")
Jb = vfloat(0.5 * (_X - Xb).T @ (BI @ (_X - Xb)))
Jo = vfloat(0.5 * _Innovation.T @ (RI @ _Innovation))
- elif QualityMeasure in ["WeightedLeastSquares","WLS"]:
+ elif QualityMeasure in ["WeightedLeastSquares", "WLS"]:
if RI is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
Jb = 0.
Jo = vfloat(0.5 * _Innovation.T @ (RI @ _Innovation))
- elif QualityMeasure in ["LeastSquares","LS","L2"]:
+ elif QualityMeasure in ["LeastSquares", "LS", "L2"]:
Jb = 0.
Jo = vfloat(0.5 * _Innovation.T @ _Innovation)
- elif QualityMeasure in ["AbsoluteValue","L1"]:
+ elif QualityMeasure in ["AbsoluteValue", "L1"]:
Jb = 0.
Jo = vfloat(numpy.sum( numpy.abs(_Innovation) ))
- elif QualityMeasure in ["MaximumError","ME", "Linf"]:
+ elif QualityMeasure in ["MaximumError", "ME", "Linf"]:
Jb = 0.
Jo = vfloat(numpy.max( numpy.abs(_Innovation) ))
#
_n = 0
_S = Xb
_qualityS = CostFunction( _S, self._parameters["QualityCriterion"] )
- _Best, _qualityBest = _S, _qualityS
+ _Best, _qualityBest = _S, _qualityS
_TabuList = []
_TabuList.append( _S )
while _n < self._parameters["MaximumNumberOfIterations"]:
_TabuList.pop(0)
_R = Tweak( _S, self._parameters["NoiseDistribution"], self._parameters["NoiseAddingProbability"] )
_qualityR = CostFunction( _R, self._parameters["QualityCriterion"] )
- for nbt in range(self._parameters["NumberOfElementaryPerturbations"]-1):
+ for nbt in range(self._parameters["NumberOfElementaryPerturbations"] - 1):
_W = Tweak( _S, self._parameters["NoiseDistribution"], self._parameters["NoiseAddingProbability"] )
_qualityW = CostFunction( _W, self._parameters["QualityCriterion"] )
- if (not StateInList(_W, _TabuList)) and ( (_qualityW < _qualityR) or StateInList(_R,_TabuList) ):
- _R, _qualityR = _W, _qualityW
+ if (not StateInList(_W, _TabuList)) and ( (_qualityW < _qualityR) or StateInList(_R, _TabuList) ):
+ _R, _qualityR = _W, _qualityW
if (not StateInList( _R, _TabuList )) and (_qualityR < _qualityS):
- _S, _qualityS = _R, _qualityR
+ _S, _qualityS = _R, _qualityR
_TabuList.append( _S )
if _qualityS < _qualityBest:
- _Best, _qualityBest = _S, _qualityS
+ _Best, _qualityBest = _S, _qualityS
#
self.StoredVariables["CurrentIterationNumber"].store( len(self.StoredVariables["CostFunctionJ"]) )
if self._parameters["StoreInternalVariables"] or self._toStore("CurrentState"):
# Calculs et/ou stockages supplémentaires
# ---------------------------------------
if self._toStore("OMA") or \
- self._toStore("SimulatedObservationAtOptimum"):
- HXa = Hm(Xa).reshape((-1,1))
+ self._toStore("SimulatedObservationAtOptimum"):
+ HXa = Hm(Xa).reshape((-1, 1))
if self._toStore("Innovation") or \
- self._toStore("OMB") or \
- self._toStore("SimulatedObservationAtBackground"):
- HXb = Hm(Xb).reshape((-1,1))
+ self._toStore("OMB") or \
+ self._toStore("SimulatedObservationAtBackground"):
+ HXb = Hm(Xb).reshape((-1, 1))
Innovation = Y - HXb
if self._toStore("Innovation"):
self.StoredVariables["Innovation"].store( Innovation )
if self._toStore("SimulatedObservationAtOptimum"):
self.StoredVariables["SimulatedObservationAtOptimum"].store( HXa )
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
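The loop above is a classical tabu search: at each step it draws up to
NumberOfElementaryPerturbations tweaked candidates, keeps the best one that is
not in the tabu list, accepts it only if it improves the current state, and
memorizes accepted states in a bounded tabu list while tracking the best state
ever seen. A minimal sketch of that acceptance logic on a toy quadratic cost
(hypothetical names, NumPy only, not the ADAO implementation):

    import numpy

    def tabu_search(x0, cost, n_iter=100, n_tweaks=3, tabu_len=50, sigma=0.1, seed=None):
        rng = numpy.random.default_rng(seed)
        s = numpy.ravel(x0).astype(float)
        best, jbest, tabu = s.copy(), cost(s), []
        for _ in range(n_iter):
            # Best of several elementary perturbations around the current state
            cands = [s + sigma * rng.standard_normal(s.size) for _ in range(n_tweaks)]
            cands = [c for c in cands if not any(numpy.allclose(c, t) for t in tabu)]
            if not cands:
                continue
            r = min(cands, key=cost)
            if cost(r) < cost(s):            # Accept an improving, non-tabu candidate
                s = r
                tabu.append(s.copy())
                if len(tabu) > tabu_len:     # Bounded tabu list
                    tabu.pop(0)
            if cost(s) < jbest:
                best, jbest = s.copy(), cost(s)
        return best, jbest

    # Example: minimize ||x - 1||^2 starting from the origin
    print(tabu_search(numpy.zeros(3), lambda x: float(((x - 1.)**2).sum()), seed=0))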
typecast = str,
message = "Formule de résidu utilisée",
listval = ["Taylor"],
- )
+ )
self.defineRequiredParameter(
name = "EpsilonMinimumExponent",
default = -8,
message = "Exposant minimal en puissance de 10 pour le multiplicateur d'incrément",
minval = -20,
maxval = 0,
- )
+ )
self.defineRequiredParameter(
name = "InitialDirection",
default = [],
typecast = list,
message = "Direction initiale de la dérivée directionnelle autour du point nominal",
- )
+ )
self.defineRequiredParameter(
name = "AmplitudeOfInitialDirection",
default = 1.,
typecast = float,
message = "Amplitude de la direction initiale de la dérivée directionnelle autour du point nominal",
- )
+ )
self.defineRequiredParameter(
name = "AmplitudeOfTangentPerturbation",
default = 1.e-2,
message = "Amplitude de la perturbation pour le calcul de la forme tangente",
minval = 1.e-10,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "SetSeed",
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
- )
+ )
self.defineRequiredParameter(
name = "NumberOfPrintedDigits",
default = 5,
typecast = int,
message = "Nombre de chiffres affichés pour les impressions de réels",
minval = 0,
- )
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
typecast = str,
message = "Titre du tableau et de la figure",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"CurrentState",
"Residu",
"SimulatedObservationAtCurrentState",
- ]
- )
+ ]
+ )
self.requireInputArguments(
mandatory= ("Xb", "HO"),
+ )
+ self.setAttributes(
+ tags=(
+ "Checking",
)
- self.setAttributes(tags=(
- "Checking",
- ))
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
Hm = HO["Direct"].appliedTo
Ht = HO["Tangent"].appliedInXTo
#
- X0 = numpy.ravel( Xb ).reshape((-1,1))
+ X0 = numpy.ravel( Xb ).reshape((-1, 1))
#
# ----------
__p = self._parameters["NumberOfPrintedDigits"]
#
- __marge = 5*u" "
- __flech = 3*"="+"> "
- msgs = ("\n") # 1
+ __marge = 5 * u" "
+ __flech = 3 * "=" + "> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
- msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + "====" + "=" * len(__rt) + "====\n")
else:
msgs += (__marge + "%s\n"%self._name)
- msgs += (__marge + "%s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%("=" * len(self._name),))
#
msgs += ("\n")
msgs += (__marge + "This test allows to analyze the numerical stability of the tangent of some\n")
msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
msgs += (__marge + " Type...............: %s\n")%type( X0 )
msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
- msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
- msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
- msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
- msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
- msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += (__marge + " Minimum value......: %." + str(__p) + "e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %." + str(__p) + "e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %." + str(__p) + "e\n")%numpy.mean( X0, dtype=mfp )
+    msgs += (__marge + "    Standard deviation.: %." + str(__p) + "e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %." + str(__p) + "e\n")%numpy.linalg.norm( X0 )
msgs += ("\n")
- msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__marge + "%s\n\n"%("-" * 75,))
msgs += (__flech + "Numerical quality indicators:\n")
msgs += (__marge + "-----------------------------\n")
msgs += ("\n")
- msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
+ msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"]) # noqa: E501
msgs += (__marge + "ratio of increments using the tangent linear:\n")
msgs += ("\n")
#
msgs += (__marge + "with a differential increment of value %.2e.\n"%HO["DifferentialIncrement"])
msgs += ("\n")
msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
- print(msgs) # 1
+ print(msgs) # 1
#
- Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
+ Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"], 1) ]
Perturbations.reverse()
#
- FX = numpy.ravel( Hm( X0 ) ).reshape((-1,1))
+ FX = numpy.ravel( Hm( X0 ) ).reshape((-1, 1))
NormeX = numpy.linalg.norm( X0 )
NormeFX = numpy.linalg.norm( FX )
- if NormeFX < mpr: NormeFX = mpr
+ if NormeFX < mpr:
+ NormeFX = mpr
if self._toStore("CurrentState"):
self.StoredVariables["CurrentState"].store( X0 )
if self._toStore("SimulatedObservationAtCurrentState"):
self._parameters["InitialDirection"],
self._parameters["AmplitudeOfInitialDirection"],
X0,
- )
+ )
#
# Calcul du gradient au point courant X pour l'incrément dX
# qui est le tangent en X multiplie par dX
# ---------------------------------------------------------
dX1 = float(self._parameters["AmplitudeOfTangentPerturbation"]) * dX0
GradFxdX = Ht( (X0, dX1) )
- GradFxdX = numpy.ravel( GradFxdX ).reshape((-1,1))
- GradFxdX = float(1./self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
+ GradFxdX = numpy.ravel( GradFxdX ).reshape((-1, 1))
+ GradFxdX = float(1. / self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
NormeGX = numpy.linalg.norm( GradFxdX )
- if NormeGX < mpr: NormeGX = mpr
+ if NormeGX < mpr:
+ NormeGX = mpr
#
# Boucle sur les perturbations
# ----------------------------
__nbtirets = len(__entete) + 2
- msgs = ("") # 2
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs = ("") # 2
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += "\n" + __marge + __entete
- msgs += "\n" + __marge + "-"*__nbtirets
+ msgs += "\n" + __marge + "-" * __nbtirets
msgs += ("\n")
- for i,amplitude in enumerate(Perturbations):
- dX = amplitude * dX0.reshape((-1,1))
+ for ip, amplitude in enumerate(Perturbations):
+ dX = amplitude * dX0.reshape((-1, 1))
#
if self._parameters["ResiduFormula"] == "Taylor":
- FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1, 1))
#
Residu = numpy.linalg.norm( FX_plus_dX - FX ) / (amplitude * NormeGX)
#
self.StoredVariables["Residu"].store( Residu )
- ttsep = " %2i %5.0e %9.3e %9.3e | %11.5e %5.1e\n"%(i,amplitude,NormeX,NormeFX,Residu,abs(Residu-1.)/amplitude)
+ ttsep = " %2i %5.0e %9.3e %9.3e | %11.5e %5.1e\n"%(ip, amplitude, NormeX, NormeFX, Residu, abs(Residu - 1.) / amplitude) # noqa: E501
msgs += __marge + ttsep
#
- msgs += (__marge + "-"*__nbtirets + "\n\n")
- msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name,self._parameters["ResiduFormula"]))
- msgs += (__marge + "%s\n"%("-"*75,))
- print(msgs) # 2
+ msgs += (__marge + "-" * __nbtirets + "\n\n")
+ msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name, self._parameters["ResiduFormula"])) # noqa: E501
+ msgs += (__marge + "%s\n"%("-" * 75,))
+ print(msgs) # 2
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
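The residue printed by this test is R(alpha) = ||F(X0 + alpha*dX0) - F(X0)|| /
(alpha*||F'(X0).dX0||): it must tend to 1 as alpha decreases when the tangent
operator is consistent with the direct code, with |R - 1|/alpha remaining
bounded. A minimal sketch of the check, substituting a finite-difference
surrogate for the user-provided tangent (illustrative only):

    import numpy

    def taylor_residues(F, X0, dX0, exponents=range(0, -9, -1), h=1.e-2):
        X0, dX0 = numpy.ravel(X0), numpy.ravel(dX0)
        FX = numpy.ravel(F(X0))
        # Finite-difference surrogate for the tangent applied to dX0
        GdX = (numpy.ravel(F(X0 + h * dX0)) - FX) / h
        for p in exponents:
            a = 10.**p
            R = numpy.linalg.norm(numpy.ravel(F(X0 + a * dX0)) - FX) / (a * numpy.linalg.norm(GdX))
            print("alpha=%7.0e  residue R=%11.5e  |R-1|/alpha=%9.3e"%(a, R, abs(R - 1.) / a))

    # Example with a smooth nonlinear operator: R converges towards 1
    taylor_residues(lambda x: numpy.array([x[0]**2, numpy.sin(x[1])]), [1., 0.5], [1., 1.])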
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
from daCore import BasicObjects
-from daAlgorithms.Atoms import ecwukf, c2ukf, uskf
+from daAlgorithms.Atoms import ecwukf, ecw2ukf
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
+
def __init__(self):
BasicObjects.Algorithm.__init__(self, "UNSCENTEDKALMANFILTER")
self.defineRequiredParameter(
message = "Variant ou formulation de la méthode",
listval = [
"UKF",
- "2UKF",
- ],
+ "S3F",
+ "CUKF", "2UKF",
+ "CS3F", "2S3F",
+ ],
listadv = [
"UKF-Std",
- "UKF-MSP",
- ],
- )
+ "MSS",
+ "CMSS", "2MSS",
+ "5OS",
+ "C5OS", "25OS",
+ ],
+ )
self.defineRequiredParameter(
name = "EstimationOf",
default = "State",
typecast = str,
message = "Estimation d'etat ou de parametres",
listval = ["State", "Parameters"],
- )
+ )
self.defineRequiredParameter(
name = "ConstrainedBy",
default = "EstimateProjection",
typecast = str,
message = "Prise en compte des contraintes",
listval = ["EstimateProjection"],
- )
+ )
self.defineRequiredParameter(
name = "Alpha",
- default = 1.,
+ default = 1.e-2,
typecast = float,
- message = "",
+ message = "Coefficient Alpha d'échelle",
minval = 1.e-4,
maxval = 1.,
- )
+ )
self.defineRequiredParameter(
name = "Beta",
default = 2,
typecast = float,
- message = "",
- )
+ message = "Coefficient Beta d'information a priori sur la distribution",
+ )
self.defineRequiredParameter(
name = "Kappa",
default = 0,
typecast = int,
- message = "",
+ message = "Coefficient Kappa secondaire d'échelle",
maxval = 2,
- )
+ )
self.defineRequiredParameter(
name = "Reconditioner",
default = 1.,
typecast = float,
- message = "",
+ message = "Coefficient de reconditionnement",
minval = 1.e-3,
maxval = 1.e+1,
- )
+ )
self.defineRequiredParameter(
name = "StoreInternalVariables",
default = False,
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
- )
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
"SimulatedObservationAtCurrentAnalysis",
"SimulatedObservationAtCurrentOptimum",
"SimulatedObservationAtCurrentState",
- ]
- )
- self.defineRequiredParameter( # Pas de type
+ ]
+ )
+ self.defineRequiredParameter( # Pas de type
name = "Bounds",
message = "Liste des valeurs de bornes",
- )
+ )
self.requireInputArguments(
mandatory= ("Xb", "Y", "HO", "R", "B"),
optional = ("U", "EM", "CM", "Q"),
- )
- self.setAttributes(tags=(
- "DataAssimilation",
- "NonLinear",
- "Filter",
- "Ensemble",
- "Dynamic",
- "Reduction",
- ))
+ )
+ self.setAttributes(
+ tags=(
+ "DataAssimilation",
+ "NonLinear",
+ "Filter",
+ "Ensemble",
+ "Dynamic",
+ "Reduction",
+ ),
+ features=(
+ "LocalOptimization",
+ "DerivativeFree",
+ "ParallelAlgorithm",
+ ),
+ )
def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
self._pre_run(Parameters, Xb, Y, U, HO, EM, CM, R, B, Q)
#
- #--------------------------
- # Default UKF
- #--------------------------
- if self._parameters["Variant"] in ["UKF", "UKF-Std"]:
- ecwukf.ecwukf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
+ # --------------------------
+ if self._parameters["Variant"] in ["UKF", "UKF-Std"]:
+ ecwukf.ecwukf(self, Xb, Y, U, HO, EM, CM, R, B, Q, "UKF")
+ #
+ elif self._parameters["Variant"] == "S3F":
+ ecwukf.ecwukf(self, Xb, Y, U, HO, EM, CM, R, B, Q, "S3F")
+ #
+ elif self._parameters["Variant"] == "MSS":
+ ecwukf.ecwukf(self, Xb, Y, U, HO, EM, CM, R, B, Q, "MSS")
+ #
+ elif self._parameters["Variant"] == "5OS":
+ ecwukf.ecwukf(self, Xb, Y, U, HO, EM, CM, R, B, Q, "5OS")
+ #
+ # --------------------------
+ elif self._parameters["Variant"] in ["CUKF", "2UKF"]:
+ ecw2ukf.ecw2ukf(self, Xb, Y, U, HO, EM, CM, R, B, Q, "UKF")
+ #
+ elif self._parameters["Variant"] in ["CS3F", "2S3F"]:
+ ecw2ukf.ecw2ukf(self, Xb, Y, U, HO, EM, CM, R, B, Q, "S3F")
#
- #--------------------------
- # Default 2UKF
- elif self._parameters["Variant"] == "2UKF":
- c2ukf.c2ukf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
+ elif self._parameters["Variant"] in ["CMSS", "2MSS"]:
+ ecw2ukf.ecw2ukf(self, Xb, Y, U, HO, EM, CM, R, B, Q, "MSS")
#
- #--------------------------
- # UKF-MSP
- elif self._parameters["Variant"] == "UKF-MSP":
- uskf.uskf(self, Xb, Y, U, HO, EM, CM, R, B, Q)
+ elif self._parameters["Variant"] in ["C5OS", "25OS"]:
+ ecw2ukf.ecw2ukf(self, Xb, Y, U, HO, EM, CM, R, B, Q, "5OS")
#
- #--------------------------
+ # --------------------------
else:
raise ValueError("Error in Variant name: %s"%self._parameters["Variant"])
#
- self._post_run(HO)
+ self._post_run(HO, EM)
return 0
# ==============================================================================
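The variants dispatched above differ essentially in their sigma-point sets: a
standard UKF propagates 2n+1 scaled points for a state of size n, a spherical
simplex set such as S3F needs only n+2, and the "C"/"2" prefixed forms
additionally take the "Bounds" constraints into account (through the
"ConstrainedBy" projection). A minimal sketch of the classical 2n+1 scaled set
driven by the (Alpha, Beta, Kappa) parameters declared above (a sketch, not the
ecwukf implementation):

    import numpy

    def ukf_sigma_points(x, P, alpha=1.e-2, beta=2., kappa=0):
        n = x.size
        lam = alpha**2 * (n + kappa) - n             # Scaling parameter lambda
        S = numpy.linalg.cholesky((n + lam) * P)     # Square root of (n+lambda) P
        pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
        wm = numpy.full(2 * n + 1, 0.5 / (n + lam))  # Mean weights
        wm[0] = lam / (n + lam)
        wc = wm.copy()                               # Covariance weights
        wc[0] += 1. - alpha**2 + beta
        return numpy.column_stack(pts), wm, wc

    # Example: 2n+1 = 5 sigma points for a 2-dimensional state, mean weights summing to 1
    X, wm, wc = ukf_sigma_points(numpy.zeros(2), numpy.eye(2))
    print(X.shape, wm.sum())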
from daCore.BasicObjects import UserScript, ExternalParameters
from daCore import PlatformInfo
from daCore import version
-#
-from daCore import ExtendedLogging ; ExtendedLogging.ExtendedLogging() # A importer en premier
-import logging
+
+from daCore import ExtendedLogging
+ExtendedLogging.ExtendedLogging() # A importer en premier
+import logging # noqa: E402
# ==============================================================================
class Aidsm(object):
__slots__ = (
"__name", "__objname", "__directory", "__case", "__parent",
"__adaoObject", "__StoredInputs", "__PostAnalysis", "__Concepts",
- )
- #
- def __init__(self, name = "", addViewers=None):
+ )
+
+ def __init__(self, name="", addViewers=None):
self.__name = str(name)
self.__objname = None
self.__directory = None
self.__StoredInputs = {}
self.__PostAnalysis = []
#
- self.__Concepts = [ # Liste exhaustive
+ self.__Concepts = [ # Liste exhaustive
"AlgorithmParameters",
"Background",
"BackgroundError",
"RegulationParameters",
"SupplementaryParameters",
"UserPostAnalysis",
- ]
+ ]
#
for ename in self.__Concepts:
self.__adaoObject[ename] = None
self.__adaoObject[ename] = Covariance(ename, asEyeByScalar = 1.e-16)
for ename in ("Observer", "UserPostAnalysis"):
self.__adaoObject[ename] = []
- self.__StoredInputs[ename] = [] # Vide par defaut
+ self.__StoredInputs[ename] = [] # Vide par defaut
self.__StoredInputs["Name"] = self.__name
self.__StoredInputs["Directory"] = self.__directory
#
# Récupère le chemin du répertoire parent et l'ajoute au path
# (Cela complète l'action de la classe PathManagement dans PlatformInfo,
# qui est activée dans Persistence)
- self.__parent = os.path.abspath(os.path.join(os.path.dirname(__file__),".."))
+ self.__parent = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
sys.path.insert(0, self.__parent)
- sys.path = PlatformInfo.uniq( sys.path ) # Conserve en unique exemplaire chaque chemin
+ sys.path = PlatformInfo.uniq( sys.path ) # Conserve en unique exemplaire chaque chemin
def set(self,
- Concept = None, # Premier argument
+ Concept = None, # Premier argument
Algorithm = None,
AppliedInXb = None,
Checked = False,
VectorSerie = None,
):
"Interface unique de définition de variables d'entrées par argument"
- self.__case.register("set",dir(),locals(),None,True)
+ self.__case.register("set", dir(), locals(), None, True)
try:
- if Concept in ("Background", "CheckingPoint", "ControlInput", "Observation"):
- commande = getattr(self,"set"+Concept)
+ if Concept in ("Background", "CheckingPoint", "ControlInput", "Observation"):
+ commande = getattr(self, "set" + Concept)
commande(Vector, VectorSerie, Script, DataFile, ColNames, ColMajor, Stored, Scheduler, Checked )
elif Concept in ("BackgroundError", "ObservationError", "EvolutionError"):
- commande = getattr(self,"set"+Concept)
+ commande = getattr(self, "set" + Concept)
commande(Matrix, ScalarSparseMatrix, DiagonalSparseMatrix,
Script, Stored, ObjectMatrix, Checked )
elif Concept == "AlgorithmParameters":
Parameters, Script, ExtraArguments,
Stored, PerformanceProfile, InputFunctionAsMulti, Checked )
elif Concept in ("EvolutionModel", "ControlModel"):
- commande = getattr(self,"set"+Concept)
+ commande = getattr(self, "set" + Concept)
commande(
Matrix, OneFunction, ThreeFunctions,
Parameters, Script, Scheduler, ExtraArguments,
else:
raise ValueError("the variable named '%s' is not allowed."%str(Concept))
except Exception as e:
- if isinstance(e, SyntaxError): msg = " at %s: %s"%(e.offset, e.text)
- else: msg = ""
- raise ValueError(("during settings, the following error occurs:\n"+\
- "\n%s%s\n\nSee also the potential messages, "+\
- "which can show the origin of the above error, "+\
- "in the launching terminal.")%(str(e),msg))
+ if isinstance(e, SyntaxError):
+ msg = " at %s: %s"%(e.offset, e.text)
+ else:
+ msg = ""
+ raise ValueError((
+ "during settings, the following error occurs:\n" + \
+ "\n%s%s\n\nSee also the potential messages, " + \
+ "which can show the origin of the above error, " + \
+ "in the launching terminal.")%(str(e), msg))
# -----------------------------------------------------------
- def setBackground(self,
+ def setBackground(
+ self,
Vector = None,
VectorSerie = None,
Script = None,
ColMajor = False,
Stored = False,
Scheduler = None,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "Background"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = State(
name = Concept,
asVector = Vector,
colMajor = ColMajor,
scheduledBy = Scheduler,
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setCheckingPoint(self,
+ def setCheckingPoint(
+ self,
Vector = None,
VectorSerie = None,
Script = None,
ColMajor = False,
Stored = False,
Scheduler = None,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "CheckingPoint"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = State(
name = Concept,
asVector = Vector,
colMajor = ColMajor,
scheduledBy = Scheduler,
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setControlInput(self,
+ def setControlInput(
+ self,
Vector = None,
VectorSerie = None,
Script = None,
ColMajor = False,
Stored = False,
Scheduler = None,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "ControlInput"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = State(
name = Concept,
asVector = Vector,
colMajor = ColMajor,
scheduledBy = Scheduler,
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setObservation(self,
+ def setObservation(
+ self,
Vector = None,
VectorSerie = None,
Script = None,
ColMajor = False,
Stored = False,
Scheduler = None,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "Observation"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = State(
name = Concept,
asVector = Vector,
colMajor = ColMajor,
scheduledBy = Scheduler,
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setBackgroundError(self,
+ def setBackgroundError(
+ self,
Matrix = None,
ScalarSparseMatrix = None,
DiagonalSparseMatrix = None,
Script = None,
Stored = False,
ObjectMatrix = None,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "BackgroundError"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = Covariance(
name = Concept,
asCovariance = Matrix,
asCovObject = ObjectMatrix,
asScript = self.__with_directory(Script),
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setObservationError(self,
+ def setObservationError(
+ self,
Matrix = None,
ScalarSparseMatrix = None,
DiagonalSparseMatrix = None,
Script = None,
Stored = False,
ObjectMatrix = None,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "ObservationError"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = Covariance(
name = Concept,
asCovariance = Matrix,
asCovObject = ObjectMatrix,
asScript = self.__with_directory(Script),
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setEvolutionError(self,
+ def setEvolutionError(
+ self,
Matrix = None,
ScalarSparseMatrix = None,
DiagonalSparseMatrix = None,
Script = None,
Stored = False,
ObjectMatrix = None,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "EvolutionError"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = Covariance(
name = Concept,
asCovariance = Matrix,
asCovObject = ObjectMatrix,
asScript = self.__with_directory(Script),
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setObservationOperator(self,
+ def setObservationOperator(
+ self,
Matrix = None,
OneFunction = None,
ThreeFunctions = None,
Stored = False,
PerformanceProfile = None,
InputFunctionAsMulti = False,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "ObservationOperator"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = FullOperator(
name = Concept,
asMatrix = Matrix,
inputAsMF = InputFunctionAsMulti,
scheduledBy = None,
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setEvolutionModel(self,
+ def setEvolutionModel(
+ self,
Matrix = None,
OneFunction = None,
ThreeFunctions = None,
Stored = False,
PerformanceProfile = None,
InputFunctionAsMulti = False,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "EvolutionModel"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = FullOperator(
name = Concept,
asMatrix = Matrix,
inputAsMF = InputFunctionAsMulti,
scheduledBy = Scheduler,
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
- def setControlModel(self,
+ def setControlModel(
+ self,
Matrix = None,
OneFunction = None,
ThreeFunctions = None,
Stored = False,
PerformanceProfile = None,
InputFunctionAsMulti = False,
- Checked = False):
+ Checked = False ):
"Définition d'un concept de calcul"
Concept = "ControlModel"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = FullOperator(
name = Concept,
asMatrix = Matrix,
inputAsMF = InputFunctionAsMulti,
scheduledBy = Scheduler,
toBeChecked = Checked,
- )
+ )
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setName(self, String=None):
"Définition d'un concept de calcul"
- self.__case.register("setName",dir(),locals())
+ self.__case.register("setName", dir(), locals())
if String is not None:
self.__name = str(String)
else:
def setDirectory(self, String=None):
"Définition d'un concept de calcul"
- self.__case.register("setDirectory",dir(),locals())
+ self.__case.register("setDirectory", dir(), locals())
if os.path.isdir(os.path.abspath(str(String))):
self.__directory = os.path.abspath(str(String))
else:
def setDebug(self, __level = 10):
"NOTSET=0 < DEBUG=10 < INFO=20 < WARNING=30 < ERROR=40 < CRITICAL=50"
- self.__case.register("setDebug",dir(),locals())
+ self.__case.register("setDebug", dir(), locals())
log = logging.getLogger()
log.setLevel( __level )
logging.debug("Mode debug initialisé avec %s %s"%(version.name, version.version))
def setNoDebug(self):
"NOTSET=0 < DEBUG=10 < INFO=20 < WARNING=30 < ERROR=40 < CRITICAL=50"
- self.__case.register("setNoDebug",dir(),locals())
+ self.__case.register("setNoDebug", dir(), locals())
log = logging.getLogger()
log.setLevel( logging.WARNING )
self.__StoredInputs["Debug"] = logging.WARNING
self.__StoredInputs["NoDebug"] = True
return 0
- def setAlgorithmParameters(self,
+ def setAlgorithmParameters(
+ self,
Algorithm = None,
Parameters = None,
- Script = None):
+ Script = None ):
"Définition d'un concept de calcul"
Concept = "AlgorithmParameters"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = AlgorithmAndParameters(
name = Concept,
asAlgorithm = Algorithm,
asDict = Parameters,
asScript = self.__with_directory(Script),
- )
+ )
return 0
- def updateAlgorithmParameters(self,
+ def updateAlgorithmParameters(
+ self,
Parameters = None,
- Script = None):
+ Script = None ):
"Mise à jour d'un concept de calcul"
Concept = "AlgorithmParameters"
if Concept not in self.__adaoObject or self.__adaoObject[Concept] is None:
self.__adaoObject[Concept].updateParameters(
asDict = Parameters,
asScript = self.__with_directory(Script),
- )
+ )
# RaJ du register
return 0
- def setRegulationParameters(self,
+ def setRegulationParameters(
+ self,
Algorithm = None,
Parameters = None,
- Script = None):
+ Script = None ):
"Définition d'un concept de calcul"
Concept = "RegulationParameters"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = RegulationAndParameters(
name = Concept,
asAlgorithm = Algorithm,
asDict = Parameters,
asScript = self.__with_directory(Script),
- )
+ )
return 0
- def setSupplementaryParameters(self,
+ def setSupplementaryParameters(
+ self,
Parameters = None,
- Script = None):
+ Script = None ):
"Définition d'un concept de calcul"
Concept = "SupplementaryParameters"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = ExternalParameters(
name = Concept,
asDict = Parameters,
asScript = self.__with_directory(Script),
- )
+ )
return 0
- def updateSupplementaryParameters(self,
+ def updateSupplementaryParameters(
+ self,
Parameters = None,
- Script = None):
+ Script = None ):
"Mise à jour d'un concept de calcul"
Concept = "SupplementaryParameters"
if Concept not in self.__adaoObject or self.__adaoObject[Concept] is None:
self.__adaoObject[Concept].updateParameters(
asDict = Parameters,
asScript = self.__with_directory(Script),
- )
+ )
return 0
- def setObserver(self,
+ def setObserver(
+ self,
Variable = None,
Template = None,
String = None,
Script = None,
Info = None,
ObjectFunction = None,
- Scheduler = None):
+ Scheduler = None ):
"Définition d'un concept de calcul"
Concept = "Observer"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept].append( DataObserver(
name = Concept,
onVariable = Variable,
withInfo = Info,
scheduledBy = Scheduler,
withAlgo = self.__adaoObject["AlgorithmParameters"]
- ))
+ ))
return 0
- def removeObserver(self,
+ def removeObserver(
+ self,
Variable = None,
- ObjectFunction = None,
- ):
+ ObjectFunction = None ):
"Permet de retirer un observer à une ou des variable nommées"
if "AlgorithmParameters" not in self.__adaoObject:
raise ValueError("No algorithm registred, ask for one before removing observers")
else:
return self.__adaoObject["AlgorithmParameters"].removeObserver( ename, ObjectFunction )
- def setUserPostAnalysis(self,
+ def setUserPostAnalysis(
+ self,
Template = None,
String = None,
- Script = None):
+ Script = None ):
"Définition d'un concept de calcul"
Concept = "UserPostAnalysis"
- self.__case.register("set"+Concept, dir(), locals())
+ self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept].append( repr(UserScript(
name = Concept,
asTemplate = Template,
asString = String,
asScript = self.__with_directory(Script),
- )))
+ )))
return 0
# -----------------------------------------------------------
"Récupération d'une sortie du calcul"
if Concept is not None:
try:
- self.__case.register("get", dir(), locals(), Concept) # Break pickle in Python 2
+ self.__case.register("get", dir(), locals(), Concept) # Break pickle in Python 2
except Exception:
pass
if Concept in self.__StoredInputs:
raise ValueError("The requested key \"%s\" does not exists as an input or a stored variable."%Concept)
else:
allvariables = {}
- allvariables.update( {"AlgorithmParameters":self.__adaoObject["AlgorithmParameters"].get()} )
+ allvariables.update( {"AlgorithmParameters": self.__adaoObject["AlgorithmParameters"].get()} )
if self.__adaoObject["SupplementaryParameters"] is not None:
- allvariables.update( {"SupplementaryParameters":self.__adaoObject["SupplementaryParameters"].get()} )
+ allvariables.update( {"SupplementaryParameters": self.__adaoObject["SupplementaryParameters"].get()} )
# allvariables.update( self.__adaoObject["AlgorithmParameters"].get() )
allvariables.update( self.__StoredInputs )
allvariables.pop('Observer', None)
préalablement choisi sinon la méthode renvoie "None".
"""
if len(list(self.__adaoObject["AlgorithmParameters"].keys())) == 0 and \
- len(list(self.__StoredInputs.keys())) == 0:
+ len(list(self.__StoredInputs.keys())) == 0:
return None
else:
variables = []
if len(list(self.__adaoObject["AlgorithmParameters"].keys())) > 0:
variables.extend(list(self.__adaoObject["AlgorithmParameters"].keys()))
if self.__adaoObject["SupplementaryParameters"] is not None and \
- len(list(self.__adaoObject["SupplementaryParameters"].keys())) > 0:
+ len(list(self.__adaoObject["SupplementaryParameters"].keys())) > 0:
variables.extend(list(self.__adaoObject["SupplementaryParameters"].keys()))
if len(list(self.__StoredInputs.keys())) > 0:
variables.extend( list(self.__StoredInputs.keys()) )
"""
files = []
for directory in sys.path:
- trypath = os.path.join(directory,"daAlgorithms")
+ trypath = os.path.join(directory, "daAlgorithms")
if os.path.isdir(trypath):
for fname in os.listdir(trypath):
- if os.path.isfile(os.path.join(trypath,fname)):
+ if os.path.isfile(os.path.join(trypath, fname)):
root, ext = os.path.splitext(fname)
- if ext != ".py": continue
- with open(os.path.join(trypath,fname)) as fc:
+ if ext != ".py":
+ continue
+ with open(os.path.join(trypath, fname)) as fc:
iselal = bool("class ElementaryAlgorithm" in fc.read())
if iselal and ext == '.py' and root != '__init__':
files.append(root)
se trouve un sous-répertoire "daAlgorithms"
"""
if not os.path.isdir(Path):
- raise ValueError("The given "+Path+" argument must exist as a directory")
- if not os.path.isdir(os.path.join(Path,"daAlgorithms")):
- raise ValueError("The given \""+Path+"\" argument must contain a subdirectory named \"daAlgorithms\"")
- if not os.path.isfile(os.path.join(Path,"daAlgorithms","__init__.py")):
- raise ValueError("The given \""+Path+"/daAlgorithms\" path must contain a file named \"__init__.py\"")
+ raise ValueError("The given " + Path + " argument must exist as a directory")
+ if not os.path.isdir(os.path.join(Path, "daAlgorithms")):
+ raise ValueError("The given \"" + Path + "\" argument must contain a subdirectory named \"daAlgorithms\"")
+ if not os.path.isfile(os.path.join(Path, "daAlgorithms", "__init__.py")):
+ raise ValueError("The given \"" + Path + "/daAlgorithms\" path must contain a file named \"__init__.py\"")
sys.path.insert(0, os.path.abspath(Path))
- sys.path = PlatformInfo.uniq( sys.path ) # Conserve en unique exemplaire chaque chemin
+ sys.path = PlatformInfo.uniq( sys.path ) # Conserve en unique exemplaire chaque chemin
return 0
# -----------------------------------------------------------
def execute(self, Executor=None, SaveCaseInFile=None, nextStep=False):
"Lancement du calcul"
- self.__case.register("execute",dir(),locals(),None,True)
- self.updateAlgorithmParameters(Parameters={"nextStep":bool(nextStep)})
- if not nextStep: Operator.CM.clearCache()
+ self.__case.register("execute", dir(), locals(), None, True)
+ self.updateAlgorithmParameters(Parameters={"nextStep": bool(nextStep)})
+ if not nextStep:
+ Operator.CM.clearCache()
try:
- if Executor == "YACS": self.__executeYACSScheme( SaveCaseInFile )
- else: self.__executePythonScheme( SaveCaseInFile )
+ if Executor == "YACS":
+ self.__executeYACSScheme( SaveCaseInFile )
+ else:
+ self.__executePythonScheme( SaveCaseInFile )
except Exception as e:
- if isinstance(e, SyntaxError): msg = "at %s: %s"%(e.offset, e.text)
- else: msg = ""
- raise ValueError(("during execution, the following error occurs:\n"+\
- "\n%s %s\n\nSee also the potential messages, "+\
- "which can show the origin of the above error, "+\
- "in the launching terminal.\n")%(str(e),msg))
+ if isinstance(e, SyntaxError):
+ msg = "at %s: %s"%(e.offset, e.text)
+ else:
+ msg = ""
+ raise ValueError((
+ "during execution, the following error occurs:\n" + \
+ "\n%s %s\n\nSee also the potential messages, " + \
+ "which can show the origin of the above error, " + \
+ "in the launching terminal.\n")%(str(e), msg))
return 0
def __executePythonScheme(self, FileName=None):
if FileName is not None:
self.dump( FileName, "TUI")
self.__adaoObject["AlgorithmParameters"].executePythonScheme( self.__adaoObject )
- if "UserPostAnalysis" in self.__adaoObject and len(self.__adaoObject["UserPostAnalysis"])>0:
+ if "UserPostAnalysis" in self.__adaoObject and len(self.__adaoObject["UserPostAnalysis"]) > 0:
self.__objname = self.__retrieve_objname()
for __UpaOne in self.__adaoObject["UserPostAnalysis"]:
__UpaOne = eval(str(__UpaOne))
- exec(__UpaOne, {}, {'self':self, 'ADD':self, 'case':self, 'adaopy':self, self.__objname:self})
+ exec(__UpaOne, {}, {'self': self, 'ADD': self, 'case': self, 'adaopy': self, self.__objname: self})
return 0
def __executeYACSScheme(self, FileName=None):
def load(self, FileName=None, Content=None, Object=None, Formater="TUI"):
"Chargement normalisé des commandes"
__commands = self.__case.load(FileName, Content, Object, Formater)
- from numpy import array, matrix
+ from numpy import array, matrix # noqa: F401
for __command in __commands:
- if (__command.find("set")>-1 and __command.find("set_")<0) or 'UserPostAnalysis' in __command:
- exec("self."+__command, {}, locals())
+ if (__command.find("set") > -1 and __command.find("set_") < 0) or 'UserPostAnalysis' in __command:
+ exec("self." + __command, {}, locals())
else:
self.__PostAnalysis.append(__command)
return self
def convert(self,
- FileNameFrom=None, ContentFrom=None, ObjectFrom=None, FormaterFrom="TUI",
- FileNameTo=None, FormaterTo="TUI",
- ):
+ FileNameFrom=None, ContentFrom=None, ObjectFrom=None, FormaterFrom="TUI",
+ FileNameTo=None, FormaterTo="TUI" ):
"Conversion normalisée des commandes"
return self.load(
FileName=FileNameFrom, Content=ContentFrom, Object=ObjectFrom, Formater=FormaterFrom
- ).dump(
+ ).dump(
FileName=FileNameTo, Formater=FormaterTo
- )
+ )
def clear(self):
"Effacement du contenu du cas en cours"
for level in reversed(inspect.stack()):
__names += [name for name, value in level.frame.f_locals.items() if value is self]
__names += [name for name, value in globals().items() if value is self]
- while 'self' in __names: __names.remove('self') # Devrait toujours être trouvé, donc pas d'erreur
+ while 'self' in __names:
+ __names.remove('self') # Devrait toujours être trouvé, donc pas d'erreur
if len(__names) > 0:
self.__objname = __names[0]
else:
msg = PlatformInfo.PlatformInfo().getAllInformation(" ", title)
return msg
+ def callinfo(self, __prefix=" "):
+ msg = ""
+ for oname in ["ObservationOperator", "EvolutionModel"]:
+ if hasattr(self.__adaoObject[oname], "nbcalls"):
+ ostats = self.__adaoObject[oname].nbcalls()
+ msg += "\n%sNumber of calls for the %s:"%(__prefix, oname)
+ for otype in ["Direct", "Tangent", "Adjoint"]:
+ if otype in ostats:
+ msg += "\n%s%30s : %s"%(__prefix, "%s evaluation"%(otype,), ostats[otype][0])
+ msg += "\n"
+ return msg
+
def prepare_to_pickle(self):
"Retire les variables non pickelisables, avec recopie efficace"
if self.__adaoObject['AlgorithmParameters'] is not None:
for k in self.__adaoObject['AlgorithmParameters'].keys():
- if k == "Algorithm": continue
+ if k == "Algorithm":
+ continue
if k in self.__StoredInputs:
raise ValueError("The key \"%s\" to be transfered for pickling will overwrite an existing one."%(k,))
if self.__adaoObject['AlgorithmParameters'].hasObserver( k ):
self.__adaoObject['AlgorithmParameters'].removeObserver( k, "", True )
self.__StoredInputs[k] = self.__adaoObject['AlgorithmParameters'].pop(k, None)
if sys.version_info[0] == 2:
- del self.__adaoObject # Because it breaks pickle in Python 2. Not required for Python 3
- del self.__case # Because it breaks pickle in Python 2. Not required for Python 3
+ del self.__adaoObject # Because it breaks pickle in Python 2. Not required for Python 3
+ del self.__case # Because it breaks pickle in Python 2. Not required for Python 3
if sys.version_info.major < 3:
return 0
else:
Generic ADAO TUI builder
"""
__slots__ = ()
- #
+
def __init__(self, name = ""):
_Aidsm.__init__(self, name)
__slots__ = (
"__tolerBP", "__lengthOR", "__initlnOR", "__seenNames", "__enabled",
"__listOPCV",
- )
- #
+ )
+
def __init__(self,
toleranceInRedundancy = 1.e-18,
- lengthOfRedundancy = -1,
- ):
+ lengthOfRedundancy = -1 ):
"""
Les caractéristiques de tolérance peuvent être modifiées à la création.
"""
__alc = False
__HxV = None
if self.__enabled:
- for i in range(min(len(self.__listOPCV),self.__lengthOR)-1,-1,-1):
+ for i in range(min(len(self.__listOPCV), self.__lengthOR) - 1, -1, -1):
if not hasattr(xValue, 'size'):
pass
elif (str(oName) != self.__listOPCV[i][3]):
self.__lengthOR = 2 * min(numpy.size(xValue), 50) + 2
self.__initlnOR = self.__lengthOR
self.__seenNames.append(str(oName))
- if str(oName) not in self.__seenNames: # Etend la liste si nouveau
+ if str(oName) not in self.__seenNames: # Étend la liste si nouveau
self.__lengthOR += 2 * min(numpy.size(xValue), 50) + 2
self.__initlnOR += self.__lengthOR
self.__seenNames.append(str(oName))
while len(self.__listOPCV) > self.__lengthOR:
self.__listOPCV.pop(0)
- self.__listOPCV.append( (
- copy.copy(numpy.ravel(xValue)), # 0 Previous point
- copy.copy(HxValue), # 1 Previous value
- numpy.linalg.norm(xValue), # 2 Norm
- str(oName), # 3 Operator name
- ) )
+ self.__listOPCV.append((
+ copy.copy(numpy.ravel(xValue)), # 0 Previous point
+ copy.copy(HxValue), # 1 Previous value
+ numpy.linalg.norm(xValue), # 2 Norm
+ str(oName), # 3 Operator name
+ ))
def disable(self):
"Inactive le cache"
"__name", "__NbCallsAsMatrix", "__NbCallsAsMethod",
"__NbCallsOfCached", "__reduceM", "__avoidRC", "__inputAsMF",
"__mpEnabled", "__extraArgs", "__Method", "__Matrix", "__Type",
- )
+ )
#
NbCallsAsMatrix = 0
NbCallsAsMethod = 0
NbCallsOfCached = 0
CM = CacheManager()
- #
+
def __init__(self,
- name = "GenericOperator",
- fromMethod = None,
- fromMatrix = None,
- avoidingRedundancy = True,
- reducingMemoryUse = False,
- inputAsMultiFunction = False,
- enableMultiProcess = False,
- extraArguments = None,
- ):
+ name = "GenericOperator",
+ fromMethod = None,
+ fromMatrix = None,
+ avoidingRedundancy = True,
+ reducingMemoryUse = False,
+ inputAsMultiFunction = False,
+ enableMultiProcess = False,
+ extraArguments = None ):
"""
On construit un objet de ce type en fournissant, à l'aide de l'un des
deux mots-clé, soit une fonction ou un multi-fonction python, soit une
self.__inputAsMF = bool( inputAsMultiFunction )
self.__mpEnabled = bool( enableMultiProcess )
self.__extraArgs = extraArguments
- if fromMethod is not None and self.__inputAsMF:
- self.__Method = fromMethod # logtimer(fromMethod)
+ if fromMethod is not None and self.__inputAsMF:
+ self.__Method = fromMethod # logtimer(fromMethod)
self.__Matrix = None
self.__Type = "Method"
elif fromMethod is not None and not self.__inputAsMF:
elif fromMatrix is not None:
self.__Method = None
if isinstance(fromMatrix, str):
- fromMatrix = PlatformInfo.strmatrix2liststr( fromMatrix )
+ fromMatrix = PlatformInfo.strmatrix2liststr( fromMatrix )
self.__Matrix = numpy.asarray( fromMatrix, dtype=float )
self.__Type = "Matrix"
else:
for i in range(len(_HValue)):
_HxValue.append( _HValue[i] )
if self.__avoidRC:
- Operator.CM.storeValueInX(_xValue[i],_HxValue[-1],self.__name)
+ Operator.CM.storeValueInX(_xValue[i], _HxValue[-1], self.__name)
else:
_HxValue = []
_xserie = []
_hindex = []
for i, xv in enumerate(_xValue):
if self.__avoidRC:
- __alreadyCalculated, __HxV = Operator.CM.wasCalculatedIn(xv,self.__name)
+ __alreadyCalculated, __HxV = Operator.CM.wasCalculatedIn(xv, self.__name)
else:
__alreadyCalculated = False
#
_hv = None
_HxValue.append( _hv )
#
- if len(_xserie)>0 and self.__Matrix is None:
+ if len(_xserie) > 0 and self.__Matrix is None:
if self.__extraArgs is None:
- _hserie = self.__Method( _xserie ) # Calcul MF
+ _hserie = self.__Method( _xserie ) # Calcul MF
else:
- _hserie = self.__Method( _xserie, self.__extraArgs ) # Calcul MF
+ _hserie = self.__Method( _xserie, self.__extraArgs ) # Calcul MF
if not hasattr(_hserie, "pop"):
raise TypeError(
- "The user input multi-function doesn't seem to return a"+\
- " result sequence, behaving like a mono-function. It has"+\
- " to be checked."
- )
+ "The user input multi-function doesn't seem to return a" + \
+ " result sequence, behaving like a mono-function. It has" + \
+ " to be checked." )
for i in _hindex:
_xv = _xserie.pop(0)
_hv = _hserie.pop(0)
_HxValue[i] = _hv
if self.__avoidRC:
- Operator.CM.storeValueInX(_xv,_hv,self.__name)
+ Operator.CM.storeValueInX(_xv, _hv, self.__name)
#
if returnSerieAsArrayMatrix:
_HxValue = numpy.stack([numpy.ravel(_hv) for _hv in _HxValue], axis=1)
#
- if argsAsSerie: return _HxValue
- else: return _HxValue[-1]
+ if argsAsSerie: return _HxValue # noqa: E701
+ else: return _HxValue[-1] # noqa: E241,E272,E701
def appliedControledFormTo(self, paires, argsAsSerie = False, returnSerieAsArrayMatrix = False):
"""
- uValue : argument U adapté pour appliquer l'opérateur
- argsAsSerie : indique si l'argument est une mono ou multi-valeur
"""
- if argsAsSerie: _xuValue = paires
- else: _xuValue = (paires,)
+ if argsAsSerie: _xuValue = paires # noqa: E701
+ else: _xuValue = (paires,) # noqa: E241,E272,E701
PlatformInfo.isIterable( _xuValue, True, " in Operator.appliedControledFormTo" )
#
if self.__Matrix is not None:
_xuArgs.append( _xValue )
self.__addOneMethodCall( len(_xuArgs) )
if self.__extraArgs is None:
- _HxValue = self.__Method( _xuArgs ) # Calcul MF
+ _HxValue = self.__Method( _xuArgs ) # Calcul MF
else:
- _HxValue = self.__Method( _xuArgs, self.__extraArgs ) # Calcul MF
+ _HxValue = self.__Method( _xuArgs, self.__extraArgs ) # Calcul MF
#
if returnSerieAsArrayMatrix:
_HxValue = numpy.stack([numpy.ravel(_hv) for _hv in _HxValue], axis=1)
#
- if argsAsSerie: return _HxValue
- else: return _HxValue[-1]
+ if argsAsSerie: return _HxValue # noqa: E701
+ else: return _HxValue[-1] # noqa: E241,E272,E701
def appliedInXTo(self, paires, argsAsSerie = False, returnSerieAsArrayMatrix = False):
"""
- xValue : série d'arguments adaptés pour appliquer l'opérateur
- argsAsSerie : indique si l'argument est une mono ou multi-valeur
"""
- if argsAsSerie: _nxValue = paires
- else: _nxValue = (paires,)
+ if argsAsSerie: _nxValue = paires # noqa: E701
+ else: _nxValue = (paires,) # noqa: E241,E272,E701
PlatformInfo.isIterable( _nxValue, True, " in Operator.appliedInXTo" )
#
if self.__Matrix is not None:
else:
self.__addOneMethodCall( len(_nxValue) )
if self.__extraArgs is None:
- _HxValue = self.__Method( _nxValue ) # Calcul MF
+ _HxValue = self.__Method( _nxValue ) # Calcul MF
else:
- _HxValue = self.__Method( _nxValue, self.__extraArgs ) # Calcul MF
+ _HxValue = self.__Method( _nxValue, self.__extraArgs ) # Calcul MF
#
if returnSerieAsArrayMatrix:
_HxValue = numpy.stack([numpy.ravel(_hv) for _hv in _HxValue], axis=1)
#
- if argsAsSerie: return _HxValue
- else: return _HxValue[-1]
+ if argsAsSerie: return _HxValue # noqa: E701
+ else: return _HxValue[-1] # noqa: E241,E272,E701
def asMatrix(self, ValueForMethodForm = "UnknownVoidValue", argsAsSerie = False):
"""
if self.__Matrix is not None:
self.__addOneMatrixCall()
mValue = [self.__Matrix,]
- elif not isinstance(ValueForMethodForm,str) or ValueForMethodForm != "UnknownVoidValue": # Ne pas utiliser "None"
+ elif not isinstance(ValueForMethodForm, str) or ValueForMethodForm != "UnknownVoidValue": # Ne pas utiliser "None"
mValue = []
if argsAsSerie:
self.__addOneMethodCall( len(ValueForMethodForm) )
else:
raise ValueError("Matrix form of the operator defined as a function/method requires to give an operating point.")
#
- if argsAsSerie: return mValue
- else: return mValue[-1]
+ if argsAsSerie: return mValue # noqa: E701
+ else: return mValue[-1] # noqa: E241,E272,E701
def shape(self):
"""
Renvoie les nombres d'évaluations de l'opérateur
"""
__nbcalls = (
- self.__NbCallsAsMatrix+self.__NbCallsAsMethod,
+ self.__NbCallsAsMatrix + self.__NbCallsAsMethod,
self.__NbCallsAsMatrix,
self.__NbCallsAsMethod,
self.__NbCallsOfCached,
- Operator.NbCallsAsMatrix+Operator.NbCallsAsMethod,
+ Operator.NbCallsAsMatrix + Operator.NbCallsAsMethod,
Operator.NbCallsAsMatrix,
Operator.NbCallsAsMethod,
Operator.NbCallsOfCached,
- )
- if which is None: return __nbcalls
- else: return __nbcalls[which]
+ )
+ if which is None: return __nbcalls # noqa: E701
+ else: return __nbcalls[which] # noqa: E241,E272,E701
def __addOneMatrixCall(self):
"Comptabilise un appel"
- self.__NbCallsAsMatrix += 1 # Decompte local
- Operator.NbCallsAsMatrix += 1 # Decompte global
+ self.__NbCallsAsMatrix += 1 # Decompte local
+ Operator.NbCallsAsMatrix += 1 # Decompte global
def __addOneMethodCall(self, nb = 1):
"Comptabilise un appel"
- self.__NbCallsAsMethod += nb # Decompte local
- Operator.NbCallsAsMethod += nb # Decompte global
+ self.__NbCallsAsMethod += nb # Decompte local
+ Operator.NbCallsAsMethod += nb # Decompte global
def __addOneCacheCall(self):
"Comptabilise un appel"
- self.__NbCallsOfCached += 1 # Decompte local
- Operator.NbCallsOfCached += 1 # Decompte global
+ self.__NbCallsOfCached += 1 # Décompte local
+ Operator.NbCallsOfCached += 1 # Décompte global
# ==============================================================================
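When avoidingRedundancy is enabled, the Operator above first asks the shared
CacheManager whether H(x) was already computed for this point and operator
name, and only calls the user code on cache misses, keeping a bounded list of
recent (x, H(x)) pairs. A minimal sketch of such redundancy avoidance
(hypothetical class, not the daCore API):

    import numpy

    class EvalCache(object):
        """Bounded FIFO cache of (x, F(x)) pairs avoiding redundant evaluations."""
        def __init__(self, F, length=10):
            self.F, self.length, self.pairs, self.hits = F, length, [], 0
        def __call__(self, x):
            x = numpy.ravel(x)
            for xv, hv in reversed(self.pairs):   # Most recent entries first
                if xv.size == x.size and numpy.allclose(xv, x, rtol=1.e-18, atol=0.):
                    self.hits += 1
                    return hv
            hv = self.F(x)
            self.pairs.append((x.copy(), hv))
            if len(self.pairs) > self.length:     # Drop the oldest pair
                self.pairs.pop(0)
            return hv

    # Example: the second identical call is served from the cache
    H = EvalCache(lambda x: x**2)
    H([1., 2.]); H([1., 2.])
    print(H.hits)  # 1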
class FullOperator(object):
"""
__slots__ = (
"__name", "__check", "__extraArgs", "__FO", "__T",
- )
- #
+ )
+
def __init__(self,
name = "GenericFullOperator",
asMatrix = None,
- asOneFunction = None, # 1 Fonction
- asThreeFunctions = None, # 3 Fonctions in a dictionary
- asScript = None, # 1 or 3 Fonction(s) by script
- asDict = None, # Parameters
+                 asOneFunction = None,  # 1 function
+                 asThreeFunctions = None,  # 3 functions in a dictionary
+                 asScript = None,  # 1 or 3 function(s) given by script
+                 asDict = None,  # Parameters
appliedInX = None,
extraArguments = None,
performancePrf = None,
- inputAsMF = False,# Fonction(s) as Multi-Functions
+                 inputAsMF = False,  # Function(s) given as multi-functions
scheduledBy = None,
- toBeChecked = False,
- ):
+ toBeChecked = False ):
""
self.__name = str(name)
self.__check = bool(toBeChecked)
if "EnableMultiProcessing" in __Parameters and __Parameters["EnableMultiProcessing"]:
__Parameters["EnableMultiProcessingInDerivatives"] = True
__Parameters["EnableMultiProcessingInEvaluation"] = False
- if "EnableMultiProcessingInDerivatives" not in __Parameters:
+ if "EnableMultiProcessingInDerivatives" not in __Parameters:
__Parameters["EnableMultiProcessingInDerivatives"] = False
if __Parameters["EnableMultiProcessingInDerivatives"]:
__Parameters["EnableMultiProcessingInEvaluation"] = False
- if "EnableMultiProcessingInEvaluation" not in __Parameters:
+ if "EnableMultiProcessingInEvaluation" not in __Parameters:
__Parameters["EnableMultiProcessingInEvaluation"] = False
- if "withIncrement" in __Parameters: # Temporaire
+ if "withIncrement" in __Parameters: # Temporaire
__Parameters["DifferentialIncrement"] = __Parameters["withIncrement"]
# Le défaut est équivalent à "ReducedOverallRequirements"
__reduceM, __avoidRC = True, True
if performancePrf is not None:
- if performancePrf == "ReducedAmountOfCalculation":
+ if performancePrf == "ReducedAmountOfCalculation":
__reduceM, __avoidRC = False, True
elif performancePrf == "ReducedMemoryFootprint":
__reduceM, __avoidRC = True, False
if asMatrix:
__Matrix = Interfaces.ImportFromScript(asScript).getvalue( self.__name )
elif asOneFunction:
- __Function = { "Direct":Interfaces.ImportFromScript(asScript).getvalue( "DirectOperator" ) }
- __Function.update({"useApproximatedDerivatives":True})
+ __Function = { "Direct": Interfaces.ImportFromScript(asScript).getvalue( "DirectOperator" ) }
+ __Function.update({"useApproximatedDerivatives": True})
__Function.update(__Parameters)
elif asThreeFunctions:
__Function = {
- "Direct" :Interfaces.ImportFromScript(asScript).getvalue( "DirectOperator" ),
- "Tangent":Interfaces.ImportFromScript(asScript).getvalue( "TangentOperator" ),
- "Adjoint":Interfaces.ImportFromScript(asScript).getvalue( "AdjointOperator" ),
- }
+ "Direct": Interfaces.ImportFromScript(asScript).getvalue( "DirectOperator" ),
+ "Tangent": Interfaces.ImportFromScript(asScript).getvalue( "TangentOperator" ),
+ "Adjoint": Interfaces.ImportFromScript(asScript).getvalue( "AdjointOperator" ),
+ }
__Function.update(__Parameters)
else:
__Matrix = asMatrix
else:
raise ValueError("The function has to be given in a dictionnary which have 1 key (\"Direct\")")
else:
- __Function = { "Direct":asOneFunction }
- __Function.update({"useApproximatedDerivatives":True})
+ __Function = { "Direct": asOneFunction }
+ __Function.update({"useApproximatedDerivatives": True})
__Function.update(__Parameters)
elif asThreeFunctions is not None:
if isinstance(asThreeFunctions, dict) and \
- ("Tangent" in asThreeFunctions) and (asThreeFunctions["Tangent"] is not None) and \
- ("Adjoint" in asThreeFunctions) and (asThreeFunctions["Adjoint"] is not None) and \
- (("useApproximatedDerivatives" not in asThreeFunctions) or not bool(asThreeFunctions["useApproximatedDerivatives"])):
+ ("Tangent" in asThreeFunctions) and (asThreeFunctions["Tangent"] is not None) and \
+ ("Adjoint" in asThreeFunctions) and (asThreeFunctions["Adjoint"] is not None) and \
+ (("useApproximatedDerivatives" not in asThreeFunctions) or not bool(asThreeFunctions["useApproximatedDerivatives"])):
__Function = asThreeFunctions
elif isinstance(asThreeFunctions, dict) and \
- ("Direct" in asThreeFunctions) and (asThreeFunctions["Direct"] is not None):
+ ("Direct" in asThreeFunctions) and (asThreeFunctions["Direct"] is not None):
__Function = asThreeFunctions
- __Function.update({"useApproximatedDerivatives":True})
+ __Function.update({"useApproximatedDerivatives": True})
else:
raise ValueError(
- "The functions has to be given in a dictionnary which have either"+\
- " 1 key (\"Direct\") or"+\
+ "The functions has to be given in a dictionnary which have either" + \
+ " 1 key (\"Direct\") or" + \
" 3 keys (\"Direct\" (optionnal), \"Tangent\" and \"Adjoint\")")
- if "Direct" not in asThreeFunctions:
+ if "Direct" not in asThreeFunctions:
__Function["Direct"] = asThreeFunctions["Tangent"]
__Function.update(__Parameters)
else:
__Function = None
#
- if appliedInX is not None and isinstance(appliedInX, dict):
+ if appliedInX is not None and isinstance(appliedInX, dict):
__appliedInX = appliedInX
elif appliedInX is not None:
- __appliedInX = {"HXb":appliedInX}
+ __appliedInX = {"HXb": appliedInX}
else:
__appliedInX = None
#
if isinstance(__Function, dict) and \
("useApproximatedDerivatives" in __Function) and bool(__Function["useApproximatedDerivatives"]) and \
("Direct" in __Function) and (__Function["Direct"] is not None):
- if "CenteredFiniteDifference" not in __Function: __Function["CenteredFiniteDifference"] = False
- if "DifferentialIncrement" not in __Function: __Function["DifferentialIncrement"] = 0.01
- if "withdX" not in __Function: __Function["withdX"] = None
- if "withReducingMemoryUse" not in __Function: __Function["withReducingMemoryUse"] = __reduceM
- if "withAvoidingRedundancy" not in __Function: __Function["withAvoidingRedundancy"] = __avoidRC
- if "withToleranceInRedundancy" not in __Function: __Function["withToleranceInRedundancy"] = 1.e-18
- if "withLengthOfRedundancy" not in __Function: __Function["withLengthOfRedundancy"] = -1
- if "NumberOfProcesses" not in __Function: __Function["NumberOfProcesses"] = None
- if "withmfEnabled" not in __Function: __Function["withmfEnabled"] = inputAsMF
+ if "CenteredFiniteDifference" not in __Function: __Function["CenteredFiniteDifference"] = False # noqa: E272,E701
+ if "DifferentialIncrement" not in __Function: __Function["DifferentialIncrement"] = 0.01 # noqa: E272,E701
+ if "withdX" not in __Function: __Function["withdX"] = None # noqa: E272,E701
+ if "withReducingMemoryUse" not in __Function: __Function["withReducingMemoryUse"] = __reduceM # noqa: E272,E701
+ if "withAvoidingRedundancy" not in __Function: __Function["withAvoidingRedundancy"] = __avoidRC # noqa: E272,E701
+ if "withToleranceInRedundancy" not in __Function: __Function["withToleranceInRedundancy"] = 1.e-18 # noqa: E272,E701
+ if "withLengthOfRedundancy" not in __Function: __Function["withLengthOfRedundancy"] = -1 # noqa: E272,E701
+ if "NumberOfProcesses" not in __Function: __Function["NumberOfProcesses"] = None # noqa: E272,E701
+ if "withmfEnabled" not in __Function: __Function["withmfEnabled"] = inputAsMF # noqa: E272,E701
from daCore import NumericObjects
FDA = NumericObjects.FDApproximation(
name = self.__name,
mpEnabled = __Function["EnableMultiProcessingInDerivatives"],
mpWorkers = __Function["NumberOfProcesses"],
mfEnabled = __Function["withmfEnabled"],
- )
+ )
self.__FO["Direct"] = Operator(
name = self.__name,
fromMethod = FDA.DirectOperator,
extraArguments = self.__extraArgs,
enableMultiProcess = __Parameters["EnableMultiProcessingInEvaluation"] )
self.__FO["Tangent"] = Operator(
- name = self.__name+"Tangent",
+ name = self.__name + "Tangent",
fromMethod = FDA.TangentOperator,
reducingMemoryUse = __reduceM,
avoidingRedundancy = __avoidRC,
inputAsMultiFunction = inputAsMF,
extraArguments = self.__extraArgs )
self.__FO["Adjoint"] = Operator(
- name = self.__name+"Adjoint",
+ name = self.__name + "Adjoint",
fromMethod = FDA.AdjointOperator,
reducingMemoryUse = __reduceM,
avoidingRedundancy = __avoidRC,
extraArguments = self.__extraArgs,
enableMultiProcess = __Parameters["EnableMultiProcessingInEvaluation"] )
self.__FO["Tangent"] = Operator(
- name = self.__name+"Tangent",
+ name = self.__name + "Tangent",
fromMethod = __Function["Tangent"],
reducingMemoryUse = __reduceM,
avoidingRedundancy = __avoidRC,
inputAsMultiFunction = inputAsMF,
extraArguments = self.__extraArgs )
self.__FO["Adjoint"] = Operator(
- name = self.__name+"Adjoint",
+ name = self.__name + "Adjoint",
fromMethod = __Function["Adjoint"],
reducingMemoryUse = __reduceM,
avoidingRedundancy = __avoidRC,
inputAsMultiFunction = inputAsMF,
enableMultiProcess = __Parameters["EnableMultiProcessingInEvaluation"] )
self.__FO["Tangent"] = Operator(
- name = self.__name+"Tangent",
+ name = self.__name + "Tangent",
fromMatrix = __matrice,
reducingMemoryUse = __reduceM,
avoidingRedundancy = __avoidRC,
inputAsMultiFunction = inputAsMF )
self.__FO["Adjoint"] = Operator(
- name = self.__name+"Adjoint",
+ name = self.__name + "Adjoint",
fromMatrix = __matrice.T,
reducingMemoryUse = __reduceM,
avoidingRedundancy = __avoidRC,
self.__FO["DifferentialIncrement"] = None
else:
raise ValueError(
- "The %s object is improperly defined or undefined,"%self.__name+\
- " it requires at minima either a matrix, a Direct operator for"+\
- " approximate derivatives or a Tangent/Adjoint operators pair."+\
+ "The %s object is improperly defined or undefined,"%self.__name + \
+ " it requires at minima either a matrix, a Direct operator for" + \
+ " approximate derivatives or a Tangent/Adjoint operators pair." + \
" Please check your operator input.")
#
if __appliedInX is not None:
for key in __appliedInX:
if isinstance(__appliedInX[key], str):
__appliedInX[key] = PlatformInfo.strvect2liststr( __appliedInX[key] )
- self.__FO["AppliedInX"][key] = numpy.ravel( __appliedInX[key] ).reshape((-1,1))
+ self.__FO["AppliedInX"][key] = numpy.ravel( __appliedInX[key] ).reshape((-1, 1))
else:
self.__FO["AppliedInX"] = None
def getO(self):
return self.__FO
+ def nbcalls(self, whot=None, which=None):
+ """
+ Return the numbers of evaluations of the operator
+ """
+ __nbcalls = {}
+ for otype in ["Direct", "Tangent", "Adjoint"]:
+ if otype in self.__FO:
+ __nbcalls[otype] = self.__FO[otype].nbcalls()
+ if whot in __nbcalls and which is not None:
+ return __nbcalls[whot][which]
+ else:
+ return __nbcalls
+
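+ # Usage sketch (editorial assumption, not part of the original code): for
+ # an operator container "op" built by this class, op.nbcalls() returns a
+ # dictionary such as {"Direct": ..., "Tangent": ..., "Adjoint": ...}, and
+ # op.nbcalls("Direct", 0) selects one counter of the "Direct" operator,
+ # index 0 being the number of direct evaluations.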
def __repr__(self):
"x.__repr__() <==> repr(x)"
return repr(self.__FO)
"_m", "__variable_names_not_public", "__canonical_parameter_name",
"__canonical_stored_name", "__replace_by_the_new_name",
"StoredVariables",
- )
- #
+ )
+
def __init__(self, name):
"""
This initialization allows the creation of the storage variables
self._m = PlatformInfo.SystemUsage()
#
self._name = str( name )
- self._parameters = {"StoreSupplementaryCalculations":[]}
+ self._parameters = {"StoreSupplementaryCalculations": []}
self.__internal_state = {}
self.__required_parameters = {}
self.__required_inputs = {
- "RequiredInputValues":{"mandatory":(), "optional":()},
- "ClassificationTags":[],
- }
- self.__variable_names_not_public = {"nextStep":False} # Duplication dans AlgorithmAndParameters
- self.__canonical_parameter_name = {} # Correspondance "lower"->"correct"
- self.__canonical_stored_name = {} # Correspondance "lower"->"correct"
- self.__replace_by_the_new_name = {} # Nouveau nom à partir d'un nom ancien
+ "RequiredInputValues": {"mandatory": (), "optional": ()},
+ "AttributesTags": [],
+ "AttributesFeatures": [],
+ }
+ self.__variable_names_not_public = {"nextStep": False} # Duplicated in AlgorithmAndParameters
+ self.__canonical_parameter_name = {} # Mapping "lower"->"correct"
+ self.__canonical_stored_name = {} # Mapping "lower"->"correct"
+ self.__replace_by_the_new_name = {} # New name from an old name
#
self.StoredVariables = {}
- self.StoredVariables["APosterioriCorrelations"] = Persistence.OneMatrix(name = "APosterioriCorrelations")
- self.StoredVariables["APosterioriCovariance"] = Persistence.OneMatrix(name = "APosterioriCovariance")
- self.StoredVariables["APosterioriStandardDeviations"] = Persistence.OneVector(name = "APosterioriStandardDeviations")
- self.StoredVariables["APosterioriVariances"] = Persistence.OneVector(name = "APosterioriVariances")
- self.StoredVariables["Analysis"] = Persistence.OneVector(name = "Analysis")
- self.StoredVariables["BMA"] = Persistence.OneVector(name = "BMA")
- self.StoredVariables["CostFunctionJ"] = Persistence.OneScalar(name = "CostFunctionJ")
- self.StoredVariables["CostFunctionJAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJAtCurrentOptimum")
- self.StoredVariables["CostFunctionJb"] = Persistence.OneScalar(name = "CostFunctionJb")
- self.StoredVariables["CostFunctionJbAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJbAtCurrentOptimum")
- self.StoredVariables["CostFunctionJo"] = Persistence.OneScalar(name = "CostFunctionJo")
- self.StoredVariables["CostFunctionJoAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJoAtCurrentOptimum")
- self.StoredVariables["CurrentEnsembleState"] = Persistence.OneMatrix(name = "CurrentEnsembleState")
- self.StoredVariables["CurrentIterationNumber"] = Persistence.OneIndex(name = "CurrentIterationNumber")
- self.StoredVariables["CurrentOptimum"] = Persistence.OneVector(name = "CurrentOptimum")
- self.StoredVariables["CurrentState"] = Persistence.OneVector(name = "CurrentState")
- self.StoredVariables["CurrentStepNumber"] = Persistence.OneIndex(name = "CurrentStepNumber")
- self.StoredVariables["EnsembleOfSimulations"] = Persistence.OneMatrice(name = "EnsembleOfSimulations")
- self.StoredVariables["EnsembleOfSnapshots"] = Persistence.OneMatrice(name = "EnsembleOfSnapshots")
- self.StoredVariables["EnsembleOfStates"] = Persistence.OneMatrice(name = "EnsembleOfStates")
- self.StoredVariables["ExcludedPoints"] = Persistence.OneVector(name = "ExcludedPoints")
- self.StoredVariables["ForecastCovariance"] = Persistence.OneMatrix(name = "ForecastCovariance")
- self.StoredVariables["ForecastState"] = Persistence.OneVector(name = "ForecastState")
- self.StoredVariables["GradientOfCostFunctionJ"] = Persistence.OneVector(name = "GradientOfCostFunctionJ")
- self.StoredVariables["GradientOfCostFunctionJb"] = Persistence.OneVector(name = "GradientOfCostFunctionJb")
- self.StoredVariables["GradientOfCostFunctionJo"] = Persistence.OneVector(name = "GradientOfCostFunctionJo")
- self.StoredVariables["IndexOfOptimum"] = Persistence.OneIndex(name = "IndexOfOptimum")
- self.StoredVariables["Innovation"] = Persistence.OneVector(name = "Innovation")
- self.StoredVariables["InnovationAtCurrentAnalysis"] = Persistence.OneVector(name = "InnovationAtCurrentAnalysis")
- self.StoredVariables["InnovationAtCurrentState"] = Persistence.OneVector(name = "InnovationAtCurrentState")
- self.StoredVariables["InternalCostFunctionJ"] = Persistence.OneVector(name = "InternalCostFunctionJ")
- self.StoredVariables["InternalCostFunctionJb"] = Persistence.OneVector(name = "InternalCostFunctionJb")
- self.StoredVariables["InternalCostFunctionJo"] = Persistence.OneVector(name = "InternalCostFunctionJo")
- self.StoredVariables["InternalStates"] = Persistence.OneMatrix(name = "InternalStates")
- self.StoredVariables["JacobianMatrixAtBackground"] = Persistence.OneMatrix(name = "JacobianMatrixAtBackground")
- self.StoredVariables["JacobianMatrixAtCurrentState"] = Persistence.OneMatrix(name = "JacobianMatrixAtCurrentState")
- self.StoredVariables["JacobianMatrixAtOptimum"] = Persistence.OneMatrix(name = "JacobianMatrixAtOptimum")
- self.StoredVariables["KalmanGainAtOptimum"] = Persistence.OneMatrix(name = "KalmanGainAtOptimum")
- self.StoredVariables["MahalanobisConsistency"] = Persistence.OneScalar(name = "MahalanobisConsistency")
- self.StoredVariables["OMA"] = Persistence.OneVector(name = "OMA")
- self.StoredVariables["OMB"] = Persistence.OneVector(name = "OMB")
- self.StoredVariables["OptimalPoints"] = Persistence.OneVector(name = "OptimalPoints")
- self.StoredVariables["ReducedBasis"] = Persistence.OneMatrix(name = "ReducedBasis")
- self.StoredVariables["ReducedCoordinates"] = Persistence.OneVector(name = "ReducedCoordinates")
- self.StoredVariables["Residu"] = Persistence.OneScalar(name = "Residu")
- self.StoredVariables["Residus"] = Persistence.OneVector(name = "Residus")
- self.StoredVariables["SampledStateForQuantiles"] = Persistence.OneMatrix(name = "SampledStateForQuantiles")
- self.StoredVariables["SigmaBck2"] = Persistence.OneScalar(name = "SigmaBck2")
- self.StoredVariables["SigmaObs2"] = Persistence.OneScalar(name = "SigmaObs2")
- self.StoredVariables["SimulatedObservationAtBackground"] = Persistence.OneVector(name = "SimulatedObservationAtBackground")
- self.StoredVariables["SimulatedObservationAtCurrentAnalysis"]= Persistence.OneVector(name = "SimulatedObservationAtCurrentAnalysis")
- self.StoredVariables["SimulatedObservationAtCurrentOptimum"] = Persistence.OneVector(name = "SimulatedObservationAtCurrentOptimum")
- self.StoredVariables["SimulatedObservationAtCurrentState"] = Persistence.OneVector(name = "SimulatedObservationAtCurrentState")
- self.StoredVariables["SimulatedObservationAtOptimum"] = Persistence.OneVector(name = "SimulatedObservationAtOptimum")
- self.StoredVariables["SimulationQuantiles"] = Persistence.OneMatrix(name = "SimulationQuantiles")
- self.StoredVariables["SingularValues"] = Persistence.OneVector(name = "SingularValues")
+ self.StoredVariables["APosterioriCorrelations"] = Persistence.OneMatrix(name = "APosterioriCorrelations")
+ self.StoredVariables["APosterioriCovariance"] = Persistence.OneMatrix(name = "APosterioriCovariance")
+ self.StoredVariables["APosterioriStandardDeviations"] = Persistence.OneVector(name = "APosterioriStandardDeviations")
+ self.StoredVariables["APosterioriVariances"] = Persistence.OneVector(name = "APosterioriVariances")
+ self.StoredVariables["Analysis"] = Persistence.OneVector(name = "Analysis")
+ self.StoredVariables["BMA"] = Persistence.OneVector(name = "BMA")
+ self.StoredVariables["CostFunctionJ"] = Persistence.OneScalar(name = "CostFunctionJ")
+ self.StoredVariables["CostFunctionJAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJAtCurrentOptimum")
+ self.StoredVariables["CostFunctionJb"] = Persistence.OneScalar(name = "CostFunctionJb")
+ self.StoredVariables["CostFunctionJbAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJbAtCurrentOptimum")
+ self.StoredVariables["CostFunctionJo"] = Persistence.OneScalar(name = "CostFunctionJo")
+ self.StoredVariables["CostFunctionJoAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJoAtCurrentOptimum")
+ self.StoredVariables["CurrentEnsembleState"] = Persistence.OneMatrix(name = "CurrentEnsembleState")
+ self.StoredVariables["CurrentIterationNumber"] = Persistence.OneIndex(name = "CurrentIterationNumber")
+ self.StoredVariables["CurrentOptimum"] = Persistence.OneVector(name = "CurrentOptimum")
+ self.StoredVariables["CurrentState"] = Persistence.OneVector(name = "CurrentState")
+ self.StoredVariables["CurrentStepNumber"] = Persistence.OneIndex(name = "CurrentStepNumber")
+ self.StoredVariables["EnsembleOfSimulations"] = Persistence.OneMatrice(name = "EnsembleOfSimulations")
+ self.StoredVariables["EnsembleOfSnapshots"] = Persistence.OneMatrice(name = "EnsembleOfSnapshots")
+ self.StoredVariables["EnsembleOfStates"] = Persistence.OneMatrice(name = "EnsembleOfStates")
+ self.StoredVariables["ExcludedPoints"] = Persistence.OneVector(name = "ExcludedPoints")
+ self.StoredVariables["ForecastCovariance"] = Persistence.OneMatrix(name = "ForecastCovariance")
+ self.StoredVariables["ForecastState"] = Persistence.OneVector(name = "ForecastState")
+ self.StoredVariables["GradientOfCostFunctionJ"] = Persistence.OneVector(name = "GradientOfCostFunctionJ")
+ self.StoredVariables["GradientOfCostFunctionJb"] = Persistence.OneVector(name = "GradientOfCostFunctionJb")
+ self.StoredVariables["GradientOfCostFunctionJo"] = Persistence.OneVector(name = "GradientOfCostFunctionJo")
+ self.StoredVariables["IndexOfOptimum"] = Persistence.OneIndex(name = "IndexOfOptimum")
+ self.StoredVariables["Innovation"] = Persistence.OneVector(name = "Innovation")
+ self.StoredVariables["InnovationAtCurrentAnalysis"] = Persistence.OneVector(name = "InnovationAtCurrentAnalysis")
+ self.StoredVariables["InnovationAtCurrentState"] = Persistence.OneVector(name = "InnovationAtCurrentState")
+ self.StoredVariables["InternalCostFunctionJ"] = Persistence.OneVector(name = "InternalCostFunctionJ")
+ self.StoredVariables["InternalCostFunctionJb"] = Persistence.OneVector(name = "InternalCostFunctionJb")
+ self.StoredVariables["InternalCostFunctionJo"] = Persistence.OneVector(name = "InternalCostFunctionJo")
+ self.StoredVariables["InternalStates"] = Persistence.OneMatrix(name = "InternalStates")
+ self.StoredVariables["JacobianMatrixAtBackground"] = Persistence.OneMatrix(name = "JacobianMatrixAtBackground")
+ self.StoredVariables["JacobianMatrixAtCurrentState"] = Persistence.OneMatrix(name = "JacobianMatrixAtCurrentState")
+ self.StoredVariables["JacobianMatrixAtOptimum"] = Persistence.OneMatrix(name = "JacobianMatrixAtOptimum")
+ self.StoredVariables["KalmanGainAtOptimum"] = Persistence.OneMatrix(name = "KalmanGainAtOptimum")
+ self.StoredVariables["MahalanobisConsistency"] = Persistence.OneScalar(name = "MahalanobisConsistency")
+ self.StoredVariables["OMA"] = Persistence.OneVector(name = "OMA")
+ self.StoredVariables["OMB"] = Persistence.OneVector(name = "OMB")
+ self.StoredVariables["OptimalPoints"] = Persistence.OneVector(name = "OptimalPoints")
+ self.StoredVariables["ReducedBasis"] = Persistence.OneMatrix(name = "ReducedBasis")
+ self.StoredVariables["ReducedBasisMus"] = Persistence.OneVector(name = "ReducedBasisMus")
+ self.StoredVariables["ReducedCoordinates"] = Persistence.OneVector(name = "ReducedCoordinates")
+ self.StoredVariables["Residu"] = Persistence.OneScalar(name = "Residu")
+ self.StoredVariables["Residus"] = Persistence.OneVector(name = "Residus")
+ self.StoredVariables["SampledStateForQuantiles"] = Persistence.OneMatrix(name = "SampledStateForQuantiles")
+ self.StoredVariables["SigmaBck2"] = Persistence.OneScalar(name = "SigmaBck2")
+ self.StoredVariables["SigmaObs2"] = Persistence.OneScalar(name = "SigmaObs2")
+ self.StoredVariables["SimulatedObservationAtBackground"] = Persistence.OneVector(name = "SimulatedObservationAtBackground")
+ self.StoredVariables["SimulatedObservationAtCurrentAnalysis"] = Persistence.OneVector(name = "SimulatedObservationAtCurrentAnalysis")
+ self.StoredVariables["SimulatedObservationAtCurrentOptimum"] = Persistence.OneVector(name = "SimulatedObservationAtCurrentOptimum")
+ self.StoredVariables["SimulatedObservationAtCurrentState"] = Persistence.OneVector(name = "SimulatedObservationAtCurrentState")
+ self.StoredVariables["SimulatedObservationAtOptimum"] = Persistence.OneVector(name = "SimulatedObservationAtOptimum")
+ self.StoredVariables["SimulationQuantiles"] = Persistence.OneMatrix(name = "SimulationQuantiles")
+ self.StoredVariables["SingularValues"] = Persistence.OneVector(name = "SingularValues")
#
for k in self.StoredVariables:
self.__canonical_stored_name[k.lower()] = k
#
# Update of the internal parameters with the content of Parameters, taking
# the default values for all those not defined
- self.__setParameters(Parameters, reset=True) # Copie
+ self.__setParameters(Parameters, reset=True) # Copy
for k, v in self.__variable_names_not_public.items():
- if k not in self._parameters: self.__setParameters( {k:v} )
+ if k not in self._parameters:
+ self.__setParameters( {k: v} )
- # Corrections et compléments des vecteurs
def __test_vvalue(argument, variable, argname, symbol=None):
- if symbol is None: symbol = variable
+ "Corrections et compléments des vecteurs"
+ if symbol is None:
+ symbol = variable
if argument is None:
if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- raise ValueError("%s %s vector %s is not set and has to be properly defined!"%(self._name,argname,symbol))
+ raise ValueError("%s %s vector %s is not set and has to be properly defined!"%(self._name, argname, symbol))
elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s vector %s is not set, but is optional."%(self._name,argname,symbol))
+ logging.debug("%s %s vector %s is not set, but is optional."%(self._name, argname, symbol))
else:
- logging.debug("%s %s vector %s is not set, but is not required."%(self._name,argname,symbol))
+ logging.debug("%s %s vector %s is not set, but is not required."%(self._name, argname, symbol))
else:
if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
logging.debug(
- "%s %s vector %s is required and set, and its size is %i."%(
- self._name,argname,symbol,numpy.array(argument).size))
+ "%s %s vector %s is required and set, and its full size is %i." \
+ % (self._name, argname, symbol, numpy.array(argument).size))
elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
logging.debug(
- "%s %s vector %s is optional and set, and its size is %i."%(
- self._name,argname,symbol,numpy.array(argument).size))
+ "%s %s vector %s is optional and set, and its full size is %i." \
+ % (self._name, argname, symbol, numpy.array(argument).size))
else:
logging.debug(
- "%s %s vector %s is set although neither required nor optional, and its size is %i."%(
- self._name,argname,symbol,numpy.array(argument).size))
+ "%s %s vector %s is set although neither required nor optional, and its full size is %i." \
+ % (self._name, argname, symbol, numpy.array(argument).size))
return 0
__test_vvalue( Xb, "Xb", "Background or initial state" )
- __test_vvalue( Y, "Y", "Observation" )
- __test_vvalue( U, "U", "Control" )
- #
- # Corrections et compléments des covariances
+ __test_vvalue( Y, "Y", "Observation" )
+ __test_vvalue( U, "U", "Control" )
+
def __test_cvalue(argument, variable, argname, symbol=None):
- if symbol is None: symbol = variable
+ "Corrections et compléments des covariances"
+ if symbol is None:
+ symbol = variable
if argument is None:
if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- raise ValueError("%s %s error covariance matrix %s is not set and has to be properly defined!"%(self._name,argname,symbol))
+ raise ValueError("%s %s error covariance matrix %s is not set and has to be properly defined!"%(self._name, argname, symbol))
elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s error covariance matrix %s is not set, but is optional."%(self._name,argname,symbol))
+ logging.debug("%s %s error covariance matrix %s is not set, but is optional."%(self._name, argname, symbol))
else:
- logging.debug("%s %s error covariance matrix %s is not set, but is not required."%(self._name,argname,symbol))
+ logging.debug("%s %s error covariance matrix %s is not set, but is not required."%(self._name, argname, symbol))
else:
if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- logging.debug("%s %s error covariance matrix %s is required and set."%(self._name,argname,symbol))
+ logging.debug("%s %s error covariance matrix %s is required and set."%(self._name, argname, symbol))
elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s error covariance matrix %s is optional and set."%(self._name,argname,symbol))
+ logging.debug("%s %s error covariance matrix %s is optional and set."%(self._name, argname, symbol))
else:
logging.debug(
- "%s %s error covariance matrix %s is set although neither required nor optional."%(
- self._name,argname,symbol))
+ "%s %s error covariance matrix %s is set although neither required nor optional." \
+ % (self._name, argname, symbol))
return 0
__test_cvalue( B, "B", "Background" )
__test_cvalue( R, "R", "Observation" )
__test_cvalue( Q, "Q", "Evolution" )
- #
- # Corrections et compléments des opérateurs
+
def __test_ovalue(argument, variable, argname, symbol=None):
- if symbol is None: symbol = variable
- if argument is None or (isinstance(argument,dict) and len(argument)==0):
+ "Corrections et compléments des opérateurs"
+ if symbol is None:
+ symbol = variable
+ if argument is None or (isinstance(argument, dict) and len(argument) == 0):
if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- raise ValueError("%s %s operator %s is not set and has to be properly defined!"%(self._name,argname,symbol))
+ raise ValueError("%s %s operator %s is not set and has to be properly defined!"%(self._name, argname, symbol))
elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s operator %s is not set, but is optional."%(self._name,argname,symbol))
+ logging.debug("%s %s operator %s is not set, but is optional."%(self._name, argname, symbol))
else:
- logging.debug("%s %s operator %s is not set, but is not required."%(self._name,argname,symbol))
+ logging.debug("%s %s operator %s is not set, but is not required."%(self._name, argname, symbol))
else:
if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- logging.debug("%s %s operator %s is required and set."%(self._name,argname,symbol))
+ logging.debug("%s %s operator %s is required and set."%(self._name, argname, symbol))
elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s operator %s is optional and set."%(self._name,argname,symbol))
+ logging.debug("%s %s operator %s is optional and set."%(self._name, argname, symbol))
else:
- logging.debug("%s %s operator %s is set although neither required nor optional."%(self._name,argname,symbol))
+ logging.debug("%s %s operator %s is set although neither required nor optional."%(self._name, argname, symbol))
return 0
__test_ovalue( HO, "HO", "Observation", "H" )
__test_ovalue( EM, "EM", "Evolution", "M" )
__test_ovalue( CM, "CM", "Control Model", "C" )
#
# Corrections and additions for the bounds
- if ("Bounds" in self._parameters) and isinstance(self._parameters["Bounds"], (list, tuple)) and (len(self._parameters["Bounds"]) > 0):
- logging.debug("%s Bounds taken into account"%(self._name,))
+ if ("Bounds" in self._parameters) \
+ and isinstance(self._parameters["Bounds"], (list, tuple)):
+ if (len(self._parameters["Bounds"]) > 0):
+ logging.debug("%s Bounds taken into account"%(self._name,))
+ else:
+ self._parameters["Bounds"] = None
+ elif ("Bounds" in self._parameters) \
+ and isinstance(self._parameters["Bounds"], (numpy.ndarray, numpy.matrix)):
+ self._parameters["Bounds"] = numpy.ravel(self._parameters["Bounds"]).reshape((-1, 2)).tolist()
+ if (len(self._parameters["Bounds"]) > 0):
+ logging.debug("%s Bounds for states taken into account"%(self._name,))
+ else:
+ self._parameters["Bounds"] = None
else:
self._parameters["Bounds"] = None
+ if self._parameters["Bounds"] is None:
+ logging.debug("%s There are no bounds for states to take into account"%(self._name,))
+ #
if ("StateBoundsForQuantiles" in self._parameters) \
- and isinstance(self._parameters["StateBoundsForQuantiles"], (list, tuple)) \
- and (len(self._parameters["StateBoundsForQuantiles"]) > 0):
+ and isinstance(self._parameters["StateBoundsForQuantiles"], (list, tuple)) \
+ and (len(self._parameters["StateBoundsForQuantiles"]) > 0):
logging.debug("%s Bounds for quantiles states taken into account"%(self._name,))
- # Attention : contrairement à Bounds, pas de défaut à None, sinon on ne peut pas être sans bornes
+ elif ("StateBoundsForQuantiles" in self._parameters) \
+ and isinstance(self._parameters["StateBoundsForQuantiles"], (numpy.ndarray, numpy.matrix)):
+ self._parameters["StateBoundsForQuantiles"] = numpy.ravel(self._parameters["StateBoundsForQuantiles"]).reshape((-1, 2)).tolist()
+ if (len(self._parameters["StateBoundsForQuantiles"]) > 0):
+ logging.debug("%s Bounds for quantiles states taken into account"%(self._name,))
+ # Warning: unlike Bounds, there is no default set to None here, otherwise
+ # it would be impossible to run without bounds
#
# Corrections and additions for the initialization in X
- if "InitializationPoint" in self._parameters:
+ if "InitializationPoint" in self._parameters:
if Xb is not None:
- if self._parameters["InitializationPoint"] is not None and hasattr(self._parameters["InitializationPoint"],'size'):
+ if self._parameters["InitializationPoint"] is not None and hasattr(self._parameters["InitializationPoint"], 'size'):
if self._parameters["InitializationPoint"].size != numpy.ravel(Xb).size:
- raise ValueError("Incompatible size %i of forced initial point that have to replace the background of size %i" \
- %(self._parameters["InitializationPoint"].size,numpy.ravel(Xb).size))
+ raise ValueError(
+ "Incompatible size %i of forced initial point that have to replace the background of size %i" \
+ % (self._parameters["InitializationPoint"].size, numpy.ravel(Xb).size))
# Obtained by typecast: numpy.ravel(self._parameters["InitializationPoint"])
else:
self._parameters["InitializationPoint"] = numpy.ravel(Xb)
#
# Correction to work around a TNC bug on the Minimum return
if "Minimizer" in self._parameters and self._parameters["Minimizer"] == "TNC":
- self.setParameterValue("StoreInternalVariables",True)
+ self.setParameterValue("StoreInternalVariables", True)
#
# Verbosity and logging
if logging.getLogger().level < logging.WARNING:
#
return 0
- def _post_run(self,_oH=None):
+ def _post_run(self, _oH=None, _oM=None):
"Post-calcul"
if ("StoreSupplementaryCalculations" in self._parameters) and \
- "APosterioriCovariance" in self._parameters["StoreSupplementaryCalculations"]:
+ "APosterioriCovariance" in self._parameters["StoreSupplementaryCalculations"]:
for _A in self.StoredVariables["APosterioriCovariance"]:
if "APosterioriVariances" in self._parameters["StoreSupplementaryCalculations"]:
self.StoredVariables["APosterioriVariances"].store( numpy.diag(_A) )
if "APosterioriStandardDeviations" in self._parameters["StoreSupplementaryCalculations"]:
self.StoredVariables["APosterioriStandardDeviations"].store( numpy.sqrt(numpy.diag(_A)) )
if "APosterioriCorrelations" in self._parameters["StoreSupplementaryCalculations"]:
- _EI = numpy.diag(1./numpy.sqrt(numpy.diag(_A)))
+ _EI = numpy.diag(1. / numpy.sqrt(numpy.diag(_A)))
_C = numpy.dot(_EI, numpy.dot(_A, _EI))
self.StoredVariables["APosterioriCorrelations"].store( _C )
if _oH is not None and "Direct" in _oH and "Tangent" in _oH and "Adjoint" in _oH:
logging.debug(
"%s Nombre d'évaluation(s) de l'opérateur d'observation direct/tangent/adjoint.: %i/%i/%i",
- self._name, _oH["Direct"].nbcalls(0),_oH["Tangent"].nbcalls(0),_oH["Adjoint"].nbcalls(0))
+ self._name, _oH["Direct"].nbcalls(0), _oH["Tangent"].nbcalls(0), _oH["Adjoint"].nbcalls(0))
logging.debug(
"%s Nombre d'appels au cache d'opérateur d'observation direct/tangent/adjoint..: %i/%i/%i",
- self._name, _oH["Direct"].nbcalls(3),_oH["Tangent"].nbcalls(3),_oH["Adjoint"].nbcalls(3))
+ self._name, _oH["Direct"].nbcalls(3), _oH["Tangent"].nbcalls(3), _oH["Adjoint"].nbcalls(3))
+ if _oM is not None and "Direct" in _oM and "Tangent" in _oM and "Adjoint" in _oM:
+ logging.debug(
+ "%s Nombre d'évaluation(s) de l'opérateur d'évolution direct/tangent/adjoint.: %i/%i/%i",
+ self._name, _oM["Direct"].nbcalls(0), _oM["Tangent"].nbcalls(0), _oM["Adjoint"].nbcalls(0))
+ logging.debug(
+ "%s Nombre d'appels au cache d'opérateur d'évolution direct/tangent/adjoint..: %i/%i/%i",
+ self._name, _oM["Direct"].nbcalls(3), _oM["Tangent"].nbcalls(3), _oM["Adjoint"].nbcalls(3))
logging.debug("%s Taille mémoire utilisée de %.0f Mio", self._name, self._m.getUsedMemory("Mio"))
logging.debug("%s Durées d'utilisation CPU de %.1fs et elapsed de %.1fs", self._name, self._getTimeState()[0], self._getTimeState()[1])
logging.debug("%s Terminé", self._name)
"""
raise NotImplementedError("Mathematical algorithmic calculation has not been implemented!")
- def defineRequiredParameter(self,
- name = None,
- default = None,
- typecast = None,
- message = None,
- minval = None,
- maxval = None,
- listval = None,
- listadv = None,
- oldname = None,
- ):
+ def defineRequiredParameter(
+ self,
+ name = None,
+ default = None,
+ typecast = None,
+ message = None,
+ minval = None,
+ maxval = None,
+ listval = None,
+ listadv = None,
+ oldname = None ):
"""
Allows to define required parameters in the algorithm, together with
their default characteristics.
raise ValueError("A name is mandatory to define a required parameter.")
#
self.__required_parameters[name] = {
- "default" : default,
- "typecast" : typecast,
- "minval" : minval,
- "maxval" : maxval,
- "listval" : listval,
- "listadv" : listadv,
- "message" : message,
- "oldname" : oldname,
- }
+ "default" : default, # noqa: E203
+ "typecast" : typecast, # noqa: E203
+ "minval" : minval, # noqa: E203
+ "maxval" : maxval, # noqa: E203
+ "listval" : listval, # noqa: E203
+ "listadv" : listadv, # noqa: E203
+ "message" : message, # noqa: E203
+ "oldname" : oldname, # noqa: E203
+ }
self.__canonical_parameter_name[name.lower()] = name
if oldname is not None:
- self.__canonical_parameter_name[oldname.lower()] = name # Conversion
+ self.__canonical_parameter_name[oldname.lower()] = name # Conversion
self.__replace_by_the_new_name[oldname.lower()] = name
logging.debug("%s %s (valeur par défaut = %s)", self._name, message, self.setParameterValue(name))
if value is None and default is None:
__val = None
elif value is None and default is not None:
- if typecast is None: __val = default
- else: __val = typecast( default )
+ if typecast is None:
+ __val = default
+ else:
+ __val = typecast( default )
else:
- if typecast is None: __val = value
+ if typecast is None:
+ __val = value
else:
try:
__val = typecast( value )
if maxval is not None and (numpy.array(__val, float) > maxval).any():
raise ValueError("The parameter named '%s' of value '%s' can not be greater than %s."%(__k, __val, maxval))
if listval is not None or listadv is not None:
- if typecast is list or typecast is tuple or isinstance(__val,list) or isinstance(__val,tuple):
+ if typecast is list or typecast is tuple or isinstance(__val, list) or isinstance(__val, tuple):
for v in __val:
- if listval is not None and v in listval: continue
- elif listadv is not None and v in listadv: continue
+ if listval is not None and v in listval:
+ continue
+ elif listadv is not None and v in listadv:
+ continue
else:
raise ValueError("The value '%s' is not allowed for the parameter named '%s', it has to be in the list %s."%(v, __k, listval))
elif not (listval is not None and __val in listval) and not (listadv is not None and __val in listadv):
- raise ValueError("The value '%s' is not allowed for the parameter named '%s', it has to be in the list %s."%( __val, __k,listval))
+ raise ValueError("The value '%s' is not allowed for the parameter named '%s', it has to be in the list %s."%(__val, __k, listval))
#
if __k in ["SetSeed",]:
__val = value
"""
return self.__required_inputs["RequiredInputValues"]["mandatory"], self.__required_inputs["RequiredInputValues"]["optional"]
- def setAttributes(self, tags=()):
+ def setAttributes(self, tags=(), features=()):
"""
Allows to attach attributes such as classification tags.
Returns the current lists in all cases.
"""
- self.__required_inputs["ClassificationTags"].extend( tags )
- return self.__required_inputs["ClassificationTags"]
+ self.__required_inputs["AttributesTags"].extend( tags )
+ self.__required_inputs["AttributesFeatures"].extend( features )
+ return (self.__required_inputs["AttributesTags"], self.__required_inputs["AttributesFeatures"])
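+ # Usage sketch (editorial assumption): self.setAttributes(
+ #     tags=("DataAssimilation",), features=("NonLocalOptimization",))
+ # extends both internal lists and returns the accumulated pair
+ # (AttributesTags, AttributesFeatures).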
def __setParameters(self, fromDico={}, reset=False):
"""
for k in fromDico.keys():
if k.lower() in self.__canonical_parameter_name:
__inverse_fromDico_keys[self.__canonical_parameter_name[k.lower()]] = k
- #~ __inverse_fromDico_keys = dict([(self.__canonical_parameter_name[k.lower()],k) for k in fromDico.keys()])
+ # __inverse_fromDico_keys = dict([(self.__canonical_parameter_name[k.lower()],k) for k in fromDico.keys()])
__canonic_fromDico_keys = __inverse_fromDico_keys.keys()
#
for k in __inverse_fromDico_keys.values():
if k.lower() in self.__replace_by_the_new_name:
__newk = self.__replace_by_the_new_name[k.lower()]
- __msg = "the parameter \"%s\" used in \"%s\" algorithm case is deprecated and has to be replaced by \"%s\"."%(k,self._name,__newk)
+ __msg = "the parameter \"%s\" used in \"%s\" algorithm case is deprecated and has to be replaced by \"%s\"."%(k, self._name, __newk)
__msg += " Please update your code."
warnings.warn(__msg, FutureWarning, stacklevel=50)
#
for k in self.__required_parameters.keys():
if k in __canonic_fromDico_keys:
- self._parameters[k] = self.setParameterValue(k,fromDico[__inverse_fromDico_keys[k]])
+ self._parameters[k] = self.setParameterValue(k, fromDico[__inverse_fromDico_keys[k]])
elif reset:
self._parameters[k] = self.setParameterValue(k)
else:
pass
- if hasattr(self._parameters[k],"size") and self._parameters[k].size > 100:
+ if hasattr(self._parameters[k], "size") and self._parameters[k].size > 100:
logging.debug("%s %s d'une taille totale de %s", self._name, self.__required_parameters[k]["message"], self._parameters[k].size)
- elif hasattr(self._parameters[k],"__len__") and len(self._parameters[k]) > 100:
+ elif hasattr(self._parameters[k], "__len__") and len(self._parameters[k]) > 100:
logging.debug("%s %s de longueur %s", self._name, self.__required_parameters[k]["message"], len(self._parameters[k]))
else:
logging.debug("%s %s : %s", self._name, self.__required_parameters[k]["message"], self._parameters[k])
"""
Allows to store named variables constituting the internal state
"""
- if reset: # Vide le dictionnaire préalablement
+ if reset: # Empty the dictionary beforehand
self.__internal_state = {}
if key is not None and value is not None:
self.__internal_state[key] = value
"""
__slots__ = (
"_name", "_parameters", "StoredVariables", "__canonical_stored_name",
- )
- #
+ )
+
def __init__(self, name):
self._name = str( name )
- self._parameters = {"StoreSupplementaryCalculations":[]}
+ self._parameters = {"StoreSupplementaryCalculations": []}
#
self.StoredVariables = {}
self.StoredVariables["Analysis"] = Persistence.OneVector(name = "Analysis")
"__name", "__algorithm", "__algorithmFile", "__algorithmName", "__A",
"__P", "__Xb", "__Y", "__U", "__HO", "__EM", "__CM", "__B", "__R",
"__Q", "__variable_names_not_public",
- )
- #
+ )
+
def __init__(self,
name = "GenericAlgorithm",
asAlgorithm = None,
asDict = None,
- asScript = None,
- ):
+ asScript = None ):
"""
"""
self.__name = str(name)
#
if __Algo is not None:
self.__A = str(__Algo)
- self.__P.update( {"Algorithm":self.__A} )
+ self.__P.update( {"Algorithm": self.__A} )
#
self.__setAlgorithm( self.__A )
#
- self.__variable_names_not_public = {"nextStep":False} # Duplication dans Algorithm
+ self.__variable_names_not_public = {"nextStep": False} # Duplication dans Algorithm
- def updateParameters(self,
- asDict = None,
- asScript = None,
- ):
+ def updateParameters(self, asDict = None, asScript = None ):
"Mise à jour des paramètres"
if asDict is None and asScript is not None:
__Dict = Interfaces.ImportFromScript(asScript).getvalue( self.__name, "Parameters" )
#
if not isinstance(asDictAO, dict):
raise ValueError("The objects for algorithm calculation have to be given together as a dictionnary, and they are not")
- if hasattr(asDictAO["Background"],"getO"): self.__Xb = asDictAO["Background"].getO()
- elif hasattr(asDictAO["CheckingPoint"],"getO"): self.__Xb = asDictAO["CheckingPoint"].getO()
- else: self.__Xb = None
- if hasattr(asDictAO["Observation"],"getO"): self.__Y = asDictAO["Observation"].getO()
- else: self.__Y = asDictAO["Observation"]
- if hasattr(asDictAO["ControlInput"],"getO"): self.__U = asDictAO["ControlInput"].getO()
- else: self.__U = asDictAO["ControlInput"]
- if hasattr(asDictAO["ObservationOperator"],"getO"): self.__HO = asDictAO["ObservationOperator"].getO()
- else: self.__HO = asDictAO["ObservationOperator"]
- if hasattr(asDictAO["EvolutionModel"],"getO"): self.__EM = asDictAO["EvolutionModel"].getO()
- else: self.__EM = asDictAO["EvolutionModel"]
- if hasattr(asDictAO["ControlModel"],"getO"): self.__CM = asDictAO["ControlModel"].getO()
- else: self.__CM = asDictAO["ControlModel"]
+ if hasattr(asDictAO["Background"], "getO"): self.__Xb = asDictAO["Background"].getO() # noqa: E241,E701
+ elif hasattr(asDictAO["CheckingPoint"], "getO"): self.__Xb = asDictAO["CheckingPoint"].getO() # noqa: E241,E701
+ else: self.__Xb = None # noqa: E241,E701
+ if hasattr(asDictAO["Observation"], "getO"): self.__Y = asDictAO["Observation"].getO() # noqa: E241,E701
+ else: self.__Y = asDictAO["Observation"] # noqa: E241,E701
+ if hasattr(asDictAO["ControlInput"], "getO"): self.__U = asDictAO["ControlInput"].getO() # noqa: E241,E701
+ else: self.__U = asDictAO["ControlInput"] # noqa: E241,E701
+ if hasattr(asDictAO["ObservationOperator"], "getO"): self.__HO = asDictAO["ObservationOperator"].getO() # noqa: E241,E701
+ else: self.__HO = asDictAO["ObservationOperator"] # noqa: E241,E701
+ if hasattr(asDictAO["EvolutionModel"], "getO"): self.__EM = asDictAO["EvolutionModel"].getO() # noqa: E241,E701
+ else: self.__EM = asDictAO["EvolutionModel"] # noqa: E241,E701
+ if hasattr(asDictAO["ControlModel"], "getO"): self.__CM = asDictAO["ControlModel"].getO() # noqa: E241,E701
+ else: self.__CM = asDictAO["ControlModel"] # noqa: E241,E701
self.__B = asDictAO["BackgroundError"]
self.__R = asDictAO["ObservationError"]
self.__Q = asDictAO["EvolutionError"]
B = self.__B,
Q = self.__Q,
Parameters = self.__P,
- )
+ )
return 0
def executeYACSScheme(self, FileName=None):
__file = os.path.abspath(FileName)
logging.debug("The YACS file name is \"%s\"."%__file)
if not PlatformInfo.has_salome or \
- not PlatformInfo.has_yacs or \
- not PlatformInfo.has_adao:
- raise ImportError("\n\n"+\
- "Unable to get SALOME, YACS or ADAO environnement variables.\n"+\
- "Please load the right environnement before trying to use it.\n")
+ not PlatformInfo.has_yacs or \
+ not PlatformInfo.has_adao:
+ raise ImportError(
+ "\n\n" + \
+ "Unable to get SALOME, YACS or ADAO environment variables.\n" + \
+ "Please load the right environment before trying to use it.\n" )
#
import pilot
import SALOMERuntime
print("The YACS XML schema is not valid and will not be executed:")
print(p.getErrorReport())
- info=pilot.LinkInfo(pilot.LinkInfo.ALL_DONT_STOP)
+ info = pilot.LinkInfo(pilot.LinkInfo.ALL_DONT_STOP)
p.checkConsistency(info)
if info.areWarningsOrErrors():
print("The YACS XML schema is not coherent and will not be executed:")
return self.__P[key]
else:
allvariables = self.__P
- for k in self.__variable_names_not_public: allvariables.pop(k, None)
+ for k in self.__variable_names_not_public:
+ allvariables.pop(k, None)
return allvariables
def pop(self, k, d):
def setObserver(self, __V, __O, __I, __S):
if self.__algorithm is None \
- or isinstance(self.__algorithm, dict) \
- or not hasattr(self.__algorithm,"StoredVariables"):
+ or isinstance(self.__algorithm, dict) \
+ or not hasattr(self.__algorithm, "StoredVariables"):
raise ValueError("No observer can be build before choosing an algorithm.")
if __V not in self.__algorithm:
raise ValueError("An observer requires to be set on a variable named %s which does not exist."%__V)
else:
- self.__algorithm.StoredVariables[ __V ].setDataObserver(
- Scheduler = __S,
- HookFunction = __O,
- HookParameters = __I,
- )
+ self.__algorithm.StoredVariables[ __V ].setDataObserver( Scheduler = __S, HookFunction = __O, HookParameters = __I )
def removeObserver(self, __V, __O, __A = False):
if self.__algorithm is None \
- or isinstance(self.__algorithm, dict) \
- or not hasattr(self.__algorithm,"StoredVariables"):
+ or isinstance(self.__algorithm, dict) \
+ or not hasattr(self.__algorithm, "StoredVariables"):
raise ValueError("No observer can be removed before choosing an algorithm.")
if __V not in self.__algorithm:
raise ValueError("An observer requires to be removed on a variable named %s which does not exist."%__V)
else:
- return self.__algorithm.StoredVariables[ __V ].removeDataObserver(
- HookFunction = __O,
- AllObservers = __A,
- )
+ return self.__algorithm.StoredVariables[ __V ].removeDataObserver( HookFunction = __O, AllObservers = __A )
def hasObserver(self, __V):
if self.__algorithm is None \
- or isinstance(self.__algorithm, dict) \
- or not hasattr(self.__algorithm,"StoredVariables"):
+ or isinstance(self.__algorithm, dict) \
+ or not hasattr(self.__algorithm, "StoredVariables"):
return False
if __V not in self.__algorithm:
return False
def keys(self):
__allvariables = list(self.__algorithm.keys()) + list(self.__P.keys())
for k in self.__variable_names_not_public:
- if k in __allvariables: __allvariables.remove(k)
+ if k in __allvariables:
+ __allvariables.remove(k)
return __allvariables
def __contains__(self, key=None):
def __repr__(self):
"x.__repr__() <==> repr(x)"
- return repr(self.__A)+", "+repr(self.__P)
+ return repr(self.__A) + ", " + repr(self.__P)
def __str__(self):
"x.__str__() <==> str(x)"
- return str(self.__A)+", "+str(self.__P)
+ return str(self.__A) + ", " + str(self.__P)
def __setAlgorithm(self, choice = None ):
"""
# ------------------------------------------
module_path = None
for directory in sys.path:
- if os.path.isfile(os.path.join(directory, daDirectory, str(choice)+'.py')):
+ if os.path.isfile(os.path.join(directory, daDirectory, str(choice) + '.py')):
module_path = os.path.abspath(os.path.join(directory, daDirectory))
if module_path is None:
raise ImportError(
# Import the complete file as a module
# ------------------------------------------
try:
- sys_path_tmp = sys.path ; sys.path.insert(0,module_path)
+ sys_path_tmp = sys.path
+ sys.path.insert(0, module_path)
self.__algorithmFile = __import__(str(choice), globals(), locals(), [])
if not hasattr(self.__algorithmFile, "ElementaryAlgorithm"):
raise ImportError("this module does not define a valid elementary algorithm.")
self.__algorithmName = str(choice)
- sys.path = sys_path_tmp ; del sys_path_tmp
+ sys.path = sys_path_tmp
+ del sys_path_tmp
except ImportError as e:
raise ImportError(
- "The module named \"%s\" was found, but is incorrect at the import stage.\n The import error message is: %s"%(choice,e))
+ "The module named \"%s\" was found, but is incorrect at the import stage.\n The import error message is: %s"%(choice, e))
#
# Instantiate an object of the file's elementary type
# -------------------------------------------------
Validation of the correct correspondence between the sizes of the
variables and of the matrices, if any.
"""
- if self.__Xb is None: __Xb_shape = (0,)
- elif hasattr(self.__Xb,"size"): __Xb_shape = (self.__Xb.size,)
- elif hasattr(self.__Xb,"shape"):
- if isinstance(self.__Xb.shape, tuple): __Xb_shape = self.__Xb.shape
- else: __Xb_shape = self.__Xb.shape()
- else: raise TypeError("The background (Xb) has no attribute of shape: problem !")
- #
- if self.__Y is None: __Y_shape = (0,)
- elif hasattr(self.__Y,"size"): __Y_shape = (self.__Y.size,)
- elif hasattr(self.__Y,"shape"):
- if isinstance(self.__Y.shape, tuple): __Y_shape = self.__Y.shape
- else: __Y_shape = self.__Y.shape()
- else: raise TypeError("The observation (Y) has no attribute of shape: problem !")
- #
- if self.__U is None: __U_shape = (0,)
- elif hasattr(self.__U,"size"): __U_shape = (self.__U.size,)
- elif hasattr(self.__U,"shape"):
- if isinstance(self.__U.shape, tuple): __U_shape = self.__U.shape
- else: __U_shape = self.__U.shape()
- else: raise TypeError("The control (U) has no attribute of shape: problem !")
- #
- if self.__B is None: __B_shape = (0,0)
- elif hasattr(self.__B,"shape"):
- if isinstance(self.__B.shape, tuple): __B_shape = self.__B.shape
- else: __B_shape = self.__B.shape()
- else: raise TypeError("The a priori errors covariance matrix (B) has no attribute of shape: problem !")
- #
- if self.__R is None: __R_shape = (0,0)
- elif hasattr(self.__R,"shape"):
- if isinstance(self.__R.shape, tuple): __R_shape = self.__R.shape
- else: __R_shape = self.__R.shape()
- else: raise TypeError("The observation errors covariance matrix (R) has no attribute of shape: problem !")
- #
- if self.__Q is None: __Q_shape = (0,0)
- elif hasattr(self.__Q,"shape"):
- if isinstance(self.__Q.shape, tuple): __Q_shape = self.__Q.shape
- else: __Q_shape = self.__Q.shape()
- else: raise TypeError("The evolution errors covariance matrix (Q) has no attribute of shape: problem !")
- #
- if len(self.__HO) == 0: __HO_shape = (0,0)
- elif isinstance(self.__HO, dict): __HO_shape = (0,0)
- elif hasattr(self.__HO["Direct"],"shape"):
- if isinstance(self.__HO["Direct"].shape, tuple): __HO_shape = self.__HO["Direct"].shape
- else: __HO_shape = self.__HO["Direct"].shape()
- else: raise TypeError("The observation operator (H) has no attribute of shape: problem !")
- #
- if len(self.__EM) == 0: __EM_shape = (0,0)
- elif isinstance(self.__EM, dict): __EM_shape = (0,0)
- elif hasattr(self.__EM["Direct"],"shape"):
- if isinstance(self.__EM["Direct"].shape, tuple): __EM_shape = self.__EM["Direct"].shape
- else: __EM_shape = self.__EM["Direct"].shape()
- else: raise TypeError("The evolution model (EM) has no attribute of shape: problem !")
- #
- if len(self.__CM) == 0: __CM_shape = (0,0)
- elif isinstance(self.__CM, dict): __CM_shape = (0,0)
- elif hasattr(self.__CM["Direct"],"shape"):
- if isinstance(self.__CM["Direct"].shape, tuple): __CM_shape = self.__CM["Direct"].shape
- else: __CM_shape = self.__CM["Direct"].shape()
- else: raise TypeError("The control model (CM) has no attribute of shape: problem !")
+ if self.__Xb is None: __Xb_shape = (0,) # noqa: E241,E701
+ elif hasattr(self.__Xb, "size"): __Xb_shape = (self.__Xb.size,) # noqa: E241,E701
+ elif hasattr(self.__Xb, "shape"):
+ if isinstance(self.__Xb.shape, tuple): __Xb_shape = self.__Xb.shape # noqa: E241,E701
+ else: __Xb_shape = self.__Xb.shape() # noqa: E241,E701
+ else: raise TypeError("The background (Xb) has no attribute of shape: problem !") # noqa: E701
+ #
+ if self.__Y is None: __Y_shape = (0,) # noqa: E241,E701
+ elif hasattr(self.__Y, "size"): __Y_shape = (self.__Y.size,) # noqa: E241,E701
+ elif hasattr(self.__Y, "shape"):
+ if isinstance(self.__Y.shape, tuple): __Y_shape = self.__Y.shape # noqa: E241,E701
+ else: __Y_shape = self.__Y.shape() # noqa: E241,E701
+ else: raise TypeError("The observation (Y) has no attribute of shape: problem !") # noqa: E701
+ #
+ if self.__U is None: __U_shape = (0,) # noqa: E241,E701
+ elif hasattr(self.__U, "size"): __U_shape = (self.__U.size,) # noqa: E241,E701
+ elif hasattr(self.__U, "shape"):
+ if isinstance(self.__U.shape, tuple): __U_shape = self.__U.shape # noqa: E241,E701
+ else: __U_shape = self.__U.shape() # noqa: E241,E701
+ else: raise TypeError("The control (U) has no attribute of shape: problem !") # noqa: E701
+ #
+ if self.__B is None: __B_shape = (0, 0) # noqa: E241,E701
+ elif hasattr(self.__B, "shape"):
+ if isinstance(self.__B.shape, tuple): __B_shape = self.__B.shape # noqa: E241,E701
+ else: __B_shape = self.__B.shape() # noqa: E241,E701
+ else: raise TypeError("The a priori errors covariance matrix (B) has no attribute of shape: problem !") # noqa: E701
+ #
+ if self.__R is None: __R_shape = (0, 0) # noqa: E241,E701
+ elif hasattr(self.__R, "shape"):
+ if isinstance(self.__R.shape, tuple): __R_shape = self.__R.shape # noqa: E241,E701
+ else: __R_shape = self.__R.shape() # noqa: E241,E701
+ else: raise TypeError("The observation errors covariance matrix (R) has no attribute of shape: problem !") # noqa: E701
+ #
+ if self.__Q is None: __Q_shape = (0, 0) # noqa: E241,E701
+ elif hasattr(self.__Q, "shape"):
+ if isinstance(self.__Q.shape, tuple): __Q_shape = self.__Q.shape # noqa: E241,E701
+ else: __Q_shape = self.__Q.shape() # noqa: E241,E701
+ else: raise TypeError("The evolution errors covariance matrix (Q) has no attribute of shape: problem !") # noqa: E701
+ #
+ if len(self.__HO) == 0: __HO_shape = (0, 0) # noqa: E241,E701
+ elif isinstance(self.__HO, dict): __HO_shape = (0, 0) # noqa: E241,E701
+ elif hasattr(self.__HO["Direct"], "shape"):
+ if isinstance(self.__HO["Direct"].shape, tuple): __HO_shape = self.__HO["Direct"].shape # noqa: E241,E701
+ else: __HO_shape = self.__HO["Direct"].shape() # noqa: E241,E701
+ else: raise TypeError("The observation operator (H) has no attribute of shape: problem !") # noqa: E701
+ #
+ if len(self.__EM) == 0: __EM_shape = (0, 0) # noqa: E241,E701
+ elif isinstance(self.__EM, dict): __EM_shape = (0, 0) # noqa: E241,E701
+ elif hasattr(self.__EM["Direct"], "shape"):
+ if isinstance(self.__EM["Direct"].shape, tuple): __EM_shape = self.__EM["Direct"].shape # noqa: E241,E701
+ else: __EM_shape = self.__EM["Direct"].shape() # noqa: E241,E701
+ else: raise TypeError("The evolution model (EM) has no attribute of shape: problem !") # noqa: E241,E70
+ #
+ if len(self.__CM) == 0: __CM_shape = (0, 0) # noqa: E241,E701
+ elif isinstance(self.__CM, dict): __CM_shape = (0, 0) # noqa: E241,E701
+ elif hasattr(self.__CM["Direct"], "shape"):
+ if isinstance(self.__CM["Direct"].shape, tuple): __CM_shape = self.__CM["Direct"].shape # noqa: E241,E701
+ else: __CM_shape = self.__CM["Direct"].shape() # noqa: E241,E701
+ else: raise TypeError("The control model (CM) has no attribute of shape: problem !") # noqa: E701
#
# Checking the conditions
# ------------------------
- if not( len(__Xb_shape) == 1 or min(__Xb_shape) == 1 ):
+ if not ( len(__Xb_shape) == 1 or min(__Xb_shape) == 1 ):
raise ValueError("Shape characteristic of background (Xb) is incorrect: \"%s\"."%(__Xb_shape,))
- if not( len(__Y_shape) == 1 or min(__Y_shape) == 1 ):
+ if not ( len(__Y_shape) == 1 or min(__Y_shape) == 1 ):
raise ValueError("Shape characteristic of observation (Y) is incorrect: \"%s\"."%(__Y_shape,))
#
- if not( min(__B_shape) == max(__B_shape) ):
+ if not ( min(__B_shape) == max(__B_shape) ):
raise ValueError("Shape characteristic of a priori errors covariance matrix (B) is incorrect: \"%s\"."%(__B_shape,))
- if not( min(__R_shape) == max(__R_shape) ):
+ if not ( min(__R_shape) == max(__R_shape) ):
raise ValueError("Shape characteristic of observation errors covariance matrix (R) is incorrect: \"%s\"."%(__R_shape,))
- if not( min(__Q_shape) == max(__Q_shape) ):
+ if not ( min(__Q_shape) == max(__Q_shape) ):
raise ValueError("Shape characteristic of evolution errors covariance matrix (Q) is incorrect: \"%s\"."%(__Q_shape,))
- if not( min(__EM_shape) == max(__EM_shape) ):
+ if not ( min(__EM_shape) == max(__EM_shape) ):
raise ValueError("Shape characteristic of evolution operator (EM) is incorrect: \"%s\"."%(__EM_shape,))
#
- if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and not( __HO_shape[1] == max(__Xb_shape) ):
+ if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and not ( __HO_shape[1] == max(__Xb_shape) ):
raise ValueError(
- "Shape characteristic of observation operator (H)"+\
- " \"%s\" and state (X) \"%s\" are incompatible."%(__HO_shape,__Xb_shape))
- if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and not( __HO_shape[0] == max(__Y_shape) ):
+ "Shape characteristic of observation operator (H)" + \
+ " \"%s\" and state (X) \"%s\" are incompatible."%(__HO_shape, __Xb_shape))
+ if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and not ( __HO_shape[0] == max(__Y_shape) ):
raise ValueError(
- "Shape characteristic of observation operator (H)"+\
- " \"%s\" and observation (Y) \"%s\" are incompatible."%(__HO_shape,__Y_shape))
- if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and len(self.__B) > 0 and not( __HO_shape[1] == __B_shape[0] ):
+ "Shape characteristic of observation operator (H)" + \
+ " \"%s\" and observation (Y) \"%s\" are incompatible."%(__HO_shape, __Y_shape))
+ if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and len(self.__B) > 0 and not ( __HO_shape[1] == __B_shape[0] ):
raise ValueError(
- "Shape characteristic of observation operator (H)"+\
- " \"%s\" and a priori errors covariance matrix (B) \"%s\" are incompatible."%(__HO_shape,__B_shape))
- if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and len(self.__R) > 0 and not( __HO_shape[0] == __R_shape[1] ):
+ "Shape characteristic of observation operator (H)" + \
+ " \"%s\" and a priori errors covariance matrix (B) \"%s\" are incompatible."%(__HO_shape, __B_shape))
+ if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and len(self.__R) > 0 and not ( __HO_shape[0] == __R_shape[1] ):
raise ValueError(
- "Shape characteristic of observation operator (H)"+\
- " \"%s\" and observation errors covariance matrix (R) \"%s\" are incompatible."%(__HO_shape,__R_shape))
+ "Shape characteristic of observation operator (H)" + \
+ " \"%s\" and observation errors covariance matrix (R) \"%s\" are incompatible."%(__HO_shape, __R_shape))
#
- if self.__B is not None and len(self.__B) > 0 and not( __B_shape[1] == max(__Xb_shape) ):
+ if self.__B is not None and len(self.__B) > 0 and not ( __B_shape[1] == max(__Xb_shape) ):
if self.__algorithmName in ["EnsembleBlue",]:
- asPersistentVector = self.__Xb.reshape((-1,min(__B_shape)))
+ asPersistentVector = self.__Xb.reshape((-1, min(__B_shape)))
self.__Xb = Persistence.OneVector("Background")
for member in asPersistentVector:
self.__Xb.store( numpy.asarray(member, dtype=float) )
__Xb_shape = min(__B_shape)
else:
raise ValueError(
- "Shape characteristic of a priori errors covariance matrix (B)"+\
- " \"%s\" and background vector (Xb) \"%s\" are incompatible."%(__B_shape,__Xb_shape))
+ "Shape characteristic of a priori errors covariance matrix (B)" + \
+ " \"%s\" and background vector (Xb) \"%s\" are incompatible."%(__B_shape, __Xb_shape))
#
- if self.__R is not None and len(self.__R) > 0 and not( __R_shape[1] == max(__Y_shape) ):
+ if self.__R is not None and len(self.__R) > 0 and not ( __R_shape[1] == max(__Y_shape) ):
raise ValueError(
- "Shape characteristic of observation errors covariance matrix (R)"+\
- " \"%s\" and observation vector (Y) \"%s\" are incompatible."%(__R_shape,__Y_shape))
+ "Shape characteristic of observation errors covariance matrix (R)" + \
+ " \"%s\" and observation vector (Y) \"%s\" are incompatible."%(__R_shape, __Y_shape))
#
- if self.__EM is not None and len(self.__EM) > 0 and not isinstance(self.__EM, dict) and not( __EM_shape[1] == max(__Xb_shape) ):
+ if self.__EM is not None and len(self.__EM) > 0 and not isinstance(self.__EM, dict) and not ( __EM_shape[1] == max(__Xb_shape) ):
raise ValueError(
- "Shape characteristic of evolution model (EM)"+\
- " \"%s\" and state (X) \"%s\" are incompatible."%(__EM_shape,__Xb_shape))
+ "Shape characteristic of evolution model (EM)" + \
+ " \"%s\" and state (X) \"%s\" are incompatible."%(__EM_shape, __Xb_shape))
#
- if self.__CM is not None and len(self.__CM) > 0 and not isinstance(self.__CM, dict) and not( __CM_shape[1] == max(__U_shape) ):
+ if self.__CM is not None and len(self.__CM) > 0 and not isinstance(self.__CM, dict) and not ( __CM_shape[1] == max(__U_shape) ):
raise ValueError(
- "Shape characteristic of control model (CM)"+\
- " \"%s\" and control (U) \"%s\" are incompatible."%(__CM_shape,__U_shape))
+ "Shape characteristic of control model (CM)" + \
+ " \"%s\" and control (U) \"%s\" are incompatible."%(__CM_shape, __U_shape))
#
if ("Bounds" in self.__P) \
- and (isinstance(self.__P["Bounds"], list) or isinstance(self.__P["Bounds"], tuple)) \
- and (len(self.__P["Bounds"]) != max(__Xb_shape)):
- raise ValueError("The number \"%s\" of bound pairs for the state (X) components is different of the size \"%s\" of the state itself." \
- %(len(self.__P["Bounds"]),max(__Xb_shape)))
+ and isinstance(self.__P["Bounds"], (list, tuple)) \
+ and (len(self.__P["Bounds"]) != max(__Xb_shape)):
+ if len(self.__P["Bounds"]) > 0:
+ raise ValueError("The number '%s' of bound pairs for the state components is different from the size '%s' of the state (X) itself." \
+ % (len(self.__P["Bounds"]), max(__Xb_shape)))
+ else:
+ self.__P["Bounds"] = None
+ if ("Bounds" in self.__P) \
+ and isinstance(self.__P["Bounds"], (numpy.ndarray, numpy.matrix)) \
+ and (self.__P["Bounds"].shape[0] != max(__Xb_shape)):
+ if self.__P["Bounds"].size > 0:
+ raise ValueError("The number '%s' of bound pairs for the state components is different from the size '%s' of the state (X) itself." \
+ % (self.__P["Bounds"].shape[0], max(__Xb_shape)))
+ else:
+ self.__P["Bounds"] = None
+ #
+ if ("BoxBounds" in self.__P) \
+ and isinstance(self.__P["BoxBounds"], (list, tuple)) \
+ and (len(self.__P["BoxBounds"]) != max(__Xb_shape)):
+ raise ValueError("The number '%s' of bound pairs for the state box components is different from the size '%s' of the state (X) itself." \
+ % (len(self.__P["BoxBounds"]), max(__Xb_shape)))
+ if ("BoxBounds" in self.__P) \
+ and isinstance(self.__P["BoxBounds"], (numpy.ndarray, numpy.matrix)) \
+ and (self.__P["BoxBounds"].shape[0] != max(__Xb_shape)):
+ raise ValueError("The number '%s' of bound pairs for the state box components is different from the size '%s' of the state (X) itself." \
+ % (self.__P["BoxBounds"].shape[0], max(__Xb_shape)))
#
if ("StateBoundsForQuantiles" in self.__P) \
- and (isinstance(self.__P["StateBoundsForQuantiles"], list) or isinstance(self.__P["StateBoundsForQuantiles"], tuple)) \
- and (len(self.__P["StateBoundsForQuantiles"]) != max(__Xb_shape)):
- raise ValueError("The number \"%s\" of bound pairs for the quantile state (X) components is different of the size \"%s\" of the state itself." \
- %(len(self.__P["StateBoundsForQuantiles"]),max(__Xb_shape)))
+ and isinstance(self.__P["StateBoundsForQuantiles"], (list, tuple)) \
+ and (len(self.__P["StateBoundsForQuantiles"]) != max(__Xb_shape)):
+ raise ValueError("The number '%s' of bound pairs for the quantile state components is different from the size '%s' of the state (X) itself." \
+ % (len(self.__P["StateBoundsForQuantiles"]), max(__Xb_shape)))
#
return 1
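
The checks above enforce simple shape conventions: B is square with the size
of the state, R is square with the size of the observation, H maps the state
space to the observation space, and "Bounds" holds one (min, max) pair per
state component. A minimal sketch of consistently shaped inputs (illustrative
values only, not taken from the source)::

    import numpy

    n, p = 3, 2                    # state size, observation size
    Xb = numpy.zeros(n)            # background state
    Y  = numpy.ones(p)             # observation
    B  = numpy.eye(n)              # a priori error covariance: n x n
    R  = 0.1 * numpy.eye(p)        # observation error covariance: p x p
    H  = numpy.ones((p, n))        # observation operator: p x n
    Bounds = [(None, None)] * n    # one (min, max) pair per state component

    assert B.shape == (n, n) and R.shape == (p, p)
    assert H.shape == (p, n) and len(Bounds) == len(Xb)
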
    General action interface class for regulation and its parameters
"""
__slots__ = ("__name", "__P")
- #
+
def __init__(self,
- name = "GenericRegulation",
- asAlgorithm = None,
- asDict = None,
- asScript = None,
- ):
+ name = "GenericRegulation",
+ asAlgorithm = None,
+ asDict = None,
+ asScript = None ):
"""
"""
self.__name = str(name)
self.__P.update( dict(__Dict) )
#
if __Algo is not None:
- self.__P.update( {"Algorithm":str(__Algo)} )
+ self.__P.update( {"Algorithm": str(__Algo)} )
def get(self, key = None):
"Vérifie l'existence d'une clé de variable ou de paramètres"
Classe générale d'interface de type observer
"""
__slots__ = ("__name", "__V", "__O", "__I")
- #
+
def __init__(self,
name = "GenericObserver",
onVariable = None,
asObsObject = None,
withInfo = None,
scheduledBy = None,
- withAlgo = None,
- ):
+ withAlgo = None ):
"""
"""
self.__name = str(name)
if withInfo is None:
self.__I = self.__V
else:
- self.__I = (str(withInfo),)*len(self.__V)
+ self.__I = (str(withInfo),) * len(self.__V)
elif isinstance(onVariable, str):
self.__V = (onVariable,)
if withInfo is None:
def __repr__(self):
"x.__repr__() <==> repr(x)"
- return repr(self.__V)+"\n"+repr(self.__O)
+ return repr(self.__V) + "\n" + repr(self.__O)
def __str__(self):
"x.__str__() <==> str(x)"
- return str(self.__V)+"\n"+str(self.__O)
+ return str(self.__V) + "\n" + str(self.__O)
# ==============================================================================
class UserScript(object):
    General interface class for user script text
"""
__slots__ = ("__name", "__F")
- #
+
def __init__(self,
name = "GenericUserScript",
asTemplate = None,
asString = None,
- asScript = None,
- ):
+ asScript = None ):
"""
"""
self.__name = str(name)
    General interface class for storing external parameters
"""
__slots__ = ("__name", "__P")
- #
+
def __init__(self,
- name = "GenericExternalParameters",
- asDict = None,
- asScript = None,
- ):
+ name = "GenericExternalParameters",
+ asDict = None,
+ asScript = None ):
"""
"""
self.__name = str(name)
#
self.updateParameters( asDict, asScript )
- def updateParameters(self,
- asDict = None,
- asScript = None,
- ):
+ def updateParameters(self, asDict = None, asScript = None ):
"Mise à jour des paramètres"
if asDict is None and asScript is not None:
__Dict = Interfaces.ImportFromScript(asScript).getvalue( self.__name, "ExternalParameters" )
__slots__ = (
"__name", "__check", "__V", "__T", "__is_vector", "__is_series",
"shape", "size",
- )
- #
+ )
+
def __init__(self,
name = "GenericVector",
asVector = None,
colNames = None,
colMajor = False,
scheduledBy = None,
- toBeChecked = False,
- ):
+ toBeChecked = False ):
"""
        Allows a vector to be defined:
        - asVector : data input, as a vector compatible with the
if __Vector is not None:
self.__is_vector = True
if isinstance(__Vector, str):
- __Vector = PlatformInfo.strvect2liststr( __Vector )
- self.__V = numpy.ravel(numpy.asarray( __Vector, dtype=float )).reshape((-1,1))
+ __Vector = PlatformInfo.strvect2liststr( __Vector )
+ self.__V = numpy.ravel(numpy.asarray( __Vector, dtype=float )).reshape((-1, 1))
self.shape = self.__V.shape
self.size = self.__V.size
elif __Series is not None:
else:
self.shape = self.__V.shape()
if len(self.shape) == 1:
- self.shape = (self.shape[0],1)
- self.size = self.shape[0] * self.shape[1]
+ self.shape = (self.shape[0], 1)
+ self.size = self.shape[0] * self.shape[1]
else:
raise ValueError(
- "The %s object is improperly defined or undefined,"%self.__name+\
- " it requires at minima either a vector, a list/tuple of"+\
+ "The %s object is improperly defined or undefined,"%self.__name + \
+ " it requires at minima either a vector, a list/tuple of" + \
" vectors or a persistent object. Please check your vector input.")
#
if scheduledBy is not None:
__slots__ = (
"__name", "__check", "__C", "__is_scalar", "__is_vector", "__is_matrix",
"__is_object", "shape", "size",
- )
- #
+ )
+
def __init__(self,
name = "GenericCovariance",
asCovariance = None,
asEyeByVector = None,
asCovObject = None,
asScript = None,
- toBeChecked = False,
- ):
+ toBeChecked = False ):
"""
        Allows a covariance to be defined:
        - asCovariance : data input, as a matrix compatible with
if __Scalar is not None:
if isinstance(__Scalar, str):
__Scalar = PlatformInfo.strvect2liststr( __Scalar )
- if len(__Scalar) > 0: __Scalar = __Scalar[0]
+ if len(__Scalar) > 0:
+ __Scalar = __Scalar[0]
if numpy.array(__Scalar).size != 1:
raise ValueError(
- " The diagonal multiplier given to define a sparse matrix is"+\
- " not a unique scalar value.\n Its actual measured size is"+\
+ " The diagonal multiplier given to define a sparse matrix is" + \
+ " not a unique scalar value.\n Its actual measured size is" + \
" %i. Please check your scalar input."%numpy.array(__Scalar).size)
self.__is_scalar = True
self.__C = numpy.abs( float(__Scalar) )
- self.shape = (0,0)
+ self.shape = (0, 0)
self.size = 0
elif __Vector is not None:
if isinstance(__Vector, str):
__Vector = PlatformInfo.strvect2liststr( __Vector )
self.__is_vector = True
self.__C = numpy.abs( numpy.ravel(numpy.asarray( __Vector, dtype=float )) )
- self.shape = (self.__C.size,self.__C.size)
+ self.shape = (self.__C.size, self.__C.size)
self.size = self.__C.size**2
elif __Matrix is not None:
self.__is_matrix = True
elif __Object is not None:
self.__is_object = True
self.__C = __Object
- for at in ("getT","getI","diag","trace","__add__","__sub__","__neg__","__matmul__","__mul__","__rmatmul__","__rmul__"):
- if not hasattr(self.__C,at):
- raise ValueError("The matrix given for %s as an object has no attribute \"%s\". Please check your object input."%(self.__name,at))
- if hasattr(self.__C,"shape"):
+ for at in ("getT", "getI", "diag", "trace", "__add__", "__sub__", "__neg__", "__matmul__", "__mul__", "__rmatmul__", "__rmul__"):
+ if not hasattr(self.__C, at):
+ raise ValueError("The matrix given for %s as an object has no attribute \"%s\". Please check your object input."%(self.__name, at))
+ if hasattr(self.__C, "shape"):
self.shape = self.__C.shape
else:
- self.shape = (0,0)
- if hasattr(self.__C,"size"):
+ self.shape = (0, 0)
+ if hasattr(self.__C, "size"):
self.size = self.__C.size
else:
self.size = 0
if self.__C is None:
raise UnboundLocalError("%s covariance matrix value has not been set!"%(self.__name,))
if self.ismatrix() and min(self.shape) != max(self.shape):
- raise ValueError("The given matrix for %s is not a square one, its shape is %s. Please check your matrix input."%(self.__name,self.shape))
+ raise ValueError("The given matrix for %s is not a square one, its shape is %s. Please check your matrix input."%(self.__name, self.shape))
if self.isobject() and min(self.shape) != max(self.shape):
- raise ValueError("The matrix given for \"%s\" is not a square one, its shape is %s. Please check your object input."%(self.__name,self.shape))
+ raise ValueError("The matrix given for \"%s\" is not a square one, its shape is %s. Please check your object input."%(self.__name, self.shape))
if self.isscalar() and self.__C <= 0:
- raise ValueError("The \"%s\" covariance matrix is not positive-definite. Please check your scalar input %s."%(self.__name,self.__C))
+ raise ValueError("The \"%s\" covariance matrix is not positive-definite. Please check your scalar input %s."%(self.__name, self.__C))
if self.isvector() and (self.__C <= 0).any():
raise ValueError("The \"%s\" covariance matrix is not positive-definite. Please check your vector input."%(self.__name,))
if self.ismatrix() and (self.__check or logging.getLogger().level < logging.WARNING):
def getI(self):
"Inversion"
- if self.ismatrix():
- return Covariance(self.__name+"I", asCovariance = numpy.linalg.inv(self.__C) )
+ if self.ismatrix():
+ return Covariance(self.__name + "I", asCovariance = numpy.linalg.inv(self.__C) )
elif self.isvector():
- return Covariance(self.__name+"I", asEyeByVector = 1. / self.__C )
+ return Covariance(self.__name + "I", asEyeByVector = 1. / self.__C )
elif self.isscalar():
- return Covariance(self.__name+"I", asEyeByScalar = 1. / self.__C )
- elif self.isobject() and hasattr(self.__C,"getI"):
- return Covariance(self.__name+"I", asCovObject = self.__C.getI() )
+ return Covariance(self.__name + "I", asEyeByScalar = 1. / self.__C )
+ elif self.isobject() and hasattr(self.__C, "getI"):
+ return Covariance(self.__name + "I", asCovObject = self.__C.getI() )
else:
- return None # Indispensable
+ return None # Indispensable
def getT(self):
"Transposition"
- if self.ismatrix():
- return Covariance(self.__name+"T", asCovariance = self.__C.T )
+ if self.ismatrix():
+ return Covariance(self.__name + "T", asCovariance = self.__C.T )
elif self.isvector():
- return Covariance(self.__name+"T", asEyeByVector = self.__C )
+ return Covariance(self.__name + "T", asEyeByVector = self.__C )
elif self.isscalar():
- return Covariance(self.__name+"T", asEyeByScalar = self.__C )
- elif self.isobject() and hasattr(self.__C,"getT"):
- return Covariance(self.__name+"T", asCovObject = self.__C.getT() )
+ return Covariance(self.__name + "T", asEyeByScalar = self.__C )
+ elif self.isobject() and hasattr(self.__C, "getT"):
+ return Covariance(self.__name + "T", asCovObject = self.__C.getT() )
else:
raise AttributeError("the %s covariance matrix has no getT attribute."%(self.__name,))
def cholesky(self):
"Décomposition de Cholesky"
- if self.ismatrix():
- return Covariance(self.__name+"C", asCovariance = numpy.linalg.cholesky(self.__C) )
+ if self.ismatrix():
+ return Covariance(self.__name + "C", asCovariance = numpy.linalg.cholesky(self.__C) )
elif self.isvector():
- return Covariance(self.__name+"C", asEyeByVector = numpy.sqrt( self.__C ) )
+ return Covariance(self.__name + "C", asEyeByVector = numpy.sqrt( self.__C ) )
elif self.isscalar():
- return Covariance(self.__name+"C", asEyeByScalar = numpy.sqrt( self.__C ) )
- elif self.isobject() and hasattr(self.__C,"cholesky"):
- return Covariance(self.__name+"C", asCovObject = self.__C.cholesky() )
+ return Covariance(self.__name + "C", asEyeByScalar = numpy.sqrt( self.__C ) )
+ elif self.isobject() and hasattr(self.__C, "cholesky"):
+ return Covariance(self.__name + "C", asCovObject = self.__C.cholesky() )
else:
raise AttributeError("the %s covariance matrix has no cholesky attribute."%(self.__name,))
def choleskyI(self):
"Inversion de la décomposition de Cholesky"
- if self.ismatrix():
- return Covariance(self.__name+"H", asCovariance = numpy.linalg.inv(numpy.linalg.cholesky(self.__C)) )
+ if self.ismatrix():
+ return Covariance(self.__name + "H", asCovariance = numpy.linalg.inv(numpy.linalg.cholesky(self.__C)) )
elif self.isvector():
- return Covariance(self.__name+"H", asEyeByVector = 1.0 / numpy.sqrt( self.__C ) )
+ return Covariance(self.__name + "H", asEyeByVector = 1.0 / numpy.sqrt( self.__C ) )
elif self.isscalar():
- return Covariance(self.__name+"H", asEyeByScalar = 1.0 / numpy.sqrt( self.__C ) )
- elif self.isobject() and hasattr(self.__C,"choleskyI"):
- return Covariance(self.__name+"H", asCovObject = self.__C.choleskyI() )
+ return Covariance(self.__name + "H", asEyeByScalar = 1.0 / numpy.sqrt( self.__C ) )
+ elif self.isobject() and hasattr(self.__C, "choleskyI"):
+ return Covariance(self.__name + "H", asCovObject = self.__C.choleskyI() )
else:
raise AttributeError("the %s covariance matrix has no choleskyI attribute."%(self.__name,))
def sqrtm(self):
"Racine carrée matricielle"
- if self.ismatrix():
+ if self.ismatrix():
import scipy
- return Covariance(self.__name+"C", asCovariance = numpy.real(scipy.linalg.sqrtm(self.__C)) )
+ return Covariance(self.__name + "C", asCovariance = numpy.real(scipy.linalg.sqrtm(self.__C)) )
elif self.isvector():
- return Covariance(self.__name+"C", asEyeByVector = numpy.sqrt( self.__C ) )
+ return Covariance(self.__name + "C", asEyeByVector = numpy.sqrt( self.__C ) )
elif self.isscalar():
- return Covariance(self.__name+"C", asEyeByScalar = numpy.sqrt( self.__C ) )
- elif self.isobject() and hasattr(self.__C,"sqrtm"):
- return Covariance(self.__name+"C", asCovObject = self.__C.sqrtm() )
+ return Covariance(self.__name + "C", asEyeByScalar = numpy.sqrt( self.__C ) )
+ elif self.isobject() and hasattr(self.__C, "sqrtm"):
+ return Covariance(self.__name + "C", asCovObject = self.__C.sqrtm() )
else:
raise AttributeError("the %s covariance matrix has no sqrtm attribute."%(self.__name,))
def sqrtmI(self):
"Inversion de la racine carrée matricielle"
- if self.ismatrix():
+ if self.ismatrix():
import scipy
- return Covariance(self.__name+"H", asCovariance = numpy.linalg.inv(numpy.real(scipy.linalg.sqrtm(self.__C))) )
+ return Covariance(self.__name + "H", asCovariance = numpy.linalg.inv(numpy.real(scipy.linalg.sqrtm(self.__C))) )
elif self.isvector():
- return Covariance(self.__name+"H", asEyeByVector = 1.0 / numpy.sqrt( self.__C ) )
+ return Covariance(self.__name + "H", asEyeByVector = 1.0 / numpy.sqrt( self.__C ) )
elif self.isscalar():
- return Covariance(self.__name+"H", asEyeByScalar = 1.0 / numpy.sqrt( self.__C ) )
- elif self.isobject() and hasattr(self.__C,"sqrtmI"):
- return Covariance(self.__name+"H", asCovObject = self.__C.sqrtmI() )
+ return Covariance(self.__name + "H", asEyeByScalar = 1.0 / numpy.sqrt( self.__C ) )
+ elif self.isobject() and hasattr(self.__C, "sqrtmI"):
+ return Covariance(self.__name + "H", asCovObject = self.__C.sqrtmI() )
else:
raise AttributeError("the %s covariance matrix has no sqrtmI attribute."%(self.__name,))
def diag(self, msize=None):
"Diagonale de la matrice"
- if self.ismatrix():
+ if self.ismatrix():
return numpy.diag(self.__C)
elif self.isvector():
return self.__C
raise ValueError("the size of the %s covariance matrix has to be given in case of definition as a scalar over the diagonal."%(self.__name,))
else:
return self.__C * numpy.ones(int(msize))
- elif self.isobject() and hasattr(self.__C,"diag"):
+ elif self.isobject() and hasattr(self.__C, "diag"):
return self.__C.diag()
else:
raise AttributeError("the %s covariance matrix has no diag attribute."%(self.__name,))
def trace(self, msize=None):
"Trace de la matrice"
- if self.ismatrix():
+ if self.ismatrix():
return numpy.trace(self.__C)
elif self.isvector():
return float(numpy.sum(self.__C))
def asfullmatrix(self, msize=None):
"Matrice pleine"
- if self.ismatrix():
+ if self.ismatrix():
return numpy.asarray(self.__C, dtype=float)
elif self.isvector():
return numpy.asarray( numpy.diag(self.__C), dtype=float )
raise ValueError("the size of the %s covariance matrix has to be given in case of definition as a scalar over the diagonal."%(self.__name,))
else:
return numpy.asarray( self.__C * numpy.eye(int(msize)), dtype=float )
- elif self.isobject() and hasattr(self.__C,"asfullmatrix"):
+ elif self.isobject() and hasattr(self.__C, "asfullmatrix"):
return self.__C.asfullmatrix()
else:
raise AttributeError("the %s covariance matrix has no asfullmatrix attribute."%(self.__name,))
def __add__(self, other):
"x.__add__(y) <==> x+y"
- if self.ismatrix() or self.isobject():
+ if self.ismatrix() or self.isobject():
return self.__C + numpy.asmatrix(other)
elif self.isvector() or self.isscalar():
_A = numpy.asarray(other)
if len(_A.shape) == 1:
- _A.reshape((-1,1))[::2] += self.__C
+ _A.reshape((-1, 1))[::2] += self.__C
else:
- _A.reshape(_A.size)[::_A.shape[1]+1] += self.__C
+ _A.reshape(_A.size)[::_A.shape[1] + 1] += self.__C
return numpy.asmatrix(_A)
def __radd__(self, other):
"x.__radd__(y) <==> y+x"
- raise NotImplementedError("%s covariance matrix __radd__ method not available for %s type!"%(self.__name,type(other)))
+ raise NotImplementedError("%s covariance matrix __radd__ method not available for %s type!"%(self.__name, type(other)))
def __sub__(self, other):
"x.__sub__(y) <==> x-y"
- if self.ismatrix() or self.isobject():
+ if self.ismatrix() or self.isobject():
return self.__C - numpy.asmatrix(other)
elif self.isvector() or self.isscalar():
_A = numpy.asarray(other)
- _A.reshape(_A.size)[::_A.shape[1]+1] = self.__C - _A.reshape(_A.size)[::_A.shape[1]+1]
+ _A.reshape(_A.size)[::_A.shape[1] + 1] = self.__C - _A.reshape(_A.size)[::_A.shape[1] + 1]
return numpy.asmatrix(_A)
def __rsub__(self, other):
"x.__rsub__(y) <==> y-x"
- raise NotImplementedError("%s covariance matrix __rsub__ method not available for %s type!"%(self.__name,type(other)))
+ raise NotImplementedError("%s covariance matrix __rsub__ method not available for %s type!"%(self.__name, type(other)))
def __neg__(self):
"x.__neg__() <==> -x"
def __matmul__(self, other):
"x.__mul__(y) <==> x@y"
- if self.ismatrix() and isinstance(other, (int, float)):
+ if self.ismatrix() and isinstance(other, (int, float)):
return numpy.asarray(self.__C) * other
elif self.ismatrix() and isinstance(other, (list, numpy.matrix, numpy.ndarray, tuple)):
- if numpy.ravel(other).size == self.shape[1]: # Vecteur
+            if numpy.ravel(other).size == self.shape[1]:  # Vector
return numpy.ravel(self.__C @ numpy.ravel(other))
- elif numpy.asarray(other).shape[0] == self.shape[1]: # Matrice
+            elif numpy.asarray(other).shape[0] == self.shape[1]:  # Matrix
return numpy.asarray(self.__C) @ numpy.asarray(other)
else:
- raise ValueError("operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape,numpy.asarray(other).shape,self.__name))
+ raise ValueError("operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape, numpy.asarray(other).shape, self.__name))
elif self.isvector() and isinstance(other, (list, numpy.matrix, numpy.ndarray, tuple)):
- if numpy.ravel(other).size == self.shape[1]: # Vecteur
+            if numpy.ravel(other).size == self.shape[1]:  # Vector
return numpy.ravel(self.__C) * numpy.ravel(other)
- elif numpy.asarray(other).shape[0] == self.shape[1]: # Matrice
- return numpy.ravel(self.__C).reshape((-1,1)) * numpy.asarray(other)
+            elif numpy.asarray(other).shape[0] == self.shape[1]:  # Matrix
+ return numpy.ravel(self.__C).reshape((-1, 1)) * numpy.asarray(other)
else:
- raise ValueError("operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape,numpy.ravel(other).shape,self.__name))
- elif self.isscalar() and isinstance(other,numpy.matrix):
+ raise ValueError("operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape, numpy.ravel(other).shape, self.__name))
+ elif self.isscalar() and isinstance(other, numpy.matrix):
return numpy.asarray(self.__C * other)
elif self.isscalar() and isinstance(other, (list, numpy.ndarray, tuple)):
if len(numpy.asarray(other).shape) == 1 or numpy.asarray(other).shape[1] == 1 or numpy.asarray(other).shape[0] == 1:
elif self.isobject():
return self.__C.__matmul__(other)
else:
- raise NotImplementedError("%s covariance matrix __matmul__ method not available for %s type!"%(self.__name,type(other)))
+ raise NotImplementedError("%s covariance matrix __matmul__ method not available for %s type!"%(self.__name, type(other)))
def __mul__(self, other):
"x.__mul__(y) <==> x*y"
- if self.ismatrix() and isinstance(other, (int, numpy.matrix, float)):
+ if self.ismatrix() and isinstance(other, (int, numpy.matrix, float)):
return self.__C * other
elif self.ismatrix() and isinstance(other, (list, numpy.ndarray, tuple)):
- if numpy.ravel(other).size == self.shape[1]: # Vecteur
+            if numpy.ravel(other).size == self.shape[1]:  # Vector
return self.__C * numpy.asmatrix(numpy.ravel(other)).T
- elif numpy.asmatrix(other).shape[0] == self.shape[1]: # Matrice
+            elif numpy.asmatrix(other).shape[0] == self.shape[1]:  # Matrix
return self.__C * numpy.asmatrix(other)
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape,numpy.asmatrix(other).shape,self.__name))
+ "operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape, numpy.asmatrix(other).shape, self.__name))
elif self.isvector() and isinstance(other, (list, numpy.matrix, numpy.ndarray, tuple)):
- if numpy.ravel(other).size == self.shape[1]: # Vecteur
+            if numpy.ravel(other).size == self.shape[1]:  # Vector
return numpy.asmatrix(self.__C * numpy.ravel(other)).T
- elif numpy.asmatrix(other).shape[0] == self.shape[1]: # Matrice
+            elif numpy.asmatrix(other).shape[0] == self.shape[1]:  # Matrix
return numpy.asmatrix((self.__C * (numpy.asarray(other).transpose())).transpose())
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape,numpy.ravel(other).shape,self.__name))
- elif self.isscalar() and isinstance(other,numpy.matrix):
+ "operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape, numpy.ravel(other).shape, self.__name))
+ elif self.isscalar() and isinstance(other, numpy.matrix):
return self.__C * other
elif self.isscalar() and isinstance(other, (list, numpy.ndarray, tuple)):
if len(numpy.asarray(other).shape) == 1 or numpy.asarray(other).shape[1] == 1 or numpy.asarray(other).shape[0] == 1:
return self.__C.__mul__(other)
else:
raise NotImplementedError(
- "%s covariance matrix __mul__ method not available for %s type!"%(self.__name,type(other)))
+ "%s covariance matrix __mul__ method not available for %s type!"%(self.__name, type(other)))
def __rmatmul__(self, other):
"x.__rmul__(y) <==> y@x"
if self.ismatrix() and isinstance(other, (int, numpy.matrix, float)):
return other * self.__C
elif self.ismatrix() and isinstance(other, (list, numpy.ndarray, tuple)):
- if numpy.ravel(other).size == self.shape[1]: # Vecteur
+            if numpy.ravel(other).size == self.shape[1]:  # Vector
return numpy.asmatrix(numpy.ravel(other)) * self.__C
- elif numpy.asmatrix(other).shape[0] == self.shape[1]: # Matrice
+            elif numpy.asmatrix(other).shape[0] == self.shape[1]:  # Matrix
return numpy.asmatrix(other) * self.__C
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.asmatrix(other).shape,self.shape,self.__name))
- elif self.isvector() and isinstance(other,numpy.matrix):
- if numpy.ravel(other).size == self.shape[0]: # Vecteur
+ "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.asmatrix(other).shape, self.shape, self.__name))
+ elif self.isvector() and isinstance(other, numpy.matrix):
+            if numpy.ravel(other).size == self.shape[0]:  # Vector
return numpy.asmatrix(numpy.ravel(other) * self.__C)
- elif numpy.asmatrix(other).shape[1] == self.shape[0]: # Matrice
+            elif numpy.asmatrix(other).shape[1] == self.shape[0]:  # Matrix
return numpy.asmatrix(numpy.array(other) * self.__C)
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.ravel(other).shape,self.shape,self.__name))
- elif self.isscalar() and isinstance(other,numpy.matrix):
+ "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.ravel(other).shape, self.shape, self.__name))
+ elif self.isscalar() and isinstance(other, numpy.matrix):
return other * self.__C
elif self.isobject():
return self.__C.__rmatmul__(other)
else:
raise NotImplementedError(
- "%s covariance matrix __rmatmul__ method not available for %s type!"%(self.__name,type(other)))
+ "%s covariance matrix __rmatmul__ method not available for %s type!"%(self.__name, type(other)))
def __rmul__(self, other):
"x.__rmul__(y) <==> y*x"
if self.ismatrix() and isinstance(other, (int, numpy.matrix, float)):
return other * self.__C
elif self.ismatrix() and isinstance(other, (list, numpy.ndarray, tuple)):
- if numpy.ravel(other).size == self.shape[1]: # Vecteur
+            if numpy.ravel(other).size == self.shape[1]:  # Vector
return numpy.asmatrix(numpy.ravel(other)) * self.__C
- elif numpy.asmatrix(other).shape[0] == self.shape[1]: # Matrice
+            elif numpy.asmatrix(other).shape[0] == self.shape[1]:  # Matrix
return numpy.asmatrix(other) * self.__C
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.asmatrix(other).shape,self.shape,self.__name))
- elif self.isvector() and isinstance(other,numpy.matrix):
- if numpy.ravel(other).size == self.shape[0]: # Vecteur
+ "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.asmatrix(other).shape, self.shape, self.__name))
+ elif self.isvector() and isinstance(other, numpy.matrix):
+            if numpy.ravel(other).size == self.shape[0]:  # Vector
return numpy.asmatrix(numpy.ravel(other) * self.__C)
- elif numpy.asmatrix(other).shape[1] == self.shape[0]: # Matrice
+            elif numpy.asmatrix(other).shape[1] == self.shape[0]:  # Matrix
return numpy.asmatrix(numpy.array(other) * self.__C)
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.ravel(other).shape,self.shape,self.__name))
- elif self.isscalar() and isinstance(other,numpy.matrix):
+ "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.ravel(other).shape, self.shape, self.__name))
+ elif self.isscalar() and isinstance(other, numpy.matrix):
return other * self.__C
- elif self.isscalar() and isinstance(other,float):
+ elif self.isscalar() and isinstance(other, float):
return other * self.__C
elif self.isobject():
return self.__C.__rmul__(other)
else:
raise NotImplementedError(
- "%s covariance matrix __rmul__ method not available for %s type!"%(self.__name,type(other)))
+ "%s covariance matrix __rmul__ method not available for %s type!"%(self.__name, type(other)))
def __len__(self):
"x.__len__() <==> len(x)"
    Creates an observer function from its source text
"""
__slots__ = ("__corps")
- #
+
def __init__(self, corps=""):
self.__corps = corps
- def func(self,var,info):
+
+ def func(self, var, info):
"Fonction d'observation"
exec(self.__corps)
+
def getfunc(self):
"Restitution du pointeur de fonction dans l'objet"
return self.func
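
For reference, the text received by this class is executed as the body of the
observer, with the current variable bound to ``var`` and its description to
``info``. A standalone sketch of the same mechanism (hypothetical observer
text, not the class as shipped)::

    class TextObserver(object):
        "Builds an observer function from its source text (sketch)"
        def __init__(self, corps=""):
            self.__corps = corps
        def func(self, var, info):
            "Observer function: the text sees the local names var and info"
            exec(self.__corps)
        def getfunc(self):
            return self.func

    obs = TextObserver("print(info, ':', var)").getfunc()
    obs([1., 2., 3.], "CurrentState")   # prints: CurrentState : [1.0, 2.0, 3.0]
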
__slots__ = (
"__name", "__objname", "__logSerie", "__switchoff", "__viewers",
"__loaders",
- )
- #
+ )
+
def __init__(self, __name="", __objname="case", __addViewers=None, __addLoaders=None):
self.__name = str(__name)
self.__objname = str(__objname)
self.__logSerie = []
self.__switchoff = False
self.__viewers = {
- "TUI" :Interfaces._TUIViewer,
- "SCD" :Interfaces._SCDViewer,
- "YACS":Interfaces._YACSViewer,
- "SimpleReportInRst":Interfaces._SimpleReportInRstViewer,
- "SimpleReportInHtml":Interfaces._SimpleReportInHtmlViewer,
- "SimpleReportInPlainTxt":Interfaces._SimpleReportInPlainTxtViewer,
- }
+ "TUI": Interfaces._TUIViewer,
+ "SCD": Interfaces._SCDViewer,
+ "YACS": Interfaces._YACSViewer,
+ "SimpleReportInRst": Interfaces._SimpleReportInRstViewer,
+ "SimpleReportInHtml": Interfaces._SimpleReportInHtmlViewer,
+ "SimpleReportInPlainTxt": Interfaces._SimpleReportInPlainTxtViewer,
+ }
self.__loaders = {
- "TUI" :Interfaces._TUIViewer,
- "COM" :Interfaces._COMViewer,
- }
+ "TUI": Interfaces._TUIViewer,
+ "COM": Interfaces._COMViewer,
+ }
if __addViewers is not None:
self.__viewers.update(dict(__addViewers))
if __addLoaders is not None:
def register(self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False):
"Enregistrement d'une commande individuelle"
if __command is not None and __keys is not None and __local is not None and not self.__switchoff:
- if "self" in __keys: __keys.remove("self")
+ if "self" in __keys:
+ __keys.remove("self")
self.__logSerie.append( (str(__command), __keys, __local, __pre, __switchoff) )
if __switchoff:
self.__switchoff = True
_extraArguments = None,
_sFunction = lambda x: x,
_mpEnabled = False,
- _mpWorkers = None,
- ):
+ _mpWorkers = None ):
"""
    For an ordered list of input vectors, returns the corresponding list of
    output values of the function given as argument
import time
from daCore import PlatformInfo
-LOGFILE = os.path.join(os.path.abspath(os.curdir),"AdaoOutputLogfile.log")
+LOGFILE = os.path.join(os.path.abspath(os.curdir), "AdaoOutputLogfile.log")
# ==============================================================================
class ExtendedLogging(object):
    output to file
"""
__slots__ = ("__logfile")
- #
+
def __init__(self, level=logging.WARNING):
"""
        Initializes console logging for ALL message levels.
"""
- if sys.version_info.major <= 3 and sys.version_info.minor < 8:
+ if sys.version_info.major <= 3 and sys.version_info.minor < 8:
if logging.getLogger().hasHandlers():
while logging.getLogger().hasHandlers():
logging.getLogger().removeHandler( logging.getLogger().handlers[-1] )
format = '%(levelname)-8s %(message)s',
level = level,
stream = sys.stdout,
- )
- else: # Actif lorsque Python > 3.7
+ )
+        else:  # Active when Python > 3.7
logging.basicConfig(
format = '%(levelname)-8s %(message)s',
level = level,
stream = sys.stdout,
force = True,
- )
+ )
self.__logfile = None
#
        # Initialize the logging display
p = PlatformInfo.PlatformInfo()
#
logging.info( "--------------------------------------------------" )
- logging.info( p.getName()+" version "+p.getVersion() )
+ logging.info( p.getName() + " version " + p.getVersion() )
logging.info( "--------------------------------------------------" )
logging.info( "Library availability:" )
logging.info( "- Python.......: True" )
- logging.info( "- Numpy........: "+str(PlatformInfo.has_numpy) )
- logging.info( "- Scipy........: "+str(PlatformInfo.has_scipy) )
- logging.info( "- Matplotlib...: "+str(PlatformInfo.has_matplotlib) )
- logging.info( "- Gnuplot......: "+str(PlatformInfo.has_gnuplot) )
- logging.info( "- Sphinx.......: "+str(PlatformInfo.has_sphinx) )
- logging.info( "- Nlopt........: "+str(PlatformInfo.has_nlopt) )
+ logging.info( "- Numpy........: " + str(PlatformInfo.has_numpy) )
+ logging.info( "- Scipy........: " + str(PlatformInfo.has_scipy) )
+ logging.info( "- Matplotlib...: " + str(PlatformInfo.has_matplotlib) )
+ logging.info( "- Gnuplot......: " + str(PlatformInfo.has_gnuplot) )
+ logging.info( "- Sphinx.......: " + str(PlatformInfo.has_sphinx) )
+ logging.info( "- Nlopt........: " + str(PlatformInfo.has_nlopt) )
logging.info( "Library versions:" )
- logging.info( "- Python.......: "+p.getPythonVersion() )
- logging.info( "- Numpy........: "+p.getNumpyVersion() )
- logging.info( "- Scipy........: "+p.getScipyVersion() )
- logging.info( "- Matplotlib...: "+p.getMatplotlibVersion() )
- logging.info( "- Gnuplot......: "+p.getGnuplotVersion() )
- logging.info( "- Sphinx.......: "+p.getSphinxVersion() )
- logging.info( "- Nlopt........: "+p.getNloptVersion() )
+ logging.info( "- Python.......: " + p.getPythonVersion() )
+ logging.info( "- Numpy........: " + p.getNumpyVersion() )
+ logging.info( "- Scipy........: " + p.getScipyVersion() )
+ logging.info( "- Matplotlib...: " + p.getMatplotlibVersion() )
+ logging.info( "- Gnuplot......: " + p.getGnuplotVersion() )
+ logging.info( "- Sphinx.......: " + p.getSphinxVersion() )
+ logging.info( "- Nlopt........: " + p.getNloptVersion() )
logging.info( "" )
def setLogfile(self, filename=LOGFILE, filemode="w", level=logging.NOTSET):
def logtimer(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
-        start = time.clock() # time.time()
+        start = time.perf_counter()  # time.clock() was removed in Python 3.8
        result = f(*args, **kwargs)
-        end = time.clock() # time.time()
+        end = time.perf_counter()  # time.clock() was removed in Python 3.8
        msg = 'TIMER Elapsed time of user function "{}": {:.3f}s'
-        logging.debug(msg.format(f.__name__, end-start))
+        logging.debug(msg.format(f.__name__, end - start))
return result
return wrapper
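
Note that ``functools.wraps`` is what keeps ``f.__name__`` meaningful in the
log message above. A quick sketch of the effect (illustrative names)::

    import functools

    def passthrough(f):
        @functools.wraps(f)            # copies __name__, __doc__, etc.
        def wrapper(*args, **kwargs):
            return f(*args, **kwargs)
        return wrapper

    @passthrough
    def DirectOperator(x):
        return x

    print(DirectOperator.__name__)     # DirectOperator, not "wrapper"
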
__slots__ = (
"_name", "_objname", "_lineSerie", "_switchoff", "_content",
"_numobservers", "_object", "_missing",
- )
- #
+ )
+
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entete"
self._name = str(__name)
self._content = __content
self._object = __object
self._missing = """raise ValueError("This case requires beforehand to import or define the variable named <%s>. When corrected, remove this command, correct and uncomment the following one.")\n# """
- #------------------------
+
def _append(self, *args):
"Transformation d'une commande individuelle en un enregistrement"
raise NotImplementedError()
+
def _extract(self, *args):
"Transformation d'enregistrement(s) en commande(s) individuelle(s)"
raise NotImplementedError()
- #------------------------------
+
def _initialize(self, __multilines):
"Permet des pré-conversions automatiques simples de commandes ou clés"
__translation = {
- "Study_name" :"StudyName",
- "Study_repertory" :"StudyRepertory",
- "MaximumNumberOfSteps":"MaximumNumberOfIterations",
- "FunctionDict" :"ScriptWithSwitch",
- "FUNCTIONDICT_FILE" :"SCRIPTWITHSWITCH_FILE",
+ "Study_name" : "StudyName", # noqa: E203
+ "Study_repertory" : "StudyRepertory", # noqa: E203
+ "MaximumNumberOfSteps": "MaximumNumberOfIterations",
+ "FunctionDict" : "ScriptWithSwitch", # noqa: E203
+ "FUNCTIONDICT_FILE" : "SCRIPTWITHSWITCH_FILE", # noqa: E203
}
- for k,v in __translation.items():
- __multilines = __multilines.replace(k,v)
+ for k, v in __translation.items():
+ __multilines = __multilines.replace(k, v)
return __multilines
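
For instance, the table above turns a legacy key into its current name before
any parsing takes place (hedged sketch of the same textual substitution)::

    translation = {
        "Study_name"          : "StudyName",
        "MaximumNumberOfSteps": "MaximumNumberOfIterations",
    }
    text = "Study_name='case', MaximumNumberOfSteps=15,"
    for old, new in translation.items():
        text = text.replace(old, new)
    print(text)   # StudyName='case', MaximumNumberOfIterations=15,
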
- #
+
def _finalize(self, __upa=None):
"Enregistrement du final"
__hasNotExecute = True
for __l in self._lineSerie:
- if "%s.execute"%(self._objname,) in __l: __hasNotExecute = False
+ if "%s.execute"%(self._objname,) in __l:
+ __hasNotExecute = False
if __hasNotExecute:
self._lineSerie.append("%s.execute()"%(self._objname,))
- if __upa is not None and len(__upa)>0:
- __upa = __upa.replace("ADD",str(self._objname))
+ if __upa is not None and len(__upa) > 0:
+ __upa = __upa.replace("ADD", str(self._objname))
self._lineSerie.append(__upa)
- #
+
def _addLine(self, line=""):
"Ajoute un enregistrement individuel"
self._lineSerie.append(line)
- #
+
def _get_objname(self):
return self._objname
- #
+
def dump(self, __filename=None, __upa=None):
"Restitution normalisée des commandes"
self._finalize(__upa)
__text = "\n".join(self._lineSerie)
- __text +="\n"
+ __text += "\n"
if __filename is not None:
__file = os.path.abspath(__filename)
- __fid = open(__file,"w")
+ __fid = open(__file, "w")
__fid.write(__text)
__fid.close()
return __text
- #
+
def load(self, __filename=None, __content=None, __object=None):
"Chargement normalisé des commandes"
if __filename is not None and os.path.exists(__filename):
elif __object is not None and type(__object) is dict:
self._object = copy.deepcopy(__object)
else:
- pass # use "self._content" from initialization
+ pass # use "self._content" from initialization
__commands = self._extract(self._content, self._object)
return __commands
    Builds the commands of an ADAO TUI case (Case<->TUI)
"""
__slots__ = ()
- #
+
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entete"
GenericCaseViewer.__init__(self, __name, __objname, __content, __object)
if self._content is not None:
for command in self._content:
self._append(*command)
- #
+
def _append(self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False):
"Transformation d'une commande individuelle en un enregistrement"
if __command is not None and __keys is not None and __local is not None:
__text = ""
if __pre is not None:
__text += "%s = "%__pre
- __text += "%s.%s( "%(self._objname,str(__command))
- if "self" in __keys: __keys.remove("self")
- if __command not in ("set","get") and "Concept" in __keys: __keys.remove("Concept")
+ __text += "%s.%s( "%(self._objname, str(__command))
+ if "self" in __keys:
+ __keys.remove("self")
+ if __command not in ("set", "get") and "Concept" in __keys:
+ __keys.remove("Concept")
for k in __keys:
- if k not in __local: continue
+ if k not in __local: continue # noqa: E701
__v = __local[k]
- if __v is None: continue
- if k == "Checked" and not __v: continue
- if k == "Stored" and not __v: continue
- if k == "ColMajor" and not __v: continue
- if k == "InputFunctionAsMulti" and not __v: continue
- if k == "nextStep" and not __v: continue
- if k == "PerformanceProfile" and __v: continue
- if k == "noDetails": continue
- if isinstance(__v,Persistence.Persistence): __v = __v.values()
- if callable(__v): __text = self._missing%__v.__name__+__text
- if isinstance(__v,dict):
+ if __v is None: continue # noqa: E701
+ if k == "Checked" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "Stored" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "ColMajor" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "InputFunctionAsMulti" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "nextStep" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "PerformanceProfile" and __v: continue # noqa: E241,E271,E272,E701
+ if k == "noDetails": continue # noqa: E241,E271,E272,E701
+ if isinstance(__v, Persistence.Persistence):
+ __v = __v.values()
+ if callable(__v):
+ __text = self._missing%__v.__name__ + __text
+ if isinstance(__v, dict):
for val in __v.values():
- if callable(val): __text = self._missing%val.__name__+__text
- numpy.set_printoptions(precision=15,threshold=1000000,linewidth=1000*15)
- __text += "%s=%s, "%(k,repr(__v))
- numpy.set_printoptions(precision=8,threshold=1000,linewidth=75)
+ if callable(val):
+ __text = self._missing%val.__name__ + __text
+ numpy.set_printoptions(precision=15, threshold=1000000, linewidth=1000 * 15)
+ __text += "%s=%s, "%(k, repr(__v))
+ numpy.set_printoptions(precision=8, threshold=1000, linewidth=75)
__text = __text.rstrip(", ")
__text += " )"
self._addLine(__text)
- #
+
def _extract(self, __multilines="", __object=None):
"Transformation d'enregistrement(s) en commande(s) individuelle(s)"
__is_case = False
__commands = []
- __multilines = __multilines.replace("\r\n","\n")
+ __multilines = __multilines.replace("\r\n", "\n")
for line in __multilines.split("\n"):
if "adaoBuilder.New" in line and "=" in line:
self._objname = line.split("=")[0].strip()
if not __is_case:
continue
else:
- if self._objname+".set" in line:
- __commands.append( line.replace(self._objname+".","",1) )
+ if self._objname + ".set" in line:
+ __commands.append( line.replace(self._objname + ".", "", 1) )
logging.debug("TUI Extracted command: %s"%(__commands[-1],))
return __commands
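
Concretely, the extraction scans a TUI script for the object created by
``adaoBuilder.New`` and keeps only its ``set`` calls. A hedged sketch with a
hypothetical input::

    tui_text = """
    from adao import adaoBuilder
    case = adaoBuilder.New()
    case.set( Concept='AlgorithmParameters', Algorithm='3DVAR' )
    case.set( Concept='Background', Vector=[0., 1., 2.] )
    case.execute()
    """
    objname, commands = None, []
    for line in tui_text.replace("\r\n", "\n").split("\n"):
        if "adaoBuilder.New" in line and "=" in line:
            objname = line.split("=")[0].strip()
        elif objname and line.strip().startswith(objname + ".set"):
            commands.append(line.strip().replace(objname + ".", "", 1))
    print(commands)
    # ["set( Concept='AlgorithmParameters', Algorithm='3DVAR' )",
    #  "set( Concept='Background', Vector=[0., 1., 2.] )"]
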
    Builds the commands of a COMM case (Eficas Native Format/Case<-COM)
"""
__slots__ = ("_observerIndex", "_objdata")
- #
+
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entete"
GenericCaseViewer.__init__(self, __name, __objname, __content, __object)
if self._content is not None:
for command in self._content:
self._append(*command)
- #
+
def _extract(self, __multilines=None, __object=None):
"Transformation d'enregistrement(s) en commande(s) individuelle(s)"
__suppparameters = {}
if 'adaoBuilder' in __multilines:
raise ValueError("Impossible to load given content as an ADAO COMM one (Hint: it's perhaps not a COMM input, but a TUI one).")
if "ASSIMILATION_STUDY" in __multilines:
- __suppparameters.update({'StudyType':"ASSIMILATION_STUDY"})
- __multilines = __multilines.replace("ASSIMILATION_STUDY","dict")
+ __suppparameters.update({'StudyType': "ASSIMILATION_STUDY"})
+ __multilines = __multilines.replace("ASSIMILATION_STUDY", "dict")
elif "OPTIMIZATION_STUDY" in __multilines:
- __suppparameters.update({'StudyType':"ASSIMILATION_STUDY"})
- __multilines = __multilines.replace("OPTIMIZATION_STUDY", "dict")
+ __suppparameters.update({'StudyType': "ASSIMILATION_STUDY"})
+ __multilines = __multilines.replace("OPTIMIZATION_STUDY", "dict")
elif "REDUCTION_STUDY" in __multilines:
- __suppparameters.update({'StudyType':"ASSIMILATION_STUDY"})
- __multilines = __multilines.replace("REDUCTION_STUDY", "dict")
+ __suppparameters.update({'StudyType': "ASSIMILATION_STUDY"})
+ __multilines = __multilines.replace("REDUCTION_STUDY", "dict")
elif "CHECKING_STUDY" in __multilines:
- __suppparameters.update({'StudyType':"CHECKING_STUDY"})
- __multilines = __multilines.replace("CHECKING_STUDY", "dict")
+ __suppparameters.update({'StudyType': "CHECKING_STUDY"})
+ __multilines = __multilines.replace("CHECKING_STUDY", "dict")
else:
- __multilines = __multilines.replace("ASSIMILATION_STUDY","dict")
+ __multilines = __multilines.replace("ASSIMILATION_STUDY", "dict")
#
- __multilines = __multilines.replace("_F(", "dict(")
- __multilines = __multilines.replace(",),);", ",),)")
+ __multilines = __multilines.replace("_F(", "dict(")
+ __multilines = __multilines.replace(",),);", ",),)")
__fulllines = ""
for line in __multilines.split("\n"):
- if len(line) < 1: continue
+ if len(line) < 1:
+ continue
__fulllines += line + "\n"
__multilines = __fulllines
self._objname = "case"
self._objdata = None
- exec("self._objdata = "+__multilines)
+ exec("self._objdata = " + __multilines)
#
- if self._objdata is None or not(type(self._objdata) is dict) or not('AlgorithmParameters' in self._objdata):
+ if self._objdata is None or not (type(self._objdata) is dict) or not ('AlgorithmParameters' in self._objdata):
raise ValueError("Impossible to load given content as an ADAO COMM one (no dictionnary or no 'AlgorithmParameters' key found).")
# ----------------------------------------------------------------------
logging.debug("COMM Extracting commands of '%s' object..."%(self._objname,))
__commands = []
__UserPostAnalysis = ""
- for k,r in self._objdata.items():
+ for k, r in self._objdata.items():
__command = k
logging.debug("COMM Extracted command: %s:%s"%(k, r))
- if __command == "StudyName" and len(str(r))>0:
+ if __command == "StudyName" and len(str(r)) > 0:
__commands.append( "set( Concept='Name', String='%s')"%(str(r),) )
- elif __command == "StudyRepertory":
+ elif __command == "StudyRepertory":
__commands.append( "set( Concept='Directory', String='%s')"%(str(r),) )
- elif __command == "Debug" and str(r) == "0":
+ elif __command == "Debug" and str(r) == "0":
__commands.append( "set( Concept='NoDebug' )" )
- elif __command == "Debug" and str(r) == "1":
+ elif __command == "Debug" and str(r) == "1":
__commands.append( "set( Concept='Debug' )" )
- elif __command == "ExecuteInContainer":
- __suppparameters.update({'ExecuteInContainer':r})
+ elif __command == "ExecuteInContainer":
+ __suppparameters.update({'ExecuteInContainer': r})
#
elif __command == "UserPostAnalysis" and type(r) is dict:
if 'STRING' in r:
- __UserPostAnalysis = r['STRING'].replace("ADD",str(self._objname))
+ __UserPostAnalysis = r['STRING'].replace("ADD", str(self._objname))
__commands.append( "set( Concept='UserPostAnalysis', String=\"\"\"%s\"\"\" )"%(__UserPostAnalysis,) )
elif 'SCRIPT_FILE' in r and os.path.exists(r['SCRIPT_FILE']):
- __UserPostAnalysis = open(r['SCRIPT_FILE'],'r').read()
+ __UserPostAnalysis = open(r['SCRIPT_FILE'], 'r').read()
__commands.append( "set( Concept='UserPostAnalysis', Script='%s' )"%(r['SCRIPT_FILE'],) )
elif 'Template' in r and 'ValueTemplate' not in r:
# AnalysisPrinter...
__from = r['data']
if 'STRING' in __from:
__parameters = ", Parameters=%s"%(repr(eval(__from['STRING'])),)
- elif 'SCRIPT_FILE' in __from: # Pas de test d'existence du fichier pour accepter un fichier relatif
+            elif 'SCRIPT_FILE' in __from:  # No file-existence test, so that a relative file is accepted
__parameters = ", Script='%s'"%(__from['SCRIPT_FILE'],)
- else: # if 'Parameters' in r and r['Parameters'] == 'Defaults':
+ else: # if 'Parameters' in r and r['Parameters'] == 'Defaults':
__Dict = copy.deepcopy(r)
- __Dict.pop('Algorithm','')
- __Dict.pop('Parameters','')
- if 'SetSeed' in __Dict:__Dict['SetSeed'] = int(__Dict['SetSeed'])
+ __Dict.pop('Algorithm', '')
+ __Dict.pop('Parameters', '')
+ if 'SetSeed' in __Dict:
+ __Dict['SetSeed'] = int(__Dict['SetSeed'])
if 'Bounds' in __Dict and type(__Dict['Bounds']) is str:
__Dict['Bounds'] = eval(__Dict['Bounds'])
if 'BoxBounds' in __Dict and type(__Dict['BoxBounds']) is str:
__parameters = ', Parameters=%s'%(repr(__Dict),)
else:
__parameters = ""
- __commands.append( "set( Concept='AlgorithmParameters', Algorithm='%s'%s )"%(r['Algorithm'],__parameters) )
+ __commands.append( "set( Concept='AlgorithmParameters', Algorithm='%s'%s )"%(r['Algorithm'], __parameters) )
#
elif __command == "Observers" and type(r) is dict and 'SELECTION' in r:
if type(r['SELECTION']) is str:
__info = ", Info=\"\"\"%s\"\"\""%(__idata['Info'],)
else:
__info = ""
- __commands.append( "set( Concept='Observer', Variable='%s', Template=\"\"\"%s\"\"\"%s )"%(sk,__template,__info) )
+ __commands.append( "set( Concept='Observer', Variable='%s', Template=\"\"\"%s\"\"\"%s )"%(sk, __template, __info) )
if __idata['NodeType'] == 'String' and 'Value' in __idata:
- __value =__idata['Value']
- __commands.append( "set( Concept='Observer', Variable='%s', String=\"\"\"%s\"\"\" )"%(sk,__value) )
+ __value = __idata['Value']
+ __commands.append( "set( Concept='Observer', Variable='%s', String=\"\"\"%s\"\"\" )"%(sk, __value) )
#
# Background, ObservationError, ObservationOperator...
elif type(r) is dict:
__argumentsList = []
if 'Stored' in r and bool(r['Stored']):
- __argumentsList.append(['Stored',True])
+ __argumentsList.append(['Stored', True])
if 'INPUT_TYPE' in r and 'data' in r:
# Vector, Matrix, ScalarSparseMatrix, DiagonalSparseMatrix, Function
__itype = r['INPUT_TYPE']
if 'FROM' in __idata:
# String, Script, DataFile, Template, ScriptWithOneFunction, ScriptWithFunctions
__ifrom = __idata['FROM']
- __idata.pop('FROM','')
+ __idata.pop('FROM', '')
if __ifrom == 'String' or __ifrom == 'Template':
- __argumentsList.append([__itype,__idata['STRING']])
+ __argumentsList.append([__itype, __idata['STRING']])
if __ifrom == 'Script':
- __argumentsList.append([__itype,True])
- __argumentsList.append(['Script',__idata['SCRIPT_FILE']])
+ __argumentsList.append([__itype, True])
+ __argumentsList.append(['Script', __idata['SCRIPT_FILE']])
if __ifrom == 'DataFile':
- __argumentsList.append([__itype,True])
- __argumentsList.append(['DataFile',__idata['DATA_FILE']])
+ __argumentsList.append([__itype, True])
+ __argumentsList.append(['DataFile', __idata['DATA_FILE']])
if __ifrom == 'ScriptWithOneFunction':
- __argumentsList.append(['OneFunction',True])
- __argumentsList.append(['Script',__idata.pop('SCRIPTWITHONEFUNCTION_FILE')])
- if len(__idata)>0:
- __argumentsList.append(['Parameters',__idata])
+ __argumentsList.append(['OneFunction', True])
+ __argumentsList.append(['Script', __idata.pop('SCRIPTWITHONEFUNCTION_FILE')])
+ if len(__idata) > 0:
+ __argumentsList.append(['Parameters', __idata])
if __ifrom == 'ScriptWithFunctions':
- __argumentsList.append(['ThreeFunctions',True])
- __argumentsList.append(['Script',__idata.pop('SCRIPTWITHFUNCTIONS_FILE')])
- if len(__idata)>0:
- __argumentsList.append(['Parameters',__idata])
- __arguments = ["%s = %s"%(k,repr(v)) for k,v in __argumentsList]
+ __argumentsList.append(['ThreeFunctions', True])
+ __argumentsList.append(['Script', __idata.pop('SCRIPTWITHFUNCTIONS_FILE')])
+ if len(__idata) > 0:
+ __argumentsList.append(['Parameters', __idata])
+ __arguments = ["%s = %s"%(k, repr(v)) for k, v in __argumentsList]
__commands.append( "set( Concept='%s', %s )"%(__command, ", ".join(__arguments)))
#
__commands.append( "set( Concept='%s', Parameters=%s )"%('SupplementaryParameters', repr(__suppparameters)))
#
# ----------------------------------------------------------------------
- __commands.sort() # Pour commencer par 'AlgorithmParameters'
+        __commands.sort()  # So as to begin with 'AlgorithmParameters'
__commands.append(__UserPostAnalysis)
return __commands
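
The COMM loader thus works by rewriting the Eficas syntax into plain Python
and evaluating it: the study keyword and the ``_F(...)`` factories both become
``dict(...)``. A toy sketch of that rewriting step::

    comm_text = "ASSIMILATION_STUDY(StudyName='case', AlgorithmParameters=_F(Algorithm='3DVAR',),)"
    py_text = comm_text.replace("ASSIMILATION_STUDY", "dict").replace("_F(", "dict(")
    objdata = eval(py_text)
    print(objdata["AlgorithmParameters"]["Algorithm"])   # 3DVAR
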
__slots__ = (
"__DebugCommandNotSet", "__ObserverCommandNotSet",
"__UserPostAnalysisNotSet", "__hasAlgorithm")
- #
+
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entête"
GenericCaseViewer.__init__(self, __name, __objname, __content, __object)
#
if __content is not None:
for command in __content:
- if command[0] == "set": __command = command[2]["Concept"]
- else: __command = command[0].replace("set", "", 1)
+ if command[0] == "set":
+ __command = command[2]["Concept"]
+ else:
+ __command = command[0].replace("set", "", 1)
if __command == 'Name':
self._name = command[2]["String"]
#
if __content is not None:
for command in __content:
self._append(*command)
- #
+
def _append(self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False):
"Transformation d'une commande individuelle en un enregistrement"
- if __command == "set": __command = __local["Concept"]
- else: __command = __command.replace("set", "", 1)
+ if __command == "set":
+ __command = __local["Concept"]
+ else:
+ __command = __command.replace("set", "", 1)
logging.debug("SCD Order processed: %s"%(__command))
#
__text = None
__text += "Analysis_config['Data'] = \"\"\"%s\"\"\"\n"%(Templates.UserPostAnalysisTemplates[__local['Template']],)
__text += "study_config['UserPostAnalysis'] = Analysis_config"
self.__UserPostAnalysisNotSet = False
- elif __local is not None: # __keys is not None and
- numpy.set_printoptions(precision=15,threshold=1000000,linewidth=1000*15)
+ elif __local is not None: # __keys is not None and
+ numpy.set_printoptions(precision=15, threshold=1000000, linewidth=1000 * 15)
__text = "#\n"
__text += "%s_config = {}\n"%__command
- __local.pop('self','')
+ __local.pop('self', '')
__to_be_removed = []
__vectorIsDataFile = False
__vectorIsScript = False
- for __k,__v in __local.items():
- if __v is None: __to_be_removed.append(__k)
+ for __k, __v in __local.items():
+ if __v is None:
+ __to_be_removed.append(__k)
for __k in __to_be_removed:
__local.pop(__k)
- for __k,__v in __local.items():
- if __k == "Concept": continue
- if __k in ['ScalarSparseMatrix','DiagonalSparseMatrix','Matrix','OneFunction','ThreeFunctions'] \
- and 'Script' in __local and __local['Script'] is not None: continue
- if __k in ['Vector','VectorSerie'] \
- and 'DataFile' in __local and __local['DataFile'] is not None: continue
- if __k == 'Parameters' and not (__command in ['AlgorithmParameters','SupplementaryParameters']): continue
+ for __k, __v in __local.items():
+ if __k == "Concept":
+ continue
+ if __k in ['ScalarSparseMatrix', 'DiagonalSparseMatrix', 'Matrix', 'OneFunction', 'ThreeFunctions'] \
+ and 'Script' in __local \
+ and __local['Script'] is not None:
+ continue
+ if __k in ['Vector', 'VectorSerie'] \
+ and 'DataFile' in __local \
+ and __local['DataFile'] is not None:
+ continue
+ if __k == 'Parameters' and not (__command in ['AlgorithmParameters', 'SupplementaryParameters']):
+ continue
if __k == 'Algorithm':
__text += "study_config['Algorithm'] = %s\n"%(repr(__v))
elif __k == 'DataFile':
__k = 'Vector'
__f = 'DataFile'
- __v = "'"+repr(__v)+"'"
- for __lk in ['Vector','VectorSerie']:
- if __lk in __local and __local[__lk]: __k = __lk
- __text += "%s_config['Type'] = '%s'\n"%(__command,__k)
- __text += "%s_config['From'] = '%s'\n"%(__command,__f)
- __text += "%s_config['Data'] = %s\n"%(__command,__v)
- __text = __text.replace("''","'")
+ __v = "'" + repr(__v) + "'"
+ for __lk in ['Vector', 'VectorSerie']:
+ if __lk in __local and __local[__lk]:
+ __k = __lk
+ __text += "%s_config['Type'] = '%s'\n"%(__command, __k)
+ __text += "%s_config['From'] = '%s'\n"%(__command, __f)
+ __text += "%s_config['Data'] = %s\n"%(__command, __v)
+ __text = __text.replace("''", "'")
__vectorIsDataFile = True
elif __k == 'Script':
__k = 'Vector'
__f = 'Script'
- __v = "'"+repr(__v)+"'"
- for __lk in ['ScalarSparseMatrix','DiagonalSparseMatrix','Matrix']:
- if __lk in __local and __local[__lk]: __k = __lk
- if __command == "AlgorithmParameters": __k = "Dict"
+ __v = "'" + repr(__v) + "'"
+ for __lk in ['ScalarSparseMatrix', 'DiagonalSparseMatrix', 'Matrix']:
+ if __lk in __local and __local[__lk]:
+ __k = __lk
+ if __command == "AlgorithmParameters":
+ __k = "Dict"
if 'OneFunction' in __local and __local['OneFunction']:
__text += "%s_ScriptWithOneFunction = {}\n"%(__command,)
__text += "%s_ScriptWithOneFunction['Function'] = ['Direct', 'Tangent', 'Adjoint']\n"%(__command,)
__text += "%s_ScriptWithOneFunction['Script'] = {}\n"%(__command,)
- __text += "%s_ScriptWithOneFunction['Script']['Direct'] = %s\n"%(__command,__v)
- __text += "%s_ScriptWithOneFunction['Script']['Tangent'] = %s\n"%(__command,__v)
- __text += "%s_ScriptWithOneFunction['Script']['Adjoint'] = %s\n"%(__command,__v)
+ __text += "%s_ScriptWithOneFunction['Script']['Direct'] = %s\n"%(__command, __v)
+ __text += "%s_ScriptWithOneFunction['Script']['Tangent'] = %s\n"%(__command, __v)
+ __text += "%s_ScriptWithOneFunction['Script']['Adjoint'] = %s\n"%(__command, __v)
__text += "%s_ScriptWithOneFunction['DifferentialIncrement'] = 1e-06\n"%(__command,)
__text += "%s_ScriptWithOneFunction['CenteredFiniteDifference'] = 0\n"%(__command,)
__k = 'Function'
__text += "%s_ScriptWithFunctions = {}\n"%(__command,)
__text += "%s_ScriptWithFunctions['Function'] = ['Direct', 'Tangent', 'Adjoint']\n"%(__command,)
__text += "%s_ScriptWithFunctions['Script'] = {}\n"%(__command,)
- __text += "%s_ScriptWithFunctions['Script']['Direct'] = %s\n"%(__command,__v)
- __text += "%s_ScriptWithFunctions['Script']['Tangent'] = %s\n"%(__command,__v)
- __text += "%s_ScriptWithFunctions['Script']['Adjoint'] = %s\n"%(__command,__v)
+ __text += "%s_ScriptWithFunctions['Script']['Direct'] = %s\n"%(__command, __v)
+ __text += "%s_ScriptWithFunctions['Script']['Tangent'] = %s\n"%(__command, __v)
+ __text += "%s_ScriptWithFunctions['Script']['Adjoint'] = %s\n"%(__command, __v)
__k = 'Function'
__f = 'ScriptWithFunctions'
__v = '%s_ScriptWithFunctions'%(__command,)
- __text += "%s_config['Type'] = '%s'\n"%(__command,__k)
- __text += "%s_config['From'] = '%s'\n"%(__command,__f)
- __text += "%s_config['Data'] = %s\n"%(__command,__v)
- __text = __text.replace("''","'")
+ __text += "%s_config['Type'] = '%s'\n"%(__command, __k)
+ __text += "%s_config['From'] = '%s'\n"%(__command, __f)
+ __text += "%s_config['Data'] = %s\n"%(__command, __v)
+ __text = __text.replace("''", "'")
__vectorIsScript = True
elif __k in ('Stored', 'Checked', 'ColMajor', 'InputFunctionAsMulti', 'nextStep'):
if bool(__v):
- __text += "%s_config['%s'] = '%s'\n"%(__command,__k,int(bool(__v)))
+ __text += "%s_config['%s'] = '%s'\n"%(__command, __k, int(bool(__v)))
elif __k in ('PerformanceProfile', 'noDetails'):
if not bool(__v):
- __text += "%s_config['%s'] = '%s'\n"%(__command,__k,int(bool(__v)))
+ __text += "%s_config['%s'] = '%s'\n"%(__command, __k, int(bool(__v)))
else:
- if __k == 'Vector' and __vectorIsScript: continue
- if __k == 'Vector' and __vectorIsDataFile: continue
- if __k == 'Parameters': __k = "Dict"
- if isinstance(__v,Persistence.Persistence): __v = __v.values()
- if callable(__v): __text = self._missing%__v.__name__+__text
- if isinstance(__v,dict):
+ if __k == 'Vector' and __vectorIsScript:
+ continue
+ if __k == 'Vector' and __vectorIsDataFile:
+ continue
+ if __k == 'Parameters':
+ __k = "Dict"
+ if isinstance(__v, Persistence.Persistence):
+ __v = __v.values()
+ if callable(__v):
+ __text = self._missing%__v.__name__ + __text
+ if isinstance(__v, dict):
for val in __v.values():
- if callable(val): __text = self._missing%val.__name__+__text
- __text += "%s_config['Type'] = '%s'\n"%(__command,__k)
- __text += "%s_config['From'] = '%s'\n"%(__command,'String')
- __text += "%s_config['Data'] = \"\"\"%s\"\"\"\n"%(__command,repr(__v))
- __text += "study_config['%s'] = %s_config"%(__command,__command)
- numpy.set_printoptions(precision=8,threshold=1000,linewidth=75)
+ if callable(val):
+ __text = self._missing%val.__name__ + __text
+ __text += "%s_config['Type'] = '%s'\n"%(__command, __k)
+ __text += "%s_config['From'] = '%s'\n"%(__command, 'String')
+ __text += "%s_config['Data'] = \"\"\"%s\"\"\"\n"%(__command, repr(__v))
+ __text += "study_config['%s'] = %s_config"%(__command, __command)
+ numpy.set_printoptions(precision=8, threshold=1000, linewidth=75)
if __switchoff:
self._switchoff = True
- if __text is not None: self._addLine(__text)
+ if __text is not None:
+ self._addLine(__text)
if not __switchoff:
self._switchoff = False
- #
+
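For orientation, the `_append` serialization above writes one `<Command>_config` dictionary per command into the generated script. A minimal sketch of the kind of text it emits, assuming a hypothetical `Background` command given as a string (command name and data are illustrative only, not taken from a real case):

    study_config = {}  # created earlier by the generated script header
    Background_config = {}
    Background_config['Type'] = 'Vector'
    Background_config['From'] = 'String'
    Background_config['Data'] = """[0, 1, 2]"""
    study_config['Background'] = Background_config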
def _finalize(self, *__args):
self.__loadVariablesByScript()
if self.__DebugCommandNotSet:
self._addLine("xa=ADD.get('Analysis')[-1]")
self._addLine("print('Analysis:',xa)\"\"\"")
self._addLine("study_config['UserPostAnalysis'] = Analysis_config")
- #
+
def __loadVariablesByScript(self):
- __ExecVariables = {} # Needed to retrieve the variable
+ __ExecVariables = {}  # Needed to retrieve the variable
exec("\n".join(self._lineSerie), __ExecVariables)
study_config = __ExecVariables['study_config']
# Pour Python 3 : self.__hasAlgorithm = bool(study_config['Algorithm'])
Building the commands of a YACS case (Case->SCD->YACS)
"""
__slots__ = ("__internalSCD", "_append")
- #
+
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entete"
GenericCaseViewer.__init__(self, __name, __objname, __content, __object)
self.__internalSCD = _SCDViewer(__name, __objname, __content, __object)
self._append = self.__internalSCD._append
- #
+
def dump(self, __filename=None, __upa=None):
"Restitution normalisée des commandes"
# -----
os.remove(__file)
# -----
if not PlatformInfo.has_salome or \
- not PlatformInfo.has_adao:
+ not PlatformInfo.has_adao:
raise ImportError(
- "Unable to get SALOME (%s) or ADAO (%s) environnement for YACS conversion.\n"%(PlatformInfo.has_salome,PlatformInfo.has_adao)+\
+ "Unable to get SALOME (%s) or ADAO (%s) environnement for YACS conversion.\n"%(PlatformInfo.has_salome, PlatformInfo.has_adao) + \
"Please load the right SALOME environnement before trying to use it.")
else:
from daYacsSchemaCreator.run import create_schema_from_content
__msg += "See errors details in your launching terminal log.\n"
raise ValueError(__msg)
# -----
- __fid = open(__file,"r")
+ __fid = open(__file, "r")
__text = __fid.read()
__fid.close()
return __text
Common part for simple restitution
"""
__slots__ = ("_r")
- #
+
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entete"
GenericCaseViewer.__init__(self, __name, __objname, __content, __object)
if self._content is not None:
for command in self._content:
self._append(*command)
- #
+
def _append(self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False):
"Transformation d'une commande individuelle en un enregistrement"
if __command is not None and __keys is not None and __local is not None:
- if __command in ("set","get") and "Concept" in __keys: __command = __local["Concept"]
- __text = "<i>%s</i> command has been set"%str(__command.replace("set",""))
+ if __command in ("set", "get") and "Concept" in __keys:
+ __command = __local["Concept"]
+ __text = "<i>%s</i> command has been set"%str(__command.replace("set", ""))
__ktext = ""
for k in __keys:
- if k not in __local: continue
+ if k not in __local: continue # noqa: E701
__v = __local[k]
- if __v is None: continue
- if k == "Checked" and not __v: continue
- if k == "Stored" and not __v: continue
- if k == "ColMajor" and not __v: continue
- if k == "InputFunctionAsMulti" and not __v: continue
- if k == "nextStep" and not __v: continue
- if k == "PerformanceProfile" and __v: continue
- if k == "noDetails": continue
- if k == "Concept": continue
- if k == "self": continue
- if isinstance(__v,Persistence.Persistence): __v = __v.values()
- numpy.set_printoptions(precision=15,threshold=1000000,linewidth=1000*15)
- __ktext += "\n %s = %s,"%(k,repr(__v))
- numpy.set_printoptions(precision=8,threshold=1000,linewidth=75)
+ if __v is None: continue # noqa: E701
+ if k == "Checked" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "Stored" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "ColMajor" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "InputFunctionAsMulti" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "nextStep" and not __v: continue # noqa: E241,E271,E272,E701
+ if k == "PerformanceProfile" and __v: continue # noqa: E241,E271,E272,E701
+ if k == "noDetails": continue # noqa: E241,E271,E272,E701
+ if k == "Concept": continue # noqa: E241,E271,E272,E701
+ if k == "self": continue # noqa: E241,E271,E272,E701
+ if isinstance(__v, Persistence.Persistence):
+ __v = __v.values()
+ numpy.set_printoptions(precision=15, threshold=1000000, linewidth=1000 * 15)
+ __ktext += "\n %s = %s,"%(k, repr(__v))
+ numpy.set_printoptions(precision=8, threshold=1000, linewidth=75)
if len(__ktext) > 0:
__text += " with values:" + __ktext
__text = __text.rstrip(", ")
self._r.append(__text, "uli")
- #
+
def _finalize(self, __upa=None):
"Enregistrement du final"
raise NotImplementedError()
Simple restitution in RST
"""
__slots__ = ()
- #
+
def _finalize(self, __upa=None):
self._lineSerie.append(Reporting.ReportViewInRst(self._r).__str__())
Simple restitution in HTML
"""
__slots__ = ()
- #
+
def _finalize(self, __upa=None):
self._lineSerie.append(Reporting.ReportViewInHtml(self._r).__str__())
Simple restitution in TXT
"""
__slots__ = ()
- #
+
def _finalize(self, __upa=None):
self._lineSerie.append(Reporting.ReportViewInPlainTxt(self._r).__str__())
Retrieval of a named variable from an imported script file
"""
__slots__ = ("__basename", "__filenspace", "__filestring")
- #
+
def __init__(self, __filename=None):
"Verifie l'existence et importe le script"
if __filename is None:
__filename = __fullname
else:
raise ValueError(
- "The file containing the variable to be imported doesn't seem to"+\
+ "The file containing the variable to be imported doesn't seem to" + \
" exist. Please check the file. The given file name is:\n \"%s\""%str(__filename))
if os.path.dirname(__filename) != '':
sys.path.insert(0, os.path.dirname(__filename))
__basename = os.path.basename(__filename).rstrip(".py")
else:
__basename = __filename.rstrip(".py")
- PlatformInfo.checkFileNameImportability( __basename+".py" )
+ PlatformInfo.checkFileNameImportability( __basename + ".py" )
self.__basename = __basename
try:
self.__filenspace = __import__(__basename, globals(), locals(), [])
except NameError:
self.__filenspace = ""
- with open(__filename,'r') as fid:
+ with open(__filename, 'r') as fid:
self.__filestring = fid.read()
- #
+
def getvalue(self, __varname=None, __synonym=None ):
"Renvoie la variable demandee par son nom ou son synonyme"
if __varname is None:
if not hasattr(self.__filenspace, __varname):
if __synonym is None:
raise ValueError(
- "The imported script file \"%s\""%(str(self.__basename)+".py",)+\
- " doesn't contain the mandatory variable \"%s\""%(__varname,)+\
+ "The imported script file \"%s\""%(str(self.__basename) + ".py",) + \
+ " doesn't contain the mandatory variable \"%s\""%(__varname,) + \
" to be read. Please check the content of the file and the syntax.")
elif not hasattr(self.__filenspace, __synonym):
raise ValueError(
- "The imported script file \"%s\""%(str(self.__basename)+".py",)+\
- " doesn't contain the mandatory variable \"%s\""%(__synonym,)+\
+ "The imported script file \"%s\""%(str(self.__basename) + ".py",) + \
+ " doesn't contain the mandatory variable \"%s\""%(__synonym,) + \
" to be read. Please check the content of the file and the syntax.")
else:
return getattr(self.__filenspace, __synonym)
else:
return getattr(self.__filenspace, __varname)
- #
+
def getstring(self):
"Renvoie le script complet"
return self.__filestring
Detection of the characteristics of input files or objects
"""
__slots__ = ("__url", "__usr", "__root", "__end")
- #
+
def __enter__(self):
return self
+
def __exit__(self, exc_type, exc_val, exc_tb):
return False
- #
+
def __init__(self, __url, UserMime=""):
if __url is None:
raise ValueError("The name or url of the file object has to be specified.")
mimetypes.add_type('text/plain', '.txt')
mimetypes.add_type('text/csv', '.csv')
mimetypes.add_type('text/tab-separated-values', '.tsv')
- #
+
# File related tests
# ------------------
def is_local_file(self):
return True
else:
return False
- #
+
def is_not_local_file(self):
return not self.is_local_file()
- #
+
def raise_error_if_not_local_file(self):
if self.is_not_local_file():
raise ValueError("The name or the url of the file object doesn't seem to exist. The given name is:\n \"%s\""%str(self.__url))
else:
return False
- #
+
# Directory related tests
# -----------------------
def is_local_dir(self):
return True
else:
return False
- #
+
def is_not_local_dir(self):
return not self.is_local_dir()
- #
+
def raise_error_if_not_local_dir(self):
if self.is_not_local_dir():
raise ValueError("The name or the url of the directory object doesn't seem to exist. The given name is:\n \"%s\""%str(self.__url))
else:
return False
- #
+
# Mime related functions
# ------------------------
def get_standard_mime(self):
(__mtype, __encoding) = mimetypes.guess_type(self.__url, strict=False)
return __mtype
- #
+
def get_user_mime(self):
- __fake = "fake."+self.__usr.lower()
+ __fake = "fake." + self.__usr.lower()
(__mtype, __encoding) = mimetypes.guess_type(__fake, strict=False)
return __mtype
- #
+
def get_comprehensive_mime(self):
if self.get_standard_mime() is not None:
return self.get_standard_mime()
return self.get_user_mime()
else:
return None
- #
+
# Name related functions
# ----------------------
def get_user_name(self):
return self.__url
- #
+
def get_absolute_name(self):
return os.path.abspath(os.path.realpath(self.__url))
- #
+
def get_extension(self):
return self.__end
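The mime detection above first trusts the standard guess on the real name, then falls back to a fake name carrying the user-supplied extension. A minimal standalone sketch of that fallback (`comprehensive_mime` is an illustrative helper, not part of the API):

    import mimetypes

    def comprehensive_mime(url, user_mime=""):
        # First try the actual name, then a fake name built from the user extension
        mtype, _ = mimetypes.guess_type(url, strict=False)
        if mtype is None and user_mime:
            mtype, _ = mimetypes.guess_type("fake." + user_mime.lower(), strict=False)
        return mtype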
"_filename", "_colnames", "_colindex", "_varsline", "_format",
"_delimiter", "_skiprows", "__url", "__filestring", "__header",
"__allowvoid", "__binaryformats", "__supportedformats")
- #
+
def __enter__(self):
return self
- #
+
def __exit__(self, exc_type, exc_val, exc_tb):
return False
- #
+
def __init__(self, Filename=None, ColNames=None, ColIndex=None, Format="Guess", AllowVoidNameList=True):
"""
Checks the existence and the definition information of the file. The
- AllowVoidNameList : allows, if the list of names is empty, to
take all the columns by default
"""
- self.__binaryformats =(
+ self.__binaryformats = (
"application/numpy.npy",
"application/numpy.npz",
"application/dymola.sdf",
- )
+ )
self.__url = ImportDetector( Filename, Format)
self.__url.raise_error_if_not_local_file()
self._filename = self.__url.get_absolute_name()
else:
self._delimiter = None
#
- if ColNames is not None: self._colnames = tuple(ColNames)
- else: self._colnames = None
+ if ColNames is not None:
+ self._colnames = tuple(ColNames)
+ else:
+ self._colnames = None
#
- if ColIndex is not None: self._colindex = str(ColIndex)
- else: self._colindex = None
+ if ColIndex is not None:
+ self._colindex = str(ColIndex)
+ else:
+ self._colindex = None
#
self.__allowvoid = bool(AllowVoidNameList)
- #
+
def __getentete(self, __nblines = 3):
"Lit l'entête du fichier pour trouver la définition des variables"
# La première ligne non vide non commentée est toujours considérée
if self._format in self.__binaryformats:
pass
else:
- with open(self._filename,'r') as fid:
+ with open(self._filename, 'r') as fid:
__line = fid.readline().strip()
while "#" in __line or len(__line) < 1:
__header.append(__line)
__skiprows += 1
__line = fid.readline().strip()
- __varsline = __line # Line of labels, by convention
- for i in range(max(0,__nblines)):
+ __varsline = __line  # Line of labels, by convention
+ for i in range(max(0, __nblines)):
__header.append(fid.readline())
return (__header, __varsline, __skiprows)
- #
+
def __getindices(self, __colnames, __colindex, __delimiter=None ):
"Indices de colonnes correspondants à l'index et aux variables"
if __delimiter is None:
__colnames = tuple(__colnames)
for v in __colnames:
for i, n in enumerate(__varserie):
- if v == n: __usecols.append(i)
+ if v == n:
+ __usecols.append(i)
__usecols = tuple(__usecols)
if len(__usecols) == 0:
if self.__allowvoid:
__useindex = None
__colindex = str(__colindex)
for i, n in enumerate(__varserie):
- if __colindex == n: __useindex = i
+ if __colindex == n:
+ __useindex = i
else:
__useindex = None
#
return (__usecols, __useindex)
- #
+
def getsupported(self):
self.__supportedformats = {}
self.__supportedformats["text/plain"] = True
self.__supportedformats["application/numpy.npz"] = True
self.__supportedformats["application/dymola.sdf"] = PlatformInfo.has_sdf
return self.__supportedformats
- #
+
def getvalue(self, ColNames=None, ColIndex=None ):
"Renvoie la ou les variables demandées par la liste de leurs noms"
# Uniquement si mise à jour
- if ColNames is not None: self._colnames = tuple(ColNames)
- if ColIndex is not None: self._colindex = str(ColIndex)
+ if ColNames is not None:
+ self._colnames = tuple(ColNames)
+ if ColIndex is not None:
+ self._colindex = str(ColIndex)
#
__index = None
if self._format == "application/numpy.npy":
with numpy.load(self._filename) as __allcolumns:
if self._colnames is None:
self._colnames = __allcolumns.files
- for nom in self._colnames: # If a requested variable does not exist
+ for nom in self._colnames:  # If a requested variable does not exist
if nom not in __allcolumns.files:
self._colnames = tuple( __allcolumns.files )
for nom in self._colnames:
if nom in __allcolumns.files:
if __columns is not None:
# Caution: all the variables must have the same size
- __columns = numpy.vstack((__columns, numpy.reshape(__allcolumns[nom], (1,-1))))
+ __columns = numpy.vstack((__columns, numpy.reshape(__allcolumns[nom], (1, -1))))
else:
# First column
- __columns = numpy.reshape(__allcolumns[nom], (1,-1))
+ __columns = numpy.reshape(__allcolumns[nom], (1, -1))
if self._colindex is not None and self._colindex in __allcolumns.files:
- __index = numpy.array(numpy.reshape(__allcolumns[self._colindex], (1,-1)), dtype=bytes)
+ __index = numpy.array(numpy.reshape(__allcolumns[self._colindex], (1, -1)), dtype=bytes)
elif self._format == "text/plain":
__usecols, __useindex = self.__getindices(self._colnames, self._colindex)
__columns = numpy.loadtxt(self._filename, usecols = __usecols, skiprows=self._skiprows)
if __useindex is not None:
__index = numpy.loadtxt(self._filename, dtype = bytes, usecols = (__useindex,), skiprows=self._skiprows)
- if __usecols is None: # If a requested variable does not exist
+ if __usecols is None:  # If a requested variable does not exist
self._colnames = None
#
elif self._format == "application/dymola.sdf" and PlatformInfo.has_sdf:
if nom in __content:
if __columns is not None:
# Caution: all the variables must have the same size
- __columns = numpy.vstack((__columns, numpy.reshape(__content[nom].data, (1,-1))))
+ __columns = numpy.vstack((__columns, numpy.reshape(__content[nom].data, (1, -1))))
else:
# First column
- __columns = numpy.reshape(__content[nom].data, (1,-1))
+ __columns = numpy.reshape(__content[nom].data, (1, -1))
if self._colindex is not None and self._colindex in __content:
__index = __content[self._colindex].data
#
__columns = numpy.loadtxt(self._filename, usecols = __usecols, delimiter = self._delimiter, skiprows=self._skiprows)
if __useindex is not None:
__index = numpy.loadtxt(self._filename, dtype = bytes, usecols = (__useindex,), delimiter = self._delimiter, skiprows=self._skiprows)
- if __usecols is None: # If a requested variable does not exist
+ if __usecols is None:  # If a requested variable does not exist
self._colnames = None
#
elif self._format == "text/tab-separated-values":
__columns = numpy.loadtxt(self._filename, usecols = __usecols, delimiter = self._delimiter, skiprows=self._skiprows)
if __useindex is not None:
__index = numpy.loadtxt(self._filename, dtype = bytes, usecols = (__useindex,), delimiter = self._delimiter, skiprows=self._skiprows)
- if __usecols is None: # If a requested variable does not exist
+ if __usecols is None:  # If a requested variable does not exist
self._colnames = None
else:
raise ValueError("Unkown file format \"%s\" or no reader available"%self._format)
- if __columns is None: __columns = ()
+ if __columns is None:
+ __columns = ()
def toString(value):
try:
__index = tuple([toString(v) for v in __index])
#
return (self._colnames, __columns, self._colindex, __index)
- #
+
def getstring(self):
"Renvoie le fichier texte complet"
if self._format in self.__binaryformats:
return ""
else:
- with open(self._filename,'r') as fid:
+ with open(self._filename, 'r') as fid:
return fid.read()
- #
+
def getformat(self):
return self._format
Only the "getvalue" method is changed.
"""
__slots__ = ()
- #
+
def __enter__(self):
return self
- #
+
def __exit__(self, exc_type, exc_val, exc_tb):
return False
- #
+
def __init__(self, Filename=None, ColNames=None, ColIndex=None, Format="Guess"):
ImportFromFile.__init__(self, Filename, ColNames, ColIndex, Format)
if self._format not in ["text/plain", "text/csv", "text/tab-separated-values"]:
raise ValueError("Unkown file format \"%s\""%self._format)
- #
+
def getvalue(self, VarNames = None, HeaderNames=()):
"Renvoie la ou les variables demandées par la liste de leurs noms"
- if VarNames is not None: __varnames = tuple(VarNames)
- else: __varnames = None
+ if VarNames is not None:
+ __varnames = tuple(VarNames)
+ else:
+ __varnames = None
#
if "Name" in self._varsline and "Value" in self._varsline and "Minimum" in self._varsline and "Maximum" in self._varsline:
__ftype = "NamValMinMax"
- __dtypes = {'names' : ('Name', 'Value', 'Minimum', 'Maximum'),
+ __dtypes = {'names' : ('Name', 'Value', 'Minimum', 'Maximum'), # noqa: E203
'formats': ('S128', 'g', 'g', 'g')}
__usecols = (0, 1, 2, 3)
def __replaceNoneN( s ):
- if s.strip() == b'None': return numpy.NINF
- else: return s
+ if s.strip() == b'None':
+ return numpy.NINF
+ else:
+ return s
def __replaceNoneP( s ):
- if s.strip() == b'None': return numpy.PINF
- else: return s
+ if s.strip() == b'None':
+ return numpy.PINF
+ else:
+ return s
__converters = {2: __replaceNoneN, 3: __replaceNoneP}
elif "Name" in self._varsline and "Value" in self._varsline and ("Minimum" not in self._varsline or "Maximum" not in self._varsline):
__ftype = "NamVal"
- __dtypes = {'names' : ('Name', 'Value'),
+ __dtypes = {'names' : ('Name', 'Value'), # noqa: E203
'formats': ('S128', 'g')}
__converters = None
__usecols = (0, 1)
- elif len(HeaderNames)>0 and numpy.all([kw in self._varsline for kw in HeaderNames]):
+ elif len(HeaderNames) > 0 and numpy.all([kw in self._varsline for kw in HeaderNames]):
__ftype = "NamLotOfVals"
- __dtypes = {'names' : HeaderNames,
- 'formats': tuple(['S128',]+['g']*(len(HeaderNames)-1))}
+ __dtypes = {'names' : HeaderNames, # noqa: E203
+ 'formats': tuple(['S128',] + ['g'] * (len(HeaderNames) - 1))}
__usecols = tuple(range(len(HeaderNames)))
def __replaceNone( s ):
- if s.strip() == b'None': return numpy.NAN
- else: return s
+ if s.strip() == b'None':
+ return numpy.NAN
+ else:
+ return s
__converters = dict()
- for i in range(1,len(HeaderNames)):
+ for i in range(1, len(HeaderNames)):
__converters[i] = __replaceNone
else:
raise ValueError("Can not find names of columns for initial values. Wrong first line is:\n \"%s\""%self._varsline)
skiprows = self._skiprows,
converters = __converters,
ndmin = 1,
- )
+ )
elif self._format in ["text/csv", "text/tab-separated-values"]:
__content = numpy.loadtxt(
self._filename,
converters = __converters,
delimiter = self._delimiter,
ndmin = 1,
- )
+ )
else:
raise ValueError("Unkown file format \"%s\""%self._format)
#
for sub in __content:
if len(__usecols) == 4:
na, va, mi, ma = sub
- if numpy.isneginf(mi): mi = None # Reassigns the None variables
- elif numpy.isnan(mi): mi = None # Reassigns the None variables
- if numpy.isposinf(ma): ma = None # Reassigns the None variables
- elif numpy.isnan(ma): ma = None # Reassigns the None variables
+ if numpy.isneginf(mi):
+ mi = None  # Reassigns the None variables
+ elif numpy.isnan(mi):
+ mi = None  # Reassigns the None variables
+ if numpy.isposinf(ma):
+ ma = None  # Reassigns the None variables
+ elif numpy.isnan(ma):
+ ma = None  # Reassigns the None variables
elif len(__usecols) == 2 and __ftype == "NamVal":
na, va = sub
mi, ma = None, None
nsub = list(sub)
na = sub[0]
for i, v in enumerate(nsub[1:]):
- if numpy.isnan(v): nsub[i+1] = None
+ if numpy.isnan(v):
+ nsub[i + 1] = None
va = nsub[1:]
mi, ma = None, None
na = na.decode()
# Only stores the first occurrence of a variable
__names.append(na)
__thevalue.append(va)
- __bounds.append((mi,ma))
+ __bounds.append((mi, ma))
#
__names = tuple(__names)
__thevalue = numpy.array(__thevalue)
Standalone launching of the EFICAS/ADAO interface
"""
__slots__ = ("__msg", "__path_settings_ok")
- #
+
def __init__(self, __addpath = None):
# Path for the installation (order matters)
self.__msg = ""
self.__path_settings_ok = False
- #----------------
+ # ----------------
if "EFICAS_TOOLS_ROOT" in os.environ:
__EFICAS_TOOLS_ROOT = os.environ["EFICAS_TOOLS_ROOT"]
__path_ok = True
__EFICAS_TOOLS_ROOT = os.environ["EFICAS_NOUVEAU_ROOT"]
__path_ok = True
else:
- self.__msg += "\nKeyError:\n"+\
- " the required environment variable EFICAS_TOOLS_ROOT is unknown.\n"+\
- " You have either to be in SALOME environment, or to set this\n"+\
- " variable in your environment to the right path \"<...>\" to\n"+\
- " find an installed EFICAS application. For example:\n"+\
+ self.__msg += "\nKeyError:\n" + \
+ " the required environment variable EFICAS_TOOLS_ROOT is unknown.\n" + \
+ " You have either to be in SALOME environment, or to set this\n" + \
+ " variable in your environment to the right path \"<...>\" to\n" + \
+ " find an installed EFICAS application. For example:\n" + \
" EFICAS_TOOLS_ROOT=\"<...>\" command\n"
__path_ok = False
try:
import adao
__path_ok = True and __path_ok
except ImportError:
- self.__msg += "\nImportError:\n"+\
- " the required ADAO library can not be found to be imported.\n"+\
- " You have either to be in ADAO environment, or to be in SALOME\n"+\
- " environment, or to set manually in your Python 3 environment the\n"+\
- " right path \"<...>\" to find an installed ADAO application. For\n"+\
- " example:\n"+\
+ self.__msg += "\nImportError:\n" + \
+ " the required ADAO library can not be found to be imported.\n" + \
+ " You have either to be in ADAO environment, or to be in SALOME\n" + \
+ " environment, or to set manually in your Python 3 environment the\n" + \
+ " right path \"<...>\" to find an installed ADAO application. For\n" + \
+ " example:\n" + \
" PYTHONPATH=\"<...>:${PYTHONPATH}\" command\n"
__path_ok = False
try:
- import PyQt5
+ import PyQt5 # noqa: F401
__path_ok = True and __path_ok
except ImportError:
- self.__msg += "\nImportError:\n"+\
- " the required PyQt5 library can not be found to be imported.\n"+\
- " You have either to have a raisonable up-to-date Python 3\n"+\
+ self.__msg += "\nImportError:\n" + \
+ " the required PyQt5 library can not be found to be imported.\n" + \
+ " You have either to have a raisonable up-to-date Python 3\n" + \
" installation (less than 5 years), or to be in SALOME environment.\n"
__path_ok = False
- #----------------
+ # ----------------
if not __path_ok:
- self.__msg += "\nWarning:\n"+\
- " It seems you have some troubles with your installation.\n"+\
- " Be aware that some other errors may exist, that are not\n"+\
- " explained as above, like some incomplete or obsolete\n"+\
- " Python 3, or incomplete module installation.\n"+\
- " \n"+\
- " Please correct the above error(s) before launching the\n"+\
+ self.__msg += "\nWarning:\n" + \
+ " It seems you have some troubles with your installation.\n" + \
+ " Be aware that some other errors may exist, that are not\n" + \
+ " explained as above, like some incomplete or obsolete\n" + \
+ " Python 3, or incomplete module installation.\n" + \
+ " \n" + \
+ " Please correct the above error(s) before launching the\n" + \
" standalone EFICAS/ADAO interface.\n"
logging.debug("Some of the ADAO/EFICAS/QT5 paths have not been found")
self.__path_settings_ok = False
else:
logging.debug("All the ADAO/EFICAS/QT5 paths have been found")
self.__path_settings_ok = True
- #----------------
+ # ----------------
if self.__path_settings_ok:
- sys.path.insert(0,__EFICAS_TOOLS_ROOT)
- sys.path.insert(0,os.path.join(adao.adao_py_dir,"daEficas"))
+ sys.path.insert(0, __EFICAS_TOOLS_ROOT)
+ sys.path.insert(0, os.path.join(adao.adao_py_dir, "daEficas"))
if __addpath is not None and os.path.exists(os.path.abspath(__addpath)):
- sys.path.insert(0,os.path.abspath(__addpath))
+ sys.path.insert(0, os.path.abspath(__addpath))
logging.debug("All the paths have been correctly set up")
else:
print(self.__msg)
logging.debug("Errors in path settings have been found")
- #
+
def gui(self):
if self.__path_settings_ok:
logging.debug("Launching standalone EFICAS/ADAO interface...")
"""
__author__ = "Jean-Philippe ARGAUD"
-import os, copy, types, sys, logging, math, numpy, itertools
+import os, copy, types, sys, logging, math, numpy, scipy, itertools
from daCore.BasicObjects import Operator, Covariance, PartialAlgorithm
from daCore.PlatformInfo import PlatformInfo, vt, vfloat
mpr = PlatformInfo().MachinePrecision()
def ExecuteFunction( triplet ):
assert len(triplet) == 3, "Incorrect number of arguments"
X, xArgs, funcrepr = triplet
- __X = numpy.ravel( X ).reshape((-1,1))
- __sys_path_tmp = sys.path ; sys.path.insert(0,funcrepr["__userFunction__path"])
+ __X = numpy.ravel( X ).reshape((-1, 1))
+ __sys_path_tmp = sys.path
+ sys.path.insert(0, funcrepr["__userFunction__path"])
__module = __import__(funcrepr["__userFunction__modl"], globals(), locals(), [])
- __fonction = getattr(__module,funcrepr["__userFunction__name"])
- sys.path = __sys_path_tmp ; del __sys_path_tmp
+ __fonction = getattr(__module, funcrepr["__userFunction__name"])
+ sys.path = __sys_path_tmp
+ del __sys_path_tmp
if isinstance(xArgs, dict):
__HX = __fonction( __X, **xArgs )
else:
"__listJPCP", "__listJPCI", "__listJPCR", "__listJPPN", "__listJPIN",
"__userOperator", "__userFunction", "__increment", "__pool", "__dX",
"__userFunction__name", "__userFunction__modl", "__userFunction__path",
- )
- #
+ )
+
def __init__(self,
- name = "FDApproximation",
- Function = None,
- centeredDF = False,
- increment = 0.01,
- dX = None,
- extraArguments = None,
- reducingMemoryUse = False,
- avoidingRedundancy = True,
- toleranceInRedundancy = 1.e-18,
- lengthOfRedundancy = -1,
- mpEnabled = False,
- mpWorkers = None,
- mfEnabled = False,
- ):
+ name = "FDApproximation",
+ Function = None,
+ centeredDF = False,
+ increment = 0.01,
+ dX = None,
+ extraArguments = None,
+ reducingMemoryUse = False,
+ avoidingRedundancy = True,
+ toleranceInRedundancy = 1.e-18,
+ lengthOfRedundancy = -1,
+ mpEnabled = False,
+ mpWorkers = None,
+ mfEnabled = False ):
+ #
self.__name = str(name)
self.__extraArgs = extraArguments
#
if mpEnabled:
try:
- import multiprocessing
+ import multiprocessing # noqa: F401
self.__mpEnabled = True
except ImportError:
self.__mpEnabled = False
self.__mpWorkers = mpWorkers
if self.__mpWorkers is not None and self.__mpWorkers < 1:
self.__mpWorkers = None
- logging.debug("FDA Calculs en multiprocessing : %s (nombre de processus : %s)"%(self.__mpEnabled,self.__mpWorkers))
+ logging.debug("FDA Calculs en multiprocessing : %s (nombre de processus : %s)"%(self.__mpEnabled, self.__mpWorkers))
#
self.__mfEnabled = bool(mfEnabled)
logging.debug("FDA Calculs en multifonctions : %s"%(self.__mfEnabled,))
self.__avoidRC = True
self.__tolerBP = float(toleranceInRedundancy)
self.__lengthRJ = int(lengthOfRedundancy)
- self.__listJPCP = [] # Jacobian Previous Calculated Points
- self.__listJPCI = [] # Jacobian Previous Calculated Increment
- self.__listJPCR = [] # Jacobian Previous Calculated Results
- self.__listJPPN = [] # Jacobian Previous Calculated Point Norms
- self.__listJPIN = [] # Jacobian Previous Calculated Increment Norms
+ self.__listJPCP = [] # Jacobian Previous Calculated Points
+ self.__listJPCI = [] # Jacobian Previous Calculated Increment
+ self.__listJPCR = [] # Jacobian Previous Calculated Results
+ self.__listJPPN = [] # Jacobian Previous Calculated Point Norms
+ self.__listJPIN = [] # Jacobian Previous Calculated Increment Norms
else:
self.__avoidRC = False
logging.debug("FDA Calculs avec réduction des doublons : %s"%self.__avoidRC)
logging.debug("FDA Tolérance de détermination des doublons : %.2e"%self.__tolerBP)
#
if self.__mpEnabled:
- if isinstance(Function,types.FunctionType):
+ if isinstance(Function, types.FunctionType):
logging.debug("FDA Calculs en multiprocessing : FunctionType")
self.__userFunction__name = Function.__name__
try:
- mod = os.path.join(Function.__globals__['filepath'],Function.__globals__['filename'])
+ mod = os.path.join(Function.__globals__['filepath'], Function.__globals__['filename'])
except Exception:
mod = os.path.abspath(Function.__globals__['__file__'])
if not os.path.isfile(mod):
raise ImportError("No user defined function or method found with the name %s"%(mod,))
- self.__userFunction__modl = os.path.basename(mod).replace('.pyc','').replace('.pyo','').replace('.py','')
+ self.__userFunction__modl = os.path.basename(mod).replace('.pyc', '').replace('.pyo', '').replace('.py', '')
self.__userFunction__path = os.path.dirname(mod)
del mod
self.__userOperator = Operator(
avoidingRedundancy = self.__avoidRC,
inputAsMultiFunction = self.__mfEnabled,
extraArguments = self.__extraArgs )
- self.__userFunction = self.__userOperator.appliedTo # For the Direct computation
- elif isinstance(Function,types.MethodType):
+ self.__userFunction = self.__userOperator.appliedTo  # For the Direct computation
+ elif isinstance(Function, types.MethodType):
logging.debug("FDA Computations in multiprocessing: MethodType")
self.__userFunction__name = Function.__name__
try:
- mod = os.path.join(Function.__globals__['filepath'],Function.__globals__['filename'])
+ mod = os.path.join(Function.__globals__['filepath'], Function.__globals__['filename'])
except Exception:
mod = os.path.abspath(Function.__func__.__globals__['__file__'])
if not os.path.isfile(mod):
raise ImportError("No user defined function or method found with the name %s"%(mod,))
- self.__userFunction__modl = os.path.basename(mod).replace('.pyc','').replace('.pyo','').replace('.py','')
+ self.__userFunction__modl = os.path.basename(mod).replace('.pyc', '').replace('.pyo', '').replace('.py', '')
self.__userFunction__path = os.path.dirname(mod)
del mod
self.__userOperator = Operator(
avoidingRedundancy = self.__avoidRC,
inputAsMultiFunction = self.__mfEnabled,
extraArguments = self.__extraArgs )
- self.__userFunction = self.__userOperator.appliedTo # For the Direct computation
+ self.__userFunction = self.__userOperator.appliedTo  # For the Direct computation
else:
raise TypeError("User defined function or method has to be provided for finite differences approximation.")
else:
# ---------------------------------------------------------
def __doublon__(self, __e, __l, __n, __v=None):
__ac, __iac = False, -1
- for i in range(len(__l)-1,-1,-1):
+ for i in range(len(__l) - 1, -1, -1):
if numpy.linalg.norm(__e - __l[i]) < self.__tolerBP * __n[i]:
__ac, __iac = True, i
- if __v is not None: logging.debug("FDA Case%s already computed, retrieving duplicate %i"%(__v,__iac))
+ if __v is not None:
+ logging.debug("FDA Case%s already computed, retrieving duplicate %i"%(__v, __iac))
break
return __ac, __iac
# ---------------------------------------------------------
def __listdotwith__(self, __LMatrix, __dotWith = None, __dotTWith = None):
"Produit incrémental d'une matrice liste de colonnes avec un vecteur"
- if not isinstance(__LMatrix, (list,tuple)):
+ if not isinstance(__LMatrix, (list, tuple)):
raise TypeError("Columnwise list matrix has not the proper type: %s"%type(__LMatrix))
if __dotWith is not None:
__Idwx = numpy.ravel( __dotWith )
logging.debug("FDA Incrément de............: %s*X"%float(self.__increment))
logging.debug("FDA Approximation centrée...: %s"%(self.__centeredDF))
#
- if X is None or len(X)==0:
+ if X is None or len(X) == 0:
raise ValueError("Nominal point X for approximate derivatives can not be None or void (given X: %s)."%(str(X),))
#
_X = numpy.ravel( X )
#
__alreadyCalculated = False
if self.__avoidRC:
- __bidon, __alreadyCalculatedP = self.__doublon__(_X, self.__listJPCP, self.__listJPPN, None)
+ __bidon, __alreadyCalculatedP = self.__doublon__( _X, self.__listJPCP, self.__listJPPN, None)
__bidon, __alreadyCalculatedI = self.__doublon__(_dX, self.__listJPCI, self.__listJPIN, None)
if __alreadyCalculatedP == __alreadyCalculatedI > -1:
__alreadyCalculated, __i = True, __alreadyCalculatedP
_Jacobienne = self.__listJPCR[__i]
logging.debug("FDA Fin du calcul de la Jacobienne")
if dotWith is not None:
- return numpy.dot(_Jacobienne, numpy.ravel( dotWith ))
+ return numpy.dot( _Jacobienne, numpy.ravel( dotWith ))
elif dotTWith is not None:
return numpy.dot(_Jacobienne.T, numpy.ravel( dotTWith ))
else:
#
if self.__mpEnabled and not self.__mfEnabled:
funcrepr = {
- "__userFunction__path" : self.__userFunction__path,
- "__userFunction__modl" : self.__userFunction__modl,
- "__userFunction__name" : self.__userFunction__name,
+ "__userFunction__path": self.__userFunction__path,
+ "__userFunction__modl": self.__userFunction__modl,
+ "__userFunction__name": self.__userFunction__name,
}
_jobs = []
for i in range( len(_dX) ):
_X_moins_dXi = numpy.array( _X, dtype=float )
_X_moins_dXi[i] = _X[i] - _dXi
#
- _jobs.append( (_X_plus_dXi, self.__extraArgs, funcrepr) )
+ _jobs.append( ( _X_plus_dXi, self.__extraArgs, funcrepr) )
_jobs.append( (_X_moins_dXi, self.__extraArgs, funcrepr) )
#
import multiprocessing
#
_Jacobienne = []
for i in range( len(_dX) ):
- _Jacobienne.append( numpy.ravel( _HX_plusmoins_dX[2*i] - _HX_plusmoins_dX[2*i+1] ) / (2.*_dX[i]) )
+ _Jacobienne.append( numpy.ravel( _HX_plusmoins_dX[2 * i] - _HX_plusmoins_dX[2 * i + 1] ) / (2. * _dX[i]) )
#
elif self.__mfEnabled:
_xserie = []
#
_Jacobienne = []
for i in range( len(_dX) ):
- _Jacobienne.append( numpy.ravel( _HX_plusmoins_dX[2*i] - _HX_plusmoins_dX[2*i+1] ) / (2.*_dX[i]) )
+ _Jacobienne.append( numpy.ravel( _HX_plusmoins_dX[2 * i] - _HX_plusmoins_dX[2 * i + 1] ) / (2. * _dX[i]) )
#
else:
_Jacobienne = []
_HX_plus_dXi = self.DirectOperator( _X_plus_dXi )
_HX_moins_dXi = self.DirectOperator( _X_moins_dXi )
#
- _Jacobienne.append( numpy.ravel( _HX_plus_dXi - _HX_moins_dXi ) / (2.*_dXi) )
+ _Jacobienne.append( numpy.ravel( _HX_plus_dXi - _HX_moins_dXi ) / (2. * _dXi) )
#
else:
#
if self.__mpEnabled and not self.__mfEnabled:
funcrepr = {
- "__userFunction__path" : self.__userFunction__path,
- "__userFunction__modl" : self.__userFunction__modl,
- "__userFunction__name" : self.__userFunction__name,
+ "__userFunction__path": self.__userFunction__path,
+ "__userFunction__modl": self.__userFunction__modl,
+ "__userFunction__name": self.__userFunction__name,
}
_jobs = []
_jobs.append( (_X, self.__extraArgs, funcrepr) )
if __Produit is None or self.__avoidRC:
_Jacobienne = numpy.transpose( numpy.vstack( _Jacobienne ) )
if self.__avoidRC:
- if self.__lengthRJ < 0: self.__lengthRJ = 2 * _X.size
+ if self.__lengthRJ < 0:
+ self.__lengthRJ = 2 * _X.size
while len(self.__listJPCP) > self.__lengthRJ:
self.__listJPCP.pop(0)
self.__listJPCI.pop(0)
# Computation of the matrix form if the second argument is None
# -------------------------------------------------------------
_Jacobienne = self.TangentMatrix( X )
- if self.__mfEnabled: return [_Jacobienne,]
- else: return _Jacobienne
+ if self.__mfEnabled:
+ return [_Jacobienne,]
+ else:
+ return _Jacobienne
else:
#
# Computation of the linearized value of H at X applied to dX
# ------------------------------------------------------
_HtX = self.TangentMatrix( X, dotWith = dX )
- if self.__mfEnabled: return [_HtX,]
- else: return _HtX
+ if self.__mfEnabled:
+ return [_HtX,]
+ else:
+ return _HtX
# ---------------------------------------------------------
def AdjointOperator(self, paire, **extraArgs ):
# Computation of the matrix form if the second argument is None
# -------------------------------------------------------------
_JacobienneT = self.TangentMatrix( X ).T
- if self.__mfEnabled: return [_JacobienneT,]
- else: return _JacobienneT
+ if self.__mfEnabled:
+ return [_JacobienneT,]
+ else:
+ return _JacobienneT
else:
#
# Computation of the adjoint value at X applied to Y
# --------------------------------------------------
_HaY = self.TangentMatrix( X, dotTWith = Y )
- if self.__mfEnabled: return [_HaY,]
- else: return _HaY
+ if self.__mfEnabled:
+ return [_HaY,]
+ else:
+ return _HaY
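In the centered branch above, each Jacobian column is (H(X+dXi·ei) − H(X−dXi·ei)) / (2·dXi), and TangentOperator/AdjointOperator then reduce to products with this matrix or its transpose. A minimal sketch of the centered scheme, without the caching and multiprocessing machinery, assuming a user function H and a nominal X with nonzero components:

    import numpy

    def centered_jacobian(H, X, increment=0.01):
        X = numpy.ravel(X).astype(float)
        dX = increment * X  # per-component increment, as with dX = increment * X
        columns = []
        for i, dXi in enumerate(dX):
            Xp, Xm = X.copy(), X.copy()
            Xp[i] += dXi
            Xm[i] -= dXi
            # one column per perturbed component: (H(X+dXi) - H(X-dXi)) / (2*dXi)
            columns.append(numpy.ravel(H(Xp) - H(Xm)) / (2. * dXi))
        return numpy.transpose(numpy.vstack(columns))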
# ==============================================================================
def SetInitialDirection( __Direction = [], __Amplitude = 1., __Position = None ):
__dX0 = []
__X0 = numpy.ravel(numpy.asarray(__Position))
__mX0 = numpy.mean( __X0, dtype=mfp )
- if abs(__mX0) < 2*mpr: __mX0 = 1. # Avoids the problem of a null position
+ if abs(__mX0) < 2 * mpr:
+ __mX0 = 1.  # Avoids the problem of a null position
for v in __X0:
if abs(v) > 1.e-8:
- __dX0.append( numpy.random.normal(0.,abs(v)) )
+ __dX0.append( numpy.random.normal(0., abs(v)) )
else:
- __dX0.append( numpy.random.normal(0.,__mX0) )
+ __dX0.append( numpy.random.normal(0., __mX0) )
#
- __dX0 = numpy.asarray(__dX0,float) # Avoids the problem of an array of size 1
- __dX0 = numpy.ravel( __dX0 ) # Flattens the vectors
+ __dX0 = numpy.asarray(__dX0, float)  # Avoids the problem of an array of size 1
+ __dX0 = numpy.ravel( __dX0 )  # Flattens the vectors
__dX0 = float(__Amplitude) * __dX0
#
return __dX0
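In the excerpted branch where a position but no explicit direction is given, SetInitialDirection thus draws each component from Normal(0, |v|), falls back to the mean of the position as standard deviation for near-zero components, and scales by the amplitude. A hedged usage sketch (the numeric values are illustrative only):

    import numpy

    numpy.random.seed(123456)  # only to make the sketch reproducible
    dX0 = SetInitialDirection(__Amplitude = 0.1, __Position = [1., 0., 3.])
    # dX0 has 3 components: those larger than 1.e-8 are drawn with std abs(v),
    # the near-zero one with std mean([1., 0., 3.]), all scaled by 0.1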
def EnsembleOfCenteredPerturbations( __bgCenter, __bgCovariance, __nbMembers ):
"Génération d'un ensemble de taille __nbMembers-1 d'états aléatoires centrés"
#
- __bgCenter = numpy.ravel(__bgCenter)[:,None]
+ __bgCenter = numpy.ravel(__bgCenter)[:, None]
if __nbMembers < 1:
raise ValueError("Number of members has to be strictly more than 1 (given number: %s)."%(str(__nbMembers),))
#
__bgCenter,
__bgCovariance,
__nbMembers,
- __withSVD = True,
- ):
+ __withSVD = True ):
"Génération d'un ensemble de taille __nbMembers-1 d'états aléatoires centrés"
def __CenteredRandomAnomalies(Zr, N):
"""
handwritten notes of MB, consistent with the code of PS, with eps = -1
"""
eps = -1
- Q = numpy.identity(N-1)-numpy.ones((N-1,N-1))/numpy.sqrt(N)/(numpy.sqrt(N)-eps)
- Q = numpy.concatenate((Q, [eps*numpy.ones(N-1)/numpy.sqrt(N)]), axis=0)
- R, _ = numpy.linalg.qr(numpy.random.normal(size = (N-1,N-1)))
- Q = numpy.dot(Q,R)
- Zr = numpy.dot(Q,Zr)
+ Q = numpy.identity(N - 1) - numpy.ones((N - 1, N - 1)) / numpy.sqrt(N) / (numpy.sqrt(N) - eps)
+ Q = numpy.concatenate((Q, [eps * numpy.ones(N - 1) / numpy.sqrt(N)]), axis=0)
+ R, _ = numpy.linalg.qr(numpy.random.normal(size = (N - 1, N - 1)))
+ Q = numpy.dot(Q, R)
+ Zr = numpy.dot(Q, Zr)
return Zr.T
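The Q matrix built above has shape (N, N−1) with columns orthogonal to ones(N), so the N produced anomalies are exactly centered; the random orthogonal factor R only mixes them. This can be checked numerically with a sketch reusing the same construction (sizes are illustrative):

    import numpy

    N, nbctl = 5, 3
    eps = -1
    Q = numpy.identity(N - 1) - numpy.ones((N - 1, N - 1)) / numpy.sqrt(N) / (numpy.sqrt(N) - eps)
    Q = numpy.concatenate((Q, [eps * numpy.ones(N - 1) / numpy.sqrt(N)]), axis=0)
    R, _ = numpy.linalg.qr(numpy.random.normal(size=(N - 1, N - 1)))
    Zca = (Q @ R @ numpy.random.normal(size=(N - 1, nbctl))).T
    assert numpy.allclose(Zca.sum(axis=1), 0.)  # anomalies are exactly centered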
#
- __bgCenter = numpy.ravel(__bgCenter).reshape((-1,1))
+ __bgCenter = numpy.ravel(__bgCenter).reshape((-1, 1))
if __nbMembers < 1:
raise ValueError("Number of members has to be strictly more than 1 (given number: %s)."%(str(__nbMembers),))
if __bgCovariance is None:
if __nbMembers > _nbctl:
_Z = numpy.concatenate((numpy.dot(
numpy.diag(numpy.sqrt(_s[:_nbctl])), _V[:_nbctl]),
- numpy.random.multivariate_normal(numpy.zeros(_nbctl),__bgCovariance,__nbMembers-1-_nbctl)), axis = 0)
+ numpy.random.multivariate_normal(numpy.zeros(_nbctl), __bgCovariance, __nbMembers - 1 - _nbctl)), axis = 0)
else:
- _Z = numpy.dot(numpy.diag(numpy.sqrt(_s[:__nbMembers-1])), _V[:__nbMembers-1])
+ _Z = numpy.dot(numpy.diag(numpy.sqrt(_s[:__nbMembers - 1])), _V[:__nbMembers - 1])
_Zca = __CenteredRandomAnomalies(_Z, __nbMembers)
_Perturbations = __bgCenter + _Zca
else:
if max(abs(__bgCovariance.flatten())) > 0:
_nbctl = __bgCenter.size
- _Z = numpy.random.multivariate_normal(numpy.zeros(_nbctl),__bgCovariance,__nbMembers-1)
+ _Z = numpy.random.multivariate_normal(numpy.zeros(_nbctl), __bgCovariance, __nbMembers - 1)
_Zca = __CenteredRandomAnomalies(_Z, __nbMembers)
_Perturbations = __bgCenter + _Zca
else:
# ==============================================================================
def EnsembleMean( __Ensemble ):
"Renvoie la moyenne empirique d'un ensemble"
- return numpy.asarray(__Ensemble).mean(axis=1, dtype=mfp).astype('float').reshape((-1,1))
+ return numpy.asarray(__Ensemble).mean(axis=1, dtype=mfp).astype('float').reshape((-1, 1))
# ==============================================================================
def EnsembleOfAnomalies( __Ensemble, __OptMean = None, __Normalisation = 1. ):
if __OptMean is None:
__Em = EnsembleMean( __Ensemble )
else:
- __Em = numpy.ravel( __OptMean ).reshape((-1,1))
+ __Em = numpy.ravel( __OptMean ).reshape((-1, 1))
#
return __Normalisation * (numpy.asarray( __Ensemble ) - __Em)
__n, __m = numpy.asarray( __Ensemble ).shape
__Anomalies = EnsembleOfAnomalies( __Ensemble )
# Empirical estimation
- __Covariance = ( __Anomalies @ __Anomalies.T ) / (__m-1)
+ __Covariance = ( __Anomalies @ __Anomalies.T ) / (__m - 1)
# Ensures symmetry
__Covariance = ( __Covariance + __Covariance.T ) * 0.5
# Ensures positivity
- __epsilon = mpr*numpy.trace( __Covariance )
+ __epsilon = mpr * numpy.trace( __Covariance )
__Covariance = __Covariance + __epsilon * numpy.identity(__n)
#
return __Covariance
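This is the classical unbiased sample covariance P = A·Aᵀ/(m−1) on the anomalies, symmetrized and nudged positive with a trace-scaled jitter. A quick numerical cross-check against numpy's own estimator:

    import numpy

    E = numpy.random.rand(3, 100)              # n = 3 states, m = 100 members
    A = E - E.mean(axis=1, keepdims=True)      # anomalies
    P = (A @ A.T) / (E.shape[1] - 1)           # empirical covariance
    P = (P + P.T) * 0.5                        # enforce symmetry
    assert numpy.allclose(P, numpy.cov(E))     # matches numpy's estimator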
# ==============================================================================
def SingularValuesEstimation( __Ensemble, __Using = "SVDVALS"):
"Renvoie les valeurs singulières de l'ensemble et leur carré"
- if __Using == "SVDVALS": # Recommandé
+ if __Using == "SVDVALS": # Recommandé
import scipy
__sv = scipy.linalg.svdvals( __Ensemble )
__svsq = __sv**2
elif __Using == "SVD":
_, __sv, _ = numpy.linalg.svd( __Ensemble )
__svsq = __sv**2
- elif __Using == "EIG": # Lent
+ elif __Using == "EIG": # Lent
__eva, __eve = numpy.linalg.eig( __Ensemble @ __Ensemble.T )
__svsq = numpy.sort(numpy.abs(numpy.real( __eva )))[::-1]
__sv = numpy.sqrt( __svsq )
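The three branches agree up to numerical precision, since the eigenvalues of E·Eᵀ are the squared singular values of E. A short cross-check sketch:

    import numpy
    import scipy.linalg

    E = numpy.random.rand(5, 8)
    sv1 = scipy.linalg.svdvals(E)                  # "SVDVALS" (recommended)
    sv2 = numpy.linalg.svd(E, compute_uv=False)    # "SVD"
    ev = numpy.linalg.eigvals(E @ E.T)             # "EIG" (slow)
    sv3 = numpy.sqrt(numpy.sort(numpy.abs(ev.real))[::-1])
    assert numpy.allclose(sv1, sv2) and numpy.allclose(sv1, sv3)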
normes = numpy.linalg.norm(
numpy.take(__Ensemble, __IncludedPoints, axis=0, mode='clip'),
axis = 0,
- )
+ )
else:
normes = numpy.linalg.norm( __Ensemble, axis = 0)
nmax = numpy.max(normes)
normes = numpy.linalg.norm(
numpy.take(__Ensemble, __IncludedPoints, axis=0, mode='clip'),
axis = 0, ord=numpy.inf,
- )
+ )
else:
normes = numpy.linalg.norm( __Ensemble, axis = 0, ord=numpy.inf)
nmax = numpy.max(normes)
return nmax, imax, normes
def InterpolationErrorByColumn(
- __Ensemble = None, __Basis = None, __Points = None, __M = 2, # Usage 1
- __Differences = None, # Usage 2
- __ErrorNorm = None, # Common
- __LcCsts = False, __IncludedPoints = [], # Common
- __CDM = False, # ComputeMaxDifference # Common
- __RMU = False, # ReduceMemoryUse # Common
- __FTL = False, # ForceTril # Common
- ):
+ __Ensemble = None, __Basis = None, __Points = None, __M = 2, # Usage 1
+ __Differences = None, # Usage 2
+ __ErrorNorm = None, # Common
+ __LcCsts = False, __IncludedPoints = [], # Common
+ __CDM = False, # ComputeMaxDifference # Common
+ __RMU = False, # ReduceMemoryUse # Common
+ __FTL = False, # ForceTril # Common
+ ): # noqa: E123
"Analyse des normes d'erreurs d'interpolation calculées par colonne"
if __ErrorNorm == "L2":
NormByColumn = MaxL2NormByColumn
else:
NormByColumn = MaxLinfNormByColumn
#
- if __Differences is None and not __RMU: # Usage 1
+ if __Differences is None and not __RMU: # Usage 1
if __FTL:
- rBasis = numpy.tril( __Basis[__Points,:] )
+ rBasis = numpy.tril( __Basis[__Points, :] )
else:
- rBasis = __Basis[__Points,:]
- rEnsemble = __Ensemble[__Points,:]
+ rBasis = __Basis[__Points, :]
+ rEnsemble = __Ensemble[__Points, :]
#
if __M > 1:
rBasis_inv = numpy.linalg.inv(rBasis)
- Interpolator = numpy.dot(__Basis,numpy.dot(rBasis_inv,rEnsemble))
+ Interpolator = numpy.dot(__Basis, numpy.dot(rBasis_inv, rEnsemble))
else:
rBasis_inv = 1. / rBasis
- Interpolator = numpy.outer(__Basis,numpy.outer(rBasis_inv,rEnsemble))
+ Interpolator = numpy.outer(__Basis, numpy.outer(rBasis_inv, rEnsemble))
#
differences = __Ensemble - Interpolator
#
error, nbr, _ = NormByColumn(differences, __LcCsts, __IncludedPoints)
#
if __CDM:
- maxDifference = differences[:,nbr]
+ maxDifference = differences[:, nbr]
#
- elif __Differences is None and __RMU: # Usage 1
+ elif __Differences is None and __RMU: # Usage 1
if __FTL:
- rBasis = numpy.tril( __Basis[__Points,:] )
+ rBasis = numpy.tril( __Basis[__Points, :] )
else:
- rBasis = __Basis[__Points,:]
- rEnsemble = __Ensemble[__Points,:]
+ rBasis = __Basis[__Points, :]
+ rEnsemble = __Ensemble[__Points, :]
#
if __M > 1:
rBasis_inv = numpy.linalg.inv(rBasis)
- rCoordinates = numpy.dot(rBasis_inv,rEnsemble)
+ rCoordinates = numpy.dot(rBasis_inv, rEnsemble)
else:
rBasis_inv = 1. / rBasis
- rCoordinates = numpy.outer(rBasis_inv,rEnsemble)
+ rCoordinates = numpy.outer(rBasis_inv, rEnsemble)
#
error = 0.
nbr = -1
for iCol in range(__Ensemble.shape[1]):
if __M > 1:
- iDifference = __Ensemble[:,iCol] - numpy.dot(__Basis, rCoordinates[:,iCol])
+ iDifference = __Ensemble[:, iCol] - numpy.dot(__Basis, rCoordinates[:, iCol])
else:
- iDifference = __Ensemble[:,iCol] - numpy.ravel(numpy.outer(__Basis, rCoordinates[:,iCol]))
+ iDifference = __Ensemble[:, iCol] - numpy.ravel(numpy.outer(__Basis, rCoordinates[:, iCol]))
#
normDifference, _, _ = NormByColumn(iDifference, __LcCsts, __IncludedPoints)
#
nbr = iCol
#
if __CDM:
- maxDifference = __Ensemble[:,nbr] - numpy.dot(__Basis, rCoordinates[:,nbr])
+ maxDifference = __Ensemble[:, nbr] - numpy.dot(__Basis, rCoordinates[:, nbr])
#
- else: # Usage 2
+ else: # Usage 2
differences = __Differences
#
error, nbr, _ = NormByColumn(differences, __LcCsts, __IncludedPoints)
#
if __CDM:
# building this intermediate variable is expensive
- maxDifference = differences[:,nbr]
+ maxDifference = differences[:, nbr]
#
if __CDM:
return error, nbr, maxDifference
def EnsemblePerturbationWithGivenCovariance(
__Ensemble,
__Covariance,
- __Seed = None,
- ):
+ __Seed = None ):
"Ajout d'une perturbation à chaque membre d'un ensemble selon une covariance prescrite"
- if hasattr(__Covariance,"assparsematrix"):
- if (abs(__Ensemble).mean() > mpr) and (abs(__Covariance.assparsematrix())/abs(__Ensemble).mean() < mpr).all():
+ if hasattr(__Covariance, "assparsematrix"):
+ if (abs(__Ensemble).mean() > mpr) and (abs(__Covariance.assparsematrix()) / abs(__Ensemble).mean() < mpr).all():
# Handling of a null or nearly null covariance
return __Ensemble
if (abs(__Ensemble).mean() <= mpr) and (abs(__Covariance.assparsematrix()) < mpr).all():
# Handling of a null or nearly null covariance
return __Ensemble
else:
- if (abs(__Ensemble).mean() > mpr) and (abs(__Covariance)/abs(__Ensemble).mean() < mpr).all():
+ if (abs(__Ensemble).mean() > mpr) and (abs(__Covariance) / abs(__Ensemble).mean() < mpr).all():
# Handling of a null or nearly null covariance
return __Ensemble
if (abs(__Ensemble).mean() <= mpr) and (abs(__Covariance) < mpr).all():
return __Ensemble
#
__n, __m = __Ensemble.shape
- if __Seed is not None: numpy.random.seed(__Seed)
+ if __Seed is not None:
+ numpy.random.seed(__Seed)
#
- if hasattr(__Covariance,"isscalar") and __Covariance.isscalar():
+ if hasattr(__Covariance, "isscalar") and __Covariance.isscalar():
# Handling of a covariance that is a multiple of the identity
__zero = 0.
__std = numpy.sqrt(__Covariance.assparsematrix())
- __Ensemble += numpy.random.normal(__zero, __std, size=(__m,__n)).T
+ __Ensemble += numpy.random.normal(__zero, __std, size=(__m, __n)).T
#
- elif hasattr(__Covariance,"isvector") and __Covariance.isvector():
+ elif hasattr(__Covariance, "isvector") and __Covariance.isvector():
# Handling of a diagonal covariance with non-identical variances
__zero = numpy.zeros(__n)
__std = numpy.sqrt(__Covariance.assparsematrix())
__Ensemble += numpy.asarray([numpy.random.normal(__zero, __std) for i in range(__m)]).T
#
- elif hasattr(__Covariance,"ismatrix") and __Covariance.ismatrix():
+ elif hasattr(__Covariance, "ismatrix") and __Covariance.ismatrix():
# Handling of a full covariance
__Ensemble += numpy.random.multivariate_normal(numpy.zeros(__n), __Covariance.asfullmatrix(__n), size=__m).T
#
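The three covariance branches thus draw, per member, a scalar normal, componentwise normals, or one multivariate normal. A standalone sketch of the full-matrix branch (sizes and covariance are illustrative):

    import numpy

    n, m = 3, 50
    ensemble = numpy.zeros((n, m))   # one member per column
    cov = numpy.diag([1., 4., 9.])   # any full covariance matrix works here
    ensemble += numpy.random.multivariate_normal(numpy.zeros(n), cov, size=m).T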
__InputCovOrEns,
__InflationType = None,
__InflationFactor = None,
- __BackgroundCov = None,
- ):
+ __BackgroundCov = None ):
"""
Inflation applicable either to Pb or Pa, or to the ensembles EXb or EXa
__InflationFactor = float(__InflationFactor)
#
__InputCovOrEns = numpy.asarray(__InputCovOrEns)
- if __InputCovOrEns.size == 0: return __InputCovOrEns
+ if __InputCovOrEns.size == 0:
+ return __InputCovOrEns
#
if __InflationType in ["MultiplicativeOnAnalysisCovariance", "MultiplicativeOnBackgroundCovariance"]:
if __InflationFactor < 1.:
raise ValueError("Inflation factor for multiplicative inflation has to be greater or equal than 1.")
- if __InflationFactor < 1.+mpr: # No inflation = 1
+ if __InflationFactor < 1. + mpr: # No inflation = 1
return __InputCovOrEns
__OutputCovOrEns = __InflationFactor**2 * __InputCovOrEns
#
elif __InflationType in ["MultiplicativeOnAnalysisAnomalies", "MultiplicativeOnBackgroundAnomalies"]:
if __InflationFactor < 1.:
raise ValueError("Inflation factor for multiplicative inflation has to be greater or equal than 1.")
- if __InflationFactor < 1.+mpr: # No inflation = 1
+ if __InflationFactor < 1. + mpr: # No inflation = 1
return __InputCovOrEns
__InputCovOrEnsMean = __InputCovOrEns.mean(axis=1, dtype=mfp).astype('float')
- __OutputCovOrEns = __InputCovOrEnsMean[:,numpy.newaxis] \
- + __InflationFactor * (__InputCovOrEns - __InputCovOrEnsMean[:,numpy.newaxis])
+ __OutputCovOrEns = __InputCovOrEnsMean[:, numpy.newaxis] \
+ + __InflationFactor * (__InputCovOrEns - __InputCovOrEnsMean[:, numpy.newaxis])
#
elif __InflationType in ["AdditiveOnAnalysisCovariance", "AdditiveOnBackgroundCovariance"]:
if __InflationFactor < 0.:
raise ValueError("Inflation factor for additive inflation has to be greater or equal than 0.")
- if __InflationFactor < mpr: # No inflation = 0
+ if __InflationFactor < mpr: # No inflation = 0
return __InputCovOrEns
__n, __m = __InputCovOrEns.shape
if __n != __m:
raise ValueError("Additive inflation can only be applied to squared (covariance) matrix.")
- __tr = __InputCovOrEns.trace()/__n
+ __tr = __InputCovOrEns.trace() / __n
if __InflationFactor > __tr:
raise ValueError("Inflation factor for additive inflation has to be small over %.0e."%__tr)
- __OutputCovOrEns = (1. - __InflationFactor)*__InputCovOrEns + __InflationFactor * numpy.identity(__n)
+ __OutputCovOrEns = (1. - __InflationFactor) * __InputCovOrEns + __InflationFactor * numpy.identity(__n)
#
elif __InflationType == "HybridOnBackgroundCovariance":
if __InflationFactor < 0.:
raise ValueError("Inflation factor for hybrid inflation has to be greater or equal than 0.")
- if __InflationFactor < mpr: # No inflation = 0
+ if __InflationFactor < mpr: # No inflation = 0
return __InputCovOrEns
__n, __m = __InputCovOrEns.shape
if __n != __m:
#
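Summarizing the inflation variants above: multiplicative-on-covariance returns factor²·P, multiplicative-on-anomalies rescales the spread around the ensemble mean, and additive blends P with the identity. A minimal sketch of the anomaly variant:

    import numpy

    def inflate_anomalies(ensemble, factor):
        # mean + factor * (ensemble - mean), as in the multiplicative-on-anomalies branch
        mean = ensemble.mean(axis=1, keepdims=True)
        return mean + factor * (ensemble - mean)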
__HessienneI = []
for i in range(int(__nb)):
- __ee = numpy.zeros((__nb,1))
+ __ee = numpy.zeros((__nb, 1))
__ee[i] = 1.
- __HtEE = numpy.dot(__HtM,__ee).reshape((-1,1))
+ __HtEE = numpy.dot(__HtM, __ee).reshape((-1, 1))
__HessienneI.append( numpy.ravel( __BI * __ee + __HaM * (__RI * __HtEE) ) )
#
__A = numpy.linalg.inv(numpy.array( __HessienneI ))
- __A = (__A + __A.T) * 0.5 # Symmetry
- __A = __A + mpr*numpy.trace( __A ) * numpy.identity(__nb) # Positivity
+ __A = (__A + __A.T) * 0.5  # Symmetry
+ __A = __A + mpr * numpy.trace( __A ) * numpy.identity(__nb)  # Positivity
#
if min(__A.shape) != max(__A.shape):
raise ValueError(
- "The %s a posteriori covariance matrix A"%(__selfA._name,)+\
- " is of shape %s, despites it has to be a"%(str(__A.shape),)+\
- " squared matrix. There is an error in the observation operator,"+\
+ "The %s a posteriori covariance matrix A"%(__selfA._name,) + \
+ " is of shape %s, despites it has to be a"%(str(__A.shape),) + \
+ " squared matrix. There is an error in the observation operator," + \
" please check it.")
if (numpy.diag(__A) < 0).any():
raise ValueError(
- "The %s a posteriori covariance matrix A"%(__selfA._name,)+\
- " has at least one negative value on its diagonal. There is an"+\
+ "The %s a posteriori covariance matrix A"%(__selfA._name,) + \
+ " has at least one negative value on its diagonal. There is an" + \
" error in the observation operator, please check it.")
- if logging.getLogger().level < logging.WARNING: # The check only takes place in debug mode
+ if logging.getLogger().level < logging.WARNING:  # The check only takes place in debug mode
try:
numpy.linalg.cholesky( __A )
logging.debug("%s La matrice de covariance a posteriori A est bien symétrique définie positive."%(__selfA._name,))
except Exception:
raise ValueError(
- "The %s a posteriori covariance matrix A"%(__selfA._name,)+\
- " is not symmetric positive-definite. Please check your a"+\
+ "The %s a posteriori covariance matrix A"%(__selfA._name,) + \
+ " is not symmetric positive-definite. Please check your a" + \
" priori covariances and your observation operator.")
#
return __A
#
# Handling of the bounds
if "StateBoundsForQuantiles" in selfA._parameters:
- LBounds = selfA._parameters["StateBoundsForQuantiles"] # Prioritaire
+ LBounds = selfA._parameters["StateBoundsForQuantiles"] # Prioritaire
elif "Bounds" in selfA._parameters:
LBounds = selfA._parameters["Bounds"] # Défaut raisonnable
else:
EXr = None
for i in range(nbsamples):
if selfA._parameters["SimulationForQuantiles"] == "Linear" and HtM is not None and HXa is not None:
- dXr = (numpy.random.multivariate_normal(__Xa,A) - __Xa).reshape((-1,1))
- if LBounds is not None: # "EstimateProjection" by default
- dXr = numpy.max(numpy.hstack((dXr,LBounds[:,0].reshape((-1,1))) - __Xa.reshape((-1,1))),axis=1)
- dXr = numpy.min(numpy.hstack((dXr,LBounds[:,1].reshape((-1,1))) - __Xa.reshape((-1,1))),axis=1)
+ dXr = (numpy.random.multivariate_normal(__Xa, A) - __Xa).reshape((-1, 1))
+ if LBounds is not None: # "EstimateProjection" by default
+ dXr = numpy.max(numpy.hstack((dXr, LBounds[:, 0].reshape((-1, 1))) - __Xa.reshape((-1, 1))), axis=1)
+ dXr = numpy.min(numpy.hstack((dXr, LBounds[:, 1].reshape((-1, 1))) - __Xa.reshape((-1, 1))), axis=1)
dYr = HtM @ dXr
- Yr = HXa.reshape((-1,1)) + dYr
- if selfA._toStore("SampledStateForQuantiles"): Xr = __Xa + numpy.ravel(dXr)
+ Yr = HXa.reshape((-1, 1)) + dYr
+ if selfA._toStore("SampledStateForQuantiles"):
+ Xr = __Xa + numpy.ravel(dXr)
elif selfA._parameters["SimulationForQuantiles"] == "NonLinear" and Hm is not None:
- Xr = numpy.random.multivariate_normal(__Xa,A)
- if LBounds is not None: # "EstimateProjection" par défaut
- Xr = numpy.max(numpy.hstack((Xr.reshape((-1,1)),LBounds[:,0].reshape((-1,1)))),axis=1)
- Xr = numpy.min(numpy.hstack((Xr.reshape((-1,1)),LBounds[:,1].reshape((-1,1)))),axis=1)
+ Xr = numpy.random.multivariate_normal(__Xa, A)
+ if LBounds is not None: # "EstimateProjection" by default
+ Xr = numpy.max(numpy.hstack((Xr.reshape((-1, 1)), LBounds[:, 0].reshape((-1, 1)))), axis=1)
+ Xr = numpy.min(numpy.hstack((Xr.reshape((-1, 1)), LBounds[:, 1].reshape((-1, 1)))), axis=1)
Yr = numpy.asarray(Hm( Xr ))
else:
raise ValueError("Quantile simulations has only to be Linear or NonLinear.")
#
if YfQ is None:
- YfQ = Yr.reshape((-1,1))
- if selfA._toStore("SampledStateForQuantiles"): EXr = Xr.reshape((-1,1))
+ YfQ = Yr.reshape((-1, 1))
+ if selfA._toStore("SampledStateForQuantiles"):
+ EXr = Xr.reshape((-1, 1))
else:
- YfQ = numpy.hstack((YfQ,Yr.reshape((-1,1))))
- if selfA._toStore("SampledStateForQuantiles"): EXr = numpy.hstack((EXr,Xr.reshape((-1,1))))
+ YfQ = numpy.hstack((YfQ, Yr.reshape((-1, 1))))
+ if selfA._toStore("SampledStateForQuantiles"):
+ EXr = numpy.hstack((EXr, Xr.reshape((-1, 1))))
#
# Quantile extraction
YfQ.sort(axis=-1)
YQ = None
for quantile in selfA._parameters["Quantiles"]:
- if not (0. <= float(quantile) <= 1.): continue
- indice = int(nbsamples * float(quantile) - 1./nbsamples)
- if YQ is None: YQ = YfQ[:,indice].reshape((-1,1))
- else: YQ = numpy.hstack((YQ,YfQ[:,indice].reshape((-1,1))))
- if YQ is not None: # Liste non vide de quantiles
+ if not (0. <= float(quantile) <= 1.):
+ continue
+ indice = int(nbsamples * float(quantile) - 1. / nbsamples)
+ if YQ is None:
+ YQ = YfQ[:, indice].reshape((-1, 1))
+ else:
+ YQ = numpy.hstack((YQ, YfQ[:, indice].reshape((-1, 1))))
+ if YQ is not None: # Non-empty list of quantiles
selfA.StoredVariables["SimulationQuantiles"].store( YQ )
if selfA._toStore("SampledStateForQuantiles"):
selfA.StoredVariables["SampledStateForQuantiles"].store( EXr )
def ForceNumericBounds( __Bounds, __infNumbers = True ):
"Force les bornes à être des valeurs numériques, sauf si globalement None"
# Conserve une valeur par défaut à None s'il n'y a pas de bornes
- if __Bounds is None: return None
+ if __Bounds is None:
+ return None
+ #
# Convert every individual None bound into +/- numerical infinity
- __Bounds = numpy.asarray( __Bounds, dtype=float )
- if len(__Bounds.shape) != 2 or min(__Bounds.shape) <= 0 or __Bounds.shape[1] != 2:
- raise ValueError("Incorrectly shaped bounds data")
+ __Bounds = numpy.asarray( __Bounds, dtype=float ).reshape((-1, 2))
+ if len(__Bounds.shape) != 2 or __Bounds.shape[0] == 0 or __Bounds.shape[1] != 2:
+ raise ValueError("Incorrectly shaped bounds data (effective shape is %s)"%(__Bounds.shape,))
if __infNumbers:
- __Bounds[numpy.isnan(__Bounds[:,0]),0] = -float('inf')
- __Bounds[numpy.isnan(__Bounds[:,1]),1] = float('inf')
+ __Bounds[numpy.isnan(__Bounds[:, 0]), 0] = -float('inf')
+ __Bounds[numpy.isnan(__Bounds[:, 1]), 1] = float('inf')
else:
- __Bounds[numpy.isnan(__Bounds[:,0]),0] = -sys.float_info.max
- __Bounds[numpy.isnan(__Bounds[:,1]),1] = sys.float_info.max
+ __Bounds[numpy.isnan(__Bounds[:, 0]), 0] = -sys.float_info.max
+ __Bounds[numpy.isnan(__Bounds[:, 1]), 1] = sys.float_info.max
return __Bounds
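# Illustrative example (hypothetical values): ForceNumericBounds([[None, 10.], [0., None]])
# returns array([[-inf, 10.], [0., inf]]); with __infNumbers=False, the infinite
# values are replaced by +/- sys.float_info.max instead.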
# ==============================================================================
def RecentredBounds( __Bounds, __Center, __Scale = None ):
"Recentre les bornes autour de 0, sauf si globalement None"
# Conserve une valeur par défaut à None s'il n'y a pas de bornes
- if __Bounds is None: return None
+ if __Bounds is None:
+ return None
+ #
if __Scale is None:
# Recenter the numerical bound values
- return ForceNumericBounds( __Bounds ) - numpy.ravel( __Center ).reshape((-1,1))
+ return ForceNumericBounds( __Bounds ) - numpy.ravel( __Center ).reshape((-1, 1))
else:
# Recenter the numerical bound values and rescale with a matrix
- return __Scale @ (ForceNumericBounds( __Bounds, False ) - numpy.ravel( __Center ).reshape((-1,1)))
+ return __Scale @ (ForceNumericBounds( __Bounds, False ) - numpy.ravel( __Center ).reshape((-1, 1)))
# ==============================================================================
def ApplyBounds( __Vector, __Bounds, __newClip = True ):
"Applique des bornes numériques à un point"
# Conserve une valeur par défaut s'il n'y a pas de bornes
- if __Bounds is None: return __Vector
+ if __Bounds is None:
+ return __Vector
#
- if not isinstance(__Vector, numpy.ndarray): # Is an array
+ if not isinstance(__Vector, numpy.ndarray): # Must be an ndarray
raise ValueError("Incorrect array definition of vector data")
- if not isinstance(__Bounds, numpy.ndarray): # Is an array
+ if not isinstance(__Bounds, numpy.ndarray): # Must be an ndarray
raise ValueError("Incorrect array definition of bounds data")
- if 2*__Vector.size != __Bounds.size: # Is a 2 column array of vector length
- raise ValueError("Incorrect bounds number (%i) to be applied for this vector (of size %i)"%(__Bounds.size,__Vector.size))
+ if 2 * __Vector.size != __Bounds.size: # Must be a 2-column array matching the vector length
+ raise ValueError("Incorrect bounds number (%i) to be applied for this vector (of size %i)"%(__Bounds.size, __Vector.size))
if len(__Bounds.shape) != 2 or min(__Bounds.shape) <= 0 or __Bounds.shape[1] != 2:
raise ValueError("Incorrectly shaped bounds data")
#
if __newClip:
__Vector = __Vector.clip(
- __Bounds[:,0].reshape(__Vector.shape),
- __Bounds[:,1].reshape(__Vector.shape),
- )
+ __Bounds[:, 0].reshape(__Vector.shape),
+ __Bounds[:, 1].reshape(__Vector.shape),
+ )
else:
- __Vector = numpy.max(numpy.hstack((__Vector.reshape((-1,1)),numpy.asmatrix(__Bounds)[:,0])),axis=1)
- __Vector = numpy.min(numpy.hstack((__Vector.reshape((-1,1)),numpy.asmatrix(__Bounds)[:,1])),axis=1)
+ __Vector = numpy.max(numpy.hstack((__Vector.reshape((-1, 1)), numpy.asmatrix(__Bounds)[:, 0])), axis=1)
+ __Vector = numpy.min(numpy.hstack((__Vector.reshape((-1, 1)), numpy.asmatrix(__Bounds)[:, 1])), axis=1)
__Vector = numpy.asarray(__Vector)
#
return __Vector
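# Illustrative example (hypothetical values): with __Vector = numpy.array([-1., 5., 20.])
# and __Bounds = numpy.array([[0., 10.]] * 3), ApplyBounds returns [0., 5., 10.],
# each component being clipped to its own [min, max] interval.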
__Bounds = __BoxBounds
logging.debug("%s Definition of parameter bounds from current parameter increment bounds"%(__Name,))
elif __Bounds is not None and __BoxBounds is None:
- __BoxBounds = __Multiplier * (__Bounds - __Xini.reshape((-1,1))) # "M * [Xmin,Xmax]-Xini"
+ __BoxBounds = __Multiplier * (__Bounds - __Xini.reshape((-1, 1))) # "M * [Xmin,Xmax]-Xini"
logging.debug("%s Definition of parameter increment bounds from current parameter bounds"%(__Name,))
return __Bounds, __BoxBounds
selfB._parameters["InitializationPoint"] = Xf
from daAlgorithms.Atoms import std3dvar
std3dvar.std3dvar(selfB, Xf, __Ynpu, None, __HO, None, __R, Pf)
- Xa = selfB.get("Analysis")[-1].reshape((-1,1))
+ Xa = selfB.get("Analysis")[-1].reshape((-1, 1))
del selfB
#
return Xa + EnsembleOfAnomalies( __EnXn )
__GaussDelta = numpy.random.normal( 0, 1, size=__Center.shape )
__VectorNorm = numpy.linalg.norm( __GaussDelta )
__PointOnHS = __Radius * (__GaussDelta / __VectorNorm)
- __MoveInHS = math.exp( math.log(numpy.random.uniform()) / __Dimension) # rand()**1/n
+ __MoveInHS = math.exp( math.log(numpy.random.uniform()) / __Dimension) # rand()**(1/n)
__PointInHS = __MoveInHS * __PointOnHS
return __Center + __PointInHS
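# Note on the sampling above: normalizing a Gaussian draw gives a direction
# uniformly distributed on the unit sphere, and scaling the radius by U**(1/n),
# with U uniform on [0, 1], makes the point uniform inside the n-dimensional ball.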
+# ==============================================================================
+class GenerateWeightsAndSigmaPoints(object):
+ "Génère les points sigma et les poids associés"
+
+ def __init__(self,
+ Nn=0, EO="State", VariantM="UKF",
+ Alpha=None, Beta=2., Kappa=0.):
+ self.Nn = int(Nn)
+ if Alpha is None:
+ Alpha = 1.e-2 # Assumed default when unspecified, within the usual range 1.e-4 <= Alpha <= 1.
+ self.Alpha = numpy.longdouble( Alpha )
+ self.Beta = numpy.longdouble( Beta )
+ if abs(Kappa) < 2 * mpr:
+ if EO == "Parameters":
+ self.Kappa = 3 - self.Nn
+ else: # EO == "State":
+ self.Kappa = 0
+ else:
+ self.Kappa = Kappa
+ self.Kappa = numpy.longdouble( self.Kappa )
+ self.Lambda = self.Alpha**2 * ( self.Nn + self.Kappa ) - self.Nn
+ self.Gamma = self.Alpha * numpy.sqrt( self.Nn + self.Kappa )
+ # Note: Gamma = sqrt(n+Lambda) = Alpha*sqrt(n+Kappa)
+ assert 0. < self.Alpha <= 1., "Alpha has to be strictly greater than 0 and at most equal to 1"
+ #
+ if VariantM == "UKF":
+ self.Wm, self.Wc, self.SC = self.__UKF2000()
+ elif VariantM == "S3F":
+ self.Wm, self.Wc, self.SC = self.__S3F2022()
+ elif VariantM == "MSS":
+ self.Wm, self.Wc, self.SC = self.__MSS2011()
+ elif VariantM == "5OS":
+ self.Wm, self.Wc, self.SC = self.__5OS2002()
+ else:
+ raise ValueError("Variant \"%s\" is not a valid one."%VariantM)
+
+ def __UKF2000(self):
+ "Standard Set, Julier et al. 2000 (aka Canonical UKF)"
+ # Note: W^{(m)}_{i=/=0} = 1. / (2.*(n + Lambda))
+ Winn = 1. / (2. * self.Alpha**2 * ( self.Nn + self.Kappa ))
+ Ww = []
+ Ww.append( 0. )
+ for point in range(2 * self.Nn):
+ Ww.append( Winn )
+ # Note: LsLpL = Lambda / (n + Lambda)
+ LsLpL = 1. - self.Nn / (self.Alpha**2 * ( self.Nn + self.Kappa ))
+ Wm = numpy.array( Ww )
+ Wm[0] = LsLpL
+ Wc = numpy.array( Ww )
+ Wc[0] = LsLpL + (1. - self.Alpha**2 + self.Beta)
+ #
+ SC = numpy.zeros((self.Nn, len(Ww)))
+ for ligne in range(self.Nn):
+ it = ligne + 1
+ SC[ligne, it ] = self.Gamma
+ SC[ligne, self.Nn + it] = -self.Gamma
+ #
+ return Wm, Wc, SC
+
+ def __S3F2022(self):
+ "Scaled Spherical Simplex Set, Papakonstantinou et al. 2022"
+ # Note: W^{(m)}_{i=/=0} = (n + Kappa) / ((n + Lambda) * (n + 1 + Kappa))
+ Winn = 1. / (self.Alpha**2 * (self.Nn + 1. + self.Kappa))
+ Ww = []
+ Ww.append( 0. )
+ for point in range(self.Nn + 1):
+ Ww.append( Winn )
+ # Note: LsLpL = Lambda / (n + Lambda)
+ LsLpL = 1. - self.Nn / (self.Alpha**2 * ( self.Nn + self.Kappa ))
+ Wm = numpy.array( Ww )
+ Wm[0] = LsLpL
+ Wc = numpy.array( Ww )
+ Wc[0] = LsLpL + (1. - self.Alpha**2 + self.Beta)
+ # assert abs(Wm.sum()-1.) < self.Nn*mpr, "S3F ill-conditioned"
+ #
+ SC = numpy.zeros((self.Nn, len(Ww)))
+ for ligne in range(self.Nn):
+ it = ligne + 1
+ q_t = it / math.sqrt( it * (it + 1) * Winn )
+ SC[ligne, 1:it + 1] = -q_t / it
+ SC[ligne, it + 1 ] = q_t
+ #
+ return Wm, Wc, SC
+
+ def __MSS2011(self):
+ "Minimum Set, Menegaz et al. 2011"
+ rho2 = (1 - self.Alpha) / self.Nn
+ Cc = numpy.real(scipy.linalg.sqrtm( numpy.identity(self.Nn) - rho2 ))
+ Ww = self.Alpha * rho2 * scipy.linalg.inv(Cc) @ numpy.ones(self.Nn) @ scipy.linalg.inv(Cc.T)
+ #
+ Wm = Wc = numpy.concatenate((Ww, [self.Alpha]))
+ #
+ # inv(sqrt(W)) = diag(inv(sqrt(W)))
+ SC1an = Cc @ numpy.diag(1. / numpy.sqrt( Ww ))
+ SCnpu = (- numpy.sqrt(rho2) / numpy.sqrt(self.Alpha)) * numpy.ones(self.Nn).reshape((-1, 1))
+ SC = numpy.concatenate((SC1an, SCnpu), axis=1)
+ #
+ return Wm, Wc, SC
+
+ def __5OS2002(self):
+ "Fifth Order Set, Lerner 2002"
+ Ww = []
+ for point in range(2 * self.Nn):
+ Ww.append( (4. - self.Nn) / 18. )
+ for point in range(2 * self.Nn, 2 * self.Nn**2):
+ Ww.append( 1. / 36. )
+ Ww.append( (self.Nn**2 - 7 * self.Nn) / 18. + 1.)
+ Wm = Wc = numpy.array( Ww )
+ #
+ xi1n = numpy.diag( 3. * numpy.ones( self.Nn ) )
+ xi2n = numpy.diag( -3. * numpy.ones( self.Nn ) )
+ #
+ xi3n1 = numpy.zeros((int((self.Nn - 1) * self.Nn / 2), self.Nn), dtype=float)
+ xi3n2 = numpy.zeros((int((self.Nn - 1) * self.Nn / 2), self.Nn), dtype=float)
+ xi4n1 = numpy.zeros((int((self.Nn - 1) * self.Nn / 2), self.Nn), dtype=float)
+ xi4n2 = numpy.zeros((int((self.Nn - 1) * self.Nn / 2), self.Nn), dtype=float)
+ ia = 0
+ for i1 in range(self.Nn - 1):
+ for i2 in range(i1 + 1, self.Nn):
+ xi3n1[ia, i1] = xi3n2[ia, i2] = 3
+ xi3n2[ia, i1] = xi3n1[ia, i2] = -3
+ # --------------------------------
+ xi4n1[ia, i1] = xi4n1[ia, i2] = 3
+ xi4n2[ia, i1] = xi4n2[ia, i2] = -3
+ ia += 1
+ SC = numpy.concatenate((xi1n, xi2n, xi3n1, xi3n2, xi4n1, xi4n2, numpy.zeros((1, self.Nn)))).T
+ #
+ return Wm, Wc, SC
+
+ def nbOfPoints(self):
+ assert self.Nn == self.SC.shape[0], "Size mismatch %i =/= %i"%(self.Nn, self.SC.shape[0])
+ assert self.Wm.size == self.SC.shape[1], "Size mismatch %i =/= %i"%(self.Wm.size, self.SC.shape[1])
+ assert self.Wm.size == self.Wc.size, "Size mismatch %i =/= %i"%(self.Wm.size, self.Wc.size)
+ return self.Wm.size
+
+ def get(self):
+ return self.Wm, self.Wc, self.SC
+
+ def __repr__(self):
+ "x.__repr__() <==> repr(x)"
+ msg = ""
+ msg += " Alpha = %s\n"%self.Alpha
+ msg += " Beta = %s\n"%self.Beta
+ msg += " Kappa = %s\n"%self.Kappa
+ msg += " Lambda = %s\n"%self.Lambda
+ msg += " Gamma = %s\n"%self.Gamma
+ msg += " Wm = %s\n"%self.Wm
+ msg += " Wc = %s\n"%self.Wc
+ msg += " sum(Wm) = %s\n"%numpy.sum(self.Wm)
+ msg += " SC =\n%s\n"%self.SC
+ return msg
+
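+ # Minimal usage sketch (illustrative, with assumed values): for a state of
+ # size 3, the canonical UKF set with Alpha=0.5 can be obtained as
+ #   wsp = GenerateWeightsAndSigmaPoints(Nn=3, EO="State", VariantM="UKF", Alpha=0.5)
+ #   Wm, Wc, SC = wsp.get()
+ # which yields wsp.nbOfPoints() == 7 sigma directions as the columns of SC,
+ # with the mean weights Wm summing to 1.
+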
# ==============================================================================
def GetNeighborhoodTopology( __ntype, __ipop ):
"Renvoi une topologie de connexion pour une population de points"
__topology = [__ipop for __i in __ipop]
elif __ntype in ["RingNeighborhoodWithRadius1", "RingNeighbourhoodWithRadius1", "lbest"]:
__cpop = list(__ipop[-1:]) + list(__ipop) + list(__ipop[:1])
- __topology = [__cpop[__n:__n+3] for __n in range(len(__ipop))]
+ __topology = [__cpop[__n:__n + 3] for __n in range(len(__ipop))]
elif __ntype in ["RingNeighborhoodWithRadius2", "RingNeighbourhoodWithRadius2"]:
__cpop = list(__ipop[-2:]) + list(__ipop) + list(__ipop[:2])
- __topology = [__cpop[__n:__n+5] for __n in range(len(__ipop))]
+ __topology = [__cpop[__n:__n + 5] for __n in range(len(__ipop))]
elif __ntype in ["AdaptativeRandomWith3Neighbors", "AdaptativeRandomWith3Neighbours", "abest"]:
- __cpop = 3*list(__ipop)
- __topology = [[__i]+list(numpy.random.choice(__cpop,3)) for __i in __ipop]
+ __cpop = 3 * list(__ipop)
+ __topology = [[__i] + list(numpy.random.choice(__cpop, 3)) for __i in __ipop]
elif __ntype in ["AdaptativeRandomWith5Neighbors", "AdaptativeRandomWith5Neighbours"]:
- __cpop = 5*list(__ipop)
- __topology = [[__i]+list(numpy.random.choice(__cpop,5)) for __i in __ipop]
+ __cpop = 5 * list(__ipop)
+ __topology = [[__i] + list(numpy.random.choice(__cpop, 5)) for __i in __ipop]
else:
raise ValueError("Swarm topology type unavailable because name \"%s\" is unknown."%__ntype)
return __topology
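# Illustrative example (hypothetical population): for __ipop = [0, 1, 2, 3], the
# "lbest" ring of radius 1 returns [[3, 0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 0]],
# i.e. each particle is connected to itself and to its two ring neighbours.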
"Exprime les indices des noms exclus, en ignorant les absents"
if __ExcludeLocations is None:
__ExcludeIndexes = ()
- elif isinstance(__ExcludeLocations, (list, numpy.ndarray, tuple)) and len(__ExcludeLocations)==0:
+ elif isinstance(__ExcludeLocations, (list, numpy.ndarray, tuple)) and len(__ExcludeLocations) == 0:
__ExcludeIndexes = ()
# ----------
elif __NameOfLocations is None:
raise ValueError("to exclude named locations, initial location name list can not be void and has to have the same length as one state")
else:
raise ValueError(str(e))
- elif isinstance(__NameOfLocations, (list, numpy.ndarray, tuple)) and len(__NameOfLocations)==0:
+ elif isinstance(__NameOfLocations, (list, numpy.ndarray, tuple)) and len(__NameOfLocations) == 0:
try:
__ExcludeIndexes = numpy.asarray(__ExcludeLocations, dtype=int)
except ValueError as e:
__ExcludeIndexes = numpy.asarray(__ExcludeLocations, dtype=int)
except ValueError as e:
if "invalid literal for int() with base 10:" in str(e):
- if len(__NameOfLocations) < 1.e6+1 and len(__ExcludeLocations) > 1500:
+ if len(__NameOfLocations) < 1.e6 + 1 and len(__ExcludeLocations) > 1500:
__Heuristic = True
else:
__Heuristic = False
__NameToIndex = dict(numpy.array((
__NameOfLocations,
range(len(__NameOfLocations))
- )).T)
+ )).T)
__ExcludeIndexes = numpy.asarray([__NameToIndex.get(k, -1) for k in __ExcludeLocations], dtype=int)
#
else:
#
# Ignore the missing names
__ExcludeIndexes = numpy.compress(__ExcludeIndexes > -1, __ExcludeIndexes)
- if len(__ExcludeIndexes)==0: __ExcludeIndexes = ()
+ if len(__ExcludeIndexes) == 0:
+ __ExcludeIndexes = ()
else:
raise ValueError(str(e))
# ----------
# ==============================================================================
def BuildComplexSampleList(
- __SampleAsnUplet,
- __SampleAsExplicitHyperCube,
- __SampleAsMinMaxStepHyperCube,
- __SampleAsMinMaxLatinHyperCube,
- __SampleAsMinMaxSobolSequence,
- __SampleAsIndependantRandomVariables,
- __X0,
- __Seed = None,
- ):
+ __SampleAsnUplet,
+ __SampleAsExplicitHyperCube,
+ __SampleAsMinMaxStepHyperCube,
+ __SampleAsMinMaxLatinHyperCube,
+ __SampleAsMinMaxSobolSequence,
+ __SampleAsIndependantRandomVariables,
+ __X0,
+ __Seed = None ):
# ---------------------------
if len(__SampleAsnUplet) > 0:
sampleList = __SampleAsnUplet
- for i,Xx in enumerate(sampleList):
+ for i, Xx in enumerate(sampleList):
if numpy.ravel(Xx).size != __X0.size:
- raise ValueError("The size %i of the %ith state X in the sample and %i of the checking point Xb are different, they have to be identical."%(numpy.ravel(Xx).size,i+1,__X0.size))
+ raise ValueError("The size %i of the %ith state X in the sample and %i of the checking point Xb are different, they have to be identical."%(numpy.ravel(Xx).size, i + 1, __X0.size))
# ---------------------------
elif len(__SampleAsExplicitHyperCube) > 0:
sampleList = itertools.product(*list(__SampleAsExplicitHyperCube))
# ---------------------------
elif len(__SampleAsMinMaxStepHyperCube) > 0:
coordinatesList = []
- for i,dim in enumerate(__SampleAsMinMaxStepHyperCube):
+ for i, dim in enumerate(__SampleAsMinMaxStepHyperCube):
if len(dim) != 3:
- raise ValueError("For dimension %i, the variable definition \"%s\" is incorrect, it should be [min,max,step]."%(i,dim))
+ raise ValueError("For dimension %i, the variable definition \"%s\" is incorrect, it should be [min,max,step]."%(i, dim))
else:
- coordinatesList.append(numpy.linspace(dim[0],dim[1],1+int((float(dim[1])-float(dim[0]))/float(dim[2]))))
+ coordinatesList.append(numpy.linspace(dim[0], dim[1], 1 + int((float(dim[1]) - float(dim[0])) / float(dim[2]))))
sampleList = itertools.product(*coordinatesList)
# ---------------------------
elif len(__SampleAsMinMaxLatinHyperCube) > 0:
sampleList = []
else:
__spDesc = list(__SampleAsMinMaxLatinHyperCube)
- __nbDime,__nbSamp = map(int, __spDesc.pop()) # Réduction du dernier
+ __nbDime, __nbSamp = map(int, __spDesc.pop()) # Consume the last item
__sample = scipy.stats.qmc.LatinHypercube(
d = len(__spDesc),
seed = numpy.random.default_rng(__Seed),
- )
+ )
__sample = __sample.random(n = __nbSamp)
- __bounds = numpy.array(__spDesc)[:,0:2]
- __l_bounds = __bounds[:,0]
- __u_bounds = __bounds[:,1]
+ __bounds = numpy.array(__spDesc)[:, 0:2]
+ __l_bounds = __bounds[:, 0]
+ __u_bounds = __bounds[:, 1]
sampleList = scipy.stats.qmc.scale(__sample, __l_bounds, __u_bounds)
# ---------------------------
elif len(__SampleAsMinMaxSobolSequence) > 0:
sampleList = []
else:
__spDesc = list(__SampleAsMinMaxSobolSequence)
- __nbDime,__nbSamp = map(int, __spDesc.pop()) # Réduction du dernier
+ __nbDime, __nbSamp = map(int, __spDesc.pop()) # Consume the last item
if __nbDime != len(__spDesc):
warnings.warn("Declared space dimension (%i) is not equal to number of bounds (%i), the last one will be used."%(__nbDime, len(__spDesc)), FutureWarning, stacklevel=50)
__sample = scipy.stats.qmc.Sobol(
d = len(__spDesc),
seed = numpy.random.default_rng(__Seed),
- )
- __sample = __sample.random_base2(m = int(math.log2(__nbSamp))+1)
- __bounds = numpy.array(__spDesc)[:,0:2]
- __l_bounds = __bounds[:,0]
- __u_bounds = __bounds[:,1]
+ )
+ __sample = __sample.random_base2(m = int(math.log2(__nbSamp)) + 1)
+ __bounds = numpy.array(__spDesc)[:, 0:2]
+ __l_bounds = __bounds[:, 0]
+ __u_bounds = __bounds[:, 1]
sampleList = scipy.stats.qmc.scale(__sample, __l_bounds, __u_bounds)
# ---------------------------
elif len(__SampleAsIndependantRandomVariables) > 0:
coordinatesList = []
- for i,dim in enumerate(__SampleAsIndependantRandomVariables):
+ for i, dim in enumerate(__SampleAsIndependantRandomVariables):
if len(dim) != 3:
- raise ValueError("For dimension %i, the variable definition \"%s\" is incorrect, it should be ('distribution',(parameters),length) with distribution in ['normal'(mean,std),'lognormal'(mean,sigma),'uniform'(low,high),'weibull'(shape)]."%(i,dim))
- elif not( str(dim[0]) in ['normal','lognormal','uniform','weibull'] and hasattr(numpy.random,dim[0]) ):
- raise ValueError("For dimension %i, the distribution name \"%s\" is not allowed, please choose in ['normal'(mean,std),'lognormal'(mean,sigma),'uniform'(low,high),'weibull'(shape)]"%(i,dim[0]))
+ raise ValueError("For dimension %i, the variable definition \"%s\" is incorrect, it should be ('distribution',(parameters),length) with distribution in ['normal'(mean,std),'lognormal'(mean,sigma),'uniform'(low,high),'weibull'(shape)]."%(i, dim))
+ elif not ( str(dim[0]) in ['normal', 'lognormal', 'uniform', 'weibull'] \
+ and hasattr(numpy.random, str(dim[0])) ):
+ raise ValueError("For dimension %i, the distribution name \"%s\" is not allowed, please choose in ['normal'(mean,std),'lognormal'(mean,sigma),'uniform'(low,high),'weibull'(shape)]"%(i, str(dim[0])))
else:
- distribution = getattr(numpy.random,str(dim[0]),'normal')
- coordinatesList.append(distribution(*dim[1], size=max(1,int(dim[2]))))
+ distribution = getattr(numpy.random, str(dim[0]), 'normal')
+ coordinatesList.append(distribution(*dim[1], size=max(1, int(dim[2]))))
sampleList = itertools.product(*coordinatesList)
else:
sampleList = iter([__X0,])
return sampleList
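# Illustrative example (hypothetical values): with __SampleAsMinMaxStepHyperCube =
# [[0., 1., 0.5], [10., 20., 5.]], the grid coordinates are [0., 0.5, 1.] and
# [10., 15., 20.], and sampleList enumerates their 9 cartesian product points.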
# ==============================================================================
-def multiXOsteps(selfA, Xb, Y, U, HO, EM, CM, R, B, Q, oneCycle,
- __CovForecast = False):
+def multiXOsteps(
+ selfA, Xb, Y, U, HO, EM, CM, R, B, Q, oneCycle,
+ __CovForecast = False ):
"""
Multi-step forecast with one correction per step (multi-method)
"""
# Initialisation
# --------------
if selfA._parameters["EstimationOf"] == "State":
- if len(selfA.StoredVariables["Analysis"])==0 or not selfA._parameters["nextStep"]:
+ if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
Xn = numpy.asarray(Xb)
- if __CovForecast: Pn = B
+ if __CovForecast:
+ Pn = B
selfA.StoredVariables["Analysis"].store( Xn )
if selfA._toStore("APosterioriCovariance"):
- if hasattr(B,"asfullmatrix"):
+ if hasattr(B, "asfullmatrix"):
selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(Xn.size) )
else:
selfA.StoredVariables["APosterioriCovariance"].store( B )
selfA._setInternalState("seed", numpy.random.get_state())
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
- if __CovForecast: Pn = selfA._getInternalState("Pn")
+ if __CovForecast:
+ Pn = selfA._getInternalState("Pn")
else:
Xn = numpy.asarray(Xb)
- if __CovForecast: Pn = B
+ if __CovForecast:
+ Pn = B
#
- if hasattr(Y,"stepnumber"):
+ if hasattr(Y, "stepnumber"):
duration = Y.stepnumber()
else:
duration = 2
#
# Multi-steps
# -----------
- for step in range(duration-1):
+ for step in range(duration - 1):
selfA.StoredVariables["CurrentStepNumber"].store( len(selfA.StoredVariables["Analysis"]) )
#
- if hasattr(Y,"store"):
- Ynpu = numpy.asarray( Y[step+1] ).reshape((-1,1))
+ if hasattr(Y, "store"):
+ Ynpu = numpy.asarray( Y[step + 1] ).reshape((-1, 1))
else:
- Ynpu = numpy.asarray( Y ).reshape((-1,1))
+ Ynpu = numpy.asarray( Y ).reshape((-1, 1))
#
if U is not None:
- if hasattr(U,"store") and len(U)>1:
- Un = numpy.asarray( U[step] ).reshape((-1,1))
- elif hasattr(U,"store") and len(U)==1:
- Un = numpy.asarray( U[0] ).reshape((-1,1))
+ if hasattr(U, "store") and len(U) > 1:
+ Un = numpy.asarray( U[step] ).reshape((-1, 1))
+ elif hasattr(U, "store") and len(U) == 1:
+ Un = numpy.asarray( U[0] ).reshape((-1, 1))
else:
- Un = numpy.asarray( U ).reshape((-1,1))
+ Un = numpy.asarray( U ).reshape((-1, 1))
else:
Un = None
#
if selfA._parameters["EstimationOf"] == "State":
if __CovForecast:
Mt = EM["Tangent"].asMatrix(Xn)
- Mt = Mt.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Mt = Mt.reshape(Xn.size, Xn.size) # ADAO & check shape
if __CovForecast:
Ma = EM["Adjoint"].asMatrix(Xn)
- Ma = Ma.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Ma = Ma.reshape(Xn.size, Xn.size) # ADAO & check shape
Pn_predicted = Q + Mt @ (Pn @ Ma)
M = EM["Direct"].appliedControledFormTo
- Xn_predicted = M( (Xn, Un) ).reshape((-1,1))
- if CM is not None and "Tangent" in CM and Un is not None: # Attention : si Cm est aussi dans M, doublon !
+ Xn_predicted = M( (Xn, Un) ).reshape((-1, 1))
+ if CM is not None and "Tangent" in CM and Un is not None: # Attention : si Cm est aussi dans M, doublon !
Cm = CM["Tangent"].asMatrix(Xn_predicted)
- Cm = Cm.reshape(Xn.size,Un.size) # ADAO & check shape
- Xn_predicted = Xn_predicted + (Cm @ Un).reshape((-1,1))
- elif selfA._parameters["EstimationOf"] == "Parameters": # No forecast
+ Cm = Cm.reshape(Xn.size, Un.size) # ADAO & check shape
+ Xn_predicted = Xn_predicted + (Cm @ Un).reshape((-1, 1))
+ elif selfA._parameters["EstimationOf"] == "Parameters": # No forecast
# --- > By principle, M = Id, Q = 0
Xn_predicted = Xn
- if __CovForecast: Pn_predicted = Pn
- Xn_predicted = numpy.asarray(Xn_predicted).reshape((-1,1))
+ if __CovForecast:
+ Pn_predicted = Pn
+ Xn_predicted = numpy.asarray(Xn_predicted).reshape((-1, 1))
if selfA._toStore("ForecastState"):
selfA.StoredVariables["ForecastState"].store( Xn_predicted )
if __CovForecast:
- if hasattr(Pn_predicted,"asfullmatrix"):
+ if hasattr(Pn_predicted, "asfullmatrix"):
Pn_predicted = Pn_predicted.asfullmatrix(Xn.size)
else:
- Pn_predicted = numpy.asarray(Pn_predicted).reshape((Xn.size,Xn.size))
+ Pn_predicted = numpy.asarray(Pn_predicted).reshape((Xn.size, Xn.size))
if selfA._toStore("ForecastCovariance"):
selfA.StoredVariables["ForecastCovariance"].store( Pn_predicted )
#
else:
oneCycle(selfA, Xn_predicted, Ynpu, Un, HO, CM, R, B, True)
#
- #--------------------------
+ # --------------------------
Xn = selfA._getInternalState("Xn")
- if __CovForecast: Pn = selfA._getInternalState("Pn")
+ if __CovForecast:
+ Pn = selfA._getInternalState("Pn")
#
return 0
import os, numpy, copy, math
import gzip, bz2, pickle
-from daCore.PlatformInfo import PathManagement ; PathManagement()
+from daCore.PlatformInfo import PathManagement ; PathManagement() # noqa: E702,E203
from daCore.PlatformInfo import has_gnuplot, PlatformInfo
mfp = PlatformInfo().MaximumPrecision()
if has_gnuplot:
__slots__ = (
"__name", "__unit", "__basetype", "__values", "__tags", "__dynamic",
"__g", "__title", "__ltitle", "__pause", "__dataobservers",
- )
- #
+ )
+
def __init__(self, name="", unit="", basetype=str):
"""
name : current name
"""
Store a value with its filtering information.
"""
- if value is None: raise ValueError("Value argument required")
+ if value is None:
+ raise ValueError("Value argument required")
#
self.__values.append(copy.copy(self.__basetype(value)))
self.__tags.append(kwargs)
#
- if self.__dynamic: self.__replots()
+ if self.__dynamic:
+ self.__replots()
__step = len(self.__values) - 1
for hook, parameters, scheduler in self.__dataobservers:
if __step in scheduler:
def __str__(self):
"x.__str__() <==> str(x)"
msg = " Index Value Tags\n"
- for i,v in enumerate(self.__values):
- msg += " i=%05i %10s %s\n"%(i,v,self.__tags[i])
+ for iv, vv in enumerate(self.__values):
+ msg += " i=%05i %10s %s\n"%(iv, vv, self.__tags[iv])
return msg
def __len__(self):
def index(self, value, start=0, stop=None):
"L.index(value, [start, [stop]]) -> integer -- return first index of value."
- if stop is None : stop = len(self.__values)
+ if stop is None:
+ stop = len(self.__values)
return self.__values.index(value, start, stop)
# ---------------------------------------------------------
if tagKey in self.__tags[i]:
if self.__tags[i][tagKey] == kwargs[tagKey]:
__tmp.append( i )
- elif isinstance(kwargs[tagKey],(list,tuple)) and self.__tags[i][tagKey] in kwargs[tagKey]:
+ elif isinstance(kwargs[tagKey], (list, tuple)) and self.__tags[i][tagKey] in kwargs[tagKey]:
__tmp.append( i )
__indexOfFilteredItems = __tmp
- if len(__indexOfFilteredItems) == 0: break
+ if len(__indexOfFilteredItems) == 0:
+ break
return __indexOfFilteredItems
# ---------------------------------------------------------
__indexOfFilteredItems = self.__filteredIndexes(**kwargs)
return [self.__values[i] for i in __indexOfFilteredItems]
- def keys(self, keyword=None , **kwargs):
+ def keys(self, keyword=None, **kwargs):
"D.keys() -> list of D's keys"
__indexOfFilteredItems = self.__filteredIndexes(**kwargs)
__keys = []
__keys.append( None )
return __keys
- def items(self, keyword=None , **kwargs):
+ def items(self, keyword=None, **kwargs):
"D.items() -> list of D's (key, value) pairs, as 2-tuples"
__indexOfFilteredItems = self.__filteredIndexes(**kwargs)
__pairs = []
__indexOfFilteredItems = [item,]
#
# In the case where the output gives the values of an "outputTag"
- if outputTag is not None and isinstance(outputTag,str) :
+ if outputTag is not None and isinstance(outputTag, str):
outputValues = []
for index in __indexOfFilteredItems:
if outputTag in self.__tags[index].keys():
except Exception:
raise TypeError("Base type is incompatible with numpy")
- msds=mses # Mean-Square Deviation (MSD=MSE)
+ msds = mses # Mean-Square Deviation (MSD=MSE)
def rmses(self, _predictor=None):
"""
except Exception:
raise TypeError("Base type is incompatible with numpy")
- rmsds = rmses # Root-Mean-Square Deviation (RMSD=RMSE)
+ rmsds = rmses # Root-Mean-Square Deviation (RMSD=RMSE)
def __preplots(self,
title = "",
ltitle = None,
geometry = "600x400",
persist = False,
- pause = True,
- ):
+ pause = True ):
"Préparation des plots"
#
# Check the availability of the Gnuplot module
raise ImportError("The Gnuplot module is required to plot the object.")
#
# Check and complete the input parameters
- if ltitle is None: ltitle = ""
+ if ltitle is None:
+ ltitle = ""
__geometry = str(geometry)
- __sizespec = (__geometry.split('+')[0]).replace('x',',')
+ __sizespec = (__geometry.split('+')[0]).replace('x', ',')
#
if persist:
Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist '
#
- self.__g = Gnuplot.Gnuplot() # persist=1
- self.__g('set terminal '+Gnuplot.GnuplotOpts.default_term+' size '+__sizespec)
+ self.__g = Gnuplot.Gnuplot() # persist=1
+ self.__g('set terminal ' + Gnuplot.GnuplotOpts.default_term + ' size ' + __sizespec)
self.__g('set style data lines')
self.__g('set grid')
self.__g('set autoscale')
- self.__g('set xlabel "'+str(xlabel)+'"')
- self.__g('set ylabel "'+str(ylabel)+'"')
+ self.__g('set xlabel "' + str(xlabel) + '"')
+ self.__g('set ylabel "' + str(ylabel) + '"')
self.__title = title
self.__ltitle = ltitle
self.__pause = pause
filename = "",
dynamic = False,
persist = False,
- pause = True,
- ):
+ pause = True ):
"""
Return a display of the value at each step, if it is compatible
with a Gnuplot display (so essentially a vector). If
self.__preplots(title, xlabel, ylabel, ltitle, geometry, persist, pause )
if dynamic:
self.__dynamic = True
- if len(self.__values) == 0: return 0
+ if len(self.__values) == 0:
+ return 0
#
# Plot of the requested vector(s)
indexes = []
#
i = -1
for index in indexes:
- self.__g('set title "'+str(title)+' (pas '+str(index)+')"')
+ self.__g('set title "' + str(title) + ' (pas ' + str(index) + ')"')
if isinstance(steps, (list, numpy.ndarray)):
Steps = list(steps)
else:
#
if filename != "":
i += 1
- stepfilename = "%s_%03i.ps"%(filename,i)
+ stepfilename = "%s_%03i.ps"%(filename, i)
if os.path.isfile(stepfilename):
raise ValueError("Error: a file with this name \"%s\" already exists."%stepfilename)
self.__g.hardcopy(filename=stepfilename, color=1)
"""
Display in the case of dynamic monitoring of the variable
"""
- if self.__dynamic and len(self.__values) < 2: return 0
+ if self.__dynamic and len(self.__values) < 2:
+ return 0
#
- self.__g('set title "'+str(self.__title))
+ self.__g('set title "' + str(self.__title))
Steps = list(range(len(self.__values)))
self.__g.plot( Gnuplot.Data( Steps, self.__values, title=self.__ltitle ) )
#
"""
try:
if numpy.version.version >= '1.1.0':
- return numpy.asarray(self.__values).std(ddof=ddof,axis=0).astype('float')
+ return numpy.asarray(self.__values).std(ddof=ddof, axis=0).astype('float')
else:
return numpy.asarray(self.__values).std(axis=0).astype('float')
except Exception:
geometry = "600x400",
filename = "",
persist = False,
- pause = True,
- ):
+ pause = True ):
"""
Return a single display for the whole set of values at each step, if
they are compatible with a Gnuplot display (so essentially
raise ImportError("The Gnuplot module is required to plot the object.")
#
# Check and complete the input parameters
- if ltitle is None: ltitle = ""
+ if ltitle is None:
+ ltitle = ""
if isinstance(steps, (list, numpy.ndarray)):
Steps = list(steps)
else:
Steps = list(range(len(self.__values[0])))
__geometry = str(geometry)
- __sizespec = (__geometry.split('+')[0]).replace('x',',')
+ __sizespec = (__geometry.split('+')[0]).replace('x', ',')
#
if persist:
Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist '
#
- self.__g = Gnuplot.Gnuplot() # persist=1
- self.__g('set terminal '+Gnuplot.GnuplotOpts.default_term+' size '+__sizespec)
+ self.__g = Gnuplot.Gnuplot() # persist=1
+ self.__g('set terminal ' + Gnuplot.GnuplotOpts.default_term + ' size ' + __sizespec)
self.__g('set style data lines')
self.__g('set grid')
self.__g('set autoscale')
- self.__g('set title "'+str(title) +'"')
- self.__g('set xlabel "'+str(xlabel)+'"')
- self.__g('set ylabel "'+str(ylabel)+'"')
+ self.__g('set title "' + str(title) + '"')
+ self.__g('set xlabel "' + str(xlabel) + '"')
+ self.__g('set ylabel "' + str(ylabel) + '"')
#
# Plot of the requested vector(s)
indexes = list(range(len(self.__values)))
- self.__g.plot( Gnuplot.Data( Steps, self.__values[indexes.pop(0)], title=ltitle+" (pas 0)" ) )
+ self.__g.plot( Gnuplot.Data( Steps, self.__values[indexes.pop(0)], title=ltitle + " (step 0)" ) )
for index in indexes:
- self.__g.replot( Gnuplot.Data( Steps, self.__values[index], title=ltitle+" (pas %i)"%index ) )
+ self.__g.replot( Gnuplot.Data( Steps, self.__values[index], title=ltitle + " (step %i)"%index ) )
#
if filename != "":
self.__g.hardcopy(filename=filename, color=1)
# Check of the Scheduler
# -------------------------
maxiter = int( 1e9 )
- if isinstance(Scheduler,int): # Considéré comme une fréquence à partir de 0
+ if isinstance(Scheduler, int): # Considered as a frequency, starting from 0
Schedulers = range( 0, maxiter, int(Scheduler) )
- elif isinstance(Scheduler,range): # Considéré comme un itérateur
+ elif isinstance(Scheduler, range): # Considered as an iterator
Schedulers = Scheduler
- elif isinstance(Scheduler,(list,tuple)): # Considéré comme des index explicites
- Schedulers = [int(i) for i in Scheduler] # map( long, Scheduler )
- else: # Dans tous les autres cas, activé par défaut
+ elif isinstance(Scheduler, (list, tuple)): # Considered as explicit indexes
+ Schedulers = [int(i) for i in Scheduler] # Similar to map( int, Scheduler ) # noqa: E262
+ else: # In all other cases, activated by default
Schedulers = range( 0, maxiter )
#
# Internal storage of the observer in the variable
definition, or a simple string which is the name of the function. If
AllObservers is true, all the registered observers are removed.
"""
- if hasattr(HookFunction,"func_name"):
+ if hasattr(HookFunction, "func_name"):
name = str( HookFunction.func_name )
- elif hasattr(HookFunction,"__name__"):
+ elif hasattr(HookFunction, "__name__"):
name = str( HookFunction.__name__ )
- elif isinstance(HookFunction,str):
+ elif isinstance(HookFunction, str):
name = str( HookFunction )
else:
name = None
#
- i = -1
+ ih = -1
index_to_remove = []
for [hf, hp, hs] in self.__dataobservers:
- i = i + 1
- if name is hf.__name__ or AllObservers: index_to_remove.append( i )
+ ih = ih + 1
+ if name == hf.__name__ or AllObservers:
+ index_to_remove.append( ih )
index_to_remove.reverse()
- for i in index_to_remove:
- self.__dataobservers.pop( i )
+ for ih in index_to_remove:
+ self.__dataobservers.pop( ih )
return len(index_to_remove)
def hasDataObserver(self):
General interface class of the Scheduler/Trigger type
"""
__slots__ = ()
- #
+
def __init__(self,
simplifiedCombo = None,
startTime = 0,
endTime = int( 1e9 ),
timeDelay = 1,
timeUnit = 1,
- frequency = None,
- ):
+ frequency = None ):
pass
# ==============================================================================
in order to keep the meaning of the names clear.
"""
__slots__ = ()
- #
+
def __init__(self, name="", unit="", basetype = float):
Persistence.__init__(self, name, unit, basetype)
Class defining the storage of a single integer (int) value per step.
"""
__slots__ = ()
- #
+
def __init__(self, name="", unit="", basetype = int):
Persistence.__init__(self, name, unit, basetype)
not use this class for heterogeneous data, but rather "OneList".
"""
__slots__ = ()
- #
+
def __init__(self, name="", unit="", basetype = numpy.ravel):
Persistence.__init__(self, name, unit, basetype)
Storage class for a matrix of homogeneous values per step.
"""
__slots__ = ()
- #
+
def __init__(self, name="", unit="", basetype = numpy.array):
Persistence.__init__(self, name, unit, basetype)
Storage class for a matrix of homogeneous values per step.
"""
__slots__ = ()
- #
+
def __init__(self, name="", unit="", basetype = numpy.matrix):
Persistence.__init__(self, name, unit, basetype)
"OneVector".
"""
__slots__ = ()
- #
+
def __init__(self, name="", unit="", basetype = list):
Persistence.__init__(self, name, unit, basetype)
deliberately, and not at all by default.
"""
__slots__ = ()
- #
+
def __init__(self, name="", unit="", basetype = NoType):
Persistence.__init__(self, name, unit, basetype)
be added.
"""
__slots__ = ("__name", "__StoredObjects")
- #
+
def __init__(self, name="", defaults=True):
"""
name : current name
"""
Stockage d'une valeur "value" pour le "step" dans la variable "name".
"""
- if name is None: raise ValueError("Storable object name is required for storage.")
+ if name is None:
+ raise ValueError("Storable object name is required for storage.")
if name not in self.__StoredObjects.keys():
raise ValueError("No such name '%s' exists in storable objects."%name)
self.__StoredObjects[name].store( value=value, **kwargs )
Add into the storable objects a new object, defined by its name,
its Persistence type and its base type at each step.
"""
- if name is None: raise ValueError("Object name is required for adding an object.")
+ if name is None:
+ raise ValueError("Object name is required for adding an object.")
if name in self.__StoredObjects.keys():
raise ValueError("An object with the same name '%s' already exists in storable objects. Choose another one."%name)
if basetype is None:
"""
Return the Persistence-type object bearing the requested name.
"""
- if name is None: raise ValueError("Object name is required for retrieving an object.")
+ if name is None:
+ raise ValueError("Object name is required for retrieving an object.")
if name not in self.__StoredObjects.keys():
raise ValueError("No such name '%s' exists in stored objects."%name)
return self.__StoredObjects[name]
include the usual Persistence methods for this to
work.
"""
- if name is None: raise ValueError("Object name is required for setting an object.")
+ if name is None:
+ raise ValueError("Object name is required for setting an object.")
if name in self.__StoredObjects.keys():
raise ValueError("An object with the same name '%s' already exists in storable objects. Choose another one."%name)
self.__StoredObjects[name] = objet
"""
Remove an object from the list of storable objects.
"""
- if name is None: raise ValueError("Object name is required for retrieving an object.")
+ if name is None:
+ raise ValueError("Object name is required for retrieving an object.")
if name not in self.__StoredObjects.keys():
raise ValueError("No such name '%s' exists in stored objects."%name)
del self.__StoredObjects[name]
usedObjs = []
for k in objs:
try:
- if len(self.__StoredObjects[k]) > 0: usedObjs.append( k )
+ if len(self.__StoredObjects[k]) > 0:
+ usedObjs.append( k )
finally:
pass
objs = usedObjs
Gathering of information about the code and the platform
"""
__slots__ = ()
- #
+
def __init__(self):
"Sans effet"
pass
- #
+
def getName(self):
"Retourne le nom de l'application"
import daCore.version as dav
return dav.name
- #
+
def getVersion(self):
"Retourne le numéro de la version"
import daCore.version as dav
return dav.version
- #
+
def getDate(self):
"Retourne la date de création de la version"
import daCore.version as dav
return dav.date
- #
+
def getYear(self):
"Retourne l'année de création de la version"
import daCore.version as dav
return dav.year
- #
+
def getSystemInformation(self, __prefix=""):
__msg = ""
- __msg += "\n%s%30s : %s" %(__prefix,"platform.system",platform.system())
- __msg += "\n%s%30s : %s" %(__prefix,"sys.platform",sys.platform)
- __msg += "\n%s%30s : %s" %(__prefix,"platform.version",platform.version())
- __msg += "\n%s%30s : %s" %(__prefix,"platform.platform",platform.platform())
- __msg += "\n%s%30s : %s" %(__prefix,"platform.machine",platform.machine())
- if len(platform.processor())>0:
- __msg += "\n%s%30s : %s" %(__prefix,"platform.processor",platform.processor())
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.system", platform.system())
+ __msg += "\n%s%30s : %s"%(__prefix, "sys.platform", sys.platform)
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.version", platform.version())
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.platform", platform.platform())
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.machine", platform.machine())
+ if len(platform.processor()) > 0:
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.processor", platform.processor())
#
if sys.platform.startswith('linux'):
if hasattr(platform, 'linux_distribution'):
- __msg += "\n%s%30s : %s" %(__prefix,
- "platform.linux_distribution",str(platform.linux_distribution()))
+ __msg += "\n%s%30s : %s"%(__prefix,
+ "platform.linux_distribution", str(platform.linux_distribution())) # noqa: E128
elif hasattr(platform, 'dist'):
- __msg += "\n%s%30s : %s" %(__prefix,
- "platform.dist",str(platform.dist()))
+ __msg += "\n%s%30s : %s"%(__prefix,
+ "platform.dist", str(platform.dist())) # noqa: E128
elif sys.platform.startswith('darwin'):
if hasattr(platform, 'mac_ver'):
# https://fr.wikipedia.org/wiki/MacOS
__macosxv10 = {
- '0' : 'Cheetah', '1' : 'Puma', '2' : 'Jaguar',
- '3' : 'Panther', '4' : 'Tiger', '5' : 'Leopard',
- '6' : 'Snow Leopard', '7' : 'Lion', '8' : 'Mountain Lion',
- '9' : 'Mavericks', '10': 'Yosemite', '11': 'El Capitan',
- '12': 'Sierra', '13': 'High Sierra', '14': 'Mojave',
+ '0' : 'Cheetah', '1' : 'Puma', '2' : 'Jaguar', # noqa: E241,E203
+ '3' : 'Panther', '4' : 'Tiger', '5' : 'Leopard', # noqa: E241,E203
+ '6' : 'Snow Leopard', '7' : 'Lion', '8' : 'Mountain Lion', # noqa: E241,E203
+ '9' : 'Mavericks', '10': 'Yosemite', '11': 'El Capitan', # noqa: E241,E203
+ '12': 'Sierra', '13': 'High Sierra', '14': 'Mojave', # noqa: E241,E203
'15': 'Catalina',
- }
+ }
for key in __macosxv10:
__details = platform.mac_ver()[0].split('.')
- if (len(__details)>0) and (__details[1] == key):
- __msg += "\n%s%30s : %s" %(__prefix,
- "platform.mac_ver",str(platform.mac_ver()[0]+"(" + __macosxv10[key]+")"))
+ if (len(__details) > 1) and (__details[1] == key):
+ __msg += "\n%s%30s : %s"%(__prefix,
+ "platform.mac_ver", str(platform.mac_ver()[0] + "(" + __macosxv10[key] + ")")) # noqa: E128
__macosxv11 = {
- '11': 'Big Sur', '12': 'Monterey', '13': 'Ventura',
+ '11': 'Big Sur', '12': 'Monterey', '13': 'Ventura', # noqa: E241
'14': 'Sonoma',
- }
+ }
for key in __macosxv11:
__details = platform.mac_ver()[0].split('.')
if (__details[0] == key):
- __msg += "\n%s%30s : %s" %(__prefix,
- "platform.mac_ver",str(platform.mac_ver()[0]+"(" + __macosxv11[key]+")"))
+ __msg += "\n%s%30s : %s"%(__prefix,
+ "platform.mac_ver", str(platform.mac_ver()[0] + "(" + __macosxv11[key] + ")")) # noqa: E128
elif hasattr(platform, 'dist'):
- __msg += "\n%s%30s : %s" %(__prefix,"platform.dist",str(platform.dist()))
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.dist", str(platform.dist()))
elif os.name == 'nt':
- __msg += "\n%s%30s : %s" %(__prefix,"platform.win32_ver",platform.win32_ver()[1])
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.win32_ver", platform.win32_ver()[1])
#
__msg += "\n"
- __msg += "\n%s%30s : %s" %(__prefix,"platform.python_implementation",platform.python_implementation())
- __msg += "\n%s%30s : %s" %(__prefix,"sys.executable",sys.executable)
- __msg += "\n%s%30s : %s" %(__prefix,"sys.version",sys.version.replace('\n',''))
- __msg += "\n%s%30s : %s" %(__prefix,"sys.getfilesystemencoding",str(sys.getfilesystemencoding()))
- if sys.version_info.major == 3 and sys.version_info.minor < 11: # Python 3.10
- __msg += "\n%s%30s : %s" %(__prefix,"locale.getdefaultlocale",str(locale.getdefaultlocale()))
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.python_implementation", platform.python_implementation())
+ __msg += "\n%s%30s : %s"%(__prefix, "sys.executable", sys.executable)
+ __msg += "\n%s%30s : %s"%(__prefix, "sys.version", sys.version.replace('\n', ''))
+ __msg += "\n%s%30s : %s"%(__prefix, "sys.getfilesystemencoding", str(sys.getfilesystemencoding()))
+ if sys.version_info.major == 3 and sys.version_info.minor < 11: # Python 3.10
+ __msg += "\n%s%30s : %s"%(__prefix, "locale.getdefaultlocale", str(locale.getdefaultlocale()))
else:
- __msg += "\n%s%30s : %s" %(__prefix,"locale.getlocale",str(locale.getlocale()))
+ __msg += "\n%s%30s : %s"%(__prefix, "locale.getlocale", str(locale.getlocale()))
__msg += "\n"
- __msg += "\n%s%30s : %s" %(__prefix,"os.cpu_count",os.cpu_count())
+ __msg += "\n%s%30s : %s"%(__prefix, "os.cpu_count", os.cpu_count())
if hasattr(os, 'sched_getaffinity'):
- __msg += "\n%s%30s : %s" %(__prefix,"len(os.sched_getaffinity(0))",len(os.sched_getaffinity(0)))
+ __msg += "\n%s%30s : %s"%(__prefix, "len(os.sched_getaffinity(0))", len(os.sched_getaffinity(0)))
else:
- __msg += "\n%s%30s : %s" %(__prefix,"len(os.sched_getaffinity(0))","Unsupported on this platform")
+ __msg += "\n%s%30s : %s"%(__prefix, "len(os.sched_getaffinity(0))", "Unsupported on this platform")
__msg += "\n"
- __msg += "\n%s%30s : %s" %(__prefix,"platform.node",platform.node())
- __msg += "\n%s%30s : %s" %(__prefix,"socket.getfqdn",socket.getfqdn())
- __msg += "\n%s%30s : %s" %(__prefix,"os.path.expanduser",os.path.expanduser('~'))
+ __msg += "\n%s%30s : %s"%(__prefix, "platform.node", platform.node())
+ __msg += "\n%s%30s : %s"%(__prefix, "socket.getfqdn", socket.getfqdn())
+ __msg += "\n%s%30s : %s"%(__prefix, "os.path.expanduser", os.path.expanduser('~'))
return __msg
- #
+
def getApplicationInformation(self, __prefix=""):
__msg = ""
- __msg += "\n%s%30s : %s" %(__prefix,"ADAO version",self.getVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "ADAO version", self.getVersion())
__msg += "\n"
- __msg += "\n%s%30s : %s" %(__prefix,"Python version",self.getPythonVersion())
- __msg += "\n%s%30s : %s" %(__prefix,"Numpy version",self.getNumpyVersion())
- __msg += "\n%s%30s : %s" %(__prefix,"Scipy version",self.getScipyVersion())
- __msg += "\n%s%30s : %s" %(__prefix,"NLopt version",self.getNloptVersion())
- __msg += "\n%s%30s : %s" %(__prefix,"MatplotLib version",self.getMatplotlibVersion())
- __msg += "\n%s%30s : %s" %(__prefix,"GnuplotPy version",self.getGnuplotVersion())
- __msg += "\n%s%30s : %s" %(__prefix,"Sphinx version",self.getSphinxVersion())
- __msg += "\n%s%30s : %s" %(__prefix,"Fmpy version",self.getFmpyVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "Python version", self.getPythonVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "Numpy version", self.getNumpyVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "Scipy version", self.getScipyVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "NLopt version", self.getNloptVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "MatplotLib version", self.getMatplotlibVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "GnuplotPy version", self.getGnuplotVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "Sphinx version", self.getSphinxVersion())
+ __msg += "\n%s%30s : %s"%(__prefix, "Fmpy version", self.getFmpyVersion())
return __msg
- #
+
def getAllInformation(self, __prefix="", __title="Whole system information"):
__msg = ""
- if len(__title)>0:
- __msg += "\n"+"="*80+"\n"+__title+"\n"+"="*80+"\n"
+ if len(__title) > 0:
+ __msg += "\n" + "=" * 80 + "\n" + __title + "\n" + "=" * 80 + "\n"
__msg += self.getSystemInformation(__prefix)
__msg += "\n"
__msg += self.getApplicationInformation(__prefix)
return __msg
- #
+
def getPythonVersion(self):
"Retourne la version de python disponible"
- return ".".join([str(x) for x in sys.version_info[0:3]]) # map(str,sys.version_info[0:3]))
- #
+ return ".".join([str(x) for x in sys.version_info[0:3]]) # map(str,sys.version_info[0:3]))
+
def getNumpyVersion(self):
"Retourne la version de numpy disponible"
import numpy.version
return numpy.version.version
- #
+
def getScipyVersion(self):
"Retourne la version de scipy disponible"
if has_scipy:
else:
__version = "0.0.0"
return __version
- #
+
def getMatplotlibVersion(self):
"Retourne la version de matplotlib disponible"
if has_matplotlib:
else:
__version = "0.0.0"
return __version
- #
+
def getGnuplotVersion(self):
"Retourne la version de gnuplotpy disponible"
if has_gnuplot:
else:
__version = "0.0"
return __version
- #
+
def getSphinxVersion(self):
"Retourne la version de sphinx disponible"
if has_sphinx:
else:
__version = "0.0.0"
return __version
- #
+
def getNloptVersion(self):
"Retourne la version de nlopt disponible"
if has_nlopt:
nlopt.version_major(),
nlopt.version_minor(),
nlopt.version_bugfix(),
- )
+ )
else:
__version = "0.0.0"
return __version
- #
+
def getSdfVersion(self):
"Retourne la version de sdf disponible"
if has_sdf:
else:
__version = "0.0.0"
return __version
- #
+
def getFmpyVersion(self):
"Retourne la version de fmpy disponible"
if has_fmpy:
else:
__version = "0.0.0"
return __version
- #
+
def getCurrentMemorySize(self):
"Retourne la taille mémoire courante utilisée"
return 1
- #
+
def MaximumPrecision(self):
- "Retourne la precision maximale flottante pour Numpy"
+ "Retourne la précision maximale flottante pour Numpy"
import numpy
try:
numpy.array([1.,], dtype='float128')
except Exception:
mfp = 'float64'
return mfp
- #
+
def MachinePrecision(self):
# Alternative without an extra module:
# eps = 2.38
# old_eps = eps
# eps = (1.0 + eps/2) - 1.0
return sys.float_info.epsilon
- #
+
def __str__(self):
import daCore.version as dav
- return "%s %s (%s)"%(dav.name,dav.version,dav.date)
+ return "%s %s (%s)"%(dav.name, dav.version, dav.date)
# ==============================================================================
# Import tests of system modules
- Do not accept a "numpy.ndarray" as iterable
- Do not accept as iterable through hasattr(__sequence, "__iter__")
"""
- if isinstance( __sequence, (list, tuple, map, dict) ):
+ if isinstance( __sequence, (list, tuple, map, dict) ):
__isOk = True
- elif type(__sequence).__name__ in ('generator','range'):
+ elif type(__sequence).__name__ in ('generator', 'range'):
__isOk = True
elif "_iterator" in type(__sequence).__name__:
__isOk = True
raise TypeError("Not iterable or unkown input type%s: %s"%(__header, type(__sequence),))
return __isOk
-def date2int( __date, __lang="FR" ):
+def date2int( __date: str, __lang="FR" ):
"""
Fallback function, pure conversion: dd/mm/yy hh:mm ---> int(yyyymmddhhmm)
"""
__date = __date.strip()
if __date.count('/') == 2 and __date.count(':') == 0 and __date.count(' ') == 0:
- d,m,y = __date.split("/")
- __number = (10**4)*int(y)+(10**2)*int(m)+int(d)
+ d, m, y = __date.split("/")
+ __number = (10**4) * int(y) + (10**2) * int(m) + int(d)
elif __date.count('/') == 2 and __date.count(':') == 1 and __date.count(' ') > 0:
part1, part2 = __date.split()
- d,m,y = part1.strip().split("/")
- h,n = part2.strip().split(":")
- __number = (10**8)*int(y)+(10**6)*int(m)+(10**4)*int(d)+(10**2)*int(h)+int(n)
+ d, m, y = part1.strip().split("/")
+ h, n = part2.strip().split(":")
+ __number = (10**8) * int(y) + (10**6) * int(m) + (10**4) * int(d) + (10**2) * int(h) + int(n)
else:
- raise ValueError("Cannot convert \"%s\" as a D/M/Y H:M date"%d)
+ raise ValueError("Cannot convert \"%s\" as a D/M/Y H:M date"%__date)
return __number
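# Illustrative examples: date2int("25/12/23") returns 231225, and
# date2int("25/12/23 14:30") returns 2312251430.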
-def vfloat(__value :numpy.ndarray):
+def vfloat(__value: numpy.ndarray):
"""
Conversion to float of a size-1 vector of arbitrary dimensions
"""
- if hasattr(__value,"size") and __value.size == 1:
+ if hasattr(__value, "size") and __value.size == 1:
return float(__value.flat[0])
- elif isinstance(__value, (float,int)):
+ elif isinstance(__value, (float, int)):
return float(__value)
else:
raise ValueError("Error in converting multiple float values from array when waiting for only one")
representation of a vector into a list of strings representing
floats
"""
- for s in ("array", "matrix", "list", "tuple", "[", "]", "(", ")"):
- __strvect = __strvect.replace(s,"") # Rien
- for s in (",", ";"):
- __strvect = __strvect.replace(s," ") # Blanc
+ for st in ("array", "matrix", "list", "tuple", "[", "]", "(", ")"):
+ __strvect = __strvect.replace(st, "") # Rien
+ for st in (",", ";"):
+ __strvect = __strvect.replace(st, " ") # Blanc
return __strvect.split()
def strmatrix2liststr( __strvect ):
representation of a matrix into a list of strings representing
floats
"""
- for s in ("array", "matrix", "list", "tuple", "[", "(", "'", '"'):
- __strvect = __strvect.replace(s,"") # Rien
- __strvect = __strvect.replace(","," ") # Blanc
- for s in ("]", ")"):
- __strvect = __strvect.replace(s,";") # "]" et ")" par ";"
- __strvect = re.sub(r';\s*;',r';',__strvect)
- __strvect = __strvect.rstrip(";") # Après ^ et avant v
+ for st in ("array", "matrix", "list", "tuple", "[", "(", "'", '"'):
+ __strvect = __strvect.replace(st, "") # Rien
+ __strvect = __strvect.replace(",", " ") # Blanc
+ for st in ("]", ")"):
+ __strvect = __strvect.replace(st, ";") # "]" et ")" par ";"
+ __strvect = re.sub(r';\s*;', r';', __strvect)
+ __strvect = __strvect.rstrip(";") # Après ^ et avant v
__strmat = [__l.split() for __l in __strvect.split(";")]
return __strmat
def checkFileNameConformity( __filename, __warnInsteadOfPrint=True ):
if sys.platform.startswith("win") and len(__filename) > 256:
__conform = False
- __msg = (" For some shared or older file systems on Windows, a file "+\
- "name longer than 256 characters can lead to access problems."+\
- "\n The name of the file in question is the following:"+\
+ __msg = (
+ " For some shared or older file systems on Windows, a file " + \
+ "name longer than 256 characters can lead to access problems." + \
+ "\n The name of the file in question is the following:" + \
"\n %s")%(__filename,)
- if __warnInsteadOfPrint: logging.warning(__msg)
- else: print(__msg)
+ if __warnInsteadOfPrint:
+ logging.warning(__msg)
+ else:
+ print(__msg)
else:
__conform = True
#
def checkFileNameImportability( __filename, __warnInsteadOfPrint=True ):
if str(__filename).count(".") > 1:
__conform = False
- __msg = (" The file name contains %i point(s) before the extension "+\
- "separator, which can potentially lead to problems when "+\
- "importing this file into Python, as it can then be recognized "+\
- "as a sub-module (generating a \"ModuleNotFoundError\"). If it "+\
- "is intentional, make sure that there is no module with the "+\
- "same name as the part before the first point, and that there is "+\
- "no \"__init__.py\" file in the same directory."+\
- "\n The name of the file in question is the following:"+\
- "\n %s")%(int(str(__filename).count(".")-1), __filename)
- if __warnInsteadOfPrint is None: pass
- elif __warnInsteadOfPrint: logging.warning(__msg)
- else: print(__msg)
+ __msg = (
+ " The file name contains %i point(s) before the extension " + \
+ "separator, which can potentially lead to problems when " + \
+ "importing this file into Python, as it can then be recognized " + \
+ "as a sub-module (generating a \"ModuleNotFoundError\"). If it " + \
+ "is intentional, make sure that there is no module with the " + \
+ "same name as the part before the first point, and that there is " + \
+ "no \"__init__.py\" file in the same directory." + \
+ "\n The name of the file in question is the following:" + \
+ "\n %s")%(int(str(__filename).count(".") - 1), __filename)
+ if __warnInsteadOfPrint is None:
+ pass
+ elif __warnInsteadOfPrint:
+ logging.warning(__msg)
+ else:
+ print(__msg)
else:
__conform = True
#
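
A minimal usage sketch of the two checks above, assuming both return the
``__conform`` flag computed in the elided tail of each function::

    checkFileNameConformity("model.py")        # conform: short enough everywhere
    checkFileNameImportability("my_model.py")  # conform: a single dot, importable
    checkFileNameImportability("my.model.py")  # warns: one extra dot before the extension
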
Mise à jour du path système pour les répertoires d'outils
"""
__slots__ = ("__paths")
- #
+
def __init__(self):
"Déclaration des répertoires statiques"
- parent = os.path.abspath(os.path.join(os.path.dirname(__file__),".."))
+ parent = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
self.__paths = {}
- self.__paths["daNumerics"] = os.path.join(parent,"daNumerics")
+ self.__paths["daNumerics"] = os.path.join(parent, "daNumerics")
#
for v in self.__paths.values():
- if os.path.isdir(v): sys.path.insert(0, v )
+ if os.path.isdir(v):
+                sys.path.insert(0, v)
#
# Conserve en unique exemplaire chaque chemin
sys.path = uniq( sys.path )
del parent
- #
+
def getpaths(self):
"""
Renvoie le dictionnaire des chemins ajoutés
_proc_status = '/proc/%d/status' % os.getpid()
_memo_status = '/proc/meminfo'
_scale = {
- 'o' : 1.0, # Multiples SI de l'octet
- 'ko' : 1.e3,
- 'Mo' : 1.e6,
- 'Go' : 1.e9,
- 'kio': 1024.0, # Multiples binaires de l'octet
- 'Mio': 1024.0*1024.0,
- 'Gio': 1024.0*1024.0*1024.0,
- 'B': 1.0, # Multiples binaires du byte=octet
- 'kB' : 1024.0,
- 'MB' : 1024.0*1024.0,
- 'GB' : 1024.0*1024.0*1024.0,
- }
- #
+ 'o' : 1.0, # Multiples SI de l'octet # noqa: E203
+ 'ko' : 1.e3, # noqa: E203
+ 'Mo' : 1.e6, # noqa: E203
+ 'Go' : 1.e9, # noqa: E203
+ 'kio': 1024.0, # Multiples binaires de l'octet # noqa: E203
+ 'Mio': 1024.0 * 1024.0, # noqa: E203
+ 'Gio': 1024.0 * 1024.0 * 1024.0, # noqa: E203
+ 'B' : 1.0, # Multiples binaires du byte=octet # noqa: E203
+ 'kB' : 1024.0, # noqa: E203
+ 'MB' : 1024.0 * 1024.0, # noqa: E203
+ 'GB' : 1024.0 * 1024.0 * 1024.0, # noqa: E203
+ }
+
def __init__(self):
"Sans effet"
pass
- #
+
def _VmA(self, VmKey, unit):
"Lecture des paramètres mémoire de la machine"
try:
v = t.read()
t.close()
except IOError:
- return 0.0 # non-Linux?
- i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
- v = v[i:].split(None, 3) # whitespace
+ return 0.0 # non-Linux?
+ i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
+ v = v[i:].split(None, 3) # whitespace
if len(v) < 3:
- return 0.0 # invalid format?
+ return 0.0 # invalid format?
# convert Vm value to bytes
mem = float(v[1]) * self._scale[v[2]]
return mem / self._scale[unit]
- #
+
def getAvailablePhysicalMemory(self, unit="o"):
"Renvoie la mémoire physique utilisable en octets"
return self._VmA('MemTotal:', unit)
- #
+
def getAvailableSwapMemory(self, unit="o"):
"Renvoie la mémoire swap utilisable en octets"
return self._VmA('SwapTotal:', unit)
- #
+
def getAvailableMemory(self, unit="o"):
"Renvoie la mémoire totale (physique+swap) utilisable en octets"
return self._VmA('MemTotal:', unit) + self._VmA('SwapTotal:', unit)
- #
+
def getUsableMemory(self, unit="o"):
"""Renvoie la mémoire utilisable en octets
Rq : il n'est pas sûr que ce décompte soit juste...
"""
return self._VmA('MemFree:', unit) + self._VmA('SwapFree:', unit) + \
- self._VmA('Cached:', unit) + self._VmA('SwapCached:', unit)
- #
+ self._VmA('Cached:', unit) + self._VmA('SwapCached:', unit)
+
def _VmB(self, VmKey, unit):
"Lecture des paramètres mémoire du processus"
try:
v = t.read()
t.close()
except IOError:
- return 0.0 # non-Linux?
- i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
- v = v[i:].split(None, 3) # whitespace
+ return 0.0 # non-Linux?
+ i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
+ v = v[i:].split(None, 3) # whitespace
if len(v) < 3:
- return 0.0 # invalid format?
+ return 0.0 # invalid format?
# convert Vm value to bytes
mem = float(v[1]) * self._scale[v[2]]
return mem / self._scale[unit]
- #
+
def getUsedMemory(self, unit="o"):
"Renvoie la mémoire résidente utilisée en octets"
return self._VmB('VmRSS:', unit)
- #
+
def getVirtualMemory(self, unit="o"):
"Renvoie la mémoire totale utilisée en octets"
return self._VmB('VmSize:', unit)
- #
+
def getUsedStacksize(self, unit="o"):
"Renvoie la taille du stack utilisé en octets"
return self._VmB('VmStk:', unit)
- #
+
def getMaxUsedMemory(self, unit="o"):
"Renvoie la mémoire résidente maximale mesurée"
return self._VmB('VmHWM:', unit)
- #
+
def getMaxVirtualMemory(self, unit="o"):
"Renvoie la mémoire totale maximale mesurée"
return self._VmB('VmPeak:', unit)
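
To see what these ``/proc``-based accessors report (Linux only, since they
read ``/proc``; the class name is not visible in this hunk and is therefore an
assumption), a hedged sketch::

    m = SystemUsage()   # hypothetical name for the class defined above
    print(m.getAvailablePhysicalMemory("Mo"))  # MemTotal, in SI megaoctets
    print(m.getUsedMemory("Mio"))              # VmRSS of this process, in binary units
    print(m.getMaxVirtualMemory("Go"))         # VmPeak, in SI gigaoctets
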
Store and retrieve the data for C: internal class
"""
__slots__ = ("__part", "__styles", "__content")
- #
+
def __init__(self, part="default"):
self.__part = str(part)
self.__styles = []
self.__content = []
- #
+
def append(self, content, style="p", position=-1):
if position == -1:
self.__styles.append(style)
self.__styles.insert(position, style)
self.__content.insert(position, content)
return 0
- #
+
def get_styles(self):
return self.__styles
- #
+
def get_content(self):
return self.__content
Store and retrieve the data for C: internal class
"""
__slots__ = ("__document")
- #
+
def __init__(self, part='default'):
self.__document = {}
self.__document[part] = _ReportPartM__(part)
- #
- def append(self, content, style="p", position=-1, part='default'):
+
+ def append(self, content, style="p", position=-1, part='default'):
if part not in self.__document:
self.__document[part] = _ReportPartM__(part)
self.__document[part].append(content, style, position)
return 0
- #
+
def get_styles(self):
- op = list(self.__document.keys()) ; op.sort()
+ op = list(self.__document.keys())
+ op.sort()
return [self.__document[k].get_styles() for k in op]
- #
+
def get_content(self):
- op = list(self.__document.keys()) ; op.sort()
+ op = list(self.__document.keys())
+ op.sort()
return [self.__document[k].get_content() for k in op]
- #
+
def clear(self):
self.__init__()
__slots__ = ()
#
m = _ReportM__()
- #
+
def append(self, content="", style="p", position=-1, part="default"):
return self.m.append(content, style, position, part)
- #
+
def retrieve(self):
st = self.m.get_styles()
ct = self.m.get_content()
return st, ct
- #
+
def clear(self):
self.m.clear()
"""
__slots__ = ("c")
#
- default_filename="report.txt"
- #
+ default_filename = "report.txt"
+
def __init__(self, c):
self.c = c
- #
+
def save(self, filename=None):
if filename is None:
filename = self.default_filename
_filename = os.path.abspath(filename)
#
- h = self.get()
+ _inside = self.get()
fid = open(_filename, 'w')
- fid.write(h)
+ fid.write(_inside)
fid.close()
return filename, _filename
- #
+
def retrieve(self):
return self.c.retrieve()
- #
+
def __str__(self):
return self.get()
- #
+
def close(self):
del self.c
return 0
"""
__slots__ = ()
#
- default_filename="report.html"
+ default_filename = "report.html"
tags = {
- "oli":"li",
- "uli":"li",
- }
- #
+ "oli": "li",
+ "uli": "li",
+ }
+
def get(self):
st, ct = self.retrieve()
inuLi, inoLi = False, False
pg = "<html>\n<head>"
pg += "\n<title>Report in HTML</title>"
pg += "\n</head>\n<body>"
- for k,ps in enumerate(st):
- pc = ct[k]
+ for ks, ps in enumerate(st):
+ pc = ct[ks]
try:
ii = ps.index("title")
title = pc[ii]
- pg += "%s\n%s\n%s"%('<hr noshade><h1 align="center">',title,'</h1><hr noshade>')
+ pg += "%s\n%s\n%s"%('<hr noshade><h1 align="center">', title, '</h1><hr noshade>')
except Exception:
pass
- for i,s in enumerate(ps):
- c = pc[i]
- if s == "uli" and not inuLi:
+ for ip, sp in enumerate(ps):
+ cp = pc[ip]
+ if sp == "uli" and not inuLi:
pg += "\n<ul>"
inuLi = True
- elif s == "oli" and not inoLi:
+ elif sp == "oli" and not inoLi:
pg += "\n<ol>"
inoLi = True
- elif s != "uli" and inuLi:
+ elif sp != "uli" and inuLi:
pg += "\n</ul>"
inuLi = False
- elif s != "oli" and inoLi:
+ elif sp != "oli" and inoLi:
pg += "\n</ol>"
inoLi = False
- elif s == "title":
+ elif sp == "title":
continue
- for t in self.tags:
- if s == t: s = self.tags[t]
- pg += "\n<%s>%s</%s>"%(s,c,s)
+ for tp in self.tags:
+ if sp == tp:
+ sp = self.tags[tp]
+ pg += "\n<%s>%s</%s>"%(sp, cp, sp)
pg += "\n</body>\n</html>"
return pg
"""
__slots__ = ()
#
- default_filename="report.rst"
+ default_filename = "report.rst"
tags = {
- "p":["\n\n",""],
- "uli":["\n - ",""],
- "oli":["\n #. ",""],
- }
+ "p": ["\n\n", ""],
+ "uli": ["\n - ", ""],
+ "oli": ["\n #. ", ""],
+ }
titles = {
- # "title":["=","="],
- "h1":["","-"],
- "h2":["","+"],
- "h3":["","*"],
- }
+ "h1": ["", "-"],
+ "h2": ["", "+"],
+ "h3": ["", "*"],
+ }
translation = {
- "<b>":"**",
- "<i>":"*",
- "</b>":"**",
- "</i>":"*",
- }
- #
+ "<b>": "**",
+ "<i>": "*",
+ "</b>": "**",
+ "</i>": "*",
+ }
+
def get(self):
st, ct = self.retrieve()
inuLi, inoLi = False, False
pg = ""
- for k,ps in enumerate(st):
- pc = ct[k]
+ for ks, ps in enumerate(st):
+ pc = ct[ks]
try:
ii = ps.index("title")
title = pc[ii]
- pg += "%s\n%s\n%s"%("="*80,title,"="*80)
+ pg += "%s\n%s\n%s"%("=" * 80, title, "=" * 80)
except Exception:
pass
- for i,s in enumerate(ps):
- c = pc[i]
- if s == "uli" and not inuLi:
+ for ip, sp in enumerate(ps):
+ cp = pc[ip]
+ if sp == "uli" and not inuLi:
pg += "\n"
inuLi = True
- elif s == "oli" and not inoLi:
+ elif sp == "oli" and not inoLi:
pg += "\n"
inoLi = True
- elif s != "uli" and inuLi:
+ elif sp != "uli" and inuLi:
pg += "\n"
inuLi = False
- elif s != "oli" and inoLi:
+ elif sp != "oli" and inoLi:
pg += "\n"
inoLi = False
- for t in self.translation:
- c = c.replace(t,self.translation[t])
- if s in self.titles.keys():
- pg += "\n%s\n%s\n%s"%(self.titles[s][0]*len(c),c,self.titles[s][1]*len(c))
- elif s in self.tags.keys():
- pg += "%s%s%s"%(self.tags[s][0],c,self.tags[s][1])
+ for tp in self.translation:
+ cp = cp.replace(tp, self.translation[tp])
+ if sp in self.titles.keys():
+ pg += "\n%s\n%s\n%s"%(self.titles[sp][0] * len(cp), cp, self.titles[sp][1] * len(cp))
+ elif sp in self.tags.keys():
+ pg += "%s%s%s"%(self.tags[sp][0], cp, self.tags[sp][1])
pg += "\n"
return pg
#
__slots__ = ()
#
- default_filename="report.txt"
+ default_filename = "report.txt"
tags = {
- "p":["\n",""],
- "uli":["\n - ",""],
- "oli":["\n - ",""],
- }
+ "p": ["\n", ""],
+ "uli": ["\n - ", ""],
+ "oli": ["\n - ", ""],
+ }
titles = {
- # "title":["=","="],
- "h1":["",""],
- "h2":["",""],
- "h3":["",""],
- }
+ "h1": ["", ""],
+ "h2": ["", ""],
+ "h3": ["", ""],
+ }
translation = {
- "<b>":"",
- "<i>":"",
- "</b>":"",
- "</i>":"",
- }
- #
+ "<b>": "",
+ "<i>": "",
+ "</b>": "",
+ "</i>": "",
+ }
+
def get(self):
st, ct = self.retrieve()
inuLi, inoLi = False, False
pg = ""
- for k,ps in enumerate(st):
- pc = ct[k]
+ for ks, ps in enumerate(st):
+ pc = ct[ks]
try:
ii = ps.index("title")
title = pc[ii]
- pg += "%s\n%s\n%s"%("="*80,title,"="*80)
+ pg += "%s\n%s\n%s"%("=" * 80, title, "=" * 80)
except Exception:
pass
- for i,s in enumerate(ps):
- c = pc[i]
- if s == "uli" and not inuLi:
+ for ip, sp in enumerate(ps):
+ cp = pc[ip]
+ if sp == "uli" and not inuLi:
inuLi = True
- elif s == "oli" and not inoLi:
+ elif sp == "oli" and not inoLi:
inoLi = True
- elif s != "uli" and inuLi:
+ elif sp != "uli" and inuLi:
inuLi = False
- elif s != "oli" and inoLi:
+ elif sp != "oli" and inoLi:
inoLi = False
- for t in self.translation:
- c = c.replace(t,self.translation[t])
- if s in self.titles.keys():
- pg += "\n%s\n%s\n%s"%(self.titles[s][0]*len(c),c,self.titles[s][1]*len(c))
- elif s in self.tags.keys():
- pg += "\n%s%s%s"%(self.tags[s][0],c,self.tags[s][1])
+ for tp in self.translation:
+ cp = cp.replace(tp, self.translation[tp])
+ if sp in self.titles.keys():
+                        pg += "\n%s\n%s\n%s"%(self.titles[sp][0] * len(cp), cp, self.titles[sp][1] * len(cp))
+ elif sp in self.tags.keys():
+ pg += "\n%s%s%s"%(self.tags[sp][0], cp, self.tags[sp][1])
pg += "\n"
return pg
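
Taken together, the storage and viewer classes above can be exercised along
the following lines (class names here are hypothetical placeholders, since
these hunks only show the mangled internals)::

    r = ReportStorage()                  # hypothetical front-end exposing append/retrieve
    r.append("My report title", style="title")
    r.append("A first paragraph", style="p")
    r.append("item one", style="uli")
    r.append("item two", style="uli")
    v = ReportViewInPlainTxt(r)          # hypothetical plain-text viewer
    print(v)                             # __str__ delegates to get()
    v.save("report.txt")                 # returns (filename, absolute path)
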
# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-
"""
Modèles généraux pour les observers, le post-processing.
"""
__author__ = "Jean-Philippe ARGAUD"
__all__ = ["ObserverTemplates"]
+# flake8: noqa
+
import numpy
# ==============================================================================
(Template)
"""
__slots__ = ("__preferedLanguage", "__values", "__order")
- #
+
def __init__( self, language = "fr_FR" ):
self.__preferedLanguage = language
self.__values = {}
self.__order = -1
- #
+
def store( self, name = None, content = None, fr_FR = "", en_EN = "", order = "next" ):
"D.store(k, c, fr_FR, en_EN, o) -> Store template k and its main characteristics"
if name is None or content is None:
self.__order = int(order)
self.__values[str(name)] = {
'content': str(content),
- 'fr_FR' : str(fr_FR),
- 'en_EN' : str(en_EN),
- 'order' : int(self.__order),
- }
- #
+ 'fr_FR' : str(fr_FR), # noqa: E203
+ 'en_EN' : str(en_EN), # noqa: E203
+ 'order' : int(self.__order), # noqa: E203
+ }
+
def keys(self):
"D.keys() -> list of D's keys"
__keys = sorted(self.__values.keys())
return __keys
- #
+
def __contains__(self, name):
"D.__contains__(k) -> True if D has a key k, else False"
return name in self.__values
- #
+
def __len__(self):
"x.__len__() <==> len(x)"
return len(self.__values)
- #
+
def __getitem__(self, name=None ):
"x.__getitem__(y) <==> x[y]"
return self.__values[name]['content']
- #
+
def getdoc(self, name = None, lang = "fr_FR"):
"D.getdoc(k, l) -> Return documentation of key k in language l"
- if lang not in self.__values[name]: lang = self.__preferedLanguage
+ if lang not in self.__values[name]:
+ lang = self.__preferedLanguage
return self.__values[name][lang]
- #
+
def keys_in_presentation_order(self):
"D.keys_in_presentation_order() -> list of D's keys in presentation order"
__orders = []
- for k in self.keys():
- __orders.append( self.__values[k]['order'] )
+ for ik in self.keys():
+ __orders.append( self.__values[ik]['order'] )
__reorder = numpy.array(__orders).argsort()
return list(numpy.array(self.keys())[__reorder])
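
A hedged usage sketch of this class (the ``TemplateStorage`` name is taken
from the instantiations later in this patch)::

    ts = TemplateStorage(language="en_EN")
    ts.store(name="Demo", content="print(var[-1])",
             fr_FR="Imprime la valeur", en_EN="Print the value", order="next")
    "Demo" in ts                  # True, via __contains__
    ts["Demo"]                    # 'print(var[-1])'
    ts.getdoc("Demo", "en_EN")    # 'Print the value'
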
fr_FR = "Imprime sur la sortie standard la valeur courante de la variable",
en_EN = "Print on standard output the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueAndIndexPrinter",
content = """print(str(info)+(" index %i:"%(len(var)-1))+" "+str(var[-1]))""",
fr_FR = "Imprime sur la sortie standard la valeur courante de la variable, en ajoutant son index",
en_EN = "Print on standard output the current value of the variable, adding its index",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueSeriePrinter",
content = """print(str(info)+" "+str(var[:]))""",
fr_FR = "Imprime sur la sortie standard la série des valeurs de la variable",
en_EN = "Print on standard output the value series of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueSaver",
content = """import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
fr_FR = "Enregistre la valeur courante de la variable dans un fichier du répertoire '/tmp' nommé 'value...txt' selon le nom de la variable et l'étape d'enregistrement",
en_EN = "Save the current value of the variable in a file of the '/tmp' directory named 'value...txt' from the variable name and the saving step",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueSerieSaver",
content = """import numpy, re\nv=numpy.array(var[:], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
fr_FR = "Enregistre la série des valeurs de la variable dans un fichier du répertoire '/tmp' nommé 'value...txt' selon le nom de la variable et l'étape",
en_EN = "Save the value series of the variable in a file of the '/tmp' directory named 'value...txt' from the variable name and the saving step",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValuePrinterAndSaver",
content = """import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nprint(str(info)+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
fr_FR = "Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur courante de la variable",
en_EN = "Print on standard output and, in the same time save in a file of the '/tmp' directory, the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueIndexPrinterAndSaver",
content = """import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nprint(str(info)+(" index %i:"%(len(var)-1))+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
fr_FR = "Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur courante de la variable, en ajoutant son index",
en_EN = "Print on standard output and, in the same time save in a file of the '/tmp' directory, the current value of the variable, adding its index",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueSeriePrinterAndSaver",
content = """import numpy, re\nv=numpy.array(var[:], ndmin=1)\nprint(str(info)+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
fr_FR = "Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp', la série des valeurs de la variable",
en_EN = "Print on standard output and, in the same time, save in a file of the '/tmp' directory, the value series of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueGnuPlotter",
content = """import numpy, Gnuplot\nv=numpy.array(var[-1], ndmin=1)\nglobal ifig, gp\ntry:\n ifig+=1\n gp('set style data lines')\nexcept:\n ifig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set style data lines')\ngp('set title \"%s (Figure %i)\"'%(info,ifig))\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
fr_FR = "Affiche graphiquement avec Gnuplot la valeur courante de la variable",
en_EN = "Graphically plot with Gnuplot the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueSerieGnuPlotter",
content = """import numpy, Gnuplot\nv=numpy.array(var[:], ndmin=1)\nglobal ifig, gp\ntry:\n ifig+=1\n gp('set style data lines')\nexcept:\n ifig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set style data lines')\ngp('set title \"%s (Figure %i)\"'%(info,ifig))\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
fr_FR = "Affiche graphiquement avec Gnuplot la série des valeurs de la variable",
en_EN = "Graphically plot with Gnuplot the value series of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValuePrinterAndGnuPlotter",
content = """print(str(info)+' '+str(var[-1]))\nimport numpy, Gnuplot\nv=numpy.array(var[-1], ndmin=1)\nglobal ifig,gp\ntry:\n ifig+=1\n gp('set style data lines')\nexcept:\n ifig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set style data lines')\ngp('set title \"%s (Figure %i)\"'%(info,ifig))\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
fr_FR = "Imprime sur la sortie standard et, en même temps, affiche graphiquement avec Gnuplot la valeur courante de la variable",
en_EN = "Print on standard output and, in the same time, graphically plot with Gnuplot the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueSeriePrinterAndGnuPlotter",
content = """print(str(info)+' '+str(var[:]))\nimport numpy, Gnuplot\nv=numpy.array(var[:], ndmin=1)\nglobal ifig,gp\ntry:\n ifig+=1\n gp('set style data lines')\nexcept:\n ifig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set style data lines')\ngp('set title \"%s (Figure %i)\"'%(info,ifig))\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
fr_FR = "Imprime sur la sortie standard et, en même temps, affiche graphiquement avec Gnuplot la série des valeurs de la variable",
en_EN = "Print on standard output and, in the same time, graphically plot with Gnuplot the value series of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValuePrinterSaverAndGnuPlotter",
content = """print(str(info)+' '+str(var[-1]))\nimport numpy, re\nv=numpy.array(var[-1], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)\nimport Gnuplot\nglobal ifig,gp\ntry:\n ifig+=1\n gp('set style data lines')\nexcept:\n ifig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set style data lines')\ngp('set title \"%s (Figure %i)\"'%(info,ifig))\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
fr_FR = "Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp' et affiche graphiquement la valeur courante de la variable",
    en_EN = "Print on standard output and, in the same time, save in a file of the '/tmp' directory and graphically plot the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueSeriePrinterSaverAndGnuPlotter",
content = """print(str(info)+' '+str(var[:]))\nimport numpy, re\nv=numpy.array(var[:], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)\nimport Gnuplot\nglobal ifig,gp\ntry:\n ifig+=1\n gp('set style data lines')\nexcept:\n ifig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set style data lines')\ngp('set title \"%s (Figure %i)\"'%(info,ifig))\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
fr_FR = "Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp' et affiche graphiquement la série des valeurs de la variable",
    en_EN = "Print on standard output and, in the same time, save in a file of the '/tmp' directory and graphically plot the value series of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueMean",
content = """import numpy\nprint(str(info)+' '+str(numpy.nanmean(var[-1])))""",
fr_FR = "Imprime sur la sortie standard la moyenne de la valeur courante de la variable",
en_EN = "Print on standard output the mean of the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueStandardError",
content = """import numpy\nprint(str(info)+' '+str(numpy.nanstd(var[-1])))""",
fr_FR = "Imprime sur la sortie standard l'écart-type de la valeur courante de la variable",
en_EN = "Print on standard output the standard error of the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueVariance",
content = """import numpy\nprint(str(info)+' '+str(numpy.nanvar(var[-1])))""",
fr_FR = "Imprime sur la sortie standard la variance de la valeur courante de la variable",
en_EN = "Print on standard output the variance of the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueL2Norm",
content = """import numpy\nv = numpy.ravel( var[-1] )\nprint(str(info)+' '+str(float( numpy.linalg.norm(v) )))""",
fr_FR = "Imprime sur la sortie standard la norme L2 de la valeur courante de la variable",
en_EN = "Print on standard output the L2 norm of the current value of the variable",
order = "next",
- )
+)
ObserverTemplates.store(
name = "ValueRMS",
content = """import numpy\nv = numpy.ravel( var[-1] )\nprint(str(info)+' '+str(float( numpy.sqrt((1./v.size)*numpy.dot(v,v)) )))""",
fr_FR = "Imprime sur la sortie standard la racine de la moyenne des carrés (RMS), ou moyenne quadratique, de la valeur courante de la variable",
en_EN = "Print on standard output the root mean square (RMS), or quadratic mean, of the current value of the variable",
order = "next",
- )
+)
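
Each stored ``content`` above is a Python snippet that expects the variables
``var`` and ``info`` to be bound by the caller; as an illustration only (this
is not how ADAO wires observers internally), one such template can be
exercised standalone::

    tpl = ObserverTemplates["ValueSeriePrinter"]   # content: print(str(info)+" "+str(var[:]))
    var, info = [1.0, 2.0, 3.0], "CurrentState"
    exec(tpl)   # prints: CurrentState [1.0, 2.0, 3.0]
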
# ==============================================================================
UserPostAnalysisTemplates = TemplateStorage()
fr_FR = "Imprime sur la sortie standard la valeur optimale",
en_EN = "Print on standard output the optimal value",
order = "next",
- )
+)
UserPostAnalysisTemplates.store(
name = "AnalysisSaver",
content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')[-1]\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
fr_FR = "Enregistre la valeur optimale dans un fichier du répertoire '/tmp' nommé 'analysis.txt'",
en_EN = "Save the optimal value in a file of the '/tmp' directory named 'analysis.txt'",
order = "next",
- )
+)
UserPostAnalysisTemplates.store(
name = "AnalysisPrinterAndSaver",
content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')[-1]\nprint('Analysis',xa)\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
fr_FR = "Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur optimale",
en_EN = "Print on standard output and, in the same time save in a file of the '/tmp' directory, the optimal value",
order = "next",
- )
+)
UserPostAnalysisTemplates.store(
name = "AnalysisSeriePrinter",
content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nprint('Analysis',xa)""",
fr_FR = "Imprime sur la sortie standard la série des valeurs optimales",
en_EN = "Print on standard output the optimal value series",
order = "next",
- )
+)
UserPostAnalysisTemplates.store(
name = "AnalysisSerieSaver",
content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
fr_FR = "Enregistre la série des valeurs optimales dans un fichier du répertoire '/tmp' nommé 'analysis.txt'",
en_EN = "Save the optimal value series in a file of the '/tmp' directory named 'analysis.txt'",
order = "next",
- )
+)
UserPostAnalysisTemplates.store(
name = "AnalysisSeriePrinterAndSaver",
content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nprint('Analysis',xa)\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
fr_FR = "Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la série des valeurs optimales",
en_EN = "Print on standard output and, in the same time save in a file of the '/tmp' directory, the optimal value series",
order = "next",
- )
+)
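
The post-analysis snippets expect the calculation case to be bound as ``ADD``;
a hedged standalone exercise, mocking ``ADD`` with a minimal stand-in::

    tpl = UserPostAnalysisTemplates["AnalysisSeriePrinter"]
    class _FakeADD:                       # minimal stand-in for the real ADAO object
        def get(self, k):
            return [[1.0, 2.0, 3.0]]      # a one-state analysis series
    ADD = _FakeADD()
    exec(tpl)   # prints "# Post-analysis" then "Analysis [[1.0, 2.0, 3.0]]"
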
# ==============================================================================
if __name__ == "__main__":
date = "mercredi 22 mai 2024, 22:22:22 (UTC+0100)"
longname = name + ", a module for Data Assimilation and Optimization"
-cata = "V" + version.replace(".","_")
+cata = "V" + version.replace(".", "_")
__version__ = version
print(msg+"\n"+"="*len(msg))
verify_similarity_of_algo_results(("3DVAR", "Blue", "ExtendedBlue", "4DVAR", "DerivativeFreeOptimization"), Xa, 5.e-5)
verify_similarity_of_algo_results(("LinearLeastSquares", "NonLinearLeastSquares"), Xa, 5.e-7)
- verify_similarity_of_algo_results(("KalmanFilter", "ExtendedKalmanFilter", "UnscentedKalmanFilter"), Xa, 1.e-14)
+ verify_similarity_of_algo_results(("KalmanFilter", "ExtendedKalmanFilter", "UnscentedKalmanFilter"), Xa, 5.e-10)
verify_similarity_of_algo_results(("KalmanFilter", "EnsembleKalmanFilter"), Xa, 2.e-1)
print(" Les resultats obtenus sont corrects.")
print("")
msg = "Tests des ecarts attendus :"
print(msg+"\n"+"="*len(msg))
verify_similarity_of_algo_results(("3DVAR", "Blue", "ExtendedBlue", "4DVAR", "DerivativeFreeOptimization"), Xa, 5.e-5)
- verify_similarity_of_algo_results(("KalmanFilter", "ExtendedKalmanFilter", "UnscentedKalmanFilter"), Xa, 2.e-14)
+ verify_similarity_of_algo_results(("KalmanFilter", "ExtendedKalmanFilter", "UnscentedKalmanFilter"), Xa, 5.e-10)
verify_similarity_of_algo_results(("KalmanFilter", "EnsembleKalmanFilter"), Xa, 2e-1)
print(" Les resultats obtenus sont corrects.")
print("")
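
For reference, a hedged sketch of what a similarity check like
``verify_similarity_of_algo_results`` may compute; the real helper is defined
elsewhere in the test, and ``Xa`` is assumed to map algorithm names to their
analysis vectors::

    import numpy
    def verify_similarity_of_algo_results(algos, Xa, tolerance):
        "Relative gap of every listed algorithm to the first one"
        ref = numpy.ravel(Xa[algos[0]])
        for algo in algos[1:]:
            gap = numpy.linalg.norm(numpy.ravel(Xa[algo]) - ref) / numpy.linalg.norm(ref)
            assert gap <= tolerance, "%s vs %s: %.2e > %.2e"%(algo, algos[0], gap, tolerance)
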