Advanced uses
-------------
+#. :ref:`subsection_tui_advanced_ex11`
+#. :ref:`subsection_tui_advanced_ex12`
#. :ref:`section_advanced_convert_JDC`
#. :ref:`section_advanced_YACS_tui`
#. :ref:`section_advanced_R`
optimal estimate of the inaccessible true value of a system state, possibly
over time. It uses information coming from experimental measurements or
observations, and from numerical *a priori* models, including information about
-their errors. Parts of the framework are also known under the names of
-*calibration*, *adjustment*, *state estimation*, *parameter estimation*,
-*parameter adjustment*, *inverse problems*, *inverse methods*, *Bayesian
-estimation*, *optimal interpolation*, *mathematical regularization*,
-*meta-heuristics for optimization*, *model reduction*, *data smoothing*, etc.
+their errors. Some methods that are part of the framework are also known under
+the names of
+*adjustment*,
+*calibration*,
+*state estimation*,
+*parameter estimation*,
+*parameter adjustment*,
+*inverse problems*,
+*inverse methods*,
+*inversion*,
+*Bayesian estimation*,
+*optimal interpolation*,
+*optimal learning*,
+*mathematical regularization*,
+*meta-heuristics* for optimization,
+*model reduction*,
+*assimilation in reduced space*,
+*data smoothing*,
+etc.
More details can be found in the section :ref:`section_theory`. The ADAO module
currently offers more than one hundred different algorithmic methods and allows
the study of about 400 distinct applied problems.
.. include:: snippets/SocialAcceleration.rst
+.. include:: snippets/StoreInitialState.rst
+
StoreSupplementaryCalculations
.. index:: single: StoreSupplementaryCalculations
When the assimilation explicitly establishes a **temporal iterative
process**, as in state data assimilation, **the first observation is not used
- but must be present in the data description of a ADAO case**. By convention,
+ but must be present in the data description of an ADAO case**. By convention,
 it is therefore considered to be available at the same time as the background
 value, and does not lead to a correction at that time. The numbering of the
observations starts at 0 by convention, so it is only from number 1 that the
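This convention can be sketched schematically (a purely illustrative loop with made-up values and gain, not the ADAO internals): observation number 0 is attached to the background time and yields no correction, and corrections are applied only from number 1 onwards.

```python
# Schematic illustration of the observation numbering convention:
# observation 0 is attached to the background time and yields no correction;
# corrections are applied from observation 1 onwards.
observations = [2.0, 2.5, 3.0, 3.5]     # y_0 .. y_3 (hypothetical series)
state = 1.0                             # background, valid at the time of y_0
gain = 0.5                              # fixed, purely illustrative gain

corrections = 0
for k, y in enumerate(observations):
    if k == 0:
        continue                        # y_0 must be present but is unused
    state = state + gain * (y - state)  # correction at steps 1, 2, 3
    corrections += 1

print(corrections)  # one correction fewer than the number of observations
```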
Templates are given hereafter as :ref:`subsection_r_o_v_Template`. In all cases,
the user's post-processing has, in its namespace, a variable whose name is
"*ADD*" and whose only available method is named ``get``. The arguments of this
-method are an output information name, as described in the
+method are an output information name, as described in an
:ref:`subsection_r_o_v_Inventaire`.
For example, to have the optimal state after a data assimilation or optimization
ADAO Study report
================================================================================
-Summary build with ADAO version 9.13.0
+Summary build with ADAO version 9.14.0
- AlgorithmParameters command has been set with values:
Algorithm = '3DVAR'
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+from matplotlib import pyplot as plt
+from numpy import array, set_printoptions
+from adao import adaoBuilder
+set_printoptions(precision=4, floatmode='fixed')
+#
+#-------------------------------------------------------------------------------
+#
+case = adaoBuilder.New()
+case.set( 'AlgorithmParameters',
+ Algorithm='3DVAR',
+ Parameters = {
+ "StoreSupplementaryCalculations":[
+ "CostFunctionJ",
+ "CurrentState",
+ "InnovationAtCurrentState",
+ ],
+ }
+)
+case.set( 'Background', Vector=[0, 1, 2] )
+case.set( 'BackgroundError', ScalarSparseMatrix=1.0 )
+case.set( 'Observation', Vector=array([0.5, 1.5, 2.5]) )
+case.set( 'ObservationError', DiagonalSparseMatrix='1 1 1' )
+case.set( 'ObservationOperator', Matrix='1 0 0;0 2 0;0 0 3' )
+case.set( 'Observer',
+ Variable="CurrentState",
+ Template="ValuePrinter",
+ Info=" Current state:",
+)
+#
+print("Displays current state values, at each step:")
+case.execute()
+print("")
+#
+#-------------------------------------------------------------------------------
+#
+print("Calculation-measurement deviation (or error) indicators")
+print(" (display only first 3 steps)")
+print("")
+CalculMeasureErrors = case.get("InnovationAtCurrentState")
+#
+print("===> Maximum error between calculations and measurements, at each step:")
+print(" ",array(
+ CalculMeasureErrors.maxs()
+ [0:3] ))
+print("===> Minimum error between calculations and measurements, at each step:")
+print(" ",array(
+ CalculMeasureErrors.mins()
+ [0:3] ))
+print("===> Norm of the error between calculation and measurement, at each step:")
+print(" ",array(
+ CalculMeasureErrors.norms()
+ [0:3] ))
+print("===> Mean absolute error (MAE) between calculations and measurements, at each step:")
+print(" ",array(
+ CalculMeasureErrors.maes()
+ [0:3] ))
+print("===> Mean square error (MSE) between calculations and measurements, at each step:")
+print(" ",array(
+ CalculMeasureErrors.mses()
+ [0:3] ))
+print("===> Root mean square error (RMSE) between calculations and measurements, at each step:")
+print(" ",array(
+ CalculMeasureErrors.rmses()
+ [0:3] ))
+#
+#-------------------------------------------------------------------------------
+#
+plt.rcParams['figure.figsize'] = (8, 12)
+#
+plt.figure()
+plt.suptitle('Indicators built on the current value of the calculation-measurement deviation (or error)\n', fontweight='bold')
+plt.subplot(611)
+plt.plot(CalculMeasureErrors.maxs(), 'bx--', label='Indicator at current step')
+plt.ylabel('Maximum (a.u.)')
+plt.legend()
+plt.subplot(612)
+plt.plot(CalculMeasureErrors.mins(), 'bx--', label='Indicator at current step')
+plt.ylabel('Minimum (a.u.)')
+plt.legend()
+plt.subplot(613)
+plt.plot(CalculMeasureErrors.norms(), 'bx-', label='Indicator at current step')
+plt.ylabel('Norm (a.u.)')
+plt.legend()
+plt.subplot(614)
+plt.plot(CalculMeasureErrors.maes(), 'kx-', label='Indicator at current step')
+plt.ylabel('MAE (a.u.)')
+plt.legend()
+plt.subplot(615)
+plt.plot(CalculMeasureErrors.mses(), 'gx-', label='Indicator at current step')
+plt.ylabel('MSE (a.u.)')
+plt.legend()
+plt.subplot(616)
+plt.plot(CalculMeasureErrors.rmses(), 'rx-', label='Indicator at current step')
+plt.ylabel('RMSE (a.u.)')
+plt.legend()
+plt.xlabel('Calculation step of the quantity (step number or rank)')
+plt.tight_layout()
+plt.savefig("tui_example_12.png")
--- /dev/null
+Displays current state values, at each step:
+ Current state: [0.0000 1.0000 2.0000]
+ Current state: [0.0474 0.9053 1.0056]
+ Current state: [0.0905 0.8492 0.9461]
+ Current state: [0.1529 0.7984 0.9367]
+ Current state: [0.2245 0.7899 0.9436]
+ Current state: [0.2508 0.8005 0.9486]
+ Current state: [0.2500 0.7998 0.9502]
+ Current state: [0.2500 0.8000 0.9500]
+ Current state: [0.2500 0.8000 0.9500]
+
+Calculation-measurement deviation (or error) indicators
+ (display only first 3 steps)
+
+===> Maximum error between calculations and measurements, at each step:
+ [0.5000 0.4526 0.4095]
+===> Minimum error between calculations and measurements, at each step:
+ [-3.5000 -0.5169 -0.3384]
+===> Norm of the error between calculation and measurement, at each step:
+ [3.5707 0.7540 0.5670]
+===> Mean absolute error (MAE) between calculations and measurements, at each step:
+ [1.5000 0.4267 0.3154]
+===> Mean square error (MSE) between calculations and measurements, at each step:
+ [4.2500 0.1895 0.1072]
+===> Root mean square error (RMSE) between calculations and measurements, at each step:
+ [2.0616 0.4353 0.3274]
Example of TXT file for "*Observation*" variable in "*DataFile*" ::
- # Fichier TXT à séparateur espace
# TXT file with space delimiter
# =============================
- # Ligne de commentaires quelconques débutant par #
- # Ligne suivante réservée au nommage des variables
+ # Comment line beginning with #
+ # The next line is dedicated to variable naming
Alpha1 Observation Alpha2
0.1234 5.6789 9.0123
1.2345 2.3456 3.
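For illustration only, such a file can be read with NumPy (a sketch assuming exactly four comment lines before the naming line; this is not the ADAO "*DataFile*" reader itself):

```python
import io

import numpy as np

# Hypothetical content mirroring the documented layout: a comment block,
# one variable-naming line, then whitespace-separated values.
txt = """# TXT file with space delimiter
# =============================
# Comment line beginning with #
# The next line is dedicated to variable naming
Alpha1 Observation Alpha2
0.1234 5.6789 9.0123
1.2345 2.3456 3.
"""

# skip_header jumps over the comment block so that the naming line is the
# first line seen; names=True then reads the column names from it.
data = np.genfromtxt(io.StringIO(txt), names=True, skip_header=4)
print(data["Observation"])  # the column selected by its name
```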
named ``get``, of the variable "*ADD*" of the post-processing in the graphical
interface, or of the case in the textual interface. The input variables, available
to the user at the output in order to facilitate the writing of post-processing
-procedures, are described in the :ref:`subsection_r_o_v_Inventaire`.
+procedures, are described in an :ref:`subsection_r_o_v_Inventaire`.
**Permanent outputs (non conditional)**
running calculations, it is strongly recommended to revert to supporting tool
versions within the range described below.
-.. csv-table:: Support tool verification intervals for ADAO
+.. csv-table:: Verified version intervals of the support tools for ADAO
:header: "Tool", "Minimal version", "Reached version"
:widths: 20, 10, 10
+ :align: center
- Python, 3.6.5, 3.12.3
- Numpy, 1.14.3, 1.26.4
- Scipy, 0.19.1, 1.14.0
- MatplotLib, 2.2.2, 3.8.4
+ Python, 3.6.5, 3.12.6
+ Numpy, 1.14.3, 2.1.2
+ Scipy, 0.19.1, 1.14.1
+ MatplotLib, 2.2.2, 3.9.2
GnuplotPy, 1.8, 1.8
- NLopt, 2.4.2, 2.7.1
+ NLopt, 2.4.2, 2.8.0
+ FMPy, 0.3.20, 0.3.20
purposes only, knowing that, in case of doubt, the SALOME version sheet
[Salome]_ is the official validation version.
-.. csv-table:: Validation versions of support tools for ADAO
+.. csv-table:: Validation versions of the support tools for ADAO
:header: "Tool", "Version"
:widths: 20, 10
+ :align: center
ADAO, |release|
EFICAS, |release|
- *None*
+ *None* (messages or statistics are nevertheless displayed)
--- /dev/null
+.. index:: single: StoreInitialState
+
+StoreInitialState
+ *Boolean value*. This variable defines whether the algorithm's initial state
+ is stored (with True) or not (with False, by default) as the first state in
+ the iterative sequence of found states. This makes algorithmic iterative
+ storage identical to temporal iterative storage (as, for example, in
+ Kalman filters).
+
+ Example :
+ ``{"StoreInitialState":False}``
execution entry of the menu) is the following:
.. literalinclude:: scripts/tui_example_01.res
+ :language: none
Detailed setup of an ADAO TUI calculation case
+++++++++++++++++++++++++++++++++++++++++++++++
 visualization. Its argument is the name of a variable "*Concept*", and it
 returns the quantity as a list (even if there is only one instance) of this
 base variable. To know the list of variables and to use them, the user has to refer
- to the :ref:`subsection_r_o_v_Inventaire` and more generally to the
+ to an :ref:`subsection_r_o_v_Inventaire` and more generally to the
:ref:`section_ref_output_variables` and to the individual documentations of
the algorithms.
The saving or loading of a calculation case deals with quantities and actions
that are linked by the previous commands, except for case-external operations
-(such as, for example, post-processing that can be developped after the
-calculation cas). The registered or loaded commands remain fully compatible
+(such as, for example, post-processing that can be developed after the case
+calculation). The registered or loaded commands remain fully compatible
with these Python external case operations.
.. index:: single: load
one of the commands establishing the current calculation case. Some formats
are only available as input or as output.
-Obtain information on the case, the computation or the system
-+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+Getting information about the case, the calculation or the system
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-It's easy to obtain **aggregate information on the study case** as defined by
-the user, by using Python's "*print*" command directly on the case, at any
-stage during its completion. For example:
+There are various ways to obtain global information about the calculation
+case, its execution, or the system on which a case is run.
-.. literalinclude:: scripts/tui_example_07.py
- :language: python
+*print* (*case*)
+ It's easy to obtain **aggregate information on the study case** as defined
+ by the user, by using Python's "*print*" command directly on the case, at
+ any stage during its construction. For example:
-which result is here:
+ .. literalinclude:: scripts/tui_example_07.py
+ :language: python
-.. literalinclude:: scripts/tui_example_07.res
+ whose result is here:
-.. index:: single: callinfo
+ .. literalinclude:: scripts/tui_example_07.res
+ :language: none
-**Synthetic information on the number of calls to operator computations** can
-be dynamically obtained with the "**callinfo()**" command. These operator
-computations are those defined by the user in an ADAO case, for the observation
-and evolution operators. It is used after the calculation has been performed in
-the ADAO case, bearing in mind that the result of this command is simply empty
-when no calculation has been performed:
-::
+.. index:: single: callinfo
- from adao import adaoBuilder
- case = adaoBuilder.New()
- ...
- case.execute()
- print(case.callinfo())
+**callinfo** ()
+ **Synthetic information on the number of calls to operator
+ calculations** can be dynamically obtained with the "**callinfo()**"
+ command. These operator calculations are those defined by the user in an
+ ADAO case, for the observation and evolution operators. It is used after
+ the case calculation has been executed, bearing in mind that the result of
+ this command is simply empty when no calculation has been performed:
+ ::
+
+ from adao import adaoBuilder
+ case = adaoBuilder.New()
+ ...
+ case.execute()
+ print(case.callinfo())
.. index:: single: sysinfo
-Synthetic **system information** can be obtained with the "**sysinfo()**"
-command, present in every calculation case. It dynamically returns system
-information and details of Python modules useful for ADAO. It is used as
-follows:
-::
+**sysinfo** ()
+ **Synthetic system information** can be obtained with the "**sysinfo()**"
+ command, present in every ADAO calculation case. It dynamically returns system
+ information and details of Python modules useful for ADAO. It is used as
+ follows:
+ ::
- from adao import adaoBuilder
- case = adaoBuilder.New()
- print(case.sysinfo())
+ from adao import adaoBuilder
+ case = adaoBuilder.New()
+ print(case.sysinfo())
.. _subsection_tui_advanced:
We propose here more comprehensive examples of ADAO TUI calculation, by giving
the purpose of the example and a set of commands that can achieve this goal.
+.. _subsection_tui_advanced_ex11:
+
Independent holding of the results of a calculation case
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The command set execution gives the following results:
.. literalinclude:: scripts/tui_example_11.res
+ :language: none
As it should be in twin experiments, when we trust mainly in observations, it
is found that we get correctly the parameters that were used to artificially
build the observations.
+.. _subsection_tui_advanced_ex12:
+
+Some common numerical indicators: norm, RMS, MSE and RMSE...
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+The numerical quantities obtained from an ADAO calculation are often vectors
+(such as the analysis :math:`\mathbf{x}^a`) or matrices (such as the analysis
+covariance :math:`\mathbf{A}`). They are requested by the user through the
+standard "*StoreSupplementaryCalculations*" variable of the ADAO case
+algorithm. These quantities are available at each step of an iterative
+algorithm, and therefore take the form of a series of vectors, or a series of
+matrices.
+
+These objects support special methods for computing commonly used indicators.
+The methods are named by the name of the indicator followed by "*s*" to note
+that they apply to a series of elementary objects, and that they themselves
+return a series of values.
+
+Note: some indicators are intended to qualify, for example, a "*value
+increment*", a "*value deviation*" or a "*value difference*", rather than a
+"*value*" itself. However, nothing computationally prevents these indicators
+from being evaluated on any given quantity, so it is up to the user to check
+that the indicator they request is used as intended.
+
+.. index:: single: means
+
+**means** ()
+ Average of the quantity values, available at each step.
+
+.. index:: single: stds
+
+**stds** ()
+ Standard deviation of the quantity values, available at each step.
+
+.. index:: single: sums
+
+**sums** ()
+ Sum of the quantity values, available at each step.
+
+.. index:: single: mins
+
+**mins** ()
+ Minimum of the quantity values, available at each step.
+
+.. index:: single: maxs
+
+**maxs** ()
+ Maximum of the quantity values, available at each step.
+
+.. index:: single: norms
+
+**norms** (*_ord=None*)
+ Norm of the quantity, available at each step (*_ord*: see
+ *numpy.linalg.norm*).
+
+.. index:: single: traces
+
+**traces** (*offset=0*)
+ Trace of the quantity, available at each step (*offset*: see
+ *numpy.trace*).
+
+.. index:: single: maes
+.. index:: single: Mean Absolute Error (MAE)
+
+**maes** (*predictor=None*)
+ Mean absolute error (**MAE**). This indicator is computed as the average of
+ the absolute deviations of the quantity from the predictor, and is
+ available at each step. If the predictor is not specified, this indicator
+ theoretically applies only to an increment or a difference.
+
+.. index:: single: mses
+.. index:: single: msds
+.. index:: single: Mean-Square Error (MSE)
+.. index:: single: Mean-Square Deviation (MSD)
+
+**mses** (*predictor=None*) or **msds** (*predictor=None*)
+ Mean square error (**MSE**) or mean square deviation (**MSD**). This
+ indicator is computed as the mean of the squared deviations of the quantity
+ from the predictor, and is available at each step. If the predictor is not
+ specified, this indicator theoretically applies only to an increment or a
+ difference.
+
+.. index:: single: rmses
+.. index:: single: rmsds
+.. index:: single: Root-Mean-Square Error (RMSE)
+.. index:: single: Root-Mean-Square Deviation (RMSD)
+.. index:: single: Root-Mean-Square (RMS)
+
+**rmses** (*predictor=None*) or **rmsds** (*predictor=None*)
+ Root-mean-square error (**RMSE**) or root-mean-square deviation (**RMSD**).
+ This indicator is calculated as the root mean square of the deviations of
+ the quantity from the predictor, and is available at each step. If the
+ predictor is not specified, this indicator theoretically applies only to an
+ increment or a difference. In that case, it is an **RMS** of the
+ quantity.
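As a check on these definitions, the first-step values of the 3DVAR example used in this chapter can be recomputed with plain NumPy (a sketch using the data of that example; the formulas below are the standard element-wise definitions, not the ADAO implementation):

```python
import numpy as np

# Data of the 3DVAR example: background, observation and observation operator.
xb = np.array([0.0, 1.0, 2.0])
y  = np.array([0.5, 1.5, 2.5])
H  = np.array([[1.0, 0.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 3.0]])

# Innovation at the first step: measurement minus calculation.
d = y - H @ xb

# Standard indicator formulas, applied to this one member of the series.
indicators = {
    "max":  d.max(),
    "min":  d.min(),
    "norm": np.linalg.norm(d),
    "MAE":  np.abs(d).mean(),
    "MSE":  (d ** 2).mean(),
    "RMSE": np.sqrt((d ** 2).mean()),
}
for name, value in indicators.items():
    print(f"{name:>4s} = {value:.4f}")
```

These values agree with the first column of the step-by-step indicator output of the example.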
+
+As a simple example, we can use the calculation example presented above:
+
+.. literalinclude:: scripts/tui_example_12.py
+ :language: python
+
+Execution of the command set gives the following results, which illustrate the
+series structure of the indicators, associated with the series of values of the
+requested incremental quantity "*InnovationAtCurrentState*":
+
+.. literalinclude:: scripts/tui_example_12.res
+ :language: none
+
+In graphical form, the indicators are displayed over all the steps:
+
+.. _tui_example_12:
+.. image:: scripts/tui_example_12.png
+ :align: center
+ :width: 90%
+
.. [HOMARD] For more information on HOMARD, see the *HOMARD module* and its integrated help available from the main menu *Help* of the SALOME platform.
.. [PARAVIS] For more information on PARAVIS, see the *PARAVIS module* and its integrated help available from the main menu *Help* of the SALOME platform.
Utilisations avancées
---------------------
+#. :ref:`subsection_tui_advanced_ex11`
+#. :ref:`subsection_tui_advanced_ex12`
#. :ref:`section_advanced_convert_JDC`
#. :ref:`section_advanced_YACS_tui`
#. :ref:`section_advanced_R`
provenant de mesures expérimentales, ou observations, et de modèles numériques
*a priori*, y compris des informations sur leurs erreurs. Certaines des
méthodes incluses dans ce cadre sont également connues sous les noms de
-*calage* ou *recalage*, *calibration*, *estimation d'état*, *estimation de
-paramètres*, *ajustement de paramètres*, *problèmes inverses*, *méthodes
-inverses*, *inversion*, *estimation bayésienne*, *interpolation optimale*,
-*régularisation mathématique*, *méta-heuristiques* d'optimisation, *réduction
-de modèles*, *lissage de données*, etc. De plus amples détails peuvent être
-trouvés dans la partie proposant :ref:`section_theory`. Le module ADAO offre
-actuellement plus d'une centaine de méthodes algorithmiques différentes et
-permet l'étude d'environ 400 problèmes appliqués distincts.
+*calage* ou *recalage*,
+*calibration*,
+*estimation d'état*,
+*estimation de paramètres*,
+*ajustement de paramètres*,
+*problèmes inverses*,
+*méthodes inverses*,
+*inversion*,
+*estimation bayésienne*,
+*interpolation optimale*,
+*apprentissage optimal*,
+*régularisation mathématique*,
+*méta-heuristiques* d'optimisation,
+*réduction de modèles*,
+*assimilation en espace réduit*,
+*lissage de données*,
+etc.
+De plus amples détails peuvent être trouvés dans la partie proposant
+:ref:`section_theory`. Le module ADAO offre actuellement plus d'une centaine de
+méthodes algorithmiques différentes et permet l'étude d'environ 400 problèmes
+appliqués distincts.
La documentation de ce module est divisée en plusieurs grandes catégories,
relatives à la **documentation théorique** (indiquée dans le titre de section
Les résultats obtenus avec cet algorithme peuvent être utilisés pour alimenter
un :ref:`section_ref_algorithm_MeasurementsOptimalPositioningTask`. De manière
-complémentaire, et si le but est d'évaluer l'erreur calcul-mesure, un
+complémentaire, et si le but est d'évaluer l'erreur calculs-mesures, un
:ref:`section_ref_algorithm_SamplingTest` utilise les mêmes commandes
d'échantillonnage pour établir un ensemble de valeurs de fonctionnelle d'erreur
:math:`J` à partir d'observations :math:`\mathbf{y}^o`.
.. include:: snippets/SocialAcceleration.rst
+.. include:: snippets/StoreInitialState.rst
+
StoreSupplementaryCalculations
.. index:: single: StoreSupplementaryCalculations
:ref:`subsection_r_o_v_Template`. Dans tous les cas, le post-processing de
l'utilisateur dispose dans l'espace de noms d'une variable dont le nom est
"*ADD*", et dont l'unique méthode utilisable est nommée ``get``. Les arguments
-de cette méthode sont un nom d'information de sortie, comme décrit dans
-l':ref:`subsection_r_o_v_Inventaire`.
+de cette méthode sont un nom d'information de sortie, comme décrit dans un
+:ref:`subsection_r_o_v_Inventaire`.
Par exemple, pour avoir l'état optimal après un calcul d'assimilation de données
ou d'optimisation, on utilise l'appel suivant::
.. code-block:: python
[...]
- "SampleAsMinMaxSobolSequence":[[0, 4, 1], [0, 4, 1], [2, 25]]
+ "SampleAsMinMaxSobolSequence":[[0, 4], [0, 4], [2, 25]]
[...]
La répartition des états (il y en a ici 32 par principe de construction de la
ADAO Study report
================================================================================
-Summary build with ADAO version 9.13.0
+Summary build with ADAO version 9.14.0
- AlgorithmParameters command has been set with values:
Algorithm = '3DVAR'
J_values = case.get("CostFunctionJ")[:]
#
# =============================================================
-# EXPLOITATION DES RÉSULTATS INDÉPENDANTE
+# EXPLOITATION INDÉPENDANTE DES RÉSULTATS
#
print("")
print("Nombre d'itérations internes...: %i"%len(J_values))
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+from matplotlib import pyplot as plt
+from numpy import array, set_printoptions
+from adao import adaoBuilder
+set_printoptions(precision=4, floatmode='fixed')
+#
+#-------------------------------------------------------------------------------
+#
+case = adaoBuilder.New()
+case.set( 'AlgorithmParameters',
+ Algorithm='3DVAR',
+ Parameters = {
+ "StoreSupplementaryCalculations":[
+ "CostFunctionJ",
+ "CurrentState",
+ "InnovationAtCurrentState",
+ ],
+ }
+)
+case.set( 'Background', Vector=[0, 1, 2] )
+case.set( 'BackgroundError', ScalarSparseMatrix=1.0 )
+case.set( 'Observation', Vector=array([0.5, 1.5, 2.5]) )
+case.set( 'ObservationError', DiagonalSparseMatrix='1 1 1' )
+case.set( 'ObservationOperator', Matrix='1 0 0;0 2 0;0 0 3' )
+case.set( 'Observer',
+ Variable="CurrentState",
+ Template="ValuePrinter",
+ Info=" État courant :",
+)
+#
+print("Affichage des valeurs de l'état courant, à chaque pas :")
+case.execute()
+print("")
+#
+#-------------------------------------------------------------------------------
+#
+print("Indicateurs sur les écarts (ou erreurs) calculs-mesures")
+print(" (affichage des 3 premiers pas uniquement)")
+print("")
+CalculMeasureErrors = case.get("InnovationAtCurrentState")
+#
+print("===> Maximum de l'erreur entre calculs et mesures, à chaque pas :")
+print(" ",array(
+ CalculMeasureErrors.maxs()
+ [0:3] ))
+print("===> Minimum de l'erreur entre calculs et mesures, à chaque pas :")
+print(" ",array(
+ CalculMeasureErrors.mins()
+ [0:3] ))
+print("===> Norme de l'erreur entre calculs et mesures, à chaque pas :")
+print(" ",array(
+ CalculMeasureErrors.norms()
+ [0:3] ))
+print("===> Erreur absolue moyenne (MAE) entre calculs et mesures, à chaque pas :")
+print(" ",array(
+ CalculMeasureErrors.maes()
+ [0:3] ))
+print("===> Erreur quadratique moyenne (MSE) entre calculs et mesures, à chaque pas :")
+print(" ",array(
+ CalculMeasureErrors.mses()
+ [0:3] ))
+print("===> Racine de l'erreur quadratique moyenne (RMSE) entre calculs et mesures, à chaque pas :")
+print(" ",array(
+ CalculMeasureErrors.rmses()
+ [0:3] ))
+#
+#-------------------------------------------------------------------------------
+#
+plt.rcParams['figure.figsize'] = (8, 12)
+#
+plt.figure()
+plt.suptitle('Indicateurs construits sur la valeur courante des écarts (ou erreurs) calculs-mesures\n', fontweight='bold')
+plt.subplot(611)
+plt.plot(CalculMeasureErrors.maxs(), 'bx--', label='Indicateur au pas courant')
+plt.ylabel('Maximum (u.a.)')
+plt.legend()
+plt.subplot(612)
+plt.plot(CalculMeasureErrors.mins(), 'bx--', label='Indicateur au pas courant')
+plt.ylabel('Minimum (u.a.)')
+plt.legend()
+plt.subplot(613)
+plt.plot(CalculMeasureErrors.norms(), 'bx-', label='Indicateur au pas courant')
+plt.ylabel('Norme (u.a.)')
+plt.legend()
+plt.subplot(614)
+plt.plot(CalculMeasureErrors.maes(), 'kx-', label='Indicateur au pas courant')
+plt.ylabel('MAE (u.a.)')
+plt.legend()
+plt.subplot(615)
+plt.plot(CalculMeasureErrors.mses(), 'gx-', label='Indicateur au pas courant')
+plt.ylabel('MSE (u.a.)')
+plt.legend()
+plt.subplot(616)
+plt.plot(CalculMeasureErrors.rmses(), 'rx-', label='Indicateur au pas courant')
+plt.ylabel('RMSE (u.a.)')
+plt.legend()
+plt.xlabel('Pas de calcul de la grandeur (numéro ou rang du pas)')
+plt.tight_layout()
+plt.savefig("tui_example_12.png")
--- /dev/null
+Affichage des valeurs de l'état courant, à chaque pas :
+ État courant : [0.0000 1.0000 2.0000]
+ État courant : [0.0474 0.9053 1.0056]
+ État courant : [0.0905 0.8492 0.9461]
+ État courant : [0.1529 0.7984 0.9367]
+ État courant : [0.2245 0.7899 0.9436]
+ État courant : [0.2508 0.8005 0.9486]
+ État courant : [0.2500 0.7998 0.9502]
+ État courant : [0.2500 0.8000 0.9500]
+ État courant : [0.2500 0.8000 0.9500]
+
+Indicateurs sur les écarts (ou erreurs) calculs-mesures
+ (affichage des 3 premiers pas uniquement)
+
+===> Maximum de l'erreur entre calculs et mesures, à chaque pas :
+ [0.5000 0.4526 0.4095]
+===> Minimum de l'erreur entre calculs et mesures, à chaque pas :
+ [-3.5000 -0.5169 -0.3384]
+===> Norme de l'erreur entre calculs et mesures, à chaque pas :
+ [3.5707 0.7540 0.5670]
+===> Erreur absolue moyenne (MAE) entre calculs et mesures, à chaque pas :
+ [1.5000 0.4267 0.3154]
+===> Erreur quadratique moyenne (MSE) entre calculs et mesures, à chaque pas :
+ [4.2500 0.1895 0.1072]
+===> Racine de l'erreur quadratique moyenne (RMSE) entre calculs et mesures, à chaque pas :
+ [2.0616 0.4353 0.3274]
En sortie, après exécution de l'algorithme, on dispose d'informations et de
variables issues du calcul. La description des
-:ref:`section_ref_output_variables` indique la manière de les obtenir par la
-méthode nommée ``get``, de la variable "*ADD*" du post-processing en interface
-graphique, ou du cas en interface textuelle. Les variables d'entrée, mises à
-disposition de l'utilisateur en sortie pour faciliter l'écriture des procédures
-de post-processing, sont décrites dans l':ref:`subsection_r_o_v_Inventaire`.
+:ref:`section_ref_output_variables` indique la manière de les obtenir, par la
+méthode nommée ``get``, depuis la variable "*ADD*" du post-processing en
+interface graphique, ou depuis le cas en interface textuelle. Les variables
+d'entrée, mises à disposition de l'utilisateur en sortie pour faciliter
+l'écriture des procédures de post-processing, sont décrites dans un
+:ref:`subsection_r_o_v_Inventaire`.
**Sorties permanentes (non conditionnelles)**
MaximumNumberOfLocations
*Valeur entière*. Cette clé indique le nombre maximum possible de positions
- trouvée dans la recherche optimale. La valeur par défaut est 1. La recherche
+ trouvées dans la recherche optimale. La valeur par défaut est 1. La recherche
optimale peut éventuellement trouver moins de positions que ce qui est requis
par cette clé, comme par exemple dans le cas où le résidu associé à
l'approximation est inférieur au critère et conduit à l'arrêt anticipé de la
fortement conseillé de revenir à des versions d'outils supports comprises dans
l'étendue décrite ci-dessous.
-.. csv-table:: Intervalles de vérification des outils support pour ADAO
+.. csv-table:: Intervalles de version de vérification des outils support pour ADAO
:header: "Outil", "Version minimale", "Version atteinte"
:widths: 20, 10, 10
+ :align: center
- Python, 3.6.5, 3.12.3
- Numpy, 1.14.3, 1.26.4
- Scipy, 0.19.1, 1.14.0
- MatplotLib, 2.2.2, 3.8.4
+ Python, 3.6.5, 3.12.6
+ Numpy, 1.14.3, 2.1.2
+ Scipy, 0.19.1, 1.14.1
+ MatplotLib, 2.2.2, 3.9.2
GnuplotPy, 1.8, 1.8
- NLopt, 2.4.2, 2.7.1
+ NLopt, 2.4.2, 2.8.0
+ FMPy, 0.3.20, 0.3.20
.. csv-table:: Versions de validation des outils support pour ADAO
:header: "Outil", "Version"
:widths: 20, 10
+ :align: center
ADAO, |release|
EFICAS, |release|
- *Aucune*
+ *Aucune* (messages ou statistiques sont néanmoins affichés)
'normal' de paramètres (mean,std), 'lognormal' de paramètres (mean,sigma),
'uniform' de paramètres (low,high), ou 'weibull' de paramètre (shape). C'est
donc une liste de la même taille que celle de l'état. Par nature, les points
- sont inclus le domaine non borné ou borné selon les caractéristiques des
+ sont inclus dans le domaine non borné ou borné selon les caractéristiques des
distributions choisies par variable.
Exemple :
lequel les points de calcul seront placés, sous la forme d'une paire
*[min,max]* pour chaque composante de l'état. Les bornes inférieures sont
incluses. Cette liste de paires, en nombre identique à la taille de l'espace
- des états, est complétée par une paire d'entier *[dim,nbr]* comportant la
+ des états, est complétée par une paire d'entiers *[dim,nbr]* comportant la
dimension de l'espace des états et le nombre souhaité de points
d'échantillonnage. L'échantillonnage est ensuite construit automatiquement
selon la méthode de l'hypercube Latin (LHS). Par nature, les points sont
lequel les points de calcul seront placés, sous la forme d'une paire
*[min,max]* pour chaque composante de l'état. Les bornes inférieures sont
incluses. Cette liste de paires, en nombre identique à la taille de l'espace
- des états, est complétée par une paire d'entier *[dim,nbr]* comportant la
+ des états, est complétée par une paire d'entiers *[dim,nbr]* comportant la
dimension de l'espace des états et le nombre minimum souhaité de points
d'échantillonnage (par construction, le nombre de points générés dans la
séquence de Sobol sera la puissance de 2 immédiatement supérieure à ce nombre
--- /dev/null
+.. index:: single: StoreInitialState
+
+StoreInitialState
+ *Valeur booléenne*. Cette variable définit le stockage (avec True) ou pas
+ (avec False, par défaut) de l'état initial de l'algorithme comme étant le
+ premier état dans la suite itérative des états trouvés. Cela rend le stockage
+ algorithmique itératif identique au stockage temporel itératif (de manière
+ par exemple similaire aux filtres de Kalman).
+
+ Exemple :
+ ``{"StoreInitialState":False}``
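À titre d'illustration, on peut esquisser en Python pur l'effet d'un tel drapeau sur la série des états stockés (croquis hypothétique, indépendant de l'implémentation d'ADAO ; la fonction `run_iterations` et le drapeau `store_initial_state` sont des noms choisis pour l'exemple) : avec la valeur True, l'état initial devient le premier état de la série, comme dans un stockage récursif de type filtre de Kalman.

```python
# Croquis hypothétique, hors ADAO : effet d'un drapeau de type
# "StoreInitialState" sur la série des états stockés.
def run_iterations(x0, steps, store_initial_state=False):
    """Minimise f(x) = x**2 par descente de gradient à pas fixe et
    renvoie la série des états stockés."""
    states = []
    if store_initial_state:
        # L'état initial devient le premier état de la série itérative
        states.append(x0)
    x = x0
    for _ in range(steps):
        x = x - 0.1 * (2 * x)  # pas de gradient sur f(x) = x**2
        states.append(x)
    return states

# Avec le drapeau, la série commence à l'état initial (4 états au lieu de 3)
print(run_iterations(1.0, 3, store_initial_state=True))
print(run_iterations(1.0, 3, store_initial_state=False))
```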
par le menu d'exécution d'un script) est le suivant :
.. literalinclude:: scripts/tui_example_01.res
+ :language: none
Création détaillée d'un cas de calcul TUI ADAO
++++++++++++++++++++++++++++++++++++++++++++++
variable dans "*Concept*", et renvoie en retour la grandeur sous la forme
d'une liste (même s'il n'y en a qu'un exemplaire) de cette variable de
base. Pour connaître la liste des variables et les utiliser, on se
- reportera à l':ref:`subsection_r_o_v_Inventaire`, et plus généralement à la
- fois aux :ref:`section_ref_output_variables` et aux documentations
+ reportera à un :ref:`subsection_r_o_v_Inventaire`, et plus généralement à
+ la fois aux :ref:`section_ref_output_variables` et aux documentations
individuelles des algorithmes.
Enregistrer, charger ou convertir les commandes de cas de calcul
Obtenir des informations sur le cas, le calcul ou le système
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-On peut obtenir de manière simple une **information agrégée sur le cas
-d'étude** tel que défini par l'utilisateur, en utilisant directement la
-commande "*print*" de Python sur le cas, à n'importe quelle étape lors de sa
-construction. Par exemple :
+Il existe plusieurs manières d'obtenir des informations globales relatives
+au cas de calcul, à l'exécution ou au système sur lequel est exécuté un cas.
-.. literalinclude:: scripts/tui_example_07.py
- :language: python
+*print* (*cas*)
+ On peut obtenir de manière simple une **information agrégée sur le cas
+ d'étude** tel que défini par l'utilisateur, en utilisant directement la
+ commande "*print*" de Python sur le cas, à n'importe quelle étape lors de sa
+ construction. Par exemple :
-dont le résultat est ici :
+ .. literalinclude:: scripts/tui_example_07.py
+ :language: python
-.. literalinclude:: scripts/tui_example_07.res
+ dont le résultat est ici :
-.. index:: single: callinfo
+ .. literalinclude:: scripts/tui_example_07.res
+ :language: none
-Une **information synthétique sur le nombre d'appels aux calculs d'opérateurs**
-peut être dynamiquement obtenue par la commande "**callinfo()**". Ces calculs
-d'opérateurs sont ceux définis par l'utilisateur dans un cas ADAO, pour les
-opérateurs d'observation et d'évolution. Elle s'utilise après l'exécution du
-calcul dans le cas ADAO, sachant que le résultat de cette commande est
-simplement vide lorsqu'aucun calcul n'a été effectué :
-::
+.. index:: single: callinfo
- from adao import adaoBuilder
- case = adaoBuilder.New()
- ...
- case.execute()
- print(case.callinfo())
+**callinfo** ()
+ Une **information synthétique sur le nombre d'appels aux calculs
+ d'opérateurs** peut être dynamiquement obtenue par la commande
+ "**callinfo()**". Ces calculs d'opérateurs sont ceux définis par
+ l'utilisateur dans un cas ADAO, pour les opérateurs d'observation et
+ d'évolution. Elle s'utilise après l'exécution du calcul du cas, sachant que
+ le résultat de cette commande est simplement vide lorsqu'aucun calcul n'a
+ été effectué :
+ ::
+
+ from adao import adaoBuilder
+ case = adaoBuilder.New()
+ ...
+ case.execute()
+ print(case.callinfo())
.. index:: single: sysinfo
-Une **information synthétique sur le système** peut être obtenue par la
-commande "**sysinfo()**", présente dans chaque cas de calcul ADAO. Elle
-retourne dynamiquement des informations système et des détails sur les modules
-Python utiles pour ADAO. Elle s'utilise de la manière suivante :
-::
+**sysinfo** ()
+ Une **information synthétique sur le système** peut être obtenue par la
+ commande "**sysinfo()**", présente dans chaque cas de calcul ADAO. Elle
+ retourne dynamiquement des informations système et des détails sur les
+ modules Python utiles pour ADAO. Elle s'utilise de la manière suivante :
+ ::
- from adao import adaoBuilder
- case = adaoBuilder.New()
- print(case.sysinfo())
+ from adao import adaoBuilder
+ case = adaoBuilder.New()
+ print(case.sysinfo())
.. _subsection_tui_advanced:
l'objectif de l'exemple et un jeu de commandes qui permet de parvenir à cet
objectif.
+.. _subsection_tui_advanced_ex11:
+
Exploitation indépendante des résultats d'un cas de calcul
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
L'exécution du jeu de commandes donne les résultats suivants :
.. literalinclude:: scripts/tui_example_11.res
+ :language: none
Comme il se doit en expériences jumelles, avec une confiance majoritairement
placée dans les observations, on constate que l'on retrouve bien les paramètres
qui ont servi à construire artificiellement les observations.
+.. _subsection_tui_advanced_ex12:
+
+Quelques indicateurs numériques particuliers : norme, RMS, MSE et RMSE...
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Les grandeurs numériques obtenues à l'issue d'un calcul ADAO sont souvent des
+vecteurs (comme l'analyse :math:`\mathbf{x}^a`) ou des matrices (comme la
+covariance d'analyse :math:`\mathbf{A}`). Elles sont requises par l'utilisateur
+à travers la variable standard "*StoreSupplementaryCalculations*" de
+l'algorithme du cas ADAO. Ces grandeurs sont disponibles à chaque étape d'un
+algorithme itératif, et se présentent donc sous la forme d'une série de
+vecteurs, ou d'une série de matrices.
+
+Les objets portant ces grandeurs supportent des méthodes particulières pour
+calculer des indicateurs courants. Les méthodes sont nommées d'après le nom
+de l'indicateur, suivi de "*s*" pour noter qu'elles s'appliquent à une série
+d'objets élémentaires et qu'elles renvoient elles-mêmes une série de valeurs.
+
+Remarque : certains indicateurs sont destinés à qualifier par exemple un
+"*incrément de valeur*", un "*écart de valeur*" ou une "*différence de
+valeur*", plutôt qu'une "*valeur*" elle-même. Informatiquement, rien
+n'empêche néanmoins de calculer les indicateurs quelle que soit la grandeur
+considérée. C'est donc à l'utilisateur de vérifier que l'indicateur dont il
+demande le calcul est utilisé de manière licite.
+
+.. index:: single: means
+
+**means** ()
+ Moyenne des valeurs de la grandeur, disponible à chaque pas.
+
+.. index:: single: stds
+
+**stds** ()
+ Écart-type des valeurs de la grandeur, disponible à chaque pas.
+
+.. index:: single: sums
+
+**sums** ()
+ Somme des valeurs de la grandeur, disponible à chaque pas.
+
+.. index:: single: mins
+
+**mins** ()
+ Minimum des valeurs de la grandeur, disponible à chaque pas.
+
+.. index:: single: maxs
+
+**maxs** ()
+ Maximum des valeurs de la grandeur, disponible à chaque pas.
+
+.. index:: single: norms
+
+**norms** (*_ord=None*)
+ Norme de la grandeur, disponible à chaque pas (*_ord* : voir
+ *numpy.linalg.norm*).
+
+.. index:: single: traces
+
+**traces** (*offset=0*)
+ Trace de la grandeur, disponible à chaque pas (*offset* : voir
+ *numpy.trace*).
+
+.. index:: single: maes
+.. index:: single: Mean Absolute Error (MAE)
+
+**maes** (*predictor=None*)
+ Erreur ou écart moyen absolu (*Mean Absolute Error* (**MAE**)). Cet
+ indicateur est calculé comme la moyenne des écarts en valeur absolue de la
+ grandeur par rapport au prédicteur, et l'indicateur est disponible à chaque
+ pas. Si le prédicteur est non renseigné, cet indicateur ne s'applique
+ théoriquement qu'à un incrément ou une différence.
+
+.. index:: single: mses
+.. index:: single: msds
+.. index:: single: Mean-Square Error (MSE)
+.. index:: single: Mean-Square Deviation (MSD)
+
+**mses** (*predictor=None*) ou **msds** (*predictor=None*)
+ Erreur ou écart quadratique moyen (*Mean-Square Error* (**MSE**) ou
+ *Mean-Square Deviation* (**MSD**)). Cet indicateur est calculé comme la
+ moyenne quadratique des écarts de la grandeur par rapport au prédicteur, et
+ l'indicateur est disponible à chaque pas. Si le prédicteur est non
+ renseigné, cet indicateur ne s'applique théoriquement qu'à un incrément ou
+ une différence.
+
+.. index:: single: rmses
+.. index:: single: rmsds
+.. index:: single: Root-Mean-Square Error (RMSE)
+.. index:: single: Root-Mean-Square Deviation (RMSD)
+.. index:: single: Root-Mean-Square (RMS)
+
+**rmses** (*predictor=None*) ou **rmsds** (*predictor=None*)
+ Racine de l'erreur ou de l'écart quadratique moyen (*Root-Mean-Square
+ Error* (**RMSE**) ou *Root-Mean-Square Deviation* (**RMSD**)). Cet
+ indicateur est calculé comme la racine de la moyenne quadratique des écarts
+ de la grandeur par rapport au prédicteur, et l'indicateur est disponible à
+ chaque pas. Si le prédicteur est non renseigné, cet indicateur ne
+ s'applique théoriquement qu'à un incrément ou une différence. Dans ce
+ dernier cas, c'est une **RMS** de la grandeur.
+
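Pour fixer les idées, on peut recalculer en NumPy pur quelques-uns de ces indicateurs sur une série de vecteurs (croquis hypothétique, indépendant des objets ADAO ; seules les définitions mathématiques des indicateurs décrits ci-dessus sont reproduites) :

```python
import numpy as np

# Croquis hypothétique, hors ADAO : indicateurs par pas sur une série
# de deux vecteurs, à la manière des méthodes means(), norms() et rmses().
serie = [np.array([1.0, -1.0, 2.0]), np.array([0.5, 0.5, -0.5])]

means = [float(np.mean(v)) for v in serie]               # comme means()
norms = [float(np.linalg.norm(v)) for v in serie]        # comme norms()
rmses = [float(np.sqrt(np.mean(v**2))) for v in serie]   # comme rmses() sans prédicteur

# Chaque indicateur est bien une série, de même longueur que la série d'entrée
print(means, norms, rmses)
```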
+À titre d'exemple simple, on peut reprendre le cas de calcul déjà présenté plus
+haut :
+
+.. literalinclude:: scripts/tui_example_12.py
+ :language: python
+
+L'exécution du jeu de commandes donne les résultats suivants, qui illustrent la
+structure en série des indicateurs, associés à la série de valeurs de la
+grandeur incrémentale "*InnovationAtCurrentState*" requise :
+
+.. literalinclude:: scripts/tui_example_12.res
+ :language: none
+
+Sous forme graphique, on observe les indicateurs sur l'ensemble des pas :
+
+.. _tui_example_12:
+.. image:: scripts/tui_example_12.png
+ :align: center
+ :width: 90%
+
.. Réconciliation de courbes à l'aide de MedCoupling
.. +++++++++++++++++++++++++++++++++++++++++++++++++
#
nbfct = 1 # Nb d'évaluations
JXini, JbXini, JoXini = CostFunction(Xini, selfA._parameters["QualityCriterion"])
+ if selfA._parameters["StoreInitialState"]:
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
+ selfA.StoredVariables["CostFunctionJ" ].store( JXini )
+ selfA.StoredVariables["CostFunctionJb"].store( JbXini )
+ selfA.StoredVariables["CostFunctionJo"].store( JoXini )
+ if selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xini )
+ if selfA._toStore("SimulatedObservationAtCurrentState"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( Xini ) )
#
Swarm = numpy.zeros((__nbI, 4, __nbP)) # 4 car (x,v,gbest,lbest)
for __p in range(__nbP):
step = 0
while KeepRunningCondition(step, nbfct):
step += 1
+ #
for __i in range(__nbI):
__rct = rand(size=__nbP)
__rst = rand(size=__nbP)
#
nbfct = 1 # Nb d'évaluations
JXini, JbXini, JoXini = CostFunction(Xini, selfA._parameters["QualityCriterion"])
+ if selfA._parameters["StoreInitialState"]:
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
+ selfA.StoredVariables["CostFunctionJ" ].store( JXini )
+ selfA.StoredVariables["CostFunctionJb"].store( JbXini )
+ selfA.StoredVariables["CostFunctionJo"].store( JoXini )
+ if selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xini )
+ if selfA._toStore("SimulatedObservationAtCurrentState"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( Xini ) )
#
Swarm = numpy.zeros((__nbI, 4, __nbP)) # 4 car (x,v,xbest,lbest)
for __p in range(__nbP):
step = 0
while KeepRunningCondition(step, nbfct):
step += 1
+ #
for __i in range(__nbI):
rct = rand(size=__nbP)
rst = rand(size=__nbP)
#
nbfct = 1 # Nb d'évaluations
JXini, JbXini, JoXini = CostFunction(Xini, selfA._parameters["QualityCriterion"])
+ if selfA._parameters["StoreInitialState"]:
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
+ selfA.StoredVariables["CostFunctionJ" ].store( JXini )
+ selfA.StoredVariables["CostFunctionJb"].store( JbXini )
+ selfA.StoredVariables["CostFunctionJo"].store( JoXini )
+ if selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xini )
+ if selfA._toStore("SimulatedObservationAtCurrentState"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( Xini ) )
#
Swarm = numpy.zeros((__nbI, 3, __nbP)) # 3 car (x,v,xbest)
for __p in range(__nbP):
step = 0
while KeepRunningCondition(step, nbfct):
step += 1
+ #
for __i in range(__nbI):
for __p in range(__nbP):
# Vitesse
nbfct = 1 # Nb d'évaluations
HX = Hm( Xini )
JXini, JbXini, JoXini = CostFunction(Xini, HX, selfA._parameters["QualityCriterion"])
+ if selfA._parameters["StoreInitialState"]:
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
+ selfA.StoredVariables["CostFunctionJ" ].store( JXini )
+ selfA.StoredVariables["CostFunctionJb"].store( JbXini )
+ selfA.StoredVariables["CostFunctionJo"].store( JoXini )
+ if selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xini )
+ if selfA._toStore("SimulatedObservationAtCurrentState"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( HX )
#
Swarm = numpy.zeros((__nbI, 4, __nbP)) # 4 car (x,v,gbest,lbest)
for __p in range(__nbP):
#
nbfct = 1 # Nb d'évaluations
JXini, JbXini, JoXini = CostFunction(Xini, selfA._parameters["QualityCriterion"])
+ if selfA._parameters["StoreInitialState"]:
+ selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
+ selfA.StoredVariables["CostFunctionJ" ].store( JXini )
+ selfA.StoredVariables["CostFunctionJb"].store( JbXini )
+ selfA.StoredVariables["CostFunctionJo"].store( JoXini )
+ if selfA._toStore("CurrentState"):
+ selfA.StoredVariables["CurrentState"].store( Xini )
+ if selfA._toStore("SimulatedObservationAtCurrentState"):
+ selfA.StoredVariables["SimulatedObservationAtCurrentState"].store( Hm( Xini ) )
#
Swarm = numpy.zeros((__nbI, 4, __nbP)) # 4 car (x,v,gbest,lbest)
for __p in range(__nbP):
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _dInnovation )
#
- Jb = vfloat( 0.5 * _dX.T * (BI * _dX) )
- Jo = vfloat( 0.5 * _dInnovation.T * (RI * _dInnovation) )
+ Jb = vfloat( 0.5 * _dX.T @ (BI @ _dX) )
+ Jo = vfloat( 0.5 * _dInnovation.T @ (RI @ _dInnovation) )
J = Jb + Jo
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
#
- Jb = vfloat( 0.5 * (_X - Xb).T * (BI * (_X - Xb)) )
- Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
+ Jb = vfloat( 0.5 * (_X - Xb).T @ (BI @ (_X - Xb)) )
+ Jo = vfloat( 0.5 * _Innovation.T @ (RI @ _Innovation) )
J = Jb + Jo
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
if selfA._toStore("InnovationAtCurrentState"):
selfA.StoredVariables["InnovationAtCurrentState"].store( _Innovation )
#
- Jb = vfloat( 0.5 * _V.T * (BT * _V) )
- Jo = vfloat( 0.5 * _Innovation.T * (RI * _Innovation) )
+ Jb = vfloat( 0.5 * _V.T @ (BT @ _V) )
+ Jo = vfloat( 0.5 * _Innovation.T @ (RI @ _Innovation) )
J = Jb + Jo
#
selfA.StoredVariables["CurrentIterationNumber"].store( len(selfA.StoredVariables["CostFunctionJ"]) )
self.defineRequiredParameter(
name = "OptimalLocations",
default = [],
- typecast = tuple,
+ typecast = numpy.array,
message = "Liste des indices ou noms de positions optimales de mesure selon l'ordre interne d'un vecteur de base", # noqa: E501
)
self.defineRequiredParameter(
typecast = bool,
message = "Stockage des variables internes ou intermédiaires du calcul",
)
+ self.defineRequiredParameter(
+ name = "StoreInitialState",
+ default = False,
+ typecast = bool,
+ message = "Stockage du premier état à la manière des algorithmes récursifs",
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
import os
import sys
import inspect
-#
+
from daCore.BasicObjects import State, Covariance, FullOperator, Operator
from daCore.BasicObjects import AlgorithmAndParameters, DataObserver
from daCore.BasicObjects import RegulationAndParameters, CaseLogger
from daCore import version
from daCore import ExtendedLogging
+
ExtendedLogging.ExtendedLogging() # A importer en premier
import logging # noqa: E402
+
# ==============================================================================
class Aidsm(object):
- """ ADAO Internal Data Structure Model """
+ """ADAO Internal Data Structure Model"""
+
__slots__ = (
- "__name", "__objname", "__directory", "__case", "__parent",
- "__adaoObject", "__StoredInputs", "__PostAnalysis", "__Concepts",
+ "__name",
+ "__objname",
+ "__directory",
+ "__case",
+ "__parent",
+ "__adaoObject",
+ "__StoredInputs",
+ "__PostAnalysis",
+ "__Concepts",
)
def __init__(self, name="", addViewers=None):
- self.__name = str(name)
- self.__objname = None
- self.__directory = None
+ self.__name = str(name)
+ self.__objname = None
+ self.__directory = None
self.__case = CaseLogger(self.__name, "case", addViewers)
#
- self.__adaoObject = {}
+ self.__adaoObject = {}
self.__StoredInputs = {}
self.__PostAnalysis = []
#
for ename in ("ObservationOperator", "EvolutionModel", "ControlModel"):
self.__adaoObject[ename] = {}
for ename in ("BackgroundError", "ObservationError"):
- self.__adaoObject[ename] = Covariance(ename, asEyeByScalar = 1.)
+ self.__adaoObject[ename] = Covariance(ename, asEyeByScalar=1.0)
for ename in ("EvolutionError",):
- self.__adaoObject[ename] = Covariance(ename, asEyeByScalar = 1.e-16)
+ self.__adaoObject[ename] = Covariance(ename, asEyeByScalar=1.0e-16)
for ename in ("Observer", "UserPostAnalysis"):
- self.__adaoObject[ename] = []
+ self.__adaoObject[ename] = []
self.__StoredInputs[ename] = [] # Vide par defaut
self.__StoredInputs["Name"] = self.__name
self.__StoredInputs["Directory"] = self.__directory
# qui est activée dans Persistence)
self.__parent = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
sys.path.insert(0, self.__parent)
- sys.path = PlatformInfo.uniq( sys.path ) # Conserve en unique exemplaire chaque chemin
-
- def set(self,
- Concept = None, # Premier argument
- Algorithm = None,
- AppliedInXb = None,
- Checked = False,
- ColMajor = False,
- ColNames = None,
- CrossObs = False,
- DataFile = None,
- DiagonalSparseMatrix = None,
- ExtraArguments = None,
- Info = None,
- InputFunctionAsMulti = False,
- Matrix = None,
- ObjectFunction = None,
- ObjectMatrix = None,
- OneFunction = None,
- Parameters = None,
- PerformanceProfile = None,
- ScalarSparseMatrix = None,
- Scheduler = None,
- Script = None,
- Stored = False,
- String = None,
- SyncObs = True,
- Template = None,
- ThreeFunctions = None,
- Variable = None,
- Vector = None,
- VectorSerie = None,
- ):
+ sys.path = PlatformInfo.uniq(
+ sys.path
+ ) # Conserve en unique exemplaire chaque chemin
+
+ def set(
+ self,
+ Concept=None, # Premier argument
+ Algorithm=None,
+ AppliedInXb=None,
+ Checked=False,
+ ColMajor=False,
+ ColNames=None,
+ CrossObs=False,
+ DataFile=None,
+ DiagonalSparseMatrix=None,
+ ExtraArguments=None,
+ Info=None,
+ InputFunctionAsMulti=False,
+ Matrix=None,
+ ObjectFunction=None,
+ ObjectMatrix=None,
+ OneFunction=None,
+ Parameters=None,
+ PerformanceProfile=None,
+ ScalarSparseMatrix=None,
+ Scheduler=None,
+ Script=None,
+ Stored=False,
+ String=None,
+ SyncObs=True,
+ Template=None,
+ ThreeFunctions=None,
+ Variable=None,
+ Vector=None,
+ VectorSerie=None,
+ ):
"Interface unique de définition de variables d'entrées par argument"
self.__case.register("set", dir(), locals(), None, True)
try:
- if Concept in ("Background", "CheckingPoint", "ControlInput", "Observation"):
+ if Concept in (
+ "Background",
+ "CheckingPoint",
+ "ControlInput",
+ "Observation",
+ ):
commande = getattr(self, "set" + Concept)
- commande(Vector, VectorSerie, Script, DataFile, ColNames, ColMajor, Stored, Scheduler, Checked )
+ commande(
+ Vector,
+ VectorSerie,
+ Script,
+ DataFile,
+ ColNames,
+ ColMajor,
+ Stored,
+ Scheduler,
+ Checked,
+ )
elif Concept in ("BackgroundError", "ObservationError", "EvolutionError"):
commande = getattr(self, "set" + Concept)
- commande(Matrix, ScalarSparseMatrix, DiagonalSparseMatrix,
- Script, Stored, ObjectMatrix, Checked )
+ commande(
+ Matrix,
+ ScalarSparseMatrix,
+ DiagonalSparseMatrix,
+ Script,
+ Stored,
+ ObjectMatrix,
+ Checked,
+ )
elif Concept == "AlgorithmParameters":
- self.setAlgorithmParameters( Algorithm, Parameters, Script )
+ self.setAlgorithmParameters(Algorithm, Parameters, Script)
elif Concept == "RegulationParameters":
- self.setRegulationParameters( Algorithm, Parameters, Script )
+ self.setRegulationParameters(Algorithm, Parameters, Script)
elif Concept == "Name":
self.setName(String)
elif Concept == "Directory":
self.setNoDebug()
elif Concept == "Observer":
self.setObserver(
- Variable, Template, String, Script, Info,
- ObjectFunction, CrossObs, SyncObs, Scheduler )
+ Variable,
+ Template,
+ String,
+ Script,
+ Info,
+ ObjectFunction,
+ CrossObs,
+ SyncObs,
+ Scheduler,
+ )
elif Concept == "UserPostAnalysis":
- self.setUserPostAnalysis( Template, String, Script )
+ self.setUserPostAnalysis(Template, String, Script)
elif Concept == "SupplementaryParameters":
- self.setSupplementaryParameters( Parameters, Script )
+ self.setSupplementaryParameters(Parameters, Script)
elif Concept == "ObservationOperator":
self.setObservationOperator(
- Matrix, OneFunction, ThreeFunctions, AppliedInXb,
- Parameters, Script, ExtraArguments,
- Stored, PerformanceProfile, InputFunctionAsMulti, Checked )
+ Matrix,
+ OneFunction,
+ ThreeFunctions,
+ AppliedInXb,
+ Parameters,
+ Script,
+ ExtraArguments,
+ Stored,
+ PerformanceProfile,
+ InputFunctionAsMulti,
+ Checked,
+ )
elif Concept in ("EvolutionModel", "ControlModel"):
commande = getattr(self, "set" + Concept)
commande(
- Matrix, OneFunction, ThreeFunctions,
- Parameters, Script, Scheduler, ExtraArguments,
- Stored, PerformanceProfile, InputFunctionAsMulti, Checked )
+ Matrix,
+ OneFunction,
+ ThreeFunctions,
+ Parameters,
+ Script,
+ Scheduler,
+ ExtraArguments,
+ Stored,
+ PerformanceProfile,
+ InputFunctionAsMulti,
+ Checked,
+ )
else:
- raise ValueError("the variable named '%s' is not allowed."%str(Concept))
+ raise ValueError(
+ "the variable named '%s' is not allowed." % str(Concept)
+ )
except Exception as e:
if isinstance(e, SyntaxError):
- msg = " at %s: %s"%(e.offset, e.text)
+ msg = " at %s: %s" % (e.offset, e.text)
else:
msg = ""
- raise ValueError((
- "during settings, the following error occurs:\n" + \
- "\n%s%s\n\nSee also the potential messages, " + \
- "which can show the origin of the above error, " + \
- "in the launching terminal.")%(str(e), msg))
+ raise ValueError(
+ (
+ "during settings, the following error occurs:\n"
+ + "\n%s%s\n\nSee also the potential messages, "
+ + "which can show the origin of the above error, "
+ + "in the launching terminal."
+ )
+ % (str(e), msg)
+ )
# -----------------------------------------------------------
def setBackground(
- self,
- Vector = None,
- VectorSerie = None,
- Script = None,
- DataFile = None,
- ColNames = None,
- ColMajor = False,
- Stored = False,
- Scheduler = None,
- Checked = False ):
+ self,
+ Vector=None,
+ VectorSerie=None,
+ Script=None,
+ DataFile=None,
+ ColNames=None,
+ ColMajor=False,
+ Stored=False,
+ Scheduler=None,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "Background"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = State(
- name = Concept,
- asVector = Vector,
- asPersistentVector = VectorSerie,
- asScript = self.__with_directory(Script),
- asDataFile = DataFile,
- colNames = ColNames,
- colMajor = ColMajor,
- scheduledBy = Scheduler,
- toBeChecked = Checked,
+ name=Concept,
+ asVector=Vector,
+ asPersistentVector=VectorSerie,
+ asScript=self.__with_directory(Script),
+ asDataFile=DataFile,
+ colNames=ColNames,
+ colMajor=ColMajor,
+ scheduledBy=Scheduler,
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setCheckingPoint(
- self,
- Vector = None,
- VectorSerie = None,
- Script = None,
- DataFile = None,
- ColNames = None,
- ColMajor = False,
- Stored = False,
- Scheduler = None,
- Checked = False ):
+ self,
+ Vector=None,
+ VectorSerie=None,
+ Script=None,
+ DataFile=None,
+ ColNames=None,
+ ColMajor=False,
+ Stored=False,
+ Scheduler=None,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "CheckingPoint"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = State(
- name = Concept,
- asVector = Vector,
- asPersistentVector = VectorSerie,
- asScript = self.__with_directory(Script),
- asDataFile = DataFile,
- colNames = ColNames,
- colMajor = ColMajor,
- scheduledBy = Scheduler,
- toBeChecked = Checked,
+ name=Concept,
+ asVector=Vector,
+ asPersistentVector=VectorSerie,
+ asScript=self.__with_directory(Script),
+ asDataFile=DataFile,
+ colNames=ColNames,
+ colMajor=ColMajor,
+ scheduledBy=Scheduler,
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setControlInput(
- self,
- Vector = None,
- VectorSerie = None,
- Script = None,
- DataFile = None,
- ColNames = None,
- ColMajor = False,
- Stored = False,
- Scheduler = None,
- Checked = False ):
+ self,
+ Vector=None,
+ VectorSerie=None,
+ Script=None,
+ DataFile=None,
+ ColNames=None,
+ ColMajor=False,
+ Stored=False,
+ Scheduler=None,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "ControlInput"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = State(
- name = Concept,
- asVector = Vector,
- asPersistentVector = VectorSerie,
- asScript = self.__with_directory(Script),
- asDataFile = DataFile,
- colNames = ColNames,
- colMajor = ColMajor,
- scheduledBy = Scheduler,
- toBeChecked = Checked,
+ name=Concept,
+ asVector=Vector,
+ asPersistentVector=VectorSerie,
+ asScript=self.__with_directory(Script),
+ asDataFile=DataFile,
+ colNames=ColNames,
+ colMajor=ColMajor,
+ scheduledBy=Scheduler,
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setObservation(
- self,
- Vector = None,
- VectorSerie = None,
- Script = None,
- DataFile = None,
- ColNames = None,
- ColMajor = False,
- Stored = False,
- Scheduler = None,
- Checked = False ):
+ self,
+ Vector=None,
+ VectorSerie=None,
+ Script=None,
+ DataFile=None,
+ ColNames=None,
+ ColMajor=False,
+ Stored=False,
+ Scheduler=None,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "Observation"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = State(
- name = Concept,
- asVector = Vector,
- asPersistentVector = VectorSerie,
- asScript = self.__with_directory(Script),
- asDataFile = DataFile,
- colNames = ColNames,
- colMajor = ColMajor,
- scheduledBy = Scheduler,
- toBeChecked = Checked,
+ name=Concept,
+ asVector=Vector,
+ asPersistentVector=VectorSerie,
+ asScript=self.__with_directory(Script),
+ asDataFile=DataFile,
+ colNames=ColNames,
+ colMajor=ColMajor,
+ scheduledBy=Scheduler,
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setBackgroundError(
- self,
- Matrix = None,
- ScalarSparseMatrix = None,
- DiagonalSparseMatrix = None,
- Script = None,
- Stored = False,
- ObjectMatrix = None,
- Checked = False ):
+ self,
+ Matrix=None,
+ ScalarSparseMatrix=None,
+ DiagonalSparseMatrix=None,
+ Script=None,
+ Stored=False,
+ ObjectMatrix=None,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "BackgroundError"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = Covariance(
- name = Concept,
- asCovariance = Matrix,
- asEyeByScalar = ScalarSparseMatrix,
- asEyeByVector = DiagonalSparseMatrix,
- asCovObject = ObjectMatrix,
- asScript = self.__with_directory(Script),
- toBeChecked = Checked,
+ name=Concept,
+ asCovariance=Matrix,
+ asEyeByScalar=ScalarSparseMatrix,
+ asEyeByVector=DiagonalSparseMatrix,
+ asCovObject=ObjectMatrix,
+ asScript=self.__with_directory(Script),
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setObservationError(
- self,
- Matrix = None,
- ScalarSparseMatrix = None,
- DiagonalSparseMatrix = None,
- Script = None,
- Stored = False,
- ObjectMatrix = None,
- Checked = False ):
+ self,
+ Matrix=None,
+ ScalarSparseMatrix=None,
+ DiagonalSparseMatrix=None,
+ Script=None,
+ Stored=False,
+ ObjectMatrix=None,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "ObservationError"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = Covariance(
- name = Concept,
- asCovariance = Matrix,
- asEyeByScalar = ScalarSparseMatrix,
- asEyeByVector = DiagonalSparseMatrix,
- asCovObject = ObjectMatrix,
- asScript = self.__with_directory(Script),
- toBeChecked = Checked,
+ name=Concept,
+ asCovariance=Matrix,
+ asEyeByScalar=ScalarSparseMatrix,
+ asEyeByVector=DiagonalSparseMatrix,
+ asCovObject=ObjectMatrix,
+ asScript=self.__with_directory(Script),
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setEvolutionError(
- self,
- Matrix = None,
- ScalarSparseMatrix = None,
- DiagonalSparseMatrix = None,
- Script = None,
- Stored = False,
- ObjectMatrix = None,
- Checked = False ):
+ self,
+ Matrix=None,
+ ScalarSparseMatrix=None,
+ DiagonalSparseMatrix=None,
+ Script=None,
+ Stored=False,
+ ObjectMatrix=None,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "EvolutionError"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = Covariance(
- name = Concept,
- asCovariance = Matrix,
- asEyeByScalar = ScalarSparseMatrix,
- asEyeByVector = DiagonalSparseMatrix,
- asCovObject = ObjectMatrix,
- asScript = self.__with_directory(Script),
- toBeChecked = Checked,
+ name=Concept,
+ asCovariance=Matrix,
+ asEyeByScalar=ScalarSparseMatrix,
+ asEyeByVector=DiagonalSparseMatrix,
+ asCovObject=ObjectMatrix,
+ asScript=self.__with_directory(Script),
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setObservationOperator(
- self,
- Matrix = None,
- OneFunction = None,
- ThreeFunctions = None,
- AppliedInXb = None,
- Parameters = None,
- Script = None,
- ExtraArguments = None,
- Stored = False,
- PerformanceProfile = None,
- InputFunctionAsMulti = False,
- Checked = False ):
+ self,
+ Matrix=None,
+ OneFunction=None,
+ ThreeFunctions=None,
+ AppliedInXb=None,
+ Parameters=None,
+ Script=None,
+ ExtraArguments=None,
+ Stored=False,
+ PerformanceProfile=None,
+ InputFunctionAsMulti=False,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "ObservationOperator"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = FullOperator(
- name = Concept,
- asMatrix = Matrix,
- asOneFunction = OneFunction,
- asThreeFunctions = ThreeFunctions,
- asScript = self.__with_directory(Script),
- asDict = Parameters,
- appliedInX = AppliedInXb,
- extraArguments = ExtraArguments,
- performancePrf = PerformanceProfile,
- inputAsMF = InputFunctionAsMulti,
- scheduledBy = None,
- toBeChecked = Checked,
+ name=Concept,
+ asMatrix=Matrix,
+ asOneFunction=OneFunction,
+ asThreeFunctions=ThreeFunctions,
+ asScript=self.__with_directory(Script),
+ asDict=Parameters,
+ appliedInX=AppliedInXb,
+ extraArguments=ExtraArguments,
+ performancePrf=PerformanceProfile,
+ inputAsMF=InputFunctionAsMulti,
+ scheduledBy=None,
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setEvolutionModel(
- self,
- Matrix = None,
- OneFunction = None,
- ThreeFunctions = None,
- Parameters = None,
- Script = None,
- Scheduler = None,
- ExtraArguments = None,
- Stored = False,
- PerformanceProfile = None,
- InputFunctionAsMulti = False,
- Checked = False ):
+ self,
+ Matrix=None,
+ OneFunction=None,
+ ThreeFunctions=None,
+ Parameters=None,
+ Script=None,
+ Scheduler=None,
+ ExtraArguments=None,
+ Stored=False,
+ PerformanceProfile=None,
+ InputFunctionAsMulti=False,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "EvolutionModel"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = FullOperator(
- name = Concept,
- asMatrix = Matrix,
- asOneFunction = OneFunction,
- asThreeFunctions = ThreeFunctions,
- asScript = self.__with_directory(Script),
- asDict = Parameters,
- appliedInX = None,
- extraArguments = ExtraArguments,
- performancePrf = PerformanceProfile,
- inputAsMF = InputFunctionAsMulti,
- scheduledBy = Scheduler,
- toBeChecked = Checked,
+ name=Concept,
+ asMatrix=Matrix,
+ asOneFunction=OneFunction,
+ asThreeFunctions=ThreeFunctions,
+ asScript=self.__with_directory(Script),
+ asDict=Parameters,
+ appliedInX=None,
+ extraArguments=ExtraArguments,
+ performancePrf=PerformanceProfile,
+ inputAsMF=InputFunctionAsMulti,
+ scheduledBy=Scheduler,
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
return 0
def setControlModel(
- self,
- Matrix = None,
- OneFunction = None,
- ThreeFunctions = None,
- Parameters = None,
- Script = None,
- Scheduler = None,
- ExtraArguments = None,
- Stored = False,
- PerformanceProfile = None,
- InputFunctionAsMulti = False,
- Checked = False ):
+ self,
+ Matrix=None,
+ OneFunction=None,
+ ThreeFunctions=None,
+ Parameters=None,
+ Script=None,
+ Scheduler=None,
+ ExtraArguments=None,
+ Stored=False,
+ PerformanceProfile=None,
+ InputFunctionAsMulti=False,
+ Checked=False,
+ ):
"Définition d'un concept de calcul"
Concept = "ControlModel"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = FullOperator(
- name = Concept,
- asMatrix = Matrix,
- asOneFunction = OneFunction,
- asThreeFunctions = ThreeFunctions,
- asScript = self.__with_directory(Script),
- asDict = Parameters,
- appliedInX = None,
- extraArguments = ExtraArguments,
- performancePrf = PerformanceProfile,
- inputAsMF = InputFunctionAsMulti,
- scheduledBy = Scheduler,
- toBeChecked = Checked,
+ name=Concept,
+ asMatrix=Matrix,
+ asOneFunction=OneFunction,
+ asThreeFunctions=ThreeFunctions,
+ asScript=self.__with_directory(Script),
+ asDict=Parameters,
+ appliedInX=None,
+ extraArguments=ExtraArguments,
+ performancePrf=PerformanceProfile,
+ inputAsMF=InputFunctionAsMulti,
+ scheduledBy=Scheduler,
+ toBeChecked=Checked,
)
if Stored:
self.__StoredInputs[Concept] = self.__adaoObject[Concept].getO()
self.__directory = None
self.__StoredInputs["Directory"] = self.__directory
- def setDebug(self, __level = 10):
+ def setDebug(self, __level=10):
"NOTSET=0 < DEBUG=10 < INFO=20 < WARNING=30 < ERROR=40 < CRITICAL=50"
self.__case.register("setDebug", dir(), locals())
log = logging.getLogger()
- log.setLevel( __level )
- logging.debug("Mode debug initialisé avec %s %s"%(version.name, version.version))
- self.__StoredInputs["Debug"] = __level
+ log.setLevel(__level)
+ logging.debug(
+ "Mode debug initialisé avec %s %s" % (version.name, version.version)
+ )
+ self.__StoredInputs["Debug"] = __level
self.__StoredInputs["NoDebug"] = False
return 0
"NOTSET=0 < DEBUG=10 < INFO=20 < WARNING=30 < ERROR=40 < CRITICAL=50"
self.__case.register("setNoDebug", dir(), locals())
log = logging.getLogger()
- log.setLevel( logging.WARNING )
- self.__StoredInputs["Debug"] = logging.WARNING
+ log.setLevel(logging.WARNING)
+ self.__StoredInputs["Debug"] = logging.WARNING
self.__StoredInputs["NoDebug"] = True
return 0
- def setAlgorithmParameters(
- self,
- Algorithm = None,
- Parameters = None,
- Script = None ):
+ def setAlgorithmParameters(self, Algorithm=None, Parameters=None, Script=None):
"Définition d'un concept de calcul"
Concept = "AlgorithmParameters"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = AlgorithmAndParameters(
- name = Concept,
- asAlgorithm = Algorithm,
- asDict = Parameters,
- asScript = self.__with_directory(Script),
+ name=Concept,
+ asAlgorithm=Algorithm,
+ asDict=Parameters,
+ asScript=self.__with_directory(Script),
)
return 0
- def updateAlgorithmParameters(
- self,
- Parameters = None,
- Script = None ):
+ def updateAlgorithmParameters(self, Parameters=None, Script=None):
"Mise à jour d'un concept de calcul"
Concept = "AlgorithmParameters"
if Concept not in self.__adaoObject or self.__adaoObject[Concept] is None:
- raise ValueError("\n\nNo algorithm registred, set one before updating parameters or executing\n")
+            raise ValueError(
+                "\n\nNo algorithm registered, set one before updating parameters or executing\n"
+            )
self.__adaoObject[Concept].updateParameters(
- asDict = Parameters,
- asScript = self.__with_directory(Script),
+ asDict=Parameters,
+ asScript=self.__with_directory(Script),
)
# RaJ du register
return 0
- def setRegulationParameters(
- self,
- Algorithm = None,
- Parameters = None,
- Script = None ):
+ def setRegulationParameters(self, Algorithm=None, Parameters=None, Script=None):
"Définition d'un concept de calcul"
Concept = "RegulationParameters"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = RegulationAndParameters(
- name = Concept,
- asAlgorithm = Algorithm,
- asDict = Parameters,
- asScript = self.__with_directory(Script),
+ name=Concept,
+ asAlgorithm=Algorithm,
+ asDict=Parameters,
+ asScript=self.__with_directory(Script),
)
return 0
- def setSupplementaryParameters(
- self,
- Parameters = None,
- Script = None ):
+ def setSupplementaryParameters(self, Parameters=None, Script=None):
"Définition d'un concept de calcul"
Concept = "SupplementaryParameters"
self.__case.register("set" + Concept, dir(), locals())
self.__adaoObject[Concept] = ExternalParameters(
- name = Concept,
- asDict = Parameters,
- asScript = self.__with_directory(Script),
+ name=Concept,
+ asDict=Parameters,
+ asScript=self.__with_directory(Script),
)
return 0
- def updateSupplementaryParameters(
- self,
- Parameters = None,
- Script = None ):
+ def updateSupplementaryParameters(self, Parameters=None, Script=None):
"Mise à jour d'un concept de calcul"
Concept = "SupplementaryParameters"
if Concept not in self.__adaoObject or self.__adaoObject[Concept] is None:
- self.__adaoObject[Concept] = ExternalParameters(name = Concept)
+ self.__adaoObject[Concept] = ExternalParameters(name=Concept)
self.__adaoObject[Concept].updateParameters(
- asDict = Parameters,
- asScript = self.__with_directory(Script),
+ asDict=Parameters,
+ asScript=self.__with_directory(Script),
)
return 0
def setObserver(
- self,
- Variable = None,
- Template = None,
- String = None,
- Script = None,
- Info = None,
- ObjectFunction = None,
- CrossObs = False,
- SyncObs = True,
- Scheduler = None ):
+ self,
+ Variable=None,
+ Template=None,
+ String=None,
+ Script=None,
+ Info=None,
+ ObjectFunction=None,
+ CrossObs=False,
+ SyncObs=True,
+ Scheduler=None,
+ ):
"Définition d'un concept de calcul"
Concept = "Observer"
self.__case.register("set" + Concept, dir(), locals())
- self.__adaoObject[Concept].append( DataObserver(
- name = Concept,
- onVariable = Variable,
- asTemplate = Template,
- asString = String,
- asScript = self.__with_directory(Script),
- asObsObject = ObjectFunction,
- withInfo = Info,
- crossObs = CrossObs,
- syncObs = SyncObs,
- scheduledBy = Scheduler,
- withAlgo = self.__adaoObject["AlgorithmParameters"]
- ))
+ self.__adaoObject[Concept].append(
+ DataObserver(
+ name=Concept,
+ onVariable=Variable,
+ asTemplate=Template,
+ asString=String,
+ asScript=self.__with_directory(Script),
+ asObsObject=ObjectFunction,
+ withInfo=Info,
+ crossObs=CrossObs,
+ syncObs=SyncObs,
+ scheduledBy=Scheduler,
+ withAlgo=self.__adaoObject["AlgorithmParameters"],
+ )
+ )
return 0
- def removeObserver(
- self,
- Variable = None,
- ObjectFunction = None ):
+ def removeObserver(self, Variable=None, ObjectFunction=None):
"Permet de retirer un observer à une ou des variable nommées"
if "AlgorithmParameters" not in self.__adaoObject:
- raise ValueError("No algorithm registred, ask for one before removing observers")
+            raise ValueError(
+                "No algorithm registered, set one before removing observers"
+            )
#
# Vérification du nom de variable et typage
# -----------------------------------------
if isinstance(Variable, str):
VariableNames = (Variable,)
elif isinstance(Variable, list):
- VariableNames = tuple(map( str, Variable ))
+ VariableNames = tuple(map(str, Variable))
else:
- raise ValueError("The observer requires a name or a list of names of variables.")
+ raise ValueError(
+ "The observer requires a name or a list of names of variables."
+ )
#
# Association interne de l'observer à la variable
# -----------------------------------------------
for ename in VariableNames:
if ename not in self.__adaoObject["AlgorithmParameters"]:
- raise ValueError("An observer requires to be removed on a variable named %s which does not exist."%ename)
+                raise ValueError(
+                    "Cannot remove an observer from a variable named %s, which does not exist."
+                    % ename
+                )
else:
- return self.__adaoObject["AlgorithmParameters"].removeObserver( ename, ObjectFunction )
+ return self.__adaoObject["AlgorithmParameters"].removeObserver(
+ ename, ObjectFunction
+ )
- def setUserPostAnalysis(
- self,
- Template = None,
- String = None,
- Script = None ):
+ def setUserPostAnalysis(self, Template=None, String=None, Script=None):
"Définition d'un concept de calcul"
Concept = "UserPostAnalysis"
self.__case.register("set" + Concept, dir(), locals())
- self.__adaoObject[Concept].append( repr(UserScript(
- name = Concept,
- asTemplate = Template,
- asString = String,
- asScript = self.__with_directory(Script),
- )))
+ self.__adaoObject[Concept].append(
+ repr(
+ UserScript(
+ name=Concept,
+ asTemplate=Template,
+ asString=String,
+ asScript=self.__with_directory(Script),
+ )
+ )
+ )
return 0
# -----------------------------------------------------------
- def get(self, Concept=None, noDetails=True ):
+ def get(self, Concept=None, noDetails=True):
"Récupération d'une sortie du calcul"
if Concept is not None:
try:
- self.__case.register("get", dir(), locals(), Concept) # Break pickle in Python 2
+ self.__case.register(
+ "get", dir(), locals(), Concept
+ ) # Break pickle in Python 2
except Exception:
pass
if Concept in self.__StoredInputs:
return self.__StoredInputs[Concept]
#
- elif self.__adaoObject["AlgorithmParameters"] is not None and Concept == "AlgorithmParameters":
+ elif (
+ self.__adaoObject["AlgorithmParameters"] is not None
+ and Concept == "AlgorithmParameters"
+ ):
return self.__adaoObject["AlgorithmParameters"].get()
#
- elif self.__adaoObject["AlgorithmParameters"] is not None and Concept in self.__adaoObject["AlgorithmParameters"]:
- return self.__adaoObject["AlgorithmParameters"].get( Concept )
+ elif (
+ self.__adaoObject["AlgorithmParameters"] is not None
+ and Concept in self.__adaoObject["AlgorithmParameters"]
+ ):
+ return self.__adaoObject["AlgorithmParameters"].get(Concept)
#
- elif Concept == "AlgorithmRequiredParameters" and self.__adaoObject["AlgorithmParameters"] is not None:
- return self.__adaoObject["AlgorithmParameters"].getAlgorithmRequiredParameters(noDetails)
+ elif (
+ Concept == "AlgorithmRequiredParameters"
+ and self.__adaoObject["AlgorithmParameters"] is not None
+ ):
+ return self.__adaoObject[
+ "AlgorithmParameters"
+ ].getAlgorithmRequiredParameters(noDetails)
#
- elif Concept == "AlgorithmRequiredInputs" and self.__adaoObject["AlgorithmParameters"] is not None:
- return self.__adaoObject["AlgorithmParameters"].getAlgorithmInputArguments()
+ elif (
+ Concept == "AlgorithmRequiredInputs"
+ and self.__adaoObject["AlgorithmParameters"] is not None
+ ):
+ return self.__adaoObject[
+ "AlgorithmParameters"
+ ].getAlgorithmInputArguments()
#
- elif Concept == "AlgorithmAttributes" and self.__adaoObject["AlgorithmParameters"] is not None:
+ elif (
+ Concept == "AlgorithmAttributes"
+ and self.__adaoObject["AlgorithmParameters"] is not None
+ ):
return self.__adaoObject["AlgorithmParameters"].getAlgorithmAttributes()
#
- elif self.__adaoObject["SupplementaryParameters"] is not None and Concept == "SupplementaryParameters":
+ elif (
+ self.__adaoObject["SupplementaryParameters"] is not None
+ and Concept == "SupplementaryParameters"
+ ):
return self.__adaoObject["SupplementaryParameters"].get()
#
- elif self.__adaoObject["SupplementaryParameters"] is not None and Concept in self.__adaoObject["SupplementaryParameters"]:
- return self.__adaoObject["SupplementaryParameters"].get( Concept )
+ elif (
+ self.__adaoObject["SupplementaryParameters"] is not None
+ and Concept in self.__adaoObject["SupplementaryParameters"]
+ ):
+ return self.__adaoObject["SupplementaryParameters"].get(Concept)
#
else:
- raise ValueError("The requested key \"%s\" does not exists as an input or a stored variable."%Concept)
+                raise ValueError(
+                    'The requested key "%s" does not exist as an input or a stored variable.'
+                    % Concept
+                )
else:
allvariables = {}
- allvariables.update( {"AlgorithmParameters": self.__adaoObject["AlgorithmParameters"].get()} )
+ allvariables.update(
+ {"AlgorithmParameters": self.__adaoObject["AlgorithmParameters"].get()}
+ )
if self.__adaoObject["SupplementaryParameters"] is not None:
- allvariables.update( {"SupplementaryParameters": self.__adaoObject["SupplementaryParameters"].get()} )
+ allvariables.update(
+ {
+ "SupplementaryParameters": self.__adaoObject[
+ "SupplementaryParameters"
+ ].get()
+ }
+ )
# allvariables.update( self.__adaoObject["AlgorithmParameters"].get() )
- allvariables.update( self.__StoredInputs )
- allvariables.pop('Observer', None)
- allvariables.pop('UserPostAnalysis', None)
+ allvariables.update(self.__StoredInputs)
+ allvariables.pop("Observer", None)
+ allvariables.pop("UserPostAnalysis", None)
return allvariables
# -----------------------------------------------------------
identifiés par les chaînes de caractères. L'algorithme doit avoir été
préalablement choisi sinon la méthode renvoie "None".
"""
- if len(list(self.__adaoObject["AlgorithmParameters"].keys())) == 0 and \
- len(list(self.__StoredInputs.keys())) == 0:
+ if (
+ len(list(self.__adaoObject["AlgorithmParameters"].keys())) == 0
+ and len(list(self.__StoredInputs.keys())) == 0
+ ):
return None
else:
variables = []
if len(list(self.__adaoObject["AlgorithmParameters"].keys())) > 0:
variables.extend(list(self.__adaoObject["AlgorithmParameters"].keys()))
- if self.__adaoObject["SupplementaryParameters"] is not None and \
- len(list(self.__adaoObject["SupplementaryParameters"].keys())) > 0:
- variables.extend(list(self.__adaoObject["SupplementaryParameters"].keys()))
+ if (
+ self.__adaoObject["SupplementaryParameters"] is not None
+ and len(list(self.__adaoObject["SupplementaryParameters"].keys())) > 0
+ ):
+ variables.extend(
+ list(self.__adaoObject["SupplementaryParameters"].keys())
+ )
if len(list(self.__StoredInputs.keys())) > 0:
- variables.extend( list(self.__StoredInputs.keys()) )
- variables.remove('Observer')
- variables.remove('UserPostAnalysis')
+ variables.extend(list(self.__StoredInputs.keys()))
+ variables.remove("Observer")
+ variables.remove("UserPostAnalysis")
variables.sort()
return variables
continue
with open(os.path.join(trypath, fname)) as fc:
iselal = bool("class ElementaryAlgorithm" in fc.read())
- if iselal and ext == '.py' and root != '__init__':
+ if iselal and ext == ".py" and root != "__init__":
files.append(root)
files.sort()
return files
se trouve un sous-répertoire "daAlgorithms"
"""
if not os.path.isdir(Path):
- raise ValueError("The given " + Path + " argument must exist as a directory")
+ raise ValueError(
+ "The given " + Path + " argument must exist as a directory"
+ )
if not os.path.isdir(os.path.join(Path, "daAlgorithms")):
- raise ValueError("The given \"" + Path + "\" argument must contain a subdirectory named \"daAlgorithms\"")
+ raise ValueError(
+ 'The given "'
+ + Path
+ + '" argument must contain a subdirectory named "daAlgorithms"'
+ )
if not os.path.isfile(os.path.join(Path, "daAlgorithms", "__init__.py")):
- raise ValueError("The given \"" + Path + "/daAlgorithms\" path must contain a file named \"__init__.py\"")
+ raise ValueError(
+ 'The given "'
+ + Path
+ + '/daAlgorithms" path must contain a file named "__init__.py"'
+ )
sys.path.insert(0, os.path.abspath(Path))
- sys.path = PlatformInfo.uniq( sys.path ) # Conserve en unique exemplaire chaque chemin
+ sys.path = PlatformInfo.uniq(
+ sys.path
+ ) # Conserve en unique exemplaire chaque chemin
return 0
# -----------------------------------------------------------
Operator.CM.clearCache()
try:
if Executor == "YACS":
- self.__executeYACSScheme( SaveCaseInFile )
+ self.__executeYACSScheme(SaveCaseInFile)
else:
- self.__executePythonScheme( SaveCaseInFile )
+ self.__executePythonScheme(SaveCaseInFile)
except Exception as e:
if isinstance(e, SyntaxError):
- msg = "at %s: %s"%(e.offset, e.text)
+ msg = "at %s: %s" % (e.offset, e.text)
else:
msg = ""
- raise ValueError((
- "during execution, the following error occurs:\n" + \
- "\n%s %s\n\nSee also the potential messages, " + \
- "which can show the origin of the above error, " + \
- "in the launching terminal.\n")%(str(e), msg))
+            raise ValueError(
+                (
+                    "during execution, the following error occurred:\n"
+                    + "\n%s %s\n\nSee also any messages in the launching "
+                    + "terminal; they may show the origin of the above error.\n"
+                )
+                % (str(e), msg)
+            )
return 0
def __executePythonScheme(self, FileName=None):
"Lancement du calcul"
self.__case.register("executePythonScheme", dir(), locals())
if FileName is not None:
- self.dump( FileName, "TUI")
- self.__adaoObject["AlgorithmParameters"].executePythonScheme( self.__adaoObject )
- if "UserPostAnalysis" in self.__adaoObject and len(self.__adaoObject["UserPostAnalysis"]) > 0:
+ self.dump(FileName, "TUI")
+ self.__adaoObject["AlgorithmParameters"].executePythonScheme(self.__adaoObject)
+ if (
+ "UserPostAnalysis" in self.__adaoObject
+ and len(self.__adaoObject["UserPostAnalysis"]) > 0
+ ):
self.__objname = self.__retrieve_objname()
for __UpaOne in self.__adaoObject["UserPostAnalysis"]:
__UpaOne = eval(str(__UpaOne))
- exec(__UpaOne, {}, {'self': self, 'ADD': self, 'case': self, 'adaopy': self, self.__objname: self})
+ exec(
+ __UpaOne,
+ {},
+ {
+ "self": self,
+ "ADD": self,
+ "case": self,
+ "adaopy": self,
+ self.__objname: self,
+ },
+ )
return 0
def __executeYACSScheme(self, FileName=None):
"Lancement du calcul"
self.__case.register("executeYACSScheme", dir(), locals())
- self.dump( FileName, "YACS")
- self.__adaoObject["AlgorithmParameters"].executeYACSScheme( FileName )
+ self.dump(FileName, "YACS")
+ self.__adaoObject["AlgorithmParameters"].executeYACSScheme(FileName)
return 0
# -----------------------------------------------------------
"Chargement normalisé des commandes"
__commands = self.__case.load(FileName, Content, Object, Formater)
from numpy import array, matrix # noqa: F401
+
for __command in __commands:
- if (__command.find("set") > -1 and __command.find("set_") < 0) or 'UserPostAnalysis' in __command:
+ if (
+ __command.find("set") > -1 and __command.find("set_") < 0
+ ) or "UserPostAnalysis" in __command:
exec("self." + __command, {}, locals())
else:
self.__PostAnalysis.append(__command)
return self
- def convert(self,
- FileNameFrom=None, ContentFrom=None, ObjectFrom=None, FormaterFrom="TUI",
- FileNameTo=None, FormaterTo="TUI" ):
+ def convert(
+ self,
+ FileNameFrom=None,
+ ContentFrom=None,
+ ObjectFrom=None,
+ FormaterFrom="TUI",
+ FileNameTo=None,
+ FormaterTo="TUI",
+ ):
"Conversion normalisée des commandes"
return self.load(
- FileName=FileNameFrom, Content=ContentFrom, Object=ObjectFrom, Formater=FormaterFrom
- ).dump(
- FileName=FileNameTo, Formater=FormaterTo
- )
+ FileName=FileNameFrom,
+ Content=ContentFrom,
+ Object=ObjectFrom,
+ Formater=FormaterFrom,
+ ).dump(FileName=FileNameTo, Formater=FormaterTo)
def clear(self):
"Effacement du contenu du cas en cours"
def __retrieve_objname(self):
"Ne pas utiliser dans le __init__, la variable appelante n'existe pas encore"
- __names = []
+ __names = []
for level in reversed(inspect.stack()):
- __names += [name for name, value in level.frame.f_locals.items() if value is self]
+ __names += [
+ name for name, value in level.frame.f_locals.items() if value is self
+ ]
__names += [name for name, value in globals().items() if value is self]
- while 'self' in __names:
- __names.remove('self') # Devrait toujours être trouvé, donc pas d'erreur
+ while "self" in __names:
+ __names.remove("self") # Devrait toujours être trouvé, donc pas d'erreur
if len(__names) > 0:
self.__objname = __names[0]
else:
def __dir__(self):
"Clarifie la visibilité des méthodes"
- return ['set', 'get', 'execute', 'dump', 'load', '__doc__', '__init__', '__module__']
+ return [
+ "set",
+ "get",
+ "execute",
+ "dump",
+ "load",
+ "__doc__",
+ "__init__",
+ "__module__",
+ ]
def __str__(self):
"Représentation pour impression (mais pas repr)"
- msg = self.dump(None, "SimpleReportInPlainTxt")
+ msg = self.dump(None, "SimpleReportInPlainTxt")
return msg
def sysinfo(self, title=""):
return msg
def callinfo(self, __prefix=" "):
- msg = ""
+ msg = ""
for oname in ["ObservationOperator", "EvolutionModel"]:
if hasattr(self.__adaoObject[oname], "nbcalls"):
ostats = self.__adaoObject[oname].nbcalls()
- msg += "\n%sNumber of calls for the %s:"%(__prefix, oname)
+ msg += "\n%sNumber of calls for the %s:" % (__prefix, oname)
for otype in ["Direct", "Tangent", "Adjoint"]:
if otype in ostats:
- msg += "\n%s%30s : %s"%(__prefix, "%s evaluation"%(otype,), ostats[otype][0])
+ msg += "\n%s%30s : %s" % (
+ __prefix,
+ "%s evaluation" % (otype,),
+ ostats[otype][0],
+ )
msg += "\n"
return msg
def prepare_to_pickle(self):
"Retire les variables non pickelisables, avec recopie efficace"
- if self.__adaoObject['AlgorithmParameters'] is not None:
- for k in self.__adaoObject['AlgorithmParameters'].keys():
+ if self.__adaoObject["AlgorithmParameters"] is not None:
+ for k in self.__adaoObject["AlgorithmParameters"].keys():
if k == "Algorithm":
continue
if k in self.__StoredInputs:
- raise ValueError("The key \"%s\" to be transfered for pickling will overwrite an existing one."%(k,))
- if self.__adaoObject['AlgorithmParameters'].hasObserver( k ):
- self.__adaoObject['AlgorithmParameters'].removeObserver( k, "", True )
- self.__StoredInputs[k] = self.__adaoObject['AlgorithmParameters'].pop(k, None)
+                    raise ValueError(
+                        'The key "%s" to be transferred for pickling will overwrite an existing one.'
+                        % (k,)
+                    )
+ if self.__adaoObject["AlgorithmParameters"].hasObserver(k):
+ self.__adaoObject["AlgorithmParameters"].removeObserver(k, "", True)
+ self.__StoredInputs[k] = self.__adaoObject["AlgorithmParameters"].pop(
+ k, None
+ )
if sys.version_info[0] == 2:
- del self.__adaoObject # Because it breaks pickle in Python 2. Not required for Python 3
- del self.__case # Because it breaks pickle in Python 2. Not required for Python 3
+            # Breaks pickle in Python 2; not required for Python 3
+            del self.__adaoObject
+            del self.__case
if sys.version_info.major < 3:
return 0
else:
return self.__StoredInputs
+
# ==============================================================================
if __name__ == "__main__":
print("\n AUTODIAGNOSTIC\n")
from daCore.Aidsm import Aidsm as _Aidsm
+
# ==============================================================================
class AssimilationStudy(_Aidsm):
"""
Generic ADAO TUI builder
"""
+
__slots__ = ()
- def __init__(self, name = ""):
+ def __init__(self, name=""):
_Aidsm.__init__(self, name)
+
# ==============================================================================
if __name__ == "__main__":
print("\n AUTODIAGNOSTIC\n")
from daCore import Interfaces
from daCore import Templates
+
# ==============================================================================
class CacheManager(object):
"""
Classe générale de gestion d'un cache de calculs
"""
+
__slots__ = (
- "__tolerBP", "__lengthOR", "__initlnOR", "__seenNames", "__enabled",
+ "__tolerBP",
+ "__lengthOR",
+ "__initlnOR",
+ "__seenNames",
+ "__enabled",
"__listOPCV",
)
- def __init__(self,
- toleranceInRedundancy = 1.e-18,
- lengthOfRedundancy = -1 ):
+ def __init__(self, toleranceInRedundancy=1.0e-18, lengthOfRedundancy=-1):
"""
Les caractéristiques de tolérance peuvent être modifiées à la création.
"""
- self.__tolerBP = float(toleranceInRedundancy)
- self.__lengthOR = int(lengthOfRedundancy)
- self.__initlnOR = self.__lengthOR
+ self.__tolerBP = float(toleranceInRedundancy)
+ self.__lengthOR = int(lengthOfRedundancy)
+ self.__initlnOR = self.__lengthOR
self.__seenNames = []
- self.__enabled = True
+ self.__enabled = True
self.clearCache()
def clearCache(self):
"Vide le cache"
- self.__listOPCV = []
+ self.__listOPCV = []
self.__seenNames = []
- def wasCalculatedIn(self, xValue, oName="" ):
+ def wasCalculatedIn(self, xValue, oName=""):
"Vérifie l'existence d'un calcul correspondant à la valeur"
__alc = False
__HxV = None
if self.__enabled:
for i in range(min(len(self.__listOPCV), self.__lengthOR) - 1, -1, -1):
- if not hasattr(xValue, 'size'):
+ if not hasattr(xValue, "size"):
pass
- elif (str(oName) != self.__listOPCV[i][3]):
+ elif str(oName) != self.__listOPCV[i][3]:
pass
- elif (xValue.size != self.__listOPCV[i][0].size):
+ elif xValue.size != self.__listOPCV[i][0].size:
pass
- elif (numpy.ravel(xValue)[0] - self.__listOPCV[i][0][0]) > (self.__tolerBP * self.__listOPCV[i][2] / self.__listOPCV[i][0].size):
+ elif (numpy.ravel(xValue)[0] - self.__listOPCV[i][0][0]) > (
+ self.__tolerBP * self.__listOPCV[i][2] / self.__listOPCV[i][0].size
+ ):
pass
- elif numpy.linalg.norm(numpy.ravel(xValue) - self.__listOPCV[i][0]) < (self.__tolerBP * self.__listOPCV[i][2]):
- __alc = True
+ elif numpy.linalg.norm(numpy.ravel(xValue) - self.__listOPCV[i][0]) < (
+ self.__tolerBP * self.__listOPCV[i][2]
+ ):
+ __alc = True
__HxV = self.__listOPCV[i][1]
break
return __alc, __HxV
- def storeValueInX(self, xValue, HxValue, oName="" ):
+ def storeValueInX(self, xValue, HxValue, oName=""):
"Stocke pour un opérateur o un calcul Hx correspondant à la valeur x"
if self.__lengthOR < 0:
self.__lengthOR = 2 * min(numpy.size(xValue), 50) + 2
self.__seenNames.append(str(oName))
while len(self.__listOPCV) > self.__lengthOR:
self.__listOPCV.pop(0)
- self.__listOPCV.append((
- copy.copy(numpy.ravel(xValue)), # 0 Previous point
- copy.copy(HxValue), # 1 Previous value
- numpy.linalg.norm(xValue), # 2 Norm
- str(oName), # 3 Operator name
- ))
+ self.__listOPCV.append(
+ (
+ copy.copy(numpy.ravel(xValue)), # 0 Previous point
+ copy.copy(HxValue), # 1 Previous value
+ numpy.linalg.norm(xValue), # 2 Norm
+ str(oName), # 3 Operator name
+ )
+ )
def disable(self):
"Inactive le cache"
self.__initlnOR = self.__lengthOR
self.__lengthOR = 0
- self.__enabled = False
+ self.__enabled = False
def enable(self):
"Active le cache"
self.__lengthOR = self.__initlnOR
- self.__enabled = True
+ self.__enabled = True
+
# ==============================================================================
class Operator(object):
"""
Classe générale d'interface de type opérateur simple
"""
+
__slots__ = (
- "__name", "__NbCallsAsMatrix", "__NbCallsAsMethod",
- "__NbCallsOfCached", "__reduceM", "__avoidRC", "__inputAsMF",
- "__mpEnabled", "__extraArgs", "__Method", "__Matrix", "__Type",
+ "__name",
+ "__NbCallsAsMatrix",
+ "__NbCallsAsMethod",
+ "__NbCallsOfCached",
+ "__reduceM",
+ "__avoidRC",
+ "__inputAsMF",
+ "__mpEnabled",
+ "__extraArgs",
+ "__Method",
+ "__Matrix",
+ "__Type",
)
#
NbCallsAsMatrix = 0
NbCallsOfCached = 0
CM = CacheManager()
- def __init__(self,
- name = "GenericOperator",
- fromMethod = None,
- fromMatrix = None,
- avoidingRedundancy = True,
- reducingMemoryUse = False,
- inputAsMultiFunction = False,
- enableMultiProcess = False,
- extraArguments = None ):
+ def __init__(
+ self,
+ name="GenericOperator",
+ fromMethod=None,
+ fromMatrix=None,
+ avoidingRedundancy=True,
+ reducingMemoryUse=False,
+ inputAsMultiFunction=False,
+ enableMultiProcess=False,
+ extraArguments=None,
+ ):
"""
On construit un objet de ce type en fournissant, à l'aide de l'un des
deux mots-clé, soit une fonction ou un multi-fonction python, soit une
- extraArguments : arguments supplémentaires passés à la fonction de
base et ses dérivées (tuple ou dictionnaire)
"""
- self.__name = str(name)
+ self.__name = str(name)
self.__NbCallsAsMatrix, self.__NbCallsAsMethod, self.__NbCallsOfCached = 0, 0, 0
- self.__reduceM = bool( reducingMemoryUse )
- self.__avoidRC = bool( avoidingRedundancy )
- self.__inputAsMF = bool( inputAsMultiFunction )
- self.__mpEnabled = bool( enableMultiProcess )
+ self.__reduceM = bool(reducingMemoryUse)
+ self.__avoidRC = bool(avoidingRedundancy)
+ self.__inputAsMF = bool(inputAsMultiFunction)
+ self.__mpEnabled = bool(enableMultiProcess)
self.__extraArgs = extraArguments
if fromMethod is not None and self.__inputAsMF:
self.__Method = fromMethod # logtimer(fromMethod)
self.__Matrix = None
- self.__Type = "Method"
+ self.__Type = "Method"
elif fromMethod is not None and not self.__inputAsMF:
- self.__Method = partial( MultiFonction, _sFunction=fromMethod, _mpEnabled=self.__mpEnabled)
+ self.__Method = partial(
+ MultiFonction, _sFunction=fromMethod, _mpEnabled=self.__mpEnabled
+ )
self.__Matrix = None
- self.__Type = "Method"
+ self.__Type = "Method"
elif fromMatrix is not None:
self.__Method = None
if isinstance(fromMatrix, str):
- fromMatrix = PlatformInfo.strmatrix2liststr( fromMatrix )
- self.__Matrix = numpy.asarray( fromMatrix, dtype=float )
- self.__Type = "Matrix"
+ fromMatrix = PlatformInfo.strmatrix2liststr(fromMatrix)
+ self.__Matrix = numpy.asarray(fromMatrix, dtype=float)
+ self.__Type = "Matrix"
else:
self.__Method = None
self.__Matrix = None
- self.__Type = None
+ self.__Type = None
def disableAvoidingRedundancy(self):
"Inactive le cache"
"Renvoie le type"
return self.__Type
- def appliedTo(self, xValue, HValue = None, argsAsSerie = False, returnSerieAsArrayMatrix = False):
+ def appliedTo(
+ self, xValue, HValue=None, argsAsSerie=False, returnSerieAsArrayMatrix=False
+ ):
"""
Permet de restituer le résultat de l'application de l'opérateur à une
série d'arguments xValue. Cette méthode se contente d'appliquer, chaque
_HValue = (HValue,)
else:
_HValue = HValue
- PlatformInfo.isIterable( _xValue, True, " in Operator.appliedTo" )
+ PlatformInfo.isIterable(_xValue, True, " in Operator.appliedTo")
#
if _HValue is not None:
- assert len(_xValue) == len(_HValue), "Incompatible number of elements in xValue and HValue"
+ assert len(_xValue) == len(
+ _HValue
+ ), "Incompatible number of elements in xValue and HValue"
_HxValue = []
for i in range(len(_HValue)):
- _HxValue.append( _HValue[i] )
+ _HxValue.append(_HValue[i])
if self.__avoidRC:
Operator.CM.storeValueInX(_xValue[i], _HxValue[-1], self.__name)
else:
_hindex = []
for i, xv in enumerate(_xValue):
if self.__avoidRC:
- __alreadyCalculated, __HxV = Operator.CM.wasCalculatedIn(xv, self.__name)
+ __alreadyCalculated, __HxV = Operator.CM.wasCalculatedIn(
+ xv, self.__name
+ )
else:
__alreadyCalculated = False
#
_hv = self.__Matrix @ numpy.ravel(xv)
else:
self.__addOneMethodCall()
- _xserie.append( xv )
- _hindex.append( i )
+ _xserie.append(xv)
+ _hindex.append(i)
_hv = None
- _HxValue.append( _hv )
+ _HxValue.append(_hv)
#
if len(_xserie) > 0 and self.__Matrix is None:
if self.__extraArgs is None:
- _hserie = self.__Method( _xserie ) # Calcul MF
+ _hserie = self.__Method(_xserie) # Calcul MF
else:
- _hserie = self.__Method( _xserie, self.__extraArgs ) # Calcul MF
+ _hserie = self.__Method(_xserie, self.__extraArgs) # Calcul MF
if not hasattr(_hserie, "pop"):
raise TypeError(
- "The user input multi-function doesn't seem to return a" + \
- " result sequence, behaving like a mono-function. It has" + \
- " to be checked." )
+                    "The user-input multi-function does not seem to return a"
+                    + " result sequence; it behaves like a mono-function and"
+                    + " has to be checked."
+                )
for i in _hindex:
_xv = _xserie.pop(0)
_hv = _hserie.pop(0)
if returnSerieAsArrayMatrix:
_HxValue = numpy.stack([numpy.ravel(_hv) for _hv in _HxValue], axis=1)
#
- if argsAsSerie: return _HxValue # noqa: E701
- else: return _HxValue[-1] # noqa: E241,E272,E701
+ if argsAsSerie:
+ return _HxValue
+ else:
+ return _HxValue[-1]
- def appliedControledFormTo(self, paires, argsAsSerie = False, returnSerieAsArrayMatrix = False):
+ def appliedControledFormTo(
+ self, paires, argsAsSerie=False, returnSerieAsArrayMatrix=False
+ ):
"""
Permet de restituer le résultat de l'application de l'opérateur à des
paires (xValue, uValue). Cette méthode se contente d'appliquer, son
- uValue : argument U adapté pour appliquer l'opérateur
- argsAsSerie : indique si l'argument est une mono ou multi-valeur
"""
- if argsAsSerie: _xuValue = paires # noqa: E701
- else: _xuValue = (paires,) # noqa: E241,E272,E701
- PlatformInfo.isIterable( _xuValue, True, " in Operator.appliedControledFormTo" )
+ if argsAsSerie:
+ _xuValue = paires
+ else:
+ _xuValue = (paires,)
+ PlatformInfo.isIterable(_xuValue, True, " in Operator.appliedControledFormTo")
#
if self.__Matrix is not None:
_HxValue = []
for paire in _xuValue:
_xValue, _uValue = paire
self.__addOneMatrixCall()
- _HxValue.append( self.__Matrix @ numpy.ravel(_xValue) )
+ _HxValue.append(self.__Matrix @ numpy.ravel(_xValue))
else:
_xuArgs = []
for paire in _xuValue:
_xValue, _uValue = paire
if _uValue is not None:
- _xuArgs.append( paire )
+ _xuArgs.append(paire)
else:
- _xuArgs.append( _xValue )
- self.__addOneMethodCall( len(_xuArgs) )
+ _xuArgs.append(_xValue)
+ self.__addOneMethodCall(len(_xuArgs))
if self.__extraArgs is None:
- _HxValue = self.__Method( _xuArgs ) # Calcul MF
+ _HxValue = self.__Method(_xuArgs) # Calcul MF
else:
- _HxValue = self.__Method( _xuArgs, self.__extraArgs ) # Calcul MF
+ _HxValue = self.__Method(_xuArgs, self.__extraArgs) # Calcul MF
#
if returnSerieAsArrayMatrix:
_HxValue = numpy.stack([numpy.ravel(_hv) for _hv in _HxValue], axis=1)
#
- if argsAsSerie: return _HxValue # noqa: E701
- else: return _HxValue[-1] # noqa: E241,E272,E701
+ if argsAsSerie:
+ return _HxValue
+ else:
+ return _HxValue[-1]
- def appliedInXTo(self, paires, argsAsSerie = False, returnSerieAsArrayMatrix = False):
+ def appliedInXTo(self, paires, argsAsSerie=False, returnSerieAsArrayMatrix=False):
"""
Permet de restituer le résultat de l'application de l'opérateur à une
série d'arguments xValue, sachant que l'opérateur est valable en
- xValue : série d'arguments adaptés pour appliquer l'opérateur
- argsAsSerie : indique si l'argument est une mono ou multi-valeur
"""
- if argsAsSerie: _nxValue = paires # noqa: E701
- else: _nxValue = (paires,) # noqa: E241,E272,E701
- PlatformInfo.isIterable( _nxValue, True, " in Operator.appliedInXTo" )
+ if argsAsSerie:
+ _nxValue = paires
+ else:
+ _nxValue = (paires,)
+ PlatformInfo.isIterable(_nxValue, True, " in Operator.appliedInXTo")
#
if self.__Matrix is not None:
_HxValue = []
for paire in _nxValue:
_xNominal, _xValue = paire
self.__addOneMatrixCall()
- _HxValue.append( self.__Matrix @ numpy.ravel(_xValue) )
+ _HxValue.append(self.__Matrix @ numpy.ravel(_xValue))
else:
- self.__addOneMethodCall( len(_nxValue) )
+ self.__addOneMethodCall(len(_nxValue))
if self.__extraArgs is None:
- _HxValue = self.__Method( _nxValue ) # Calcul MF
+ _HxValue = self.__Method(_nxValue) # Calcul MF
else:
- _HxValue = self.__Method( _nxValue, self.__extraArgs ) # Calcul MF
+ _HxValue = self.__Method(_nxValue, self.__extraArgs) # Calcul MF
#
if returnSerieAsArrayMatrix:
_HxValue = numpy.stack([numpy.ravel(_hv) for _hv in _HxValue], axis=1)
#
- if argsAsSerie: return _HxValue # noqa: E701
- else: return _HxValue[-1] # noqa: E241,E272,E701
+ if argsAsSerie:
+ return _HxValue
+ else:
+ return _HxValue[-1]
- def asMatrix(self, ValueForMethodForm = "UnknownVoidValue", argsAsSerie = False):
+ def asMatrix(self, ValueForMethodForm="UnknownVoidValue", argsAsSerie=False):
"""
Permet de renvoyer l'opérateur sous la forme d'une matrice
"""
if self.__Matrix is not None:
self.__addOneMatrixCall()
- mValue = [self.__Matrix,]
- elif not isinstance(ValueForMethodForm, str) or ValueForMethodForm != "UnknownVoidValue": # Ne pas utiliser "None"
+ mValue = [
+ self.__Matrix,
+ ]
+ elif (
+ not isinstance(ValueForMethodForm, str)
+ or ValueForMethodForm != "UnknownVoidValue"
+ ): # Ne pas utiliser "None"
mValue = []
if argsAsSerie:
- self.__addOneMethodCall( len(ValueForMethodForm) )
+ self.__addOneMethodCall(len(ValueForMethodForm))
for _vfmf in ValueForMethodForm:
- mValue.append( self.__Method(((_vfmf, None),)) )
+ mValue.append(self.__Method(((_vfmf, None),)))
else:
self.__addOneMethodCall()
mValue = self.__Method(((ValueForMethodForm, None),))
else:
- raise ValueError("Matrix form of the operator defined as a function/method requires to give an operating point.")
+ raise ValueError(
+ "Matrix form of the operator defined as a function/method requires an operating point to be given."
+ )
#
- if argsAsSerie: return mValue # noqa: E701
- else: return mValue[-1] # noqa: E241,E272,E701
+ if argsAsSerie:
+ return mValue
+ else:
+ return mValue[-1]
def shape(self):
"""
if self.__Matrix is not None:
return self.__Matrix.shape
else:
- raise ValueError("Matrix form of the operator is not available, nor the shape")
+ raise ValueError(
+ "Matrix form of the operator is not available, nor is its shape"
+ )
def nbcalls(self, which=None):
"""
Operator.NbCallsAsMethod,
Operator.NbCallsOfCached,
)
- if which is None: return __nbcalls # noqa: E701
- else: return __nbcalls[which] # noqa: E241,E272,E701
+ if which is None:
+ return __nbcalls
+ else:
+ return __nbcalls[which]
def __addOneMatrixCall(self):
"Comptabilise un appel"
- self.__NbCallsAsMatrix += 1 # Decompte local
+ self.__NbCallsAsMatrix += 1 # Decompte local
Operator.NbCallsAsMatrix += 1 # Decompte global
- def __addOneMethodCall(self, nb = 1):
+ def __addOneMethodCall(self, nb=1):
"Comptabilise un appel"
- self.__NbCallsAsMethod += nb # Decompte local
+ self.__NbCallsAsMethod += nb # Decompte local
Operator.NbCallsAsMethod += nb # Decompte global
def __addOneCacheCall(self):
"Comptabilise un appel"
- self.__NbCallsOfCached += 1 # Décompte local
+ self.__NbCallsOfCached += 1 # Décompte local
Operator.NbCallsOfCached += 1 # Décompte global
+
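The `applied*` methods above all share the same mono/multi-value calling convention: when `argsAsSerie` is true the argument is already a series and the whole result series is returned, otherwise the single argument is wrapped in a one-element tuple and only the last (i.e. only) result is returned. A minimal standalone sketch of that convention, with hypothetical names (this is not ADAO's actual API):

```python
def applied_to(operator, x_value, args_as_serie=False):
    """Apply 'operator' to one value or to a series of values,
    mirroring the argsAsSerie convention of the applied* methods."""
    # Wrap a mono-value in a one-element tuple so one code path serves both cases.
    x_serie = x_value if args_as_serie else (x_value,)
    hx_serie = [operator(x) for x in x_serie]
    if args_as_serie:
        return hx_serie       # multi-value call: return the whole series
    else:
        return hx_serie[-1]   # mono-value call: return the single result
```

For example, `applied_to(lambda v: 2 * v, 3)` returns `6`, while `applied_to(lambda v: 2 * v, [1, 2], args_as_serie=True)` returns `[2, 4]`.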
# ==============================================================================
class FullOperator(object):
"""
Classe générale d'interface de type opérateur complet
(Direct, Linéaire Tangent, Adjoint)
"""
+
__slots__ = (
- "__name", "__check", "__extraArgs", "__FO", "__T",
+ "__name",
+ "__check",
+ "__extraArgs",
+ "__FO",
+ "__T",
)
- def __init__(self,
- name = "GenericFullOperator",
- asMatrix = None,
- asOneFunction = None, # 1 Fonction
- asThreeFunctions = None, # 3 Fonctions in a dictionary
- asScript = None, # 1 or 3 Fonction(s) by script
- asDict = None, # Parameters
- appliedInX = None,
- extraArguments = None,
- performancePrf = None,
- inputAsMF = False, # Fonction(s) as Multi-Functions
- scheduledBy = None,
- toBeChecked = False ):
- ""
- self.__name = str(name)
- self.__check = bool(toBeChecked)
+ def __init__(
+ self,
+ name="GenericFullOperator",
+ asMatrix=None,
+ asOneFunction=None, # 1 Fonction
+ asThreeFunctions=None, # 3 Fonctions in a dictionary
+ asScript=None, # 1 or 3 Fonction(s) by script
+ asDict=None, # Parameters
+ appliedInX=None,
+ extraArguments=None,
+ performancePrf=None,
+ inputAsMF=False, # Fonction(s) as Multi-Functions
+ scheduledBy=None,
+ toBeChecked=False,
+ ):
+ """"""
+ self.__name = str(name)
+ self.__check = bool(toBeChecked)
self.__extraArgs = extraArguments
#
- self.__FO = {}
+ self.__FO = {}
#
__Parameters = {}
if (asDict is not None) and isinstance(asDict, dict):
- __Parameters.update( asDict ) # Copie mémoire
+ __Parameters.update(asDict) # Copie mémoire
# Deprecated parameters
__Parameters = self.__deprecateOpt(
- collection = __Parameters,
- oldn = "EnableMultiProcessing",
- newn = "EnableWiseParallelism",
+ collection=__Parameters,
+ oldn="EnableMultiProcessing",
+ newn="EnableWiseParallelism",
)
__Parameters = self.__deprecateOpt(
- collection = __Parameters,
- oldn = "EnableMultiProcessingInEvaluation",
- newn = "EnableParallelEvaluations",
+ collection=__Parameters,
+ oldn="EnableMultiProcessingInEvaluation",
+ newn="EnableParallelEvaluations",
)
__Parameters = self.__deprecateOpt(
- collection = __Parameters,
- oldn = "EnableMultiProcessingInDerivatives",
- newn = "EnableParallelDerivatives",
+ collection=__Parameters,
+ oldn="EnableMultiProcessingInDerivatives",
+ newn="EnableParallelDerivatives",
)
# Priorité à EnableParallelDerivatives=True
- if "EnableWiseParallelism" in __Parameters and __Parameters["EnableWiseParallelism"]:
+ if (
+ "EnableWiseParallelism" in __Parameters
+ and __Parameters["EnableWiseParallelism"]
+ ):
__Parameters["EnableParallelDerivatives"] = True
- __Parameters["EnableParallelEvaluations"] = False
+ __Parameters["EnableParallelEvaluations"] = False
if "EnableParallelDerivatives" not in __Parameters:
- __Parameters["EnableParallelDerivatives"] = False
+ __Parameters["EnableParallelDerivatives"] = False
if __Parameters["EnableParallelDerivatives"]:
- __Parameters["EnableParallelEvaluations"] = False
+ __Parameters["EnableParallelEvaluations"] = False
if "EnableParallelEvaluations" not in __Parameters:
- __Parameters["EnableParallelEvaluations"] = False
+ __Parameters["EnableParallelEvaluations"] = False
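The precedence rules applied just above to the parallelism switches can be summarised in isolation (the dictionary keys are taken from the code; the helper function is a hypothetical sketch): `EnableWiseParallelism` forces parallel derivatives on, and parallel derivatives in turn force parallel evaluations off.

```python
def resolve_parallelism(params):
    """Sketch of the switch precedence used in FullOperator.__init__."""
    p = dict(params)  # work on a copy, as the original does with __Parameters
    # Priority to EnableParallelDerivatives=True when wise parallelism is on
    if p.get("EnableWiseParallelism"):
        p["EnableParallelDerivatives"] = True
        p["EnableParallelEvaluations"] = False
    p.setdefault("EnableParallelDerivatives", False)
    # Parallel derivatives exclude parallel evaluations
    if p["EnableParallelDerivatives"]:
        p["EnableParallelEvaluations"] = False
    p.setdefault("EnableParallelEvaluations", False)
    return p
```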
if "withIncrement" in __Parameters: # Temporaire
__Parameters["DifferentialIncrement"] = __Parameters["withIncrement"]
#
if asScript is not None:
__Matrix, __Function = None, None
if asMatrix:
- __Matrix = Interfaces.ImportFromScript(asScript).getvalue( self.__name )
+ __Matrix = Interfaces.ImportFromScript(asScript).getvalue(self.__name)
elif asOneFunction:
- __Function = { "Direct": Interfaces.ImportFromScript(asScript).getvalue( "DirectOperator" ) }
+ __Function = {
+ "Direct": Interfaces.ImportFromScript(asScript).getvalue(
+ "DirectOperator"
+ )
+ }
__Function.update({"useApproximatedDerivatives": True})
__Function.update(__Parameters)
elif asThreeFunctions:
__Function = {
- "Direct": Interfaces.ImportFromScript(asScript).getvalue( "DirectOperator" ),
- "Tangent": Interfaces.ImportFromScript(asScript).getvalue( "TangentOperator" ),
- "Adjoint": Interfaces.ImportFromScript(asScript).getvalue( "AdjointOperator" ),
+ "Direct": Interfaces.ImportFromScript(asScript).getvalue(
+ "DirectOperator"
+ ),
+ "Tangent": Interfaces.ImportFromScript(asScript).getvalue(
+ "TangentOperator"
+ ),
+ "Adjoint": Interfaces.ImportFromScript(asScript).getvalue(
+ "AdjointOperator"
+ ),
}
__Function.update(__Parameters)
else:
if asOneFunction["Direct"] is not None:
__Function = asOneFunction
else:
- raise ValueError("The function has to be given in a dictionnary which have 1 key (\"Direct\")")
+ raise ValueError(
+ 'The function has to be given in a dictionary which has 1 key ("Direct")'
+ )
else:
- __Function = { "Direct": asOneFunction }
+ __Function = {"Direct": asOneFunction}
__Function.update({"useApproximatedDerivatives": True})
__Function.update(__Parameters)
elif asThreeFunctions is not None:
- if isinstance(asThreeFunctions, dict) and \
- ("Tangent" in asThreeFunctions) and (asThreeFunctions["Tangent"] is not None) and \
- ("Adjoint" in asThreeFunctions) and (asThreeFunctions["Adjoint"] is not None) and \
- (("useApproximatedDerivatives" not in asThreeFunctions) or not bool(asThreeFunctions["useApproximatedDerivatives"])):
+ if (
+ isinstance(asThreeFunctions, dict)
+ and ("Tangent" in asThreeFunctions)
+ and (asThreeFunctions["Tangent"] is not None)
+ and ("Adjoint" in asThreeFunctions)
+ and (asThreeFunctions["Adjoint"] is not None)
+ and (
+ ("useApproximatedDerivatives" not in asThreeFunctions)
+ or not bool(asThreeFunctions["useApproximatedDerivatives"])
+ )
+ ):
__Function = asThreeFunctions
- elif isinstance(asThreeFunctions, dict) and \
- ("Direct" in asThreeFunctions) and (asThreeFunctions["Direct"] is not None):
+ elif (
+ isinstance(asThreeFunctions, dict)
+ and ("Direct" in asThreeFunctions)
+ and (asThreeFunctions["Direct"] is not None)
+ ):
__Function = asThreeFunctions
__Function.update({"useApproximatedDerivatives": True})
else:
raise ValueError(
- "The functions has to be given in a dictionnary which have either" + \
- " 1 key (\"Direct\") or" + \
- " 3 keys (\"Direct\" (optionnal), \"Tangent\" and \"Adjoint\")")
+ "The functions have to be given in a dictionary which has either"
+ + ' 1 key ("Direct") or'
+ + ' 3 keys ("Direct" (optional), "Tangent" and "Adjoint")'
+ )
if "Direct" not in asThreeFunctions:
__Function["Direct"] = asThreeFunctions["Tangent"]
__Function.update(__Parameters)
if scheduledBy is not None:
self.__T = scheduledBy
#
- if isinstance(__Function, dict) and \
- ("useApproximatedDerivatives" in __Function) and bool(__Function["useApproximatedDerivatives"]) and \
- ("Direct" in __Function) and (__Function["Direct"] is not None):
- if "CenteredFiniteDifference" not in __Function: __Function["CenteredFiniteDifference"] = False # noqa: E272,E701
- if "DifferentialIncrement" not in __Function: __Function["DifferentialIncrement"] = 0.01 # noqa: E272,E701
- if "withdX" not in __Function: __Function["withdX"] = None # noqa: E272,E701
- if "withReducingMemoryUse" not in __Function: __Function["withReducingMemoryUse"] = __reduceM # noqa: E272,E701
- if "withAvoidingRedundancy" not in __Function: __Function["withAvoidingRedundancy"] = __avoidRC # noqa: E272,E701
- if "withToleranceInRedundancy" not in __Function: __Function["withToleranceInRedundancy"] = 1.e-18 # noqa: E272,E701
- if "withLengthOfRedundancy" not in __Function: __Function["withLengthOfRedundancy"] = -1 # noqa: E272,E701
- if "NumberOfProcesses" not in __Function: __Function["NumberOfProcesses"] = None # noqa: E272,E701
- if "withmfEnabled" not in __Function: __Function["withmfEnabled"] = inputAsMF # noqa: E272,E701
+ if (
+ isinstance(__Function, dict)
+ and ("useApproximatedDerivatives" in __Function)
+ and bool(__Function["useApproximatedDerivatives"])
+ and ("Direct" in __Function)
+ and (__Function["Direct"] is not None)
+ ):
+ if "CenteredFiniteDifference" not in __Function:
+ __Function["CenteredFiniteDifference"] = False
+ if "DifferentialIncrement" not in __Function:
+ __Function["DifferentialIncrement"] = 0.01
+ if "withdX" not in __Function:
+ __Function["withdX"] = None
+ if "withReducingMemoryUse" not in __Function:
+ __Function["withReducingMemoryUse"] = __reduceM
+ if "withAvoidingRedundancy" not in __Function:
+ __Function["withAvoidingRedundancy"] = __avoidRC
+ if "withToleranceInRedundancy" not in __Function:
+ __Function["withToleranceInRedundancy"] = 1.0e-18
+ if "withLengthOfRedundancy" not in __Function:
+ __Function["withLengthOfRedundancy"] = -1
+ if "NumberOfProcesses" not in __Function:
+ __Function["NumberOfProcesses"] = None
+ if "withmfEnabled" not in __Function:
+ __Function["withmfEnabled"] = inputAsMF
from daCore import NumericObjects
+
FDA = NumericObjects.FDApproximation(
- name = self.__name,
- Function = __Function["Direct"],
- centeredDF = __Function["CenteredFiniteDifference"],
- increment = __Function["DifferentialIncrement"],
- dX = __Function["withdX"],
- extraArguments = self.__extraArgs,
- reducingMemoryUse = __Function["withReducingMemoryUse"],
- avoidingRedundancy = __Function["withAvoidingRedundancy"],
- toleranceInRedundancy = __Function["withToleranceInRedundancy"],
- lengthOfRedundancy = __Function["withLengthOfRedundancy"],
- mpEnabled = __Function["EnableParallelDerivatives"],
- mpWorkers = __Function["NumberOfProcesses"],
- mfEnabled = __Function["withmfEnabled"],
- )
- self.__FO["Direct"] = Operator(
- name = self.__name,
- fromMethod = FDA.DirectOperator,
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF,
- extraArguments = self.__extraArgs,
- enableMultiProcess = __Parameters["EnableParallelEvaluations"] )
+ name=self.__name,
+ Function=__Function["Direct"],
+ centeredDF=__Function["CenteredFiniteDifference"],
+ increment=__Function["DifferentialIncrement"],
+ dX=__Function["withdX"],
+ extraArguments=self.__extraArgs,
+ reducingMemoryUse=__Function["withReducingMemoryUse"],
+ avoidingRedundancy=__Function["withAvoidingRedundancy"],
+ toleranceInRedundancy=__Function["withToleranceInRedundancy"],
+ lengthOfRedundancy=__Function["withLengthOfRedundancy"],
+ mpEnabled=__Function["EnableParallelDerivatives"],
+ mpWorkers=__Function["NumberOfProcesses"],
+ mfEnabled=__Function["withmfEnabled"],
+ )
+ self.__FO["Direct"] = Operator(
+ name=self.__name,
+ fromMethod=FDA.DirectOperator,
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ extraArguments=self.__extraArgs,
+ enableMultiProcess=__Parameters["EnableParallelEvaluations"],
+ )
self.__FO["Tangent"] = Operator(
- name = self.__name + "Tangent",
- fromMethod = FDA.TangentOperator,
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF,
- extraArguments = self.__extraArgs )
+ name=self.__name + "Tangent",
+ fromMethod=FDA.TangentOperator,
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ extraArguments=self.__extraArgs,
+ )
self.__FO["Adjoint"] = Operator(
- name = self.__name + "Adjoint",
- fromMethod = FDA.AdjointOperator,
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF,
- extraArguments = self.__extraArgs )
+ name=self.__name + "Adjoint",
+ fromMethod=FDA.AdjointOperator,
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ extraArguments=self.__extraArgs,
+ )
self.__FO["DifferentialIncrement"] = __Function["DifferentialIncrement"]
- elif isinstance(__Function, dict) and \
- ("Direct" in __Function) and ("Tangent" in __Function) and ("Adjoint" in __Function) and \
- (__Function["Direct"] is not None) and (__Function["Tangent"] is not None) and (__Function["Adjoint"] is not None):
- self.__FO["Direct"] = Operator(
- name = self.__name,
- fromMethod = __Function["Direct"],
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF,
- extraArguments = self.__extraArgs,
- enableMultiProcess = __Parameters["EnableParallelEvaluations"] )
+ elif (
+ isinstance(__Function, dict)
+ and ("Direct" in __Function)
+ and ("Tangent" in __Function)
+ and ("Adjoint" in __Function)
+ and (__Function["Direct"] is not None)
+ and (__Function["Tangent"] is not None)
+ and (__Function["Adjoint"] is not None)
+ ):
+ self.__FO["Direct"] = Operator(
+ name=self.__name,
+ fromMethod=__Function["Direct"],
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ extraArguments=self.__extraArgs,
+ enableMultiProcess=__Parameters["EnableParallelEvaluations"],
+ )
self.__FO["Tangent"] = Operator(
- name = self.__name + "Tangent",
- fromMethod = __Function["Tangent"],
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF,
- extraArguments = self.__extraArgs )
+ name=self.__name + "Tangent",
+ fromMethod=__Function["Tangent"],
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ extraArguments=self.__extraArgs,
+ )
self.__FO["Adjoint"] = Operator(
- name = self.__name + "Adjoint",
- fromMethod = __Function["Adjoint"],
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF,
- extraArguments = self.__extraArgs )
+ name=self.__name + "Adjoint",
+ fromMethod=__Function["Adjoint"],
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ extraArguments=self.__extraArgs,
+ )
self.__FO["DifferentialIncrement"] = None
elif asMatrix is not None:
if isinstance(__Matrix, str):
- __Matrix = PlatformInfo.strmatrix2liststr( __Matrix )
- __matrice = numpy.asarray( __Matrix, dtype=float )
- self.__FO["Direct"] = Operator(
- name = self.__name,
- fromMatrix = __matrice,
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF,
- enableMultiProcess = __Parameters["EnableParallelEvaluations"] )
+ __Matrix = PlatformInfo.strmatrix2liststr(__Matrix)
+ __matrice = numpy.asarray(__Matrix, dtype=float)
+ self.__FO["Direct"] = Operator(
+ name=self.__name,
+ fromMatrix=__matrice,
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ enableMultiProcess=__Parameters["EnableParallelEvaluations"],
+ )
self.__FO["Tangent"] = Operator(
- name = self.__name + "Tangent",
- fromMatrix = __matrice,
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF )
+ name=self.__name + "Tangent",
+ fromMatrix=__matrice,
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ )
self.__FO["Adjoint"] = Operator(
- name = self.__name + "Adjoint",
- fromMatrix = __matrice.T,
- reducingMemoryUse = __reduceM,
- avoidingRedundancy = __avoidRC,
- inputAsMultiFunction = inputAsMF )
+ name=self.__name + "Adjoint",
+ fromMatrix=__matrice.T,
+ reducingMemoryUse=__reduceM,
+ avoidingRedundancy=__avoidRC,
+ inputAsMultiFunction=inputAsMF,
+ )
del __matrice
self.__FO["DifferentialIncrement"] = None
else:
raise ValueError(
- "The %s object is improperly defined or undefined,"%self.__name + \
- " it requires at minima either a matrix, a Direct operator for" + \
- " approximate derivatives or a Tangent/Adjoint operators pair." + \
- " Please check your operator input.")
+ "The %s object is improperly defined or undefined," % self.__name
+ + " it requires at a minimum either a matrix, a Direct operator for"
+ + " approximate derivatives or a Tangent/Adjoint operators pair."
+ + " Please check your operator input."
+ )
#
if __appliedInX is not None:
self.__FO["AppliedInX"] = {}
for key in __appliedInX:
if isinstance(__appliedInX[key], str):
- __appliedInX[key] = PlatformInfo.strvect2liststr( __appliedInX[key] )
- self.__FO["AppliedInX"][key] = numpy.ravel( __appliedInX[key] ).reshape((-1, 1))
+ __appliedInX[key] = PlatformInfo.strvect2liststr(__appliedInX[key])
+ self.__FO["AppliedInX"][key] = numpy.ravel(__appliedInX[key]).reshape(
+ (-1, 1)
+ )
else:
self.__FO["AppliedInX"] = None
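The `AppliedInX` entries above are normalised to column vectors: whatever the input layout (list, nested list, row vector), `numpy.ravel(...).reshape((-1, 1))` flattens it and yields an n x 1 array. A small illustration:

```python
import numpy

# Flatten any layout, then reshape to a single column (n rows, 1 column);
# -1 lets numpy infer the row count from the flattened length.
v = numpy.ravel([[1, 2], [3, 4]]).reshape((-1, 1))
```

Here `v` has shape `(4, 1)` with the values `1, 2, 3, 4` stacked in a column.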
if oldn in collection:
collection[newn] = collection[oldn]
del collection[oldn]
- __msg = "the parameter \"%s\" used in this case is"%(oldn,)
- __msg += " deprecated and has to be replaced by \"%s\"."%(newn,)
+ __msg = 'the parameter "%s" used in this case is' % (oldn,)
+ __msg += ' deprecated and has to be replaced by "%s".' % (newn,)
__msg += " Please update your code."
warnings.warn(__msg, FutureWarning, stacklevel=50)
return collection
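The option-renaming pattern of `__deprecateOpt` above can be sketched as a standalone helper (the function name is hypothetical; in ADAO it is a private method of `FullOperator`): the old key's value is moved under the new key and a `FutureWarning` is emitted.

```python
import warnings

def deprecate_opt(collection, oldn, newn):
    """Rename a deprecated option key in place, warning the caller."""
    if oldn in collection:
        collection[newn] = collection[oldn]
        del collection[oldn]
        msg = 'the parameter "%s" used in this case is' % (oldn,)
        msg += ' deprecated and has to be replaced by "%s".' % (newn,)
        msg += " Please update your code."
        warnings.warn(msg, FutureWarning, stacklevel=2)
    return collection
```

For instance, `deprecate_opt({"EnableMultiProcessing": True}, "EnableMultiProcessing", "EnableWiseParallelism")` returns `{"EnableWiseParallelism": True}` and raises a `FutureWarning`.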
+
# ==============================================================================
class Algorithm(object):
"""
Une classe élémentaire d'algorithme doit implémenter la méthode "run".
"""
+
__slots__ = (
- "_name", "_parameters", "__internal_state", "__required_parameters",
- "_m", "__variable_names_not_public", "__canonical_parameter_name",
- "__canonical_stored_name", "__replace_by_the_new_name",
+ "_name",
+ "_parameters",
+ "__internal_state",
+ "__required_parameters",
+ "_m",
+ "__variable_names_not_public",
+ "__canonical_parameter_name",
+ "__canonical_stored_name",
+ "__replace_by_the_new_name",
"StoredVariables",
)
logging.debug("%s Initialisation", str(name))
self._m = PlatformInfo.SystemUsage()
#
- self._name = str( name )
+ self._name = str(name)
self._parameters = {"StoreSupplementaryCalculations": []}
self.__internal_state = {}
self.__required_parameters = {}
"AttributesTags": [],
"AttributesFeatures": [],
}
- self.__variable_names_not_public = {"nextStep": False} # Duplication dans AlgorithmAndParameters
+ self.__variable_names_not_public = {
+ "nextStep": False
+ } # Duplication dans AlgorithmAndParameters
self.__canonical_parameter_name = {} # Correspondance "lower"->"correct"
- self.__canonical_stored_name = {} # Correspondance "lower"->"correct"
- self.__replace_by_the_new_name = {} # Nouveau nom à partir d'un nom ancien
+ self.__canonical_stored_name = {} # Correspondance "lower"->"correct"
+ self.__replace_by_the_new_name = {} # Nouveau nom à partir d'un nom ancien
#
self.StoredVariables = {}
- self.StoredVariables["APosterioriCorrelations"] = Persistence.OneMatrix(name = "APosterioriCorrelations")
- self.StoredVariables["APosterioriCovariance"] = Persistence.OneMatrix(name = "APosterioriCovariance")
- self.StoredVariables["APosterioriStandardDeviations"] = Persistence.OneVector(name = "APosterioriStandardDeviations")
- self.StoredVariables["APosterioriVariances"] = Persistence.OneVector(name = "APosterioriVariances")
- self.StoredVariables["Analysis"] = Persistence.OneVector(name = "Analysis")
- self.StoredVariables["BMA"] = Persistence.OneVector(name = "BMA")
- self.StoredVariables["CostFunctionJ"] = Persistence.OneScalar(name = "CostFunctionJ")
- self.StoredVariables["CostFunctionJAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJAtCurrentOptimum")
- self.StoredVariables["CostFunctionJb"] = Persistence.OneScalar(name = "CostFunctionJb")
- self.StoredVariables["CostFunctionJbAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJbAtCurrentOptimum")
- self.StoredVariables["CostFunctionJo"] = Persistence.OneScalar(name = "CostFunctionJo")
- self.StoredVariables["CostFunctionJoAtCurrentOptimum"] = Persistence.OneScalar(name = "CostFunctionJoAtCurrentOptimum")
- self.StoredVariables["CurrentEnsembleState"] = Persistence.OneMatrix(name = "CurrentEnsembleState")
- self.StoredVariables["CurrentIterationNumber"] = Persistence.OneIndex(name = "CurrentIterationNumber")
- self.StoredVariables["CurrentOptimum"] = Persistence.OneVector(name = "CurrentOptimum")
- self.StoredVariables["CurrentState"] = Persistence.OneVector(name = "CurrentState")
- self.StoredVariables["CurrentStepNumber"] = Persistence.OneIndex(name = "CurrentStepNumber")
- self.StoredVariables["EnsembleOfSimulations"] = Persistence.OneMatrice(name = "EnsembleOfSimulations")
- self.StoredVariables["EnsembleOfSnapshots"] = Persistence.OneMatrice(name = "EnsembleOfSnapshots")
- self.StoredVariables["EnsembleOfStates"] = Persistence.OneMatrice(name = "EnsembleOfStates")
- self.StoredVariables["ExcludedPoints"] = Persistence.OneVector(name = "ExcludedPoints")
- self.StoredVariables["ForecastCovariance"] = Persistence.OneMatrix(name = "ForecastCovariance")
- self.StoredVariables["ForecastState"] = Persistence.OneVector(name = "ForecastState")
- self.StoredVariables["GradientOfCostFunctionJ"] = Persistence.OneVector(name = "GradientOfCostFunctionJ")
- self.StoredVariables["GradientOfCostFunctionJb"] = Persistence.OneVector(name = "GradientOfCostFunctionJb")
- self.StoredVariables["GradientOfCostFunctionJo"] = Persistence.OneVector(name = "GradientOfCostFunctionJo")
- self.StoredVariables["IndexOfOptimum"] = Persistence.OneIndex(name = "IndexOfOptimum")
- self.StoredVariables["Innovation"] = Persistence.OneVector(name = "Innovation")
- self.StoredVariables["InnovationAtCurrentAnalysis"] = Persistence.OneVector(name = "InnovationAtCurrentAnalysis")
- self.StoredVariables["InnovationAtCurrentState"] = Persistence.OneVector(name = "InnovationAtCurrentState")
- self.StoredVariables["InternalCostFunctionJ"] = Persistence.OneVector(name = "InternalCostFunctionJ")
- self.StoredVariables["InternalCostFunctionJb"] = Persistence.OneVector(name = "InternalCostFunctionJb")
- self.StoredVariables["InternalCostFunctionJo"] = Persistence.OneVector(name = "InternalCostFunctionJo")
- self.StoredVariables["InternalStates"] = Persistence.OneMatrix(name = "InternalStates")
- self.StoredVariables["JacobianMatrixAtBackground"] = Persistence.OneMatrix(name = "JacobianMatrixAtBackground")
- self.StoredVariables["JacobianMatrixAtCurrentState"] = Persistence.OneMatrix(name = "JacobianMatrixAtCurrentState")
- self.StoredVariables["JacobianMatrixAtOptimum"] = Persistence.OneMatrix(name = "JacobianMatrixAtOptimum")
- self.StoredVariables["KalmanGainAtOptimum"] = Persistence.OneMatrix(name = "KalmanGainAtOptimum")
- self.StoredVariables["MahalanobisConsistency"] = Persistence.OneScalar(name = "MahalanobisConsistency")
- self.StoredVariables["OMA"] = Persistence.OneVector(name = "OMA")
- self.StoredVariables["OMB"] = Persistence.OneVector(name = "OMB")
- self.StoredVariables["OptimalPoints"] = Persistence.OneVector(name = "OptimalPoints")
- self.StoredVariables["ReducedBasis"] = Persistence.OneMatrix(name = "ReducedBasis")
- self.StoredVariables["ReducedBasisMus"] = Persistence.OneVector(name = "ReducedBasisMus")
- self.StoredVariables["ReducedCoordinates"] = Persistence.OneVector(name = "ReducedCoordinates")
- self.StoredVariables["Residu"] = Persistence.OneScalar(name = "Residu")
- self.StoredVariables["Residus"] = Persistence.OneVector(name = "Residus")
- self.StoredVariables["SampledStateForQuantiles"] = Persistence.OneMatrix(name = "SampledStateForQuantiles")
- self.StoredVariables["SigmaBck2"] = Persistence.OneScalar(name = "SigmaBck2")
- self.StoredVariables["SigmaObs2"] = Persistence.OneScalar(name = "SigmaObs2")
- self.StoredVariables["SimulatedObservationAtBackground"] = Persistence.OneVector(name = "SimulatedObservationAtBackground")
- self.StoredVariables["SimulatedObservationAtCurrentAnalysis"] = Persistence.OneVector(name = "SimulatedObservationAtCurrentAnalysis")
- self.StoredVariables["SimulatedObservationAtCurrentOptimum"] = Persistence.OneVector(name = "SimulatedObservationAtCurrentOptimum")
- self.StoredVariables["SimulatedObservationAtCurrentState"] = Persistence.OneVector(name = "SimulatedObservationAtCurrentState")
- self.StoredVariables["SimulatedObservationAtOptimum"] = Persistence.OneVector(name = "SimulatedObservationAtOptimum")
- self.StoredVariables["SimulationQuantiles"] = Persistence.OneMatrix(name = "SimulationQuantiles")
- self.StoredVariables["SingularValues"] = Persistence.OneVector(name = "SingularValues")
+ self.StoredVariables["APosterioriCorrelations"] = Persistence.OneMatrix(
+ name="APosterioriCorrelations"
+ )
+ self.StoredVariables["APosterioriCovariance"] = Persistence.OneMatrix(
+ name="APosterioriCovariance"
+ )
+ self.StoredVariables["APosterioriStandardDeviations"] = Persistence.OneVector(
+ name="APosterioriStandardDeviations"
+ )
+ self.StoredVariables["APosterioriVariances"] = Persistence.OneVector(
+ name="APosterioriVariances"
+ )
+ self.StoredVariables["Analysis"] = Persistence.OneVector(name="Analysis")
+ self.StoredVariables["BMA"] = Persistence.OneVector(name="BMA")
+ self.StoredVariables["CostFunctionJ"] = Persistence.OneScalar(
+ name="CostFunctionJ"
+ )
+ self.StoredVariables["CostFunctionJAtCurrentOptimum"] = Persistence.OneScalar(
+ name="CostFunctionJAtCurrentOptimum"
+ )
+ self.StoredVariables["CostFunctionJb"] = Persistence.OneScalar(
+ name="CostFunctionJb"
+ )
+ self.StoredVariables["CostFunctionJbAtCurrentOptimum"] = Persistence.OneScalar(
+ name="CostFunctionJbAtCurrentOptimum"
+ )
+ self.StoredVariables["CostFunctionJo"] = Persistence.OneScalar(
+ name="CostFunctionJo"
+ )
+ self.StoredVariables["CostFunctionJoAtCurrentOptimum"] = Persistence.OneScalar(
+ name="CostFunctionJoAtCurrentOptimum"
+ )
+ self.StoredVariables["CurrentEnsembleState"] = Persistence.OneMatrix(
+ name="CurrentEnsembleState"
+ )
+ self.StoredVariables["CurrentIterationNumber"] = Persistence.OneIndex(
+ name="CurrentIterationNumber"
+ )
+ self.StoredVariables["CurrentOptimum"] = Persistence.OneVector(
+ name="CurrentOptimum"
+ )
+ self.StoredVariables["CurrentState"] = Persistence.OneVector(
+ name="CurrentState"
+ )
+ self.StoredVariables["CurrentStepNumber"] = Persistence.OneIndex(
+ name="CurrentStepNumber"
+ )
+ self.StoredVariables["EnsembleOfSimulations"] = Persistence.OneMatrice(
+ name="EnsembleOfSimulations"
+ )
+ self.StoredVariables["EnsembleOfSnapshots"] = Persistence.OneMatrice(
+ name="EnsembleOfSnapshots"
+ )
+ self.StoredVariables["EnsembleOfStates"] = Persistence.OneMatrice(
+ name="EnsembleOfStates"
+ )
+ self.StoredVariables["ExcludedPoints"] = Persistence.OneVector(
+ name="ExcludedPoints"
+ )
+ self.StoredVariables["ForecastCovariance"] = Persistence.OneMatrix(
+ name="ForecastCovariance"
+ )
+ self.StoredVariables["ForecastState"] = Persistence.OneVector(
+ name="ForecastState"
+ )
+ self.StoredVariables["GradientOfCostFunctionJ"] = Persistence.OneVector(
+ name="GradientOfCostFunctionJ"
+ )
+ self.StoredVariables["GradientOfCostFunctionJb"] = Persistence.OneVector(
+ name="GradientOfCostFunctionJb"
+ )
+ self.StoredVariables["GradientOfCostFunctionJo"] = Persistence.OneVector(
+ name="GradientOfCostFunctionJo"
+ )
+ self.StoredVariables["IndexOfOptimum"] = Persistence.OneIndex(
+ name="IndexOfOptimum"
+ )
+ self.StoredVariables["Innovation"] = Persistence.OneVector(name="Innovation")
+ self.StoredVariables["InnovationAtCurrentAnalysis"] = Persistence.OneVector(
+ name="InnovationAtCurrentAnalysis"
+ )
+ self.StoredVariables["InnovationAtCurrentState"] = Persistence.OneVector(
+ name="InnovationAtCurrentState"
+ )
+ self.StoredVariables["InternalCostFunctionJ"] = Persistence.OneVector(
+ name="InternalCostFunctionJ"
+ )
+ self.StoredVariables["InternalCostFunctionJb"] = Persistence.OneVector(
+ name="InternalCostFunctionJb"
+ )
+ self.StoredVariables["InternalCostFunctionJo"] = Persistence.OneVector(
+ name="InternalCostFunctionJo"
+ )
+ self.StoredVariables["InternalStates"] = Persistence.OneMatrix(
+ name="InternalStates"
+ )
+ self.StoredVariables["JacobianMatrixAtBackground"] = Persistence.OneMatrix(
+ name="JacobianMatrixAtBackground"
+ )
+ self.StoredVariables["JacobianMatrixAtCurrentState"] = Persistence.OneMatrix(
+ name="JacobianMatrixAtCurrentState"
+ )
+ self.StoredVariables["JacobianMatrixAtOptimum"] = Persistence.OneMatrix(
+ name="JacobianMatrixAtOptimum"
+ )
+ self.StoredVariables["KalmanGainAtOptimum"] = Persistence.OneMatrix(
+ name="KalmanGainAtOptimum"
+ )
+ self.StoredVariables["MahalanobisConsistency"] = Persistence.OneScalar(
+ name="MahalanobisConsistency"
+ )
+ self.StoredVariables["OMA"] = Persistence.OneVector(name="OMA")
+ self.StoredVariables["OMB"] = Persistence.OneVector(name="OMB")
+ self.StoredVariables["OptimalPoints"] = Persistence.OneVector(
+ name="OptimalPoints"
+ )
+ self.StoredVariables["ReducedBasis"] = Persistence.OneMatrix(
+ name="ReducedBasis"
+ )
+ self.StoredVariables["ReducedBasisMus"] = Persistence.OneVector(
+ name="ReducedBasisMus"
+ )
+ self.StoredVariables["ReducedCoordinates"] = Persistence.OneVector(
+ name="ReducedCoordinates"
+ )
+ self.StoredVariables["Residu"] = Persistence.OneScalar(name="Residu")
+ self.StoredVariables["Residus"] = Persistence.OneVector(name="Residus")
+ self.StoredVariables["SampledStateForQuantiles"] = Persistence.OneMatrix(
+ name="SampledStateForQuantiles"
+ )
+ self.StoredVariables["SigmaBck2"] = Persistence.OneScalar(name="SigmaBck2")
+ self.StoredVariables["SigmaObs2"] = Persistence.OneScalar(name="SigmaObs2")
+ self.StoredVariables["SimulatedObservationAtBackground"] = (
+ Persistence.OneVector(name="SimulatedObservationAtBackground")
+ )
+ self.StoredVariables["SimulatedObservationAtCurrentAnalysis"] = (
+ Persistence.OneVector(name="SimulatedObservationAtCurrentAnalysis")
+ )
+ self.StoredVariables["SimulatedObservationAtCurrentOptimum"] = (
+ Persistence.OneVector(name="SimulatedObservationAtCurrentOptimum")
+ )
+ self.StoredVariables["SimulatedObservationAtCurrentState"] = (
+ Persistence.OneVector(name="SimulatedObservationAtCurrentState")
+ )
+ self.StoredVariables["SimulatedObservationAtOptimum"] = Persistence.OneVector(
+ name="SimulatedObservationAtOptimum"
+ )
+ self.StoredVariables["SimulationQuantiles"] = Persistence.OneMatrix(
+ name="SimulationQuantiles"
+ )
+ self.StoredVariables["SingularValues"] = Persistence.OneVector(
+ name="SingularValues"
+ )
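Every entry above maps a result name to a Persistence container whose `store()` method accumulates the successive values produced during the calculation. A minimal stand-in illustrating that contract (`OneSeries` is a hypothetical class for this sketch, not the real Persistence API):

```python
class OneSeries:
    # Minimal stand-in for a Persistence container: it keeps the
    # full history of every value passed to store().
    def __init__(self, name):
        self.name = name
        self._values = []

    def store(self, value):
        self._values.append(value)

    def __len__(self):
        return len(self._values)

    def __getitem__(self, i):
        return self._values[i]

# Same usage shape as self.StoredVariables["Analysis"].store(...)
stored = {"Analysis": OneSeries("Analysis")}
stored["Analysis"].store([1.0, 2.0])
stored["Analysis"].store([1.5, 2.5])
```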
#
for k in self.StoredVariables:
self.__canonical_stored_name[k.lower()] = k
for k, v in self.__variable_names_not_public.items():
self.__canonical_parameter_name[k.lower()] = k
self.__canonical_parameter_name["algorithm"] = "Algorithm"
- self.__canonical_parameter_name["storesupplementarycalculations"] = "StoreSupplementaryCalculations"
+ self.__canonical_parameter_name["storesupplementarycalculations"] = (
+ "StoreSupplementaryCalculations"
+ )
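The block above registers a lowercase-keyed canonical spelling for each parameter name, which is how the class later accepts user-supplied keys regardless of capitalization. A standalone sketch of the pattern (the `resolve` helper is illustrative, not part of the ADAO API):

```python
# Case-insensitive lookup: store one canonical spelling per key,
# indexed by its lowercase form.
canonical = {}
for name in ("Algorithm", "StoreSupplementaryCalculations"):
    canonical[name.lower()] = name

def resolve(user_key):
    # Accept any capitalization from the caller and return the
    # canonical spelling, or raise if the key is unknown.
    try:
        return canonical[user_key.lower()]
    except KeyError:
        raise KeyError("Unknown parameter: %r" % user_key)

print(resolve("storesupplementarycalculations"))  # StoreSupplementaryCalculations
```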
- def _pre_run(self, Parameters, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None ):
+ def _pre_run(
+ self,
+ Parameters,
+ Xb=None,
+ Y=None,
+ U=None,
+ HO=None,
+ EM=None,
+ CM=None,
+ R=None,
+ B=None,
+ Q=None,
+ ):
"Pré-calcul"
logging.debug("%s Lancement", self._name)
- logging.debug("%s Taille mémoire utilisée de %.0f Mio"%(self._name, self._m.getUsedMemory("Mio")))
+ logging.debug(
+ "%s Taille mémoire utilisée de %.0f Mio"
+ % (self._name, self._m.getUsedMemory("Mio"))
+ )
self._getTimeState(reset=True)
#
# Mise à jour des paramètres internes avec le contenu de Parameters, en
self.__setParameters(Parameters, reset=True) # Copie mémoire
for k, v in self.__variable_names_not_public.items():
if k not in self._parameters:
- self.__setParameters( {k: v} )
+ self.__setParameters({k: v})
def __test_vvalue(argument, variable, argname, symbol=None):
"Corrections et compléments des vecteurs"
if symbol is None:
symbol = variable
if argument is None:
- if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- raise ValueError("%s %s vector %s is not set and has to be properly defined!"%(self._name, argname, symbol))
- elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s vector %s is not set, but is optional."%(self._name, argname, symbol))
+ if (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["mandatory"]
+ ):
+ raise ValueError(
+ "%s %s vector %s is not set and has to be properly defined!"
+ % (self._name, argname, symbol)
+ )
+ elif (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["optional"]
+ ):
+ logging.debug(
+ "%s %s vector %s is not set, but is optional."
+ % (self._name, argname, symbol)
+ )
else:
- logging.debug("%s %s vector %s is not set, but is not required."%(self._name, argname, symbol))
+ logging.debug(
+ "%s %s vector %s is not set, but is not required."
+ % (self._name, argname, symbol)
+ )
else:
- if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
+ if (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["mandatory"]
+ ):
logging.debug(
- "%s %s vector %s is required and set, and its full size is %i." \
- % (self._name, argname, symbol, numpy.array(argument).size))
- elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
+ "%s %s vector %s is required and set, and its full size is %i."
+ % (self._name, argname, symbol, numpy.array(argument).size)
+ )
+ elif (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["optional"]
+ ):
logging.debug(
- "%s %s vector %s is optional and set, and its full size is %i." \
- % (self._name, argname, symbol, numpy.array(argument).size))
+ "%s %s vector %s is optional and set, and its full size is %i."
+ % (self._name, argname, symbol, numpy.array(argument).size)
+ )
else:
logging.debug(
- "%s %s vector %s is set although neither required nor optional, and its full size is %i." \
- % (self._name, argname, symbol, numpy.array(argument).size))
+ "%s %s vector %s is set although neither required nor optional, and its full size is %i."
+ % (self._name, argname, symbol, numpy.array(argument).size)
+ )
return 0
- __test_vvalue( Xb, "Xb", "Background or initial state" )
- __test_vvalue( Y, "Y", "Observation" )
- __test_vvalue( U, "U", "Control" )
+
+ __test_vvalue(Xb, "Xb", "Background or initial state")
+ __test_vvalue(Y, "Y", "Observation")
+ __test_vvalue(U, "U", "Control")
def __test_cvalue(argument, variable, argname, symbol=None):
"Corrections et compléments des covariances"
if symbol is None:
symbol = variable
if argument is None:
- if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- raise ValueError("%s %s error covariance matrix %s is not set and has to be properly defined!"%(self._name, argname, symbol))
- elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s error covariance matrix %s is not set, but is optional."%(self._name, argname, symbol))
+ if (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["mandatory"]
+ ):
+ raise ValueError(
+ "%s %s error covariance matrix %s is not set and has to be properly defined!"
+ % (self._name, argname, symbol)
+ )
+ elif (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["optional"]
+ ):
+ logging.debug(
+ "%s %s error covariance matrix %s is not set, but is optional."
+ % (self._name, argname, symbol)
+ )
else:
- logging.debug("%s %s error covariance matrix %s is not set, but is not required."%(self._name, argname, symbol))
+ logging.debug(
+ "%s %s error covariance matrix %s is not set, but is not required."
+ % (self._name, argname, symbol)
+ )
else:
- if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- logging.debug("%s %s error covariance matrix %s is required and set."%(self._name, argname, symbol))
- elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s error covariance matrix %s is optional and set."%(self._name, argname, symbol))
+ if (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["mandatory"]
+ ):
+ logging.debug(
+ "%s %s error covariance matrix %s is required and set."
+ % (self._name, argname, symbol)
+ )
+ elif (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["optional"]
+ ):
+ logging.debug(
+ "%s %s error covariance matrix %s is optional and set."
+ % (self._name, argname, symbol)
+ )
else:
logging.debug(
- "%s %s error covariance matrix %s is set although neither required nor optional." \
- % (self._name, argname, symbol))
+ "%s %s error covariance matrix %s is set although neither required nor optional."
+ % (self._name, argname, symbol)
+ )
return 0
- __test_cvalue( B, "B", "Background" )
- __test_cvalue( R, "R", "Observation" )
- __test_cvalue( Q, "Q", "Evolution" )
+
+ __test_cvalue(B, "B", "Background")
+ __test_cvalue(R, "R", "Observation")
+ __test_cvalue(Q, "Q", "Evolution")
def __test_ovalue(argument, variable, argname, symbol=None):
"Corrections et compléments des opérateurs"
if symbol is None:
symbol = variable
if argument is None or (isinstance(argument, dict) and len(argument) == 0):
- if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- raise ValueError("%s %s operator %s is not set and has to be properly defined!"%(self._name, argname, symbol))
- elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s operator %s is not set, but is optional."%(self._name, argname, symbol))
+ if (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["mandatory"]
+ ):
+ raise ValueError(
+ "%s %s operator %s is not set and has to be properly defined!"
+ % (self._name, argname, symbol)
+ )
+ elif (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["optional"]
+ ):
+ logging.debug(
+ "%s %s operator %s is not set, but is optional."
+ % (self._name, argname, symbol)
+ )
else:
- logging.debug("%s %s operator %s is not set, but is not required."%(self._name, argname, symbol))
+ logging.debug(
+ "%s %s operator %s is not set, but is not required."
+ % (self._name, argname, symbol)
+ )
else:
- if variable in self.__required_inputs["RequiredInputValues"]["mandatory"]:
- logging.debug("%s %s operator %s is required and set."%(self._name, argname, symbol))
- elif variable in self.__required_inputs["RequiredInputValues"]["optional"]:
- logging.debug("%s %s operator %s is optional and set."%(self._name, argname, symbol))
+ if (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["mandatory"]
+ ):
+ logging.debug(
+ "%s %s operator %s is required and set."
+ % (self._name, argname, symbol)
+ )
+ elif (
+ variable
+ in self.__required_inputs["RequiredInputValues"]["optional"]
+ ):
+ logging.debug(
+ "%s %s operator %s is optional and set."
+ % (self._name, argname, symbol)
+ )
else:
- logging.debug("%s %s operator %s is set although neither required nor optional."%(self._name, argname, symbol))
+ logging.debug(
+ "%s %s operator %s is set although neither required nor optional."
+ % (self._name, argname, symbol)
+ )
return 0
- __test_ovalue( HO, "HO", "Observation", "H" )
- __test_ovalue( EM, "EM", "Evolution", "M" )
- __test_ovalue( CM, "CM", "Control Model", "C" )
+
+ __test_ovalue(HO, "HO", "Observation", "H")
+ __test_ovalue(EM, "EM", "Evolution", "M")
+ __test_ovalue(CM, "CM", "Control Model", "C")
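The three `__test_*` helpers above share one convention: a missing input raises only when it is declared mandatory; otherwise its absence is merely logged at debug level. A dependency-free sketch of that convention (`check_input` is an illustrative name, not ADAO's API):

```python
import logging

def check_input(argument, variable, mandatory=(), optional=()):
    # Mirror of the __test_vvalue pattern: a mandatory input that is
    # missing raises; optional or unlisted inputs are only logged.
    if argument is None:
        if variable in mandatory:
            raise ValueError(
                "%s is not set and has to be properly defined!" % variable
            )
        logging.debug(
            "%s is not set, but is %s.",
            variable,
            "optional" if variable in optional else "not required",
        )
    return argument

check_input([1.0, 2.0], "Xb", mandatory=("Xb",))
```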
#
# Corrections et compléments des bornes
- if ("Bounds" in self._parameters) \
- and isinstance(self._parameters["Bounds"], (list, tuple)):
- if (len(self._parameters["Bounds"]) > 0):
- logging.debug("%s Bounds taken into account"%(self._name,))
+ if ("Bounds" in self._parameters) and isinstance(
+ self._parameters["Bounds"], (list, tuple)
+ ):
+ if len(self._parameters["Bounds"]) > 0:
+ logging.debug("%s Bounds taken into account" % (self._name,))
else:
self._parameters["Bounds"] = None
- elif ("Bounds" in self._parameters) \
- and isinstance(self._parameters["Bounds"], (numpy.ndarray, numpy.matrix)):
- self._parameters["Bounds"] = numpy.ravel(self._parameters["Bounds"]).reshape((-1, 2)).tolist()
- if (len(self._parameters["Bounds"]) > 0):
- logging.debug("%s Bounds for states taken into account"%(self._name,))
+ elif ("Bounds" in self._parameters) and isinstance(
+ self._parameters["Bounds"], (numpy.ndarray, numpy.matrix)
+ ):
+ self._parameters["Bounds"] = (
+ numpy.ravel(self._parameters["Bounds"]).reshape((-1, 2)).tolist()
+ )
+ if len(self._parameters["Bounds"]) > 0:
+ logging.debug("%s Bounds for states taken into account" % (self._name,))
else:
self._parameters["Bounds"] = None
else:
self._parameters["Bounds"] = None
if self._parameters["Bounds"] is None:
- logging.debug("%s There are no bounds for states to take into account"%(self._name,))
- #
- if ("StateBoundsForQuantiles" in self._parameters) \
- and isinstance(self._parameters["StateBoundsForQuantiles"], (list, tuple)) \
- and (len(self._parameters["StateBoundsForQuantiles"]) > 0):
- logging.debug("%s Bounds for quantiles states taken into account"%(self._name,))
- elif ("StateBoundsForQuantiles" in self._parameters) \
- and isinstance(self._parameters["StateBoundsForQuantiles"], (numpy.ndarray, numpy.matrix)):
- self._parameters["StateBoundsForQuantiles"] = numpy.ravel(self._parameters["StateBoundsForQuantiles"]).reshape((-1, 2)).tolist()
- if (len(self._parameters["StateBoundsForQuantiles"]) > 0):
- logging.debug("%s Bounds for quantiles states taken into account"%(self._name,))
+ logging.debug(
+ "%s There are no bounds for states to take into account" % (self._name,)
+ )
+ #
+ if (
+ ("StateBoundsForQuantiles" in self._parameters)
+ and isinstance(self._parameters["StateBoundsForQuantiles"], (list, tuple))
+ and (len(self._parameters["StateBoundsForQuantiles"]) > 0)
+ ):
+ logging.debug(
+ "%s Bounds for quantiles states taken into account" % (self._name,)
+ )
+ elif ("StateBoundsForQuantiles" in self._parameters) and isinstance(
+ self._parameters["StateBoundsForQuantiles"], (numpy.ndarray, numpy.matrix)
+ ):
+ self._parameters["StateBoundsForQuantiles"] = (
+ numpy.ravel(self._parameters["StateBoundsForQuantiles"])
+ .reshape((-1, 2))
+ .tolist()
+ )
+ if len(self._parameters["StateBoundsForQuantiles"]) > 0:
+ logging.debug(
+ "%s Bounds for quantiles states taken into account" % (self._name,)
+ )
# Attention : contrairement à Bounds, il n'y a pas de défaut à None,
# sinon on ne peut pas être sans bornes
#
# Corrections et compléments de l'initialisation en X
if "InitializationPoint" in self._parameters:
if Xb is not None:
- if self._parameters["InitializationPoint"] is not None and hasattr(self._parameters["InitializationPoint"], 'size'):
- if self._parameters["InitializationPoint"].size != numpy.ravel(Xb).size:
+ if self._parameters["InitializationPoint"] is not None and hasattr(
+ self._parameters["InitializationPoint"], "size"
+ ):
+ if (
+ self._parameters["InitializationPoint"].size
+ != numpy.ravel(Xb).size
+ ):
raise ValueError(
- "Incompatible size %i of forced initial point that have to replace the background of size %i" \
- % (self._parameters["InitializationPoint"].size, numpy.ravel(Xb).size))
+ "Incompatible size %i of forced initial point that"
+ % self._parameters["InitializationPoint"].size
+                        + " has to replace the background of size %i"
+ % numpy.ravel(Xb).size
+ )
# Obtenu par typecast : numpy.ravel(self._parameters["InitializationPoint"])
else:
self._parameters["InitializationPoint"] = numpy.ravel(Xb)
else:
if self._parameters["InitializationPoint"] is None:
- raise ValueError("Forced initial point can not be set without any given Background or required value")
+ raise ValueError(
+ "Forced initial point can not be set without any given Background or required value"
+ )
#
# Correction pour pallier a un bug de TNC sur le retour du Minimum
if "Minimizer" in self._parameters and self._parameters["Minimizer"] == "TNC":
def _post_run(self, _oH=None, _oM=None):
"Post-calcul"
- if ("StoreSupplementaryCalculations" in self._parameters) and \
- "APosterioriCovariance" in self._parameters["StoreSupplementaryCalculations"]:
+ if (
+ "StoreSupplementaryCalculations" in self._parameters
+ ) and "APosterioriCovariance" in self._parameters[
+ "StoreSupplementaryCalculations"
+ ]:
for _A in self.StoredVariables["APosterioriCovariance"]:
- if "APosterioriVariances" in self._parameters["StoreSupplementaryCalculations"]:
- self.StoredVariables["APosterioriVariances"].store( numpy.diag(_A) )
- if "APosterioriStandardDeviations" in self._parameters["StoreSupplementaryCalculations"]:
- self.StoredVariables["APosterioriStandardDeviations"].store( numpy.sqrt(numpy.diag(_A)) )
- if "APosterioriCorrelations" in self._parameters["StoreSupplementaryCalculations"]:
- _EI = numpy.diag(1. / numpy.sqrt(numpy.diag(_A)))
+ if (
+ "APosterioriVariances"
+ in self._parameters["StoreSupplementaryCalculations"]
+ ):
+ self.StoredVariables["APosterioriVariances"].store(numpy.diag(_A))
+ if (
+ "APosterioriStandardDeviations"
+ in self._parameters["StoreSupplementaryCalculations"]
+ ):
+ self.StoredVariables["APosterioriStandardDeviations"].store(
+ numpy.sqrt(numpy.diag(_A))
+ )
+ if (
+ "APosterioriCorrelations"
+ in self._parameters["StoreSupplementaryCalculations"]
+ ):
+ _EI = numpy.diag(1.0 / numpy.sqrt(numpy.diag(_A)))
_C = numpy.dot(_EI, numpy.dot(_A, _EI))
- self.StoredVariables["APosterioriCorrelations"].store( _C )
- if _oH is not None and "Direct" in _oH and "Tangent" in _oH and "Adjoint" in _oH:
+ self.StoredVariables["APosterioriCorrelations"].store(_C)
+ if (
+ _oH is not None
+ and "Direct" in _oH
+ and "Tangent" in _oH
+ and "Adjoint" in _oH
+ ):
logging.debug(
"%s Nombre d'évaluation(s) de l'opérateur d'observation direct/tangent/adjoint.: %i/%i/%i",
- self._name, _oH["Direct"].nbcalls(0), _oH["Tangent"].nbcalls(0), _oH["Adjoint"].nbcalls(0))
+ self._name,
+ _oH["Direct"].nbcalls(0),
+ _oH["Tangent"].nbcalls(0),
+ _oH["Adjoint"].nbcalls(0),
+ )
logging.debug(
"%s Nombre d'appels au cache d'opérateur d'observation direct/tangent/adjoint..: %i/%i/%i",
- self._name, _oH["Direct"].nbcalls(3), _oH["Tangent"].nbcalls(3), _oH["Adjoint"].nbcalls(3))
- if _oM is not None and "Direct" in _oM and "Tangent" in _oM and "Adjoint" in _oM:
+ self._name,
+ _oH["Direct"].nbcalls(3),
+ _oH["Tangent"].nbcalls(3),
+ _oH["Adjoint"].nbcalls(3),
+ )
+ if (
+ _oM is not None
+ and "Direct" in _oM
+ and "Tangent" in _oM
+ and "Adjoint" in _oM
+ ):
logging.debug(
"%s Nombre d'évaluation(s) de l'opérateur d'évolution direct/tangent/adjoint.: %i/%i/%i",
- self._name, _oM["Direct"].nbcalls(0), _oM["Tangent"].nbcalls(0), _oM["Adjoint"].nbcalls(0))
+ self._name,
+ _oM["Direct"].nbcalls(0),
+ _oM["Tangent"].nbcalls(0),
+ _oM["Adjoint"].nbcalls(0),
+ )
logging.debug(
"%s Nombre d'appels au cache d'opérateur d'évolution direct/tangent/adjoint..: %i/%i/%i",
- self._name, _oM["Direct"].nbcalls(3), _oM["Tangent"].nbcalls(3), _oM["Adjoint"].nbcalls(3))
- logging.debug("%s Taille mémoire utilisée de %.0f Mio", self._name, self._m.getUsedMemory("Mio"))
- logging.debug("%s Durées d'utilisation CPU de %.1fs et elapsed de %.1fs", self._name, self._getTimeState()[0], self._getTimeState()[1])
+ self._name,
+ _oM["Direct"].nbcalls(3),
+ _oM["Tangent"].nbcalls(3),
+ _oM["Adjoint"].nbcalls(3),
+ )
+ logging.debug(
+ "%s Taille mémoire utilisée de %.0f Mio",
+ self._name,
+ self._m.getUsedMemory("Mio"),
+ )
+ logging.debug(
+ "%s Durées d'utilisation CPU de %.1fs et elapsed de %.1fs",
+ self._name,
+ self._getTimeState()[0],
+ self._getTimeState()[1],
+ )
logging.debug("%s Terminé", self._name)
return 0
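In `_post_run` above, the a posteriori correlation matrix is obtained by normalizing each stored covariance matrix `_A` with the inverse square roots of its diagonal: `_EI = diag(1/sqrt(diag(A)))`, then `C = EI · A · EI`. The same computation in isolation, as a sketch:

```python
import numpy

def correlations_from_covariance(A):
    # C = D^{-1/2} A D^{-1/2} with D = diag(A): the result has a unit
    # diagonal, and off-diagonal entries are correlation coefficients.
    A = numpy.asarray(A, dtype=float)
    _EI = numpy.diag(1.0 / numpy.sqrt(numpy.diag(A)))
    return _EI @ A @ _EI

# Variances 4 and 9, covariance 2 -> correlation 2 / (2 * 3) = 1/3
A = numpy.array([[4.0, 2.0], [2.0, 9.0]])
C = correlations_from_covariance(A)
```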
def pop(self, k, d):
"D.pop(k[,d]) -> v, remove specified key and return the corresponding value"
- if hasattr(self, "StoredVariables") and k.lower() in self.__canonical_stored_name:
+ if (
+ hasattr(self, "StoredVariables")
+ and k.lower() in self.__canonical_stored_name
+ ):
return self.StoredVariables.pop(self.__canonical_stored_name[k.lower()], d)
else:
try:
- msg = "'%s'"%k
+ msg = "'%s'" % k
except Exception:
raise TypeError("pop expected at least 1 arguments, got 0")
"If key is not found, d is returned if given, otherwise KeyError is raised"
except Exception:
raise KeyError(msg)
- def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
+ def run(
+ self,
+ Xb=None,
+ Y=None,
+ U=None,
+ HO=None,
+ EM=None,
+ CM=None,
+ R=None,
+ B=None,
+ Q=None,
+ Parameters=None,
+ ):
"""
Doit implémenter l'opération élémentaire de calcul algorithmique.
"""
- raise NotImplementedError("Mathematical algorithmic calculation has not been implemented!")
+ raise NotImplementedError(
+ "Mathematical algorithmic calculation has not been implemented!"
+ )
def defineRequiredParameter(
- self,
- name = None,
- default = None,
- typecast = None,
- message = None,
- minval = None,
- maxval = None,
- listval = None,
- listadv = None,
- oldname = None ):
+ self,
+ name=None,
+ default=None,
+ typecast=None,
+ message=None,
+ minval=None,
+ maxval=None,
+ listval=None,
+ listadv=None,
+ oldname=None,
+ ):
"""
Permet de définir dans l'algorithme des paramètres requis et leurs
caractéristiques par défaut.
raise ValueError("A name is mandatory to define a required parameter.")
#
self.__required_parameters[name] = {
- "default" : default, # noqa: E203
- "typecast" : typecast, # noqa: E203
- "minval" : minval, # noqa: E203
- "maxval" : maxval, # noqa: E203
- "listval" : listval, # noqa: E203
- "listadv" : listadv, # noqa: E203
- "message" : message, # noqa: E203
- "oldname" : oldname, # noqa: E203
+ "default": default,
+ "typecast": typecast,
+ "minval": minval,
+ "maxval": maxval,
+ "listval": listval,
+ "listadv": listadv,
+ "message": message,
+ "oldname": oldname,
}
self.__canonical_parameter_name[name.lower()] = name
if oldname is not None:
self.__canonical_parameter_name[oldname.lower()] = name # Conversion
self.__replace_by_the_new_name[oldname.lower()] = name
- logging.debug("%s %s (valeur par défaut = %s)", self._name, message, self.setParameterValue(name))
+ logging.debug(
+ "%s %s (valeur par défaut = %s)",
+ self._name,
+ message,
+ self.setParameterValue(name),
+ )
def getRequiredParameters(self, noDetails=True):
"""
Renvoie la valeur d'un paramètre requis de manière contrôlée
"""
__k = self.__canonical_parameter_name[name.lower()]
- default = self.__required_parameters[__k]["default"]
+ default = self.__required_parameters[__k]["default"]
typecast = self.__required_parameters[__k]["typecast"]
- minval = self.__required_parameters[__k]["minval"]
- maxval = self.__required_parameters[__k]["maxval"]
- listval = self.__required_parameters[__k]["listval"]
- listadv = self.__required_parameters[__k]["listadv"]
+ minval = self.__required_parameters[__k]["minval"]
+ maxval = self.__required_parameters[__k]["maxval"]
+ listval = self.__required_parameters[__k]["listval"]
+ listadv = self.__required_parameters[__k]["listadv"]
#
if value is None and default is None:
__val = None
if typecast is None:
__val = default
else:
- __val = typecast( default )
+ __val = typecast(default)
else:
if typecast is None:
__val = value
else:
try:
- __val = typecast( value )
+ __val = typecast(value)
except Exception:
- raise ValueError("The value '%s' for the parameter named '%s' can not be correctly evaluated with type '%s'."%(value, __k, typecast))
+ raise ValueError(
+ "The value '%s' for the parameter named '%s' can not be correctly evaluated with type '%s'."
+ % (value, __k, typecast)
+ )
#
if minval is not None and (numpy.array(__val, float) < minval).any():
- raise ValueError("The parameter named '%s' of value '%s' can not be less than %s."%(__k, __val, minval))
+ raise ValueError(
+ "The parameter named '%s' of value '%s' can not be less than %s."
+ % (__k, __val, minval)
+ )
if maxval is not None and (numpy.array(__val, float) > maxval).any():
- raise ValueError("The parameter named '%s' of value '%s' can not be greater than %s."%(__k, __val, maxval))
+ raise ValueError(
+ "The parameter named '%s' of value '%s' can not be greater than %s."
+ % (__k, __val, maxval)
+ )
if listval is not None or listadv is not None:
- if typecast is list or typecast is tuple or isinstance(__val, list) or isinstance(__val, tuple):
+ if (
+ typecast is list
+ or typecast is tuple
+ or isinstance(__val, list)
+ or isinstance(__val, tuple)
+ ):
for v in __val:
if listval is not None and v in listval:
continue
elif listadv is not None and v in listadv:
continue
else:
- raise ValueError("The value '%s' is not allowed for the parameter named '%s', it has to be in the list %s."%(v, __k, listval))
- elif not (listval is not None and __val in listval) and not (listadv is not None and __val in listadv):
- raise ValueError("The value '%s' is not allowed for the parameter named '%s', it has to be in the list %s."%(__val, __k, listval))
+ raise ValueError(
+ "The value '%s' is not allowed for the parameter named '%s', it has to be in the list %s."
+ % (v, __k, listval)
+ )
+ elif not (listval is not None and __val in listval) and not (
+ listadv is not None and __val in listadv
+ ):
+ raise ValueError(
+ "The value '%s' is not allowed for the parameter named '%s', it has to be in the list %s."
+ % (__val, __k, listval)
+ )
#
- if __k in ["SetSeed",]:
+ if __k in [
+ "SetSeed",
+ ]:
__val = value
#
return __val
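The method above applies the declared `typecast` first, then enforces `minval`/`maxval` bounds and `listval`/`listadv` membership. The validation order can be sketched on scalars (a simplified stand-in, not the method's real signature):

```python
def check_value(value, typecast=None, minval=None, maxval=None, listval=None):
    # Cast first, then validate; a scalar comparison keeps the
    # sketch dependency-free (the real code uses numpy arrays).
    val = value if typecast is None else typecast(value)
    if minval is not None and val < minval:
        raise ValueError("value %s can not be less than %s" % (val, minval))
    if maxval is not None and val > maxval:
        raise ValueError("value %s can not be greater than %s" % (val, maxval))
    if listval is not None and val not in listval:
        raise ValueError("value %s is not in the list %s" % (val, listval))
    return val
```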
"""
Permet d'imposer des arguments de calcul requis en entrée.
"""
- self.__required_inputs["RequiredInputValues"]["mandatory"] = tuple( mandatory )
- self.__required_inputs["RequiredInputValues"]["optional"] = tuple( optional )
+ self.__required_inputs["RequiredInputValues"]["mandatory"] = tuple(mandatory)
+ self.__required_inputs["RequiredInputValues"]["optional"] = tuple(optional)
def getInputArguments(self):
"""
Permet d'obtenir les listes des arguments de calcul requis en entrée.
"""
- return self.__required_inputs["RequiredInputValues"]["mandatory"], self.__required_inputs["RequiredInputValues"]["optional"]
+ return (
+ self.__required_inputs["RequiredInputValues"]["mandatory"],
+ self.__required_inputs["RequiredInputValues"]["optional"],
+ )
def setAttributes(self, tags=(), features=()):
"""
Permet d'adjoindre des attributs comme les tags de classification.
Renvoie la liste actuelle dans tous les cas.
"""
- self.__required_inputs["AttributesTags"].extend( tags )
- self.__required_inputs["AttributesFeatures"].extend( features )
- return (self.__required_inputs["AttributesTags"], self.__required_inputs["AttributesFeatures"])
+ self.__required_inputs["AttributesTags"].extend(tags)
+ self.__required_inputs["AttributesFeatures"].extend(features)
+ return (
+ self.__required_inputs["AttributesTags"],
+ self.__required_inputs["AttributesFeatures"],
+ )
def __setParameters(self, fromDico={}, reset=False):
"""
Permet de stocker les paramètres reçus dans le dictionnaire interne.
"""
- self._parameters.update( fromDico )
+ self._parameters.update(fromDico)
__inverse_fromDico_keys = {}
for k in fromDico.keys():
if k.lower() in self.__canonical_parameter_name:
for k in __inverse_fromDico_keys.values():
if k.lower() in self.__replace_by_the_new_name:
__newk = self.__replace_by_the_new_name[k.lower()]
- __msg = "the parameter \"%s\" used in \"%s\" algorithm case is deprecated and has to be replaced by \"%s\"."%(k, self._name, __newk)
+ __msg = (
+ 'the parameter "%s" used in "%s" algorithm case is deprecated and has to be replaced by "%s".'
+ % (k, self._name, __newk)
+ )
__msg += " Please update your code."
warnings.warn(__msg, FutureWarning, stacklevel=50)
#
for k in self.__required_parameters.keys():
if k in __canonic_fromDico_keys:
- self._parameters[k] = self.setParameterValue(k, fromDico[__inverse_fromDico_keys[k]])
+ self._parameters[k] = self.setParameterValue(
+ k, fromDico[__inverse_fromDico_keys[k]]
+ )
elif reset:
self._parameters[k] = self.setParameterValue(k)
else:
pass
if hasattr(self._parameters[k], "size") and self._parameters[k].size > 100:
- logging.debug("%s %s d'une taille totale de %s", self._name, self.__required_parameters[k]["message"], self._parameters[k].size)
- elif hasattr(self._parameters[k], "__len__") and len(self._parameters[k]) > 100:
- logging.debug("%s %s de longueur %s", self._name, self.__required_parameters[k]["message"], len(self._parameters[k]))
+ logging.debug(
+ "%s %s d'une taille totale de %s",
+ self._name,
+ self.__required_parameters[k]["message"],
+ self._parameters[k].size,
+ )
+ elif (
+ hasattr(self._parameters[k], "__len__")
+ and len(self._parameters[k]) > 100
+ ):
+ logging.debug(
+ "%s %s de longueur %s",
+ self._name,
+ self.__required_parameters[k]["message"],
+ len(self._parameters[k]),
+ )
else:
- logging.debug("%s %s : %s", self._name, self.__required_parameters[k]["message"], self._parameters[k])
+ logging.debug(
+ "%s %s : %s",
+ self._name,
+ self.__required_parameters[k]["message"],
+ self._parameters[k],
+ )
def _setInternalState(self, key=None, value=None, fromDico={}, reset=False):
"""
self.__internal_state = {}
if key is not None and value is not None:
self.__internal_state[key] = value
- self.__internal_state.update( dict(fromDico) )
+ self.__internal_state.update(dict(fromDico))
def _getInternalState(self, key=None):
"""
Initialise ou restitue le temps de calcul (cpu/elapsed) à la seconde
"""
if reset:
- self.__initial_cpu_time = time.process_time()
- self.__initial_elapsed_time = time.perf_counter()
- return 0., 0.
+ self.__initial_cpu_time = time.process_time()
+ self.__initial_elapsed_time = time.perf_counter()
+ return 0.0, 0.0
else:
- self.__cpu_time = time.process_time() - self.__initial_cpu_time
+ self.__cpu_time = time.process_time() - self.__initial_cpu_time
self.__elapsed_time = time.perf_counter() - self.__initial_elapsed_time
return self.__cpu_time, self.__elapsed_time
def _StopOnTimeLimit(self, X=None, withReason=False):
"Stop criteria on time limit: True/False [+ Reason]"
c, e = self._getTimeState()
- if "MaximumCpuTime" in self._parameters and c > self._parameters["MaximumCpuTime"]:
- __SC, __SR = True, "Reached maximum CPU time (%.1fs > %.1fs)"%(c, self._parameters["MaximumCpuTime"])
- elif "MaximumElapsedTime" in self._parameters and e > self._parameters["MaximumElapsedTime"]:
- __SC, __SR = True, "Reached maximum elapsed time (%.1fs > %.1fs)"%(e, self._parameters["MaximumElapsedTime"])
+ if (
+ "MaximumCpuTime" in self._parameters
+ and c > self._parameters["MaximumCpuTime"]
+ ):
+ __SC, __SR = True, "Reached maximum CPU time (%.1fs > %.1fs)" % (
+ c,
+ self._parameters["MaximumCpuTime"],
+ )
+ elif (
+ "MaximumElapsedTime" in self._parameters
+ and e > self._parameters["MaximumElapsedTime"]
+ ):
+ __SC, __SR = True, "Reached maximum elapsed time (%.1fs > %.1fs)" % (
+ e,
+ self._parameters["MaximumElapsedTime"],
+ )
else:
__SC, __SR = False, ""
if withReason:
else:
return __SC
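`_getTimeState` and `_StopOnTimeLimit` together implement a CPU/elapsed time budget: record `time.process_time()` and `time.perf_counter()` at reset, then compare the deltas against `MaximumCpuTime`/`MaximumElapsedTime`. A self-contained sketch of the pattern (`TimeBudget` is an illustrative class, not ADAO's):

```python
import time

class TimeBudget:
    # Minimal sketch of the cpu/elapsed time-limit pattern used by
    # _getTimeState and _StopOnTimeLimit.
    def __init__(self, max_cpu=None, max_elapsed=None):
        self.max_cpu = max_cpu
        self.max_elapsed = max_elapsed
        self._cpu0 = time.process_time()
        self._t0 = time.perf_counter()

    def state(self):
        # (cpu seconds, wall-clock seconds) since construction
        return (time.process_time() - self._cpu0,
                time.perf_counter() - self._t0)

    def exceeded(self):
        c, e = self.state()
        if self.max_cpu is not None and c > self.max_cpu:
            return True, "Reached maximum CPU time (%.1fs > %.1fs)" % (c, self.max_cpu)
        if self.max_elapsed is not None and e > self.max_elapsed:
            return True, "Reached maximum elapsed time (%.1fs > %.1fs)" % (e, self.max_elapsed)
        return False, ""

budget = TimeBudget(max_elapsed=3600.0)
stop, reason = budget.exceeded()
```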
+
# ==============================================================================
class PartialAlgorithm(object):
"""
    advanced action such as checking. For the methods reproduced here, the
    behavior is identical to that of the "Algorithm" class.
"""
+
__slots__ = (
- "_name", "_parameters", "StoredVariables", "__canonical_stored_name",
+ "_name",
+ "_parameters",
+ "StoredVariables",
+ "__canonical_stored_name",
)
def __init__(self, name):
- self._name = str( name )
+ self._name = str(name)
self._parameters = {"StoreSupplementaryCalculations": []}
#
self.StoredVariables = {}
- self.StoredVariables["Analysis"] = Persistence.OneVector(name = "Analysis")
- self.StoredVariables["CostFunctionJ"] = Persistence.OneScalar(name = "CostFunctionJ")
- self.StoredVariables["CostFunctionJb"] = Persistence.OneScalar(name = "CostFunctionJb")
- self.StoredVariables["CostFunctionJo"] = Persistence.OneScalar(name = "CostFunctionJo")
- self.StoredVariables["CurrentIterationNumber"] = Persistence.OneIndex(name = "CurrentIterationNumber")
- self.StoredVariables["CurrentStepNumber"] = Persistence.OneIndex(name = "CurrentStepNumber")
+ self.StoredVariables["Analysis"] = Persistence.OneVector(name="Analysis")
+ self.StoredVariables["CostFunctionJ"] = Persistence.OneScalar(
+ name="CostFunctionJ"
+ )
+ self.StoredVariables["CostFunctionJb"] = Persistence.OneScalar(
+ name="CostFunctionJb"
+ )
+ self.StoredVariables["CostFunctionJo"] = Persistence.OneScalar(
+ name="CostFunctionJo"
+ )
+ self.StoredVariables["CurrentIterationNumber"] = Persistence.OneIndex(
+ name="CurrentIterationNumber"
+ )
+ self.StoredVariables["CurrentStepNumber"] = Persistence.OneIndex(
+ name="CurrentStepNumber"
+ )
#
self.__canonical_stored_name = {}
for k in self.StoredVariables:
else:
return self.StoredVariables
+
# ==============================================================================
class AlgorithmAndParameters(object):
"""
    General action interface class for the algorithm and its parameters
"""
+
__slots__ = (
- "__name", "__algorithm", "__algorithmFile", "__algorithmName", "__A",
- "__P", "__Xb", "__Y", "__U", "__HO", "__EM", "__CM", "__B", "__R",
- "__Q", "__variable_names_not_public",
+ "__name",
+ "__algorithm",
+ "__algorithmFile",
+ "__algorithmName",
+ "__A",
+ "__P",
+ "__Xb",
+ "__Y",
+ "__U",
+ "__HO",
+ "__EM",
+ "__CM",
+ "__B",
+ "__R",
+ "__Q",
+ "__variable_names_not_public",
)
- def __init__(self,
- name = "GenericAlgorithm",
- asAlgorithm = None,
- asDict = None,
- asScript = None ):
- """
- """
- self.__name = str(name)
- self.__A = None
- self.__P = {}
+ def __init__(
+ self, name="GenericAlgorithm", asAlgorithm=None, asDict=None, asScript=None
+ ):
+ """ """
+ self.__name = str(name)
+ self.__A = None
+ self.__P = {}
#
- self.__algorithm = {}
- self.__algorithmFile = None
- self.__algorithmName = None
+ self.__algorithm = {}
+ self.__algorithmFile = None
+ self.__algorithmName = None
#
- self.updateParameters( asDict, asScript )
+ self.updateParameters(asDict, asScript)
#
if asAlgorithm is None and asScript is not None:
- __Algo = Interfaces.ImportFromScript(asScript).getvalue( "Algorithm" )
+ __Algo = Interfaces.ImportFromScript(asScript).getvalue("Algorithm")
else:
__Algo = asAlgorithm
#
if __Algo is not None:
self.__A = str(__Algo)
- self.__P.update( {"Algorithm": self.__A} )
+ self.__P.update({"Algorithm": self.__A})
#
- self.__setAlgorithm( self.__A )
+ self.__setAlgorithm(self.__A)
#
- self.__variable_names_not_public = {"nextStep": False} # Duplication dans Algorithm
+ self.__variable_names_not_public = {
+ "nextStep": False
+        }  # Duplicated in Algorithm
- def updateParameters(self, asDict = None, asScript = None ):
+ def updateParameters(self, asDict=None, asScript=None):
"Mise à jour des paramètres"
if asDict is None and asScript is not None:
- __Dict = Interfaces.ImportFromScript(asScript).getvalue( self.__name, "Parameters" )
+ __Dict = Interfaces.ImportFromScript(asScript).getvalue(
+ self.__name, "Parameters"
+ )
else:
__Dict = asDict
#
if __Dict is not None:
- self.__P.update( dict(__Dict) )
+ self.__P.update(dict(__Dict))
- def executePythonScheme(self, asDictAO = None):
+ def executePythonScheme(self, asDictAO=None):
"Permet de lancer le calcul d'assimilation"
Operator.CM.clearCache()
#
if not isinstance(asDictAO, dict):
- raise ValueError("The objects for algorithm calculation have to be given together as a dictionnary, and they are not")
- if hasattr(asDictAO["Background"], "getO"): self.__Xb = asDictAO["Background"].getO() # noqa: E241,E701
- elif hasattr(asDictAO["CheckingPoint"], "getO"): self.__Xb = asDictAO["CheckingPoint"].getO() # noqa: E241,E701
- else: self.__Xb = None # noqa: E241,E701
- if hasattr(asDictAO["Observation"], "getO"): self.__Y = asDictAO["Observation"].getO() # noqa: E241,E701
- else: self.__Y = asDictAO["Observation"] # noqa: E241,E701
- if hasattr(asDictAO["ControlInput"], "getO"): self.__U = asDictAO["ControlInput"].getO() # noqa: E241,E701
- else: self.__U = asDictAO["ControlInput"] # noqa: E241,E701
- if hasattr(asDictAO["ObservationOperator"], "getO"): self.__HO = asDictAO["ObservationOperator"].getO() # noqa: E241,E701
- else: self.__HO = asDictAO["ObservationOperator"] # noqa: E241,E701
- if hasattr(asDictAO["EvolutionModel"], "getO"): self.__EM = asDictAO["EvolutionModel"].getO() # noqa: E241,E701
- else: self.__EM = asDictAO["EvolutionModel"] # noqa: E241,E701
- if hasattr(asDictAO["ControlModel"], "getO"): self.__CM = asDictAO["ControlModel"].getO() # noqa: E241,E701
- else: self.__CM = asDictAO["ControlModel"] # noqa: E241,E701
+ raise ValueError(
+                "The objects for algorithm calculation have to be given together as a dictionary, and they are not"
+ )
+ if hasattr(asDictAO["Background"], "getO"):
+ self.__Xb = asDictAO["Background"].getO()
+ elif hasattr(asDictAO["CheckingPoint"], "getO"):
+ self.__Xb = asDictAO["CheckingPoint"].getO()
+ else:
+ self.__Xb = None
+ if hasattr(asDictAO["Observation"], "getO"):
+ self.__Y = asDictAO["Observation"].getO()
+ else:
+ self.__Y = asDictAO["Observation"]
+ if hasattr(asDictAO["ControlInput"], "getO"):
+ self.__U = asDictAO["ControlInput"].getO()
+ else:
+ self.__U = asDictAO["ControlInput"]
+ if hasattr(asDictAO["ObservationOperator"], "getO"):
+ self.__HO = asDictAO["ObservationOperator"].getO()
+ else:
+ self.__HO = asDictAO["ObservationOperator"]
+ if hasattr(asDictAO["EvolutionModel"], "getO"):
+ self.__EM = asDictAO["EvolutionModel"].getO()
+ else:
+ self.__EM = asDictAO["EvolutionModel"]
+ if hasattr(asDictAO["ControlModel"], "getO"):
+ self.__CM = asDictAO["ControlModel"].getO()
+ else:
+ self.__CM = asDictAO["ControlModel"]
self.__B = asDictAO["BackgroundError"]
self.__R = asDictAO["ObservationError"]
self.__Q = asDictAO["EvolutionError"]
self.__shape_validate()
#
self.__algorithm.run(
- Xb = self.__Xb,
- Y = self.__Y,
- U = self.__U,
- HO = self.__HO,
- EM = self.__EM,
- CM = self.__CM,
- R = self.__R,
- B = self.__B,
- Q = self.__Q,
- Parameters = self.__P,
+ Xb=self.__Xb,
+ Y=self.__Y,
+ U=self.__U,
+ HO=self.__HO,
+ EM=self.__EM,
+ CM=self.__CM,
+ R=self.__R,
+ B=self.__B,
+ Q=self.__Q,
+ Parameters=self.__P,
)
return 0
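The long `hasattr(..., "getO")` chain in `executePythonScheme` is a duck-typing idiom: each input may be a wrapper object exposing `getO()` or a plain value. A minimal sketch of that unwrapping, where `unwrap` and `Wrapped` are hypothetical names, not ADAO API:

```python
def unwrap(obj):
    # Hypothetical helper (not ADAO API): objects exposing getO() are
    # unwrapped; plain values pass through unchanged.
    return obj.getO() if hasattr(obj, "getO") else obj

class Wrapped:
    # Minimal stand-in for an ADAO data holder exposing getO()
    def __init__(self, value):
        self._value = value

    def getO(self):
        return self._value

assert unwrap(Wrapped([1, 2])) == [1, 2]  # wrapper: unwrapped
assert unwrap([3, 4]) == [3, 4]           # plain value: passed through
```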
if FileName is None or not os.path.exists(FileName):
raise ValueError("a YACS file name has to be given for YACS execution.\n")
else:
- __file = os.path.abspath(FileName)
- logging.debug("The YACS file name is \"%s\"."%__file)
- if not PlatformInfo.has_salome or \
- not PlatformInfo.has_yacs or \
- not PlatformInfo.has_adao:
+ __file = os.path.abspath(FileName)
+ logging.debug('The YACS file name is "%s".' % __file)
+ if (
+ not PlatformInfo.has_salome
+ or not PlatformInfo.has_yacs
+ or not PlatformInfo.has_adao
+ ):
raise ImportError(
- "\n\n" + \
- "Unable to get SALOME, YACS or ADAO environnement variables.\n" + \
- "Please load the right environnement before trying to use it.\n" )
+ "\n\n"
+                + "Unable to get SALOME, YACS or ADAO environment variables.\n"
+                + "Please load the right environment before trying to use it.\n"
+ )
#
import pilot
import SALOMERuntime
import loader
+
SALOMERuntime.RuntimeSALOME_setRuntime()
r = pilot.getRuntime()
try:
p = xmlLoader.load(__file)
except IOError as ex:
- print("The YACS XML schema file can not be loaded: %s"%(ex,))
+            print("The YACS XML schema file cannot be loaded: %s" % (ex,))
logger = p.getLogger("parser")
if not logger.isEmpty():
#
return 0
- def get(self, key = None):
+ def get(self, key=None):
"Vérifie l'existence d'une clé de variable ou de paramètres"
if key in self.__algorithm:
- return self.__algorithm.get( key )
+ return self.__algorithm.get(key)
elif key in self.__P:
return self.__P[key]
else:
def setObserver(self, __V, __O, __I, __A, __S):
"Associe un observer à une variable unique"
- if self.__algorithm is None \
- or isinstance(self.__algorithm, dict) \
- or not hasattr(self.__algorithm, "StoredVariables"):
+ if (
+ self.__algorithm is None
+ or isinstance(self.__algorithm, dict)
+ or not hasattr(self.__algorithm, "StoredVariables")
+ ):
raise ValueError("No observer can be build before choosing an algorithm.")
if __V not in self.__algorithm:
- raise ValueError("An observer requires to be set on a variable named %s which does not exist."%__V)
+ raise ValueError(
+                "An observer must be set on a variable named %s, which does not exist."
+ % __V
+ )
else:
- self.__algorithm.StoredVariables[ __V ].setDataObserver( HookFunction = __O, HookParameters = __I, Scheduler = __S )
+ self.__algorithm.StoredVariables[__V].setDataObserver(
+ HookFunction=__O, HookParameters=__I, Scheduler=__S
+ )
def setCrossObserver(self, __V, __O, __I, __A, __S):
"Associe un observer à une collection ordonnée de variables"
- if self.__algorithm is None \
- or isinstance(self.__algorithm, dict) \
- or not hasattr(self.__algorithm, "StoredVariables"):
+ if (
+ self.__algorithm is None
+ or isinstance(self.__algorithm, dict)
+ or not hasattr(self.__algorithm, "StoredVariables")
+ ):
raise ValueError("No observer can be build before choosing an algorithm.")
if not isinstance(__V, (list, tuple)):
- raise ValueError("A cross observer requires to be set on a variable series which is not the case of %s."%__V)
+ raise ValueError(
+                "A cross observer must be set on a series of variables,"
+                + " which is not the case of %s." % __V
+ )
if len(__V) != len(__I):
- raise ValueError("The number of information fields has to be the same than the number of variables on which to set the observer.")
+ raise ValueError(
+                "The number of information fields has to be the same as the"
+ + " number of variables on which to set the observer."
+ )
#
for __eV in __V:
if __eV not in self.__algorithm:
- raise ValueError("An observer requires to be set on a variable named %s which does not exist."%__eV)
+ raise ValueError(
+                    "An observer must be set on a variable named %s, which does not exist."
+ % __eV
+ )
else:
- self.__algorithm.StoredVariables[ __eV ].setDataObserver( HookFunction = __O, HookParameters = __I, Scheduler = __S, Order = __V, OSync = __A, DOVar = self.__algorithm.StoredVariables )
-
- def removeObserver(self, __V, __O, __A = False):
- if self.__algorithm is None \
- or isinstance(self.__algorithm, dict) \
- or not hasattr(self.__algorithm, "StoredVariables"):
+ self.__algorithm.StoredVariables[__eV].setDataObserver(
+ HookFunction=__O,
+ HookParameters=__I,
+ Scheduler=__S,
+ Order=__V,
+ OSync=__A,
+ DOVar=self.__algorithm.StoredVariables,
+ )
+
+ def removeObserver(self, __V, __O, __A=False):
+ if (
+ self.__algorithm is None
+ or isinstance(self.__algorithm, dict)
+ or not hasattr(self.__algorithm, "StoredVariables")
+ ):
raise ValueError("No observer can be removed before choosing an algorithm.")
if __V not in self.__algorithm:
- raise ValueError("An observer requires to be removed on a variable named %s which does not exist."%__V)
+ raise ValueError(
+                "An observer must be removed from a variable named %s, which does not exist."
+ % __V
+ )
else:
- return self.__algorithm.StoredVariables[ __V ].removeDataObserver( HookFunction = __O, AllObservers = __A )
+ return self.__algorithm.StoredVariables[__V].removeDataObserver(
+ HookFunction=__O, AllObservers=__A
+ )
def hasObserver(self, __V):
- if self.__algorithm is None \
- or isinstance(self.__algorithm, dict) \
- or not hasattr(self.__algorithm, "StoredVariables"):
+ if (
+ self.__algorithm is None
+ or isinstance(self.__algorithm, dict)
+ or not hasattr(self.__algorithm, "StoredVariables")
+ ):
return False
if __V not in self.__algorithm:
return False
- return self.__algorithm.StoredVariables[ __V ].hasDataObserver()
+ return self.__algorithm.StoredVariables[__V].hasDataObserver()
def keys(self):
__allvariables = list(self.__algorithm.keys()) + list(self.__P.keys())
"x.__str__() <==> str(x)"
return str(self.__A) + ", " + str(self.__P)
- def __setAlgorithm(self, choice = None ):
+ def __setAlgorithm(self, choice=None):
"""
        Selects the algorithm to be used to carry out the assimilation study.
        The argument is a character field referring to the name
if choice is None:
raise ValueError("Error: algorithm choice has to be given")
if self.__algorithmName is not None:
- raise ValueError("Error: algorithm choice has already been done as \"%s\", it can't be changed."%self.__algorithmName)
+ raise ValueError(
+                'Error: algorithm choice has already been done as "%s"; it cannot be changed.'
+ % self.__algorithmName
+ )
daDirectory = "daAlgorithms"
#
        # Explicitly search for the complete file
# ------------------------------------------
module_path = None
for directory in sys.path:
- if os.path.isfile(os.path.join(directory, daDirectory, str(choice) + '.py')):
+ if os.path.isfile(
+ os.path.join(directory, daDirectory, str(choice) + ".py")
+ ):
module_path = os.path.abspath(os.path.join(directory, daDirectory))
if module_path is None:
raise ImportError(
- "No algorithm module named \"%s\" has been found in the search path.\n The search path is %s"%(choice, sys.path))
+ 'No algorithm module named "%s" has been found in the search path.'
+ % choice
+ + "\n The search path is %s" % sys.path
+ )
#
        # Import the complete file as a module
# ------------------------------------------
sys.path.insert(0, module_path)
self.__algorithmFile = __import__(str(choice), globals(), locals(), [])
if not hasattr(self.__algorithmFile, "ElementaryAlgorithm"):
- raise ImportError("this module does not define a valid elementary algorithm.")
+ raise ImportError(
+ "this module does not define a valid elementary algorithm."
+ )
self.__algorithmName = str(choice)
sys.path = sys_path_tmp
del sys_path_tmp
except ImportError as e:
raise ImportError(
- "The module named \"%s\" was found, but is incorrect at the import stage.\n The import error message is: %s"%(choice, e))
+ 'The module named "%s" was found, but is incorrect at the import stage.'
+ % choice
+ + "\n The import error message is: %s" % e
+ )
#
        # Instantiate an object of the file's elementary type
# -------------------------------------------------
        Validates the correct correspondence of the sizes of the variables and
        of the matrices, if any.
"""
- if self.__Xb is None: __Xb_shape = (0,) # noqa: E241,E701
- elif hasattr(self.__Xb, "size"): __Xb_shape = (self.__Xb.size,) # noqa: E241,E701
+ if self.__Xb is None:
+ __Xb_shape = (0,)
+ elif hasattr(self.__Xb, "size"):
+ __Xb_shape = (self.__Xb.size,)
elif hasattr(self.__Xb, "shape"):
- if isinstance(self.__Xb.shape, tuple): __Xb_shape = self.__Xb.shape # noqa: E241,E701
- else: __Xb_shape = self.__Xb.shape() # noqa: E241,E701
- else: raise TypeError("The background (Xb) has no attribute of shape: problem !") # noqa: E701
+ if isinstance(self.__Xb.shape, tuple):
+ __Xb_shape = self.__Xb.shape
+ else:
+ __Xb_shape = self.__Xb.shape()
+ else:
+ raise TypeError("The background (Xb) has no attribute of shape: problem!")
#
- if self.__Y is None: __Y_shape = (0,) # noqa: E241,E701
- elif hasattr(self.__Y, "size"): __Y_shape = (self.__Y.size,) # noqa: E241,E701
+ if self.__Y is None:
+ __Y_shape = (0,)
+ elif hasattr(self.__Y, "size"):
+ __Y_shape = (self.__Y.size,)
elif hasattr(self.__Y, "shape"):
- if isinstance(self.__Y.shape, tuple): __Y_shape = self.__Y.shape # noqa: E241,E701
- else: __Y_shape = self.__Y.shape() # noqa: E241,E701
- else: raise TypeError("The observation (Y) has no attribute of shape: problem !") # noqa: E701
+ if isinstance(self.__Y.shape, tuple):
+ __Y_shape = self.__Y.shape
+ else:
+ __Y_shape = self.__Y.shape()
+ else:
+ raise TypeError("The observation (Y) has no attribute of shape: problem!")
#
- if self.__U is None: __U_shape = (0,) # noqa: E241,E701
- elif hasattr(self.__U, "size"): __U_shape = (self.__U.size,) # noqa: E241,E701
+ if self.__U is None:
+ __U_shape = (0,)
+ elif hasattr(self.__U, "size"):
+ __U_shape = (self.__U.size,)
elif hasattr(self.__U, "shape"):
- if isinstance(self.__U.shape, tuple): __U_shape = self.__U.shape # noqa: E241,E701
- else: __U_shape = self.__U.shape() # noqa: E241,E701
- else: raise TypeError("The control (U) has no attribute of shape: problem !") # noqa: E701
+ if isinstance(self.__U.shape, tuple):
+ __U_shape = self.__U.shape
+ else:
+ __U_shape = self.__U.shape()
+ else:
+ raise TypeError("The control (U) has no attribute of shape: problem!")
#
- if self.__B is None: __B_shape = (0, 0) # noqa: E241,E701
+ if self.__B is None:
+ __B_shape = (0, 0)
elif hasattr(self.__B, "shape"):
- if isinstance(self.__B.shape, tuple): __B_shape = self.__B.shape # noqa: E241,E701
- else: __B_shape = self.__B.shape() # noqa: E241,E701
- else: raise TypeError("The a priori errors covariance matrix (B) has no attribute of shape: problem !") # noqa: E701
+ if isinstance(self.__B.shape, tuple):
+ __B_shape = self.__B.shape
+ else:
+ __B_shape = self.__B.shape()
+ else:
+ raise TypeError(
+ "The a priori errors covariance matrix (B) has no attribute of shape: problem!"
+ )
#
- if self.__R is None: __R_shape = (0, 0) # noqa: E241,E701
+ if self.__R is None:
+ __R_shape = (0, 0)
elif hasattr(self.__R, "shape"):
- if isinstance(self.__R.shape, tuple): __R_shape = self.__R.shape # noqa: E241,E701
- else: __R_shape = self.__R.shape() # noqa: E241,E701
- else: raise TypeError("The observation errors covariance matrix (R) has no attribute of shape: problem !") # noqa: E701
+ if isinstance(self.__R.shape, tuple):
+ __R_shape = self.__R.shape
+ else:
+ __R_shape = self.__R.shape()
+ else:
+ raise TypeError(
+ "The observation errors covariance matrix (R) has no attribute of shape: problem!"
+ )
#
- if self.__Q is None: __Q_shape = (0, 0) # noqa: E241,E701
+ if self.__Q is None:
+ __Q_shape = (0, 0)
elif hasattr(self.__Q, "shape"):
- if isinstance(self.__Q.shape, tuple): __Q_shape = self.__Q.shape # noqa: E241,E701
- else: __Q_shape = self.__Q.shape() # noqa: E241,E701
- else: raise TypeError("The evolution errors covariance matrix (Q) has no attribute of shape: problem !") # noqa: E701
+ if isinstance(self.__Q.shape, tuple):
+ __Q_shape = self.__Q.shape
+ else:
+ __Q_shape = self.__Q.shape()
+ else:
+ raise TypeError(
+ "The evolution errors covariance matrix (Q) has no attribute of shape: problem!"
+ )
#
- if len(self.__HO) == 0: __HO_shape = (0, 0) # noqa: E241,E701
- elif isinstance(self.__HO, dict): __HO_shape = (0, 0) # noqa: E241,E701
+ if len(self.__HO) == 0:
+ __HO_shape = (0, 0)
+ elif isinstance(self.__HO, dict):
+ __HO_shape = (0, 0)
elif hasattr(self.__HO["Direct"], "shape"):
- if isinstance(self.__HO["Direct"].shape, tuple): __HO_shape = self.__HO["Direct"].shape # noqa: E241,E701
- else: __HO_shape = self.__HO["Direct"].shape() # noqa: E241,E701
- else: raise TypeError("The observation operator (H) has no attribute of shape: problem !") # noqa: E701
+ if isinstance(self.__HO["Direct"].shape, tuple):
+ __HO_shape = self.__HO["Direct"].shape
+ else:
+ __HO_shape = self.__HO["Direct"].shape()
+ else:
+ raise TypeError(
+ "The observation operator (H) has no attribute of shape: problem!"
+ )
#
- if len(self.__EM) == 0: __EM_shape = (0, 0) # noqa: E241,E701
- elif isinstance(self.__EM, dict): __EM_shape = (0, 0) # noqa: E241,E701
+ if len(self.__EM) == 0:
+ __EM_shape = (0, 0)
+ elif isinstance(self.__EM, dict):
+ __EM_shape = (0, 0)
elif hasattr(self.__EM["Direct"], "shape"):
- if isinstance(self.__EM["Direct"].shape, tuple): __EM_shape = self.__EM["Direct"].shape # noqa: E241,E701
- else: __EM_shape = self.__EM["Direct"].shape() # noqa: E241,E701
- else: raise TypeError("The evolution model (EM) has no attribute of shape: problem !") # noqa: E241,E70
+ if isinstance(self.__EM["Direct"].shape, tuple):
+ __EM_shape = self.__EM["Direct"].shape
+ else:
+ __EM_shape = self.__EM["Direct"].shape()
+ else:
+ raise TypeError(
+ "The evolution model (EM) has no attribute of shape: problem!"
+ )
#
- if len(self.__CM) == 0: __CM_shape = (0, 0) # noqa: E241,E701
- elif isinstance(self.__CM, dict): __CM_shape = (0, 0) # noqa: E241,E701
+ if len(self.__CM) == 0:
+ __CM_shape = (0, 0)
+ elif isinstance(self.__CM, dict):
+ __CM_shape = (0, 0)
elif hasattr(self.__CM["Direct"], "shape"):
- if isinstance(self.__CM["Direct"].shape, tuple): __CM_shape = self.__CM["Direct"].shape # noqa: E241,E701
- else: __CM_shape = self.__CM["Direct"].shape() # noqa: E241,E701
- else: raise TypeError("The control model (CM) has no attribute of shape: problem !") # noqa: E701
+ if isinstance(self.__CM["Direct"].shape, tuple):
+ __CM_shape = self.__CM["Direct"].shape
+ else:
+ __CM_shape = self.__CM["Direct"].shape()
+ else:
+ raise TypeError(
+ "The control model (CM) has no attribute of shape: problem!"
+ )
#
        # Check the conditions
# ---------------------------
- if not ( len(__Xb_shape) == 1 or min(__Xb_shape) == 1 ):
- raise ValueError("Shape characteristic of background (Xb) is incorrect: \"%s\"."%(__Xb_shape,))
- if not ( len(__Y_shape) == 1 or min(__Y_shape) == 1 ):
- raise ValueError("Shape characteristic of observation (Y) is incorrect: \"%s\"."%(__Y_shape,))
- #
- if not ( min(__B_shape) == max(__B_shape) ):
- raise ValueError("Shape characteristic of a priori errors covariance matrix (B) is incorrect: \"%s\"."%(__B_shape,))
- if not ( min(__R_shape) == max(__R_shape) ):
- raise ValueError("Shape characteristic of observation errors covariance matrix (R) is incorrect: \"%s\"."%(__R_shape,))
- if not ( min(__Q_shape) == max(__Q_shape) ):
- raise ValueError("Shape characteristic of evolution errors covariance matrix (Q) is incorrect: \"%s\"."%(__Q_shape,))
- if not ( min(__EM_shape) == max(__EM_shape) ):
- raise ValueError("Shape characteristic of evolution operator (EM) is incorrect: \"%s\"."%(__EM_shape,))
- #
- if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and not ( __HO_shape[1] == max(__Xb_shape) ):
+ if not (len(__Xb_shape) == 1 or min(__Xb_shape) == 1):
raise ValueError(
- "Shape characteristic of observation operator (H)" + \
- " \"%s\" and state (X) \"%s\" are incompatible."%(__HO_shape, __Xb_shape))
- if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and not ( __HO_shape[0] == max(__Y_shape) ):
+ 'Shape characteristic of background (Xb) is incorrect: "%s".'
+ % (__Xb_shape,)
+ )
+ if not (len(__Y_shape) == 1 or min(__Y_shape) == 1):
+ raise ValueError(
+ 'Shape characteristic of observation (Y) is incorrect: "%s".'
+ % (__Y_shape,)
+ )
+ #
+ if not (min(__B_shape) == max(__B_shape)):
raise ValueError(
- "Shape characteristic of observation operator (H)" + \
- " \"%s\" and observation (Y) \"%s\" are incompatible."%(__HO_shape, __Y_shape))
- if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and len(self.__B) > 0 and not ( __HO_shape[1] == __B_shape[0] ):
+ 'Shape characteristic of a priori errors covariance matrix (B) is incorrect: "%s".'
+ % (__B_shape,)
+ )
+ if not (min(__R_shape) == max(__R_shape)):
raise ValueError(
- "Shape characteristic of observation operator (H)" + \
- " \"%s\" and a priori errors covariance matrix (B) \"%s\" are incompatible."%(__HO_shape, __B_shape))
- if len(self.__HO) > 0 and not isinstance(self.__HO, dict) and len(self.__R) > 0 and not ( __HO_shape[0] == __R_shape[1] ):
+ 'Shape characteristic of observation errors covariance matrix (R) is incorrect: "%s".'
+ % (__R_shape,)
+ )
+ if not (min(__Q_shape) == max(__Q_shape)):
raise ValueError(
- "Shape characteristic of observation operator (H)" + \
- " \"%s\" and observation errors covariance matrix (R) \"%s\" are incompatible."%(__HO_shape, __R_shape))
+ 'Shape characteristic of evolution errors covariance matrix (Q) is incorrect: "%s".'
+ % (__Q_shape,)
+ )
+ if not (min(__EM_shape) == max(__EM_shape)):
+ raise ValueError(
+ 'Shape characteristic of evolution operator (EM) is incorrect: "%s".'
+ % (__EM_shape,)
+ )
+ #
+ if (
+ len(self.__HO) > 0
+ and not isinstance(self.__HO, dict)
+ and not (__HO_shape[1] == max(__Xb_shape))
+ ):
+ raise ValueError(
+ "Shape characteristic of observation operator (H)"
+ + ' "%s" and state (X) "%s" are incompatible.'
+ % (__HO_shape, __Xb_shape)
+ )
+ if (
+ len(self.__HO) > 0
+ and not isinstance(self.__HO, dict)
+ and not (__HO_shape[0] == max(__Y_shape))
+ ):
+ raise ValueError(
+ "Shape characteristic of observation operator (H)"
+ + ' "%s" and observation (Y) "%s" are incompatible.'
+ % (__HO_shape, __Y_shape)
+ )
+ if (
+ len(self.__HO) > 0
+ and not isinstance(self.__HO, dict)
+ and len(self.__B) > 0
+ and not (__HO_shape[1] == __B_shape[0])
+ ):
+ raise ValueError(
+ "Shape characteristic of observation operator (H)"
+ + ' "%s" and a priori errors covariance matrix (B) "%s" are incompatible.'
+ % (__HO_shape, __B_shape)
+ )
+ if (
+ len(self.__HO) > 0
+ and not isinstance(self.__HO, dict)
+ and len(self.__R) > 0
+ and not (__HO_shape[0] == __R_shape[1])
+ ):
+ raise ValueError(
+ "Shape characteristic of observation operator (H)"
+ + ' "%s" and observation errors covariance matrix (R) "%s" are incompatible.'
+ % (__HO_shape, __R_shape)
+ )
#
- if self.__B is not None and len(self.__B) > 0 and not ( __B_shape[1] == max(__Xb_shape) ):
- if self.__algorithmName in ["EnsembleBlue",]:
+ if (
+ self.__B is not None
+ and len(self.__B) > 0
+ and not (__B_shape[1] == max(__Xb_shape))
+ ):
+ if self.__algorithmName in [
+ "EnsembleBlue",
+ ]:
asPersistentVector = self.__Xb.reshape((-1, min(__B_shape)))
self.__Xb = Persistence.OneVector("Background")
for member in asPersistentVector:
- self.__Xb.store( numpy.asarray(member, dtype=float) )
+ self.__Xb.store(numpy.asarray(member, dtype=float))
__Xb_shape = min(__B_shape)
else:
raise ValueError(
- "Shape characteristic of a priori errors covariance matrix (B)" + \
- " \"%s\" and background vector (Xb) \"%s\" are incompatible."%(__B_shape, __Xb_shape))
- #
- if self.__R is not None and len(self.__R) > 0 and not ( __R_shape[1] == max(__Y_shape) ):
+ "Shape characteristic of a priori errors covariance matrix (B)"
+ + ' "%s" and background vector (Xb) "%s" are incompatible.'
+ % (__B_shape, __Xb_shape)
+ )
+ #
+ if (
+ self.__R is not None
+ and len(self.__R) > 0
+ and not (__R_shape[1] == max(__Y_shape))
+ ):
raise ValueError(
- "Shape characteristic of observation errors covariance matrix (R)" + \
- " \"%s\" and observation vector (Y) \"%s\" are incompatible."%(__R_shape, __Y_shape))
+ "Shape characteristic of observation errors covariance matrix (R)"
+ + ' "%s" and observation vector (Y) "%s" are incompatible.'
+ % (__R_shape, __Y_shape)
+ )
#
- if self.__EM is not None and len(self.__EM) > 0 and not isinstance(self.__EM, dict) and not ( __EM_shape[1] == max(__Xb_shape) ):
+ if (
+ self.__EM is not None
+ and len(self.__EM) > 0
+ and not isinstance(self.__EM, dict)
+ and not (__EM_shape[1] == max(__Xb_shape))
+ ):
raise ValueError(
- "Shape characteristic of evolution model (EM)" + \
- " \"%s\" and state (X) \"%s\" are incompatible."%(__EM_shape, __Xb_shape))
+ "Shape characteristic of evolution model (EM)"
+ + ' "%s" and state (X) "%s" are incompatible.'
+ % (__EM_shape, __Xb_shape)
+ )
#
- if self.__CM is not None and len(self.__CM) > 0 and not isinstance(self.__CM, dict) and not ( __CM_shape[1] == max(__U_shape) ):
+ if (
+ self.__CM is not None
+ and len(self.__CM) > 0
+ and not isinstance(self.__CM, dict)
+ and not (__CM_shape[1] == max(__U_shape))
+ ):
raise ValueError(
- "Shape characteristic of control model (CM)" + \
- " \"%s\" and control (U) \"%s\" are incompatible."%(__CM_shape, __U_shape))
+ "Shape characteristic of control model (CM)"
+ + ' "%s" and control (U) "%s" are incompatible.'
+ % (__CM_shape, __U_shape)
+ )
#
- if ("Bounds" in self.__P) \
- and isinstance(self.__P["Bounds"], (list, tuple)) \
- and (len(self.__P["Bounds"]) != max(__Xb_shape)):
+ if (
+ ("Bounds" in self.__P)
+ and isinstance(self.__P["Bounds"], (list, tuple))
+ and (len(self.__P["Bounds"]) != max(__Xb_shape))
+ ):
if len(self.__P["Bounds"]) > 0:
- raise ValueError("The number '%s' of bound pairs for the state components is different from the size '%s' of the state (X) itself." \
- % (len(self.__P["Bounds"]), max(__Xb_shape)))
+ raise ValueError(
+ "The number '%s' of bound pairs for the state components"
+ % len(self.__P["Bounds"])
+ + " is different from the size '%s' of the state (X) itself."
+ % max(__Xb_shape)
+ )
else:
self.__P["Bounds"] = None
- if ("Bounds" in self.__P) \
- and isinstance(self.__P["Bounds"], (numpy.ndarray, numpy.matrix)) \
- and (self.__P["Bounds"].shape[0] != max(__Xb_shape)):
+ if (
+ ("Bounds" in self.__P)
+ and isinstance(self.__P["Bounds"], (numpy.ndarray, numpy.matrix))
+ and (self.__P["Bounds"].shape[0] != max(__Xb_shape))
+ ):
if self.__P["Bounds"].size > 0:
- raise ValueError("The number '%s' of bound pairs for the state components is different from the size '%s' of the state (X) itself." \
- % (self.__P["Bounds"].shape[0], max(__Xb_shape)))
+ raise ValueError(
+ "The number '%s' of bound pairs for the state components"
+ % self.__P["Bounds"].shape[0]
+ + " is different from the size '%s' of the state (X) itself."
+ % max(__Xb_shape)
+ )
else:
self.__P["Bounds"] = None
#
- if ("BoxBounds" in self.__P) \
- and isinstance(self.__P["BoxBounds"], (list, tuple)) \
- and (len(self.__P["BoxBounds"]) != max(__Xb_shape)):
- raise ValueError("The number '%s' of bound pairs for the state box components is different from the size '%s' of the state (X) itself." \
- % (len(self.__P["BoxBounds"]), max(__Xb_shape)))
- if ("BoxBounds" in self.__P) \
- and isinstance(self.__P["BoxBounds"], (numpy.ndarray, numpy.matrix)) \
- and (self.__P["BoxBounds"].shape[0] != max(__Xb_shape)):
- raise ValueError("The number '%s' of bound pairs for the state box components is different from the size '%s' of the state (X) itself." \
- % (self.__P["BoxBounds"].shape[0], max(__Xb_shape)))
- #
- if ("StateBoundsForQuantiles" in self.__P) \
- and isinstance(self.__P["StateBoundsForQuantiles"], (list, tuple)) \
- and (len(self.__P["StateBoundsForQuantiles"]) != max(__Xb_shape)):
- raise ValueError("The number '%s' of bound pairs for the quantile state components is different from the size '%s' of the state (X) itself." \
- % (len(self.__P["StateBoundsForQuantiles"]), max(__Xb_shape)))
+ if (
+ ("BoxBounds" in self.__P)
+ and isinstance(self.__P["BoxBounds"], (list, tuple))
+ and (len(self.__P["BoxBounds"]) != max(__Xb_shape))
+ ):
+ raise ValueError(
+ "The number '%s' of bound pairs for the state box components"
+ % len(self.__P["BoxBounds"])
+ + " is different from the size '%s' of the state (X) itself."
+ % max(__Xb_shape)
+ )
+ if (
+ ("BoxBounds" in self.__P)
+ and isinstance(self.__P["BoxBounds"], (numpy.ndarray, numpy.matrix))
+ and (self.__P["BoxBounds"].shape[0] != max(__Xb_shape))
+ ):
+ raise ValueError(
+ "The number '%s' of bound pairs for the state box components"
+ % self.__P["BoxBounds"].shape[0]
+ + " is different from the size '%s' of the state (X) itself."
+ % max(__Xb_shape)
+ )
+ #
+ if (
+ ("StateBoundsForQuantiles" in self.__P)
+ and isinstance(self.__P["StateBoundsForQuantiles"], (list, tuple))
+ and (len(self.__P["StateBoundsForQuantiles"]) != max(__Xb_shape))
+ ):
+ raise ValueError(
+ "The number '%s' of bound pairs for the quantile state components"
+ % len(self.__P["StateBoundsForQuantiles"])
+ + " is different from the size '%s' of the state (X) itself."
+ % max(__Xb_shape)
+ )
#
return 1
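The rewrapped error messages in this hunk rely on `%` binding tighter than `+`, so each string fragment interpolates its value before the two pieces concatenate. A minimal sketch with made-up sizes (`n_pairs` and `n_state` are illustrative, not values from an ADAO case):

```python
# '%' has higher precedence than '+', so each fragment formats first,
# then the two resulting strings concatenate.
n_pairs, n_state = 3, 5  # hypothetical sizes, for illustration only
msg = (
    "The number '%s' of bound pairs for the state box components" % n_pairs
    + " is different from the size '%s' of the state (X) itself." % n_state
)
assert "'3'" in msg and "'5'" in msg
```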
+
# ==============================================================================
class RegulationAndParameters(object):
"""
    General action interface class for regulation and its parameters
"""
+
__slots__ = ("__name", "__P")
- def __init__(self,
- name = "GenericRegulation",
- asAlgorithm = None,
- asDict = None,
- asScript = None ):
- """
- """
- self.__name = str(name)
- self.__P = {}
+ def __init__(
+ self, name="GenericRegulation", asAlgorithm=None, asDict=None, asScript=None
+ ):
+ """ """
+ self.__name = str(name)
+ self.__P = {}
#
if asAlgorithm is None and asScript is not None:
- __Algo = Interfaces.ImportFromScript(asScript).getvalue( "Algorithm" )
+ __Algo = Interfaces.ImportFromScript(asScript).getvalue("Algorithm")
else:
__Algo = asAlgorithm
#
if asDict is None and asScript is not None:
- __Dict = Interfaces.ImportFromScript(asScript).getvalue( self.__name, "Parameters" )
+ __Dict = Interfaces.ImportFromScript(asScript).getvalue(
+ self.__name, "Parameters"
+ )
else:
__Dict = asDict
#
if __Dict is not None:
- self.__P.update( dict(__Dict) )
+ self.__P.update(dict(__Dict))
#
if __Algo is not None:
- self.__P.update( {"Algorithm": str(__Algo)} )
+ self.__P.update({"Algorithm": str(__Algo)})
- def get(self, key = None):
+ def get(self, key=None):
        "Check the existence of a variable or parameters key"
if key in self.__P:
return self.__P[key]
else:
return self.__P
+
# ==============================================================================
class DataObserver(object):
"""
    General observer-type interface class
"""
+
__slots__ = ("__name", "__V", "__O", "__I")
- def __init__(self,
- name = "GenericObserver",
- onVariable = None,
- asTemplate = None,
- asString = None,
- asScript = None,
- asObsObject = None,
- withInfo = None,
- crossObs = False,
- syncObs = True,
- scheduledBy = None,
- withAlgo = None ):
- """
- """
- self.__name = str(name)
- self.__V = None
- self.__O = None
- self.__I = None
+ def __init__(
+ self,
+ name="GenericObserver",
+ onVariable=None,
+ asTemplate=None,
+ asString=None,
+ asScript=None,
+ asObsObject=None,
+ withInfo=None,
+ crossObs=False,
+ syncObs=True,
+ scheduledBy=None,
+ withAlgo=None,
+ ):
+ """ """
+ self.__name = str(name)
+ self.__V = None
+ self.__O = None
+ self.__I = None
#
if onVariable is None:
- raise ValueError("setting an observer has to be done over a variable name or a list of variable names, not over None.")
+ raise ValueError(
+ "setting an observer has to be done over a variable name or a list of variable names, not over None."
+ )
elif isinstance(onVariable, (tuple, list)):
- self.__V = tuple(map( str, onVariable ))
+ self.__V = tuple(map(str, onVariable))
if withInfo is None:
self.__I = self.__V
            elif crossObs or isinstance(withInfo, (tuple, list)):
                self.__I = tuple(map(str, withInfo))
else:
self.__I = (str(withInfo),)
else:
- raise ValueError("setting an observer has to be done over a variable name or a list of variable names.")
+ raise ValueError(
+ "setting an observer has to be done over a variable name or a list of variable names."
+ )
#
if asObsObject is not None:
self.__O = asObsObject
else:
- __FunctionText = str(UserScript('Observer', asTemplate, asString, asScript))
+ __FunctionText = str(UserScript("Observer", asTemplate, asString, asScript))
__Function = Observer2Func(__FunctionText)
self.__O = __Function.getfunc()
#
for k in range(len(self.__V)):
if self.__V[k] not in withAlgo:
- raise ValueError("An observer is asked to be set on a variable named %s which does not exist."%self.__V[k])
+ raise ValueError(
+ "An observer is asked to be set on a variable named %s which does not exist."
+ % self.__V[k]
+ )
#
if bool(crossObs):
- withAlgo.setCrossObserver(self.__V, self.__O, self.__I, syncObs, scheduledBy)
+ withAlgo.setCrossObserver(
+ self.__V, self.__O, self.__I, syncObs, scheduledBy
+ )
else:
for k in range(len(self.__V)):
- withAlgo.setObserver(self.__V[k], self.__O, self.__I[k], syncObs, scheduledBy)
+ withAlgo.setObserver(
+ self.__V[k], self.__O, self.__I[k], syncObs, scheduledBy
+ )
def __repr__(self):
"x.__repr__() <==> repr(x)"
"x.__str__() <==> str(x)"
return str(self.__V) + "\n" + str(self.__O)
+
# ==============================================================================
class UserScript(object):
"""
    General interface class for user script text
"""
+
__slots__ = ("__name", "__F")
- def __init__(self,
- name = "GenericUserScript",
- asTemplate = None,
- asString = None,
- asScript = None ):
- """
- """
- self.__name = str(name)
+ def __init__(
+ self, name="GenericUserScript", asTemplate=None, asString=None, asScript=None
+ ):
+ """ """
+ self.__name = str(name)
#
if asString is not None:
self.__F = asString
- elif self.__name == "UserPostAnalysis" and (asTemplate is not None) and (asTemplate in Templates.UserPostAnalysisTemplates):
+ elif (
+ self.__name == "UserPostAnalysis"
+ and (asTemplate is not None)
+ and (asTemplate in Templates.UserPostAnalysisTemplates)
+ ):
self.__F = Templates.UserPostAnalysisTemplates[asTemplate]
- elif self.__name == "Observer" and (asTemplate is not None) and (asTemplate in Templates.ObserverTemplates):
+ elif (
+ self.__name == "Observer"
+ and (asTemplate is not None)
+ and (asTemplate in Templates.ObserverTemplates)
+ ):
self.__F = Templates.ObserverTemplates[asTemplate]
elif asScript is not None:
self.__F = Interfaces.ImportFromScript(asScript).getstring()
"x.__str__() <==> str(x)"
return str(self.__F)
+
# ==============================================================================
class ExternalParameters(object):
"""
    General interface class for storing external parameters
"""
+
__slots__ = ("__name", "__P")
- def __init__(self,
- name = "GenericExternalParameters",
- asDict = None,
- asScript = None ):
- """
- """
+ def __init__(self, name="GenericExternalParameters", asDict=None, asScript=None):
+ """ """
self.__name = str(name)
- self.__P = {}
+ self.__P = {}
#
- self.updateParameters( asDict, asScript )
+ self.updateParameters(asDict, asScript)
- def updateParameters(self, asDict = None, asScript = None ):
+ def updateParameters(self, asDict=None, asScript=None):
        "Update the parameters"
if asDict is None and asScript is not None:
- __Dict = Interfaces.ImportFromScript(asScript).getvalue( self.__name, "ExternalParameters" )
+ __Dict = Interfaces.ImportFromScript(asScript).getvalue(
+ self.__name, "ExternalParameters"
+ )
else:
__Dict = asDict
#
if __Dict is not None:
- self.__P.update( dict(__Dict) )
+ self.__P.update(dict(__Dict))
- def get(self, key = None):
+ def get(self, key=None):
if key in self.__P:
return self.__P[key]
else:
"D.__contains__(k) -> True if D has a key k, else False"
return key in self.__P
+
# ==============================================================================
class State(object):
"""
    General state-type interface class
"""
+
__slots__ = (
- "__name", "__check", "__V", "__T", "__is_vector", "__is_series",
- "shape", "size",
+ "__name",
+ "__check",
+ "__V",
+ "__T",
+ "__is_vector",
+ "__is_series",
+ "shape",
+ "size",
)
- def __init__(self,
- name = "GenericVector",
- asVector = None,
- asPersistentVector = None,
- asScript = None,
- asDataFile = None,
- colNames = None,
- colMajor = False,
- scheduledBy = None,
- toBeChecked = False ):
+ def __init__(
+ self,
+ name="GenericVector",
+ asVector=None,
+ asPersistentVector=None,
+ asScript=None,
+ asDataFile=None,
+ colNames=None,
+ colMajor=False,
+ scheduledBy=None,
+ toBeChecked=False,
+ ):
"""
        Allows a vector to be defined:
        - asVector : data input, as a vector compatible with the
        default) or "asPersistentVector" depending on whether one of these
        variables is set to "True".
"""
- self.__name = str(name)
- self.__check = bool(toBeChecked)
+ self.__name = str(name)
+ self.__check = bool(toBeChecked)
#
- self.__V = None
- self.__T = None
- self.__is_vector = False
- self.__is_series = False
+ self.__V = None
+ self.__T = None
+ self.__is_vector = False
+ self.__is_series = False
#
if asScript is not None:
__Vector, __Series = None, None
if asPersistentVector:
- __Series = Interfaces.ImportFromScript(asScript).getvalue( self.__name )
+ __Series = Interfaces.ImportFromScript(asScript).getvalue(self.__name)
else:
- __Vector = Interfaces.ImportFromScript(asScript).getvalue( self.__name )
+ __Vector = Interfaces.ImportFromScript(asScript).getvalue(self.__name)
elif asDataFile is not None:
__Vector, __Series = None, None
if asPersistentVector:
if colNames is not None:
- __Series = Interfaces.ImportFromFile(asDataFile).getvalue( colNames )[1]
+ __Series = Interfaces.ImportFromFile(asDataFile).getvalue(colNames)[
+ 1
+ ]
else:
- __Series = Interfaces.ImportFromFile(asDataFile).getvalue( [self.__name,] )[1]
- if bool(colMajor) and not Interfaces.ImportFromFile(asDataFile).getformat() == "application/numpy.npz":
+ __Series = Interfaces.ImportFromFile(asDataFile).getvalue(
+ [
+ self.__name,
+ ]
+ )[1]
+ if (
+ bool(colMajor)
+ and not Interfaces.ImportFromFile(asDataFile).getformat()
+ == "application/numpy.npz"
+ ):
__Series = numpy.transpose(__Series)
- elif not bool(colMajor) and Interfaces.ImportFromFile(asDataFile).getformat() == "application/numpy.npz":
+ elif (
+ not bool(colMajor)
+ and Interfaces.ImportFromFile(asDataFile).getformat()
+ == "application/numpy.npz"
+ ):
__Series = numpy.transpose(__Series)
else:
if colNames is not None:
- __Vector = Interfaces.ImportFromFile(asDataFile).getvalue( colNames )[1]
+ __Vector = Interfaces.ImportFromFile(asDataFile).getvalue(colNames)[
+ 1
+ ]
else:
- __Vector = Interfaces.ImportFromFile(asDataFile).getvalue( [self.__name,] )[1]
+ __Vector = Interfaces.ImportFromFile(asDataFile).getvalue(
+ [
+ self.__name,
+ ]
+ )[1]
if bool(colMajor):
- __Vector = numpy.ravel(__Vector, order = "F")
+ __Vector = numpy.ravel(__Vector, order="F")
else:
- __Vector = numpy.ravel(__Vector, order = "C")
+ __Vector = numpy.ravel(__Vector, order="C")
else:
__Vector, __Series = asVector, asPersistentVector
#
if __Vector is not None:
self.__is_vector = True
if isinstance(__Vector, str):
- __Vector = PlatformInfo.strvect2liststr( __Vector )
- self.__V = numpy.ravel(numpy.asarray( __Vector, dtype=float )).reshape((-1, 1))
- self.shape = self.__V.shape
- self.size = self.__V.size
+ __Vector = PlatformInfo.strvect2liststr(__Vector)
+ self.__V = numpy.ravel(numpy.asarray(__Vector, dtype=float)).reshape(
+ (-1, 1)
+ )
+ self.shape = self.__V.shape
+ self.size = self.__V.size
elif __Series is not None:
- self.__is_series = True
+ self.__is_series = True
if isinstance(__Series, (tuple, list, numpy.ndarray, numpy.matrix, str)):
self.__V = Persistence.OneVector(self.__name)
if isinstance(__Series, str):
__Series = PlatformInfo.strmatrix2liststr(__Series)
for member in __Series:
if isinstance(member, str):
- member = PlatformInfo.strvect2liststr( member )
- self.__V.store(numpy.asarray( member, dtype=float ))
+ member = PlatformInfo.strvect2liststr(member)
+ self.__V.store(numpy.asarray(member, dtype=float))
else:
self.__V = __Series
if isinstance(self.__V.shape, (tuple, list)):
- self.shape = self.__V.shape
+ self.shape = self.__V.shape
else:
- self.shape = self.__V.shape()
+ self.shape = self.__V.shape()
if len(self.shape) == 1:
- self.shape = (self.shape[0], 1)
- self.size = self.shape[0] * self.shape[1]
+ self.shape = (self.shape[0], 1)
+ self.size = self.shape[0] * self.shape[1]
else:
raise ValueError(
- "The %s object is improperly defined or undefined,"%self.__name + \
- " it requires at minima either a vector, a list/tuple of" + \
- " vectors or a persistent object. Please check your vector input.")
+ "The %s object is improperly defined or undefined," % self.__name
+ + " it requires at minima either a vector, a list/tuple of"
+ + " vectors or a persistent object. Please check your vector input."
+ )
#
if scheduledBy is not None:
self.__T = scheduledBy
"x.__str__() <==> str(x)"
return str(self.__V)
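The `colMajor` handling in `State.__init__` above switches between `order="F"` and `order="C"` when flattening file data. A small sketch of what the two ravel orders do on a 2x2 array:

```python
import numpy

a = numpy.array([[1, 2], [3, 4]])
# order="C" walks rows first (row-major), order="F" walks columns first
row_major = numpy.ravel(a, order="C")
col_major = numpy.ravel(a, order="F")
assert row_major.tolist() == [1, 2, 3, 4]
assert col_major.tolist() == [1, 3, 2, 4]
```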
+
# ==============================================================================
class Covariance(object):
"""
    General covariance-type interface class
"""
+
__slots__ = (
- "__name", "__check", "__C", "__is_scalar", "__is_vector", "__is_matrix",
- "__is_object", "shape", "size",
+ "__name",
+ "__check",
+ "__C",
+ "__is_scalar",
+ "__is_vector",
+ "__is_matrix",
+ "__is_object",
+ "shape",
+ "size",
)
- def __init__(self,
- name = "GenericCovariance",
- asCovariance = None,
- asEyeByScalar = None,
- asEyeByVector = None,
- asCovObject = None,
- asScript = None,
- toBeChecked = False ):
+ def __init__(
+ self,
+ name="GenericCovariance",
+ asCovariance=None,
+ asEyeByScalar=None,
+ asEyeByVector=None,
+ asCovObject=None,
+ asScript=None,
+ toBeChecked=False,
+ ):
"""
        Allows a covariance to be defined:
        - asCovariance : data input, as a matrix compatible with
        - toBeChecked : boolean indicating whether the SPD character of the
          full matrix must be checked
"""
- self.__name = str(name)
- self.__check = bool(toBeChecked)
+ self.__name = str(name)
+ self.__check = bool(toBeChecked)
#
- self.__C = None
- self.__is_scalar = False
- self.__is_vector = False
- self.__is_matrix = False
- self.__is_object = False
+ self.__C = None
+ self.__is_scalar = False
+ self.__is_vector = False
+ self.__is_matrix = False
+ self.__is_object = False
#
if asScript is not None:
__Matrix, __Scalar, __Vector, __Object = None, None, None, None
if asEyeByScalar:
- __Scalar = Interfaces.ImportFromScript(asScript).getvalue( self.__name )
+ __Scalar = Interfaces.ImportFromScript(asScript).getvalue(self.__name)
elif asEyeByVector:
- __Vector = Interfaces.ImportFromScript(asScript).getvalue( self.__name )
+ __Vector = Interfaces.ImportFromScript(asScript).getvalue(self.__name)
elif asCovObject:
- __Object = Interfaces.ImportFromScript(asScript).getvalue( self.__name )
+ __Object = Interfaces.ImportFromScript(asScript).getvalue(self.__name)
else:
- __Matrix = Interfaces.ImportFromScript(asScript).getvalue( self.__name )
+ __Matrix = Interfaces.ImportFromScript(asScript).getvalue(self.__name)
else:
- __Matrix, __Scalar, __Vector, __Object = asCovariance, asEyeByScalar, asEyeByVector, asCovObject
+ __Matrix, __Scalar, __Vector, __Object = (
+ asCovariance,
+ asEyeByScalar,
+ asEyeByVector,
+ asCovObject,
+ )
#
if __Scalar is not None:
if isinstance(__Scalar, str):
- __Scalar = PlatformInfo.strvect2liststr( __Scalar )
+ __Scalar = PlatformInfo.strvect2liststr(__Scalar)
if len(__Scalar) > 0:
__Scalar = __Scalar[0]
if numpy.array(__Scalar).size != 1:
raise ValueError(
- " The diagonal multiplier given to define a sparse matrix is" + \
- " not a unique scalar value.\n Its actual measured size is" + \
- " %i. Please check your scalar input."%numpy.array(__Scalar).size)
+ " The diagonal multiplier given to define a sparse matrix is"
+ + " not a unique scalar value.\n Its actual measured size is"
+ + " %i. Please check your scalar input."
+ % numpy.array(__Scalar).size
+ )
self.__is_scalar = True
- self.__C = numpy.abs( float(__Scalar) )
- self.shape = (0, 0)
- self.size = 0
+ self.__C = numpy.abs(float(__Scalar))
+ self.shape = (0, 0)
+ self.size = 0
elif __Vector is not None:
if isinstance(__Vector, str):
- __Vector = PlatformInfo.strvect2liststr( __Vector )
+ __Vector = PlatformInfo.strvect2liststr(__Vector)
self.__is_vector = True
- self.__C = numpy.abs( numpy.ravel(numpy.asarray( __Vector, dtype=float )) )
- self.shape = (self.__C.size, self.__C.size)
- self.size = self.__C.size**2
+ self.__C = numpy.abs(numpy.ravel(numpy.asarray(__Vector, dtype=float)))
+ self.shape = (self.__C.size, self.__C.size)
+ self.size = self.__C.size**2
elif __Matrix is not None:
self.__is_matrix = True
- self.__C = numpy.matrix( __Matrix, float )
- self.shape = self.__C.shape
- self.size = self.__C.size
+ self.__C = numpy.matrix(__Matrix, float)
+ self.shape = self.__C.shape
+ self.size = self.__C.size
elif __Object is not None:
self.__is_object = True
- self.__C = __Object
- for at in ("getT", "getI", "diag", "trace", "__add__", "__sub__", "__neg__", "__matmul__", "__mul__", "__rmatmul__", "__rmul__"):
+ self.__C = __Object
+ for at in (
+ "getT",
+ "getI",
+ "diag",
+ "trace",
+ "__add__",
+ "__sub__",
+ "__neg__",
+ "__matmul__",
+ "__mul__",
+ "__rmatmul__",
+ "__rmul__",
+ ):
if not hasattr(self.__C, at):
- raise ValueError("The matrix given for %s as an object has no attribute \"%s\". Please check your object input."%(self.__name, at))
+ raise ValueError(
+ 'The matrix given for %s as an object has no attribute "%s". Please check your object input.'
+ % (self.__name, at)
+ )
if hasattr(self.__C, "shape"):
- self.shape = self.__C.shape
+ self.shape = self.__C.shape
else:
- self.shape = (0, 0)
+ self.shape = (0, 0)
if hasattr(self.__C, "size"):
- self.size = self.__C.size
+ self.size = self.__C.size
else:
- self.size = 0
+ self.size = 0
else:
pass
#
def __validate(self):
"Validation"
if self.__C is None:
- raise UnboundLocalError("%s covariance matrix value has not been set!"%(self.__name,))
+ raise UnboundLocalError(
+ "%s covariance matrix value has not been set!" % (self.__name,)
+ )
if self.ismatrix() and min(self.shape) != max(self.shape):
- raise ValueError("The given matrix for %s is not a square one, its shape is %s. Please check your matrix input."%(self.__name, self.shape))
+ raise ValueError(
+ "The given matrix for %s is not a square one, its shape is %s. Please check your matrix input."
+ % (self.__name, self.shape)
+ )
if self.isobject() and min(self.shape) != max(self.shape):
- raise ValueError("The matrix given for \"%s\" is not a square one, its shape is %s. Please check your object input."%(self.__name, self.shape))
+ raise ValueError(
+ 'The matrix given for "%s" is not a square one, its shape is %s. Please check your object input.'
+ % (self.__name, self.shape)
+ )
if self.isscalar() and self.__C <= 0:
- raise ValueError("The \"%s\" covariance matrix is not positive-definite. Please check your scalar input %s."%(self.__name, self.__C))
+ raise ValueError(
+ 'The "%s" covariance matrix is not positive-definite. Please check your scalar input %s.'
+ % (self.__name, self.__C)
+ )
if self.isvector() and (self.__C <= 0).any():
- raise ValueError("The \"%s\" covariance matrix is not positive-definite. Please check your vector input."%(self.__name,))
- if self.ismatrix() and (self.__check or logging.getLogger().level < logging.WARNING):
+ raise ValueError(
+ 'The "%s" covariance matrix is not positive-definite. Please check your vector input.'
+ % (self.__name,)
+ )
+ if self.ismatrix() and (
+ self.__check or logging.getLogger().level < logging.WARNING
+ ):
try:
- numpy.linalg.cholesky( self.__C )
+ numpy.linalg.cholesky(self.__C)
except Exception:
- raise ValueError("The %s covariance matrix is not symmetric positive-definite. Please check your matrix input."%(self.__name,))
- if self.isobject() and (self.__check or logging.getLogger().level < logging.WARNING):
+ raise ValueError(
+ "The %s covariance matrix is not symmetric positive-definite. Please check your matrix input."
+ % (self.__name,)
+ )
+ if self.isobject() and (
+ self.__check or logging.getLogger().level < logging.WARNING
+ ):
try:
self.__C.cholesky()
except Exception:
- raise ValueError("The %s covariance object is not symmetric positive-definite. Please check your matrix input."%(self.__name,))
+ raise ValueError(
+ "The %s covariance object is not symmetric positive-definite. Please check your matrix input."
+ % (self.__name,)
+ )
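`__validate` above treats a successful Cholesky factorization as the test for symmetric positive-definiteness. A standalone sketch of the same idea (the `is_spd` helper is illustrative, not part of ADAO):

```python
import numpy

def is_spd(m):
    """Return True if m is symmetric positive-definite, using Cholesky."""
    try:
        numpy.linalg.cholesky(m)  # succeeds only for SPD matrices
        return True
    except numpy.linalg.LinAlgError:
        return False

assert is_spd(numpy.array([[2.0, 0.5], [0.5, 1.0]]))
assert not is_spd(numpy.array([[1.0, 2.0], [2.0, 1.0]]))  # symmetric but indefinite
```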
def isscalar(self):
        "Check the internal type"
def getI(self):
"Inversion"
if self.ismatrix():
- return Covariance(self.__name + "I", asCovariance = numpy.linalg.inv(self.__C) )
+ return Covariance(
+ self.__name + "I", asCovariance=numpy.linalg.inv(self.__C)
+ )
elif self.isvector():
- return Covariance(self.__name + "I", asEyeByVector = 1. / self.__C )
+ return Covariance(self.__name + "I", asEyeByVector=1.0 / self.__C)
elif self.isscalar():
- return Covariance(self.__name + "I", asEyeByScalar = 1. / self.__C )
+ return Covariance(self.__name + "I", asEyeByScalar=1.0 / self.__C)
elif self.isobject() and hasattr(self.__C, "getI"):
- return Covariance(self.__name + "I", asCovObject = self.__C.getI() )
+ return Covariance(self.__name + "I", asCovObject=self.__C.getI())
else:
            return None # Required
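For a diagonal (vector) covariance, `getI` above inverts elementwise with `1. / self.__C`; this is valid because the inverse of a diagonal matrix is the diagonal of reciprocals. A sketch with illustrative values:

```python
import numpy

d = numpy.array([4.0, 0.25, 9.0])   # diagonal of a covariance (illustrative)
full = numpy.diag(d)
inv_from_diag = numpy.diag(1.0 / d)  # elementwise inverse of the diagonal
assert numpy.allclose(inv_from_diag, numpy.linalg.inv(full))
```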
def getT(self):
"Transposition"
if self.ismatrix():
- return Covariance(self.__name + "T", asCovariance = self.__C.T )
+ return Covariance(self.__name + "T", asCovariance=self.__C.T)
elif self.isvector():
- return Covariance(self.__name + "T", asEyeByVector = self.__C )
+ return Covariance(self.__name + "T", asEyeByVector=self.__C)
elif self.isscalar():
- return Covariance(self.__name + "T", asEyeByScalar = self.__C )
+ return Covariance(self.__name + "T", asEyeByScalar=self.__C)
elif self.isobject() and hasattr(self.__C, "getT"):
- return Covariance(self.__name + "T", asCovObject = self.__C.getT() )
+ return Covariance(self.__name + "T", asCovObject=self.__C.getT())
else:
- raise AttributeError("the %s covariance matrix has no getT attribute."%(self.__name,))
+ raise AttributeError(
+ "the %s covariance matrix has no getT attribute." % (self.__name,)
+ )
def cholesky(self):
        "Cholesky decomposition"
if self.ismatrix():
- return Covariance(self.__name + "C", asCovariance = numpy.linalg.cholesky(self.__C) )
+ return Covariance(
+ self.__name + "C", asCovariance=numpy.linalg.cholesky(self.__C)
+ )
elif self.isvector():
- return Covariance(self.__name + "C", asEyeByVector = numpy.sqrt( self.__C ) )
+ return Covariance(self.__name + "C", asEyeByVector=numpy.sqrt(self.__C))
elif self.isscalar():
- return Covariance(self.__name + "C", asEyeByScalar = numpy.sqrt( self.__C ) )
+ return Covariance(self.__name + "C", asEyeByScalar=numpy.sqrt(self.__C))
elif self.isobject() and hasattr(self.__C, "cholesky"):
- return Covariance(self.__name + "C", asCovObject = self.__C.cholesky() )
+ return Covariance(self.__name + "C", asCovObject=self.__C.cholesky())
else:
- raise AttributeError("the %s covariance matrix has no cholesky attribute."%(self.__name,))
+ raise AttributeError(
+ "the %s covariance matrix has no cholesky attribute." % (self.__name,)
+ )
def choleskyI(self):
        "Inverse of the Cholesky decomposition"
if self.ismatrix():
- return Covariance(self.__name + "H", asCovariance = numpy.linalg.inv(numpy.linalg.cholesky(self.__C)) )
+ return Covariance(
+ self.__name + "H",
+ asCovariance=numpy.linalg.inv(numpy.linalg.cholesky(self.__C)),
+ )
elif self.isvector():
- return Covariance(self.__name + "H", asEyeByVector = 1.0 / numpy.sqrt( self.__C ) )
+ return Covariance(
+ self.__name + "H", asEyeByVector=1.0 / numpy.sqrt(self.__C)
+ )
elif self.isscalar():
- return Covariance(self.__name + "H", asEyeByScalar = 1.0 / numpy.sqrt( self.__C ) )
+ return Covariance(
+ self.__name + "H", asEyeByScalar=1.0 / numpy.sqrt(self.__C)
+ )
elif self.isobject() and hasattr(self.__C, "choleskyI"):
- return Covariance(self.__name + "H", asCovObject = self.__C.choleskyI() )
+ return Covariance(self.__name + "H", asCovObject=self.__C.choleskyI())
else:
- raise AttributeError("the %s covariance matrix has no choleskyI attribute."%(self.__name,))
+ raise AttributeError(
+ "the %s covariance matrix has no choleskyI attribute." % (self.__name,)
+ )
def sqrtm(self):
        "Matrix square root"
if self.ismatrix():
import scipy
- return Covariance(self.__name + "C", asCovariance = numpy.real(scipy.linalg.sqrtm(self.__C)) )
+
+ return Covariance(
+ self.__name + "C", asCovariance=numpy.real(scipy.linalg.sqrtm(self.__C))
+ )
elif self.isvector():
- return Covariance(self.__name + "C", asEyeByVector = numpy.sqrt( self.__C ) )
+ return Covariance(self.__name + "C", asEyeByVector=numpy.sqrt(self.__C))
elif self.isscalar():
- return Covariance(self.__name + "C", asEyeByScalar = numpy.sqrt( self.__C ) )
+ return Covariance(self.__name + "C", asEyeByScalar=numpy.sqrt(self.__C))
elif self.isobject() and hasattr(self.__C, "sqrtm"):
- return Covariance(self.__name + "C", asCovObject = self.__C.sqrtm() )
+ return Covariance(self.__name + "C", asCovObject=self.__C.sqrtm())
else:
- raise AttributeError("the %s covariance matrix has no sqrtm attribute."%(self.__name,))
+ raise AttributeError(
+ "the %s covariance matrix has no sqrtm attribute." % (self.__name,)
+ )
def sqrtmI(self):
        "Inverse of the matrix square root"
if self.ismatrix():
import scipy
- return Covariance(self.__name + "H", asCovariance = numpy.linalg.inv(numpy.real(scipy.linalg.sqrtm(self.__C))) )
+
+ return Covariance(
+ self.__name + "H",
+ asCovariance=numpy.linalg.inv(numpy.real(scipy.linalg.sqrtm(self.__C))),
+ )
elif self.isvector():
- return Covariance(self.__name + "H", asEyeByVector = 1.0 / numpy.sqrt( self.__C ) )
+ return Covariance(
+ self.__name + "H", asEyeByVector=1.0 / numpy.sqrt(self.__C)
+ )
elif self.isscalar():
- return Covariance(self.__name + "H", asEyeByScalar = 1.0 / numpy.sqrt( self.__C ) )
+ return Covariance(
+ self.__name + "H", asEyeByScalar=1.0 / numpy.sqrt(self.__C)
+ )
elif self.isobject() and hasattr(self.__C, "sqrtmI"):
- return Covariance(self.__name + "H", asCovObject = self.__C.sqrtmI() )
+ return Covariance(self.__name + "H", asCovObject=self.__C.sqrtmI())
else:
- raise AttributeError("the %s covariance matrix has no sqrtmI attribute."%(self.__name,))
+ raise AttributeError(
+ "the %s covariance matrix has no sqrtmI attribute." % (self.__name,)
+ )
def diag(self, msize=None):
        "Diagonal of the matrix"
return self.__C
elif self.isscalar():
if msize is None:
- raise ValueError("the size of the %s covariance matrix has to be given in case of definition as a scalar over the diagonal."%(self.__name,))
+ raise ValueError(
+ "the size of the %s covariance matrix has to be given in"
+ % (self.__name,)
+ + " case of definition as a scalar over the diagonal."
+ )
else:
return self.__C * numpy.ones(int(msize))
elif self.isobject() and hasattr(self.__C, "diag"):
return self.__C.diag()
else:
- raise AttributeError("the %s covariance matrix has no diag attribute."%(self.__name,))
+ raise AttributeError(
+ "the %s covariance matrix has no diag attribute." % (self.__name,)
+ )
def trace(self, msize=None):
        "Trace of the matrix"
return float(numpy.sum(self.__C))
elif self.isscalar():
if msize is None:
- raise ValueError("the size of the %s covariance matrix has to be given in case of definition as a scalar over the diagonal."%(self.__name,))
+ raise ValueError(
+ "the size of the %s covariance matrix has to be given in"
+ % (self.__name,)
+ + " case of definition as a scalar over the diagonal."
+ )
else:
return self.__C * int(msize)
elif self.isobject():
return self.__C.trace()
else:
- raise AttributeError("the %s covariance matrix has no trace attribute."%(self.__name,))
+ raise AttributeError(
+ "the %s covariance matrix has no trace attribute." % (self.__name,)
+ )
def asfullmatrix(self, msize=None):
        "Full matrix"
if self.ismatrix():
return numpy.asarray(self.__C, dtype=float)
elif self.isvector():
- return numpy.asarray( numpy.diag(self.__C), dtype=float )
+ return numpy.asarray(numpy.diag(self.__C), dtype=float)
elif self.isscalar():
if msize is None:
- raise ValueError("the size of the %s covariance matrix has to be given in case of definition as a scalar over the diagonal."%(self.__name,))
+ raise ValueError(
+ "the size of the %s covariance matrix has to be given in"
+ % (self.__name,)
+ + " case of definition as a scalar over the diagonal."
+ )
else:
- return numpy.asarray( self.__C * numpy.eye(int(msize)), dtype=float )
+ return numpy.asarray(self.__C * numpy.eye(int(msize)), dtype=float)
elif self.isobject() and hasattr(self.__C, "asfullmatrix"):
return self.__C.asfullmatrix()
else:
- raise AttributeError("the %s covariance matrix has no asfullmatrix attribute."%(self.__name,))
+ raise AttributeError(
+ "the %s covariance matrix has no asfullmatrix attribute."
+ % (self.__name,)
+ )
def assparsematrix(self):
        "Sparse value"
if len(_A.shape) == 1:
_A.reshape((-1, 1))[::2] += self.__C
else:
- _A.reshape(_A.size)[::_A.shape[1] + 1] += self.__C
+ _A.reshape(_A.size)[:: _A.shape[1] + 1] += self.__C
return numpy.asmatrix(_A)
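The strided in-place update above, `_A.reshape(_A.size)[:: _A.shape[1] + 1] += self.__C`, touches exactly the diagonal of a square array: in the flat view, consecutive diagonal entries are `ncols + 1` positions apart. A sketch:

```python
import numpy

a = numpy.zeros((3, 3))
c = 5.0
# a step of ncols + 1 over the flat view hits exactly the diagonal entries
a.reshape(a.size)[:: a.shape[1] + 1] += c
assert a.tolist() == [[5.0, 0.0, 0.0], [0.0, 5.0, 0.0], [0.0, 0.0, 5.0]]
```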
def __radd__(self, other):
"x.__radd__(y) <==> y+x"
- raise NotImplementedError("%s covariance matrix __radd__ method not available for %s type!"%(self.__name, type(other)))
+ raise NotImplementedError(
+ "%s covariance matrix __radd__ method not available for %s type!"
+ % (self.__name, type(other))
+ )
def __sub__(self, other):
"x.__sub__(y) <==> x-y"
return self.__C - numpy.asmatrix(other)
elif self.isvector() or self.isscalar():
_A = numpy.asarray(other)
- _A.reshape(_A.size)[::_A.shape[1] + 1] = self.__C - _A.reshape(_A.size)[::_A.shape[1] + 1]
+ _A.reshape(_A.size)[:: _A.shape[1] + 1] = (
+ self.__C - _A.reshape(_A.size)[:: _A.shape[1] + 1]
+ )
return numpy.asmatrix(_A)
def __rsub__(self, other):
"x.__rsub__(y) <==> y-x"
- raise NotImplementedError("%s covariance matrix __rsub__ method not available for %s type!"%(self.__name, type(other)))
+ raise NotImplementedError(
+ "%s covariance matrix __rsub__ method not available for %s type!"
+ % (self.__name, type(other))
+ )
def __neg__(self):
"x.__neg__() <==> -x"
- return - self.__C
+ return -self.__C
def __matmul__(self, other):
"x.__mul__(y) <==> x@y"
if self.ismatrix() and isinstance(other, (int, float)):
return numpy.asarray(self.__C) * other
- elif self.ismatrix() and isinstance(other, (list, numpy.matrix, numpy.ndarray, tuple)):
+ elif self.ismatrix() and isinstance(
+ other, (list, numpy.matrix, numpy.ndarray, tuple)
+ ):
            if numpy.ravel(other).size == self.shape[1]: # Vector
return numpy.ravel(self.__C @ numpy.ravel(other))
            elif numpy.asarray(other).shape[0] == self.shape[1]: # Matrix
return numpy.asarray(self.__C) @ numpy.asarray(other)
else:
- raise ValueError("operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape, numpy.asarray(other).shape, self.__name))
- elif self.isvector() and isinstance(other, (list, numpy.matrix, numpy.ndarray, tuple)):
+ raise ValueError(
+ "operands could not be broadcast together with shapes %s %s in %s matrix"
+ % (self.shape, numpy.asarray(other).shape, self.__name)
+ )
+ elif self.isvector() and isinstance(
+ other, (list, numpy.matrix, numpy.ndarray, tuple)
+ ):
            if numpy.ravel(other).size == self.shape[1]: # Vector
return numpy.ravel(self.__C) * numpy.ravel(other)
            elif numpy.asarray(other).shape[0] == self.shape[1]: # Matrix
return numpy.ravel(self.__C).reshape((-1, 1)) * numpy.asarray(other)
else:
- raise ValueError("operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape, numpy.ravel(other).shape, self.__name))
+ raise ValueError(
+ "operands could not be broadcast together with shapes %s %s in %s matrix"
+ % (self.shape, numpy.ravel(other).shape, self.__name)
+ )
elif self.isscalar() and isinstance(other, numpy.matrix):
return numpy.asarray(self.__C * other)
elif self.isscalar() and isinstance(other, (list, numpy.ndarray, tuple)):
- if len(numpy.asarray(other).shape) == 1 or numpy.asarray(other).shape[1] == 1 or numpy.asarray(other).shape[0] == 1:
+ if (
+ len(numpy.asarray(other).shape) == 1
+ or numpy.asarray(other).shape[1] == 1
+ or numpy.asarray(other).shape[0] == 1
+ ):
return self.__C * numpy.ravel(other)
else:
return self.__C * numpy.asarray(other)
elif self.isobject():
return self.__C.__matmul__(other)
else:
- raise NotImplementedError("%s covariance matrix __matmul__ method not available for %s type!"%(self.__name, type(other)))
+ raise NotImplementedError(
+ "%s covariance matrix __matmul__ method not available for %s type!"
+ % (self.__name, type(other))
+ )
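The vector branch of `__matmul__` above returns `numpy.ravel(self.__C) * numpy.ravel(other)`, exploiting that applying a diagonal matrix to a vector is just an elementwise product. A sketch with illustrative values:

```python
import numpy

d = numpy.array([2.0, 3.0, 4.0])  # diagonal of a covariance (illustrative)
x = numpy.array([1.0, 1.0, 2.0])
# a diagonal matrix times a vector equals the elementwise product
assert numpy.allclose(numpy.diag(d) @ x, d * x)
```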
def __mul__(self, other):
"x.__mul__(y) <==> x*y"
return self.__C * numpy.asmatrix(other)
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape, numpy.asmatrix(other).shape, self.__name))
- elif self.isvector() and isinstance(other, (list, numpy.matrix, numpy.ndarray, tuple)):
+ "operands could not be broadcast together with shapes %s %s in %s matrix"
+ % (self.shape, numpy.asmatrix(other).shape, self.__name)
+ )
+ elif self.isvector() and isinstance(
+ other, (list, numpy.matrix, numpy.ndarray, tuple)
+ ):
if numpy.ravel(other).size == self.shape[1]: # Vecteur
return numpy.asmatrix(self.__C * numpy.ravel(other)).T
elif numpy.asmatrix(other).shape[0] == self.shape[1]: # Matrice
- return numpy.asmatrix((self.__C * (numpy.asarray(other).transpose())).transpose())
+ return numpy.asmatrix(
+ (self.__C * (numpy.asarray(other).transpose())).transpose()
+ )
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(self.shape, numpy.ravel(other).shape, self.__name))
+ "operands could not be broadcast together with shapes %s %s in %s matrix"
+ % (self.shape, numpy.ravel(other).shape, self.__name)
+ )
elif self.isscalar() and isinstance(other, numpy.matrix):
return self.__C * other
elif self.isscalar() and isinstance(other, (list, numpy.ndarray, tuple)):
- if len(numpy.asarray(other).shape) == 1 or numpy.asarray(other).shape[1] == 1 or numpy.asarray(other).shape[0] == 1:
+ if (
+ len(numpy.asarray(other).shape) == 1
+ or numpy.asarray(other).shape[1] == 1
+ or numpy.asarray(other).shape[0] == 1
+ ):
return self.__C * numpy.asmatrix(numpy.ravel(other)).T
else:
return self.__C * numpy.asmatrix(other)
return self.__C.__mul__(other)
else:
raise NotImplementedError(
- "%s covariance matrix __mul__ method not available for %s type!"%(self.__name, type(other)))
+ "%s covariance matrix __mul__ method not available for %s type!"
+ % (self.__name, type(other))
+ )
def __rmatmul__(self, other):
"x.__rmul__(y) <==> y@x"
return numpy.asmatrix(other) * self.__C
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.asmatrix(other).shape, self.shape, self.__name))
+ "operands could not be broadcast together with shapes %s %s in %s matrix"
+ % (numpy.asmatrix(other).shape, self.shape, self.__name)
+ )
elif self.isvector() and isinstance(other, numpy.matrix):
if numpy.ravel(other).size == self.shape[0]: # Vecteur
return numpy.asmatrix(numpy.ravel(other) * self.__C)
return numpy.asmatrix(numpy.array(other) * self.__C)
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.ravel(other).shape, self.shape, self.__name))
+ "operands could not be broadcast together with shapes %s %s in %s matrix"
+ % (numpy.ravel(other).shape, self.shape, self.__name)
+ )
elif self.isscalar() and isinstance(other, numpy.matrix):
return other * self.__C
elif self.isobject():
return self.__C.__rmatmul__(other)
else:
raise NotImplementedError(
- "%s covariance matrix __rmatmul__ method not available for %s type!"%(self.__name, type(other)))
+ "%s covariance matrix __rmatmul__ method not available for %s type!"
+ % (self.__name, type(other))
+ )
def __rmul__(self, other):
"x.__rmul__(y) <==> y*x"
return numpy.asmatrix(other) * self.__C
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.asmatrix(other).shape, self.shape, self.__name))
+ "operands could not be broadcast together with shapes %s %s in %s matrix"
+ % (numpy.asmatrix(other).shape, self.shape, self.__name)
+ )
elif self.isvector() and isinstance(other, numpy.matrix):
if numpy.ravel(other).size == self.shape[0]: # Vecteur
return numpy.asmatrix(numpy.ravel(other) * self.__C)
return numpy.asmatrix(numpy.array(other) * self.__C)
else:
raise ValueError(
- "operands could not be broadcast together with shapes %s %s in %s matrix"%(numpy.ravel(other).shape, self.shape, self.__name))
+ "operands could not be broadcast together with shapes %s %s in %s matrix"
+ % (numpy.ravel(other).shape, self.shape, self.__name)
+ )
elif self.isscalar() and isinstance(other, numpy.matrix):
return other * self.__C
elif self.isscalar() and isinstance(other, float):
return self.__C.__rmul__(other)
else:
raise NotImplementedError(
- "%s covariance matrix __rmul__ method not available for %s type!"%(self.__name, type(other)))
+ "%s covariance matrix __rmul__ method not available for %s type!"
+ % (self.__name, type(other))
+ )
def __len__(self):
"x.__len__() <==> len(x)"
return self.shape[0]
+
+# ==============================================================================
+class DynamicalSimulator(object):
+ """
+ Classe de simulateur ODE d'ordre 1 pour modèles dynamiques :
+
+ dy / dt = F_µ(t, y)
+
+ avec y = f(t) et µ les paramètres intrinsèques. t est couramment le temps,
+ mais il peut être une variable quelconque non temporelle.
+
+ Paramètres d'initialisation :
+ - mu : paramètres intrinsèques du modèle
+ - integrator : intégrateur choisi pour intégrer l'ODE
+ - dt : pas de temps d'intégration
+ - t0 : temps initial d'intégration
+ - tf : temps final
+ - y0 : condition initiale
+ """
+
+ __integrator_list = ["euler", "rk1", "rk2", "rk3", "rk4", "odeint", "solve_ivp"]
+ __slots__ = (
+ "_autonomous",
+ "_mu",
+ "_integrator",
+ "_dt",
+ "_do",
+ "_t0",
+ "_tf",
+ "_y0",
+ )
+
+ def __init__(
+ self,
+ mu=None,
+ integrator=None,
+ dt=None,
+ t0=None,
+ tf=None,
+ y0=None,
+ autonomous=None,
+ ):
+ "None default values are mandatory to allow overriding"
+ self.set_canonical_description()
+ self.set_description(mu, integrator, dt, t0, tf, y0, autonomous)
+
+ # --------------------------------------------------------------------------
+ # User defined ODE model
+
+ def ODEModel(self, t, y):
+ """
+ ODE : return dy / dt = F_µ(t, y)
+ """
+ raise NotImplementedError()
+
+ def set_canonical_description(self):
+ """
+ User settings for the default or recommended ODE description
+
+ Setters that >>> can <<< be used:
+ - self.set_mu
+ - self.set_integrator
+ - self.set_dt
+ - self.set_t0
+ - self.set_tf
+ - self.set_y0
+ - self.set_autonomous
+ """
+ pass
+
+ # --------------------------------------------------------------------------
+
+ def set_description(
+ self,
+ mu=None,
+ integrator=None,
+ dt=None,
+ t0=None,
+ tf=None,
+ y0=None,
+ autonomous=False,
+ ):
+ "Explicit setting of EDO description"
+ self.set_mu(mu)
+ self.set_integrator(integrator)
+ self.set_dt(dt)
+ self.set_t0(t0)
+ self.set_tf(tf)
+ self.set_y0(y0)
+ self.set_autonomous(autonomous)
+
+ def set_mu(self, mu=None):
+ "Set EDO intrinsic parameters µ"
+ if mu is not None:
+ self._mu = numpy.ravel(mu)
+ return self._mu
+
+ def set_integrator(self, integrator=None):
+ "Set integration scheme name"
+ if integrator is None:
+ pass
+ elif integrator not in self.__integrator_list:
+ raise ValueError(
+ "Wrong value %s for integrator in set_integrator call. \nAvailable integrators are: %s"
+ % (integrator, self.__integrator_list)
+ )
+ else:
+ self._integrator = str(integrator)
+ return self._integrator
+
+ def set_dt(self, value=None):
+ "Set integration step size dt"
+ if value is not None:
+ self._dt = max(2.0e-16, abs(float(value)))
+ return self._dt
+
+ def set_t0(self, value=None):
+ "Set initial integration time"
+ if value is not None:
+ self._t0 = float(value)
+ if hasattr(self, "_tf") and self._t0 > self._tf:
+ raise ValueError("Initial time has to remain less than final time")
+ return self._t0
+
+ def set_tf(self, value=None):
+ "Set final integration time"
+ if value is not None:
+ self._tf = float(value)
+ if hasattr(self, "_t0") and self._t0 > self._tf:
+ raise ValueError("Initial time has to remain less than final time")
+ return self._tf
+
+ def set_y0(self, value):
+ "Set integration initial condition"
+ if value is not None:
+ self._y0 = numpy.ravel(value)
+ return self._y0
+
+ def set_autonomous(self, value=None):
+ "Declare the system to be autonomous"
+ if value is not None:
+ self._autonomous = bool(value)
+ return self._autonomous
+
+ def set_do(self, value=None):
+ "Set observation step size do"
+ if value is not None:
+ self._do = max(2.0e-16, abs(float(value)))
+ return self._do
+
+ # --------------------------------------------------------------------------
+
+ def _rk1_step(self, t, y, h, F):
+ "Euler integration schema"
+ y = y + h * F(t, y)
+ t = t + h
+ return [t, y]
+
+ def _rk2_step(self, t, y, h, F):
+ "Runge-Kutta integration schema of order 2 (RK2)"
+ k1 = h * F(t, y)
+ k2 = h * F(t + h / 2.0, y + k1 / 2.0)
+ #
+ y = y + k2
+ t = t + h
+ return [t, y]
+
+ def _rk3_step(self, t, y, h, F):
+ "Runge-Kutta integration schema of order 3 (RK3)"
+ k1 = h * F(t, y)
+ k2 = h * F(t + h / 2.0, y + k1 / 2.0)
+ k3 = h * F(t + h, y - k1 + 2.0 * k2)
+ #
+ y = y + (k1 + 4.0 * k2 + k3) / 6.0
+ t = t + h
+ return [t, y]
+
+ def _rk4_step(self, t, y, h, F):
+ "Runge-Kutta integration schema of order 4 (RK4)"
+ k1 = h * F(t, y)
+ k2 = h * F(t + h / 2.0, y + k1 / 2.0)
+ k3 = h * F(t + h / 2.0, y + k2 / 2.0)
+ k4 = h * F(t + h, y + k3)
+ #
+ y = y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
+ t = t + h
+ return [t, y]
+
+ _euler_step = _rk1_step
+
+ def Integration(self, y0=None, t0=None, tf=None, mu=None):
+ """
+ Exécute l'intégration du modèle entre t0 et tf, en partant de y0,
+ via le schéma d'intégration choisi
+ """
+ if y0 is not None:
+ self.set_y0(y0)
+ if t0 is not None:
+ self.set_t0(t0)
+ if tf is not None:
+ self.set_tf(tf)
+ if mu is not None:
+ self.set_mu(mu)
+ if (
+ (self._mu is None)
+ or (self._integrator is None)
+ or (self._dt is None)
+ or (self._t0 is None)
+ or (self._tf is None)
+ or (self._y0 is None)
+ ):
+ raise ValueError(
+ "Some integration input information are None and not defined\n(%s, %s, %s, %s, %s, %s)"
+ % (
+ self._mu,
+ self._integrator,
+ self._dt,
+ self._t0,
+ self._tf,
+ self._y0,
+ )
+ )
+ #
+ ODE = self.ODEModel
+ times = numpy.arange(self._t0, self._tf + self._dt / 2, self._dt)
+ if self._integrator == "odeint":
+ # 'automatic' integration for systems that can be ill-behaved
+ # with rk4 or euler schemes (such as Van der Pol)
+ from scipy.integrate import odeint
+
+ trajectory = odeint(
+ ODE,
+ numpy.array(self._y0, dtype=float),
+ times,
+ tfirst=True,
+ )
+ elif self._integrator == "solve_ivp":
+ # 'automatic' integration for systems that can be ill-behaved
+ # with rk4 or euler schemes (such as Van der Pol)
+ from scipy.integrate import solve_ivp
+
+ sol = solve_ivp(
+ ODE,
+ (self._t0, self._tf),
+ numpy.array(self._y0, dtype=float),
+ t_eval=times,
+ )
+ trajectory = sol.y.T
+ else:
+ if hasattr(self, "_%s_step" % self._integrator):
+ integration_step = getattr(self, "_%s_step" % self._integrator)
+ else:
+ raise ValueError(
+ "Error in setting the integrator method (no _%s_step method)"
+ % self._integrator
+ )
+ #
+ t = self._t0
+ y = self._y0
+ trajectory = numpy.array([self._y0])
+ #
+ while t < self._tf - self._dt / 2:
+ [t, y] = integration_step(t, y, self._dt, ODE)
+ trajectory = numpy.concatenate((trajectory, numpy.array([y])), axis=0)
+ #
+ return [times, trajectory]
+
+ def ForecastedPath(self, y1=None, t1=None, t2=None, mu=None):
+ "Trajectoire de t1 à t2, en partant de yn, pour un paramètre donné mu"
+ #
+ _, trajectory_from_t1_to_t2 = self.Integration(y1, t1, t2, mu)
+ #
+ return trajectory_from_t1_to_t2
+
+ def ForecastedState(self, y1=None, t1=None, t2=None, mu=None):
+ "État à t2 en intégrant à partir de t1, y1, pour un paramètre donné mu"
+ #
+ _, trajectory_from_t1_to_t2 = self.Integration(y1, t1, t2, mu)
+ #
+ return trajectory_from_t1_to_t2[-1, :]
+
+ def StateTransition(self, y1=None):
+ "État y[n+1] intégré depuis y[n] sur pas constant ou non"
+ if self.set_autonomous():
+ if not hasattr(self, "_do") or self._do is None:
+ raise ValueError(
+ " StateTransition requires an observation step size to be given"
+ )
+ return self.ForecastedState(y1, t1=0.0, t2=self.set_do())
+ else:
+ raise NotImplementedError(
+ " StateTransition has to be provided by the user in case of non-autonomous ODE"
+ )
+
+ def HistoryBoard(self, t_s, i_s, y_s, filename="history_board_of_trajectory.pdf"):
+ """
+ t_s : série des instants t
+ i_s : série des indices i des variables
+ y_s : série des valeurs 1D des variables du système dynamique, pour
+ chaque pas de temps, sous forme d'un tableau 2D de type:
+ SDyn(i,t) = SDyn[i][t] = [SDyn[i] pour chaque t]
+ """
+ import matplotlib
+ import matplotlib.pyplot as plt
+ from matplotlib.ticker import MaxNLocator
+
+ levels = MaxNLocator(nbins=25).tick_values(
+ numpy.ravel(y_s).min(), numpy.ravel(y_s).max()
+ )
+ fig, ax = plt.subplots(figsize=(15, 5))
+ fig.subplots_adjust(bottom=0.1, left=0.05, right=0.95, top=0.9)
+ im = plt.contourf(
+ t_s, i_s, y_s, levels=levels, cmap=plt.get_cmap("gist_gray_r")
+ )
+ fig.colorbar(im, ax=ax)
+ plt.title("Model trajectory with %i variables" % len(y_s[:, 0]))
+ plt.xlabel("Time")
+ plt.ylabel("State variables")
+ if filename is None:
+ plt.show()
+ else:
+ plt.savefig(filename)
+
+
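The classical fixed-step Runge-Kutta scheme implemented by `_rk4_step` above can be sketched standalone. The helper names below (`rk4_step`, `integrate`) are chosen for illustration only and are not part of the module; the loop mirrors the fixed-step branch of `Integration`:

```python
import math

def rk4_step(t, y, h, F):
    # One classical RK4 step for dy/dt = F(t, y)
    k1 = h * F(t, y)
    k2 = h * F(t + h / 2.0, y + k1 / 2.0)
    k3 = h * F(t + h / 2.0, y + k2 / 2.0)
    k4 = h * F(t + h, y + k3)
    return t + h, y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def integrate(F, t0, tf, y0, h):
    # Fixed-step integration from t0 to tf, mirroring the while loop
    # of the Integration method (the half-step margin avoids a spurious
    # extra step due to floating-point accumulation)
    t, y = t0, y0
    while t < tf - h / 2:
        t, y = rk4_step(t, y, h, F)
    return y

# dy/dt = -y with y(0) = 1 has the exact solution y(t) = exp(-t),
# so the numerical result can be checked against it
y1 = integrate(lambda t, y: -y, 0.0, 1.0, 1.0, 0.01)
print(abs(y1 - math.exp(-1.0)))  # RK4 global error is O(h^4), i.e. tiny here
```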
# ==============================================================================
class Observer2Func(object):
"""
Création d'une fonction d'observateur a partir de son texte
"""
- __slots__ = ("__corps")
+
+ __slots__ = ("__corps",)
def __init__(self, corps=""):
self.__corps = corps
"Restitution du pointeur de fonction dans l'objet"
return self.func
+
# ==============================================================================
class CaseLogger(object):
"""
Conservation des commandes de création d'un cas
"""
+
__slots__ = (
- "__name", "__objname", "__logSerie", "__switchoff", "__viewers",
+ "__name",
+ "__objname",
+ "__logSerie",
+ "__switchoff",
+ "__viewers",
"__loaders",
)
- def __init__(self, __name="", __objname="case", __addViewers=None, __addLoaders=None):
- self.__name = str(__name)
- self.__objname = str(__objname)
+ def __init__(
+ self, __name="", __objname="case", __addViewers=None, __addLoaders=None
+ ):
+ self.__name = str(__name)
+ self.__objname = str(__objname)
self.__logSerie = []
self.__switchoff = False
self.__viewers = {
if __addLoaders is not None:
self.__loaders.update(dict(__addLoaders))
- def register(self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False):
+ def register(
+ self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False
+ ):
"Enregistrement d'une commande individuelle"
- if __command is not None and __keys is not None and __local is not None and not self.__switchoff:
+ if (
+ __command is not None
+ and __keys is not None
+ and __local is not None
+ and not self.__switchoff
+ ):
if "self" in __keys:
__keys.remove("self")
- self.__logSerie.append( (str(__command), __keys, __local, __pre, __switchoff) )
+ self.__logSerie.append(
+ (str(__command), __keys, __local, __pre, __switchoff)
+ )
if __switchoff:
self.__switchoff = True
if not __switchoff:
def dump(self, __filename=None, __format="TUI", __upa=""):
"Restitution normalisée des commandes"
if __format in self.__viewers:
- __formater = self.__viewers[__format](self.__name, self.__objname, self.__logSerie)
+ __formater = self.__viewers[__format](
+ self.__name, self.__objname, self.__logSerie
+ )
else:
- raise ValueError("Dumping as \"%s\" is not available"%__format)
+ raise ValueError('Dumping as "%s" is not available' % __format)
return __formater.dump(__filename, __upa)
def load(self, __filename=None, __content=None, __object=None, __format="TUI"):
if __format in self.__loaders:
__formater = self.__loaders[__format]()
else:
- raise ValueError("Loading as \"%s\" is not available"%__format)
+ raise ValueError('Loading as "%s" is not available' % __format)
return __formater.load(__filename, __content, __object)
+
# ==============================================================================
def MultiFonction(
- __xserie,
- _extraArguments = None,
- _sFunction = lambda x: x,
- _mpEnabled = False,
- _mpWorkers = None ):
+ __xserie,
+ _extraArguments=None,
+ _sFunction=lambda x: x,
+ _mpEnabled=False,
+ _mpWorkers=None,
+):
"""
Pour une liste ordonnée de vecteurs en entrée, renvoie en sortie la liste
correspondante de valeurs de la fonction en argument
"""
# Vérifications et définitions initiales
# logging.debug("MULTF Internal multifonction calculations begin with function %s"%(_sFunction.__name__,))
- if not PlatformInfo.isIterable( __xserie ):
- raise TypeError("MultiFonction not iterable unkown input type: %s"%(type(__xserie),))
+ if not PlatformInfo.isIterable(__xserie):
+ raise TypeError(
+ "MultiFonction not iterable unkown input type: %s" % (type(__xserie),)
+ )
if _mpEnabled:
if (_mpWorkers is None) or (_mpWorkers is not None and _mpWorkers < 1):
__mpWorkers = None
__mpWorkers = int(_mpWorkers)
try:
import multiprocessing
+
__mpEnabled = True
except ImportError:
__mpEnabled = False
_jobs = __xserie
# logging.debug("MULTF Internal multiprocessing calculations begin : evaluation of %i point(s)"%(len(_jobs),))
with multiprocessing.Pool(__mpWorkers) as pool:
- __multiHX = pool.map( _sFunction, _jobs )
+ __multiHX = pool.map(_sFunction, _jobs)
pool.close()
pool.join()
# logging.debug("MULTF Internal multiprocessing calculation end")
__multiHX = []
if _extraArguments is None:
for __xvalue in __xserie:
- __multiHX.append( _sFunction( __xvalue ) )
- elif _extraArguments is not None and isinstance(_extraArguments, (list, tuple, map)):
+ __multiHX.append(_sFunction(__xvalue))
+ elif _extraArguments is not None and isinstance(
+ _extraArguments, (list, tuple, map)
+ ):
for __xvalue in __xserie:
- __multiHX.append( _sFunction( __xvalue, *_extraArguments ) )
+ __multiHX.append(_sFunction(__xvalue, *_extraArguments))
elif _extraArguments is not None and isinstance(_extraArguments, dict):
for __xvalue in __xserie:
- __multiHX.append( _sFunction( __xvalue, **_extraArguments ) )
+ __multiHX.append(_sFunction(__xvalue, **_extraArguments))
else:
- raise TypeError("MultiFonction extra arguments unkown input type: %s"%(type(_extraArguments),))
+ raise TypeError(
+ "MultiFonction extra arguments unkown input type: %s"
+ % (type(_extraArguments),)
+ )
# logging.debug("MULTF Internal monoprocessing calculation end")
#
# logging.debug("MULTF Internal multifonction calculations end")
return __multiHX
+
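The argument-dispatch logic of the sequential branch of `MultiFonction` (extra arguments passed positionally for a list/tuple, by keyword for a dict) can be sketched standalone. `multi_eval` is a hypothetical name introduced here for illustration:

```python
def multi_eval(xserie, func=lambda x: x, extra=None):
    # Apply func to each item of an ordered series, dispatching optional
    # extra arguments positionally (list/tuple) or by keyword (dict),
    # as the sequential branch of MultiFonction does
    results = []
    for x in xserie:
        if extra is None:
            results.append(func(x))
        elif isinstance(extra, (list, tuple)):
            results.append(func(x, *extra))
        elif isinstance(extra, dict):
            results.append(func(x, **extra))
        else:
            raise TypeError("Unknown extra arguments type: %s" % type(extra))
    return results

print(multi_eval([1, 2, 3], lambda x, p: p * x, extra=(10,)))  # [10, 20, 30]
```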
# ==============================================================================
if __name__ == "__main__":
print("\n AUTODIAGNOSTIC\n")
LOGFILE = os.path.join(os.path.abspath(os.curdir), "AdaoOutputLogfile.log")
+
# ==============================================================================
class ExtendedLogging(object):
"""
Logger général pour disposer conjointement de la sortie standard et de la
sortie sur fichier
"""
- __slots__ = ("__logfile")
+
+ __slots__ = ("__logfile",)
def __init__(self, level=logging.WARNING):
"""
if sys.version_info.major <= 3 and sys.version_info.minor < 8:
if logging.getLogger().hasHandlers():
while logging.getLogger().hasHandlers():
- logging.getLogger().removeHandler( logging.getLogger().handlers[-1] )
+ logging.getLogger().removeHandler(logging.getLogger().handlers[-1])
__sys_stdout = logging.StreamHandler(sys.stdout)
- __sys_stdout.setFormatter(logging.Formatter('%(levelname)-8s %(message)s'))
+ __sys_stdout.setFormatter(
+ logging.Formatter("%(levelname)-8s %(message)s")
+ )
logging.getLogger().addHandler(__sys_stdout)
else:
logging.basicConfig(
- format = '%(levelname)-8s %(message)s',
- level = level,
- stream = sys.stdout,
+ format="%(levelname)-8s %(message)s",
+ level=level,
+ stream=sys.stdout,
)
else: # Actif lorsque Python > 3.7
logging.basicConfig(
- format = '%(levelname)-8s %(message)s',
- level = level,
- stream = sys.stdout,
- force = True,
+ format="%(levelname)-8s %(message)s",
+ level=level,
+ stream=sys.stdout,
+ force=True,
)
self.__logfile = None
#
# ---------------------------------
lpi = PlatformInfo.PlatformInfo()
#
- logging.info( "--------------------------------------------------" )
- logging.info( lpi.getName() + " version " + lpi.getVersion() )
- logging.info( "--------------------------------------------------" )
- logging.info( "Library availability:" )
- logging.info( "- Python.......: True" )
- logging.info( "- Numpy........: " + str(lpi.has_numpy) )
- logging.info( "- Scipy........: " + str(lpi.has_scipy) )
- logging.info( "- Matplotlib...: " + str(lpi.has_matplotlib) )
- logging.info( "- Gnuplot......: " + str(lpi.has_gnuplot) )
- logging.info( "- Sphinx.......: " + str(lpi.has_sphinx) )
- logging.info( "- Nlopt........: " + str(lpi.has_nlopt) )
- logging.info( "Library versions:" )
- logging.info( "- Python.......: " + lpi.getPythonVersion() )
- logging.info( "- Numpy........: " + lpi.getNumpyVersion() )
- logging.info( "- Scipy........: " + lpi.getScipyVersion() )
- logging.info( "- Matplotlib...: " + lpi.getMatplotlibVersion() )
- logging.info( "- Gnuplot......: " + lpi.getGnuplotVersion() )
- logging.info( "- Sphinx.......: " + lpi.getSphinxVersion() )
- logging.info( "- Nlopt........: " + lpi.getNloptVersion() )
- logging.info( "" )
+ logging.info("--------------------------------------------------")
+ logging.info(lpi.getName() + " version " + lpi.getVersion())
+ logging.info("--------------------------------------------------")
+ logging.info("Library availability:")
+ logging.info("- Python.......: True")
+ logging.info("- Numpy........: " + str(lpi.has_numpy))
+ logging.info("- Scipy........: " + str(lpi.has_scipy))
+ logging.info("- Matplotlib...: " + str(lpi.has_matplotlib))
+ logging.info("- Gnuplot......: " + str(lpi.has_gnuplot))
+ logging.info("- Sphinx.......: " + str(lpi.has_sphinx))
+ logging.info("- Nlopt........: " + str(lpi.has_nlopt))
+ logging.info("Library versions:")
+ logging.info("- Python.......: " + lpi.getPythonVersion())
+ logging.info("- Numpy........: " + lpi.getNumpyVersion())
+ logging.info("- Scipy........: " + lpi.getScipyVersion())
+ logging.info("- Matplotlib...: " + lpi.getMatplotlibVersion())
+ logging.info("- Gnuplot......: " + lpi.getGnuplotVersion())
+ logging.info("- Sphinx.......: " + lpi.getSphinxVersion())
+ logging.info("- Nlopt........: " + lpi.getNloptVersion())
+ logging.info("")
def setLogfile(self, filename=LOGFILE, filemode="w", level=logging.NOTSET):
"""
self.__logfile = logging.FileHandler(filename, filemode)
self.__logfile.setLevel(level)
self.__logfile.setFormatter(
- logging.Formatter('%(asctime)s %(levelname)-8s %(message)s',
- '%d %b %Y %H:%M:%S'))
+ logging.Formatter(
+ "%(asctime)s %(levelname)-8s %(message)s", "%d %b %Y %H:%M:%S"
+ )
+ )
logging.getLogger().addHandler(self.__logfile)
- def setLogfileLevel(self, level=logging.NOTSET ):
+ def setLogfileLevel(self, level=logging.NOTSET):
"""
Permet de changer le niveau des messages stockés en fichier. Il ne sera
pris en compte que s'il est supérieur au niveau global.
"""
Renvoie le niveau de logging sous forme texte
"""
- return logging.getLevelName( logging.getLogger().getEffectiveLevel() )
+ return logging.getLevelName(logging.getLogger().getEffectiveLevel())
+
# ==============================================================================
def logtimer(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
- start = time.clock() # time.time()
+ start = time.perf_counter()  # time.clock() was removed in Python 3.8
result = f(*args, **kwargs)
- end = time.clock() # time.time()
- msg = 'TIMER Durée elapsed de la fonction utilisateur "{}": {:.3f}s'
+ end = time.perf_counter()  # time.clock() was removed in Python 3.8
+ msg = 'TIMER Elapsed time of the user function "{}": {:.3f}s'
logging.debug(msg.format(f.__name__, end - start))
return result
+
return wrapper
+
# ==============================================================================
if __name__ == "__main__":
print("\n AUTODIAGNOSTIC\n")
from daCore import Templates
from daCore import Reporting
from daCore import version
+
lpi = PlatformInfo.PlatformInfo()
+
# ==============================================================================
class GenericCaseViewer(object):
"""
Gestion des commandes de création d'une vue de cas
"""
+
__slots__ = (
- "_name", "_objname", "_lineSerie", "_switchoff", "_content",
- "_numobservers", "_object", "_missing",
+ "_name",
+ "_objname",
+ "_lineSerie",
+ "_switchoff",
+ "_content",
+ "_numobservers",
+ "_object",
+ "_missing",
)
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entete"
- self._name = str(__name)
- self._objname = str(__objname)
- self._lineSerie = []
- self._switchoff = False
+ self._name = str(__name)
+ self._objname = str(__objname)
+ self._lineSerie = []
+ self._switchoff = False
self._numobservers = 2
- self._content = __content
- self._object = __object
- self._missing = """raise ValueError("This case requires beforehand to import or define the variable named <%s>. When corrected, remove this command, correct and uncomment the following one.")\n# """
+ self._content = __content
+ self._object = __object
+ self._missing = (
+ """raise ValueError("This case requires beforehand to import or"""
+ + """ define the variable named <%s>. When corrected, remove this"""
+ + """ command, correct and uncomment the following one.")\n# """
+ )
def _append(self, *args):
"Transformation d'une commande individuelle en un enregistrement"
def _initialize(self, __multilines):
"Permet des pré-conversions automatiques simples de commandes ou clés"
__translation = {
- "Study_name" : "StudyName", # noqa: E203
- "Study_repertory" : "StudyRepertory", # noqa: E203
- "MaximumNumberOfSteps" : "MaximumNumberOfIterations", # noqa: E203
+ "Study_name": "StudyName",
+ "Study_repertory": "StudyRepertory",
+ "MaximumNumberOfSteps": "MaximumNumberOfIterations",
"EnableMultiProcessing": "EnableWiseParallelism",
- "FunctionDict" : "ScriptWithSwitch", # noqa: E203
- "FUNCTIONDICT_FILE" : "SCRIPTWITHSWITCH_FILE", # noqa: E203
+ "FunctionDict": "ScriptWithSwitch",
+ "FUNCTIONDICT_FILE": "SCRIPTWITHSWITCH_FILE",
}
for k, v in __translation.items():
__multilines = __multilines.replace(k, v)
"Enregistrement du final"
__hasNotExecute = True
for __l in self._lineSerie:
- if "%s.execute"%(self._objname,) in __l:
+ if "%s.execute" % (self._objname,) in __l:
__hasNotExecute = False
if __hasNotExecute:
- self._lineSerie.append("%s.execute()"%(self._objname,))
+ self._lineSerie.append("%s.execute()" % (self._objname,))
if __upa is not None and len(__upa) > 0:
__upa = __upa.replace("ADD", str(self._objname))
self._lineSerie.append(__upa)
def load(self, __filename=None, __content=None, __object=None):
"Chargement normalisé des commandes"
if __filename is not None and os.path.exists(__filename):
- self._content = open(__filename, 'r').read()
+ self._content = open(__filename, "r").read()
self._content = self._initialize(self._content)
elif __content is not None and type(__content) is str:
self._content = self._initialize(__content)
__commands = self._extract(self._content, self._object)
return __commands
+
class _TUIViewer(GenericCaseViewer):
"""
Établissement des commandes d'un cas ADAO TUI (Cas<->TUI)
"""
+
__slots__ = ()
def __init__(self, __name="", __objname="case", __content=None, __object=None):
self._addLine("import numpy as np")
self._addLine("from numpy import array, matrix")
self._addLine("from adao import adaoBuilder")
- self._addLine("%s = adaoBuilder.New('%s')"%(self._objname, self._name))
+ self._addLine("%s = adaoBuilder.New('%s')" % (self._objname, self._name))
if self._content is not None:
for command in self._content:
self._append(*command)
- def _append(self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False):
+ def _append(
+ self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False
+ ):
"Transformation d'une commande individuelle en un enregistrement"
if __command is not None and __keys is not None and __local is not None:
if "Concept" in __keys:
- logging.debug("TUI Order processed: %s"%(__local["Concept"],))
- __text = ""
+ logging.debug("TUI Order processed: %s" % (__local["Concept"],))
+ __text = ""
if __pre is not None:
- __text += "%s = "%__pre
- __text += "%s.%s( "%(self._objname, str(__command))
+ __text += "%s = " % __pre
+ __text += "%s.%s( " % (self._objname, str(__command))
if "self" in __keys:
__keys.remove("self")
if __command not in ("set", "get") and "Concept" in __keys:
__keys.remove("Concept")
for k in __keys:
- if k not in __local: continue # noqa: E701
+ if k not in __local:
+ continue
__v = __local[k]
- if __v is None: continue # noqa: E701
- if k == "Checked" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "ColMajor" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "CrossObs" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "SyncObs" and __v: continue # noqa: E241,E271,E272,E701
- if k == "InputFunctionAsMulti" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "PerformanceProfile" and __v: continue # noqa: E241,E271,E272,E701
- if k == "Stored" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "nextStep" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "noDetails": continue # noqa: E241,E271,E272,E701
+ if __v is None:
+ continue
+ if k == "Checked" and not __v:
+ continue
+ if k == "ColMajor" and not __v:
+ continue
+ if k == "CrossObs" and not __v:
+ continue
+ if k == "SyncObs" and __v:
+ continue
+ if k == "InputFunctionAsMulti" and not __v:
+ continue
+ if k == "PerformanceProfile" and __v:
+ continue
+ if k == "Stored" and not __v:
+ continue
+ if k == "nextStep" and not __v:
+ continue
+ if k == "noDetails":
+ continue
if isinstance(__v, Persistence.Persistence):
__v = __v.values()
if callable(__v):
- __text = self._missing%__v.__name__ + __text
+ __text = self._missing % __v.__name__ + __text
if isinstance(__v, dict):
for val in __v.values():
if callable(val):
- __text = self._missing%val.__name__ + __text
- numpy.set_printoptions(precision=15, threshold=1000000, linewidth=1000 * 15)
- __text += "%s=%s, "%(k, repr(__v))
+ __text = self._missing % val.__name__ + __text
+ numpy.set_printoptions(
+ precision=15, threshold=1000000, linewidth=1000 * 15
+ )
+ __text += "%s=%s, " % (k, repr(__v))
numpy.set_printoptions(precision=8, threshold=1000, linewidth=75)
__text = __text.rstrip(", ")
__text += " )"
if "adaoBuilder.New" in line and "=" in line:
self._objname = line.split("=")[0].strip()
__is_case = True
- logging.debug("TUI Extracting commands of '%s' object..."%(self._objname,))
+ logging.debug(
+ "TUI Extracting commands of '%s' object..." % (self._objname,)
+ )
if not __is_case:
continue
else:
if self._objname + ".set" in line:
- __commands.append( line.replace(self._objname + ".", "", 1) )
- logging.debug("TUI Extracted command: %s"%(__commands[-1],))
+ __commands.append(line.replace(self._objname + ".", "", 1))
+ logging.debug("TUI Extracted command: %s" % (__commands[-1],))
return __commands
+
class _COMViewer(GenericCaseViewer):
"""
Établissement des commandes d'un cas COMM (Eficas Native Format/Cas<-COM)
"""
+
__slots__ = ("_observerIndex", "_objdata")
def __init__(self, __name="", __objname="case", __content=None, __object=None):
self._addLine("import numpy as np")
self._addLine("from numpy import array, matrix")
self._addLine("#")
- self._addLine("%s = {}"%__objname)
+ self._addLine("%s = {}" % __objname)
if self._content is not None:
for command in self._content:
self._append(*command)
"Transformation d'enregistrement(s) en commande(s) individuelle(s)"
__suppparameters = {}
if __multilines is not None:
- if 'adaoBuilder' in __multilines:
- raise ValueError("Impossible to load given content as an ADAO COMM one (Hint: it's perhaps not a COMM input, but a TUI one).")
+ if "adaoBuilder" in __multilines:
+ raise ValueError(
+ "Impossible to load given content as an ADAO COMM one"
+ + " (Hint: it's perhaps not a COMM input, but a TUI one)."
+ )
if "ASSIMILATION_STUDY" in __multilines:
- __suppparameters.update({'StudyType': "ASSIMILATION_STUDY"})
+ __suppparameters.update({"StudyType": "ASSIMILATION_STUDY"})
__multilines = __multilines.replace("ASSIMILATION_STUDY", "dict")
elif "OPTIMIZATION_STUDY" in __multilines:
- __suppparameters.update({'StudyType': "ASSIMILATION_STUDY"})
+ __suppparameters.update({"StudyType": "ASSIMILATION_STUDY"})
__multilines = __multilines.replace("OPTIMIZATION_STUDY", "dict")
elif "REDUCTION_STUDY" in __multilines:
- __suppparameters.update({'StudyType': "ASSIMILATION_STUDY"})
+ __suppparameters.update({"StudyType": "ASSIMILATION_STUDY"})
__multilines = __multilines.replace("REDUCTION_STUDY", "dict")
elif "CHECKING_STUDY" in __multilines:
- __suppparameters.update({'StudyType': "CHECKING_STUDY"})
+ __suppparameters.update({"StudyType": "CHECKING_STUDY"})
__multilines = __multilines.replace("CHECKING_STUDY", "dict")
else:
__multilines = __multilines.replace("ASSIMILATION_STUDY", "dict")
self._objdata = None
exec("self._objdata = " + __multilines)
#
- if self._objdata is None or not (type(self._objdata) is dict) or not ('AlgorithmParameters' in self._objdata):
- raise ValueError("Impossible to load given content as an ADAO COMM one (no dictionnary or no 'AlgorithmParameters' key found).")
+ if (
+ self._objdata is None
+ or not (type(self._objdata) is dict)
+ or not ("AlgorithmParameters" in self._objdata)
+ ):
+ raise ValueError(
+ "Impossible to load given content as an ADAO COMM one"
+                + " (no dictionary or no 'AlgorithmParameters' key found)."
+ )
# ----------------------------------------------------------------------
- logging.debug("COMM Extracting commands of '%s' object..."%(self._objname,))
+ logging.debug("COMM Extracting commands of '%s' object..." % (self._objname,))
__commands = []
__UserPostAnalysis = ""
for k, r in self._objdata.items():
__command = k
- logging.debug("COMM Extracted command: %s:%s"%(k, r))
- if __command == "StudyName" and len(str(r)) > 0:
- __commands.append( "set( Concept='Name', String='%s')"%(str(r),) )
+ logging.debug("COMM Extracted command: %s:%s" % (k, r))
+ if __command == "StudyName" and len(str(r)) > 0:
+ __commands.append("set( Concept='Name', String='%s')" % (str(r),))
elif __command == "StudyRepertory":
- __commands.append( "set( Concept='Directory', String='%s')"%(str(r),) )
+ __commands.append("set( Concept='Directory', String='%s')" % (str(r),))
elif __command == "Debug" and str(r) == "0":
- __commands.append( "set( Concept='NoDebug' )" )
+ __commands.append("set( Concept='NoDebug' )")
elif __command == "Debug" and str(r) == "1":
- __commands.append( "set( Concept='Debug' )" )
+ __commands.append("set( Concept='Debug' )")
elif __command == "ExecuteInContainer":
- __suppparameters.update({'ExecuteInContainer': r})
+ __suppparameters.update({"ExecuteInContainer": r})
#
elif __command == "UserPostAnalysis" and type(r) is dict:
- if 'STRING' in r:
- __UserPostAnalysis = r['STRING'].replace("ADD", str(self._objname))
- __commands.append( "set( Concept='UserPostAnalysis', String=\"\"\"%s\"\"\" )"%(__UserPostAnalysis,) )
- elif 'SCRIPT_FILE' in r and os.path.exists(r['SCRIPT_FILE']):
- __UserPostAnalysis = open(r['SCRIPT_FILE'], 'r').read()
- __commands.append( "set( Concept='UserPostAnalysis', Script='%s' )"%(r['SCRIPT_FILE'],) )
- elif 'Template' in r and 'ValueTemplate' not in r:
+ if "STRING" in r:
+ __UserPostAnalysis = r["STRING"].replace("ADD", str(self._objname))
+ __commands.append(
+ 'set( Concept=\'UserPostAnalysis\', String="""%s""" )'
+ % (__UserPostAnalysis,)
+ )
+ elif "SCRIPT_FILE" in r and os.path.exists(r["SCRIPT_FILE"]):
+                with open(r["SCRIPT_FILE"], "r") as __scriptfile:
+                    __UserPostAnalysis = __scriptfile.read()
+ __commands.append(
+ "set( Concept='UserPostAnalysis', Script='%s' )"
+ % (r["SCRIPT_FILE"],)
+ )
+ elif "Template" in r and "ValueTemplate" not in r:
# AnalysisPrinter...
- if r['Template'] not in Templates.UserPostAnalysisTemplates:
- raise ValueError("User post-analysis template \"%s\" does not exist."%(r['Template'],))
+ if r["Template"] not in Templates.UserPostAnalysisTemplates:
+ raise ValueError(
+ 'User post-analysis template "%s" does not exist.'
+ % (r["Template"],)
+ )
else:
- __UserPostAnalysis = Templates.UserPostAnalysisTemplates[r['Template']]
- __commands.append( "set( Concept='UserPostAnalysis', Template='%s' )"%(r['Template'],) )
- elif 'Template' in r and 'ValueTemplate' in r:
+ __UserPostAnalysis = Templates.UserPostAnalysisTemplates[
+ r["Template"]
+ ]
+ __commands.append(
+ "set( Concept='UserPostAnalysis', Template='%s' )"
+ % (r["Template"],)
+ )
+ elif "Template" in r and "ValueTemplate" in r:
# Le template ayant pu être modifié, donc on ne prend que le ValueTemplate...
- __UserPostAnalysis = r['ValueTemplate']
- __commands.append( "set( Concept='UserPostAnalysis', String=\"\"\"%s\"\"\" )"%(__UserPostAnalysis,) )
+ __UserPostAnalysis = r["ValueTemplate"]
+ __commands.append(
+ 'set( Concept=\'UserPostAnalysis\', String="""%s""" )'
+ % (__UserPostAnalysis,)
+ )
else:
__UserPostAnalysis = ""
#
- elif __command == "AlgorithmParameters" and type(r) is dict and 'Algorithm' in r:
- if 'data' in r and r['Parameters'] == 'Dict':
- __from = r['data']
- if 'STRING' in __from:
- __parameters = ", Parameters=%s"%(repr(eval(__from['STRING'])),)
- elif 'SCRIPT_FILE' in __from: # Pas de test d'existence du fichier pour accepter un fichier relatif
- __parameters = ", Script='%s'"%(__from['SCRIPT_FILE'],)
+ elif (
+ __command == "AlgorithmParameters"
+ and type(r) is dict
+ and "Algorithm" in r
+ ):
+ if "data" in r and r["Parameters"] == "Dict":
+ __from = r["data"]
+ if "STRING" in __from:
+ __parameters = ", Parameters=%s" % (
+ repr(eval(__from["STRING"])),
+ )
+                elif (
+                    "SCRIPT_FILE" in __from
+                ):  # No file existence check, so that a relative file path is accepted
+ __parameters = ", Script='%s'" % (__from["SCRIPT_FILE"],)
else: # if 'Parameters' in r and r['Parameters'] == 'Defaults':
__Dict = copy.deepcopy(r)
- __Dict.pop('Algorithm', '')
- __Dict.pop('Parameters', '')
- if 'SetSeed' in __Dict:
- __Dict['SetSeed'] = int(__Dict['SetSeed'])
- if 'Bounds' in __Dict and type(__Dict['Bounds']) is str:
- __Dict['Bounds'] = eval(__Dict['Bounds'])
- if 'BoxBounds' in __Dict and type(__Dict['BoxBounds']) is str:
- __Dict['BoxBounds'] = eval(__Dict['BoxBounds'])
+ __Dict.pop("Algorithm", "")
+ __Dict.pop("Parameters", "")
+ if "SetSeed" in __Dict:
+ __Dict["SetSeed"] = int(__Dict["SetSeed"])
+ if "Bounds" in __Dict and type(__Dict["Bounds"]) is str:
+ __Dict["Bounds"] = eval(__Dict["Bounds"])
+ if "BoxBounds" in __Dict and type(__Dict["BoxBounds"]) is str:
+ __Dict["BoxBounds"] = eval(__Dict["BoxBounds"])
if len(__Dict) > 0:
- __parameters = ', Parameters=%s'%(repr(__Dict),)
+ __parameters = ", Parameters=%s" % (repr(__Dict),)
else:
__parameters = ""
- __commands.append( "set( Concept='AlgorithmParameters', Algorithm='%s'%s )"%(r['Algorithm'], __parameters) )
+ __commands.append(
+ "set( Concept='AlgorithmParameters', Algorithm='%s'%s )"
+ % (r["Algorithm"], __parameters)
+ )
#
- elif __command == "Observers" and type(r) is dict and 'SELECTION' in r:
- if type(r['SELECTION']) is str:
- __selection = (r['SELECTION'],)
+ elif __command == "Observers" and type(r) is dict and "SELECTION" in r:
+ if type(r["SELECTION"]) is str:
+ __selection = (r["SELECTION"],)
else:
- __selection = tuple(r['SELECTION'])
+ __selection = tuple(r["SELECTION"])
for sk in __selection:
- __idata = r['%s_data'%sk]
- if __idata['NodeType'] == 'Template' and 'Template' in __idata:
- __template = __idata['Template']
- if 'Info' in __idata:
- __info = ", Info=\"\"\"%s\"\"\""%(__idata['Info'],)
+ __idata = r["%s_data" % sk]
+ if __idata["NodeType"] == "Template" and "Template" in __idata:
+ __template = __idata["Template"]
+ if "Info" in __idata:
+ __info = ', Info="""%s"""' % (__idata["Info"],)
else:
__info = ""
- __commands.append( "set( Concept='Observer', Variable='%s', Template=\"\"\"%s\"\"\"%s )"%(sk, __template, __info) )
- if __idata['NodeType'] == 'String' and 'Value' in __idata:
- __value = __idata['Value']
- __commands.append( "set( Concept='Observer', Variable='%s', String=\"\"\"%s\"\"\" )"%(sk, __value) )
+ __commands.append(
+ 'set( Concept=\'Observer\', Variable=\'%s\', Template="""%s"""%s )'
+ % (sk, __template, __info)
+ )
+ if __idata["NodeType"] == "String" and "Value" in __idata:
+ __value = __idata["Value"]
+ __commands.append(
+ 'set( Concept=\'Observer\', Variable=\'%s\', String="""%s""" )'
+ % (sk, __value)
+ )
#
# Background, ObservationError, ObservationOperator...
elif type(r) is dict:
__argumentsList = []
- if 'Stored' in r and bool(r['Stored']):
- __argumentsList.append(['Stored', True])
- if 'INPUT_TYPE' in r and 'data' in r:
+ if "Stored" in r and bool(r["Stored"]):
+ __argumentsList.append(["Stored", True])
+ if "INPUT_TYPE" in r and "data" in r:
# Vector, Matrix, ScalarSparseMatrix, DiagonalSparseMatrix, Function
- __itype = r['INPUT_TYPE']
- __idata = r['data']
- if 'FROM' in __idata:
+ __itype = r["INPUT_TYPE"]
+ __idata = r["data"]
+ if "FROM" in __idata:
# String, Script, DataFile, Template, ScriptWithOneFunction, ScriptWithFunctions
- __ifrom = __idata['FROM']
- __idata.pop('FROM', '')
- if __ifrom == 'String' or __ifrom == 'Template':
- __argumentsList.append([__itype, __idata['STRING']])
- if __ifrom == 'Script':
+ __ifrom = __idata["FROM"]
+ __idata.pop("FROM", "")
+ if __ifrom == "String" or __ifrom == "Template":
+ __argumentsList.append([__itype, __idata["STRING"]])
+ if __ifrom == "Script":
__argumentsList.append([__itype, True])
- __argumentsList.append(['Script', __idata['SCRIPT_FILE']])
- if __ifrom == 'DataFile':
+ __argumentsList.append(["Script", __idata["SCRIPT_FILE"]])
+ if __ifrom == "DataFile":
__argumentsList.append([__itype, True])
- __argumentsList.append(['DataFile', __idata['DATA_FILE']])
- if __ifrom == 'ScriptWithOneFunction':
- __argumentsList.append(['OneFunction', True])
- __argumentsList.append(['Script', __idata.pop('SCRIPTWITHONEFUNCTION_FILE')])
+ __argumentsList.append(["DataFile", __idata["DATA_FILE"]])
+ if __ifrom == "ScriptWithOneFunction":
+ __argumentsList.append(["OneFunction", True])
+ __argumentsList.append(
+ ["Script", __idata.pop("SCRIPTWITHONEFUNCTION_FILE")]
+ )
if len(__idata) > 0:
- __argumentsList.append(['Parameters', __idata])
- if __ifrom == 'ScriptWithFunctions':
- __argumentsList.append(['ThreeFunctions', True])
- __argumentsList.append(['Script', __idata.pop('SCRIPTWITHFUNCTIONS_FILE')])
+ __argumentsList.append(["Parameters", __idata])
+ if __ifrom == "ScriptWithFunctions":
+ __argumentsList.append(["ThreeFunctions", True])
+ __argumentsList.append(
+ ["Script", __idata.pop("SCRIPTWITHFUNCTIONS_FILE")]
+ )
if len(__idata) > 0:
- __argumentsList.append(['Parameters', __idata])
- __arguments = ["%s = %s"%(k, repr(v)) for k, v in __argumentsList]
- __commands.append( "set( Concept='%s', %s )"%(__command, ", ".join(__arguments)))
+ __argumentsList.append(["Parameters", __idata])
+ __arguments = ["%s = %s" % (k, repr(v)) for k, v in __argumentsList]
+ __commands.append(
+ "set( Concept='%s', %s )" % (__command, ", ".join(__arguments))
+ )
#
- __commands.append( "set( Concept='%s', Parameters=%s )"%('SupplementaryParameters', repr(__suppparameters)))
+ __commands.append(
+ "set( Concept='%s', Parameters=%s )"
+ % ("SupplementaryParameters", repr(__suppparameters))
+ )
#
# ----------------------------------------------------------------------
__commands.sort() # Pour commencer par 'AlgorithmParameters'
__commands.append(__UserPostAnalysis)
return __commands
+
class _SCDViewer(GenericCaseViewer):
"""
Établissement des commandes d'un cas SCD (Study Config Dictionary/Cas->SCD)
Remarque : le fichier généré est différent de celui obtenu par EFICAS
"""
+
__slots__ = (
- "__DebugCommandNotSet", "__ObserverCommandNotSet",
- "__UserPostAnalysisNotSet", "__hasAlgorithm")
+ "__DebugCommandNotSet",
+ "__ObserverCommandNotSet",
+ "__UserPostAnalysisNotSet",
+ "__hasAlgorithm",
+ )
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entête"
__command = command[2]["Concept"]
else:
__command = command[0].replace("set", "", 1)
- if __command == 'Name':
+ if __command == "Name":
self._name = command[2]["String"]
#
self.__DebugCommandNotSet = True
self._addLine("#\n# Input for ADAO converter to SCD\n#")
self._addLine("#")
self._addLine("study_config = {}")
- self._addLine("study_config['Name'] = '%s'"%self._name)
+ self._addLine("study_config['Name'] = '%s'" % self._name)
self._addLine("#")
self._addLine("inputvariables_config = {}")
self._addLine("inputvariables_config['Order'] =['adao_default']")
for command in __content:
self._append(*command)
- def _append(self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False):
+ def _append(
+ self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False
+ ):
"Transformation d'une commande individuelle en un enregistrement"
if __command == "set":
__command = __local["Concept"]
else:
__command = __command.replace("set", "", 1)
- logging.debug("SCD Order processed: %s"%(__command))
+ logging.debug("SCD Order processed: %s" % (__command))
#
- __text = None
- if __command in (None, 'execute', 'executePythonScheme', 'executeYACSScheme', 'get', 'Name'):
+ __text = None
+ if __command in (
+ None,
+ "execute",
+ "executePythonScheme",
+ "executeYACSScheme",
+ "get",
+ "Name",
+ ):
return
- elif __command in ['Directory',]:
- __text = "#\nstudy_config['Repertory'] = %s"%(repr(__local['String']))
- elif __command in ['Debug', 'setDebug']:
- __text = "#\nstudy_config['Debug'] = '1'"
+ elif __command in [
+ "Directory",
+ ]:
+ __text = "#\nstudy_config['Repertory'] = %s" % (repr(__local["String"]))
+ elif __command in ["Debug", "setDebug"]:
+ __text = "#\nstudy_config['Debug'] = '1'"
self.__DebugCommandNotSet = False
- elif __command in ['NoDebug', 'setNoDebug']:
- __text = "#\nstudy_config['Debug'] = '0'"
+ elif __command in ["NoDebug", "setNoDebug"]:
+ __text = "#\nstudy_config['Debug'] = '0'"
self.__DebugCommandNotSet = False
- elif __command in ['Observer', 'setObserver']:
+ elif __command in ["Observer", "setObserver"]:
if self.__ObserverCommandNotSet:
self._addLine("observers = {}")
self._addLine("study_config['Observers'] = observers")
self.__ObserverCommandNotSet = False
- __obs = __local['Variable']
+ __obs = __local["Variable"]
self._numobservers += 1
- __text = "#\n"
- __text += "observers['%s'] = {}\n"%__obs
- if __local['String'] is not None:
- __text += "observers['%s']['nodetype'] = '%s'\n"%(__obs, 'String')
- __text += "observers['%s']['String'] = \"\"\"%s\"\"\"\n"%(__obs, __local['String'])
- if __local['Script'] is not None:
- __text += "observers['%s']['nodetype'] = '%s'\n"%(__obs, 'Script')
- __text += "observers['%s']['Script'] = \"%s\"\n"%(__obs, __local['Script'])
- if __local['Template'] is not None and __local['Template'] in Templates.ObserverTemplates:
- __text += "observers['%s']['nodetype'] = '%s'\n"%(__obs, 'String')
- __text += "observers['%s']['String'] = \"\"\"%s\"\"\"\n"%(__obs, Templates.ObserverTemplates[__local['Template']])
- if __local['Info'] is not None:
- __text += "observers['%s']['info'] = \"\"\"%s\"\"\"\n"%(__obs, __local['Info'])
+ __text = "#\n"
+ __text += "observers['%s'] = {}\n" % __obs
+ if __local["String"] is not None:
+ __text += "observers['%s']['nodetype'] = '%s'\n" % (__obs, "String")
+ __text += 'observers[\'%s\'][\'String\'] = """%s"""\n' % (
+ __obs,
+ __local["String"],
+ )
+ if __local["Script"] is not None:
+ __text += "observers['%s']['nodetype'] = '%s'\n" % (__obs, "Script")
+ __text += "observers['%s']['Script'] = \"%s\"\n" % (
+ __obs,
+ __local["Script"],
+ )
+ if (
+ __local["Template"] is not None
+ and __local["Template"] in Templates.ObserverTemplates
+ ):
+ __text += "observers['%s']['nodetype'] = '%s'\n" % (__obs, "String")
+ __text += 'observers[\'%s\'][\'String\'] = """%s"""\n' % (
+ __obs,
+ Templates.ObserverTemplates[__local["Template"]],
+ )
+ if __local["Info"] is not None:
+ __text += 'observers[\'%s\'][\'info\'] = """%s"""\n' % (
+ __obs,
+ __local["Info"],
+ )
else:
- __text += "observers['%s']['info'] = \"\"\"%s\"\"\"\n"%(__obs, __obs)
- __text += "observers['%s']['number'] = %s"%(__obs, self._numobservers)
- elif __command in ['UserPostAnalysis', 'setUserPostAnalysis']:
- __text = "#\n"
+ __text += 'observers[\'%s\'][\'info\'] = """%s"""\n' % (__obs, __obs)
+ __text += "observers['%s']['number'] = %s" % (__obs, self._numobservers)
+ elif __command in ["UserPostAnalysis", "setUserPostAnalysis"]:
+ __text = "#\n"
__text += "Analysis_config = {}\n"
- if __local['String'] is not None:
+ if __local["String"] is not None:
__text += "Analysis_config['From'] = 'String'\n"
- __text += "Analysis_config['Data'] = \"\"\"%s\"\"\"\n"%(__local['String'],)
- if __local['Script'] is not None:
+ __text += 'Analysis_config[\'Data\'] = """%s"""\n' % (
+ __local["String"],
+ )
+ if __local["Script"] is not None:
__text += "Analysis_config['From'] = 'Script'\n"
- __text += "Analysis_config['Data'] = \"\"\"%s\"\"\"\n"%(__local['Script'],)
- if __local['Template'] is not None and __local['Template'] in Templates.UserPostAnalysisTemplates:
+ __text += 'Analysis_config[\'Data\'] = """%s"""\n' % (
+ __local["Script"],
+ )
+ if (
+ __local["Template"] is not None
+ and __local["Template"] in Templates.UserPostAnalysisTemplates
+ ):
__text += "Analysis_config['From'] = 'String'\n"
- __text += "Analysis_config['Data'] = \"\"\"%s\"\"\"\n"%(Templates.UserPostAnalysisTemplates[__local['Template']],)
+ __text += 'Analysis_config[\'Data\'] = """%s"""\n' % (
+ Templates.UserPostAnalysisTemplates[__local["Template"]],
+ )
__text += "study_config['UserPostAnalysis'] = Analysis_config"
self.__UserPostAnalysisNotSet = False
elif __local is not None: # __keys is not None and
numpy.set_printoptions(precision=15, threshold=1000000, linewidth=1000 * 15)
- __text = "#\n"
- __text += "%s_config = {}\n"%__command
- __local.pop('self', '')
+ __text = "#\n"
+ __text += "%s_config = {}\n" % __command
+ __local.pop("self", "")
__to_be_removed = []
__vectorIsDataFile = False
__vectorIsScript = False
for __k, __v in __local.items():
if __k == "Concept":
continue
- if __k in ['ScalarSparseMatrix', 'DiagonalSparseMatrix', 'Matrix', 'OneFunction', 'ThreeFunctions'] \
- and 'Script' in __local \
- and __local['Script'] is not None:
+ if (
+ __k
+ in [
+ "ScalarSparseMatrix",
+ "DiagonalSparseMatrix",
+ "Matrix",
+ "OneFunction",
+ "ThreeFunctions",
+ ]
+ and "Script" in __local
+ and __local["Script"] is not None
+ ):
continue
- if __k in ['Vector', 'VectorSerie'] \
- and 'DataFile' in __local \
- and __local['DataFile'] is not None:
+ if (
+ __k in ["Vector", "VectorSerie"]
+ and "DataFile" in __local
+ and __local["DataFile"] is not None
+ ):
continue
- if __k == 'Parameters' and not (__command in ['AlgorithmParameters', 'SupplementaryParameters']):
+ if __k == "Parameters" and not (
+ __command in ["AlgorithmParameters", "SupplementaryParameters"]
+ ):
continue
- if __k == 'Algorithm':
- __text += "study_config['Algorithm'] = %s\n"%(repr(__v))
- elif __k == 'DataFile':
- __k = 'Vector'
- __f = 'DataFile'
+ if __k == "Algorithm":
+ __text += "study_config['Algorithm'] = %s\n" % (repr(__v))
+ elif __k == "DataFile":
+ __k = "Vector"
+ __f = "DataFile"
__v = "'" + repr(__v) + "'"
- for __lk in ['Vector', 'VectorSerie']:
+ for __lk in ["Vector", "VectorSerie"]:
if __lk in __local and __local[__lk]:
__k = __lk
- __text += "%s_config['Type'] = '%s'\n"%(__command, __k)
- __text += "%s_config['From'] = '%s'\n"%(__command, __f)
- __text += "%s_config['Data'] = %s\n"%(__command, __v)
+ __text += "%s_config['Type'] = '%s'\n" % (__command, __k)
+ __text += "%s_config['From'] = '%s'\n" % (__command, __f)
+ __text += "%s_config['Data'] = %s\n" % (__command, __v)
__text = __text.replace("''", "'")
__vectorIsDataFile = True
- elif __k == 'Script':
- __k = 'Vector'
- __f = 'Script'
+ elif __k == "Script":
+ __k = "Vector"
+ __f = "Script"
__v = "'" + repr(__v) + "'"
- for __lk in ['ScalarSparseMatrix', 'DiagonalSparseMatrix', 'Matrix']:
+ for __lk in [
+ "ScalarSparseMatrix",
+ "DiagonalSparseMatrix",
+ "Matrix",
+ ]:
if __lk in __local and __local[__lk]:
__k = __lk
if __command == "AlgorithmParameters":
__k = "Dict"
- if 'OneFunction' in __local and __local['OneFunction']:
- __text += "%s_ScriptWithOneFunction = {}\n"%(__command,)
- __text += "%s_ScriptWithOneFunction['Function'] = ['Direct', 'Tangent', 'Adjoint']\n"%(__command,)
- __text += "%s_ScriptWithOneFunction['Script'] = {}\n"%(__command,)
- __text += "%s_ScriptWithOneFunction['Script']['Direct'] = %s\n"%(__command, __v)
- __text += "%s_ScriptWithOneFunction['Script']['Tangent'] = %s\n"%(__command, __v)
- __text += "%s_ScriptWithOneFunction['Script']['Adjoint'] = %s\n"%(__command, __v)
- __text += "%s_ScriptWithOneFunction['DifferentialIncrement'] = 1e-06\n"%(__command,)
- __text += "%s_ScriptWithOneFunction['CenteredFiniteDifference'] = 0\n"%(__command,)
- __k = 'Function'
- __f = 'ScriptWithOneFunction'
- __v = '%s_ScriptWithOneFunction'%(__command,)
- if 'ThreeFunctions' in __local and __local['ThreeFunctions']:
- __text += "%s_ScriptWithFunctions = {}\n"%(__command,)
- __text += "%s_ScriptWithFunctions['Function'] = ['Direct', 'Tangent', 'Adjoint']\n"%(__command,)
- __text += "%s_ScriptWithFunctions['Script'] = {}\n"%(__command,)
- __text += "%s_ScriptWithFunctions['Script']['Direct'] = %s\n"%(__command, __v)
- __text += "%s_ScriptWithFunctions['Script']['Tangent'] = %s\n"%(__command, __v)
- __text += "%s_ScriptWithFunctions['Script']['Adjoint'] = %s\n"%(__command, __v)
- __k = 'Function'
- __f = 'ScriptWithFunctions'
- __v = '%s_ScriptWithFunctions'%(__command,)
- __text += "%s_config['Type'] = '%s'\n"%(__command, __k)
- __text += "%s_config['From'] = '%s'\n"%(__command, __f)
- __text += "%s_config['Data'] = %s\n"%(__command, __v)
+ if "OneFunction" in __local and __local["OneFunction"]:
+ __text += "%s_ScriptWithOneFunction = {}\n" % (__command,)
+ __text += (
+ "%s_ScriptWithOneFunction['Function'] = ['Direct', 'Tangent', 'Adjoint']\n"
+ % (__command,)
+ )
+ __text += "%s_ScriptWithOneFunction['Script'] = {}\n" % (
+ __command,
+ )
+ __text += (
+ "%s_ScriptWithOneFunction['Script']['Direct'] = %s\n"
+ % (__command, __v)
+ )
+ __text += (
+ "%s_ScriptWithOneFunction['Script']['Tangent'] = %s\n"
+ % (__command, __v)
+ )
+ __text += (
+ "%s_ScriptWithOneFunction['Script']['Adjoint'] = %s\n"
+ % (__command, __v)
+ )
+ __text += (
+ "%s_ScriptWithOneFunction['DifferentialIncrement'] = 1e-06\n"
+ % (__command,)
+ )
+ __text += (
+ "%s_ScriptWithOneFunction['CenteredFiniteDifference'] = 0\n"
+ % (__command,)
+ )
+ __k = "Function"
+ __f = "ScriptWithOneFunction"
+ __v = "%s_ScriptWithOneFunction" % (__command,)
+ if "ThreeFunctions" in __local and __local["ThreeFunctions"]:
+ __text += "%s_ScriptWithFunctions = {}\n" % (__command,)
+ __text += (
+ "%s_ScriptWithFunctions['Function'] = ['Direct', 'Tangent', 'Adjoint']\n"
+ % (__command,)
+ )
+ __text += "%s_ScriptWithFunctions['Script'] = {}\n" % (
+ __command,
+ )
+ __text += (
+ "%s_ScriptWithFunctions['Script']['Direct'] = %s\n"
+ % (__command, __v)
+ )
+ __text += (
+ "%s_ScriptWithFunctions['Script']['Tangent'] = %s\n"
+ % (__command, __v)
+ )
+ __text += (
+ "%s_ScriptWithFunctions['Script']['Adjoint'] = %s\n"
+ % (__command, __v)
+ )
+ __k = "Function"
+ __f = "ScriptWithFunctions"
+ __v = "%s_ScriptWithFunctions" % (__command,)
+ __text += "%s_config['Type'] = '%s'\n" % (__command, __k)
+ __text += "%s_config['From'] = '%s'\n" % (__command, __f)
+ __text += "%s_config['Data'] = %s\n" % (__command, __v)
__text = __text.replace("''", "'")
__vectorIsScript = True
- elif __k in ('Stored', 'Checked', 'ColMajor', 'CrossObs', 'InputFunctionAsMulti', 'nextStep'):
+ elif __k in (
+ "Stored",
+ "Checked",
+ "ColMajor",
+ "CrossObs",
+ "InputFunctionAsMulti",
+ "nextStep",
+ ):
if bool(__v):
- __text += "%s_config['%s'] = '%s'\n"%(__command, __k, int(bool(__v)))
- elif __k in ('PerformanceProfile', 'SyncObs', 'noDetails'):
+ __text += "%s_config['%s'] = '%s'\n" % (
+ __command,
+ __k,
+ int(bool(__v)),
+ )
+ elif __k in ("PerformanceProfile", "SyncObs", "noDetails"):
if not bool(__v):
- __text += "%s_config['%s'] = '%s'\n"%(__command, __k, int(bool(__v)))
+ __text += "%s_config['%s'] = '%s'\n" % (
+ __command,
+ __k,
+ int(bool(__v)),
+ )
else:
- if __k == 'Vector' and __vectorIsScript:
+ if __k == "Vector" and __vectorIsScript:
continue
- if __k == 'Vector' and __vectorIsDataFile:
+ if __k == "Vector" and __vectorIsDataFile:
continue
- if __k == 'Parameters':
+ if __k == "Parameters":
__k = "Dict"
if isinstance(__v, Persistence.Persistence):
__v = __v.values()
if callable(__v):
- __text = self._missing%__v.__name__ + __text
+ __text = self._missing % __v.__name__ + __text
if isinstance(__v, dict):
for val in __v.values():
if callable(val):
- __text = self._missing%val.__name__ + __text
- __text += "%s_config['Type'] = '%s'\n"%(__command, __k)
- __text += "%s_config['From'] = '%s'\n"%(__command, 'String')
- __text += "%s_config['Data'] = \"\"\"%s\"\"\"\n"%(__command, repr(__v))
- __text += "study_config['%s'] = %s_config"%(__command, __command)
+ __text = self._missing % val.__name__ + __text
+ __text += "%s_config['Type'] = '%s'\n" % (__command, __k)
+ __text += "%s_config['From'] = '%s'\n" % (__command, "String")
+ __text += '%s_config[\'Data\'] = """%s"""\n' % (
+ __command,
+ repr(__v),
+ )
+ __text += "study_config['%s'] = %s_config" % (__command, __command)
numpy.set_printoptions(precision=8, threshold=1000, linewidth=75)
if __switchoff:
self._switchoff = True
self._addLine("#")
self._addLine("Analysis_config = {}")
self._addLine("Analysis_config['From'] = 'String'")
- self._addLine("Analysis_config['Data'] = \"\"\"import numpy")
+ self._addLine('Analysis_config[\'Data\'] = """import numpy')
self._addLine("xa=ADD.get('Analysis')[-1]")
- self._addLine("print('Analysis:',xa)\"\"\"")
+ self._addLine('print(\'Analysis:\',xa)"""')
self._addLine("study_config['UserPostAnalysis'] = Analysis_config")
def __loadVariablesByScript(self):
__ExecVariables = {} # Necessaire pour recuperer la variable
exec("\n".join(self._lineSerie), __ExecVariables)
- study_config = __ExecVariables['study_config']
+ study_config = __ExecVariables["study_config"]
# Pour Python 3 : self.__hasAlgorithm = bool(study_config['Algorithm'])
- if 'Algorithm' in study_config:
+ if "Algorithm" in study_config:
self.__hasAlgorithm = True
else:
self.__hasAlgorithm = False
- if not self.__hasAlgorithm and \
- "AlgorithmParameters" in study_config and \
- isinstance(study_config['AlgorithmParameters'], dict) and \
- "From" in study_config['AlgorithmParameters'] and \
- "Data" in study_config['AlgorithmParameters'] and \
- study_config['AlgorithmParameters']['From'] == 'Script':
- __asScript = study_config['AlgorithmParameters']['Data']
- __var = ImportFromScript(__asScript).getvalue( "Algorithm" )
- __text = "#\nstudy_config['Algorithm'] = '%s'"%(__var,)
+ if (
+ not self.__hasAlgorithm
+ and "AlgorithmParameters" in study_config
+ and isinstance(study_config["AlgorithmParameters"], dict)
+ and "From" in study_config["AlgorithmParameters"]
+ and "Data" in study_config["AlgorithmParameters"]
+ and study_config["AlgorithmParameters"]["From"] == "Script"
+ ):
+ __asScript = study_config["AlgorithmParameters"]["Data"]
+ __var = ImportFromScript(__asScript).getvalue("Algorithm")
+ __text = "#\nstudy_config['Algorithm'] = '%s'" % (__var,)
self._addLine(__text)
- if self.__hasAlgorithm and \
- "AlgorithmParameters" in study_config and \
- isinstance(study_config['AlgorithmParameters'], dict) and \
- "From" not in study_config['AlgorithmParameters'] and \
- "Data" not in study_config['AlgorithmParameters']:
- __text = "#\n"
+ if (
+ self.__hasAlgorithm
+ and "AlgorithmParameters" in study_config
+ and isinstance(study_config["AlgorithmParameters"], dict)
+ and "From" not in study_config["AlgorithmParameters"]
+ and "Data" not in study_config["AlgorithmParameters"]
+ ):
+ __text = "#\n"
__text += "AlgorithmParameters_config['Type'] = 'Dict'\n"
__text += "AlgorithmParameters_config['From'] = 'String'\n"
__text += "AlgorithmParameters_config['Data'] = '{}'\n"
self._addLine(__text)
- if 'SupplementaryParameters' in study_config and \
- isinstance(study_config['SupplementaryParameters'], dict) and \
- "From" in study_config['SupplementaryParameters'] and \
- study_config['SupplementaryParameters']["From"] == 'String' and \
- "Data" in study_config['SupplementaryParameters']:
- __dict = eval(study_config['SupplementaryParameters']["Data"])
- if 'ExecuteInContainer' in __dict:
- self._addLine("#\nstudy_config['ExecuteInContainer'] = '%s'"%__dict['ExecuteInContainer'])
+ if (
+ "SupplementaryParameters" in study_config
+ and isinstance(study_config["SupplementaryParameters"], dict)
+ and "From" in study_config["SupplementaryParameters"]
+ and study_config["SupplementaryParameters"]["From"] == "String"
+ and "Data" in study_config["SupplementaryParameters"]
+ ):
+ __dict = eval(study_config["SupplementaryParameters"]["Data"])
+ if "ExecuteInContainer" in __dict:
+ self._addLine(
+ "#\nstudy_config['ExecuteInContainer'] = '%s'"
+ % __dict["ExecuteInContainer"]
+ )
else:
self._addLine("#\nstudy_config['ExecuteInContainer'] = 'No'")
- if 'StudyType' in __dict:
- self._addLine("#\nstudy_config['StudyType'] = '%s'"%__dict['StudyType'])
- if 'StudyType' in __dict and __dict['StudyType'] != "ASSIMILATION_STUDY":
+ if "StudyType" in __dict:
+ self._addLine(
+ "#\nstudy_config['StudyType'] = '%s'" % __dict["StudyType"]
+ )
+ if "StudyType" in __dict and __dict["StudyType"] != "ASSIMILATION_STUDY":
self.__UserPostAnalysisNotSet = False
del study_config
+
class _YACSViewer(GenericCaseViewer):
"""
Etablissement des commandes d'un cas YACS (Cas->SCD->YACS)
"""
+
__slots__ = ("__internalSCD", "_append")
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entete"
GenericCaseViewer.__init__(self, __name, __objname, __content, __object)
self.__internalSCD = _SCDViewer(__name, __objname, __content, __object)
- self._append = self.__internalSCD._append
+ self._append = self.__internalSCD._append
def dump(self, __filename=None, __upa=None):
"Restitution normalisée des commandes"
if __filename is None:
raise ValueError("A file name has to be given for YACS XML output.")
else:
- __file = os.path.abspath(__filename)
+ __file = os.path.abspath(__filename)
if os.path.isfile(__file) or os.path.islink(__file):
os.remove(__file)
# -----
if not lpi.has_salome or not lpi.has_adao:
raise ImportError(
- "Unable to get SALOME (%s) or ADAO (%s) environnement for YACS conversion.\n"%(lpi.has_salome, lpi.has_adao) + \
- "Please load the right SALOME environnement before trying to use it.")
+                "Unable to get SALOME (%s) or ADAO (%s) environment for YACS conversion.\n"
+                % (lpi.has_salome, lpi.has_adao)
+                + "Please load the right SALOME environment before trying to use it."
+ )
else:
from daYacsSchemaCreator.run import create_schema_from_content
# -----
create_schema_from_content(__SCDdump, __file)
# -----
if not os.path.exists(__file):
- __msg = "An error occured during the ADAO YACS Schema build for\n"
+            __msg = "An error occurred during the ADAO YACS Schema build for\n"
__msg += "the target output file:\n"
- __msg += " %s\n"%__file
+ __msg += " %s\n" % __file
__msg += "See errors details in your launching terminal log.\n"
raise ValueError(__msg)
# -----
__fid.close()
return __text
+
# ==============================================================================
class _ReportViewer(GenericCaseViewer):
"""
Partie commune de restitution simple
"""
- __slots__ = ("_r")
+
+ __slots__ = ("_r",)
def __init__(self, __name="", __objname="case", __content=None, __object=None):
"Initialisation et enregistrement de l'entete"
self._r.append("ADAO Study report", "title")
else:
self._r.append(str(self._name), "title")
- self._r.append("Summary build with %s version %s"%(version.name, version.version))
+ self._r.append(
+            "Summary built with %s version %s" % (version.name, version.version)
+ )
if self._content is not None:
for command in self._content:
self._append(*command)
- def _append(self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False):
+ def _append(
+ self, __command=None, __keys=None, __local=None, __pre=None, __switchoff=False
+ ):
"Transformation d'une commande individuelle en un enregistrement"
if __command is not None and __keys is not None and __local is not None:
if __command in ("set", "get") and "Concept" in __keys:
__command = __local["Concept"]
- __text = "<i>%s</i> command has been set"%str(__command.replace("set", ""))
+ __text = "<i>%s</i> command has been set" % str(
+ __command.replace("set", "")
+ )
__ktext = ""
for k in __keys:
- if k not in __local: continue # noqa: E701
+ if k not in __local:
+ continue
__v = __local[k]
- if __v is None: continue # noqa: E701
- if k == "Checked" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "ColMajor" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "CrossObs" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "SyncObs" and __v: continue # noqa: E241,E271,E272,E701
- if k == "InputFunctionAsMulti" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "PerformanceProfile" and __v: continue # noqa: E241,E271,E272,E701
- if k == "Stored" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "nextStep" and not __v: continue # noqa: E241,E271,E272,E701
- if k == "noDetails": continue # noqa: E241,E271,E272,E701
- if k == "Concept": continue # noqa: E241,E271,E272,E701
- if k == "self": continue # noqa: E241,E271,E272,E701
+ if __v is None:
+ continue
+ if k == "Checked" and not __v:
+ continue
+ if k == "ColMajor" and not __v:
+ continue
+ if k == "CrossObs" and not __v:
+ continue
+ if k == "SyncObs" and __v:
+ continue
+ if k == "InputFunctionAsMulti" and not __v:
+ continue
+ if k == "PerformanceProfile" and __v:
+ continue
+ if k == "Stored" and not __v:
+ continue
+ if k == "nextStep" and not __v:
+ continue
+ if k == "noDetails":
+ continue
+ if k == "Concept":
+ continue
+ if k == "self":
+ continue
if isinstance(__v, Persistence.Persistence):
__v = __v.values()
- numpy.set_printoptions(precision=15, threshold=1000000, linewidth=1000 * 15)
- __ktext += "\n %s = %s,"%(k, repr(__v))
+ numpy.set_printoptions(
+ precision=15, threshold=1000000, linewidth=1000 * 15
+ )
+ __ktext += "\n %s = %s," % (k, repr(__v))
numpy.set_printoptions(precision=8, threshold=1000, linewidth=75)
if len(__ktext) > 0:
__text += " with values:" + __ktext
"Enregistrement du final"
raise NotImplementedError()
+
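The `__slots__ = ("_r")` correction in the hunk above is worth a note: parentheses alone do not make a tuple, so the original value was the plain string `"_r"` (which Python happens to accept as a single slot name, so it worked by accident). A minimal standalone illustration:

```python
class WithoutComma:
    __slots__ = ("_r")   # parentheses only: this is the str "_r", not a tuple

class WithComma:
    __slots__ = ("_r",)  # the trailing comma makes a one-element tuple

# Both end up defining a single slot named "_r", but only the tuple
# form states the intent unambiguously (and extends cleanly to more slots).
assert isinstance(WithoutComma.__slots__, str)
assert isinstance(WithComma.__slots__, tuple)
```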
class _SimpleReportInRstViewer(_ReportViewer):
"""
Restitution simple en RST
"""
+
__slots__ = ()
def _finalize(self, __upa=None):
self._lineSerie.append(Reporting.ReportViewInRst(self._r).__str__())
+
class _SimpleReportInHtmlViewer(_ReportViewer):
"""
Restitution simple en HTML
"""
+
__slots__ = ()
def _finalize(self, __upa=None):
self._lineSerie.append(Reporting.ReportViewInHtml(self._r).__str__())
+
class _SimpleReportInPlainTxtViewer(_ReportViewer):
"""
Restitution simple en TXT
"""
+
__slots__ = ()
def _finalize(self, __upa=None):
self._lineSerie.append(Reporting.ReportViewInPlainTxt(self._r).__str__())
+
# ==============================================================================
class ImportFromScript(object):
"""
Obtention d'une variable nommee depuis un fichier script importé
"""
+
__slots__ = ("__basename", "__filenspace", "__filestring")
def __init__(self, __filename=None):
"Verifie l'existence et importe le script"
if __filename is None:
- raise ValueError("The name of the file, containing the variable to be read, has to be specified.")
+ raise ValueError(
+ "The name of the file containing the variable to be read has to be specified."
+ )
__fullname, __i = __filename, 0
while not os.path.exists(__fullname) and __i < len(sys.path):
# Correction avec le sys.path si nécessaire
__filename = __fullname
else:
raise ValueError(
- "The file containing the variable to be imported doesn't seem to" + \
- " exist. Please check the file. The given file name is:\n \"%s\""%str(__filename))
- if os.path.dirname(__filename) != '':
+ "The file containing the variable to be imported doesn't seem to"
+ + ' exist. Please check the file. The given file name is:\n "%s"'
+ % str(__filename)
+ )
+ if os.path.dirname(__filename) != "":
sys.path.insert(0, os.path.dirname(__filename))
__basename = os.path.basename(__filename).rstrip(".py")
else:
__basename = __filename.rstrip(".py")
- PlatformInfo.checkFileNameImportability( __basename + ".py" )
+ PlatformInfo.checkFileNameImportability(__basename + ".py")
self.__basename = __basename
try:
self.__filenspace = __import__(__basename, globals(), locals(), [])
except NameError:
self.__filenspace = ""
- with open(__filename, 'r') as fid:
+ with open(__filename, "r") as fid:
self.__filestring = fid.read()
- def getvalue(self, __varname=None, __synonym=None ):
+ def getvalue(self, __varname=None, __synonym=None):
"Renvoie la variable demandee par son nom ou son synonyme"
if __varname is None:
- raise ValueError("The name of the variable to be read has to be specified. Please check the content of the file and the syntax.")
+ raise ValueError(
+ "The name of the variable to be read has to be specified."
+ + " Please check the content of the file and the syntax."
+ )
if not hasattr(self.__filenspace, __varname):
if __synonym is None:
raise ValueError(
- "The imported script file \"%s\""%(str(self.__basename) + ".py",) + \
- " doesn't contain the mandatory variable \"%s\""%(__varname,) + \
- " to be read. Please check the content of the file and the syntax.")
+ 'The imported script file "%s"' % (str(self.__basename) + ".py",)
+ + ' doesn\'t contain the mandatory variable "%s"' % (__varname,)
+ + " to be read. Please check the content of the file and the syntax."
+ )
elif not hasattr(self.__filenspace, __synonym):
raise ValueError(
- "The imported script file \"%s\""%(str(self.__basename) + ".py",) + \
- " doesn't contain the mandatory variable \"%s\""%(__synonym,) + \
- " to be read. Please check the content of the file and the syntax.")
+ 'The imported script file "%s"' % (str(self.__basename) + ".py",)
+ + ' doesn\'t contain the mandatory variable "%s"' % (__synonym,)
+ + " to be read. Please check the content of the file and the syntax."
+ )
else:
return getattr(self.__filenspace, __synonym)
else:
"Renvoie le script complet"
return self.__filestring
+
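`ImportFromScript` above retrieves a named variable from a user script by importing the file as a module, after adjusting `sys.path`. A minimal sketch of the same mechanism (the file name `user_data.py` and the variable `Background` are illustrative, not part of ADAO):

```python
import importlib
import os
import sys
import tempfile

# Write a throwaway user script, put its directory on sys.path, import it,
# then fetch a variable by name -- the core of what ImportFromScript does.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "user_data.py"), "w") as fid:
        fid.write("Background = [0.0, 1.0, 2.0]\n")
    sys.path.insert(0, d)
    try:
        module = importlib.import_module("user_data")
        value = getattr(module, "Background")
    finally:
        sys.path.pop(0)

assert value == [0.0, 1.0, 2.0]
```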
# ==============================================================================
class ImportDetector(object):
"""
Détection des caractéristiques de fichiers ou objets en entrée
"""
+
__slots__ = ("__url", "__usr", "__root", "__end")
def __enter__(self):
self.__usr = str(UserMime).lower()
(self.__root, self.__end) = os.path.splitext(self.__url)
#
- mimetypes.add_type('application/numpy.npy', '.npy')
- mimetypes.add_type('application/numpy.npz', '.npz')
- mimetypes.add_type('application/dymola.sdf', '.sdf')
+ mimetypes.add_type("application/numpy.npy", ".npy")
+ mimetypes.add_type("application/numpy.npz", ".npz")
+ mimetypes.add_type("application/dymola.sdf", ".sdf")
if sys.platform.startswith("win"):
- mimetypes.add_type('text/plain', '.txt')
- mimetypes.add_type('text/csv', '.csv')
- mimetypes.add_type('text/tab-separated-values', '.tsv')
+ mimetypes.add_type("text/plain", ".txt")
+ mimetypes.add_type("text/csv", ".csv")
+ mimetypes.add_type("text/tab-separated-values", ".tsv")
# File related tests
# ------------------
def raise_error_if_not_local_file(self):
if self.is_not_local_file():
- raise ValueError("The name or the url of the file object doesn't seem to exist. The given name is:\n \"%s\""%str(self.__url))
+ raise ValueError(
+ 'The name or the url of the file object doesn\'t seem to exist. The given name is:\n "%s"'
+ % str(self.__url)
+ )
else:
return False
def raise_error_if_not_local_dir(self):
if self.is_not_local_dir():
- raise ValueError("The name or the url of the directory object doesn't seem to exist. The given name is:\n \"%s\""%str(self.__url))
+ raise ValueError(
+ 'The name or the url of the directory object doesn\'t seem to exist. The given name is:\n "%s"'
+ % str(self.__url)
+ )
else:
return False
def get_extension(self):
return self.__end
+
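`ImportDetector` registers the extra extensions with the standard `mimetypes` registry before guessing a file's type from its name. The registration/lookup round trip, in isolation:

```python
import mimetypes

# Register a custom extension, as ImportDetector does in __enter__,
# then let the registry resolve it from a plain file name.
mimetypes.add_type("application/numpy.npy", ".npy")
mime, encoding = mimetypes.guess_type("some_array.npy")
assert mime == "application/numpy.npy"
```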
class ImportFromFile(object):
"""
Obtention de variables disrétisées en 1D, définies par une ou des variables
textes doivent présenter en première ligne (hors commentaire ou ligne vide)
les noms des variables de colonnes. Les commentaires commencent par un "#".
"""
+
__slots__ = (
- "_filename", "_colnames", "_colindex", "_varsline", "_format",
- "_delimiter", "_skiprows", "__url", "__filestring", "__header",
- "__allowvoid", "__binaryformats", "__supportedformats")
+ "_filename",
+ "_colnames",
+ "_colindex",
+ "_varsline",
+ "_format",
+ "_delimiter",
+ "_skiprows",
+ "__url",
+ "__filestring",
+ "__header",
+ "__allowvoid",
+ "__binaryformats",
+ "__supportedformats",
+ )
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
- def __init__(self, Filename=None, ColNames=None, ColIndex=None, Format="Guess", AllowVoidNameList=True):
+ def __init__(
+ self,
+ Filename=None,
+ ColNames=None,
+ ColIndex=None,
+ Format="Guess",
+ AllowVoidNameList=True,
+ ):
"""
Verifie l'existence et les informations de définition du fichier. Les
noms de colonnes ou de variables sont ignorées si le format ne permet
"application/numpy.npz",
"application/dymola.sdf",
)
- self.__url = ImportDetector( Filename, Format)
+ self.__url = ImportDetector(Filename, Format)
self.__url.raise_error_if_not_local_file()
self._filename = self.__url.get_absolute_name()
- PlatformInfo.checkFileNameConformity( self._filename )
+ PlatformInfo.checkFileNameConformity(self._filename)
#
self._format = self.__url.get_comprehensive_mime()
#
#
self.__allowvoid = bool(AllowVoidNameList)
- def __getentete(self, __nblines = 3):
+ def __getentete(self, __nblines=3):
"Lit l'entête du fichier pour trouver la définition des variables"
# La première ligne non vide non commentée est toujours considérée
# porter les labels de colonne, donc pas des valeurs
if self._format in self.__binaryformats:
pass
else:
- with open(self._filename, 'r') as fid:
+ with open(self._filename, "r") as fid:
__line = fid.readline().strip()
while "#" in __line or len(__line) < 1:
__header.append(__line)
__header.append(fid.readline())
return (__header, __varsline, __skiprows)
- def __getindices(self, __colnames, __colindex, __delimiter=None ):
+ def __getindices(self, __colnames, __colindex, __delimiter=None):
"Indices de colonnes correspondants à l'index et aux variables"
if __delimiter is None:
- __varserie = self._varsline.strip('#').strip().split()
+ __varserie = self._varsline.strip("#").strip().split()
else:
- __varserie = self._varsline.strip('#').strip().split(str(__delimiter))
+ __varserie = self._varsline.strip("#").strip().split(str(__delimiter))
#
if __colnames is not None:
__usecols = []
if self.__allowvoid:
__usecols = None
else:
- raise ValueError("Can not found any column corresponding to the required names %s"%(__colnames,))
+ raise ValueError(
+ "Cannot find any column corresponding to the required names %s"
+ % (__colnames,)
+ )
else:
__usecols = None
#
def getsupported(self):
self.__supportedformats = {}
- self.__supportedformats["text/plain"] = True
- self.__supportedformats["text/csv"] = True
+ self.__supportedformats["text/plain"] = True
+ self.__supportedformats["text/csv"] = True
self.__supportedformats["text/tab-separated-values"] = True
- self.__supportedformats["application/numpy.npy"] = True
- self.__supportedformats["application/numpy.npz"] = True
- self.__supportedformats["application/dymola.sdf"] = lpi.has_sdf
+ self.__supportedformats["application/numpy.npy"] = True
+ self.__supportedformats["application/numpy.npz"] = True
+ self.__supportedformats["application/dymola.sdf"] = lpi.has_sdf
return self.__supportedformats
- def getvalue(self, ColNames=None, ColIndex=None ):
+ def getvalue(self, ColNames=None, ColIndex=None):
"Renvoie la ou les variables demandées par la liste de leurs noms"
# Uniquement si mise à jour
if ColNames is not None:
self._colnames = __allcolumns.files
for nom in self._colnames: # Si une variable demandée n'existe pas
if nom not in __allcolumns.files:
- self._colnames = tuple( __allcolumns.files )
+ self._colnames = tuple(__allcolumns.files)
for nom in self._colnames:
if nom in __allcolumns.files:
if __columns is not None:
# Attention : toutes les variables doivent avoir la même taille
- __columns = numpy.vstack((__columns, numpy.reshape(__allcolumns[nom], (1, -1))))
+ __columns = numpy.vstack(
+ (__columns, numpy.reshape(__allcolumns[nom], (1, -1)))
+ )
else:
# Première colonne
__columns = numpy.reshape(__allcolumns[nom], (1, -1))
if self._colindex is not None and self._colindex in __allcolumns.files:
- __index = numpy.array(numpy.reshape(__allcolumns[self._colindex], (1, -1)), dtype=bytes)
+ __index = numpy.array(
+ numpy.reshape(__allcolumns[self._colindex], (1, -1)),
+ dtype=bytes,
+ )
elif self._format == "text/plain":
__usecols, __useindex = self.__getindices(self._colnames, self._colindex)
- __columns = numpy.loadtxt(self._filename, usecols = __usecols, skiprows=self._skiprows)
+ __columns = numpy.loadtxt(
+ self._filename, usecols=__usecols, skiprows=self._skiprows
+ )
if __useindex is not None:
- __index = numpy.loadtxt(self._filename, dtype = bytes, usecols = (__useindex,), skiprows=self._skiprows)
+ __index = numpy.loadtxt(
+ self._filename,
+ dtype=bytes,
+ usecols=(__useindex,),
+ skiprows=self._skiprows,
+ )
if __usecols is None: # Si une variable demandée n'existe pas
self._colnames = None
#
elif self._format == "application/dymola.sdf" and lpi.has_sdf:
import sdf
+
__content = sdf.load(self._filename)
__columns = None
if self._colnames is None:
- self._colnames = [__content.datasets[i].name for i in range(len(__content.datasets))]
+ self._colnames = [
+ __content.datasets[i].name for i in range(len(__content.datasets))
+ ]
for nom in self._colnames:
if nom in __content:
if __columns is not None:
# Attention : toutes les variables doivent avoir la même taille
- __columns = numpy.vstack((__columns, numpy.reshape(__content[nom].data, (1, -1))))
+ __columns = numpy.vstack(
+ (__columns, numpy.reshape(__content[nom].data, (1, -1)))
+ )
else:
# Première colonne
__columns = numpy.reshape(__content[nom].data, (1, -1))
__index = __content[self._colindex].data
#
elif self._format == "text/csv":
- __usecols, __useindex = self.__getindices(self._colnames, self._colindex, self._delimiter)
- __columns = numpy.loadtxt(self._filename, usecols = __usecols, delimiter = self._delimiter, skiprows=self._skiprows)
+ __usecols, __useindex = self.__getindices(
+ self._colnames, self._colindex, self._delimiter
+ )
+ __columns = numpy.loadtxt(
+ self._filename,
+ usecols=__usecols,
+ delimiter=self._delimiter,
+ skiprows=self._skiprows,
+ )
if __useindex is not None:
- __index = numpy.loadtxt(self._filename, dtype = bytes, usecols = (__useindex,), delimiter = self._delimiter, skiprows=self._skiprows)
+ __index = numpy.loadtxt(
+ self._filename,
+ dtype=bytes,
+ usecols=(__useindex,),
+ delimiter=self._delimiter,
+ skiprows=self._skiprows,
+ )
if __usecols is None: # Si une variable demandée n'existe pas
self._colnames = None
#
elif self._format == "text/tab-separated-values":
- __usecols, __useindex = self.__getindices(self._colnames, self._colindex, self._delimiter)
- __columns = numpy.loadtxt(self._filename, usecols = __usecols, delimiter = self._delimiter, skiprows=self._skiprows)
+ __usecols, __useindex = self.__getindices(
+ self._colnames, self._colindex, self._delimiter
+ )
+ __columns = numpy.loadtxt(
+ self._filename,
+ usecols=__usecols,
+ delimiter=self._delimiter,
+ skiprows=self._skiprows,
+ )
if __useindex is not None:
- __index = numpy.loadtxt(self._filename, dtype = bytes, usecols = (__useindex,), delimiter = self._delimiter, skiprows=self._skiprows)
+ __index = numpy.loadtxt(
+ self._filename,
+ dtype=bytes,
+ usecols=(__useindex,),
+ delimiter=self._delimiter,
+ skiprows=self._skiprows,
+ )
if __usecols is None: # Si une variable demandée n'existe pas
self._colnames = None
else:
- raise ValueError("Unkown file format \"%s\" or no reader available"%self._format)
+ raise ValueError(
+ 'Unknown file format "%s" or no reader available' % self._format
+ )
if __columns is None:
__columns = ()
return value.decode()
except ValueError:
return value
+
if __index is not None:
__index = tuple([toString(v) for v in __index])
#
if self._format in self.__binaryformats:
return ""
else:
- with open(self._filename, 'r') as fid:
+ with open(self._filename, "r") as fid:
return fid.read()
def getformat(self):
return self._format
+
class ImportScalarLinesFromFile(ImportFromFile):
"""
Importation de fichier contenant des variables scalaires nommées. Le
Seule la méthode "getvalue" est changée.
"""
+
__slots__ = ()
def __enter__(self):
def __init__(self, Filename=None, ColNames=None, ColIndex=None, Format="Guess"):
ImportFromFile.__init__(self, Filename, ColNames, ColIndex, Format)
if self._format not in ["text/plain", "text/csv", "text/tab-separated-values"]:
- raise ValueError("Unkown file format \"%s\""%self._format)
+ raise ValueError('Unknown file format "%s"' % self._format)
- def getvalue(self, VarNames = None, HeaderNames=()):
+ def getvalue(self, VarNames=None, HeaderNames=()):
"Renvoie la ou les variables demandées par la liste de leurs noms"
if VarNames is not None:
__varnames = tuple(VarNames)
else:
__varnames = None
#
- if "Name" in self._varsline and "Value" in self._varsline and "Minimum" in self._varsline and "Maximum" in self._varsline:
+ if (
+ "Name" in self._varsline
+ and "Value" in self._varsline
+ and "Minimum" in self._varsline
+ and "Maximum" in self._varsline
+ ):
__ftype = "NamValMinMax"
- __dtypes = {'names' : ('Name', 'Value', 'Minimum', 'Maximum'), # noqa: E203
- 'formats': ('S128', 'g', 'g', 'g')}
- __usecols = (0, 1, 2, 3)
-
- def __replaceNoneN( s ):
- if s.strip() in (b'None', 'None'):
+ __dtypes = {
+ "names": ("Name", "Value", "Minimum", "Maximum"),
+ "formats": ("S128", "g", "g", "g"),
+ }
+ __usecols = (0, 1, 2, 3)
+
+ def __replaceNoneN(s):
+ if s.strip() in (b"None", "None"):
return -numpy.inf
else:
return s
- def __replaceNoneP( s ):
- if s.strip() in (b'None', 'None'):
+ def __replaceNoneP(s):
+ if s.strip() in (b"None", "None"):
return numpy.inf
else:
return s
+
__converters = {2: __replaceNoneN, 3: __replaceNoneP}
- elif "Name" in self._varsline and "Value" in self._varsline and ("Minimum" not in self._varsline or "Maximum" not in self._varsline):
+ elif (
+ "Name" in self._varsline
+ and "Value" in self._varsline
+ and ("Minimum" not in self._varsline or "Maximum" not in self._varsline)
+ ):
__ftype = "NamVal"
- __dtypes = {'names' : ('Name', 'Value'), # noqa: E203
- 'formats': ('S128', 'g')}
+ __dtypes = {
+ "names": ("Name", "Value"),
+ "formats": ("S128", "g"),
+ }
__converters = None
- __usecols = (0, 1)
- elif len(HeaderNames) > 0 and numpy.all([kw in self._varsline for kw in HeaderNames]):
+ __usecols = (0, 1)
+ elif len(HeaderNames) > 0 and numpy.all(
+ [kw in self._varsline for kw in HeaderNames]
+ ):
__ftype = "NamLotOfVals"
- __dtypes = {'names' : HeaderNames, # noqa: E203
- 'formats': tuple(['S128',] + ['g'] * (len(HeaderNames) - 1))}
- __usecols = tuple(range(len(HeaderNames)))
-
- def __replaceNone( s ):
- if s.strip() in (b'None', 'None'):
+ __dtypes = {
+ "names": HeaderNames,
+ "formats": tuple(
+ [
+ "S128",
+ ]
+ + ["g"] * (len(HeaderNames) - 1)
+ ),
+ }
+ __usecols = tuple(range(len(HeaderNames)))
+
+ def __replaceNone(s):
+ if s.strip() in (b"None", "None"):
return numpy.nan
else:
return s
+
__converters = dict()
for i in range(1, len(HeaderNames)):
__converters[i] = __replaceNone
else:
- raise ValueError("Can not find names of columns for initial values. Wrong first line is:\n \"%s\""%self._varsline)
+ raise ValueError(
+ "Cannot find names of columns for initial values."
+ + ' Wrong first line is:\n "%s"' % self._varsline
+ )
#
if self._format == "text/plain":
__content = numpy.loadtxt(
self._filename,
- dtype = __dtypes,
- usecols = __usecols,
- skiprows = self._skiprows,
- converters = __converters,
- ndmin = 1,
+ dtype=__dtypes,
+ usecols=__usecols,
+ skiprows=self._skiprows,
+ converters=__converters,
+ ndmin=1,
)
elif self._format in ["text/csv", "text/tab-separated-values"]:
__content = numpy.loadtxt(
self._filename,
- dtype = __dtypes,
- usecols = __usecols,
- skiprows = self._skiprows,
- converters = __converters,
- delimiter = self._delimiter,
- ndmin = 1,
+ dtype=__dtypes,
+ usecols=__usecols,
+ skiprows=self._skiprows,
+ converters=__converters,
+ delimiter=self._delimiter,
+ ndmin=1,
)
else:
- raise ValueError("Unkown file format \"%s\""%self._format)
+ raise ValueError('Unknown file format "%s"' % self._format)
#
__names, __thevalue, __bounds = [], [], []
for sub in __content:
__thevalue.append(va)
__bounds.append((mi, ma))
#
- __names = tuple(__names)
+ __names = tuple(__names)
__thevalue = numpy.array(__thevalue)
- __bounds = tuple(__bounds)
+ __bounds = tuple(__bounds)
#
return (__names, __thevalue, __bounds)
+
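The `NamValMinMax` branch above reads a `Name Value Minimum Maximum` table with `numpy.loadtxt`, using per-column converters to map the literal token `None` to an open bound. A self-contained sketch of that parsing (the sample data is made up; ADAO's own reader additionally handles CSV/TSV delimiters and bytes input):

```python
import io

import numpy as np

# Hypothetical "Name Value Minimum Maximum" content, in the layout
# ImportScalarLinesFromFile expects on its first header line.
text = "Name Value Minimum Maximum\nalpha 1.5 None None\nbeta 2.0 0.0 5.0\n"
dtypes = {"names": ("Name", "Value", "Minimum", "Maximum"),
          "formats": ("U128", "g", "g", "g")}

def none_to(bound):
    # Replace the token "None" by an infinite bound, like the __replaceNone* helpers.
    return lambda s: bound if s.strip() in (b"None", "None") else s

data = np.loadtxt(
    io.StringIO(text),
    dtype=dtypes,
    usecols=(0, 1, 2, 3),
    skiprows=1,
    converters={2: none_to(-np.inf), 3: none_to(np.inf)},
    ndmin=1,
)
assert data["Minimum"][0] == -np.inf and data["Maximum"][0] == np.inf
```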
# ==============================================================================
class EficasGUI(object):
"""
Lancement autonome de l'interface EFICAS/ADAO
"""
+
__slots__ = ("__msg", "__path_settings_ok")
- def __init__(self, __addpath = None):
+ def __init__(self, __addpath=None):
# Chemin pour l'installation (ordre important)
self.__msg = ""
self.__path_settings_ok = False
__EFICAS_TOOLS_ROOT = os.environ["EFICAS_NOUVEAU_ROOT"]
__path_ok = True
else:
- self.__msg += "\nKeyError:\n" + \
- " the required environment variable EFICAS_TOOLS_ROOT is unknown.\n" + \
- " You have either to be in SALOME environment, or to set this\n" + \
- " variable in your environment to the right path \"<...>\" to\n" + \
- " find an installed EFICAS application. For example:\n" + \
- " EFICAS_TOOLS_ROOT=\"<...>\" command\n"
+ self.__msg += (
+ "\nKeyError:\n"
+ + " the required environment variable EFICAS_TOOLS_ROOT is unknown.\n"
+ + " You have either to be in SALOME environment, or to set this\n"
+ + ' variable in your environment to the right path "<...>" to\n'
+ + " find an installed EFICAS application. For example:\n"
+ + ' EFICAS_TOOLS_ROOT="<...>" command\n'
+ )
__path_ok = False
try:
import adao
+
__path_ok = True and __path_ok
except ImportError:
- self.__msg += "\nImportError:\n" + \
- " the required ADAO library can not be found to be imported.\n" + \
- " You have either to be in ADAO environment, or to be in SALOME\n" + \
- " environment, or to set manually in your Python 3 environment the\n" + \
- " right path \"<...>\" to find an installed ADAO application. For\n" + \
- " example:\n" + \
- " PYTHONPATH=\"<...>:${PYTHONPATH}\" command\n"
+ self.__msg += (
+ "\nImportError:\n"
+ + " the required ADAO library cannot be found for import.\n"
+ + " You have either to be in ADAO environment, or to be in SALOME\n"
+ + " environment, or to set manually in your Python 3 environment the\n"
+ + ' right path "<...>" to find an installed ADAO application. For\n'
+ + " example:\n"
+ + ' PYTHONPATH="<...>:${PYTHONPATH}" command\n'
+ )
__path_ok = False
try:
import PyQt5 # noqa: F401
+
__path_ok = True and __path_ok
except ImportError:
- self.__msg += "\nImportError:\n" + \
- " the required PyQt5 library can not be found to be imported.\n" + \
- " You have either to have a raisonable up-to-date Python 3\n" + \
- " installation (less than 5 years), or to be in SALOME environment.\n"
+ self.__msg += (
+ "\nImportError:\n"
+ + " the required PyQt5 library cannot be found for import.\n"
+ + " You have either to have a reasonably up-to-date Python 3\n"
+ + " installation (less than 5 years), or to be in SALOME environment.\n"
+ )
__path_ok = False
# ----------------
if not __path_ok:
- self.__msg += "\nWarning:\n" + \
- " It seems you have some troubles with your installation.\n" + \
- " Be aware that some other errors may exist, that are not\n" + \
- " explained as above, like some incomplete or obsolete\n" + \
- " Python 3, or incomplete module installation.\n" + \
- " \n" + \
- " Please correct the above error(s) before launching the\n" + \
- " standalone EFICAS/ADAO interface.\n"
+ self.__msg += (
+ "\nWarning:\n"
+ + " It seems you have some trouble with your installation.\n"
+ + " Be aware that other errors may exist that are not\n"
+ + " explained above, such as an incomplete or obsolete\n"
+ + " Python 3, or an incomplete module installation.\n"
+ + " \n"
+ + " Please correct the above error(s) before launching the\n"
+ + " standalone EFICAS/ADAO interface.\n"
+ )
logging.debug("Some of the ADAO/EFICAS/QT5 paths have not been found")
self.__path_settings_ok = False
else:
logging.debug("Launching standalone EFICAS/ADAO interface...")
from daEficas import prefs
from InterfaceQT4 import eficas_go
+
eficas_go.lanceEficas(code=prefs.code)
else:
- logging.debug("Can not launch standalone EFICAS/ADAO interface for path errors.")
+ logging.debug(
+ "Cannot launch standalone EFICAS/ADAO interface due to path errors."
+ )
+
# ==============================================================================
if __name__ == "__main__":
"""
__author__ = "Jean-Philippe ARGAUD"
-import os, copy, types, sys, logging, math, numpy, scipy, itertools, warnings
+import os
+import copy
+import types
+import sys
+import logging
+import math
+import numpy
+import scipy
+import itertools
+import warnings
import scipy.linalg # Py3.6
from daCore.BasicObjects import Operator, Covariance, PartialAlgorithm
from daCore.PlatformInfo import PlatformInfo, vt, vfloat
+
mpr = PlatformInfo().MachinePrecision()
mfp = PlatformInfo().MaximumPrecision()
# logging.getLogger().setLevel(logging.DEBUG)
+
# ==============================================================================
-def ExecuteFunction( triplet ):
+def ExecuteFunction(triplet):
assert len(triplet) == 3, "Incorrect number of arguments"
X, xArgs, funcrepr = triplet
- __X = numpy.ravel( X ).reshape((-1, 1))
+ __X = numpy.ravel(X).reshape((-1, 1))
__sys_path_tmp = sys.path
sys.path.insert(0, funcrepr["__userFunction__path"])
__module = __import__(funcrepr["__userFunction__modl"], globals(), locals(), [])
sys.path = __sys_path_tmp
del __sys_path_tmp
if isinstance(xArgs, dict):
- __HX = __fonction( __X, **xArgs )
+ __HX = __fonction(__X, **xArgs)
else:
- __HX = __fonction( __X )
- return numpy.ravel( __HX )
+ __HX = __fonction(__X)
+ return numpy.ravel(__HX)
+
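The `FDApproximation` class below approximates the tangent (Jacobian) of a user operator by finite differences, perturbing each component by `increment` and optionally using centered differences. A minimal free-standing version of that idea (the test function and the floor for near-zero entries are illustrative, not ADAO's exact perturbation rule):

```python
import numpy as np

def fd_jacobian(f, x, increment=0.01, centered=True):
    # Perturb each component by increment * |x_i| (floored at 1.0 for
    # near-zero entries) and assemble the Jacobian column by column.
    x = np.ravel(np.asarray(x, dtype=float))
    fx = np.ravel(f(x))
    J = np.empty((fx.size, x.size))
    for i in range(x.size):
        dxi = increment * max(abs(x[i]), 1.0)
        e = np.zeros_like(x)
        e[i] = dxi
        if centered:  # second-order accurate, two evaluations per column
            J[:, i] = (np.ravel(f(x + e)) - np.ravel(f(x - e))) / (2.0 * dxi)
        else:         # first-order accurate, one extra evaluation per column
            J[:, i] = (np.ravel(f(x + e)) - fx) / dxi
    return J

# Centered differences are exact (up to rounding) for this quadratic map.
J = fd_jacobian(lambda v: np.array([v[0] ** 2, v[0] * v[1]]), [2.0, 3.0])
```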
# ==============================================================================
class FDApproximation(object):
"dX" qui sera multiplié par "increment" (donc en %), et on effectue de DF
centrées si le booléen "centeredDF" est vrai.
"""
+
__slots__ = (
- "__name", "__extraArgs", "__mpEnabled", "__mpWorkers", "__mfEnabled",
- "__rmEnabled", "__avoidRC", "__tolerBP", "__centeredDF", "__lengthRJ",
- "__listJPCP", "__listJPCI", "__listJPCR", "__listJPPN", "__listJPIN",
- "__userOperator", "__userFunction", "__increment", "__pool", "__dX",
- "__userFunction__name", "__userFunction__modl", "__userFunction__path",
+ "__name",
+ "__extraArgs",
+ "__mpEnabled",
+ "__mpWorkers",
+ "__mfEnabled",
+ "__rmEnabled",
+ "__avoidRC",
+ "__tolerBP",
+ "__centeredDF",
+ "__lengthRJ",
+ "__listJPCP",
+ "__listJPCI",
+ "__listJPCR",
+ "__listJPPN",
+ "__listJPIN",
+ "__userOperator",
+ "__userFunction",
+ "__increment",
+ "__pool",
+ "__dX",
+ "__userFunction__name",
+ "__userFunction__modl",
+ "__userFunction__path",
)
- def __init__(self,
- name = "FDApproximation",
- Function = None,
- centeredDF = False,
- increment = 0.01,
- dX = None,
- extraArguments = None,
- reducingMemoryUse = False,
- avoidingRedundancy = True,
- toleranceInRedundancy = 1.e-18,
- lengthOfRedundancy = -1,
- mpEnabled = False,
- mpWorkers = None,
- mfEnabled = False ):
+ def __init__(
+ self,
+ name="FDApproximation",
+ Function=None,
+ centeredDF=False,
+ increment=0.01,
+ dX=None,
+ extraArguments=None,
+ reducingMemoryUse=False,
+ avoidingRedundancy=True,
+ toleranceInRedundancy=1.0e-18,
+ lengthOfRedundancy=-1,
+ mpEnabled=False,
+ mpWorkers=None,
+ mfEnabled=False,
+ ):
#
self.__name = str(name)
self.__extraArgs = extraArguments
if mpEnabled:
try:
import multiprocessing # noqa: F401
+
self.__mpEnabled = True
except ImportError:
self.__mpEnabled = False
self.__mpWorkers = mpWorkers
if self.__mpWorkers is not None and self.__mpWorkers < 1:
self.__mpWorkers = None
- logging.debug("FDA Calculs en multiprocessing : %s (nombre de processus : %s)"%(self.__mpEnabled, self.__mpWorkers))
+ logging.debug(
+ "FDA Calculs en multiprocessing : %s (nombre de processus : %s)"
+ % (self.__mpEnabled, self.__mpWorkers)
+ )
#
self.__mfEnabled = bool(mfEnabled)
- logging.debug("FDA Calculs en multifonctions : %s"%(self.__mfEnabled,))
+ logging.debug("FDA Calculs en multifonctions : %s" % (self.__mfEnabled,))
#
self.__rmEnabled = bool(reducingMemoryUse)
- logging.debug("FDA Calculs avec réduction mémoire : %s"%(self.__rmEnabled,))
+ logging.debug("FDA Calculs avec réduction mémoire : %s" % (self.__rmEnabled,))
#
if avoidingRedundancy:
self.__avoidRC = True
self.__listJPIN = [] # Jacobian Previous Calculated Increment Norms
else:
self.__avoidRC = False
- logging.debug("FDA Calculs avec réduction des doublons : %s"%self.__avoidRC)
+ logging.debug("FDA Calculs avec réduction des doublons : %s" % self.__avoidRC)
if self.__avoidRC:
- logging.debug("FDA Tolérance de détermination des doublons : %.2e"%self.__tolerBP)
+ logging.debug(
+ "FDA Tolérance de détermination des doublons : %.2e" % self.__tolerBP
+ )
#
if self.__mpEnabled:
if isinstance(Function, types.FunctionType):
logging.debug("FDA Calculs en multiprocessing : FunctionType")
self.__userFunction__name = Function.__name__
try:
- mod = os.path.join(Function.__globals__['filepath'], Function.__globals__['filename'])
+ mod = os.path.join(
+ Function.__globals__["filepath"],
+ Function.__globals__["filename"],
+ )
except Exception:
- mod = os.path.abspath(Function.__globals__['__file__'])
+ mod = os.path.abspath(Function.__globals__["__file__"])
if not os.path.isfile(mod):
- raise ImportError("No user defined function or method found with the name %s"%(mod,))
- self.__userFunction__modl = os.path.basename(mod).replace('.pyc', '').replace('.pyo', '').replace('.py', '')
+ raise ImportError(
+ "No user defined function or method found with the name %s"
+ % (mod,)
+ )
+ self.__userFunction__modl = (
+ os.path.basename(mod)
+ .replace(".pyc", "")
+ .replace(".pyo", "")
+ .replace(".py", "")
+ )
self.__userFunction__path = os.path.dirname(mod)
del mod
self.__userOperator = Operator(
- name = self.__name,
- fromMethod = Function,
- avoidingRedundancy = self.__avoidRC,
- inputAsMultiFunction = self.__mfEnabled,
- extraArguments = self.__extraArgs )
- self.__userFunction = self.__userOperator.appliedTo # Pour le calcul Direct
+ name=self.__name,
+ fromMethod=Function,
+ avoidingRedundancy=self.__avoidRC,
+ inputAsMultiFunction=self.__mfEnabled,
+ extraArguments=self.__extraArgs,
+ )
+ self.__userFunction = (
+ self.__userOperator.appliedTo
+ ) # Pour le calcul Direct
elif isinstance(Function, types.MethodType):
logging.debug("FDA Calculs en multiprocessing : MethodType")
self.__userFunction__name = Function.__name__
try:
- mod = os.path.join(Function.__globals__['filepath'], Function.__globals__['filename'])
+ mod = os.path.join(
+ Function.__globals__["filepath"],
+ Function.__globals__["filename"],
+ )
except Exception:
- mod = os.path.abspath(Function.__func__.__globals__['__file__'])
+ mod = os.path.abspath(Function.__func__.__globals__["__file__"])
if not os.path.isfile(mod):
- raise ImportError("No user defined function or method found with the name %s"%(mod,))
- self.__userFunction__modl = os.path.basename(mod).replace('.pyc', '').replace('.pyo', '').replace('.py', '')
+ raise ImportError(
+ "No user defined function or method found with the name %s"
+ % (mod,)
+ )
+ self.__userFunction__modl = (
+ os.path.basename(mod)
+ .replace(".pyc", "")
+ .replace(".pyo", "")
+ .replace(".py", "")
+ )
self.__userFunction__path = os.path.dirname(mod)
del mod
self.__userOperator = Operator(
- name = self.__name,
- fromMethod = Function,
- avoidingRedundancy = self.__avoidRC,
- inputAsMultiFunction = self.__mfEnabled,
- extraArguments = self.__extraArgs )
- self.__userFunction = self.__userOperator.appliedTo # Pour le calcul Direct
+ name=self.__name,
+ fromMethod=Function,
+ avoidingRedundancy=self.__avoidRC,
+ inputAsMultiFunction=self.__mfEnabled,
+ extraArguments=self.__extraArgs,
+ )
+ self.__userFunction = (
+ self.__userOperator.appliedTo
+ ) # Pour le calcul Direct
else:
- raise TypeError("User defined function or method has to be provided for finite differences approximation.")
+ raise TypeError(
+                    "A user-defined function or method has to be provided for finite differences approximation."
+ )
else:
self.__userOperator = Operator(
- name = self.__name,
- fromMethod = Function,
- avoidingRedundancy = self.__avoidRC,
- inputAsMultiFunction = self.__mfEnabled,
- extraArguments = self.__extraArgs )
+ name=self.__name,
+ fromMethod=Function,
+ avoidingRedundancy=self.__avoidRC,
+ inputAsMultiFunction=self.__mfEnabled,
+ extraArguments=self.__extraArgs,
+ )
self.__userFunction = self.__userOperator.appliedTo
#
self.__centeredDF = bool(centeredDF)
- if abs(float(increment)) > 1.e-15:
- self.__increment = float(increment)
+ if abs(float(increment)) > 1.0e-15:
+ self.__increment = float(increment)
else:
- self.__increment = 0.01
+ self.__increment = 0.01
if dX is None:
- self.__dX = None
+ self.__dX = None
else:
- self.__dX = numpy.ravel( dX )
+ self.__dX = numpy.ravel(dX)
# ---------------------------------------------------------
def __doublon__(self, __e, __l, __n, __v=None):
if numpy.linalg.norm(__e - __l[i]) < self.__tolerBP * __n[i]:
__ac, __iac = True, i
if __v is not None:
- logging.debug("FDA Cas%s déjà calculé, récupération du doublon %i"%(__v, __iac))
+ logging.debug(
+ "FDA Cas%s déjà calculé, récupération du doublon %i"
+ % (__v, __iac)
+ )
break
return __ac, __iac
# ---------------------------------------------------------
- def __listdotwith__(self, __LMatrix, __dotWith = None, __dotTWith = None):
+ def __listdotwith__(self, __LMatrix, __dotWith=None, __dotTWith=None):
"Produit incrémental d'une matrice liste de colonnes avec un vecteur"
if not isinstance(__LMatrix, (list, tuple)):
- raise TypeError("Columnwise list matrix has not the proper type: %s"%type(__LMatrix))
+ raise TypeError(
+                "Column-wise list matrix does not have the proper type: %s" % type(__LMatrix)
+ )
if __dotWith is not None:
- __Idwx = numpy.ravel( __dotWith )
+ __Idwx = numpy.ravel(__dotWith)
assert len(__LMatrix) == __Idwx.size, "Incorrect size of elements"
__Produit = numpy.zeros(__LMatrix[0].size)
for i, col in enumerate(__LMatrix):
__Produit += float(__Idwx[i]) * col
return __Produit
elif __dotTWith is not None:
- _Idwy = numpy.ravel( __dotTWith ).T
+ _Idwy = numpy.ravel(__dotTWith).T
assert __LMatrix[0].size == _Idwy.size, "Incorrect size of elements"
__Produit = numpy.zeros(len(__LMatrix))
for i, col in enumerate(__LMatrix):
- __Produit[i] = vfloat( _Idwy @ col)
+ __Produit[i] = vfloat(_Idwy @ col)
return __Produit
else:
__Produit = None
return __Produit
# ---------------------------------------------------------
- def DirectOperator(self, X, **extraArgs ):
+ def DirectOperator(self, X, **extraArgs):
"""
Calcul du direct à l'aide de la fonction fournie.
"""
logging.debug("FDA Calcul DirectOperator (explicite)")
if self.__mfEnabled:
- _HX = self.__userFunction( X, argsAsSerie = True )
+ _HX = self.__userFunction(X, argsAsSerie=True)
else:
- _HX = numpy.ravel(self.__userFunction( numpy.ravel(X) ))
+ _HX = numpy.ravel(self.__userFunction(numpy.ravel(X)))
#
return _HX
# ---------------------------------------------------------
- def TangentMatrix(self, X, dotWith = None, dotTWith = None ):
+ def TangentMatrix(self, X, dotWith=None, dotTWith=None):
"""
Calcul de l'opérateur tangent comme la Jacobienne par différences finies,
c'est-à-dire le gradient de H en X. On utilise des différences finies
"""
logging.debug("FDA Début du calcul de la Jacobienne")
- logging.debug("FDA Incrément de............: %s*X"%float(self.__increment))
- logging.debug("FDA Approximation centrée...: %s"%(self.__centeredDF))
+ logging.debug("FDA Incrément de............: %s*X" % float(self.__increment))
+ logging.debug("FDA Approximation centrée...: %s" % (self.__centeredDF))
#
if X is None or len(X) == 0:
- raise ValueError("Nominal point X for approximate derivatives can not be None or void (given X: %s)."%(str(X),))
+ raise ValueError(
+                "Nominal point X for approximate derivatives cannot be None or empty (given X: %s)."
+ % (str(X),)
+ )
#
- _X = numpy.ravel( X )
+ _X = numpy.ravel(X)
#
if self.__dX is None:
- _dX = self.__increment * _X
+ _dX = self.__increment * _X
else:
- _dX = numpy.ravel( self.__dX )
- assert len(_X) == len(_dX), "Inconsistent dX increment length with respect to the X one"
- assert _X.size == _dX.size, "Inconsistent dX increment size with respect to the X one"
+ _dX = numpy.ravel(self.__dX)
+ assert len(_X) == len(
+ _dX
+ ), "Inconsistent dX increment length with respect to the X one"
+ assert (
+ _X.size == _dX.size
+ ), "Inconsistent dX increment size with respect to the X one"
#
- if (_dX == 0.).any():
+ if (_dX == 0.0).any():
moyenne = _dX.mean()
- if moyenne == 0.:
- _dX = numpy.where( _dX == 0., float(self.__increment), _dX )
+ if moyenne == 0.0:
+ _dX = numpy.where(_dX == 0.0, float(self.__increment), _dX)
else:
- _dX = numpy.where( _dX == 0., moyenne, _dX )
+ _dX = numpy.where(_dX == 0.0, moyenne, _dX)
#
- __alreadyCalculated = False
+ __alreadyCalculated = False
if self.__avoidRC:
- __bidon, __alreadyCalculatedP = self.__doublon__( _X, self.__listJPCP, self.__listJPPN, None)
- __bidon, __alreadyCalculatedI = self.__doublon__(_dX, self.__listJPCI, self.__listJPIN, None)
+ __bidon, __alreadyCalculatedP = self.__doublon__(
+ _X, self.__listJPCP, self.__listJPPN, None
+ )
+ __bidon, __alreadyCalculatedI = self.__doublon__(
+ _dX, self.__listJPCI, self.__listJPIN, None
+ )
if __alreadyCalculatedP == __alreadyCalculatedI > -1:
__alreadyCalculated, __i = True, __alreadyCalculatedP
- logging.debug("FDA Cas J déjà calculé, récupération du doublon %i"%__i)
+ logging.debug(
+ "FDA Cas J déjà calculé, récupération du doublon %i" % __i
+ )
#
if __alreadyCalculated:
- logging.debug("FDA Calcul Jacobienne (par récupération du doublon %i)"%__i)
+ logging.debug(
+ "FDA Calcul Jacobienne (par récupération du doublon %i)" % __i
+ )
_Jacobienne = self.__listJPCR[__i]
logging.debug("FDA Fin du calcul de la Jacobienne")
if dotWith is not None:
- return numpy.dot( _Jacobienne, numpy.ravel( dotWith ))
+ return numpy.dot(_Jacobienne, numpy.ravel(dotWith))
elif dotTWith is not None:
- return numpy.dot(_Jacobienne.T, numpy.ravel( dotTWith ))
+ return numpy.dot(_Jacobienne.T, numpy.ravel(dotTWith))
else:
logging.debug("FDA Calcul Jacobienne (explicite)")
if self.__centeredDF:
"__userFunction__name": self.__userFunction__name,
}
_jobs = []
- for i in range( len(_dX) ):
- _dXi = _dX[i]
- _X_plus_dXi = numpy.array( _X, dtype=float )
- _X_plus_dXi[i] = _X[i] + _dXi
- _X_moins_dXi = numpy.array( _X, dtype=float )
+ for i in range(len(_dX)):
+ _dXi = _dX[i]
+ _X_plus_dXi = numpy.array(_X, dtype=float)
+ _X_plus_dXi[i] = _X[i] + _dXi
+ _X_moins_dXi = numpy.array(_X, dtype=float)
_X_moins_dXi[i] = _X[i] - _dXi
#
- _jobs.append( ( _X_plus_dXi, self.__extraArgs, funcrepr) )
- _jobs.append( (_X_moins_dXi, self.__extraArgs, funcrepr) )
+ _jobs.append((_X_plus_dXi, self.__extraArgs, funcrepr))
+ _jobs.append((_X_moins_dXi, self.__extraArgs, funcrepr))
#
import multiprocessing
+
self.__pool = multiprocessing.Pool(self.__mpWorkers)
- _HX_plusmoins_dX = self.__pool.map( ExecuteFunction, _jobs )
+ _HX_plusmoins_dX = self.__pool.map(ExecuteFunction, _jobs)
self.__pool.close()
self.__pool.join()
#
- _Jacobienne = []
- for i in range( len(_dX) ):
- _Jacobienne.append( numpy.ravel( _HX_plusmoins_dX[2 * i] - _HX_plusmoins_dX[2 * i + 1] ) / (2. * _dX[i]) )
+ _Jacobienne = []
+ for i in range(len(_dX)):
+ _Jacobienne.append(
+ numpy.ravel(
+ _HX_plusmoins_dX[2 * i] - _HX_plusmoins_dX[2 * i + 1]
+ )
+ / (2.0 * _dX[i])
+ )
#
elif self.__mfEnabled:
_xserie = []
- for i in range( len(_dX) ):
- _dXi = _dX[i]
- _X_plus_dXi = numpy.array( _X, dtype=float )
- _X_plus_dXi[i] = _X[i] + _dXi
- _X_moins_dXi = numpy.array( _X, dtype=float )
+ for i in range(len(_dX)):
+ _dXi = _dX[i]
+ _X_plus_dXi = numpy.array(_X, dtype=float)
+ _X_plus_dXi[i] = _X[i] + _dXi
+ _X_moins_dXi = numpy.array(_X, dtype=float)
_X_moins_dXi[i] = _X[i] - _dXi
#
- _xserie.append( _X_plus_dXi )
- _xserie.append( _X_moins_dXi )
+ _xserie.append(_X_plus_dXi)
+ _xserie.append(_X_moins_dXi)
#
- _HX_plusmoins_dX = self.DirectOperator( _xserie )
+ _HX_plusmoins_dX = self.DirectOperator(_xserie)
#
- _Jacobienne = []
- for i in range( len(_dX) ):
- _Jacobienne.append( numpy.ravel( _HX_plusmoins_dX[2 * i] - _HX_plusmoins_dX[2 * i + 1] ) / (2. * _dX[i]) )
+ _Jacobienne = []
+ for i in range(len(_dX)):
+ _Jacobienne.append(
+ numpy.ravel(
+ _HX_plusmoins_dX[2 * i] - _HX_plusmoins_dX[2 * i + 1]
+ )
+ / (2.0 * _dX[i])
+ )
#
else:
- _Jacobienne = []
- for i in range( _dX.size ):
- _dXi = _dX[i]
- _X_plus_dXi = numpy.array( _X, dtype=float )
- _X_plus_dXi[i] = _X[i] + _dXi
- _X_moins_dXi = numpy.array( _X, dtype=float )
+ _Jacobienne = []
+ for i in range(_dX.size):
+ _dXi = _dX[i]
+ _X_plus_dXi = numpy.array(_X, dtype=float)
+ _X_plus_dXi[i] = _X[i] + _dXi
+ _X_moins_dXi = numpy.array(_X, dtype=float)
_X_moins_dXi[i] = _X[i] - _dXi
#
- _HX_plus_dXi = self.DirectOperator( _X_plus_dXi )
- _HX_moins_dXi = self.DirectOperator( _X_moins_dXi )
+ _HX_plus_dXi = self.DirectOperator(_X_plus_dXi)
+ _HX_moins_dXi = self.DirectOperator(_X_moins_dXi)
#
- _Jacobienne.append( numpy.ravel( _HX_plus_dXi - _HX_moins_dXi ) / (2. * _dXi) )
+ _Jacobienne.append(
+ numpy.ravel(_HX_plus_dXi - _HX_moins_dXi) / (2.0 * _dXi)
+ )
#
else:
#
"__userFunction__name": self.__userFunction__name,
}
_jobs = []
- _jobs.append( (_X, self.__extraArgs, funcrepr) )
- for i in range( len(_dX) ):
- _X_plus_dXi = numpy.array( _X, dtype=float )
+ _jobs.append((_X, self.__extraArgs, funcrepr))
+ for i in range(len(_dX)):
+ _X_plus_dXi = numpy.array(_X, dtype=float)
_X_plus_dXi[i] = _X[i] + _dX[i]
#
- _jobs.append( (_X_plus_dXi, self.__extraArgs, funcrepr) )
+ _jobs.append((_X_plus_dXi, self.__extraArgs, funcrepr))
#
import multiprocessing
+
self.__pool = multiprocessing.Pool(self.__mpWorkers)
- _HX_plus_dX = self.__pool.map( ExecuteFunction, _jobs )
+ _HX_plus_dX = self.__pool.map(ExecuteFunction, _jobs)
self.__pool.close()
self.__pool.join()
#
_HX = _HX_plus_dX.pop(0)
#
_Jacobienne = []
- for i in range( len(_dX) ):
- _Jacobienne.append( numpy.ravel(( _HX_plus_dX[i] - _HX ) / _dX[i]) )
+ for i in range(len(_dX)):
+ _Jacobienne.append(numpy.ravel((_HX_plus_dX[i] - _HX) / _dX[i]))
#
elif self.__mfEnabled:
_xserie = []
- _xserie.append( _X )
- for i in range( len(_dX) ):
- _X_plus_dXi = numpy.array( _X, dtype=float )
+ _xserie.append(_X)
+ for i in range(len(_dX)):
+ _X_plus_dXi = numpy.array(_X, dtype=float)
_X_plus_dXi[i] = _X[i] + _dX[i]
#
- _xserie.append( _X_plus_dXi )
+ _xserie.append(_X_plus_dXi)
#
- _HX_plus_dX = self.DirectOperator( _xserie )
+ _HX_plus_dX = self.DirectOperator(_xserie)
#
_HX = _HX_plus_dX.pop(0)
#
_Jacobienne = []
- for i in range( len(_dX) ):
- _Jacobienne.append( numpy.ravel(( _HX_plus_dX[i] - _HX ) / _dX[i]) )
+ for i in range(len(_dX)):
+ _Jacobienne.append(numpy.ravel((_HX_plus_dX[i] - _HX) / _dX[i]))
#
else:
- _Jacobienne = []
- _HX = self.DirectOperator( _X )
- for i in range( _dX.size ):
- _dXi = _dX[i]
- _X_plus_dXi = numpy.array( _X, dtype=float )
- _X_plus_dXi[i] = _X[i] + _dXi
+ _Jacobienne = []
+ _HX = self.DirectOperator(_X)
+ for i in range(_dX.size):
+ _dXi = _dX[i]
+ _X_plus_dXi = numpy.array(_X, dtype=float)
+ _X_plus_dXi[i] = _X[i] + _dXi
#
- _HX_plus_dXi = self.DirectOperator( _X_plus_dXi )
+ _HX_plus_dXi = self.DirectOperator(_X_plus_dXi)
#
- _Jacobienne.append( numpy.ravel(( _HX_plus_dXi - _HX ) / _dXi) )
+ _Jacobienne.append(numpy.ravel((_HX_plus_dXi - _HX) / _dXi))
#
if (dotWith is not None) or (dotTWith is not None):
__Produit = self.__listdotwith__(_Jacobienne, dotWith, dotTWith)
else:
__Produit = None
if __Produit is None or self.__avoidRC:
- _Jacobienne = numpy.transpose( numpy.vstack( _Jacobienne ) )
+ _Jacobienne = numpy.transpose(numpy.vstack(_Jacobienne))
if self.__avoidRC:
if self.__lengthRJ < 0:
self.__lengthRJ = 2 * _X.size
self.__listJPCR.pop(0)
self.__listJPPN.pop(0)
self.__listJPIN.pop(0)
- self.__listJPCP.append( copy.copy(_X) )
- self.__listJPCI.append( copy.copy(_dX) )
- self.__listJPCR.append( copy.copy(_Jacobienne) )
- self.__listJPPN.append( numpy.linalg.norm(_X) )
- self.__listJPIN.append( numpy.linalg.norm(_Jacobienne) )
+ self.__listJPCP.append(copy.copy(_X))
+ self.__listJPCI.append(copy.copy(_dX))
+ self.__listJPCR.append(copy.copy(_Jacobienne))
+ self.__listJPPN.append(numpy.linalg.norm(_X))
+ self.__listJPIN.append(numpy.linalg.norm(_Jacobienne))
logging.debug("FDA Fin du calcul de la Jacobienne")
if __Produit is not None:
return __Produit
return _Jacobienne
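As a standalone sketch of what the centered branch above assembles (the toy operator `H` and its values are illustrative assumptions, not part of the module), each Jacobian column is a centered difference quotient, and the stacked columns are transposed at the end exactly as `TangentMatrix` does:

```python
import numpy

def H(x):
    # Toy nonlinear operator, a hypothetical stand-in for the user function
    return numpy.array([x[0] ** 2, x[0] * x[1]])

X = numpy.array([2.0, 3.0])
dX = 1.0e-6 * numpy.abs(X)  # per-component increments, as increment * X
cols = []
for i in range(X.size):
    X_plus, X_minus = X.copy(), X.copy()
    X_plus[i] += dX[i]
    X_minus[i] -= dX[i]
    cols.append((H(X_plus) - H(X_minus)) / (2.0 * dX[i]))
# The stacked rows are Jacobian columns, hence the final transpose
J = numpy.transpose(numpy.vstack(cols))
assert numpy.allclose(J, [[4.0, 0.0], [3.0, 2.0]], atol=1.0e-5)
```

For this quadratic `H` the centered scheme is exact up to rounding, which is why the analytic Jacobian is recovered.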
# ---------------------------------------------------------
- def TangentOperator(self, paire, **extraArgs ):
+ def TangentOperator(self, paire, **extraArgs):
"""
Calcul du tangent à l'aide de la Jacobienne.
#
# Calcul de la forme matricielle si le second argument est None
# -------------------------------------------------------------
- _Jacobienne = self.TangentMatrix( X )
+ _Jacobienne = self.TangentMatrix(X)
if self.__mfEnabled:
- return [_Jacobienne,]
+ return [
+ _Jacobienne,
+ ]
else:
return _Jacobienne
else:
#
# Calcul de la valeur linéarisée de H en X appliqué à dX
# ------------------------------------------------------
- _HtX = self.TangentMatrix( X, dotWith = dX )
+ _HtX = self.TangentMatrix(X, dotWith=dX)
if self.__mfEnabled:
- return [_HtX,]
+ return [
+ _HtX,
+ ]
else:
return _HtX
# ---------------------------------------------------------
- def AdjointOperator(self, paire, **extraArgs ):
+ def AdjointOperator(self, paire, **extraArgs):
"""
Calcul de l'adjoint à l'aide de la Jacobienne.
#
# Calcul de la forme matricielle si le second argument est None
# -------------------------------------------------------------
- _JacobienneT = self.TangentMatrix( X ).T
+ _JacobienneT = self.TangentMatrix(X).T
if self.__mfEnabled:
- return [_JacobienneT,]
+ return [
+ _JacobienneT,
+ ]
else:
return _JacobienneT
else:
#
# Calcul de la valeur de l'adjoint en X appliqué à Y
# --------------------------------------------------
- _HaY = self.TangentMatrix( X, dotTWith = Y )
+ _HaY = self.TangentMatrix(X, dotTWith=Y)
if self.__mfEnabled:
- return [_HaY,]
+ return [
+ _HaY,
+ ]
else:
return _HaY
+
# ==============================================================================
-def SetInitialDirection( __Direction = [], __Amplitude = 1., __Position = None ):
+def SetInitialDirection(__Direction=[], __Amplitude=1.0, __Position=None):
"Établit ou élabore une direction avec une amplitude"
#
if len(__Direction) == 0 and __Position is None:
- raise ValueError("If initial direction is void, current position has to be given")
+ raise ValueError(
+            "If the initial direction is empty, the current position has to be given"
+ )
if abs(float(__Amplitude)) < mpr:
raise ValueError("Amplitude of perturbation can not be zero")
#
else:
__dX0 = []
__X0 = numpy.ravel(numpy.asarray(__Position))
- __mX0 = numpy.mean( __X0, dtype=mfp )
+ __mX0 = numpy.mean(__X0, dtype=mfp)
if abs(__mX0) < 2 * mpr:
- __mX0 = 1. # Évite le problème de position nulle
+ __mX0 = 1.0 # Évite le problème de position nulle
for v in __X0:
- if abs(v) > 1.e-8:
- __dX0.append( numpy.random.normal(0., abs(v)) )
+ if abs(v) > 1.0e-8:
+ __dX0.append(numpy.random.normal(0.0, abs(v)))
else:
- __dX0.append( numpy.random.normal(0., __mX0) )
+ __dX0.append(numpy.random.normal(0.0, __mX0))
#
__dX0 = numpy.asarray(__dX0, float) # Évite le problème d'array de taille 1
- __dX0 = numpy.ravel( __dX0 ) # Redresse les vecteurs
+ __dX0 = numpy.ravel(__dX0) # Redresse les vecteurs
__dX0 = float(__Amplitude) * __dX0
#
return __dX0
+
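A minimal standalone sketch of the direction-drawing loop in `SetInitialDirection` (the seed, the positive fallback scale, and the amplitude value are illustrative assumptions; the module itself uses its `mpr`/`mfp` constants and a mean-based fallback):

```python
import numpy

numpy.random.seed(123)  # reproducible illustration only
position = numpy.array([1.0, 0.0, -3.0])
fallback = abs(position).mean() or 1.0  # illustrative stand-in for the module's fallback scale
direction = numpy.array([
    numpy.random.normal(0.0, abs(v)) if abs(v) > 1.0e-8
    else numpy.random.normal(0.0, fallback)  # near-zero components use the fallback
    for v in position
])
direction *= 2.0  # the __Amplitude factor
assert direction.shape == position.shape
```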
# ==============================================================================
-def EnsembleOfCenteredPerturbations( __bgCenter, __bgCovariance, __nbMembers ):
+def EnsembleOfCenteredPerturbations(__bgCenter, __bgCovariance, __nbMembers):
"Génération d'un ensemble de taille __nbMembers-1 d'états aléatoires centrés"
#
__bgCenter = numpy.ravel(__bgCenter)[:, None]
if __nbMembers < 1:
- raise ValueError("Number of members has to be strictly more than 1 (given number: %s)."%(str(__nbMembers),))
+ raise ValueError(
+            "Number of members has to be strictly positive (given number: %s)."
+ % (str(__nbMembers),)
+ )
#
if __bgCovariance is None:
- _Perturbations = numpy.tile( __bgCenter, __nbMembers)
+ _Perturbations = numpy.tile(__bgCenter, __nbMembers)
else:
- _Z = numpy.random.multivariate_normal(numpy.zeros(__bgCenter.size), __bgCovariance, size=__nbMembers).T
- _Perturbations = numpy.tile( __bgCenter, __nbMembers) + _Z
+ _Z = numpy.random.multivariate_normal(
+ numpy.zeros(__bgCenter.size), __bgCovariance, size=__nbMembers
+ ).T
+ _Perturbations = numpy.tile(__bgCenter, __nbMembers) + _Z
#
return _Perturbations
+
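The generation scheme of `EnsembleOfCenteredPerturbations` can be sketched independently (the center, covariance, seed and member count below are made up for the illustration):

```python
import numpy

numpy.random.seed(0)  # reproducible illustration only
center = numpy.array([1.0, 2.0])[:, None]  # state stored as a column vector
cov = numpy.diag([0.1, 0.4])               # prescribed background covariance
n_members = 5
# Zero-mean Gaussian draws with the prescribed covariance, one per member
Z = numpy.random.multivariate_normal(numpy.zeros(2), cov, size=n_members).T
ensemble = numpy.tile(center, n_members) + Z  # one member per column
assert ensemble.shape == (2, 5)
```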
# ==============================================================================
def EnsembleOfBackgroundPerturbations(
- __bgCenter,
- __bgCovariance,
- __nbMembers,
- __withSVD = True ):
+ __bgCenter, __bgCovariance, __nbMembers, __withSVD=True
+):
"Génération d'un ensemble de taille __nbMembers-1 d'états aléatoires centrés"
+
def __CenteredRandomAnomalies(Zr, N):
"""
Génère une matrice de N anomalies aléatoires centrées sur Zr selon les
notes manuscrites de MB et conforme au code de PS avec eps = -1
"""
eps = -1
- Q = numpy.identity(N - 1) - numpy.ones((N - 1, N - 1)) / numpy.sqrt(N) / (numpy.sqrt(N) - eps)
+ Q = numpy.identity(N - 1) - numpy.ones((N - 1, N - 1)) / numpy.sqrt(N) / (
+ numpy.sqrt(N) - eps
+ )
Q = numpy.concatenate((Q, [eps * numpy.ones(N - 1) / numpy.sqrt(N)]), axis=0)
- R, _ = numpy.linalg.qr(numpy.random.normal(size = (N - 1, N - 1)))
+ R, _ = numpy.linalg.qr(numpy.random.normal(size=(N - 1, N - 1)))
Q = numpy.dot(Q, R)
Zr = numpy.dot(Q, Zr)
return Zr.T
+
#
__bgCenter = numpy.ravel(__bgCenter).reshape((-1, 1))
if __nbMembers < 1:
- raise ValueError("Number of members has to be strictly more than 1 (given number: %s)."%(str(__nbMembers),))
+ raise ValueError(
+            "Number of members has to be strictly positive (given number: %s)."
+ % (str(__nbMembers),)
+ )
if __bgCovariance is None:
- _Perturbations = numpy.tile( __bgCenter, __nbMembers)
+ _Perturbations = numpy.tile(__bgCenter, __nbMembers)
else:
if __withSVD:
_U, _s, _V = numpy.linalg.svd(__bgCovariance, full_matrices=False)
_nbctl = __bgCenter.size
if __nbMembers > _nbctl:
- _Z = numpy.concatenate((numpy.dot(
- numpy.diag(numpy.sqrt(_s[:_nbctl])), _V[:_nbctl]),
- numpy.random.multivariate_normal(numpy.zeros(_nbctl), __bgCovariance, __nbMembers - 1 - _nbctl)), axis = 0)
+ _Z = numpy.concatenate(
+ (
+ numpy.dot(numpy.diag(numpy.sqrt(_s[:_nbctl])), _V[:_nbctl]),
+ numpy.random.multivariate_normal(
+ numpy.zeros(_nbctl),
+ __bgCovariance,
+ __nbMembers - 1 - _nbctl,
+ ),
+ ),
+ axis=0,
+ )
else:
- _Z = numpy.dot(numpy.diag(numpy.sqrt(_s[:__nbMembers - 1])), _V[:__nbMembers - 1])
+ _Z = numpy.dot(
+ numpy.diag(numpy.sqrt(_s[: __nbMembers - 1])), _V[: __nbMembers - 1]
+ )
_Zca = __CenteredRandomAnomalies(_Z, __nbMembers)
_Perturbations = __bgCenter + _Zca
else:
if max(abs(__bgCovariance.flatten())) > 0:
_nbctl = __bgCenter.size
- _Z = numpy.random.multivariate_normal(numpy.zeros(_nbctl), __bgCovariance, __nbMembers - 1)
+ _Z = numpy.random.multivariate_normal(
+ numpy.zeros(_nbctl), __bgCovariance, __nbMembers - 1
+ )
_Zca = __CenteredRandomAnomalies(_Z, __nbMembers)
_Perturbations = __bgCenter + _Zca
else:
- _Perturbations = numpy.tile( __bgCenter, __nbMembers)
+ _Perturbations = numpy.tile(__bgCenter, __nbMembers)
#
return _Perturbations
+
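The centering trick in `__CenteredRandomAnomalies` can be checked in isolation: the fixed matrix `Q` has zero column sums, so the `N` anomalies it produces from `N - 1` raw draws sum exactly to zero (dimensions and seed below are illustrative):

```python
import numpy

numpy.random.seed(3)  # reproducible illustration only
N, n = 5, 3           # members and state dimension (illustrative)
eps = -1
Q = numpy.identity(N - 1) - numpy.ones((N - 1, N - 1)) / numpy.sqrt(N) / (
    numpy.sqrt(N) - eps
)
Q = numpy.concatenate((Q, [eps * numpy.ones(N - 1) / numpy.sqrt(N)]), axis=0)
R, _ = numpy.linalg.qr(numpy.random.normal(size=(N - 1, N - 1)))  # random rotation
Q = Q @ R
Zr = numpy.random.normal(size=(N - 1, n))  # N-1 raw anomalies
Zca = (Q @ Zr).T                           # n x N anomalies, one member per column
# Each state component's anomalies sum to zero over the N members
assert numpy.allclose(Zca.sum(axis=1), 0.0)
```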
# ==============================================================================
-def EnsembleMean( __Ensemble ):
+def EnsembleMean(__Ensemble):
"Renvoie la moyenne empirique d'un ensemble"
- return numpy.asarray(__Ensemble).mean(axis=1, dtype=mfp).astype('float').reshape((-1, 1))
+ return (
+ numpy.asarray(__Ensemble)
+ .mean(axis=1, dtype=mfp)
+ .astype("float")
+ .reshape((-1, 1))
+ )
+
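`EnsembleMean` reduces to a mean over columns reshaped to a column vector, since members are stored column-wise; a toy check:

```python
import numpy

ens = numpy.array([[1.0, 3.0],
                   [2.0, 6.0]])  # two members (columns) of a 2-component state
mean = ens.mean(axis=1).reshape((-1, 1))  # column-vector mean, as the module returns it
assert numpy.allclose(mean, [[2.0], [4.0]])
```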
# ==============================================================================
-def EnsembleOfAnomalies( __Ensemble, __OptMean = None, __Normalisation = 1. ):
+def EnsembleOfAnomalies(__Ensemble, __OptMean=None, __Normalisation=1.0):
"Renvoie les anomalies centrées à partir d'un ensemble"
if __OptMean is None:
- __Em = EnsembleMean( __Ensemble )
+ __Em = EnsembleMean(__Ensemble)
else:
- __Em = numpy.ravel( __OptMean ).reshape((-1, 1))
+ __Em = numpy.ravel(__OptMean).reshape((-1, 1))
#
- return __Normalisation * (numpy.asarray( __Ensemble ) - __Em)
+ return __Normalisation * (numpy.asarray(__Ensemble) - __Em)
+
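`EnsembleOfAnomalies` subtracts the broadcast ensemble mean, so each state component's anomalies sum to zero over the members; a toy check:

```python
import numpy

ens = numpy.array([[1.0, 3.0],
                   [2.0, 6.0]])  # two members (columns) of a 2-component state
anomalies = ens - ens.mean(axis=1).reshape((-1, 1))  # subtract the broadcast mean
assert numpy.allclose(anomalies.sum(axis=1), 0.0)
```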
# ==============================================================================
-def EnsembleErrorCovariance( __Ensemble, __Quick = False ):
+def EnsembleErrorCovariance(__Ensemble, __Quick=False):
"Renvoie l'estimation empirique de la covariance d'ensemble"
if __Quick:
# Covariance rapide mais rarement définie positive
- __Covariance = numpy.cov( __Ensemble )
+ __Covariance = numpy.cov(__Ensemble)
else:
# Résultat souvent identique à numpy.cov, mais plus robuste
- __n, __m = numpy.asarray( __Ensemble ).shape
- __Anomalies = EnsembleOfAnomalies( __Ensemble )
+ __n, __m = numpy.asarray(__Ensemble).shape
+ __Anomalies = EnsembleOfAnomalies(__Ensemble)
# Estimation empirique
- __Covariance = ( __Anomalies @ __Anomalies.T ) / (__m - 1)
+ __Covariance = (__Anomalies @ __Anomalies.T) / (__m - 1)
# Assure la symétrie
- __Covariance = ( __Covariance + __Covariance.T ) * 0.5
+ __Covariance = (__Covariance + __Covariance.T) * 0.5
# Assure la positivité
- __epsilon = mpr * numpy.trace( __Covariance )
+ __epsilon = mpr * numpy.trace(__Covariance)
__Covariance = __Covariance + __epsilon * numpy.identity(__n)
#
return __Covariance
+
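The non-`__Quick` branch of `EnsembleErrorCovariance` can be sketched standalone (the `1.0e-12` shift stands in for the module's `mpr`-based epsilon; the data are random draws for the illustration):

```python
import numpy

ens = numpy.random.RandomState(1).normal(size=(3, 10))  # n=3 state dim, m=10 members
anomalies = ens - ens.mean(axis=1, keepdims=True)
cov = (anomalies @ anomalies.T) / (10 - 1)   # empirical estimate
cov = 0.5 * (cov + cov.T)                    # enforce exact symmetry
cov = cov + 1.0e-12 * numpy.trace(cov) * numpy.identity(3)  # positivity shift
assert numpy.all(numpy.linalg.eigvalsh(cov) > 0.0)
```

The symmetrization and the trace-scaled diagonal shift are what make this estimate safer than a bare `numpy.cov` when the result is later factored or inverted.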
# ==============================================================================
-def SingularValuesEstimation( __Ensemble, __Using = "SVDVALS"):
+def SingularValuesEstimation(__Ensemble, __Using="SVDVALS"):
"Renvoie les valeurs singulières de l'ensemble et leur carré"
if __Using == "SVDVALS": # Recommandé
- __sv = scipy.linalg.svdvals( __Ensemble )
+ __sv = scipy.linalg.svdvals(__Ensemble)
__svsq = __sv**2
elif __Using == "SVD":
- _, __sv, _ = numpy.linalg.svd( __Ensemble )
+ _, __sv, _ = numpy.linalg.svd(__Ensemble)
__svsq = __sv**2
elif __Using == "EIG": # Lent
- __eva, __eve = numpy.linalg.eig( __Ensemble @ __Ensemble.T )
- __svsq = numpy.sort(numpy.abs(numpy.real( __eva )))[::-1]
- __sv = numpy.sqrt( __svsq )
+ __eva, __eve = numpy.linalg.eig(__Ensemble @ __Ensemble.T)
+ __svsq = numpy.sort(numpy.abs(numpy.real(__eva)))[::-1]
+ __sv = numpy.sqrt(__svsq)
elif __Using == "EIGH":
- __eva, __eve = numpy.linalg.eigh( __Ensemble @ __Ensemble.T )
- __svsq = numpy.sort(numpy.abs(numpy.real( __eva )))[::-1]
- __sv = numpy.sqrt( __svsq )
+ __eva, __eve = numpy.linalg.eigh(__Ensemble @ __Ensemble.T)
+ __svsq = numpy.sort(numpy.abs(numpy.real(__eva)))[::-1]
+ __sv = numpy.sqrt(__svsq)
elif __Using == "EIGVALS":
- __eva = numpy.linalg.eigvals( __Ensemble @ __Ensemble.T )
- __svsq = numpy.sort(numpy.abs(numpy.real( __eva )))[::-1]
- __sv = numpy.sqrt( __svsq )
+ __eva = numpy.linalg.eigvals(__Ensemble @ __Ensemble.T)
+ __svsq = numpy.sort(numpy.abs(numpy.real(__eva)))[::-1]
+ __sv = numpy.sqrt(__svsq)
elif __Using == "EIGVALSH":
- __eva = numpy.linalg.eigvalsh( __Ensemble @ __Ensemble.T )
- __svsq = numpy.sort(numpy.abs(numpy.real( __eva )))[::-1]
- __sv = numpy.sqrt( __svsq )
+ __eva = numpy.linalg.eigvalsh(__Ensemble @ __Ensemble.T)
+ __svsq = numpy.sort(numpy.abs(numpy.real(__eva)))[::-1]
+ __sv = numpy.sqrt(__svsq)
else:
- raise ValueError("Error in requested variant name: %s"%__Using)
+ raise ValueError("Error in requested variant name: %s" % __Using)
#
__tisv = __svsq / __svsq.sum()
- __qisv = 1. - __svsq.cumsum() / __svsq.sum()
+ __qisv = 1.0 - __svsq.cumsum() / __svsq.sum()
# Différence à 1.e-16 : __qisv = 1. - __tisv.cumsum()
#
return __sv, __svsq, __tisv, __qisv
+
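The tail quantities returned by `SingularValuesEstimation` can be illustrated with the `numpy` SVD (random data, illustrative only): `tisv` is each mode's share of the total squared singular values, and `qisv` is the residual share left after truncating at each mode.

```python
import numpy

ens = numpy.random.RandomState(2).normal(size=(4, 6))
sv = numpy.linalg.svd(ens, compute_uv=False)  # singular values, as in the "SVD" variant
svsq = sv**2
tisv = svsq / svsq.sum()                  # share of information per mode
qisv = 1.0 - svsq.cumsum() / svsq.sum()   # residual information after each mode
assert numpy.isclose(tisv.sum(), 1.0)
assert numpy.isclose(qisv[-1], 0.0)
```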
# ==============================================================================
-def MaxL2NormByColumn(__Ensemble, __LcCsts = False, __IncludedPoints = []):
+def MaxL2NormByColumn(__Ensemble, __LcCsts=False, __IncludedPoints=[]):
"Maximum des normes L2 calculées par colonne"
if __LcCsts and len(__IncludedPoints) > 0:
normes = numpy.linalg.norm(
- numpy.take(__Ensemble, __IncludedPoints, axis=0, mode='clip'),
- axis = 0,
+ numpy.take(__Ensemble, __IncludedPoints, axis=0, mode="clip"),
+ axis=0,
)
else:
- normes = numpy.linalg.norm( __Ensemble, axis = 0)
+ normes = numpy.linalg.norm(__Ensemble, axis=0)
nmax = numpy.max(normes)
imax = numpy.argmax(normes)
return nmax, imax, normes
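Without the localization mask, `MaxL2NormByColumn` reduces to a column-wise norm plus an argmax; a toy check:

```python
import numpy

ens = numpy.array([[3.0, 0.0],
                   [4.0, 1.0]])
normes = numpy.linalg.norm(ens, axis=0)  # one L2 norm per column
nmax, imax = numpy.max(normes), numpy.argmax(normes)
assert numpy.allclose(normes, [5.0, 1.0])
assert (nmax, imax) == (5.0, 0)
```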
-def MaxLinfNormByColumn(__Ensemble, __LcCsts = False, __IncludedPoints = []):
+
+def MaxLinfNormByColumn(__Ensemble, __LcCsts=False, __IncludedPoints=[]):
"Maximum des normes Linf calculées par colonne"
if __LcCsts and len(__IncludedPoints) > 0:
normes = numpy.linalg.norm(
- numpy.take(__Ensemble, __IncludedPoints, axis=0, mode='clip'),
- axis = 0, ord=numpy.inf,
+ numpy.take(__Ensemble, __IncludedPoints, axis=0, mode="clip"),
+ axis=0,
+ ord=numpy.inf,
)
else:
- normes = numpy.linalg.norm( __Ensemble, axis = 0, ord=numpy.inf)
+ normes = numpy.linalg.norm(__Ensemble, axis=0, ord=numpy.inf)
nmax = numpy.max(normes)
imax = numpy.argmax(normes)
return nmax, imax, normes
+
def InterpolationErrorByColumn(
- __Ensemble = None, __Basis = None, __Points = None, __M = 2, # Usage 1
- __Differences = None, # Usage 2
- __ErrorNorm = None, # Commun
- __LcCsts = False, __IncludedPoints = [], # Commun
- __CDM = False, # ComputeMaxDifference # Commun
- __RMU = False, # ReduceMemoryUse # Commun
- __FTL = False, # ForceTril # Commun
- ): # noqa: E123
+ __Ensemble=None,
+ __Basis=None,
+ __Points=None,
+ __M=2, # Usage 1
+ __Differences=None, # Usage 2
+ __ErrorNorm=None, # Commun
+ __LcCsts=False,
+ __IncludedPoints=[], # Commun
+ __CDM=False, # ComputeMaxDifference # Commun
+ __RMU=False, # ReduceMemoryUse # Commun
+ __FTL=False, # ForceTril # Commun
+):
"Analyse des normes d'erreurs d'interpolation calculées par colonne"
if __ErrorNorm == "L2":
NormByColumn = MaxL2NormByColumn
#
if __Differences is None and not __RMU: # Usage 1
if __FTL:
- rBasis = numpy.tril( __Basis[__Points, :] )
+ rBasis = numpy.tril(__Basis[__Points, :])
else:
rBasis = __Basis[__Points, :]
rEnsemble = __Ensemble[__Points, :]
rBasis_inv = numpy.linalg.inv(rBasis)
Interpolator = numpy.dot(__Basis, numpy.dot(rBasis_inv, rEnsemble))
else:
- rBasis_inv = 1. / rBasis
+ rBasis_inv = 1.0 / rBasis
Interpolator = numpy.outer(__Basis, numpy.outer(rBasis_inv, rEnsemble))
#
differences = __Ensemble - Interpolator
#
elif __Differences is None and __RMU: # Usage 1
if __FTL:
- rBasis = numpy.tril( __Basis[__Points, :] )
+ rBasis = numpy.tril(__Basis[__Points, :])
else:
rBasis = __Basis[__Points, :]
rEnsemble = __Ensemble[__Points, :]
rBasis_inv = numpy.linalg.inv(rBasis)
rCoordinates = numpy.dot(rBasis_inv, rEnsemble)
else:
- rBasis_inv = 1. / rBasis
+ rBasis_inv = 1.0 / rBasis
rCoordinates = numpy.outer(rBasis_inv, rEnsemble)
#
- error = 0.
+ error = 0.0
nbr = -1
for iCol in range(__Ensemble.shape[1]):
if __M > 1:
- iDifference = __Ensemble[:, iCol] - numpy.dot(__Basis, rCoordinates[:, iCol])
+ iDifference = __Ensemble[:, iCol] - numpy.dot(
+ __Basis, rCoordinates[:, iCol]
+ )
else:
- iDifference = __Ensemble[:, iCol] - numpy.ravel(numpy.outer(__Basis, rCoordinates[:, iCol]))
+ iDifference = __Ensemble[:, iCol] - numpy.ravel(
+ numpy.outer(__Basis, rCoordinates[:, iCol])
+ )
#
normDifference, _, _ = NormByColumn(iDifference, __LcCsts, __IncludedPoints)
#
if normDifference > error:
- error = normDifference
- nbr = iCol
+ error = normDifference
+ nbr = iCol
#
if __CDM:
- maxDifference = __Ensemble[:, nbr] - numpy.dot(__Basis, rCoordinates[:, nbr])
+ maxDifference = __Ensemble[:, nbr] - numpy.dot(
+ __Basis, rCoordinates[:, nbr]
+ )
#
else: # Usage 2
differences = __Differences
else:
return error, nbr
+
# ==============================================================================
-def EnsemblePerturbationWithGivenCovariance(
- __Ensemble,
- __Covariance,
- __Seed = None ):
+def EnsemblePerturbationWithGivenCovariance(__Ensemble, __Covariance, __Seed=None):
"Ajout d'une perturbation à chaque membre d'un ensemble selon une covariance prescrite"
if hasattr(__Covariance, "assparsematrix"):
- if (abs(__Ensemble).mean() > mpr) and (abs(__Covariance.assparsematrix()) / abs(__Ensemble).mean() < mpr).all():
+ if (abs(__Ensemble).mean() > mpr) and (
+ abs(__Covariance.assparsematrix()) / abs(__Ensemble).mean() < mpr
+ ).all():
# Traitement d'une covariance nulle ou presque
return __Ensemble
- if (abs(__Ensemble).mean() <= mpr) and (abs(__Covariance.assparsematrix()) < mpr).all():
+ if (abs(__Ensemble).mean() <= mpr) and (
+ abs(__Covariance.assparsematrix()) < mpr
+ ).all():
# Traitement d'une covariance nulle ou presque
return __Ensemble
else:
- if (abs(__Ensemble).mean() > mpr) and (abs(__Covariance) / abs(__Ensemble).mean() < mpr).all():
+ if (abs(__Ensemble).mean() > mpr) and (
+ abs(__Covariance) / abs(__Ensemble).mean() < mpr
+ ).all():
# Traitement d'une covariance nulle ou presque
return __Ensemble
if (abs(__Ensemble).mean() <= mpr) and (abs(__Covariance) < mpr).all():
#
if hasattr(__Covariance, "isscalar") and __Covariance.isscalar():
# Traitement d'une covariance multiple de l'identité
- __zero = 0.
- __std = numpy.sqrt(__Covariance.assparsematrix())
+ __zero = 0.0
+ __std = numpy.sqrt(__Covariance.assparsematrix())
__Ensemble += numpy.random.normal(__zero, __std, size=(__m, __n)).T
#
elif hasattr(__Covariance, "isvector") and __Covariance.isvector():
# Traitement d'une covariance diagonale avec variances non identiques
__zero = numpy.zeros(__n)
- __std = numpy.sqrt(__Covariance.assparsematrix())
- __Ensemble += numpy.asarray([numpy.random.normal(__zero, __std) for i in range(__m)]).T
+ __std = numpy.sqrt(__Covariance.assparsematrix())
+ __Ensemble += numpy.asarray(
+ [numpy.random.normal(__zero, __std) for i in range(__m)]
+ ).T
#
elif hasattr(__Covariance, "ismatrix") and __Covariance.ismatrix():
# Traitement d'une covariance pleine
- __Ensemble += numpy.random.multivariate_normal(numpy.zeros(__n), __Covariance.asfullmatrix(__n), size=__m).T
+ __Ensemble += numpy.random.multivariate_normal(
+ numpy.zeros(__n), __Covariance.asfullmatrix(__n), size=__m
+ ).T
#
elif isinstance(__Covariance, numpy.ndarray):
# Traitement d'une covariance numpy pleine, sachant qu'on arrive ici en dernier
- __Ensemble += numpy.random.multivariate_normal(numpy.zeros(__n), __Covariance, size=__m).T
+ __Ensemble += numpy.random.multivariate_normal(
+ numpy.zeros(__n), __Covariance, size=__m
+ ).T
#
else:
- raise ValueError("Error in ensemble perturbation with inadequate covariance specification")
+ raise ValueError(
+ "Error in ensemble perturbation with inadequate covariance specification"
+ )
#
return __Ensemble
+
# ==============================================================================
def CovarianceInflation(
- __InputCovOrEns,
- __InflationType = None,
- __InflationFactor = None,
- __BackgroundCov = None ):
+ __InputCovOrEns, __InflationType=None, __InflationFactor=None, __BackgroundCov=None
+):
"""
Inflation applicable soit sur Pb ou Pa, soit sur les ensembles EXb ou EXa
if __InputCovOrEns.size == 0:
return __InputCovOrEns
#
- if __InflationType in ["MultiplicativeOnAnalysisCovariance", "MultiplicativeOnBackgroundCovariance"]:
- if __InflationFactor < 1.:
- raise ValueError("Inflation factor for multiplicative inflation has to be greater or equal than 1.")
- if __InflationFactor < 1. + mpr: # No inflation = 1
+ if __InflationType in [
+ "MultiplicativeOnAnalysisCovariance",
+ "MultiplicativeOnBackgroundCovariance",
+ ]:
+ if __InflationFactor < 1.0:
+ raise ValueError(
+                "Inflation factor for multiplicative inflation has to be greater than or equal to 1."
+ )
+ if __InflationFactor < 1.0 + mpr: # No inflation = 1
return __InputCovOrEns
__OutputCovOrEns = __InflationFactor**2 * __InputCovOrEns
#
- elif __InflationType in ["MultiplicativeOnAnalysisAnomalies", "MultiplicativeOnBackgroundAnomalies"]:
- if __InflationFactor < 1.:
- raise ValueError("Inflation factor for multiplicative inflation has to be greater or equal than 1.")
- if __InflationFactor < 1. + mpr: # No inflation = 1
+ elif __InflationType in [
+ "MultiplicativeOnAnalysisAnomalies",
+ "MultiplicativeOnBackgroundAnomalies",
+ ]:
+ if __InflationFactor < 1.0:
+ raise ValueError(
+                "Inflation factor for multiplicative inflation has to be greater than or equal to 1."
+ )
+ if __InflationFactor < 1.0 + mpr: # No inflation = 1
return __InputCovOrEns
- __InputCovOrEnsMean = __InputCovOrEns.mean(axis=1, dtype=mfp).astype('float')
- __OutputCovOrEns = __InputCovOrEnsMean[:, numpy.newaxis] \
- + __InflationFactor * (__InputCovOrEns - __InputCovOrEnsMean[:, numpy.newaxis])
+ __InputCovOrEnsMean = __InputCovOrEns.mean(axis=1, dtype=mfp).astype("float")
+ __OutputCovOrEns = __InputCovOrEnsMean[:, numpy.newaxis] + __InflationFactor * (
+ __InputCovOrEns - __InputCovOrEnsMean[:, numpy.newaxis]
+ )
#
- elif __InflationType in ["AdditiveOnAnalysisCovariance", "AdditiveOnBackgroundCovariance"]:
- if __InflationFactor < 0.:
- raise ValueError("Inflation factor for additive inflation has to be greater or equal than 0.")
+ elif __InflationType in [
+ "AdditiveOnAnalysisCovariance",
+ "AdditiveOnBackgroundCovariance",
+ ]:
+ if __InflationFactor < 0.0:
+ raise ValueError(
+                "Inflation factor for additive inflation has to be greater than or equal to 0."
+ )
if __InflationFactor < mpr: # No inflation = 0
return __InputCovOrEns
__n, __m = __InputCovOrEns.shape
if __n != __m:
- raise ValueError("Additive inflation can only be applied to squared (covariance) matrix.")
+ raise ValueError(
+                "Additive inflation can only be applied to a square (covariance) matrix."
+ )
__tr = __InputCovOrEns.trace() / __n
if __InflationFactor > __tr:
- raise ValueError("Inflation factor for additive inflation has to be small over %.0e."%__tr)
- __OutputCovOrEns = (1. - __InflationFactor) * __InputCovOrEns + __InflationFactor * numpy.identity(__n)
+ raise ValueError(
+                "Inflation factor for additive inflation has to be smaller than %.0e."
+ % __tr
+ )
+ __OutputCovOrEns = (
+ 1.0 - __InflationFactor
+ ) * __InputCovOrEns + __InflationFactor * numpy.identity(__n)
#
elif __InflationType == "HybridOnBackgroundCovariance":
- if __InflationFactor < 0.:
- raise ValueError("Inflation factor for hybrid inflation has to be greater or equal than 0.")
+ if __InflationFactor < 0.0:
+ raise ValueError(
+                "Inflation factor for hybrid inflation has to be greater than or equal to 0."
+ )
if __InflationFactor < mpr: # No inflation = 0
return __InputCovOrEns
__n, __m = __InputCovOrEns.shape
if __n != __m:
- raise ValueError("Additive inflation can only be applied to squared (covariance) matrix.")
+ raise ValueError(
+                "Hybrid inflation can only be applied to a square (covariance) matrix."
+ )
if __BackgroundCov is None:
- raise ValueError("Background covariance matrix B has to be given for hybrid inflation.")
+ raise ValueError(
+ "Background covariance matrix B has to be given for hybrid inflation."
+ )
if __InputCovOrEns.shape != __BackgroundCov.shape:
- raise ValueError("Ensemble covariance matrix has to be of same size than background covariance matrix B.")
- __OutputCovOrEns = (1. - __InflationFactor) * __InputCovOrEns + __InflationFactor * __BackgroundCov
+ raise ValueError(
+                "Ensemble covariance matrix has to be of the same size as background covariance matrix B."
+ )
+ __OutputCovOrEns = (
+ 1.0 - __InflationFactor
+ ) * __InputCovOrEns + __InflationFactor * __BackgroundCov
#
elif __InflationType == "Relaxation":
raise NotImplementedError("Relaxation inflation type not implemented")
#
else:
- raise ValueError("Error in inflation type, '%s' is not a valid keyword."%__InflationType)
+ raise ValueError(
+ "Error in inflation type, '%s' is not a valid keyword." % __InflationType
+ )
#
return __OutputCovOrEns
+
# ==============================================================================
-def HessienneEstimation( __selfA, __nb, __HaM, __HtM, __BI, __RI ):
+def HessienneEstimation(__selfA, __nb, __HaM, __HtM, __BI, __RI):
"Estimation de la Hessienne"
#
__HessienneI = []
for i in range(int(__nb)):
- __ee = numpy.zeros((__nb, 1))
- __ee[i] = 1.
- __HtEE = numpy.dot(__HtM, __ee).reshape((-1, 1))
- __HessienneI.append( numpy.ravel( __BI * __ee + __HaM * (__RI * __HtEE) ) )
+ __ee = numpy.zeros((__nb, 1))
+ __ee[i] = 1.0
+ __HtEE = numpy.dot(__HtM, __ee).reshape((-1, 1))
+ __HessienneI.append(numpy.ravel(__BI * __ee + __HaM * (__RI * __HtEE)))
#
- __A = numpy.linalg.inv(numpy.array( __HessienneI ))
+ __A = numpy.linalg.inv(numpy.array(__HessienneI))
__A = (__A + __A.T) * 0.5 # Symétrie
- __A = __A + mpr * numpy.trace( __A ) * numpy.identity(__nb) # Positivité
+ __A = __A + mpr * numpy.trace(__A) * numpy.identity(__nb) # Positivité
#
if min(__A.shape) != max(__A.shape):
raise ValueError(
- "The %s a posteriori covariance matrix A"%(__selfA._name,) + \
- " is of shape %s, despites it has to be a"%(str(__A.shape),) + \
- " squared matrix. There is an error in the observation operator," + \
- " please check it.")
+ "The %s a posteriori covariance matrix A" % (__selfA._name,)
+            + " is of shape %s, whereas it has to be a" % (str(__A.shape),)
+            + " square matrix. There is an error in the observation operator,"
+ + " please check it."
+ )
if (numpy.diag(__A) < 0).any():
raise ValueError(
- "The %s a posteriori covariance matrix A"%(__selfA._name,) + \
- " has at least one negative value on its diagonal. There is an" + \
- " error in the observation operator, please check it.")
- if logging.getLogger().level < logging.WARNING: # La vérification n'a lieu qu'en debug
+ "The %s a posteriori covariance matrix A" % (__selfA._name,)
+ + " has at least one negative value on its diagonal. There is an"
+ + " error in the observation operator, please check it."
+ )
+ if (
+ logging.getLogger().level < logging.WARNING
+ ): # La vérification n'a lieu qu'en debug
try:
- numpy.linalg.cholesky( __A )
- logging.debug("%s La matrice de covariance a posteriori A est bien symétrique définie positive."%(__selfA._name,))
+ numpy.linalg.cholesky(__A)
+ logging.debug(
+ "%s La matrice de covariance a posteriori A est bien symétrique définie positive."
+ % (__selfA._name,)
+ )
except Exception:
raise ValueError(
- "The %s a posteriori covariance matrix A"%(__selfA._name,) + \
- " is not symmetric positive-definite. Please check your a" + \
- " priori covariances and your observation operator.")
+ "The %s a posteriori covariance matrix A" % (__selfA._name,)
+ + " is not symmetric positive-definite. Please check your a"
+ + " priori covariances and your observation operator."
+ )
#
return __A
+
# ==============================================================================
-def QuantilesEstimations( selfA, A, Xa, HXa = None, Hm = None, HtM = None ):
+def QuantilesEstimations(selfA, A, Xa, HXa=None, Hm=None, HtM=None):
"Estimation des quantiles a posteriori à partir de A>0 (selfA est modifié)"
nbsamples = selfA._parameters["NumberOfSamplesForQuantiles"]
#
else:
LBounds = None
if LBounds is not None:
- LBounds = ForceNumericBounds( LBounds )
+ LBounds = ForceNumericBounds(LBounds)
__Xa = numpy.ravel(Xa)
#
# Échantillonnage des états
- YfQ = None
- EXr = None
+ YfQ = None
+ EXr = None
for i in range(nbsamples):
- if selfA._parameters["SimulationForQuantiles"] == "Linear" and HtM is not None and HXa is not None:
+ if (
+ selfA._parameters["SimulationForQuantiles"] == "Linear"
+ and HtM is not None
+ and HXa is not None
+ ):
dXr = (numpy.random.multivariate_normal(__Xa, A) - __Xa).reshape((-1, 1))
if LBounds is not None: # "EstimateProjection" par défaut
- dXr = numpy.max(numpy.hstack((dXr, LBounds[:, 0].reshape((-1, 1))) - __Xa.reshape((-1, 1))), axis=1)
- dXr = numpy.min(numpy.hstack((dXr, LBounds[:, 1].reshape((-1, 1))) - __Xa.reshape((-1, 1))), axis=1)
+ dXr = numpy.max(
+ numpy.hstack(
+ (dXr, LBounds[:, 0].reshape((-1, 1))) - __Xa.reshape((-1, 1))
+ ),
+ axis=1,
+ )
+ dXr = numpy.min(
+ numpy.hstack(
+ (dXr, LBounds[:, 1].reshape((-1, 1))) - __Xa.reshape((-1, 1))
+ ),
+ axis=1,
+ )
dYr = HtM @ dXr
Yr = HXa.reshape((-1, 1)) + dYr
if selfA._toStore("SampledStateForQuantiles"):
Xr = __Xa + numpy.ravel(dXr)
- elif selfA._parameters["SimulationForQuantiles"] == "NonLinear" and Hm is not None:
+ elif (
+ selfA._parameters["SimulationForQuantiles"] == "NonLinear"
+ and Hm is not None
+ ):
Xr = numpy.random.multivariate_normal(__Xa, A)
if LBounds is not None: # "EstimateProjection" par défaut
- Xr = numpy.max(numpy.hstack((Xr.reshape((-1, 1)), LBounds[:, 0].reshape((-1, 1)))), axis=1)
- Xr = numpy.min(numpy.hstack((Xr.reshape((-1, 1)), LBounds[:, 1].reshape((-1, 1)))), axis=1)
- Yr = numpy.asarray(Hm( Xr ))
+ Xr = numpy.max(
+ numpy.hstack((Xr.reshape((-1, 1)), LBounds[:, 0].reshape((-1, 1)))),
+ axis=1,
+ )
+ Xr = numpy.min(
+ numpy.hstack((Xr.reshape((-1, 1)), LBounds[:, 1].reshape((-1, 1)))),
+ axis=1,
+ )
+ Yr = numpy.asarray(Hm(Xr))
else:
raise ValueError("Quantile simulations has only to be Linear or NonLinear.")
#
YfQ.sort(axis=-1)
YQ = None
for quantile in selfA._parameters["Quantiles"]:
- if not (0. <= float(quantile) <= 1.):
+ if not (0.0 <= float(quantile) <= 1.0):
continue
- indice = int(nbsamples * float(quantile) - 1. / nbsamples)
+ indice = int(nbsamples * float(quantile) - 1.0 / nbsamples)
if YQ is None:
YQ = YfQ[:, indice].reshape((-1, 1))
else:
YQ = numpy.hstack((YQ, YfQ[:, indice].reshape((-1, 1))))
if YQ is not None: # Liste non vide de quantiles
- selfA.StoredVariables["SimulationQuantiles"].store( YQ )
+ selfA.StoredVariables["SimulationQuantiles"].store(YQ)
if selfA._toStore("SampledStateForQuantiles"):
- selfA.StoredVariables["SampledStateForQuantiles"].store( EXr )
+ selfA.StoredVariables["SampledStateForQuantiles"].store(EXr)
#
return 0
+
# ==============================================================================
-def ForceNumericBounds( __Bounds, __infNumbers = True ):
+def ForceNumericBounds(__Bounds, __infNumbers=True):
"Force les bornes à être des valeurs numériques, sauf si globalement None"
# Conserve une valeur par défaut à None s'il n'y a pas de bornes
if __Bounds is None:
return None
#
# Converti toutes les bornes individuelles None à +/- l'infini chiffré
- __Bounds = numpy.asarray( __Bounds, dtype=float ).reshape((-1, 2))
+ __Bounds = numpy.asarray(__Bounds, dtype=float).reshape((-1, 2))
if len(__Bounds.shape) != 2 or __Bounds.shape[0] == 0 or __Bounds.shape[1] != 2:
- raise ValueError("Incorrectly shaped bounds data (effective shape is %s)"%(__Bounds.shape,))
+ raise ValueError(
+ "Incorrectly shaped bounds data (effective shape is %s)" % (__Bounds.shape,)
+ )
if __infNumbers:
- __Bounds[numpy.isnan(__Bounds[:, 0]), 0] = -float('inf')
- __Bounds[numpy.isnan(__Bounds[:, 1]), 1] = float('inf')
+ __Bounds[numpy.isnan(__Bounds[:, 0]), 0] = -float("inf")
+ __Bounds[numpy.isnan(__Bounds[:, 1]), 1] = float("inf")
else:
__Bounds[numpy.isnan(__Bounds[:, 0]), 0] = -sys.float_info.max
__Bounds[numpy.isnan(__Bounds[:, 1]), 1] = sys.float_info.max
return __Bounds
+
# ==============================================================================
-def RecentredBounds( __Bounds, __Center, __Scale = None ):
+def RecentredBounds(__Bounds, __Center, __Scale=None):
"Recentre les bornes autour de 0, sauf si globalement None"
# Conserve une valeur par défaut à None s'il n'y a pas de bornes
if __Bounds is None:
#
if __Scale is None:
# Recentre les valeurs numériques de bornes
- return ForceNumericBounds( __Bounds ) - numpy.ravel( __Center ).reshape((-1, 1))
+ return ForceNumericBounds(__Bounds) - numpy.ravel(__Center).reshape((-1, 1))
else:
# Recentre les valeurs numériques de bornes et change l'échelle par une matrice
- return __Scale @ (ForceNumericBounds( __Bounds, False ) - numpy.ravel( __Center ).reshape((-1, 1)))
+ return __Scale @ (
+ ForceNumericBounds(__Bounds, False) - numpy.ravel(__Center).reshape((-1, 1))
+ )
+
# ==============================================================================
-def ApplyBounds( __Vector, __Bounds, __newClip = True ):
+def ApplyBounds(__Vector, __Bounds, __newClip=True):
"Applique des bornes numériques à un point"
# Conserve une valeur par défaut s'il n'y a pas de bornes
if __Bounds is None:
if not isinstance(__Bounds, numpy.ndarray): # Is an array
raise ValueError("Incorrect array definition of bounds data")
if 2 * __Vector.size != __Bounds.size: # Is a 2 column array of vector length
- raise ValueError("Incorrect bounds number (%i) to be applied for this vector (of size %i)"%(__Bounds.size, __Vector.size))
+ raise ValueError(
+            "Incorrect number of bounds (%i) to be applied to this vector (of size %i)"
+ % (__Bounds.size, __Vector.size)
+ )
if len(__Bounds.shape) != 2 or min(__Bounds.shape) <= 0 or __Bounds.shape[1] != 2:
raise ValueError("Incorrectly shaped bounds data")
#
__Bounds[:, 1].reshape(__Vector.shape),
)
else:
- __Vector = numpy.max(numpy.hstack((__Vector.reshape((-1, 1)), numpy.asmatrix(__Bounds)[:, 0])), axis=1)
- __Vector = numpy.min(numpy.hstack((__Vector.reshape((-1, 1)), numpy.asmatrix(__Bounds)[:, 1])), axis=1)
+ __Vector = numpy.max(
+ numpy.hstack((__Vector.reshape((-1, 1)), numpy.asmatrix(__Bounds)[:, 0])),
+ axis=1,
+ )
+ __Vector = numpy.min(
+ numpy.hstack((__Vector.reshape((-1, 1)), numpy.asmatrix(__Bounds)[:, 1])),
+ axis=1,
+ )
__Vector = numpy.asarray(__Vector)
#
return __Vector
+
# ==============================================================================
-def VariablesAndIncrementsBounds( __Bounds, __BoxBounds, __Xini, __Name, __Multiplier = 1. ):
- __Bounds = ForceNumericBounds( __Bounds )
- __BoxBounds = ForceNumericBounds( __BoxBounds )
+def VariablesAndIncrementsBounds(
+ __Bounds, __BoxBounds, __Xini, __Name, __Multiplier=1.0
+):
+ __Bounds = ForceNumericBounds(__Bounds)
+ __BoxBounds = ForceNumericBounds(__BoxBounds)
if __Bounds is None and __BoxBounds is None:
- raise ValueError("Algorithm %s requires bounds on all variables (by Bounds), or on all variable increments (by BoxBounds), or both, to be explicitly given."%(__Name,))
+ raise ValueError(
+ "Algorithm %s requires bounds on all variables (by Bounds), or on all"
+ % (__Name,)
+ + " variable increments (by BoxBounds), or both, to be explicitly given."
+ )
elif __Bounds is None and __BoxBounds is not None:
- __Bounds = __BoxBounds
- logging.debug("%s Definition of parameter bounds from current parameter increment bounds"%(__Name,))
+ __Bounds = __BoxBounds
+ logging.debug(
+ "%s Definition of parameter bounds from current parameter increment bounds"
+ % (__Name,)
+ )
elif __Bounds is not None and __BoxBounds is None:
- __BoxBounds = __Multiplier * (__Bounds - __Xini.reshape((-1, 1))) # "M * [Xmin,Xmax]-Xini"
- logging.debug("%s Definition of parameter increment bounds from current parameter bounds"%(__Name,))
+ __BoxBounds = __Multiplier * (
+ __Bounds - __Xini.reshape((-1, 1))
+ ) # "M * [Xmin,Xmax]-Xini"
+ logging.debug(
+ "%s Definition of parameter increment bounds from current parameter bounds"
+ % (__Name,)
+ )
return __Bounds, __BoxBounds
+
# ==============================================================================
-def Apply3DVarRecentringOnEnsemble( __EnXn, __EnXf, __Ynpu, __HO, __R, __B, __SuppPars ):
+def Apply3DVarRecentringOnEnsemble(__EnXn, __EnXf, __Ynpu, __HO, __R, __B, __SuppPars):
"Recentre l'ensemble Xn autour de l'analyse 3DVAR"
__Betaf = __SuppPars["HybridCovarianceEquilibrium"]
#
- Xf = EnsembleMean( __EnXf )
- Pf = Covariance( asCovariance=EnsembleErrorCovariance(__EnXf) )
+ Xf = EnsembleMean(__EnXf)
+ Pf = Covariance(asCovariance=EnsembleErrorCovariance(__EnXf))
Pf = (1 - __Betaf) * __B.asfullmatrix(Xf.size) + __Betaf * Pf
#
selfB = PartialAlgorithm("3DVAR")
selfB._parameters["Minimizer"] = "LBFGSB"
- selfB._parameters["MaximumNumberOfIterations"] = __SuppPars["HybridMaximumNumberOfIterations"]
- selfB._parameters["CostDecrementTolerance"] = __SuppPars["HybridCostDecrementTolerance"]
+ selfB._parameters["MaximumNumberOfIterations"] = __SuppPars[
+ "HybridMaximumNumberOfIterations"
+ ]
+ selfB._parameters["CostDecrementTolerance"] = __SuppPars[
+ "HybridCostDecrementTolerance"
+ ]
selfB._parameters["ProjectedGradientTolerance"] = -1
- selfB._parameters["GradientNormTolerance"] = 1.e-05
+ selfB._parameters["GradientNormTolerance"] = 1.0e-05
selfB._parameters["StoreInternalVariables"] = False
selfB._parameters["optiprint"] = -1
selfB._parameters["optdisp"] = 0
selfB._parameters["Bounds"] = None
selfB._parameters["InitializationPoint"] = Xf
from daAlgorithms.Atoms import std3dvar
+
std3dvar.std3dvar(selfB, Xf, __Ynpu, None, __HO, None, __R, Pf)
Xa = selfB.get("Analysis")[-1].reshape((-1, 1))
del selfB
#
- return Xa + EnsembleOfAnomalies( __EnXn )
+ return Xa + EnsembleOfAnomalies(__EnXn)
+
# ==============================================================================
-def GenerateRandomPointInHyperSphere( __Center, __Radius ):
+def GenerateRandomPointInHyperSphere(__Center, __Radius):
"Génère un point aléatoire uniformément à l'intérieur d'une hyper-sphère"
- __Dimension = numpy.asarray( __Center ).size
- __GaussDelta = numpy.random.normal( 0, 1, size=__Center.shape )
- __VectorNorm = numpy.linalg.norm( __GaussDelta )
- __PointOnHS = __Radius * (__GaussDelta / __VectorNorm)
- __MoveInHS = math.exp( math.log(numpy.random.uniform()) / __Dimension) # rand()**1/n
- __PointInHS = __MoveInHS * __PointOnHS
+ __Dimension = numpy.asarray(__Center).size
+ __GaussDelta = numpy.random.normal(0, 1, size=__Center.shape)
+ __VectorNorm = numpy.linalg.norm(__GaussDelta)
+ __PointOnHS = __Radius * (__GaussDelta / __VectorNorm)
+    __MoveInHS = math.exp(math.log(numpy.random.uniform()) / __Dimension)  # rand()**(1/n)
+ __PointInHS = __MoveInHS * __PointOnHS
return __Center + __PointInHS
+
# ==============================================================================
class GenerateWeightsAndSigmaPoints(object):
"Génère les points sigma et les poids associés"
- def __init__(self,
- Nn=0, EO="State", VariantM="UKF",
- Alpha=None, Beta=2., Kappa=0.):
+ def __init__(
+ self, Nn=0, EO="State", VariantM="UKF", Alpha=None, Beta=2.0, Kappa=0.0
+ ):
self.Nn = int(Nn)
- self.Alpha = numpy.longdouble( Alpha )
- self.Beta = numpy.longdouble( Beta )
+ self.Alpha = numpy.longdouble(Alpha)
+ self.Beta = numpy.longdouble(Beta)
if abs(Kappa) < 2 * mpr:
if EO == "Parameters" and VariantM == "UKF":
self.Kappa = 3 - self.Nn
self.Kappa = 0
else:
self.Kappa = Kappa
- self.Kappa = numpy.longdouble( self.Kappa )
- self.Lambda = self.Alpha**2 * ( self.Nn + self.Kappa ) - self.Nn
- self.Gamma = self.Alpha * numpy.sqrt( self.Nn + self.Kappa )
+ self.Kappa = numpy.longdouble(self.Kappa)
+ self.Lambda = self.Alpha**2 * (self.Nn + self.Kappa) - self.Nn
+ self.Gamma = self.Alpha * numpy.sqrt(self.Nn + self.Kappa)
# Rq.: Gamma = sqrt(n+Lambda) = Alpha*sqrt(n+Kappa)
- assert 0. < self.Alpha <= 1., "Alpha has to be between 0 strictly and 1 included"
+ assert (
+ 0.0 < self.Alpha <= 1.0
+        ), "Alpha has to be strictly greater than 0 and at most 1"
#
if VariantM == "UKF":
self.Wm, self.Wc, self.SC = self.__UKF2000()
elif VariantM == "5OS":
self.Wm, self.Wc, self.SC = self.__5OS2002()
else:
- raise ValueError("Variant \"%s\" is not a valid one."%VariantM)
+ raise ValueError('Variant "%s" is not a valid one.' % VariantM)
def __UKF2000(self):
"Standard Set, Julier et al. 2000 (aka Canonical UKF)"
# Rq.: W^{(m)}_{i=/=0} = 1. / (2.*(n + Lambda))
- Winn = 1. / (2. * ( self.Nn + self.Kappa ) * self.Alpha**2)
+ Winn = 1.0 / (2.0 * (self.Nn + self.Kappa) * self.Alpha**2)
Ww = []
- Ww.append( 0. )
+ Ww.append(0.0)
for point in range(2 * self.Nn):
- Ww.append( Winn )
+ Ww.append(Winn)
# Rq.: LsLpL = Lambda / (n + Lambda)
- LsLpL = 1. - self.Nn / (self.Alpha**2 * ( self.Nn + self.Kappa ))
- Wm = numpy.array( Ww )
+ LsLpL = 1.0 - self.Nn / (self.Alpha**2 * (self.Nn + self.Kappa))
+ Wm = numpy.array(Ww)
Wm[0] = LsLpL
- Wc = numpy.array( Ww )
- Wc[0] = LsLpL + (1. - self.Alpha**2 + self.Beta)
+ Wc = numpy.array(Ww)
+ Wc[0] = LsLpL + (1.0 - self.Alpha**2 + self.Beta)
# OK: assert abs(Wm.sum()-1.) < self.Nn*mpr, "UKF ill-conditioned %s >= %s"%(abs(Wm.sum()-1.), self.Nn*mpr)
#
SC = numpy.zeros((self.Nn, len(Ww)))
for ligne in range(self.Nn):
it = ligne + 1
- SC[ligne, it ] = self.Gamma
+ SC[ligne, it] = self.Gamma
SC[ligne, self.Nn + it] = -self.Gamma
#
return Wm, Wc, SC
def __S3F2022(self):
"Scaled Spherical Simplex Set, Papakonstantinou et al. 2022"
# Rq.: W^{(m)}_{i=/=0} = (n + Kappa) / ((n + Lambda) * (n + 1 + Kappa))
- Winn = 1. / ((self.Nn + 1. + self.Kappa) * self.Alpha**2)
+ Winn = 1.0 / ((self.Nn + 1.0 + self.Kappa) * self.Alpha**2)
Ww = []
- Ww.append( 0. )
+ Ww.append(0.0)
for point in range(self.Nn + 1):
- Ww.append( Winn )
+ Ww.append(Winn)
# Rq.: LsLpL = Lambda / (n + Lambda)
- LsLpL = 1. - self.Nn / (self.Alpha**2 * ( self.Nn + self.Kappa ))
- Wm = numpy.array( Ww )
+ LsLpL = 1.0 - self.Nn / (self.Alpha**2 * (self.Nn + self.Kappa))
+ Wm = numpy.array(Ww)
Wm[0] = LsLpL
- Wc = numpy.array( Ww )
- Wc[0] = LsLpL + (1. - self.Alpha**2 + self.Beta)
+ Wc = numpy.array(Ww)
+ Wc[0] = LsLpL + (1.0 - self.Alpha**2 + self.Beta)
# OK: assert abs(Wm.sum()-1.) < self.Nn*mpr, "S3F ill-conditioned %s >= %s"%(abs(Wm.sum()-1.), self.Nn*mpr)
#
SC = numpy.zeros((self.Nn, len(Ww)))
for ligne in range(self.Nn):
it = ligne + 1
- q_t = it / math.sqrt( it * (it + 1) * Winn )
- SC[ligne, 1:it + 1] = -q_t / it
- SC[ligne, it + 1 ] = q_t
+ q_t = it / math.sqrt(it * (it + 1) * Winn)
+ SC[ligne, 1 : it + 1] = -q_t / it # noqa: E203
+ SC[ligne, it + 1] = q_t
#
return Wm, Wc, SC
def __MSS2011(self):
"Minimum Set, Menegaz et al. 2011"
rho2 = (1 - self.Alpha) / self.Nn
- Cc = numpy.real(scipy.linalg.sqrtm( numpy.identity(self.Nn) - rho2 ))
- Ww = self.Alpha * rho2 * scipy.linalg.inv(Cc) @ numpy.ones(self.Nn) @ scipy.linalg.inv(Cc.T)
+ Cc = numpy.real(scipy.linalg.sqrtm(numpy.identity(self.Nn) - rho2))
+ Ww = (
+ self.Alpha
+ * rho2
+ * scipy.linalg.inv(Cc)
+ @ numpy.ones(self.Nn)
+ @ scipy.linalg.inv(Cc.T)
+ )
Wm = Wc = numpy.concatenate((Ww, [self.Alpha]))
# OK: assert abs(Wm.sum()-1.) < self.Nn*mpr, "MSS ill-conditioned %s >= %s"%(abs(Wm.sum()-1.), self.Nn*mpr)
+ Wm = Wc = Wm / numpy.sum(Wm) # Renormalisation explicite
#
# inv(sqrt(W)) = diag(inv(sqrt(W)))
- SC1an = Cc @ numpy.diag(1. / numpy.sqrt( Ww ))
- SCnpu = (- numpy.sqrt(rho2) / numpy.sqrt(self.Alpha)) * numpy.ones(self.Nn).reshape((-1, 1))
+ SC1an = Cc @ numpy.diag(1.0 / numpy.sqrt(Ww))
+ SCnpu = (-numpy.sqrt(rho2) / numpy.sqrt(self.Alpha)) * numpy.ones(
+ self.Nn
+ ).reshape((-1, 1))
SC = numpy.concatenate((SC1an, SCnpu), axis=1)
#
return Wm, Wc, SC
"Fifth Order Set, Lerner 2002"
Ww = []
for point in range(2 * self.Nn):
- Ww.append( (4. - self.Nn) / 18. )
+ Ww.append((4.0 - self.Nn) / 18.0)
for point in range(2 * self.Nn, 2 * self.Nn**2):
- Ww.append( 1. / 36. )
- Ww.append( (self.Nn**2 - 7 * self.Nn) / 18. + 1.)
- Wm = Wc = numpy.array( Ww )
+ Ww.append(1.0 / 36.0)
+ Ww.append((self.Nn**2 - 7 * self.Nn) / 18.0 + 1.0)
+ Wm = Wc = numpy.array(Ww)
# OK: assert abs(Wm.sum()-1.) < self.Nn*mpr, "5OS ill-conditioned %s >= %s"%(abs(Wm.sum()-1.), self.Nn*mpr)
#
- xi1n = numpy.diag( math.sqrt(3) * numpy.ones( self.Nn ) )
- xi2n = numpy.diag( -math.sqrt(3) * numpy.ones( self.Nn ) )
+ xi1n = numpy.diag(math.sqrt(3) * numpy.ones(self.Nn))
+ xi2n = numpy.diag(-math.sqrt(3) * numpy.ones(self.Nn))
#
xi3n1 = numpy.zeros((int((self.Nn - 1) * self.Nn / 2), self.Nn), dtype=float)
xi3n2 = numpy.zeros((int((self.Nn - 1) * self.Nn / 2), self.Nn), dtype=float)
xi4n1[ia, i1] = xi4n1[ia, i2] = math.sqrt(3)
xi4n2[ia, i1] = xi4n2[ia, i2] = -math.sqrt(3)
ia += 1
- SC = numpy.concatenate((xi1n, xi2n, xi3n1, xi3n2, xi4n1, xi4n2, numpy.zeros((1, self.Nn)))).T
+ SC = numpy.concatenate(
+ (xi1n, xi2n, xi3n1, xi3n2, xi4n1, xi4n2, numpy.zeros((1, self.Nn)))
+ ).T
#
return Wm, Wc, SC
def nbOfPoints(self):
- assert self.Nn == self.SC.shape[0], "Size mismatch %i =/= %i"%(self.Nn, self.SC.shape[0])
- assert self.Wm.size == self.SC.shape[1], "Size mismatch %i =/= %i"%(self.Wm.size, self.SC.shape[1])
- assert self.Wm.size == self.Wc.size, "Size mismatch %i =/= %i"%(self.Wm.size, self.Wc.size)
+ assert self.Nn == self.SC.shape[0], "Size mismatch %i =/= %i" % (
+ self.Nn,
+ self.SC.shape[0],
+ )
+ assert self.Wm.size == self.SC.shape[1], "Size mismatch %i =/= %i" % (
+ self.Wm.size,
+ self.SC.shape[1],
+ )
+ assert self.Wm.size == self.Wc.size, "Size mismatch %i =/= %i" % (
+ self.Wm.size,
+ self.Wc.size,
+ )
return self.Wm.size
def get(self):
def __repr__(self):
"x.__repr__() <==> repr(x)"
- msg = ""
- msg += " Alpha = %s\n"%self.Alpha
- msg += " Beta = %s\n"%self.Beta
- msg += " Kappa = %s\n"%self.Kappa
- msg += " Lambda = %s\n"%self.Lambda
- msg += " Gamma = %s\n"%self.Gamma
- msg += " Wm = %s\n"%self.Wm
- msg += " Wc = %s\n"%self.Wc
- msg += " sum(Wm) = %s\n"%numpy.sum(self.Wm)
- msg += " SC =\n%s\n"%self.SC
+ msg = ""
+ msg += " Alpha = %s\n" % self.Alpha
+ msg += " Beta = %s\n" % self.Beta
+ msg += " Kappa = %s\n" % self.Kappa
+ msg += " Lambda = %s\n" % self.Lambda
+ msg += " Gamma = %s\n" % self.Gamma
+ msg += " Wm = %s\n" % self.Wm
+ msg += " Wc = %s\n" % self.Wc
+ msg += " sum(Wm) = %s\n" % numpy.sum(self.Wm)
+ msg += " SC =\n%s\n" % self.SC
return msg
+
# ==============================================================================
-def GetNeighborhoodTopology( __ntype, __ipop ):
+def GetNeighborhoodTopology(__ntype, __ipop):
"Renvoi une topologie de connexion pour une population de points"
- if __ntype in ["FullyConnectedNeighborhood", "FullyConnectedNeighbourhood", "gbest"]:
+ if __ntype in [
+ "FullyConnectedNeighborhood",
+ "FullyConnectedNeighbourhood",
+ "gbest",
+ ]:
__topology = [__ipop for __i in __ipop]
- elif __ntype in ["RingNeighborhoodWithRadius1", "RingNeighbourhoodWithRadius1", "lbest"]:
+ elif __ntype in [
+ "RingNeighborhoodWithRadius1",
+ "RingNeighbourhoodWithRadius1",
+ "lbest",
+ ]:
__cpop = list(__ipop[-1:]) + list(__ipop) + list(__ipop[:1])
- __topology = [__cpop[__n:__n + 3] for __n in range(len(__ipop))]
+ __topology = [__cpop[__n : __n + 3] for __n in range(len(__ipop))] # noqa: E203
elif __ntype in ["RingNeighborhoodWithRadius2", "RingNeighbourhoodWithRadius2"]:
__cpop = list(__ipop[-2:]) + list(__ipop) + list(__ipop[:2])
- __topology = [__cpop[__n:__n + 5] for __n in range(len(__ipop))]
- elif __ntype in ["AdaptativeRandomWith3Neighbors", "AdaptativeRandomWith3Neighbours", "abest"]:
+ __topology = [__cpop[__n : __n + 5] for __n in range(len(__ipop))] # noqa: E203
+ elif __ntype in [
+ "AdaptativeRandomWith3Neighbors",
+ "AdaptativeRandomWith3Neighbours",
+ "abest",
+ ]:
__cpop = 3 * list(__ipop)
__topology = [[__i] + list(numpy.random.choice(__cpop, 3)) for __i in __ipop]
- elif __ntype in ["AdaptativeRandomWith5Neighbors", "AdaptativeRandomWith5Neighbours"]:
+ elif __ntype in [
+ "AdaptativeRandomWith5Neighbors",
+ "AdaptativeRandomWith5Neighbours",
+ ]:
__cpop = 5 * list(__ipop)
__topology = [[__i] + list(numpy.random.choice(__cpop, 5)) for __i in __ipop]
else:
- raise ValueError("Swarm topology type unavailable because name \"%s\" is unknown."%__ntype)
+ raise ValueError(
+ 'Swarm topology type unavailable because name "%s" is unknown.' % __ntype
+ )
return __topology
+
# ==============================================================================
-def FindIndexesFromNames( __NameOfLocations = None, __ExcludeLocations = None, ForceArray = False ):
+def FindIndexesFromNames(
+ __NameOfLocations=None, __ExcludeLocations=None, ForceArray=False
+):
"Exprime les indices des noms exclus, en ignorant les absents"
if __ExcludeLocations is None:
__ExcludeIndexes = ()
- elif isinstance(__ExcludeLocations, (list, numpy.ndarray, tuple)) and len(__ExcludeLocations) == 0:
+ elif (
+ isinstance(__ExcludeLocations, (list, numpy.ndarray, tuple))
+ and len(__ExcludeLocations) == 0
+ ):
__ExcludeIndexes = ()
# ----------
elif __NameOfLocations is None:
__ExcludeIndexes = numpy.asarray(__ExcludeLocations, dtype=int)
except ValueError as e:
if "invalid literal for int() with base 10:" in str(e):
- raise ValueError("to exclude named locations, initial location name list can not be void and has to have the same length as one state")
+ raise ValueError(
+                "to exclude named locations, the initial location name list"
+                + " cannot be empty and has to have the same length as one state"
+ )
else:
raise ValueError(str(e))
- elif isinstance(__NameOfLocations, (list, numpy.ndarray, tuple)) and len(__NameOfLocations) == 0:
+ elif (
+ isinstance(__NameOfLocations, (list, numpy.ndarray, tuple))
+ and len(__NameOfLocations) == 0
+ ):
try:
__ExcludeIndexes = numpy.asarray(__ExcludeLocations, dtype=int)
except ValueError as e:
if "invalid literal for int() with base 10:" in str(e):
- raise ValueError("to exclude named locations, initial location name list can not be void and has to have the same length as one state")
+ raise ValueError(
+                "to exclude named locations, the initial location name list"
+                + " cannot be empty and has to have the same length as one state"
+ )
else:
raise ValueError(str(e))
# ----------
__ExcludeIndexes = numpy.asarray(__ExcludeLocations, dtype=int)
except ValueError as e:
if "invalid literal for int() with base 10:" in str(e):
- if len(__NameOfLocations) < 1.e6 + 1 and len(__ExcludeLocations) > 1500:
+ if (
+ len(__NameOfLocations) < 1.0e6 + 1
+ and len(__ExcludeLocations) > 1500
+ ):
__Heuristic = True
else:
__Heuristic = False
if ForceArray or __Heuristic:
# Recherche par array permettant des noms invalides, peu efficace
- __NameToIndex = dict(numpy.array((
- __NameOfLocations,
- range(len(__NameOfLocations))
- )).T)
- __ExcludeIndexes = numpy.asarray([__NameToIndex.get(k, -1) for k in __ExcludeLocations], dtype=int)
+ __NameToIndex = dict(
+ numpy.array(
+ (__NameOfLocations, range(len(__NameOfLocations)))
+ ).T
+ )
+ __ExcludeIndexes = numpy.asarray(
+ [__NameToIndex.get(k, -1) for k in __ExcludeLocations],
+ dtype=int,
+ )
#
else:
# Recherche par liste permettant des noms invalides, très efficace
- def __NameToIndex_get( cle, default = -1 ):
+ def __NameToIndex_get(cle, default=-1):
if cle in __NameOfLocations:
return __NameOfLocations.index(cle)
else:
return default
- __ExcludeIndexes = numpy.asarray([__NameToIndex_get(k, -1) for k in __ExcludeLocations], dtype=int)
+
+ __ExcludeIndexes = numpy.asarray(
+ [__NameToIndex_get(k, -1) for k in __ExcludeLocations],
+ dtype=int,
+ )
#
- # Recherche par liste interdisant des noms invalides, mais encore un peu plus efficace
- # __ExcludeIndexes = numpy.asarray([__NameOfLocations.index(k) for k in __ExcludeLocations], dtype=int)
+ # Exemple de recherche par liste encore un peu plus efficace,
+ # mais interdisant des noms invalides :
+ # __ExcludeIndexes = numpy.asarray(
+ # [__NameOfLocations.index(k) for k in __ExcludeLocations],
+ # dtype=int,
+ # )
#
# Ignore les noms absents
- __ExcludeIndexes = numpy.compress(__ExcludeIndexes > -1, __ExcludeIndexes)
+ __ExcludeIndexes = numpy.compress(
+ __ExcludeIndexes > -1, __ExcludeIndexes
+ )
if len(__ExcludeIndexes) == 0:
__ExcludeIndexes = ()
else:
# ----------
return __ExcludeIndexes
+
# ==============================================================================
def BuildComplexSampleList(
- __SampleAsnUplet,
- __SampleAsExplicitHyperCube,
- __SampleAsMinMaxStepHyperCube,
- __SampleAsMinMaxLatinHyperCube,
- __SampleAsMinMaxSobolSequence,
- __SampleAsIndependantRandomVariables,
- __X0,
- __Seed = None ):
+ __SampleAsnUplet,
+ __SampleAsExplicitHyperCube,
+ __SampleAsMinMaxStepHyperCube,
+ __SampleAsMinMaxLatinHyperCube,
+ __SampleAsMinMaxSobolSequence,
+ __SampleAsIndependantRandomVariables,
+ __X0,
+ __Seed=None,
+):
# ---------------------------
if len(__SampleAsnUplet) > 0:
sampleList = __SampleAsnUplet
for i, Xx in enumerate(sampleList):
if numpy.ravel(Xx).size != __X0.size:
- raise ValueError("The size %i of the %ith state X in the sample and %i of the checking point Xb are different, they have to be identical."%(numpy.ravel(Xx).size, i + 1, __X0.size))
+ raise ValueError(
+ "The size %i of the %ith state X in the sample and %i of the"
+ % (numpy.ravel(Xx).size, i + 1, __X0.size)
+ + " checking point Xb are different, they have to be identical."
+ )
# ---------------------------
elif len(__SampleAsExplicitHyperCube) > 0:
sampleList = itertools.product(*list(__SampleAsExplicitHyperCube))
coordinatesList = []
for i, dim in enumerate(__SampleAsMinMaxStepHyperCube):
if len(dim) != 3:
- raise ValueError("For dimension %i, the variable definition \"%s\" is incorrect, it should be [min,max,step]."%(i, dim))
+ raise ValueError(
+ 'For dimension %i, the variable definition "%s" is incorrect,'
+ % (i, dim)
+ + " it should be [min,max,step]."
+ )
else:
- coordinatesList.append(numpy.linspace(dim[0], dim[1], 1 + int((float(dim[1]) - float(dim[0])) / float(dim[2]))))
+ coordinatesList.append(
+ numpy.linspace(
+ dim[0],
+ dim[1],
+ 1 + int((float(dim[1]) - float(dim[0])) / float(dim[2])),
+ )
+ )
sampleList = itertools.product(*coordinatesList)
# ---------------------------
elif len(__SampleAsMinMaxLatinHyperCube) > 0:
if vt(scipy.version.version) <= vt("1.7.0"):
- __msg = "In order to use Latin Hypercube sampling, you must at least use Scipy version 1.7.0 (and you are presently using Scipy %s). A void sample is then generated."%scipy.version.version
+ __msg = (
+ "In order to use Latin Hypercube sampling, you must at least use"
+ + " Scipy version 1.7.0 (and you are presently using"
+ + " Scipy %s). A void sample is then generated." % scipy.version.version
+ )
warnings.warn(__msg, FutureWarning, stacklevel=50)
sampleList = []
else:
__spDesc = list(__SampleAsMinMaxLatinHyperCube)
- __nbDime, __nbSamp = map(int, __spDesc.pop()) # Réduction du dernier
+ __nbDime, __nbSamp = map(int, __spDesc.pop()) # Réduction du dernier
__sample = scipy.stats.qmc.LatinHypercube(
- d = len(__spDesc),
- seed = numpy.random.default_rng(__Seed),
+ d=len(__spDesc),
+ seed=numpy.random.default_rng(__Seed),
)
- __sample = __sample.random(n = __nbSamp)
+ __sample = __sample.random(n=__nbSamp)
__bounds = numpy.array(__spDesc)[:, 0:2]
__l_bounds = __bounds[:, 0]
__u_bounds = __bounds[:, 1]
# ---------------------------
elif len(__SampleAsMinMaxSobolSequence) > 0:
if vt(scipy.version.version) <= vt("1.7.0"):
- __msg = "In order to use Latin Hypercube sampling, you must at least use Scipy version 1.7.0 (and you are presently using Scipy %s). A void sample is then generated."%scipy.version.version
+ __msg = (
+ "In order to use Latin Hypercube sampling, you must at least use"
+ + " Scipy version 1.7.0 (and you are presently using"
+ + " Scipy %s). A void sample is then generated." % scipy.version.version
+ )
warnings.warn(__msg, FutureWarning, stacklevel=50)
sampleList = []
else:
__spDesc = list(__SampleAsMinMaxSobolSequence)
- __nbDime, __nbSamp = map(int, __spDesc.pop()) # Réduction du dernier
+ __nbDime, __nbSamp = map(int, __spDesc.pop()) # Réduction du dernier
if __nbDime != len(__spDesc):
- warnings.warn("Declared space dimension (%i) is not equal to number of bounds (%i), the last one will be used."%(__nbDime, len(__spDesc)), FutureWarning, stacklevel=50)
+ __msg = (
+ "Declared space dimension"
+ + " (%i) is not equal to number of bounds (%i),"
+ % (__nbDime, len(__spDesc))
+ + " the last one will be used."
+ )
+ warnings.warn(
+ __msg,
+ FutureWarning,
+ stacklevel=50,
+ )
__sample = scipy.stats.qmc.Sobol(
- d = len(__spDesc),
- seed = numpy.random.default_rng(__Seed),
+ d=len(__spDesc),
+ seed=numpy.random.default_rng(__Seed),
)
- __sample = __sample.random_base2(m = int(math.log2(__nbSamp)) + 1)
+ __sample = __sample.random_base2(m=int(math.log2(__nbSamp)) + 1)
__bounds = numpy.array(__spDesc)[:, 0:2]
__l_bounds = __bounds[:, 0]
__u_bounds = __bounds[:, 1]
coordinatesList = []
for i, dim in enumerate(__SampleAsIndependantRandomVariables):
if len(dim) != 3:
- raise ValueError("For dimension %i, the variable definition \"%s\" is incorrect, it should be ('distribution',(parameters),length) with distribution in ['normal'(mean,std),'lognormal'(mean,sigma),'uniform'(low,high),'weibull'(shape)]."%(i, dim))
- elif not ( str(dim[0]) in ['normal', 'lognormal', 'uniform', 'weibull'] \
- and hasattr(numpy.random, str(dim[0])) ):
- raise ValueError("For dimension %i, the distribution name \"%s\" is not allowed, please choose in ['normal'(mean,std),'lognormal'(mean,sigma),'uniform'(low,high),'weibull'(shape)]"%(i, str(dim[0])))
+ raise ValueError(
+ 'For dimension %i, the variable definition "%s" is incorrect,'
+ % (i, dim)
+ + " it should be ('distribution',(parameters),length) with"
+ + " distribution in ['normal'(mean,std), 'lognormal'(mean,sigma),"
+ + " 'uniform'(low,high), 'weibull'(shape)]."
+ )
+ elif not (
+ str(dim[0]) in ["normal", "lognormal", "uniform", "weibull"]
+ and hasattr(numpy.random, str(dim[0]))
+ ):
+ raise ValueError(
+ 'For dimension %i, the distribution name "%s" is not allowed,'
+ % (i, str(dim[0]))
+ + " please choose in ['normal'(mean,std), 'lognormal'(mean,sigma),"
+ + " 'uniform'(low,high), 'weibull'(shape)]"
+ )
else:
- distribution = getattr(numpy.random, str(dim[0]), 'normal')
+            distribution = getattr(numpy.random, str(dim[0]))
coordinatesList.append(distribution(*dim[1], size=max(1, int(dim[2]))))
sampleList = itertools.product(*coordinatesList)
else:
- sampleList = iter([__X0,])
+ sampleList = iter(
+ [
+ __X0,
+ ]
+ )
# ----------
return sampleList
+
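The `__SampleAsMinMaxStepHyperCube` branch above turns a list of `[min, max, step]` axis definitions into a Cartesian grid of states via `numpy.linspace` and `itertools.product`. A minimal standalone sketch of that construction (the helper name `hypercube_grid` is hypothetical, not part of the module):

```python
import itertools

import numpy


def hypercube_grid(specs):
    # Each spec is [min, max, step]; build per-axis coordinates the same way
    # as the __SampleAsMinMaxStepHyperCube branch, then take the Cartesian
    # product of all axes to enumerate the grid of states.
    axes = []
    for vmin, vmax, step in specs:
        npts = 1 + int((float(vmax) - float(vmin)) / float(step))
        axes.append(numpy.linspace(vmin, vmax, npts))
    return list(itertools.product(*axes))


# 3 points on the first axis x 2 points on the second -> 6 states
grid = hypercube_grid([[0.0, 1.0, 0.5], [10.0, 20.0, 10.0]])
```

Like the original branch, the helper materializes every grid point, so the number of states grows multiplicatively with the number of axes.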
# ==============================================================================
-def multiXOsteps(
- selfA, Xb, Y, U, HO, EM, CM, R, B, Q, oneCycle,
- __CovForecast = False ):
+def multiXOsteps(selfA, Xb, Y, U, HO, EM, CM, R, B, Q, oneCycle, __CovForecast=False):
"""
Prévision multi-pas avec une correction par pas (multi-méthodes)
"""
# Initialisation
# --------------
if selfA._parameters["EstimationOf"] == "State":
- if len(selfA.StoredVariables["Analysis"]) == 0 or not selfA._parameters["nextStep"]:
+ if (
+ len(selfA.StoredVariables["Analysis"]) == 0
+ or not selfA._parameters["nextStep"]
+ ):
Xn = numpy.asarray(Xb)
if __CovForecast:
Pn = B
- selfA.StoredVariables["Analysis"].store( Xn )
+ selfA.StoredVariables["Analysis"].store(Xn)
if selfA._toStore("APosterioriCovariance"):
if hasattr(B, "asfullmatrix"):
- selfA.StoredVariables["APosterioriCovariance"].store( B.asfullmatrix(Xn.size) )
+ selfA.StoredVariables["APosterioriCovariance"].store(
+ B.asfullmatrix(Xn.size)
+ )
else:
- selfA.StoredVariables["APosterioriCovariance"].store( B )
+ selfA.StoredVariables["APosterioriCovariance"].store(B)
selfA._setInternalState("seed", numpy.random.get_state())
elif selfA._parameters["nextStep"]:
Xn = selfA._getInternalState("Xn")
# Multi-steps
# -----------
for step in range(duration - 1):
- selfA.StoredVariables["CurrentStepNumber"].store( len(selfA.StoredVariables["Analysis"]) )
+ selfA.StoredVariables["CurrentStepNumber"].store(
+ len(selfA.StoredVariables["Analysis"])
+ )
#
if hasattr(Y, "store"):
- Ynpu = numpy.asarray( Y[step + 1] ).reshape((-1, 1))
+ Ynpu = numpy.asarray(Y[step + 1]).reshape((-1, 1))
else:
- Ynpu = numpy.asarray( Y ).reshape((-1, 1))
+ Ynpu = numpy.asarray(Y).reshape((-1, 1))
#
if U is not None:
if hasattr(U, "store") and len(U) > 1:
- Un = numpy.asarray( U[step] ).reshape((-1, 1))
+ Un = numpy.asarray(U[step]).reshape((-1, 1))
elif hasattr(U, "store") and len(U) == 1:
- Un = numpy.asarray( U[0] ).reshape((-1, 1))
+ Un = numpy.asarray(U[0]).reshape((-1, 1))
else:
- Un = numpy.asarray( U ).reshape((-1, 1))
+ Un = numpy.asarray(U).reshape((-1, 1))
else:
Un = None
#
Ma = Ma.reshape(Xn.size, Xn.size) # ADAO & check shape
Pn_predicted = Q + Mt @ (Pn @ Ma)
Mm = EM["Direct"].appliedControledFormTo
- Xn_predicted = Mm( (Xn, Un) ).reshape((-1, 1))
- if CM is not None and "Tangent" in CM and Un is not None: # Attention : si Cm est aussi dans M, doublon !
+ Xn_predicted = Mm((Xn, Un)).reshape((-1, 1))
+ if (
+ CM is not None and "Tangent" in CM and Un is not None
+ ): # Attention : si Cm est aussi dans M, doublon !
Cm = CM["Tangent"].asMatrix(Xn_predicted)
Cm = Cm.reshape(Xn.size, Un.size) # ADAO & check shape
Xn_predicted = Xn_predicted + (Cm @ Un).reshape((-1, 1))
Pn_predicted = Pn
Xn_predicted = numpy.asarray(Xn_predicted).reshape((-1, 1))
if selfA._toStore("ForecastState"):
- selfA.StoredVariables["ForecastState"].store( Xn_predicted )
+ selfA.StoredVariables["ForecastState"].store(Xn_predicted)
if __CovForecast:
if hasattr(Pn_predicted, "asfullmatrix"):
Pn_predicted = Pn_predicted.asfullmatrix(Xn.size)
else:
Pn_predicted = numpy.asarray(Pn_predicted).reshape((Xn.size, Xn.size))
if selfA._toStore("ForecastCovariance"):
- selfA.StoredVariables["ForecastCovariance"].store( Pn_predicted )
+ selfA.StoredVariables["ForecastCovariance"].store(Pn_predicted)
#
# Correct (Measurement Update)
# ----------------------------
#
return 0
+
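In the linear case, the covariance forecast written `Pn_predicted = Q + Mt @ (Pn @ Ma)` in `multiXOsteps` is the classical Kalman prediction P_f = M P Mᵀ + Q, with `Ma` the adjoint (the transpose, for a linear model). A minimal sketch with a hypothetical 2-state model, not taken from the module:

```python
import numpy

# Hypothetical 2-state linear evolution model (illustration only)
Mt = numpy.array([[1.0, 1.0],
                  [0.0, 1.0]])   # tangent of the evolution model
Ma = Mt.T                        # adjoint = transpose in the linear case
Pn = numpy.eye(2)                # analysis covariance at step n
Q = 0.1 * numpy.eye(2)           # model error covariance

# Forecast covariance, as in the multi-step loop: P_f = M P M^T + Q
Pn_predicted = Q + Mt @ (Pn @ Ma)
```

The result stays symmetric, which is why the loop above can reshape it to `(Xn.size, Xn.size)` and store it directly as a forecast covariance.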
# ==============================================================================
if __name__ == "__main__":
print("\n AUTODIAGNOSTIC\n")
__author__ = "Jean-Philippe ARGAUD"
__all__ = []
-import os, numpy, copy, math
-import gzip, bz2, pickle
+import os
+import copy
+import math
+import gzip
+import bz2
+import pickle
+import numpy
-from daCore.PlatformInfo import PathManagement ; PathManagement() # noqa: E702,E203
-from daCore.PlatformInfo import PlatformInfo
+from daCore.PlatformInfo import PathManagement, PlatformInfo
+
+PathManagement()
lpi = PlatformInfo()
mfp = lpi.MaximumPrecision()
if lpi.has_gnuplot:
import Gnuplot
+
# ==============================================================================
class Persistence(object):
"""
Classe générale de persistance définissant les accesseurs nécessaires
(Template)
"""
+
__slots__ = (
- "__name", "__unit", "__basetype", "__values", "__tags", "__dynamic",
- "__g", "__title", "__ltitle", "__pause", "__dataobservers",
+ "__name",
+ "__unit",
+ "__basetype",
+ "__values",
+ "__tags",
+ "__dynamic",
+ "__g",
+ "__title",
+ "__ltitle",
+ "__pause",
+ "__dataobservers",
)
def __init__(self, name="", unit="", basetype=str):
#
self.__basetype = basetype
#
- self.__values = []
- self.__tags = []
+ self.__values = []
+ self.__tags = []
#
- self.__dynamic = False
- self.__g = None
- self.__title = None
- self.__ltitle = None
- self.__pause = None
+ self.__dynamic = False
+ self.__g = None
+ self.__title = None
+ self.__ltitle = None
+ self.__pause = None
#
self.__dataobservers = []
for hook, parameters, scheduler, order, osync, dovar in self.__dataobservers:
if __step in scheduler:
if order is None or dovar is None:
- hook( self, parameters )
+ hook(self, parameters)
else:
if not isinstance(order, (list, tuple)):
continue
if not isinstance(dovar, dict):
continue
if not bool(osync): # Async observation
- hook( self, parameters, order, dovar )
+ hook(self, parameters, order, dovar)
else: # Sync observations
for v in order:
if len(dovar[v]) != len(self):
break
else:
- hook( self, parameters, order, dovar )
+ hook(self, parameters, order, dovar)
def pop(self, item=None):
"""
longueur. Par défaut, renvoie 1.
"""
if len(self.__values) > 0:
- if self.__basetype in [numpy.matrix, numpy.ndarray, numpy.array, numpy.ravel]:
+ if self.__basetype in [
+ numpy.matrix,
+ numpy.ndarray,
+ numpy.array,
+ numpy.ravel,
+ ]:
return self.__values[-1].shape
elif self.__basetype in [int, float]:
return (1,)
# ---------------------------------------------------------
def __str__(self):
"x.__str__() <==> str(x)"
- msg = " Index Value Tags\n"
+ msg = " Index Value Tags\n"
for iv, vv in enumerate(self.__values):
- msg += " i=%05i %10s %s\n"%(iv, vv, self.__tags[iv])
+ msg += " i=%05i %10s %s\n" % (iv, vv, self.__tags[iv])
return msg
def __len__(self):
def name(self):
return self.__name
- def __getitem__(self, index=None ):
+ def __getitem__(self, index=None):
"x.__getitem__(y) <==> x[y]"
return copy.copy(self.__values[index])
for i in __indexOfFilteredItems:
if tagKey in self.__tags[i]:
if self.__tags[i][tagKey] == kwargs[tagKey]:
- __tmp.append( i )
- elif isinstance(kwargs[tagKey], (list, tuple)) and self.__tags[i][tagKey] in kwargs[tagKey]:
- __tmp.append( i )
+ __tmp.append(i)
+ elif (
+ isinstance(kwargs[tagKey], (list, tuple))
+ and self.__tags[i][tagKey] in kwargs[tagKey]
+ ):
+ __tmp.append(i)
__indexOfFilteredItems = __tmp
if len(__indexOfFilteredItems) == 0:
break
__keys = []
for i in __indexOfFilteredItems:
if keyword in self.__tags[i]:
- __keys.append( self.__tags[i][keyword] )
+ __keys.append(self.__tags[i][keyword])
else:
- __keys.append( None )
+ __keys.append(None)
return __keys
def items(self, keyword=None, **kwargs):
__pairs = []
for i in __indexOfFilteredItems:
if keyword in self.__tags[i]:
- __pairs.append( (self.__tags[i][keyword], self.__values[i]) )
+ __pairs.append((self.__tags[i][keyword], self.__values[i]))
else:
- __pairs.append( (None, self.__values[i]) )
+ __pairs.append((None, self.__values[i]))
return __pairs
def tagkeys(self):
"D.tagkeys() -> list of D's tag keys"
__allKeys = []
for dicotags in self.__tags:
- __allKeys.extend( list(dicotags.keys()) )
+ __allKeys.extend(list(dicotags.keys()))
__allKeys = sorted(set(__allKeys))
return __allKeys
if item is None:
__indexOfFilteredItems = self.__filteredIndexes(**kwargs)
else:
- __indexOfFilteredItems = [item,]
+ __indexOfFilteredItems = [
+ item,
+ ]
#
# Dans le cas où la sortie donne les valeurs d'un "outputTag"
if outputTag is not None and isinstance(outputTag, str):
outputValues = []
for index in __indexOfFilteredItems:
if outputTag in self.__tags[index].keys():
- outputValues.append( self.__tags[index][outputTag] )
+ outputValues.append(self.__tags[index][outputTag])
outputValues = sorted(set(outputValues))
return outputValues
#
else:
allTags = {}
for index in __indexOfFilteredItems:
- allTags.update( self.__tags[index] )
+ allTags.update(self.__tags[index])
allKeys = sorted(allTags.keys())
return allKeys
élémentaires numpy.
"""
try:
- __sr = [numpy.mean(item, dtype=mfp).astype('float') for item in self.__values]
+ __sr = [
+ numpy.mean(item, dtype=mfp).astype("float") for item in self.__values
+ ]
except Exception:
raise TypeError("Base type is incompatible with numpy")
return numpy.array(__sr).tolist()
l'écart-type, qui est dans le diviseur. Inutile avant Numpy 1.1
"""
try:
- if numpy.version.version >= '1.1.0':
- __sr = [numpy.array(item).std(ddof=ddof, dtype=mfp).astype('float') for item in self.__values]
+ if numpy.version.version >= "1.1.0":
+ __sr = [
+ numpy.array(item).std(ddof=ddof, dtype=mfp).astype("float")
+ for item in self.__values
+ ]
else:
- return [numpy.array(item).std(dtype=mfp).astype('float') for item in self.__values]
+ return [
+ numpy.array(item).std(dtype=mfp).astype("float")
+ for item in self.__values
+ ]
except Exception:
raise TypeError("Base type is incompatible with numpy")
return numpy.array(__sr).tolist()
def traces(self, offset=0):
"""
- Trace
+ Trace (offset : voir numpy.trace)
Renvoie la série contenant, à chaque pas, la trace (avec l'offset) des
données au pas. Il faut que le type de base soit compatible avec les
types élémentaires numpy.
"""
try:
- __sr = [numpy.trace(item, offset, dtype=mfp).astype('float') for item in self.__values]
+ __sr = [
+ numpy.trace(item, offset, dtype=mfp).astype("float")
+ for item in self.__values
+ ]
except Exception:
raise TypeError("Base type is incompatible with numpy")
return numpy.array(__sr).tolist()
- def maes(self, _predictor=None):
+ def maes(self, predictor=None):
"""
Mean Absolute Error (MAE)
mae(dX) = 1/n sum(dX_i)
prédicteur est None, sinon c'est appliqué à l'écart entre les données
au pas et le prédicteur au même pas.
"""
- if _predictor is None:
+ if predictor is None:
try:
__sr = [numpy.mean(numpy.abs(item)) for item in self.__values]
except Exception:
raise TypeError("Base type is incompatible with numpy")
else:
- if len(_predictor) != len(self.__values):
- raise ValueError("Predictor number of steps is incompatible with the values")
+ if len(predictor) != len(self.__values):
+ raise ValueError(
+ "Predictor number of steps is incompatible with the values"
+ )
for i, item in enumerate(self.__values):
- if numpy.asarray(_predictor[i]).size != numpy.asarray(item).size:
- raise ValueError("Predictor size at step %i is incompatible with the values"%i)
+ if numpy.asarray(predictor[i]).size != numpy.asarray(item).size:
+ raise ValueError(
+ "Predictor size at step %i is incompatible with the values" % i
+ )
try:
- __sr = [numpy.mean(numpy.abs(numpy.ravel(item) - numpy.ravel(_predictor[i]))) for i, item in enumerate(self.__values)]
+ __sr = [
+ numpy.mean(numpy.abs(numpy.ravel(item) - numpy.ravel(predictor[i])))
+ for i, item in enumerate(self.__values)
+ ]
except Exception:
raise TypeError("Base type is incompatible with numpy")
return numpy.array(__sr).tolist()
- def mses(self, _predictor=None):
+ def mses(self, predictor=None):
"""
Mean-Square Error (MSE) ou Mean-Square Deviation (MSD)
- mse(dX) = 1/n sum(dX_i**2)
+        mse(dX) = 1/n sum(dX_i**2) = 1/n ||dX||^2
Renvoie la série contenant, à chaque pas, la MSE des données au pas. Il
faut que le type de base soit compatible avec les types élémentaires
prédicteur est None, sinon c'est appliqué à l'écart entre les données
au pas et le prédicteur au même pas.
"""
- if _predictor is None:
+ if predictor is None:
try:
__n = self.shape()[0]
- __sr = [(numpy.linalg.norm(item)**2 / __n) for item in self.__values]
+ __sr = [(numpy.linalg.norm(item) ** 2 / __n) for item in self.__values]
except Exception:
raise TypeError("Base type is incompatible with numpy")
else:
- if len(_predictor) != len(self.__values):
- raise ValueError("Predictor number of steps is incompatible with the values")
+ if len(predictor) != len(self.__values):
+ raise ValueError(
+ "Predictor number of steps is incompatible with the values"
+ )
for i, item in enumerate(self.__values):
- if numpy.asarray(_predictor[i]).size != numpy.asarray(item).size:
- raise ValueError("Predictor size at step %i is incompatible with the values"%i)
+ if numpy.asarray(predictor[i]).size != numpy.asarray(item).size:
+ raise ValueError(
+ "Predictor size at step %i is incompatible with the values" % i
+ )
try:
__n = self.shape()[0]
- __sr = [(numpy.linalg.norm(numpy.ravel(item) - numpy.ravel(_predictor[i]))**2 / __n) for i, item in enumerate(self.__values)]
+ __sr = [
+ (
+ numpy.linalg.norm(numpy.ravel(item) - numpy.ravel(predictor[i]))
+ ** 2
+ / __n
+ )
+ for i, item in enumerate(self.__values)
+ ]
except Exception:
raise TypeError("Base type is incompatible with numpy")
return numpy.array(__sr).tolist()
msds = mses # Mean-Square Deviation (MSD=MSE)
- def rmses(self, _predictor=None):
+ def rmses(self, predictor=None):
"""
Root-Mean-Square Error (RMSE) ou Root-Mean-Square Deviation (RMSD)
rmse(dX) = sqrt( 1/n sum(dX_i**2) ) = sqrt( mse(dX) )
Renvoie la série contenant, à chaque pas, la RMSE des données au pas.
Il faut que le type de base soit compatible avec les types élémentaires
numpy. C'est réservé aux variables d'écarts ou d'incréments si le
- prédicteur est None, sinon c'est appliqué à l'écart entre les données
- au pas et le prédicteur au même pas.
+ prédicteur est None (c'est donc une RMS), sinon c'est appliqué à
+ l'écart entre les données au pas et le prédicteur au même pas.
"""
- if _predictor is None:
+ if predictor is None:
try:
__n = self.shape()[0]
- __sr = [(numpy.linalg.norm(item) / math.sqrt(__n)) for item in self.__values]
+ __sr = [
+ (numpy.linalg.norm(item) / math.sqrt(__n)) for item in self.__values
+ ]
except Exception:
raise TypeError("Base type is incompatible with numpy")
else:
- if len(_predictor) != len(self.__values):
- raise ValueError("Predictor number of steps is incompatible with the values")
+ if len(predictor) != len(self.__values):
+ raise ValueError(
+ "Predictor number of steps is incompatible with the values"
+ )
for i, item in enumerate(self.__values):
- if numpy.asarray(_predictor[i]).size != numpy.asarray(item).size:
- raise ValueError("Predictor size at step %i is incompatible with the values"%i)
+ if numpy.asarray(predictor[i]).size != numpy.asarray(item).size:
+ raise ValueError(
+ "Predictor size at step %i is incompatible with the values" % i
+ )
try:
__n = self.shape()[0]
- __sr = [(numpy.linalg.norm(numpy.ravel(item) - numpy.ravel(_predictor[i])) / math.sqrt(__n)) for i, item in enumerate(self.__values)]
+ __sr = [
+ (
+ numpy.linalg.norm(numpy.ravel(item) - numpy.ravel(predictor[i]))
+ / math.sqrt(__n)
+ )
+ for i, item in enumerate(self.__values)
+ ]
except Exception:
raise TypeError("Base type is incompatible with numpy")
return numpy.array(__sr).tolist()
rmsds = rmses # Root-Mean-Square Deviation (RMSD=RMSE)
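The `maes`/`mses`/`rmses` methods above apply these reductions to each stored step; on a single increment the three metrics are the following `numpy` one-liners, with `rmse = sqrt(mse)` as stated in the docstrings (the sample vector `dX` is hypothetical):

```python
import numpy

dX = numpy.array([1.0, -2.0, 2.0])  # hypothetical increment at one step
n = dX.size

mae = numpy.mean(numpy.abs(dX))                # 1/n sum(|dX_i|)
mse = numpy.linalg.norm(dX) ** 2 / n           # 1/n sum(dX_i**2)
rmse = numpy.linalg.norm(dX) / numpy.sqrt(n)   # sqrt(mse)
```

When a predictor series is given, the same reductions are applied to the step-wise difference `item - predictor[i]` instead of the raw values.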
- def __preplots(self,
- title = "",
- xlabel = "",
- ylabel = "",
- ltitle = None,
- geometry = "600x400",
- persist = False,
- pause = True ):
+ def __preplots(
+ self,
+ title="",
+ xlabel="",
+ ylabel="",
+ ltitle=None,
+ geometry="600x400",
+ persist=False,
+ pause=True,
+ ):
"Préparation des plots"
#
# Vérification de la disponibilité du module Gnuplot
if ltitle is None:
ltitle = ""
__geometry = str(geometry)
- __sizespec = (__geometry.split('+')[0]).replace('x', ',')
+ __sizespec = (__geometry.split("+")[0]).replace("x", ",")
#
if persist:
- Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist '
+ Gnuplot.GnuplotOpts.gnuplot_command = "gnuplot -persist "
#
self.__g = Gnuplot.Gnuplot() # persist=1
- self.__g('set terminal ' + Gnuplot.GnuplotOpts.default_term + ' size ' + __sizespec)
- self.__g('set style data lines')
- self.__g('set grid')
- self.__g('set autoscale')
+ self.__g(
+ "set terminal " + Gnuplot.GnuplotOpts.default_term + " size " + __sizespec
+ )
+ self.__g("set style data lines")
+ self.__g("set grid")
+ self.__g("set autoscale")
self.__g('set xlabel "' + str(xlabel) + '"')
self.__g('set ylabel "' + str(ylabel) + '"')
- self.__title = title
+ self.__title = title
self.__ltitle = ltitle
- self.__pause = pause
-
- def plots(self,
- item = None,
- step = None,
- steps = None,
- title = "",
- xlabel = "",
- ylabel = "",
- ltitle = None,
- geometry = "600x400",
- filename = "",
- dynamic = False,
- persist = False,
- pause = True ):
+ self.__pause = pause
+
+ def plots(
+ self,
+ item=None,
+ step=None,
+ steps=None,
+ title="",
+ xlabel="",
+ ylabel="",
+ ltitle=None,
+ geometry="600x400",
+ filename="",
+ dynamic=False,
+ persist=False,
+ pause=True,
+ ):
"""
Renvoie un affichage de la valeur à chaque pas, si elle est compatible
avec un affichage Gnuplot (donc essentiellement un vecteur). Si
Par défaut, pause = True
"""
if not self.__dynamic:
- self.__preplots(title, xlabel, ylabel, ltitle, geometry, persist, pause )
+ self.__preplots(title, xlabel, ylabel, ltitle, geometry, persist, pause)
if dynamic:
self.__dynamic = True
if len(self.__values) == 0:
#
i = -1
for index in indexes:
- self.__g('set title "' + str(title) + ' (pas ' + str(index) + ')"')
+ self.__g('set title "' + str(title) + " (pas " + str(index) + ')"')
if isinstance(steps, (list, numpy.ndarray)):
Steps = list(steps)
else:
Steps = list(range(len(self.__values[index])))
#
- self.__g.plot( Gnuplot.Data( Steps, self.__values[index], title=ltitle ) )
+ self.__g.plot(Gnuplot.Data(Steps, self.__values[index], title=ltitle))
#
if filename != "":
i += 1
- stepfilename = "%s_%03i.ps"%(filename, i)
+ stepfilename = "%s_%03i.ps" % (filename, i)
if os.path.isfile(stepfilename):
- raise ValueError("Error: a file with this name \"%s\" already exists."%stepfilename)
+ raise ValueError(
+ 'Error: a file with this name "%s" already exists.'
+ % stepfilename
+ )
self.__g.hardcopy(filename=stepfilename, color=1)
if self.__pause:
- eval(input('Please press return to continue...\n'))
+                input("Please press return to continue...\n")
def __replots(self):
"""
#
self.__g('set title "' + str(self.__title))
Steps = list(range(len(self.__values)))
- self.__g.plot( Gnuplot.Data( Steps, self.__values, title=self.__ltitle ) )
+ self.__g.plot(Gnuplot.Data(Steps, self.__values, title=self.__ltitle))
#
if self.__pause:
- eval(input('Please press return to continue...\n'))
+            input("Please press return to continue...\n")
# ---------------------------------------------------------
# On pourrait aussi utiliser d'autres attributs d'un "array" comme "tofile"
les types élémentaires numpy.
"""
try:
- return numpy.mean(self.__values, axis=0, dtype=mfp).astype('float')
+ return numpy.mean(self.__values, axis=0, dtype=mfp).astype("float")
except Exception:
raise TypeError("Base type is incompatible with numpy")
l'écart-type, qui est dans le diviseur. Inutile avant Numpy 1.1
"""
try:
- if numpy.version.version >= '1.1.0':
- return numpy.asarray(self.__values).std(ddof=ddof, axis=0).astype('float')
+ if numpy.version.version >= "1.1.0":
+ return (
+ numpy.asarray(self.__values).std(ddof=ddof, axis=0).astype("float")
+ )
else:
- return numpy.asarray(self.__values).std(axis=0).astype('float')
+ return numpy.asarray(self.__values).std(axis=0).astype("float")
except Exception:
raise TypeError("Base type is incompatible with numpy")
except Exception:
raise TypeError("Base type is incompatible with numpy")
- def plot(self,
- steps = None,
- title = "",
- xlabel = "",
- ylabel = "",
- ltitle = None,
- geometry = "600x400",
- filename = "",
- persist = False,
- pause = True ):
+ def plot(
+ self,
+ steps=None,
+ title="",
+ xlabel="",
+ ylabel="",
+ ltitle=None,
+ geometry="600x400",
+ filename="",
+ persist=False,
+ pause=True,
+ ):
"""
Renvoie un affichage unique pour l'ensemble des valeurs à chaque pas, si
elles sont compatibles avec un affichage Gnuplot (donc essentiellement
else:
Steps = list(range(len(self.__values[0])))
__geometry = str(geometry)
- __sizespec = (__geometry.split('+')[0]).replace('x', ',')
+ __sizespec = (__geometry.split("+")[0]).replace("x", ",")
#
if persist:
- Gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist '
+ Gnuplot.GnuplotOpts.gnuplot_command = "gnuplot -persist "
#
self.__g = Gnuplot.Gnuplot() # persist=1
- self.__g('set terminal ' + Gnuplot.GnuplotOpts.default_term + ' size ' + __sizespec)
- self.__g('set style data lines')
- self.__g('set grid')
- self.__g('set autoscale')
- self.__g('set title "' + str(title) + '"')
+ self.__g(
+ "set terminal " + Gnuplot.GnuplotOpts.default_term + " size " + __sizespec
+ )
+ self.__g("set style data lines")
+ self.__g("set grid")
+ self.__g("set autoscale")
+ self.__g('set title "' + str(title) + '"')
self.__g('set xlabel "' + str(xlabel) + '"')
self.__g('set ylabel "' + str(ylabel) + '"')
#
# Tracé du ou des vecteurs demandés
indexes = list(range(len(self.__values)))
- self.__g.plot( Gnuplot.Data( Steps, self.__values[indexes.pop(0)], title=ltitle + " (pas 0)" ) )
+ self.__g.plot(
+ Gnuplot.Data(
+ Steps, self.__values[indexes.pop(0)], title=ltitle + " (pas 0)"
+ )
+ )
for index in indexes:
- self.__g.replot( Gnuplot.Data( Steps, self.__values[index], title=ltitle + " (pas %i)"%index ) )
+ self.__g.replot(
+ Gnuplot.Data(
+ Steps, self.__values[index], title=ltitle + " (pas %i)" % index
+ )
+ )
#
if filename != "":
self.__g.hardcopy(filename=filename, color=1)
if pause:
- eval(input('Please press return to continue...\n'))
+            input("Please press return to continue...\n")
# ---------------------------------------------------------
def s2mvr(self):
raise TypeError("Base type is incompatible with numpy")
# ---------------------------------------------------------
- def setDataObserver(self, HookFunction = None, HookParameters = None, Scheduler = None, Order = None, OSync = True, DOVar = None):
+ def setDataObserver(
+ self,
+ HookFunction=None,
+ HookParameters=None,
+ Scheduler=None,
+ Order=None,
+ OSync=True,
+ DOVar=None,
+ ):
"""
Association à la variable d'un triplet définissant un observer.
#
# Vérification du Scheduler
# -------------------------
- maxiter = int( 1e9 )
- if isinstance(Scheduler, int): # Considéré comme une fréquence à partir de 0
- Schedulers = range( 0, maxiter, int(Scheduler) )
- elif isinstance(Scheduler, range): # Considéré comme un itérateur
+ maxiter = int(1e9)
+ if isinstance(Scheduler, int): # Considéré comme une fréquence à partir de 0
+ Schedulers = range(0, maxiter, int(Scheduler))
+ elif isinstance(Scheduler, range): # Considéré comme un itérateur
Schedulers = Scheduler
- elif isinstance(Scheduler, (list, tuple)): # Considéré comme des index explicites
- Schedulers = [int(i) for i in Scheduler] # Similaire à map( int, Scheduler ) # noqa: E262
- else: # Dans tous les autres cas, activé par défaut
- Schedulers = range( 0, maxiter )
+ elif isinstance(
+ Scheduler, (list, tuple)
+ ): # Considéré comme des index explicites
+ Schedulers = [
+ int(i) for i in Scheduler
+ ] # Similaire à map( int, Scheduler )
+ else: # Dans tous les autres cas, activé par défaut
+ Schedulers = range(0, maxiter)
#
# Stockage interne de l'observer dans la variable
# -----------------------------------------------
- self.__dataobservers.append( [HookFunction, HookParameters, Schedulers, Order, OSync, DOVar] )
+ self.__dataobservers.append(
+ [HookFunction, HookParameters, Schedulers, Order, OSync, DOVar]
+ )
- def removeDataObserver(self, HookFunction = None, AllObservers = False):
+ def removeDataObserver(self, HookFunction=None, AllObservers=False):
"""
Suppression d'un observer nommé sur la variable.
AllObservers est vrai, supprime tous les observers enregistrés.
"""
if hasattr(HookFunction, "func_name"):
- name = str( HookFunction.func_name )
+ name = str(HookFunction.func_name)
elif hasattr(HookFunction, "__name__"):
- name = str( HookFunction.__name__ )
+ name = str(HookFunction.__name__)
elif isinstance(HookFunction, str):
- name = str( HookFunction )
+ name = str(HookFunction)
else:
name = None
#
for [hf, _, _, _, _, _] in self.__dataobservers:
ih = ih + 1
if name is hf.__name__ or AllObservers:
- index_to_remove.append( ih )
+ index_to_remove.append(ih)
index_to_remove.reverse()
for ih in index_to_remove:
- self.__dataobservers.pop( ih )
+ self.__dataobservers.pop(ih)
return len(index_to_remove)
def hasDataObserver(self):
return bool(len(self.__dataobservers) > 0)
+
# ==============================================================================
class SchedulerTrigger(object):
"""
Classe générale d'interface de type Scheduler/Trigger
"""
+
__slots__ = ()
- def __init__(self,
- simplifiedCombo = None,
- startTime = 0,
- endTime = int( 1e9 ),
- timeDelay = 1,
- timeUnit = 1,
- frequency = None ):
+ def __init__(
+ self,
+ simplifiedCombo=None,
+ startTime=0,
+ endTime=int(1e9),
+ timeDelay=1,
+ timeUnit=1,
+ frequency=None,
+ ):
pass
+
# ==============================================================================
class OneScalar(Persistence):
"""
ou des matrices comme dans les classes suivantes, mais c'est déconseillé
pour conserver une signification claire des noms.
"""
+
__slots__ = ()
- def __init__(self, name="", unit="", basetype = float):
+ def __init__(self, name="", unit="", basetype=float):
Persistence.__init__(self, name, unit, basetype)
+
class OneIndex(Persistence):
"""
Classe définissant le stockage d'une valeur unique entière (int) par pas.
"""
+
__slots__ = ()
- def __init__(self, name="", unit="", basetype = int):
+ def __init__(self, name="", unit="", basetype=int):
Persistence.__init__(self, name, unit, basetype)
+
class OneVector(Persistence):
"""
Classe de stockage d'une liste de valeurs numériques homogènes par pas. Ne
pas utiliser cette classe pour des données hétérogènes, mais "OneList".
"""
+
__slots__ = ()
- def __init__(self, name="", unit="", basetype = numpy.ravel):
+ def __init__(self, name="", unit="", basetype=numpy.ravel):
Persistence.__init__(self, name, unit, basetype)
+
class OneMatrice(Persistence):
"""
Classe de stockage d'une matrice de valeurs homogènes par pas.
"""
+
__slots__ = ()
- def __init__(self, name="", unit="", basetype = numpy.array):
+ def __init__(self, name="", unit="", basetype=numpy.array):
Persistence.__init__(self, name, unit, basetype)
+
class OneMatrix(Persistence):
"""
- Classe de stockage d'une matrice de valeurs homogènes par pas.
+ Classe de stockage d'une matrice de valeurs homogènes par pas (obsolète).
"""
+
__slots__ = ()
- def __init__(self, name="", unit="", basetype = numpy.matrix):
+ def __init__(self, name="", unit="", basetype=numpy.matrix):
Persistence.__init__(self, name, unit, basetype)
+
class OneList(Persistence):
"""
Classe de stockage d'une liste de valeurs hétérogènes (list) par pas. Ne
pas utiliser cette classe pour des données numériques homogènes, mais
"OneVector".
"""
+
__slots__ = ()
- def __init__(self, name="", unit="", basetype = list):
+ def __init__(self, name="", unit="", basetype=list):
Persistence.__init__(self, name, unit, basetype)
-def NoType( value ):
+
+def NoType(value):
"Fonction transparente, sans effet sur son argument"
return value
+
class OneNoType(Persistence):
"""
Classe de stockage d'un objet sans modification (cast) de type. Attention,
résultats inattendus. Cette classe n'est donc à utiliser qu'à bon escient
volontairement, et pas du tout par défaut.
"""
+
__slots__ = ()
- def __init__(self, name="", unit="", basetype = NoType):
+ def __init__(self, name="", unit="", basetype=NoType):
Persistence.__init__(self, name, unit, basetype)
+
# ==============================================================================
class CompositePersistence(object):
"""
Des objets par défaut sont prévus, et des objets supplémentaires peuvent
être ajoutés.
"""
+
__slots__ = ("__name", "__StoredObjects")
def __init__(self, name="", defaults=True):
# Definition des objets par defaut
# --------------------------------
if defaults:
- self.__StoredObjects["Informations"] = OneNoType("Informations")
- self.__StoredObjects["Background"] = OneVector("Background", basetype=numpy.array)
- self.__StoredObjects["BackgroundError"] = OneMatrix("BackgroundError")
- self.__StoredObjects["Observation"] = OneVector("Observation", basetype=numpy.array)
+ self.__StoredObjects["Informations"] = OneNoType("Informations")
+ self.__StoredObjects["Background"] = OneVector(
+ "Background", basetype=numpy.array
+ )
+ self.__StoredObjects["BackgroundError"] = OneMatrix("BackgroundError")
+ self.__StoredObjects["Observation"] = OneVector(
+ "Observation", basetype=numpy.array
+ )
self.__StoredObjects["ObservationError"] = OneMatrix("ObservationError")
- self.__StoredObjects["Analysis"] = OneVector("Analysis", basetype=numpy.array)
- self.__StoredObjects["AnalysisError"] = OneMatrix("AnalysisError")
- self.__StoredObjects["Innovation"] = OneVector("Innovation", basetype=numpy.array)
- self.__StoredObjects["KalmanGainK"] = OneMatrix("KalmanGainK")
- self.__StoredObjects["OperatorH"] = OneMatrix("OperatorH")
- self.__StoredObjects["RmsOMA"] = OneScalar("RmsOMA")
- self.__StoredObjects["RmsOMB"] = OneScalar("RmsOMB")
- self.__StoredObjects["RmsBMA"] = OneScalar("RmsBMA")
+ self.__StoredObjects["Analysis"] = OneVector(
+ "Analysis", basetype=numpy.array
+ )
+ self.__StoredObjects["AnalysisError"] = OneMatrix("AnalysisError")
+ self.__StoredObjects["Innovation"] = OneVector(
+ "Innovation", basetype=numpy.array
+ )
+ self.__StoredObjects["KalmanGainK"] = OneMatrix("KalmanGainK")
+ self.__StoredObjects["OperatorH"] = OneMatrix("OperatorH")
+ self.__StoredObjects["RmsOMA"] = OneScalar("RmsOMA")
+ self.__StoredObjects["RmsOMB"] = OneScalar("RmsOMB")
+ self.__StoredObjects["RmsBMA"] = OneScalar("RmsBMA")
#
def store(self, name=None, value=None, **kwargs):
if name is None:
raise ValueError("Storable object name is required for storage.")
if name not in self.__StoredObjects.keys():
- raise ValueError("No such name '%s' exists in storable objects."%name)
- self.__StoredObjects[name].store( value=value, **kwargs )
+ raise ValueError("No such name '%s' exists in storable objects." % name)
+ self.__StoredObjects[name].store(value=value, **kwargs)
- def add_object(self, name=None, persistenceType=Persistence, basetype=None ):
+ def add_object(self, name=None, persistenceType=Persistence, basetype=None):
"""
Ajoute dans les objets stockables un nouvel objet défini par son nom,
son type de Persistence et son type de base à chaque pas.
if name is None:
raise ValueError("Object name is required for adding an object.")
if name in self.__StoredObjects.keys():
- raise ValueError("An object with the same name '%s' already exists in storable objects. Choose another one."%name)
+ raise ValueError(
+ "An object with the same name '%s' already exists in storable objects. Choose another one."
+ % name
+ )
if basetype is None:
- self.__StoredObjects[name] = persistenceType( name=str(name) )
+ self.__StoredObjects[name] = persistenceType(name=str(name))
else:
- self.__StoredObjects[name] = persistenceType( name=str(name), basetype=basetype )
+ self.__StoredObjects[name] = persistenceType(
+ name=str(name), basetype=basetype
+ )
- def get_object(self, name=None ):
+ def get_object(self, name=None):
"""
Renvoie l'objet de type Persistence qui porte le nom demandé.
"""
if name is None:
raise ValueError("Object name is required for retrieving an object.")
if name not in self.__StoredObjects.keys():
- raise ValueError("No such name '%s' exists in stored objects."%name)
+ raise ValueError("No such name '%s' exists in stored objects." % name)
return self.__StoredObjects[name]
- def set_object(self, name=None, objet=None ):
+ def set_object(self, name=None, objet=None):
"""
Affecte directement un 'objet' qui porte le nom 'name' demandé.
Attention, il n'est pas effectué de vérification sur le type, qui doit
if name is None:
raise ValueError("Object name is required for setting an object.")
if name in self.__StoredObjects.keys():
- raise ValueError("An object with the same name '%s' already exists in storable objects. Choose another one."%name)
+ raise ValueError(
+ "An object with the same name '%s' already exists in storable objects. Choose another one."
+ % name
+ )
self.__StoredObjects[name] = objet
- def del_object(self, name=None ):
+ def del_object(self, name=None):
"""
Supprime un objet de la liste des objets stockables.
"""
if name is None:
            raise ValueError("Object name is required for deleting an object.")
if name not in self.__StoredObjects.keys():
- raise ValueError("No such name '%s' exists in stored objects."%name)
+ raise ValueError("No such name '%s' exists in stored objects." % name)
del self.__StoredObjects[name]
# ---------------------------------------------------------
# Méthodes d'accès de type dictionnaire
- def __getitem__(self, name=None ):
+ def __getitem__(self, name=None):
"x.__getitem__(y) <==> x[y]"
- return self.get_object( name )
+ return self.get_object(name)
- def __setitem__(self, name=None, objet=None ):
+ def __setitem__(self, name=None, objet=None):
"x.__setitem__(i, y) <==> x[i]=y"
- self.set_object( name, objet )
+ self.set_object(name, objet)
def keys(self):
"D.keys() -> list of D's keys"
- return self.get_stored_objects(hideVoidObjects = False)
+ return self.get_stored_objects(hideVoidObjects=False)
def values(self):
"D.values() -> list of D's values"
        return self.__StoredObjects.values()
# ---------------------------------------------------------
- def get_stored_objects(self, hideVoidObjects = False):
+ def get_stored_objects(self, hideVoidObjects=False):
"Renvoie la liste des objets présents"
objs = self.__StoredObjects.keys()
if hideVoidObjects:
            usedObjs = []
for k in objs:
try:
if len(self.__StoredObjects[k]) > 0:
- usedObjs.append( k )
+ usedObjs.append(k)
finally:
pass
objs = usedObjs
"""
if filename is None:
if compress == "gzip":
- filename = os.tempnam( os.getcwd(), 'dacp' ) + ".pkl.gz"
+ filename = os.tempnam(os.getcwd(), "dacp") + ".pkl.gz"
elif compress == "bzip2":
- filename = os.tempnam( os.getcwd(), 'dacp' ) + ".pkl.bz2"
+ filename = os.tempnam(os.getcwd(), "dacp") + ".pkl.bz2"
else:
- filename = os.tempnam( os.getcwd(), 'dacp' ) + ".pkl"
+ filename = os.tempnam(os.getcwd(), "dacp") + ".pkl"
else:
- filename = os.path.abspath( filename )
+ filename = os.path.abspath(filename)
#
if mode == "pickle":
if compress == "gzip":
- output = gzip.open( filename, 'wb')
+ output = gzip.open(filename, "wb")
elif compress == "bzip2":
- output = bz2.BZ2File( filename, 'wb')
+ output = bz2.BZ2File(filename, "wb")
else:
- output = open( filename, 'wb')
+ output = open(filename, "wb")
pickle.dump(self, output)
output.close()
else:
- raise ValueError("Save mode '%s' unknown. Choose another one."%mode)
+ raise ValueError("Save mode '%s' unknown. Choose another one." % mode)
#
return filename
if filename is None:
            raise ValueError("A file name is required to load a composite.")
else:
- filename = os.path.abspath( filename )
+ filename = os.path.abspath(filename)
#
if mode == "pickle":
if compress == "gzip":
- pkl_file = gzip.open( filename, 'rb')
+ pkl_file = gzip.open(filename, "rb")
elif compress == "bzip2":
- pkl_file = bz2.BZ2File( filename, 'rb')
+ pkl_file = bz2.BZ2File(filename, "rb")
else:
- pkl_file = open(filename, 'rb')
+ pkl_file = open(filename, "rb")
output = pickle.load(pkl_file)
for k in output.keys():
self[k] = output[k]
else:
- raise ValueError("Load mode '%s' unknown. Choose another one."%mode)
+ raise ValueError("Load mode '%s' unknown. Choose another one." % mode)
#
return filename
+
# ==============================================================================
if __name__ == "__main__":
print("\n AUTODIAGNOSTIC\n")
import re
import numpy
+
# ==============================================================================
-def uniq( __sequence ):
+def uniq(__sequence):
"""
Fonction pour rendre unique chaque élément d'une liste, en préservant l'ordre
"""
__seen = set()
return [x for x in __sequence if x not in __seen and not __seen.add(x)]
+
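The `uniq` idiom above relies on `set.add` returning `None` inside the comprehension. A standalone sketch (re-stating the function outside the module, with hypothetical path-like inputs) shows the order-preserving deduplication:

```python
def uniq(sequence):
    # First occurrence wins; set.add() returns None, so "not seen.add(x)"
    # is always True and only serves to record x as already seen.
    seen = set()
    return [x for x in sequence if x not in seen and not seen.add(x)]

print(uniq(["/opt/a", "/usr/b", "/opt/a", "/usr/c", "/usr/b"]))
# → ['/opt/a', '/usr/b', '/usr/c']
```

This is why the function is suitable for cleaning `sys.path`: duplicates are dropped without reordering the remaining entries.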
class PathManagement(object):
"""
Mise à jour du path système pour les répertoires d'outils
"""
- __slots__ = ("__paths")
+
+ __slots__ = ("__paths",)
def __init__(self):
"Déclaration des répertoires statiques"
#
for v in self.__paths.values():
if os.path.isdir(v):
- sys.path.insert(0, v )
+ sys.path.insert(0, v)
#
# Conserve en unique exemplaire chaque chemin
- sys.path = uniq( sys.path )
+ sys.path = uniq(sys.path)
del parent
def getpaths(self):
"""
return self.__paths
+
# ==============================================================================
class PlatformInfo(object):
"""
Rassemblement des informations sur le code et la plateforme
"""
+
__slots__ = ("has_salome", "has_yacs", "has_adao", "has_eficas")
def __init__(self):
"Sans effet"
- self.has_salome = bool( "SALOME_ROOT_DIR" in os.environ )
- self.has_yacs = bool( "YACS_ROOT_DIR" in os.environ )
- self.has_adao = bool( "ADAO_ROOT_DIR" in os.environ )
- self.has_eficas = bool( "EFICAS_ROOT_DIR" in os.environ )
+ self.has_salome = bool("SALOME_ROOT_DIR" in os.environ)
+ self.has_yacs = bool("YACS_ROOT_DIR" in os.environ)
+ self.has_adao = bool("ADAO_ROOT_DIR" in os.environ)
+ self.has_eficas = bool("EFICAS_ROOT_DIR" in os.environ)
PathManagement()
def getName(self):
"Retourne le nom de l'application"
import daCore.version as dav
+
return dav.name
def getVersion(self):
"Retourne le numéro de la version"
import daCore.version as dav
+
return dav.version
def getDate(self):
"Retourne la date de création de la version"
import daCore.version as dav
+
return dav.date
def getYear(self):
"Retourne l'année de création de la version"
import daCore.version as dav
+
return dav.year
def getSystemInformation(self, __prefix=""):
- __msg = ""
- __msg += "\n%s%30s : %s"%(__prefix, "platform.system", platform.system())
- __msg += "\n%s%30s : %s"%(__prefix, "sys.platform", sys.platform)
- __msg += "\n%s%30s : %s"%(__prefix, "platform.version", platform.version())
- __msg += "\n%s%30s : %s"%(__prefix, "platform.platform", platform.platform())
- __msg += "\n%s%30s : %s"%(__prefix, "platform.machine", platform.machine())
+ __msg = ""
+ __msg += "\n%s%30s : %s" % (__prefix, "platform.system", platform.system())
+ __msg += "\n%s%30s : %s" % (__prefix, "sys.platform", sys.platform)
+ __msg += "\n%s%30s : %s" % (__prefix, "platform.version", platform.version())
+ __msg += "\n%s%30s : %s" % (__prefix, "platform.platform", platform.platform())
+ __msg += "\n%s%30s : %s" % (__prefix, "platform.machine", platform.machine())
if len(platform.processor()) > 0:
- __msg += "\n%s%30s : %s"%(__prefix, "platform.processor", platform.processor())
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "platform.processor",
+ platform.processor(),
+ )
#
- if sys.platform.startswith('linux'):
- if hasattr(platform, 'linux_distribution'):
- __msg += "\n%s%30s : %s"%(__prefix,
- "platform.linux_distribution", str(platform.linux_distribution())) # noqa: E128
- elif hasattr(platform, 'dist'):
- __msg += "\n%s%30s : %s"%(__prefix,
- "platform.dist", str(platform.dist())) # noqa: E128
- elif sys.platform.startswith('darwin'):
- if hasattr(platform, 'mac_ver'):
+ if sys.platform.startswith("linux"):
+ if hasattr(platform, "linux_distribution"):
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "platform.linux_distribution",
+ str(platform.linux_distribution()),
+ )
+ elif hasattr(platform, "dist"):
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "platform.dist",
+ str(platform.dist()),
+ )
+ elif sys.platform.startswith("darwin"):
+ if hasattr(platform, "mac_ver"):
# https://fr.wikipedia.org/wiki/MacOS
__macosxv10 = {
- '0' : 'Cheetah', '1' : 'Puma', '2' : 'Jaguar', # noqa: E241,E203
- '3' : 'Panther', '4' : 'Tiger', '5' : 'Leopard', # noqa: E241,E203
- '6' : 'Snow Leopard', '7' : 'Lion', '8' : 'Mountain Lion', # noqa: E241,E203
- '9' : 'Mavericks', '10': 'Yosemite', '11': 'El Capitan', # noqa: E241,E203
- '12': 'Sierra', '13': 'High Sierra', '14': 'Mojave', # noqa: E241,E203
- '15': 'Catalina',
+ "0": "Cheetah",
+ "1": "Puma",
+ "2": "Jaguar",
+ "3": "Panther",
+ "4": "Tiger",
+ "5": "Leopard",
+ "6": "Snow Leopard",
+ "7": "Lion",
+ "8": "Mountain Lion",
+ "9": "Mavericks",
+ "10": "Yosemite",
+ "11": "El Capitan",
+ "12": "Sierra",
+ "13": "High Sierra",
+ "14": "Mojave",
+ "15": "Catalina",
}
for key in __macosxv10:
- __details = platform.mac_ver()[0].split('.')
+ __details = platform.mac_ver()[0].split(".")
if (len(__details) > 0) and (__details[1] == key):
- __msg += "\n%s%30s : %s"%(__prefix,
- "platform.mac_ver", str(platform.mac_ver()[0] + "(" + __macosxv10[key] + ")")) # noqa: E128
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "platform.mac_ver",
+ str(platform.mac_ver()[0] + "(" + __macosxv10[key] + ")"),
+ )
__macosxv11 = {
- '11': 'Big Sur', '12': 'Monterey', '13': 'Ventura', # noqa: E241
- '14': 'Sonoma', '15': 'Sequoia', # noqa: E241
+ "11": "Big Sur",
+ "12": "Monterey",
+ "13": "Ventura",
+ "14": "Sonoma",
+ "15": "Sequoia",
}
for key in __macosxv11:
- __details = platform.mac_ver()[0].split('.')
- if (__details[0] == key):
- __msg += "\n%s%30s : %s"%(__prefix,
- "platform.mac_ver", str(platform.mac_ver()[0] + "(" + __macosxv11[key] + ")")) # noqa: E128
- elif hasattr(platform, 'dist'):
- __msg += "\n%s%30s : %s"%(__prefix, "platform.dist", str(platform.dist()))
- elif os.name == 'nt':
- __msg += "\n%s%30s : %s"%(__prefix, "platform.win32_ver", platform.win32_ver()[1])
+ __details = platform.mac_ver()[0].split(".")
+ if __details[0] == key:
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "platform.mac_ver",
+ str(platform.mac_ver()[0] + "(" + __macosxv11[key] + ")"),
+ )
+ elif hasattr(platform, "dist"):
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "platform.dist",
+ str(platform.dist()),
+ )
+ elif os.name == "nt":
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "platform.win32_ver",
+ platform.win32_ver()[1],
+ )
#
__msg += "\n"
- __msg += "\n%s%30s : %s"%(__prefix, "platform.python_implementation", platform.python_implementation())
- __msg += "\n%s%30s : %s"%(__prefix, "sys.executable", sys.executable)
- __msg += "\n%s%30s : %s"%(__prefix, "sys.version", sys.version.replace('\n', ''))
- __msg += "\n%s%30s : %s"%(__prefix, "sys.getfilesystemencoding", str(sys.getfilesystemencoding()))
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "platform.python_implementation",
+ platform.python_implementation(),
+ )
+ __msg += "\n%s%30s : %s" % (__prefix, "sys.executable", sys.executable)
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "sys.version",
+ sys.version.replace("\n", ""),
+ )
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "sys.getfilesystemencoding",
+ str(sys.getfilesystemencoding()),
+ )
if sys.version_info.major == 3 and sys.version_info.minor < 11: # Python 3.10
- __msg += "\n%s%30s : %s"%(__prefix, "locale.getdefaultlocale", str(locale.getdefaultlocale()))
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "locale.getdefaultlocale",
+ str(locale.getdefaultlocale()),
+ )
else:
- __msg += "\n%s%30s : %s"%(__prefix, "locale.getlocale", str(locale.getlocale()))
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "locale.getlocale",
+ str(locale.getlocale()),
+ )
__msg += "\n"
- __msg += "\n%s%30s : %s"%(__prefix, "os.cpu_count", os.cpu_count())
- if hasattr(os, 'sched_getaffinity'):
- __msg += "\n%s%30s : %s"%(__prefix, "len(os.sched_getaffinity(0))", len(os.sched_getaffinity(0)))
+ __msg += "\n%s%30s : %s" % (__prefix, "os.cpu_count", os.cpu_count())
+ if hasattr(os, "sched_getaffinity"):
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "len(os.sched_getaffinity(0))",
+ len(os.sched_getaffinity(0)),
+ )
else:
- __msg += "\n%s%30s : %s"%(__prefix, "len(os.sched_getaffinity(0))", "Unsupported on this platform")
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "len(os.sched_getaffinity(0))",
+ "Unsupported on this platform",
+ )
__msg += "\n"
- __msg += "\n%s%30s : %s"%(__prefix, "platform.node", platform.node())
- __msg += "\n%s%30s : %s"%(__prefix, "socket.getfqdn", socket.getfqdn())
- __msg += "\n%s%30s : %s"%(__prefix, "os.path.expanduser", os.path.expanduser('~'))
+ __msg += "\n%s%30s : %s" % (__prefix, "platform.node", platform.node())
+ __msg += "\n%s%30s : %s" % (__prefix, "socket.getfqdn", socket.getfqdn())
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "os.path.expanduser",
+ os.path.expanduser("~"),
+ )
return __msg
def getApplicationInformation(self, __prefix=""):
- __msg = ""
- __msg += "\n%s%30s : %s"%(__prefix, "ADAO version", self.getVersion())
+ __msg = ""
+ __msg += "\n%s%30s : %s" % (__prefix, "ADAO version", self.getVersion())
__msg += "\n"
- __msg += "\n%s%30s : %s"%(__prefix, "Python version", self.getPythonVersion())
- __msg += "\n%s%30s : %s"%(__prefix, "Numpy version", self.getNumpyVersion())
- __msg += "\n%s%30s : %s"%(__prefix, "Scipy version", self.getScipyVersion())
- __msg += "\n%s%30s : %s"%(__prefix, "NLopt version", self.getNloptVersion())
- __msg += "\n%s%30s : %s"%(__prefix, "MatplotLib version", self.getMatplotlibVersion())
- __msg += "\n%s%30s : %s"%(__prefix, "GnuplotPy version", self.getGnuplotVersion())
+ __msg += "\n%s%30s : %s" % (__prefix, "Python version", self.getPythonVersion())
+ __msg += "\n%s%30s : %s" % (__prefix, "Numpy version", self.getNumpyVersion())
+ __msg += "\n%s%30s : %s" % (__prefix, "Scipy version", self.getScipyVersion())
+ __msg += "\n%s%30s : %s" % (__prefix, "NLopt version", self.getNloptVersion())
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "MatplotLib version",
+ self.getMatplotlibVersion(),
+ )
+ __msg += "\n%s%30s : %s" % (
+ __prefix,
+ "GnuplotPy version",
+ self.getGnuplotVersion(),
+ )
__msg += "\n"
- __msg += "\n%s%30s : %s"%(__prefix, "Pandas version", self.getPandasVersion())
- __msg += "\n%s%30s : %s"%(__prefix, "Fmpy version", self.getFmpyVersion())
- __msg += "\n%s%30s : %s"%(__prefix, "Sphinx version", self.getSphinxVersion())
+ __msg += "\n%s%30s : %s" % (__prefix, "Pandas version", self.getPandasVersion())
+ __msg += "\n%s%30s : %s" % (__prefix, "Fmpy version", self.getFmpyVersion())
+ __msg += "\n%s%30s : %s" % (__prefix, "Sphinx version", self.getSphinxVersion())
return __msg
def getAllInformation(self, __prefix="", __title="Whole system information"):
- __msg = ""
+ __msg = ""
if len(__title) > 0:
__msg += "\n" + "=" * 80 + "\n" + __title + "\n" + "=" * 80 + "\n"
__msg += self.getSystemInformation(__prefix)
def getPythonVersion(self):
"Retourne la version de python disponible"
- return ".".join([str(x) for x in sys.version_info[0:3]]) # map(str,sys.version_info[0:3]))
+ return ".".join(
+ [str(x) for x in sys.version_info[0:3]]
+ ) # map(str,sys.version_info[0:3]))
# Tests des modules système
def _has_numpy(self):
try:
import numpy # noqa: F401
+
has_numpy = True
except ImportError:
- raise ImportError("Numpy is not available, despites the fact it is mandatory.")
+ raise ImportError(
+                "Numpy is not available, despite the fact that it is mandatory."
+ )
return has_numpy
- has_numpy = property(fget = _has_numpy)
+
+ has_numpy = property(fget=_has_numpy)
def _has_scipy(self):
try:
import scipy
import scipy.version
import scipy.optimize # noqa: F401
+
has_scipy = True
except ImportError:
has_scipy = False
return has_scipy
- has_scipy = property(fget = _has_scipy)
+
+ has_scipy = property(fget=_has_scipy)
def _has_matplotlib(self):
try:
import matplotlib # noqa: F401
+
has_matplotlib = True
except ImportError:
has_matplotlib = False
return has_matplotlib
- has_matplotlib = property(fget = _has_matplotlib)
+
+ has_matplotlib = property(fget=_has_matplotlib)
def _has_sphinx(self):
try:
import sphinx # noqa: F401
+
has_sphinx = True
except ImportError:
has_sphinx = False
return has_sphinx
- has_sphinx = property(fget = _has_sphinx)
+
+ has_sphinx = property(fget=_has_sphinx)
def _has_nlopt(self):
try:
import nlopt # noqa: F401
+
has_nlopt = True
except ImportError:
has_nlopt = False
return has_nlopt
- has_nlopt = property(fget = _has_nlopt)
+
+ has_nlopt = property(fget=_has_nlopt)
def _has_pandas(self):
try:
import pandas # noqa: F401
+
has_pandas = True
except ImportError:
has_pandas = False
return has_pandas
- has_pandas = property(fget = _has_pandas)
+
+ has_pandas = property(fget=_has_pandas)
def _has_sdf(self):
try:
import sdf # noqa: F401
+
has_sdf = True
except ImportError:
has_sdf = False
return has_sdf
- has_sdf = property(fget = _has_sdf)
+
+ has_sdf = property(fget=_has_sdf)
def _has_fmpy(self):
try:
import fmpy # noqa: F401
+
has_fmpy = True
except ImportError:
has_fmpy = False
return has_fmpy
- has_fmpy = property(fget = _has_fmpy)
+
+ has_fmpy = property(fget=_has_fmpy)
def _has_buildingspy(self):
try:
import buildingspy # noqa: F401
+
has_buildingspy = True
except ImportError:
has_buildingspy = False
return has_buildingspy
- has_buildingspy = property(fget = _has_buildingspy)
+
+ has_buildingspy = property(fget=_has_buildingspy)
def _has_control(self):
try:
import control # noqa: F401
+
has_control = True
except ImportError:
has_control = False
return has_control
- has_control = property(fget = _has_control)
+
+ has_control = property(fget=_has_control)
def _has_modelicares(self):
try:
import modelicares # noqa: F401
+
has_modelicares = True
except ImportError:
has_modelicares = False
return has_modelicares
- has_modelicares = property(fget = _has_modelicares)
+
+ has_modelicares = property(fget=_has_modelicares)
# Tests des modules locaux
def _has_gnuplot(self):
try:
import Gnuplot # noqa: F401
+
has_gnuplot = True
except ImportError:
has_gnuplot = False
return has_gnuplot
- has_gnuplot = property(fget = _has_gnuplot)
+
+ has_gnuplot = property(fget=_has_gnuplot)
def _has_models(self):
try:
import Models # noqa: F401
+
has_models = True
except ImportError:
has_models = False
return has_models
- has_models = property(fget = _has_models)
+
+ has_models = property(fget=_has_models)
def _has_pst4mod(self):
try:
import pst4mod # noqa: F401
+
has_pst4mod = True
except ImportError:
has_pst4mod = False
return has_pst4mod
- has_pst4mod = property(fget = _has_pst4mod)
+
+ has_pst4mod = property(fget=_has_pst4mod)
# Versions
def getNumpyVersion(self):
"Retourne la version de numpy disponible"
import numpy.version
+
return numpy.version.version
def getScipyVersion(self):
"Retourne la version de scipy disponible"
if self.has_scipy:
import scipy
+
__version = scipy.version.version
else:
__version = "0.0.0"
"Retourne la version de nlopt disponible"
if self.has_nlopt:
import nlopt
- __version = "%s.%s.%s"%(
+
+ __version = "%s.%s.%s" % (
nlopt.version_major(),
nlopt.version_minor(),
nlopt.version_bugfix(),
"Retourne la version de matplotlib disponible"
if self.has_matplotlib:
import matplotlib
+
__version = matplotlib.__version__
else:
__version = "0.0.0"
"Retourne la version de pandas disponible"
if self.has_pandas:
import pandas
+
__version = pandas.__version__
else:
__version = "0.0.0"
"Retourne la version de gnuplotpy disponible"
if self.has_gnuplot:
import Gnuplot
+
__version = Gnuplot.__version__
else:
__version = "0.0"
"Retourne la version de fmpy disponible"
if self.has_fmpy:
import fmpy
+
__version = fmpy.__version__
else:
__version = "0.0.0"
"Retourne la version de sdf disponible"
if self.has_sdf:
import sdf
+
__version = sdf.__version__
else:
__version = "0.0.0"
"Retourne la version de sphinx disponible"
if self.has_sphinx:
import sphinx
+
__version = sphinx.__version__
else:
__version = "0.0.0"
def MaximumPrecision(self):
"Retourne la précision maximale flottante pour Numpy"
import numpy
+
try:
- numpy.array([1.,], dtype='float128')
- mfp = 'float128'
+ numpy.array(
+ [
+ 1.0,
+ ],
+ dtype="float128",
+ )
+ mfp = "float128"
except Exception:
- mfp = 'float64'
+ mfp = "float64"
return mfp
def MachinePrecision(self):
def __str__(self):
import daCore.version as dav
- return "%s %s (%s)"%(dav.name, dav.version, dav.date)
+
+ return "%s %s (%s)" % (dav.name, dav.version, dav.date)
+
# ==============================================================================
-def vt( __version ):
+def vt(__version):
"Version transformée pour comparaison robuste, obtenue comme un tuple"
serie = []
for sv in re.split("[_.+-]", __version):
serie.append(sv.zfill(6))
return tuple(serie)
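A standalone restatement of `vt` (same splitting and padding as above) illustrates why the zero-fill matters for robust version comparison:

```python
import re

def vt(version):
    # Zero-pad each component so tuple comparison behaves numerically:
    # plain string comparison would rank "1.2.9" above "1.2.10".
    return tuple(sv.zfill(6) for sv in re.split("[_.+-]", version))

print("1.2.10" > "1.2.9")          # False: lexicographic comparison fails
print(vt("1.2.10") > vt("1.2.9"))  # True: padded tuples compare correctly
print(vt("1.2.3"))                 # → ('000001', '000002', '000003')
```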
-def isIterable( __sequence, __check = False, __header = "" ):
+
+def isIterable(__sequence, __check=False, __header=""):
"""
Vérification que l'argument est un itérable interne.
Remarque : pour permettre le test correct en MultiFonctions,
- Ne pas accepter comme itérable un "numpy.ndarray"
- Ne pas accepter comme itérable avec hasattr(__sequence, "__iter__")
"""
- if isinstance( __sequence, (list, tuple, map, dict) ):
+ if isinstance(__sequence, (list, tuple, map, dict)):
__isOk = True
- elif type(__sequence).__name__ in ('generator', 'range'):
+ elif type(__sequence).__name__ in ("generator", "range"):
__isOk = True
elif "_iterator" in type(__sequence).__name__:
__isOk = True
else:
__isOk = False
if __check and not __isOk:
- raise TypeError("Not iterable or unkown input type%s: %s"%(__header, type(__sequence),))
+ raise TypeError(
+            "Not iterable or unknown input type%s: %s"
+ % (
+ __header,
+ type(__sequence),
+ )
+ )
return __isOk
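The acceptance rules above can be exercised with a minimal sketch (same checks, simplified name, outside the module):

```python
def is_iterable(seq):
    # Accept plain containers, generators, ranges and *_iterator types;
    # reject everything else, notably strings and numpy.ndarray, so that
    # a single vector is not mistaken for a collection of inputs.
    if isinstance(seq, (list, tuple, map, dict)):
        return True
    if type(seq).__name__ in ("generator", "range"):
        return True
    if "_iterator" in type(seq).__name__:
        return True
    return False

print(is_iterable([1, 2, 3]))                # True
print(is_iterable(range(3)))                 # True
print(is_iterable(x * x for x in range(3)))  # True
print(is_iterable("abc"))                    # False
print(is_iterable(123))                      # False
```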
-def date2int( __date: str, __lang="FR" ):
+
+def date2int(__date: str, __lang="FR"):
"""
Fonction de secours, conversion pure : dd/mm/yy hh:mm ---> int(yyyymmddhhmm)
"""
__date = __date.strip()
- if __date.count('/') == 2 and __date.count(':') == 0 and __date.count(' ') == 0:
+ if __date.count("/") == 2 and __date.count(":") == 0 and __date.count(" ") == 0:
d, m, y = __date.split("/")
__number = (10**4) * int(y) + (10**2) * int(m) + int(d)
- elif __date.count('/') == 2 and __date.count(':') == 1 and __date.count(' ') > 0:
+ elif __date.count("/") == 2 and __date.count(":") == 1 and __date.count(" ") > 0:
part1, part2 = __date.split()
d, m, y = part1.strip().split("/")
h, n = part2.strip().split(":")
- __number = (10**8) * int(y) + (10**6) * int(m) + (10**4) * int(d) + (10**2) * int(h) + int(n)
+ __number = (
+ (10**8) * int(y)
+ + (10**6) * int(m)
+ + (10**4) * int(d)
+ + (10**2) * int(h)
+ + int(n)
+ )
else:
- raise ValueError("Cannot convert \"%s\" as a D/M/Y H:M date"%__date)
+ raise ValueError('Cannot convert "%s" as a D/M/Y H:M date' % __date)
return __number
+
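Re-stating the conversion outside the module, with hypothetical dates, shows both accepted formats (note that four-digit years yield `yyyymmdd` / `yyyymmddhhmm` integers):

```python
def date2int(date):
    # "d/m/y" → y*10**4 + m*10**2 + d ; "d/m/y h:m" extends with hours/minutes.
    date = date.strip()
    if date.count("/") == 2 and date.count(":") == 0 and date.count(" ") == 0:
        d, m, y = date.split("/")
        return (10**4) * int(y) + (10**2) * int(m) + int(d)
    part1, part2 = date.split()
    d, m, y = part1.strip().split("/")
    h, n = part2.strip().split(":")
    return ((10**8) * int(y) + (10**6) * int(m) + (10**4) * int(d)
            + (10**2) * int(h) + int(n))

print(date2int("25/12/2023"))        # → 20231225
print(date2int("25/12/2023 14:30"))  # → 202312251430
```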
def vfloat(__value: numpy.ndarray):
"""
Conversion en flottant d'un vecteur de taille 1 et de dimensions quelconques
elif isinstance(__value, (float, int)):
return float(__value)
else:
- raise ValueError("Error in converting multiple float values from array when waiting for only one")
+ raise ValueError(
+            "Error in converting multiple float values from array when expecting only one"
+ )
+
-def strvect2liststr( __strvect ):
+def strvect2liststr(__strvect):
"""
Fonction de secours, conversion d'une chaîne de caractères de
représentation de vecteur en une liste de chaînes de caractères de
__strvect = __strvect.replace(st, " ") # Blanc
return __strvect.split()
-def strmatrix2liststr( __strvect ):
+
+def strmatrix2liststr(__strvect):
"""
Fonction de secours, conversion d'une chaîne de caractères de
représentation de matrice en une liste de chaînes de caractères de
__strvect = __strvect.replace(",", " ") # Blanc
for st in ("]", ")"):
__strvect = __strvect.replace(st, ";") # "]" et ")" par ";"
- __strvect = re.sub(r';\s*;', r';', __strvect)
+ __strvect = re.sub(r";\s*;", r";", __strvect)
__strvect = __strvect.rstrip(";") # Après ^ et avant v
__strmat = [__l.split() for __l in __strvect.split(";")]
return __strmat
-def checkFileNameConformity( __filename, __warnInsteadOfPrint=True ):
+
+def checkFileNameConformity(__filename, __warnInsteadOfPrint=True):
if sys.platform.startswith("win") and len(__filename) > 256:
__conform = False
__msg = (
- " For some shared or older file systems on Windows, a file " + \
- "name longer than 256 characters can lead to access problems." + \
- "\n The name of the file in question is the following:" + \
- "\n %s")%(__filename,)
+ " For some shared or older file systems on Windows, a file "
+ + "name longer than 256 characters can lead to access problems."
+ + "\n The name of the file in question is the following:"
+ + "\n %s"
+ ) % (__filename,)
if __warnInsteadOfPrint:
logging.warning(__msg)
else:
#
return __conform
-def checkFileNameImportability( __filename, __warnInsteadOfPrint=True ):
+
+def checkFileNameImportability(__filename, __warnInsteadOfPrint=True):
if str(__filename).count(".") > 1:
__conform = False
__msg = (
- " The file name contains %i point(s) before the extension " + \
- "separator, which can potentially lead to problems when " + \
- "importing this file into Python, as it can then be recognized " + \
- "as a sub-module (generating a \"ModuleNotFoundError\"). If it " + \
- "is intentional, make sure that there is no module with the " + \
- "same name as the part before the first point, and that there is " + \
- "no \"__init__.py\" file in the same directory." + \
- "\n The name of the file in question is the following:" + \
- "\n %s")%(int(str(__filename).count(".") - 1), __filename)
+ " The file name contains %i point(s) before the extension "
+ + "separator, which can potentially lead to problems when "
+ + "importing this file into Python, as it can then be recognized "
+ + 'as a sub-module (generating a "ModuleNotFoundError"). If it '
+ + "is intentional, make sure that there is no module with the "
+ + "same name as the part before the first point, and that there is "
+ + 'no "__init__.py" file in the same directory.'
+ + "\n The name of the file in question is the following:"
+ + "\n %s"
+ ) % (int(str(__filename).count(".") - 1), __filename)
if __warnInsteadOfPrint is None:
pass
elif __warnInsteadOfPrint:
#
return __conform
+
# ==============================================================================
class SystemUsage(object):
"""
Permet de récupérer les différentes tailles mémoires du process courant
"""
+
__slots__ = ()
#
# Le module resource renvoie 0 pour les tailles mémoire. On utilise donc
# plutôt : http://code.activestate.com/recipes/286222/ et Wikipedia
#
- _proc_status = '/proc/%d/status' % os.getpid()
- _memo_status = '/proc/meminfo'
+ _proc_status = "/proc/%d/status" % os.getpid()
+ _memo_status = "/proc/meminfo"
_scale = {
- 'o' : 1.0, # Multiples SI de l'octet # noqa: E203
- 'ko' : 1.e3, # noqa: E203
- 'Mo' : 1.e6, # noqa: E203
- 'Go' : 1.e9, # noqa: E203
- 'kio': 1024.0, # Multiples binaires de l'octet # noqa: E203
- 'Mio': 1024.0 * 1024.0, # noqa: E203
- 'Gio': 1024.0 * 1024.0 * 1024.0, # noqa: E203
- 'B' : 1.0, # Multiples binaires du byte=octet # noqa: E203
- 'kB' : 1024.0, # noqa: E203
- 'MB' : 1024.0 * 1024.0, # noqa: E203
- 'GB' : 1024.0 * 1024.0 * 1024.0, # noqa: E203
+ "o": 1.0, # Multiples SI de l'octet
+ "ko": 1.0e3,
+ "Mo": 1.0e6,
+ "Go": 1.0e9,
+ "kio": 1024.0, # Multiples binaires de l'octet
+ "Mio": 1024.0 * 1024.0,
+ "Gio": 1024.0 * 1024.0 * 1024.0,
+ "B": 1.0, # Multiples binaires du byte=octet
+ "kB": 1024.0,
+ "MB": 1024.0 * 1024.0,
+ "GB": 1024.0 * 1024.0 * 1024.0,
}
def __init__(self):
v = t.read()
t.close()
except IOError:
- return 0.0 # non-Linux?
- i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
+ return 0.0 # non-Linux?
+ i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
v = v[i:].split(None, 3) # whitespace
if len(v) < 3:
- return 0.0 # invalid format?
+ return 0.0 # invalid format?
# convert Vm value to bytes
mem = float(v[1]) * self._scale[v[2]]
return mem / self._scale[unit]
def getAvailablePhysicalMemory(self, unit="o"):
"Renvoie la mémoire physique utilisable en octets"
- return self._VmA('MemTotal:', unit)
+ return self._VmA("MemTotal:", unit)
def getAvailableSwapMemory(self, unit="o"):
"Renvoie la mémoire swap utilisable en octets"
- return self._VmA('SwapTotal:', unit)
+ return self._VmA("SwapTotal:", unit)
def getAvailableMemory(self, unit="o"):
"Renvoie la mémoire totale (physique+swap) utilisable en octets"
- return self._VmA('MemTotal:', unit) + self._VmA('SwapTotal:', unit)
+ return self._VmA("MemTotal:", unit) + self._VmA("SwapTotal:", unit)
def getUsableMemory(self, unit="o"):
"""Renvoie la mémoire utilisable en octets
Rq : il n'est pas sûr que ce décompte soit juste...
"""
- return self._VmA('MemFree:', unit) + self._VmA('SwapFree:', unit) + \
- self._VmA('Cached:', unit) + self._VmA('SwapCached:', unit)
+ return (
+ self._VmA("MemFree:", unit)
+ + self._VmA("SwapFree:", unit)
+ + self._VmA("Cached:", unit)
+ + self._VmA("SwapCached:", unit)
+ )
def _VmB(self, VmKey, unit):
"Lecture des paramètres mémoire du processus"
v = t.read()
t.close()
except IOError:
- return 0.0 # non-Linux?
- i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
+ return 0.0 # non-Linux?
+ i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...'
v = v[i:].split(None, 3) # whitespace
if len(v) < 3:
- return 0.0 # invalid format?
+ return 0.0 # invalid format?
# convert Vm value to bytes
mem = float(v[1]) * self._scale[v[2]]
return mem / self._scale[unit]
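All the `_VmA`/`_VmB` getters share one parsing pattern: locate the key line in the `/proc` text, split on whitespace, and rescale. A self-contained sketch of just that pattern (the `parse_proc_value` name and the sample text are invented for illustration; the real methods read `/proc` themselves):

```python
# Invented standalone helper mirroring the _VmB parsing logic: find the
# key (e.g. "VmRSS:"), take the next two whitespace-separated fields
# (value and unit), and convert into the requested output unit.
def parse_proc_value(text, key, scale, unit="o"):
    try:
        i = text.index(key)           # locate the "VmRSS:" line
    except ValueError:
        return 0.0                    # key absent: behave like non-Linux
    fields = text[i:].split(None, 3)  # ["VmRSS:", "9999", "kB", ...]
    if len(fields) < 3:
        return 0.0                    # invalid format
    return float(fields[1]) * scale[fields[2]] / scale[unit]

scale = {"o": 1.0, "kB": 1024.0, "Mio": 1024.0 * 1024.0}
sample = "VmPeak:\t  123 kB\nVmRSS:\t 9999 kB\n"
print(parse_proc_value(sample, "VmRSS:", scale))  # resident size in bytes
```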
def getUsedMemory(self, unit="o"):
"Renvoie la mémoire résidente utilisée en octets"
- return self._VmB('VmRSS:', unit)
+ return self._VmB("VmRSS:", unit)
def getVirtualMemory(self, unit="o"):
"Renvoie la mémoire totale utilisée en octets"
- return self._VmB('VmSize:', unit)
+ return self._VmB("VmSize:", unit)
def getUsedStacksize(self, unit="o"):
"Renvoie la taille du stack utilisé en octets"
- return self._VmB('VmStk:', unit)
+ return self._VmB("VmStk:", unit)
def getMaxUsedMemory(self, unit="o"):
"Renvoie la mémoire résidente maximale mesurée"
- return self._VmB('VmHWM:', unit)
+ return self._VmB("VmHWM:", unit)
def getMaxVirtualMemory(self, unit="o"):
"Renvoie la mémoire totale maximale mesurée"
- return self._VmB('VmPeak:', unit)
+ return self._VmB("VmPeak:", unit)
+
# ==============================================================================
if __name__ == "__main__":
# ==============================================================================
# Classes de services non utilisateur
+
class _ReportPartM__(object):
"""
Store and retrieve the data for C: internal class
"""
+
__slots__ = ("__part", "__styles", "__content")
def __init__(self, part="default"):
- self.__part = str(part)
- self.__styles = []
+ self.__part = str(part)
+ self.__styles = []
self.__content = []
def append(self, content, style="p", position=-1):
def get_content(self):
return self.__content
+
class _ReportM__(object):
"""
Store and retrieve the data for C: internal class
"""
- __slots__ = ("__document")
- def __init__(self, part='default'):
+ __slots__ = ("__document",)
+
+ def __init__(self, part="default"):
self.__document = {}
self.__document[part] = _ReportPartM__(part)
- def append(self, content, style="p", position=-1, part='default'):
+ def append(self, content, style="p", position=-1, part="default"):
if part not in self.__document:
self.__document[part] = _ReportPartM__(part)
self.__document[part].append(content, style, position)
def clear(self):
self.__init__()
+
class __ReportC__(object):
"""
        Get user commands, update M and V: user interface to create the report
"""
+
__slots__ = ()
#
m = _ReportM__()
def clear(self):
self.m.clear()
+
class __ReportV__(object):
"""
Interact with user and C: template for reports
"""
- __slots__ = ("c")
+
+ __slots__ = ("c",)
#
default_filename = "report.txt"
_filename = os.path.abspath(filename)
#
_inside = self.get()
- fid = open(_filename, 'w')
+ fid = open(_filename, "w")
fid.write(_inside)
fid.close()
return filename, _filename
del self.c
return 0
+
# ==============================================================================
# Classes d'interface utilisateur : ReportViewIn*, ReportStorage
# Tags de structure : (title, h1, h2, h3, p, uli, oli, <b></b>, <i></i>)
+
class ReportViewInHtml(__ReportV__):
"""
Report in HTML
"""
+
__slots__ = ()
#
default_filename = "report.html"
try:
ii = ps.index("title")
title = pc[ii]
- pg += "%s\n%s\n%s"%('<hr noshade><h1 align="center">', title, '</h1><hr noshade>')
+ pg += "%s\n%s\n%s" % (
+ '<hr noshade><h1 align="center">',
+ title,
+ "</h1><hr noshade>",
+ )
except Exception:
pass
for ip, sp in enumerate(ps):
for tp in self.tags:
if sp == tp:
sp = self.tags[tp]
- pg += "\n<%s>%s</%s>"%(sp, cp, sp)
+ pg += "\n<%s>%s</%s>" % (sp, cp, sp)
pg += "\n</body>\n</html>"
return pg
+
class ReportViewInRst(__ReportV__):
"""
Report in RST
"""
+
__slots__ = ()
#
default_filename = "report.rst"
try:
ii = ps.index("title")
title = pc[ii]
- pg += "%s\n%s\n%s"%("=" * 80, title, "=" * 80)
+ pg += "%s\n%s\n%s" % ("=" * 80, title, "=" * 80)
except Exception:
pass
for ip, sp in enumerate(ps):
for tp in self.translation:
cp = cp.replace(tp, self.translation[tp])
if sp in self.titles.keys():
- pg += "\n%s\n%s\n%s"%(self.titles[sp][0] * len(cp), cp, self.titles[sp][1] * len(cp))
+ pg += "\n%s\n%s\n%s" % (
+ self.titles[sp][0] * len(cp),
+ cp,
+ self.titles[sp][1] * len(cp),
+ )
elif sp in self.tags.keys():
- pg += "%s%s%s"%(self.tags[sp][0], cp, self.tags[sp][1])
+ pg += "%s%s%s" % (self.tags[sp][0], cp, self.tags[sp][1])
pg += "\n"
return pg
+
class ReportViewInPlainTxt(__ReportV__):
"""
Report in plain TXT
"""
+
#
__slots__ = ()
#
try:
ii = ps.index("title")
title = pc[ii]
- pg += "%s\n%s\n%s"%("=" * 80, title, "=" * 80)
+ pg += "%s\n%s\n%s" % ("=" * 80, title, "=" * 80)
except Exception:
pass
for ip, sp in enumerate(ps):
for tp in self.translation:
cp = cp.replace(tp, self.translation[tp])
if sp in self.titles.keys():
- pg += "\n%s\n%s\n%s"%(self.titles[sp][0] * len(cp), cp, -self.titles[sp][1] * len(cp))
+ pg += "\n%s\n%s\n%s" % (
+ self.titles[sp][0] * len(cp),
+ cp,
+ -self.titles[sp][1] * len(cp),
+ )
elif sp in self.tags.keys():
- pg += "\n%s%s%s"%(self.tags[sp][0], cp, self.tags[sp][1])
+ pg += "\n%s%s%s" % (self.tags[sp][0], cp, self.tags[sp][1])
pg += "\n"
return pg
+
# Interface utilisateur de stockage des informations
ReportStorage = __ReportC__
import numpy
+
# ==============================================================================
class TemplateStorage(object):
"""
Classe générale de stockage de type dictionnaire étendu
(Template)
"""
+
__slots__ = ("__preferedLanguage", "__values", "__order")
- def __init__( self, language = "fr_FR" ):
+ def __init__(self, language="fr_FR"):
self.__preferedLanguage = language
- self.__values = {}
- self.__order = -1
+ self.__values = {}
+ self.__order = -1
- def store( self, name = None, content = None, fr_FR = "", en_EN = "", order = "next" ):
+ def store(self, name=None, content=None, fr_FR="", en_EN="", order="next"):
"D.store(k, c, fr_FR, en_EN, o) -> Store template k and its main characteristics"
if name is None or content is None:
- raise ValueError("To be consistent, the storage of a template must provide a name and a content.")
+ raise ValueError(
+ "To be consistent, the storage of a template must provide a name and a content."
+ )
if order == "next":
self.__order += 1
else:
self.__order = int(order)
self.__values[str(name)] = {
- 'content': str(content),
- 'fr_FR' : str(fr_FR), # noqa: E203
- 'en_EN' : str(en_EN), # noqa: E203
- 'order' : int(self.__order), # noqa: E203
+ "content": str(content),
+ "fr_FR": str(fr_FR),
+ "en_EN": str(en_EN),
+ "order": int(self.__order),
}
def keys(self):
"x.__len__() <==> len(x)"
return len(self.__values)
- def __getitem__(self, name=None ):
+ def __getitem__(self, name=None):
"x.__getitem__(y) <==> x[y]"
- return self.__values[name]['content']
+ return self.__values[name]["content"]
- def getdoc(self, name = None, lang = "fr_FR"):
+ def getdoc(self, name=None, lang="fr_FR"):
"D.getdoc(k, l) -> Return documentation of key k in language l"
if lang not in self.__values[name]:
lang = self.__preferedLanguage
"D.keys_in_presentation_order() -> list of D's keys in presentation order"
__orders = []
for ik in self.keys():
- __orders.append( self.__values[ik]['order'] )
+ __orders.append(self.__values[ik]["order"])
__reorder = numpy.array(__orders).argsort()
return (numpy.array(self.keys())[__reorder]).tolist()
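A quick standalone sketch of the ordering contract that `keys_in_presentation_order` implements (re-coded here with plain functions for illustration, not the class itself): keys come back sorted by their stored `order`, not by insertion order.

```python
import numpy

# Hypothetical re-coding of the ordering contract: each stored entry
# carries an integer "order", and keys are returned sorted by it.
values = {}

def store(name, content, order):
    values[str(name)] = {"content": str(content), "order": int(order)}

def keys_in_presentation_order():
    keys = list(values.keys())
    orders = [values[k]["order"] for k in keys]
    reorder = numpy.array(orders).argsort()  # same argsort trick as above
    return (numpy.array(keys)[reorder]).tolist()

store("b", "...", 2)
store("a", "...", 0)
store("c", "...", 1)
print(keys_in_presentation_order())  # ['a', 'c', 'b']
```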
+
# ==============================================================================
ObserverTemplates = TemplateStorage()
ObserverTemplates.store(
- name = "ValuePrinter",
- content = """print(str(info)+" "+str(var[-1]))""",
- fr_FR = "Imprime sur la sortie standard la valeur courante de la variable",
- en_EN = "Print on standard output the current value of the variable",
- order = "next",
+ name="ValuePrinter",
+ content="""print(str(info)+" "+str(var[-1]))""",
+ fr_FR="Imprime sur la sortie standard la valeur courante de la variable",
+ en_EN="Print on standard output the current value of the variable",
+ order="next",
)
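For context, an observer template body is plain Python that runs with two injected names, `var` (the series of values of the observed variable, hence `var[-1]` and `var[:]` in the bodies below) and `info` (a label). A minimal sketch of running the `ValuePrinter` body by hand (the sample values are invented):

```python
# The template body is executed with `var` and `info` already in scope.
template = """print(str(info)+" "+str(var[-1]))"""
var = [[1.0, 2.0], [3.0, 4.0]]  # invented history of the observed variable
info = "Analysis"
exec(template)  # prints the label followed by the latest value
```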
ObserverTemplates.store(
- name = "ValueAndIndexPrinter",
- content = """print(str(info)+(" index %i:"%(len(var)-1))+" "+str(var[-1]))""",
- fr_FR = "Imprime sur la sortie standard la valeur courante de la variable, en ajoutant son index",
- en_EN = "Print on standard output the current value of the variable, adding its index",
- order = "next",
+ name="ValueAndIndexPrinter",
+ content="""print(str(info)+(" index %i:"%(len(var)-1))+" "+str(var[-1]))""",
+ fr_FR="Imprime sur la sortie standard la valeur courante de la variable, en ajoutant son index",
+ en_EN="Print on standard output the current value of the variable, adding its index",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSeriePrinter",
- content = """print(str(info)+" "+str(var[:]))""",
- fr_FR = "Imprime sur la sortie standard la série des valeurs de la variable",
- en_EN = "Print on standard output the value series of the variable",
- order = "next",
+ name="ValueSeriePrinter",
+ content="""print(str(info)+" "+str(var[:]))""",
+ fr_FR="Imprime sur la sortie standard la série des valeurs de la variable",
+ en_EN="Print on standard output the value series of the variable",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSaver",
- content = """import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
- fr_FR = "Enregistre la valeur courante de la variable dans un fichier du répertoire '/tmp' nommé 'value...txt' selon le nom de la variable et l'étape d'enregistrement",
- en_EN = "Save the current value of the variable in a file of the '/tmp' directory named 'value...txt' from the variable name and the saving step",
- order = "next",
+ name="ValueSaver",
+ content="""import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
+ fr_FR="Enregistre la valeur courante de la variable dans un fichier du répertoire '/tmp' nommé 'value...txt' selon le nom de la variable et l'étape d'enregistrement",
+ en_EN="Save the current value of the variable in a file of the '/tmp' directory named 'value...txt' from the variable name and the saving step",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSerieSaver",
- content = """import numpy, re\nv=numpy.array(var[:], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
- fr_FR = "Enregistre la série des valeurs de la variable dans un fichier du répertoire '/tmp' nommé 'value...txt' selon le nom de la variable et l'étape",
- en_EN = "Save the value series of the variable in a file of the '/tmp' directory named 'value...txt' from the variable name and the saving step",
- order = "next",
+ name="ValueSerieSaver",
+ content="""import numpy, re\nv=numpy.array(var[:], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
+ fr_FR="Enregistre la série des valeurs de la variable dans un fichier du répertoire '/tmp' nommé 'value...txt' selon le nom de la variable et l'étape",
+ en_EN="Save the value series of the variable in a file of the '/tmp' directory named 'value...txt' from the variable name and the saving step",
+ order="next",
)
ObserverTemplates.store(
- name = "ValuePrinterAndSaver",
- content = """import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nprint(str(info)+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
- fr_FR = "Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur courante de la variable",
- en_EN = "Print on standard output and, in the same time save in a file of the '/tmp' directory, the current value of the variable",
- order = "next",
+ name="ValuePrinterAndSaver",
+ content="""import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nprint(str(info)+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
+ fr_FR="Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur courante de la variable",
+ en_EN="Print on standard output and, at the same time, save in a file of the '/tmp' directory, the current value of the variable",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueIndexPrinterAndSaver",
- content = """import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nprint(str(info)+(" index %i:"%(len(var)-1))+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
- fr_FR = "Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur courante de la variable, en ajoutant son index",
- en_EN = "Print on standard output and, in the same time save in a file of the '/tmp' directory, the current value of the variable, adding its index",
- order = "next",
+ name="ValueIndexPrinterAndSaver",
+ content="""import numpy, re\nv=numpy.array(var[-1], ndmin=1)\nprint(str(info)+(" index %i:"%(len(var)-1))+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
+ fr_FR="Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur courante de la variable, en ajoutant son index",
+ en_EN="Print on standard output and, at the same time, save in a file of the '/tmp' directory, the current value of the variable, adding its index",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSeriePrinterAndSaver",
- content = """import numpy, re\nv=numpy.array(var[:], ndmin=1)\nprint(str(info)+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
- fr_FR = "Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp', la série des valeurs de la variable",
- en_EN = "Print on standard output and, in the same time, save in a file of the '/tmp' directory, the value series of the variable",
- order = "next",
+ name="ValueSeriePrinterAndSaver",
+ content="""import numpy, re\nv=numpy.array(var[:], ndmin=1)\nprint(str(info)+" "+str(v))\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)""",
+ fr_FR="Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp', la série des valeurs de la variable",
+ en_EN="Print on standard output and, at the same time, save in a file of the '/tmp' directory, the value series of the variable",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueGnuPlotter",
- content = """import numpy, Gnuplot\nv=numpy.array(var[-1], ndmin=1)\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
- fr_FR = "Affiche graphiquement avec Gnuplot la valeur courante de la variable (affichage persistant)",
- en_EN = "Graphically plot with Gnuplot the current value of the variable (persistent plot)",
- order = "next",
+ name="ValueGnuPlotter",
+ content="""import numpy, Gnuplot\nv=numpy.array(var[-1], ndmin=1)\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
+ fr_FR="Affiche graphiquement avec Gnuplot la valeur courante de la variable (affichage persistant)",
+ en_EN="Graphically plot with Gnuplot the current value of the variable (persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSerieGnuPlotter",
- content = """import numpy, Gnuplot\nv=numpy.array(var[:], ndmin=1)\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\n gp('set xlabel \"Step\"')\n gp('set ylabel \"Variable\"')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
- fr_FR = "Affiche graphiquement avec Gnuplot la série des valeurs de la variable (affichage persistant)",
- en_EN = "Graphically plot with Gnuplot the value series of the variable (persistent plot)",
- order = "next",
+ name="ValueSerieGnuPlotter",
+ content="""import numpy, Gnuplot\nv=numpy.array(var[:], ndmin=1)\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\n gp('set xlabel \"Step\"')\n gp('set ylabel \"Variable\"')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
+ fr_FR="Affiche graphiquement avec Gnuplot la série des valeurs de la variable (affichage persistant)",
+ en_EN="Graphically plot with Gnuplot the value series of the variable (persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValuePrinterAndGnuPlotter",
- content = """print(str(info)+' '+str(var[-1]))\nimport numpy, Gnuplot\nv=numpy.array(var[-1], ndmin=1)\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
- fr_FR = "Imprime sur la sortie standard et, en même temps, affiche graphiquement avec Gnuplot la valeur courante de la variable (affichage persistant)",
- en_EN = "Print on standard output and, in the same time, graphically plot with Gnuplot the current value of the variable (persistent plot)",
- order = "next",
+ name="ValuePrinterAndGnuPlotter",
+ content="""print(str(info)+' '+str(var[-1]))\nimport numpy, Gnuplot\nv=numpy.array(var[-1], ndmin=1)\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
+ fr_FR="Imprime sur la sortie standard et, en même temps, affiche graphiquement avec Gnuplot la valeur courante de la variable (affichage persistant)",
+ en_EN="Print on standard output and, at the same time, graphically plot with Gnuplot the current value of the variable (persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSeriePrinterAndGnuPlotter",
- content = """print(str(info)+' '+str(var[:]))\nimport numpy, Gnuplot\nv=numpy.array(var[:], ndmin=1)\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\n gp('set xlabel \"Step\"')\n gp('set ylabel \"Variable\"')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
- fr_FR = "Imprime sur la sortie standard et, en même temps, affiche graphiquement avec Gnuplot la série des valeurs de la variable (affichage persistant)",
- en_EN = "Print on standard output and, in the same time, graphically plot with Gnuplot the value series of the variable (persistent plot)",
- order = "next",
+ name="ValueSeriePrinterAndGnuPlotter",
+ content="""print(str(info)+' '+str(var[:]))\nimport numpy, Gnuplot\nv=numpy.array(var[:], ndmin=1)\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\n gp('set xlabel \"Step\"')\n gp('set ylabel \"Variable\"')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
+ fr_FR="Imprime sur la sortie standard et, en même temps, affiche graphiquement avec Gnuplot la série des valeurs de la variable (affichage persistant)",
+ en_EN="Print on standard output and, at the same time, graphically plot with Gnuplot the value series of the variable (persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValuePrinterSaverAndGnuPlotter",
- content = """print(str(info)+' '+str(var[-1]))\nimport numpy, re\nv=numpy.array(var[-1], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)\nimport Gnuplot\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
- fr_FR = "Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp' et affiche graphiquement la valeur courante de la variable (affichage persistant)",
- en_EN = "Print on standard output and, in the same, time save in a file of the '/tmp' directory and graphically plot the current value of the variable (persistent plot)",
- order = "next",
+ name="ValuePrinterSaverAndGnuPlotter",
+ content="""print(str(info)+' '+str(var[-1]))\nimport numpy, re\nv=numpy.array(var[-1], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)\nimport Gnuplot\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
+ fr_FR="Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp' et affiche graphiquement la valeur courante de la variable (affichage persistant)",
+ en_EN="Print on standard output and, at the same time, save in a file of the '/tmp' directory and graphically plot the current value of the variable (persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSeriePrinterSaverAndGnuPlotter",
- content = """print(str(info)+' '+str(var[:]))\nimport numpy, re\nv=numpy.array(var[:], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)\nimport Gnuplot\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\n gp('set xlabel \"Step\"')\n gp('set ylabel \"Variable\"')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
- fr_FR = "Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp' et affiche graphiquement la série des valeurs de la variable (affichage persistant)",
- en_EN = "Print on standard output and, in the same, time save in a file of the '/tmp' directory and graphically plot the value series of the variable (persistent plot)",
- order = "next",
+ name="ValueSeriePrinterSaverAndGnuPlotter",
+ content="""print(str(info)+' '+str(var[:]))\nimport numpy, re\nv=numpy.array(var[:], ndmin=1)\nglobal istep\ntry:\n istep+=1\nexcept:\n istep=0\nf='/tmp/value_%s_%05i.txt'%(info,istep)\nf=re.sub(r'\\s','_',f)\nprint('Value saved in \"%s\"'%f)\nnumpy.savetxt(f,v)\nimport Gnuplot\nglobal igfig, gp\ntry:\n igfig+=1\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\nexcept:\n igfig=0\n gp=Gnuplot.Gnuplot(persist=1)\n gp('set title \"%s (Figure %i)\"'%(info,igfig))\n gp('set style data lines')\n gp('set xlabel \"Step\"')\n gp('set ylabel \"Variable\"')\ngp.plot( Gnuplot.Data( v, with_='lines lw 2' ) )""",
+ fr_FR="Imprime sur la sortie standard et, en même temps, enregistre dans un fichier du répertoire '/tmp' et affiche graphiquement la série des valeurs de la variable (affichage persistant)",
+ en_EN="Print on standard output and, at the same time, save in a file of the '/tmp' directory and graphically plot the value series of the variable (persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueMatPlotter",
- content = """import numpy\nimport matplotlib.pyplot as plt\nv=numpy.array(var[-1], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nax.plot(v)\nplt.show()""",
- fr_FR = "Affiche graphiquement avec Matplolib la valeur courante de la variable (affichage non persistant)",
- en_EN = "Graphically plot with Matplolib the current value of the variable (non persistent plot)",
- order = "next",
+ name="ValueMatPlotter",
+ content="""import numpy\nimport matplotlib.pyplot as plt\nv=numpy.array(var[-1], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nax.plot(v)\nplt.show()""",
+ fr_FR="Affiche graphiquement avec Matplotlib la valeur courante de la variable (affichage non persistant)",
+ en_EN="Graphically plot with Matplotlib the current value of the variable (non-persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueMatPlotterSaver",
- content = """import numpy, re\nimport matplotlib.pyplot as plt\nv=numpy.array(var[-1], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nax.plot(v)\nf='/tmp/figure_%s_%05i.pdf'%(info,imfig)\nf=re.sub(r'\\s','_',f)\nplt.savefig(f)\nplt.show()""",
- fr_FR = "Affiche graphiquement avec Matplolib la valeur courante de la variable, et enregistre la figure dans un fichier du répertoire '/tmp' (figure persistante)",
- en_EN = "Graphically plot with Matplolib the current value of the variable, and save the figure in a file of the '/tmp' directory (persistant figure)",
- order = "next",
+ name="ValueMatPlotterSaver",
+ content="""import numpy, re\nimport matplotlib.pyplot as plt\nv=numpy.array(var[-1], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nax.plot(v)\nf='/tmp/figure_%s_%05i.pdf'%(info,imfig)\nf=re.sub(r'\\s','_',f)\nplt.savefig(f)\nplt.show()""",
+ fr_FR="Affiche graphiquement avec Matplotlib la valeur courante de la variable, et enregistre la figure dans un fichier du répertoire '/tmp' (figure persistante)",
+ en_EN="Graphically plot with Matplotlib the current value of the variable, and save the figure in a file of the '/tmp' directory (persistent figure)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSerieMatPlotter",
- content = """import numpy\nimport matplotlib.pyplot as plt\nv=numpy.array(var[:], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\n ax.set_xlabel('Step')\n ax.set_ylabel('Variable')\nax.plot(v)\nplt.show()""",
- fr_FR = "Affiche graphiquement avec Matplolib la série des valeurs de la variable (affichage non persistant)",
- en_EN = "Graphically plot with Matplolib the value series of the variable (non persistent plot)",
- order = "next",
+ name="ValueSerieMatPlotter",
+ content="""import numpy\nimport matplotlib.pyplot as plt\nv=numpy.array(var[:], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\n ax.set_xlabel('Step')\n ax.set_ylabel('Variable')\nax.plot(v)\nplt.show()""",
+ fr_FR="Affiche graphiquement avec Matplotlib la série des valeurs de la variable (affichage non persistant)",
+ en_EN="Graphically plot with Matplotlib the value series of the variable (non-persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSerieMatPlotterSaver",
- content = """import numpy, re\nimport matplotlib.pyplot as plt\nv=numpy.array(var[:], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\n ax.set_xlabel('Step')\n ax.set_ylabel('Variable')\nax.plot(v)\nf='/tmp/figure_%s_%05i.pdf'%(info,imfig)\nf=re.sub(r'\\s','_',f)\nplt.savefig(f)\nplt.show()""",
- fr_FR = "Affiche graphiquement avec Matplolib la série des valeurs de la variable, et enregistre la figure dans un fichier du répertoire '/tmp' (figure persistante)",
- en_EN = "Graphically plot with Matplolib the value series of the variable, and save the figure in a file of the '/tmp' directory (persistant figure)",
- order = "next",
+ name="ValueSerieMatPlotterSaver",
+ content="""import numpy, re\nimport matplotlib.pyplot as plt\nv=numpy.array(var[:], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\n ax.set_xlabel('Step')\n ax.set_ylabel('Variable')\nax.plot(v)\nf='/tmp/figure_%s_%05i.pdf'%(info,imfig)\nf=re.sub(r'\\s','_',f)\nplt.savefig(f)\nplt.show()""",
+ fr_FR="Affiche graphiquement avec Matplotlib la série des valeurs de la variable, et enregistre la figure dans un fichier du répertoire '/tmp' (figure persistante)",
+ en_EN="Graphically plot with Matplotlib the value series of the variable, and save the figure in a file of the '/tmp' directory (persistent figure)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValuePrinterAndMatPlotter",
- content = """print(str(info)+' '+str(var[-1]))\nimport numpy\nimport matplotlib.pyplot as plt\nv=numpy.array(var[-1], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nax.plot(v)\nplt.show()""",
- fr_FR = "Affiche graphiquement avec Matplolib la valeur courante de la variable (affichage non persistant)",
- en_EN = "Graphically plot with Matplolib the current value of the variable (non persistent plot)",
- order = "next",
+ name="ValuePrinterAndMatPlotter",
+ content="""print(str(info)+' '+str(var[-1]))\nimport numpy\nimport matplotlib.pyplot as plt\nv=numpy.array(var[-1], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nax.plot(v)\nplt.show()""",
+ fr_FR="Affiche graphiquement avec Matplotlib la valeur courante de la variable (affichage non persistant)",
+ en_EN="Graphically plot with Matplotlib the current value of the variable (non-persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValuePrinterAndMatPlotterSaver",
- content = """print(str(info)+' '+str(var[-1]))\nimport numpy, re\nimport matplotlib.pyplot as plt\nv=numpy.array(var[-1], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nax.plot(v)\nf='/tmp/figure_%s_%05i.pdf'%(info,imfig)\nf=re.sub(r'\\s','_',f)\nplt.savefig(f)\nplt.show()""",
- fr_FR = "Affiche graphiquement avec Matplolib la valeur courante de la variable, et enregistre la figure dans un fichier du répertoire '/tmp' (figure persistante)",
- en_EN = "Graphically plot with Matplolib the current value of the variable, and save the figure in a file of the '/tmp' directory (persistant figure)",
- order = "next",
+ name="ValuePrinterAndMatPlotterSaver",
+ content="""print(str(info)+' '+str(var[-1]))\nimport numpy, re\nimport matplotlib.pyplot as plt\nv=numpy.array(var[-1], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nax.plot(v)\nf='/tmp/figure_%s_%05i.pdf'%(info,imfig)\nf=re.sub(r'\\s','_',f)\nplt.savefig(f)\nplt.show()""",
+ fr_FR="Affiche graphiquement avec Matplotlib la valeur courante de la variable, et enregistre la figure dans un fichier du répertoire '/tmp' (figure persistante)",
+ en_EN="Graphically plot with Matplotlib the current value of the variable, and save the figure in a file of the '/tmp' directory (persistent figure)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSeriePrinterAndMatPlotter",
- content = """print(str(info)+' '+str(var[:]))\nimport numpy\nimport matplotlib.pyplot as plt\nv=numpy.array(var[:], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\n ax.set_xlabel('Step')\n ax.set_ylabel('Variable')\nax.plot(v)\nplt.show()""",
- fr_FR = "Affiche graphiquement avec Matplolib la série des valeurs de la variable (affichage non persistant)",
- en_EN = "Graphically plot with Matplolib the value series of the variable (non persistent plot)",
- order = "next",
+ name="ValueSeriePrinterAndMatPlotter",
+ content="""print(str(info)+' '+str(var[:]))\nimport numpy\nimport matplotlib.pyplot as plt\nv=numpy.array(var[:], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\n ax.set_xlabel('Step')\n ax.set_ylabel('Variable')\nax.plot(v)\nplt.show()""",
+ fr_FR="Affiche graphiquement avec Matplotlib la série des valeurs de la variable (affichage non persistant)",
+ en_EN="Graphically plot with Matplotlib the value series of the variable (non-persistent plot)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueSeriePrinterAndMatPlotterSaver",
- content = """print(str(info)+' '+str(var[:]))\nimport numpy, re\nimport matplotlib.pyplot as plt\nv=numpy.array(var[:], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\n ax.set_xlabel('Step')\n ax.set_ylabel('Variable')\nax.plot(v)\nf='/tmp/figure_%s_%05i.pdf'%(info,imfig)\nf=re.sub(r'\\s','_',f)\nplt.savefig(f)\nplt.show()""",
- fr_FR = "Affiche graphiquement avec Matplolib la série des valeurs de la variable, et enregistre la figure dans un fichier du répertoire '/tmp' (figure persistante)",
- en_EN = "Graphically plot with Matplolib the value series of the variable, and save the figure in a file of the '/tmp' directory (persistant figure)",
- order = "next",
+ name="ValueSeriePrinterAndMatPlotterSaver",
+ content="""print(str(info)+' '+str(var[:]))\nimport numpy, re\nimport matplotlib.pyplot as plt\nv=numpy.array(var[:], ndmin=1)\nglobal imfig, mp, ax\nplt.ion()\ntry:\n imfig+=1\n mp.suptitle('%s (Figure %i)'%(info,imfig))\nexcept:\n imfig=0\n mp = plt.figure()\n ax = mp.add_subplot(1, 1, 1)\n mp.suptitle('%s (Figure %i)'%(info,imfig))\n ax.set_xlabel('Step')\n ax.set_ylabel('Variable')\nax.plot(v)\nf='/tmp/figure_%s_%05i.pdf'%(info,imfig)\nf=re.sub(r'\\s','_',f)\nplt.savefig(f)\nplt.show()""",
+ fr_FR="Affiche graphiquement avec Matplotlib la série des valeurs de la variable, et enregistre la figure dans un fichier du répertoire '/tmp' (figure persistante)",
+ en_EN="Graphically plot with Matplotlib the value series of the variable, and save the figure in a file of the '/tmp' directory (persistent figure)",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueMean",
- content = """import numpy\nprint(str(info)+' '+str(numpy.nanmean(var[-1])))""",
- fr_FR = "Imprime sur la sortie standard la moyenne de la valeur courante de la variable",
- en_EN = "Print on standard output the mean of the current value of the variable",
- order = "next",
+ name="ValueMean",
+ content="""import numpy\nprint(str(info)+' '+str(numpy.nanmean(var[-1])))""",
+ fr_FR="Imprime sur la sortie standard la moyenne de la valeur courante de la variable",
+ en_EN="Print on standard output the mean of the current value of the variable",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueStandardError",
- content = """import numpy\nprint(str(info)+' '+str(numpy.nanstd(var[-1])))""",
- fr_FR = "Imprime sur la sortie standard l'écart-type de la valeur courante de la variable",
- en_EN = "Print on standard output the standard error of the current value of the variable",
- order = "next",
+ name="ValueStandardError",
+ content="""import numpy\nprint(str(info)+' '+str(numpy.nanstd(var[-1])))""",
+ fr_FR="Imprime sur la sortie standard l'écart-type de la valeur courante de la variable",
+ en_EN="Print on standard output the standard error of the current value of the variable",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueVariance",
- content = """import numpy\nprint(str(info)+' '+str(numpy.nanvar(var[-1])))""",
- fr_FR = "Imprime sur la sortie standard la variance de la valeur courante de la variable",
- en_EN = "Print on standard output the variance of the current value of the variable",
- order = "next",
+ name="ValueVariance",
+ content="""import numpy\nprint(str(info)+' '+str(numpy.nanvar(var[-1])))""",
+ fr_FR="Imprime sur la sortie standard la variance de la valeur courante de la variable",
+ en_EN="Print on standard output the variance of the current value of the variable",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueL2Norm",
- content = """import numpy\nv = numpy.ravel( var[-1] )\nprint(str(info)+' '+str(float( numpy.linalg.norm(v) )))""",
- fr_FR = "Imprime sur la sortie standard la norme L2 de la valeur courante de la variable",
- en_EN = "Print on standard output the L2 norm of the current value of the variable",
- order = "next",
+ name="ValueL2Norm",
+ content="""import numpy\nv = numpy.ravel( var[-1] )\nprint(str(info)+' '+str(float( numpy.linalg.norm(v) )))""",
+ fr_FR="Imprime sur la sortie standard la norme L2 de la valeur courante de la variable",
+ en_EN="Print on standard output the L2 norm of the current value of the variable",
+ order="next",
)
ObserverTemplates.store(
- name = "ValueRMS",
- content = """import numpy\nv = numpy.ravel( var[-1] )\nprint(str(info)+' '+str(float( numpy.sqrt((1./v.size)*numpy.dot(v,v)) )))""",
- fr_FR = "Imprime sur la sortie standard la racine de la moyenne des carrés (RMS), ou moyenne quadratique, de la valeur courante de la variable",
- en_EN = "Print on standard output the root mean square (RMS), or quadratic mean, of the current value of the variable",
- order = "next",
+ name="ValueRMS",
+ content="""import numpy\nv = numpy.ravel( var[-1] )\nprint(str(info)+' '+str(float( numpy.sqrt((1./v.size)*numpy.dot(v,v)) )))""",
+ fr_FR="Imprime sur la sortie standard la racine de la moyenne des carrés (RMS), ou moyenne quadratique, de la valeur courante de la variable",
+ en_EN="Print on standard output the root mean square (RMS), or quadratic mean, of the current value of the variable",
+ order="next",
)
# ==============================================================================
UserPostAnalysisTemplates = TemplateStorage()
UserPostAnalysisTemplates.store(
- name = "AnalysisPrinter",
- content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')[-1]\nprint('Analysis',xa)""",
- fr_FR = "Imprime sur la sortie standard la valeur optimale",
- en_EN = "Print on standard output the optimal value",
- order = "next",
+ name="AnalysisPrinter",
+ content="""print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')[-1]\nprint('Analysis',xa)""",
+ fr_FR="Imprime sur la sortie standard la valeur optimale",
+ en_EN="Print on standard output the optimal value",
+ order="next",
)
UserPostAnalysisTemplates.store(
- name = "AnalysisSaver",
- content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')[-1]\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
- fr_FR = "Enregistre la valeur optimale dans un fichier du répertoire '/tmp' nommé 'analysis.txt'",
- en_EN = "Save the optimal value in a file of the '/tmp' directory named 'analysis.txt'",
- order = "next",
+ name="AnalysisSaver",
+ content="""print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')[-1]\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
+ fr_FR="Enregistre la valeur optimale dans un fichier du répertoire '/tmp' nommé 'analysis.txt'",
+ en_EN="Save the optimal value in a file of the '/tmp' directory named 'analysis.txt'",
+ order="next",
)
UserPostAnalysisTemplates.store(
- name = "AnalysisPrinterAndSaver",
- content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')[-1]\nprint('Analysis',xa)\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
- fr_FR = "Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur optimale",
- en_EN = "Print on standard output and, in the same time save in a file of the '/tmp' directory, the optimal value",
- order = "next",
+ name="AnalysisPrinterAndSaver",
+ content="""print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')[-1]\nprint('Analysis',xa)\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
+ fr_FR="Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la valeur optimale",
+ en_EN="Print on standard output and, at the same time, save in a file of the '/tmp' directory, the optimal value",
+ order="next",
)
UserPostAnalysisTemplates.store(
- name = "AnalysisSeriePrinter",
- content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nprint('Analysis',xa)""",
- fr_FR = "Imprime sur la sortie standard la série des valeurs optimales",
- en_EN = "Print on standard output the optimal value series",
- order = "next",
+ name="AnalysisSeriePrinter",
+ content="""print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nprint('Analysis',xa)""",
+ fr_FR="Imprime sur la sortie standard la série des valeurs optimales",
+ en_EN="Print on standard output the optimal value series",
+ order="next",
)
UserPostAnalysisTemplates.store(
- name = "AnalysisSerieSaver",
- content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
- fr_FR = "Enregistre la série des valeurs optimales dans un fichier du répertoire '/tmp' nommé 'analysis.txt'",
- en_EN = "Save the optimal value series in a file of the '/tmp' directory named 'analysis.txt'",
- order = "next",
+ name="AnalysisSerieSaver",
+ content="""print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
+ fr_FR="Enregistre la série des valeurs optimales dans un fichier du répertoire '/tmp' nommé 'analysis.txt'",
+ en_EN="Save the optimal value series in a file of the '/tmp' directory named 'analysis.txt'",
+ order="next",
)
UserPostAnalysisTemplates.store(
- name = "AnalysisSeriePrinterAndSaver",
- content = """print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nprint('Analysis',xa)\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
- fr_FR = "Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la série des valeurs optimales",
- en_EN = "Print on standard output and, in the same time save in a file of the '/tmp' directory, the optimal value series",
- order = "next",
+ name="AnalysisSeriePrinterAndSaver",
+ content="""print('# Post-analysis')\nimport numpy\nxa=ADD.get('Analysis')\nprint('Analysis',xa)\nf='/tmp/analysis.txt'\nprint('Analysis saved in \"%s\"'%f)\nnumpy.savetxt(f,xa)""",
+ fr_FR="Imprime sur la sortie standard et, en même temps enregistre dans un fichier du répertoire '/tmp', la série des valeurs optimales",
+ en_EN="Print on standard output and, at the same time, save in a file of the '/tmp' directory, the optimal value series",
+ order="next",
)
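Each template above stores its executable body in `content` as a plain Python snippet; for observers, ADAO runs it with the observed variable's value history bound to `var` and its label to `info`. A minimal standalone sketch of that mechanism, with the two bindings mocked by hand (ADAO normally provides them at trigger time):

```python
import numpy

# Template body copied verbatim from the "ValueMean" observer above;
# the \n escapes become real newlines in this regular string literal.
content = """import numpy\nprint(str(info)+' '+str(numpy.nanmean(var[-1])))"""

# Mocked bindings: 'var' is the value history of the observed variable,
# 'info' its label -- both are normally supplied by ADAO, not by the user.
var = [numpy.array([1.0, 2.0, 3.0]), numpy.array([2.0, 4.0, 6.0])]
info = "CurrentState"

exec(content)  # prints: CurrentState 4.0
```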
# ==============================================================================
--- /dev/null
+# -*- coding: utf-8 -*-
+#
+# Copyright (C) 2008-2024 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+import sys
+import unittest
+import math
+import numpy
+from daCore.BasicObjects import DynamicalSimulator
+
+
+# ==============================================================================
+class Lorenz1963(DynamicalSimulator):
+ """
+ Three-dimensional parametrized nonlinear ODE system depending on µ=(σ,ρ,β):
+
+ ∂x/∂t = σ (y − x)
+ ∂y/∂t = ρ x − y − x z
+ ∂z/∂t = x y − β z
+
+ with t ∈ [0, 40] the time interval, x(t), y(t), z(t) the dependent
+ variables, and with σ=10, ρ=28, and β=8/3 the commonly used parameter
+ values. The initial conditions for (x, y, z) at t=0 for the reference
+ case are (0, 1, 0).
+
+ This is the well-known parametrized coupled system of three nonlinear
+ ordinary differential equations:
+ Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the
+ Atmospheric Sciences, 20, 130–141.
+ doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2
+ """
+
+ def set_canonical_description(self):
+ self.set_mu((10.0, 28.0, 8.0 / 3.0)) # µ = (σ, ρ, β)
+ self.set_integrator("rk4")
+ self.set_dt(0.01)
+ self.set_t0(0.0)
+ self.set_tf(40)
+ self.set_y0((0.0, 1.0, 0.0))
+ self.set_autonomous(True)
+ return True
+
+ def ODEModel(self, t, Y):
+ "ODE dY/dt = F(Y,t)"
+ sigma, rho, beta = self.set_mu()
+ x, y, z = map(float, Y)
+ #
+ rx = sigma * (y - x)
+ ry = x * rho - y - x * z
+ rz = x * y - beta * z
+ #
+ return numpy.array([rx, ry, rz])
+
+
+# ==============================================================================
+class LocalTest(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ print("\nAUTODIAGNOSTIC\n==============\n")
+ print(" " + Lorenz1963().__doc__.strip())
+
+ def test001(self):
+ numpy.random.seed(123456789)
+ ODE = Lorenz1963() # Default parameters
+ trajectory = ODE.ForecastedPath()
+ #
+ print()
+ self.assertTrue(trajectory.shape[0] == 1 + int(ODE.set_tf() / ODE.set_dt()))
+ self.assertTrue(
+ abs(
+ max(
+ trajectory[-1]
+ - numpy.array([16.48799962, 14.01693428, 40.30448848])
+ )
+ )
+ <= 1.0e-8,
+ msg=" Last value is not equal to the reference one",
+ )
+ print(" Last value is equal to the reference one")
+
+ def tearDown(cls):
+ print("\n Tests are finished\n")
+
+
+# ==============================================================================
+if __name__ == "__main__":
+ sys.stderr = sys.stdout
+ unittest.main(verbosity=0)
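The new test relies on the `rk4` integrator of `DynamicalSimulator`; the fixed-step fourth-order Runge-Kutta scheme it names can be sketched standalone on the same Lorenz equations (the `rk4_path` helper below is illustrative, not ADAO API). Because the system is chaotic, only the trajectory shape is checked here: over t ∈ [0, 40] any rounding difference between implementations is fully amplified, so the reference end point asserted in the test above is specific to ADAO's integrator.

```python
import numpy

def lorenz(t, Y, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    "Right-hand side of the Lorenz 1963 system dY/dt = F(Y, t)"
    x, y, z = Y
    return numpy.array([sigma * (y - x), rho * x - y - x * z, x * y - beta * z])

def rk4_path(f, y0, t0=0.0, tf=40.0, dt=0.01):
    "Classical fixed-step RK4, returning the full trajectory"
    n = int(round((tf - t0) / dt))
    path = numpy.empty((n + 1, len(y0)))
    path[0] = y0
    t = t0
    for i in range(n):
        y = path[i]
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        path[i + 1] = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return path

traj = rk4_path(lorenz, numpy.array([0.0, 1.0, 0.0]))
print(traj.shape)  # (4001, 3): 1 + tf/dt states, as asserted in test001
```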
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, unittest, numpy
+import sys
+import unittest
+import numpy
+
# ==============================================================================
class TwoDimensionalInverseDistanceCS2010:
Nonlinear Model Reduction via Discrete Empirical Interpolation,
SIAM Journal on Scientific Computing, 32(5), pp. 2737-2764 (2010).
"""
+
def __init__(self, nx: int = 20, ny: int = 20):
"Définition du maillage spatial"
- self.nx = max(1, nx)
- self.ny = max(1, ny)
- self.x = numpy.linspace(0.1, 0.9, self.nx, dtype=float)
- self.y = numpy.linspace(0.1, 0.9, self.ny, dtype=float)
+ self.nx = max(1, nx)
+ self.ny = max(1, ny)
+ self.x = numpy.linspace(0.1, 0.9, self.nx, dtype=float)
+ self.y = numpy.linspace(0.1, 0.9, self.ny, dtype=float)
- def FieldG(self, mu ):
+ def FieldG(self, mu):
"Fonction simulation pour un paramètre donné"
- mu1, mu2 = numpy.ravel( mu )
+ mu1, mu2 = numpy.ravel(mu)
#
- x, y = numpy.meshgrid( self.x, self.y )
- sxymu = 1. / numpy.sqrt( (x - mu1)**2 + (y - mu2)**2 + 0.1**2 )
+ x, y = numpy.meshgrid(self.x, self.y)
+ sxymu = 1.0 / numpy.sqrt((x - mu1) ** 2 + (y - mu2) ** 2 + 0.1**2)
#
return sxymu
OneRealisation = FieldG
+
# ==============================================================================
class LocalTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
- print('\nAUTODIAGNOSTIC\n==============\n')
+ print("\nAUTODIAGNOSTIC\n==============\n")
print(" " + TwoDimensionalInverseDistanceCS2010().__doc__.strip())
def test001(self):
numpy.random.seed(123456789)
Equation = TwoDimensionalInverseDistanceCS2010()
for mu in Equation.get_sample_of_mu(5, 5):
- solution = Equation.OneRealisation( mu )
+ solution = Equation.OneRealisation(mu)
# Nappe maximale au coin (0,0)
self.assertTrue(numpy.max(solution.flat) <= solution[0, 0])
# Nappe minimale au coin [-1,-1]
def tearDown(cls):
print("\n Tests OK\n")
+
# ==============================================================================
if __name__ == "__main__":
sys.stderr = sys.stdout
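`FieldG` above is the regularized inverse-distance field of the Chaturantabut–Sorensen (2010) benchmark; it can be sketched standalone on the same 20×20 grid over [0.1, 0.9]². The parameter point (−0.01, −0.01) below is a made-up value outside the grid's lower-left corner (the range sampled by `get_sample_of_mu` is not shown in this patch), chosen so the field peaks at grid index (0, 0) as test001 asserts:

```python
import numpy

def field_g(mu1, mu2, nx=20, ny=20):
    "Regularized inverse distance to (mu1, mu2) over the [0.1, 0.9]^2 grid"
    x = numpy.linspace(0.1, 0.9, nx)
    y = numpy.linspace(0.1, 0.9, ny)
    X, Y = numpy.meshgrid(x, y)
    return 1.0 / numpy.sqrt((X - mu1) ** 2 + (Y - mu2) ** 2 + 0.1 ** 2)

s = field_g(-0.01, -0.01)    # hypothetical mu, below the grid corner
print(s.shape)               # (20, 20)
print(int(numpy.argmax(s)))  # 0: the maximum sits at the (0, 0) corner
```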
#
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import sys, unittest, math, numpy
+import sys
+import unittest
+import math
+import numpy
+
# ==============================================================================
class TwoDimensionalRosenbrockFunctionR1960:
f(x,y) = (a - x)² + b (y -x²)²
- with (x,y) ∈ [-2,2]x[-1,3]² and usually a=1, b=100. There is a
- global minimum at (x,y) = (a,a²) for which f(x,y) = 0.
+ with (x,y) ∈ [-2,2]×[-1,3] and a=1, b=100 the commonly used parameter
+ values. There exists a global minimum at (x,y) = (a,a²) for which
+ f(x,y) = 0.
This is the non-linear non-convex parametric function of the reference:
Rosenbrock, H. H., An Automatic Method for Finding the Greatest or
Least Value of a Function, The Computer Journal, 3(3), pp.175–184,
(1960)
"""
+
def __init__(self, nx: int = 40, ny: int = 40):
"Définition du maillage spatial"
- self.nx = max(1, nx)
- self.ny = max(1, ny)
- self.x = numpy.linspace(-2, 2, self.nx, dtype=float)
- self.y = numpy.linspace(-1, 3, self.ny, dtype=float)
+ self.nx = max(1, nx)
+ self.ny = max(1, ny)
+ self.x = numpy.linspace(-2, 2, self.nx, dtype=float)
+ self.y = numpy.linspace(-1, 3, self.ny, dtype=float)
- def FieldZ(self, mu ):
+ def FieldZ(self, mu):
"Fonction simulation pour un paramètre donné"
- a, b = numpy.ravel( mu )
+ a, b = numpy.ravel(mu)
#
- x, y = numpy.meshgrid( self.x, self.y )
- sxymu = (a - x)**2 + b * (y - x**2)**2
+ x, y = numpy.meshgrid(self.x, self.y)
+ sxymu = (a - x) ** 2 + b * (y - x**2) ** 2
#
return sxymu
- def FunctionH(self, xy, a = 1, b = 100):
+ def FunctionH(self, xy, a=1, b=100):
"Construit la fonction de Rosenbrock en L2 (Scipy 1.8.1 p.1322)"
- xy = numpy.ravel( xy ).reshape((-1, 2)) # Deux colonnes
+ xy = numpy.ravel(xy).reshape((-1, 2)) # Deux colonnes
x = xy[:, 0]
y = xy[:, 1]
return numpy.array([(a - x), math.sqrt(b) * (y - x**2)])
OneRealisation = FieldZ
+
# ==============================================================================
class LocalTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
- print('\nAUTODIAGNOSTIC\n==============\n')
+ print("\nAUTODIAGNOSTIC\n==============\n")
print(" " + TwoDimensionalRosenbrockFunctionR1960().__doc__.strip())
def test001(self):
numpy.random.seed(123456789)
Equation = TwoDimensionalRosenbrockFunctionR1960()
- optimum = Equation.FunctionH( [1, 1] )
- self.assertTrue( max(optimum.flat) <= 0.)
+ optimum = Equation.FunctionH([1, 1])
+ self.assertTrue(max(optimum.flat) <= 0.0)
- optimum = Equation.FunctionH( [0.5, 0.25], a=0.5 )
- self.assertTrue( max(optimum.flat) <= 0.)
+ optimum = Equation.FunctionH([0.5, 0.25], a=0.5)
+ self.assertTrue(max(optimum.flat) <= 0.0)
def tearDown(cls):
print("\n Tests OK\n")
+
# ==============================================================================
if __name__ == "__main__":
sys.stderr = sys.stdout
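`FunctionH` above encodes Rosenbrock in residual form, returning ((a−x), √b·(y−x²)) so that the squared L2 norm of the residuals equals f(x,y); this is the form least-squares solvers consume. A standalone sketch of that identity and of the minimum at (a, a²) (the helper names below are illustrative, not part of the patch):

```python
import math
import numpy

def rosenbrock(xy, a=1.0, b=100.0):
    "Scalar form: f(x,y) = (a - x)^2 + b (y - x^2)^2"
    x, y = xy
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

def residuals(xy, a=1.0, b=100.0):
    "Residual form: f(x,y) == || residuals(xy) ||^2"
    x, y = xy
    return numpy.array([a - x, math.sqrt(b) * (y - x ** 2)])

p = (0.3, -0.7)
print(numpy.sum(residuals(p) ** 2) - rosenbrock(p))  # ~0: both forms agree
print(residuals((1.0, 1.0)))  # [0. 0.]: global minimum at (a, a^2) = (1, 1)
```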
import os, sys
-# print "import des prefs de Adao"
+# print ("import des prefs de Adao")
#
# Configuration de Eficas
# =======================