.. include:: snippets/Header2Algo01.rst
This algorithm allows one to check the quality of the tangent operator, by
-calculating a residue with known theoretical properties.
+calculating a residue whose theoretical properties are known. The test is
+applicable to any operator, whether of evolution or of observation.
+
+For all formulas, with :math:`\mathbf{x}` the current verification point, we
+take :math:`\mathbf{dx}_0=Normal(0,\mathbf{x})` and
+:math:`\mathbf{dx}=\alpha_0*\mathbf{dx}_0`, with :math:`\alpha_0` a user-defined
+scaling parameter, defaulting to 1. :math:`F` is the computational operator or code
+(which is here defined by the observation operator command
+"*ObservationOperator*").
One can observe the following residue, which is the comparison of increments
using the tangent linear operator:
:ref:`section_ref_algorithm_LinearityTest`), and the tangent is valid until the
calculation precision is reached.
-One take :math:`\mathbf{dx}_0=Normal(0,\mathbf{x})` and
-:math:`\mathbf{dx}=\alpha*\mathbf{dx}_0`. :math:`F` is the calculation code.
-
.. ------------------------------------ ..
.. include:: snippets/Header2Algo02.rst
.. include:: snippets/InitialDirection.rst
+.. include:: snippets/NumberOfPrintedDigits.rst
+
.. include:: snippets/SetSeed.rst
StoreSupplementaryCalculations
:maxdepth: 1
ref_algorithm_AdjointTest
+ ref_algorithm_ControledFunctionTest
ref_algorithm_FunctionTest
ref_algorithm_GradientTest
ref_algorithm_InputValuesTest
===========
    This test allows one to analyze the quality of an adjoint operator associated
- to some given direct operator. If the adjoint operator is approximated and
- not given, the test measures the quality of the automatic approximation.
+ to some given direct operator F, applied to one single vector argument x.
+ If the adjoint operator is approximated and not given, the test measures
+ the quality of the automatic approximation, around an input checking point X.
+
+===> Information before launching:
+ -----------------------------
+
+ Characteristics of input vector X, internally converted:
+ Type...............: <class 'numpy.ndarray'>
+ Length of vector...: 3
+ Minimum value......: 0.000e+00
+ Maximum value......: 2.000e+00
+ Mean of vector.....: 1.000e+00
+ Standard error.....: 8.165e-01
+ L2 norm of vector..: 2.236e+00
+
+ ---------------------------------------------------------------------------
+
+===> Numerical quality indicators:
+ -----------------------------
Using the "ScalarProduct" formula, one observes the residue R which is the
difference of two scalar products:
(Remark: numbers that are (about) under 2e-16 represent 0 to machine precision)
+
-------------------------------------------------------------
i Alpha ||X|| ||Y|| ||dX|| R(Alpha)
-------------------------------------------------------------
11 1e-11 2.236e+00 1.910e+01 3.536e-11 0.000e+00
12 1e-12 2.236e+00 1.910e+01 3.536e-12 0.000e+00
-------------------------------------------------------------
+
+      End of the "ADJOINTTEST" verification by the "ScalarProduct" formula.
+
+ ---------------------------------------------------------------------------
+
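+To see why this residue should vanish to machine precision when the adjoint is
+exact, the check can be reproduced outside ADAO on a linear operator given by a
+matrix, whose adjoint is its transpose (a minimal standalone sketch, not the
+ADAO implementation)::
+
+    import numpy
+
+    rng = numpy.random.default_rng(1234567890)
+    M = rng.normal(size=(5, 3))      # linear operator F; its adjoint is M.T
+    X = numpy.array([0., 1., 2.])
+    Y = M @ X                        # Y taken in the image of F
+    dX0 = rng.normal(0.0, numpy.abs(X))
+
+    for i in range(13):
+        dX = 10.0**-i * dX0
+        # R(Alpha) = | < TangentF_X(dX) , Y > - < dX , AdjointF_X(Y) > |
+        R = abs(numpy.dot(M @ dX, Y) - numpy.dot(dX, M.T @ Y))
+        print(" %2i  %5.0e  %9.3e" % (i, 10.0**-i, R))
+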
FUNCTIONTEST
============
- This test allows to analyze the (repetition of) launch of some given
- operator. It shows simple statistics related to its successful execution,
+    This test allows one to analyze the (repetition of the) launch of some
+ given simulation operator F, applied to one single vector argument x,
+ in a sequential way.
+ The output shows simple statistics related to its successful execution,
or related to the similarities of repetition of its execution.
===> Information before launching:
-----------------------------
+
Characteristics of input vector X, internally converted:
Type...............: <class 'numpy.ndarray'>
Length of vector...: 3
(Remark: numbers that are (about) under 2e-16 represent 0 to machine precision)
+ Number of evaluations...........................: 5
+
Characteristics of the whole set of outputs Y:
- Number of evaluations.........................: 5
+ Size of each of the outputs...................: 3
Minimum value of the whole set of outputs.....: 0.00e+00
Maximum value of the whole set of outputs.....: 2.00e+00
Mean of vector of the whole set of outputs....: 1.00e+00
First example
.............
-This example describes the test of the correct operation of a given operator,
-and that its call proceeds in a way compatible with its common use in the ADAO
+This example describes the check that the given operator works properly, and
+that its call proceeds in a way compatible with its common use in the ADAO
algorithms. The required information is minimal, namely here an operator
:math:`F` (described for the test by the command "*ObservationOperator*"), and
a particular state :math:`\mathbf{x}` to test it on (described for the test by
The test is repeated a configurable number of times, and a final statistic
makes it possible to quickly verify the operator's good behavior. The simplest
diagnostic consists in checking, at the very end of the display, the order of
-magnitude of the values indicated as the mean of the differences between the
-repeated outputs and their mean, under the part entitled "*Characteristics of
-the mean of the differences between the outputs Y and their mean Ym*". For a
-satisfactory operator, these values should be close to the numerical zero.
+magnitude of variations in the values indicated as the mean of the differences
+between the repeated outputs and their mean, under the part entitled
+"*Characteristics of the mean of the differences between the outputs Y and
+their mean Ym*". For a satisfactory operator, these values should be close to
+the numerical zero.
#
from adao import adaoBuilder
case = adaoBuilder.New()
-case.setCheckingPoint( Vector = array([1., 1., 1.]), Stored=True )
-case.setObservationOperator( OneFunction = DirectOperator )
+case.set( 'CheckingPoint', Vector = array([1., 1., 1.]), Stored=True )
+case.set( 'ObservationOperator', OneFunction = DirectOperator )
case.setAlgorithmParameters(
Algorithm='FunctionTest',
Parameters={
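+
+The snippet above is truncated at the parameter dictionary. A self-contained
+variant of the same case might look as follows, where the identity
+"*DirectOperator*" and the parameter values are illustrative stand-ins::
+
+    from numpy import array
+    from adao import adaoBuilder
+
+    def DirectOperator(x):
+        # Hypothetical stand-in operator for the test
+        return array(x)
+
+    case = adaoBuilder.New()
+    case.set( 'CheckingPoint', Vector = array([1., 1., 1.]), Stored=True )
+    case.set( 'ObservationOperator', OneFunction = DirectOperator )
+    case.setAlgorithmParameters(
+        Algorithm='FunctionTest',
+        Parameters={
+            'NumberOfRepetition' : 5,
+            'NumberOfPrintedDigits' : 2,
+            },
+        )
+    case.execute()
+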
FUNCTIONTEST
============
- This test allows to analyze the (repetition of) launch of some given
- operator. It shows simple statistics related to its successful execution,
+    This test allows one to analyze the (repetition of the) launch of some
+ given simulation operator F, applied to one single vector argument x,
+ in a sequential way.
+ The output shows simple statistics related to its successful execution,
or related to the similarities of repetition of its execution.
===> Information before launching:
-----------------------------
+
Characteristics of input vector X, internally converted:
Type...............: <class 'numpy.ndarray'>
Length of vector...: 3
(Remark: numbers that are (about) under 2e-16 represent 0 to machine precision)
+ Number of evaluations...........................: 15
+
Characteristics of the whole set of outputs Y:
- Number of evaluations.........................: 15
+ Size of each of the outputs...................: 5
Minimum value of the whole set of outputs.....: 1.000e+00
Maximum value of the whole set of outputs.....: 1.110e+02
Mean of vector of the whole set of outputs....: 2.980e+01
The test is repeated here 15 times, and a final statistic makes it possible to
quickly verify the operator's good behavior. The simplest diagnostic consists
-in checking, at the very end of the display, the order of magnitude of the
-values indicated as the mean of the differences between the repeated outputs
-and their mean, under the part entitled "*Characteristics of the mean of the
-differences between the outputs Y and their mean Ym*". For a satisfactory
-operator, these values should be close to the numerical zero.
+in checking, at the very end of the display, the order of magnitude of
+variations in the values indicated as the mean of the differences between the
+repeated outputs and their mean, under the part entitled "*Characteristics of
+the mean of the differences between the outputs Y and their mean Ym*". For a
+satisfactory operator, these values should be close to the numerical zero.
PARALLELFUNCTIONTEST
====================
- This test allows to analyze the (repetition of) launch of some given
- operator. It shows simple statistics related to its successful execution,
+    This test allows one to analyze the (repetition of the) launch of some
+ given simulation operator F, applied to one single vector argument x,
+ in a parallel way.
+ The output shows simple statistics related to its successful execution,
or related to the similarities of repetition of its execution.
===> Information before launching:
-----------------------------
+
Characteristics of input vector X, internally converted:
Type...............: <class 'numpy.ndarray'>
Length of vector...: 30
---------------------------------------------------------------------------
+    Appending the input vector to the argument set to be evaluated in parallel
+
+ ---------------------------------------------------------------------------
+
===> Launching operator parallel evaluation for 50 states
(Remark: numbers that are (about) under 2e-16 represent 0 to machine precision)
+ Number of evaluations...........................: 50
+
Characteristics of the whole set of outputs Y:
- Number of evaluations.........................: 50
+ Size of each of the outputs...................: 30
Minimum value of the whole set of outputs.....: 0.00e+00
Maximum value of the whole set of outputs.....: 2.90e+01
Mean of vector of the whole set of outputs....: 1.45e+01
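+
+For intuition about the parallel variant, the following minimal sketch
+evaluates an operator over a set of states with a process pool. It only
+illustrates the principle, not the ADAO internal mechanism, and the operator F
+is a hypothetical stand-in::
+
+    import numpy
+    from multiprocessing import Pool
+
+    def F(x):
+        # Hypothetical operator applied to one single state
+        return numpy.array(x)
+
+    if __name__ == "__main__":
+        X0 = numpy.arange(30.0)
+        states = [X0 + 0.01*i for i in range(50)]    # 50 states to evaluate
+        with Pool(4) as pool:
+            outputs = pool.map(F, states)            # one evaluation per state
+        Yy = numpy.array(outputs)
+        print("Number of evaluations: %i" % len(outputs))
+        print("Min / Max of outputs : %.2e / %.2e" % (Yy.min(), Yy.max()))
+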
#. :ref:`Exemples avec l'algorithme de "3DVAR"<section_ref_algorithm_3DVAR_examples>`
#. :ref:`Exemples avec l'algorithme de "Blue"<section_ref_algorithm_Blue_examples>`
-#. :ref:`Exemples avec l'algorithme de "DerivativeFreeOptimization" algorithm<section_ref_algorithm_DerivativeFreeOptimization_examples>`
+#. :ref:`Exemples avec l'algorithme de "DerivativeFreeOptimization"<section_ref_algorithm_DerivativeFreeOptimization_examples>`
#. :ref:`Exemples avec l'algorithme de "ExtendedBlue"<section_ref_algorithm_ExtendedBlue_examples>`
#. :ref:`Exemples avec l'algorithme de "KalmanFilter"<section_ref_algorithm_KalmanFilter_examples>`
#. :ref:`Exemples avec l'algorithme de "NonLinearLeastSquares"<section_ref_algorithm_NonLinearLeastSquares_examples>`
Utilisations d'algorithmes de vérification
------------------------------------------
-#. :ref:`Exemples avec la vérification "AdjointTest"<section_ref_algorithm_AdjointTest_examples>`
-#. :ref:`Exemples avec la vérification "FunctionTest"<section_ref_algorithm_FunctionTest_examples>`
-#. :ref:`Exemples avec la vérification "ParallelFunctionTest"<section_ref_algorithm_ParallelFunctionTest_examples>`
+#. :ref:`Exemples de vérification avec "AdjointTest"<section_ref_algorithm_AdjointTest_examples>`
+#. :ref:`Exemples de vérification avec "ControledFunctionTest"<section_ref_algorithm_ControledFunctionTest_examples>`
+#. :ref:`Exemples de vérification avec "FunctionTest"<section_ref_algorithm_FunctionTest_examples>`
+#. :ref:`Exemples de vérification avec "ParallelFunctionTest"<section_ref_algorithm_ParallelFunctionTest_examples>`
Utilisations avancées
---------------------
===========
    This test allows one to analyze the quality of an adjoint operator associated
- to some given direct operator. If the adjoint operator is approximated and
- not given, the test measures the quality of the automatic approximation.
+ to some given direct operator F, applied to one single vector argument x.
+ If the adjoint operator is approximated and not given, the test measures
+ the quality of the automatic approximation, around an input checking point X.
+
+===> Information before launching:
+ -----------------------------
+
+ Characteristics of input vector X, internally converted:
+ Type...............: <class 'numpy.ndarray'>
+ Length of vector...: 3
+ Minimum value......: 0.000e+00
+ Maximum value......: 2.000e+00
+ Mean of vector.....: 1.000e+00
+ Standard error.....: 8.165e-01
+ L2 norm of vector..: 2.236e+00
+
+ ---------------------------------------------------------------------------
+
+===> Numerical quality indicators:
+ -----------------------------
Using the "ScalarProduct" formula, one observes the residue R which is the
difference of two scalar products:
(Remark: numbers that are (about) under 2e-16 represent 0 to machine precision)
+
-------------------------------------------------------------
i Alpha ||X|| ||Y|| ||dX|| R(Alpha)
-------------------------------------------------------------
11 1e-11 2.236e+00 1.910e+01 3.536e-11 0.000e+00
12 1e-12 2.236e+00 1.910e+01 3.536e-12 0.000e+00
-------------------------------------------------------------
+
+      End of the "ADJOINTTEST" verification by the "ScalarProduct" formula.
+
+ ---------------------------------------------------------------------------
+
FUNCTIONTEST
============
- This test allows to analyze the (repetition of) launch of some given
- operator. It shows simple statistics related to its successful execution,
+    This test allows one to analyze the (repetition of the) launch of some
+ given simulation operator F, applied to one single vector argument x,
+ in a sequential way.
+ The output shows simple statistics related to its successful execution,
or related to the similarities of repetition of its execution.
===> Information before launching:
-----------------------------
+
Characteristics of input vector X, internally converted:
Type...............: <class 'numpy.ndarray'>
Length of vector...: 3
(Remark: numbers that are (about) under 2e-16 represent 0 to machine precision)
+ Number of evaluations...........................: 5
+
Characteristics of the whole set of outputs Y:
- Number of evaluations.........................: 5
+ Size of each of the outputs...................: 3
Minimum value of the whole set of outputs.....: 0.00e+00
Maximum value of the whole set of outputs.....: 2.00e+00
Mean of vector of the whole set of outputs....: 1.00e+00
Le test est répété un nombre paramétrable de fois, et une statistique finale
permet de vérifier rapidement le bon comportement de l'opérateur. Le diagnostic
le plus simple consiste à vérifier, à la toute fin de l'affichage, l'ordre de
-grandeur des valeurs indiquées comme la moyenne des différences entre les
-sorties répétées et leur moyenne, sous la partie titrée "*Characteristics of
-the mean of the differences between the outputs Y and their mean Ym*". Pour un
-opérateur satisfaisant, ces valeurs doivent être proches du zéro numérique.
+grandeur des variations des valeurs indiquées comme la moyenne des différences
+entre les sorties répétées et leur moyenne, sous la partie titrée
+"*Characteristics of the mean of the differences between the outputs Y and
+their mean Ym*". Pour un opérateur satisfaisant, ces valeurs doivent être
+proches du zéro numérique.
#
from adao import adaoBuilder
case = adaoBuilder.New()
-case.setCheckingPoint( Vector = array([1., 1., 1.]), Stored=True )
-case.setObservationOperator( OneFunction = DirectOperator )
+case.set( 'CheckingPoint', Vector = array([1., 1., 1.]), Stored=True )
+case.set( 'ObservationOperator', OneFunction = DirectOperator )
case.setAlgorithmParameters(
Algorithm='FunctionTest',
Parameters={
FUNCTIONTEST
============
- This test allows to analyze the (repetition of) launch of some given
- operator. It shows simple statistics related to its successful execution,
+    This test allows one to analyze the (repetition of the) launch of some
+ given simulation operator F, applied to one single vector argument x,
+ in a sequential way.
+ The output shows simple statistics related to its successful execution,
or related to the similarities of repetition of its execution.
===> Information before launching:
-----------------------------
+
Characteristics of input vector X, internally converted:
Type...............: <class 'numpy.ndarray'>
Length of vector...: 3
(Remark: numbers that are (about) under 2e-16 represent 0 to machine precision)
+ Number of evaluations...........................: 15
+
Characteristics of the whole set of outputs Y:
- Number of evaluations.........................: 15
+ Size of each of the outputs...................: 5
Minimum value of the whole set of outputs.....: 1.000e+00
Maximum value of the whole set of outputs.....: 1.110e+02
Mean of vector of the whole set of outputs....: 2.980e+01
Ce test est répété ici 15 fois, et une statistique finale permet de vérifier
rapidement le bon comportement de l'opérateur. Le diagnostic le plus simple
consiste à vérifier, à la toute fin de l'affichage, l'ordre de grandeur des
-valeurs indiquées comme la moyenne des différences entre les sorties répétées
-et leur moyenne, sous la partie titrée "*Characteristics of the mean of the
-differences between the outputs Y and their mean Ym*". Pour qu'un opérateur
-soit satisfaisant, ces valeurs doivent être proches du zéro numérique.
+variations des valeurs indiquées comme la moyenne des différences entre les
+sorties répétées et leur moyenne, sous la partie titrée "*Characteristics of
+the mean of the differences between the outputs Y and their mean Ym*". Pour
+qu'un opérateur soit satisfaisant, ces valeurs doivent être proches du zéro
+numérique.
PARALLELFUNCTIONTEST
====================
- This test allows to analyze the (repetition of) launch of some given
- operator. It shows simple statistics related to its successful execution,
+    This test allows one to analyze the (repetition of the) launch of some
+ given simulation operator F, applied to one single vector argument x,
+ in a parallel way.
+ The output shows simple statistics related to its successful execution,
or related to the similarities of repetition of its execution.
===> Information before launching:
-----------------------------
+
Characteristics of input vector X, internally converted:
Type...............: <class 'numpy.ndarray'>
Length of vector...: 30
---------------------------------------------------------------------------
+    Appending the input vector to the argument set to be evaluated in parallel
+
+ ---------------------------------------------------------------------------
+
===> Launching operator parallel evaluation for 50 states
(Remark: numbers that are (about) under 2e-16 represent 0 to machine precision)
+ Number of evaluations...........................: 50
+
Characteristics of the whole set of outputs Y:
- Number of evaluations.........................: 50
+ Size of each of the outputs...................: 30
Minimum value of the whole set of outputs.....: 0.00e+00
Maximum value of the whole set of outputs.....: 2.90e+01
Mean of vector of the whole set of outputs....: 1.45e+01
import numpy
from daCore import BasicObjects, NumericObjects, PlatformInfo
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
+mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
Ht = HO["Tangent"].appliedInXTo
Ha = HO["Adjoint"].appliedInXTo
#
- Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
- Perturbations.reverse()
- #
- Xn = numpy.ravel( Xb ).reshape((-1,1))
- NormeX = numpy.linalg.norm( Xn )
- if Y is None:
- Yn = numpy.ravel( Hm( Xn ) ).reshape((-1,1))
- else:
- Yn = numpy.ravel( Y ).reshape((-1,1))
- NormeY = numpy.linalg.norm( Yn )
- if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( Xn )
- if self._toStore("SimulatedObservationAtCurrentState"):
- self.StoredVariables["SimulatedObservationAtCurrentState"].store( Yn )
+ X0 = numpy.ravel( Xb ).reshape((-1,1))
#
- dX0 = NumericObjects.SetInitialDirection(
- self._parameters["InitialDirection"],
- self._parameters["AmplitudeOfInitialDirection"],
- Xn,
- )
- #
- # --------------------
+ # ----------
__p = self._parameters["NumberOfPrintedDigits"]
#
- __marge = 5*u" "
+ __marge = 5*u" "
+ __flech = 3*"="+"> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs = ("\n")
msgs += (__marge + "====" + "="*len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
msgs += (__marge + "====" + "="*len(__rt) + "====\n")
else:
- msgs = ("\n")
- msgs += (" %s\n"%self._name)
- msgs += (" %s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%self._name)
+ msgs += (__marge + "%s\n"%("="*len(self._name),))
#
msgs += ("\n")
- msgs += (" This test allows to analyze the quality of an adjoint operator associated\n")
- msgs += (" to some given direct operator. If the adjoint operator is approximated and\n")
- msgs += (" not given, the test measures the quality of the automatic approximation.\n")
+    msgs += (__marge + "This test allows one to analyze the quality of an adjoint operator associated\n")
+ msgs += (__marge + "to some given direct operator F, applied to one single vector argument x.\n")
+ msgs += (__marge + "If the adjoint operator is approximated and not given, the test measures\n")
+ msgs += (__marge + "the quality of the automatic approximation, around an input checking point X.\n")
+ msgs += ("\n")
+ msgs += (__flech + "Information before launching:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
+ msgs += (__marge + " Type...............: %s\n")%type( X0 )
+ msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
+ msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += ("\n")
+ msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__flech + "Numerical quality indicators:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
#
if self._parameters["ResiduFormula"] == "ScalarProduct":
+ msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
+ msgs += (__marge + "difference of two scalar products:\n")
msgs += ("\n")
- msgs += (" Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
- msgs += (" difference of two scalar products:\n")
- msgs += ("\n")
- msgs += (" R(Alpha) = | < TangentF_X(dX) , Y > - < dX , AdjointF_X(Y) > |\n")
+ msgs += (__marge + " R(Alpha) = | < TangentF_X(dX) , Y > - < dX , AdjointF_X(Y) > |\n")
msgs += ("\n")
- msgs += (" which must remain constantly equal to zero to the accuracy of the calculation.\n")
- msgs += (" One takes dX0 = Normal(0,X) and dX = Alpha*dX0, where F is the calculation\n")
- msgs += (" operator. If it is given, Y must be in the image of F. If it is not given,\n")
- msgs += (" one takes Y = F(X).\n")
+ msgs += (__marge + "which must remain constantly equal to zero to the accuracy of the calculation.\n")
+ msgs += (__marge + "One takes dX0 = Normal(0,X) and dX = Alpha*dX0, where F is the calculation\n")
+ msgs += (__marge + "operator. If it is given, Y must be in the image of F. If it is not given,\n")
+ msgs += (__marge + "one takes Y = F(X).\n")
+ #
+ __entete = str.rstrip(" i Alpha " + \
+ str.center("||X||",2+__p+7) + \
+ str.center("||Y||",2+__p+7) + \
+ str.center("||dX||",2+__p+7) + \
+ str.center("R(Alpha)",2+__p+7))
+ __nbtirets = len(__entete) + 2
+ #
msgs += ("\n")
- msgs += (" (Remark: numbers that are (about) under %.0e represent 0 to machine precision)"%mpr)
- print(msgs)
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ print(msgs) # 1
#
- # --------------------
- __pf = " %"+str(__p+7)+"."+str(__p)+"e"
- __ms = " %2i %5.0e"+(__pf*4)
- __bl = " %"+str(__p+7)+"s "
- __entete = str.rstrip(" i Alpha " + \
- str.center("||X||",2+__p+7) + \
- str.center("||Y||",2+__p+7) + \
- str.center("||dX||",2+__p+7) + \
- str.center("R(Alpha)",2+__p+7))
- __nbtirets = len(__entete) + 2
+ Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
+ Perturbations.reverse()
+ #
+ NormeX = numpy.linalg.norm( X0 )
+ if Y is None:
+ Yn = numpy.ravel( Hm( X0 ) ).reshape((-1,1))
+ else:
+ Yn = numpy.ravel( Y ).reshape((-1,1))
+ NormeY = numpy.linalg.norm( Yn )
+ if self._toStore("CurrentState"):
+ self.StoredVariables["CurrentState"].store( X0 )
+ if self._toStore("SimulatedObservationAtCurrentState"):
+ self.StoredVariables["SimulatedObservationAtCurrentState"].store( Yn )
+ #
+ dX0 = NumericObjects.SetInitialDirection(
+ self._parameters["InitialDirection"],
+ self._parameters["AmplitudeOfInitialDirection"],
+ X0,
+ )
#
- msgs = ""
+        # Loop over the perturbations
+        # ---------------------------
+ msgs = ("") # 2
msgs += "\n" + __marge + "-"*__nbtirets
msgs += "\n" + __marge + __entete
msgs += "\n" + __marge + "-"*__nbtirets
- #
+ msgs += ("\n")
+ __pf = " %"+str(__p+7)+"."+str(__p)+"e"
+ __ms = " %2i %5.0e"+(__pf*4)+"\n"
for i,amplitude in enumerate(Perturbations):
dX = amplitude * dX0
NormedX = numpy.linalg.norm( dX )
#
- TangentFXdX = numpy.ravel( Ht( (Xn,dX) ) )
- AdjointFXY = numpy.ravel( Ha( (Xn,Yn) ) )
- #
- Residu = abs(float(numpy.dot( TangentFXdX, Yn ) - numpy.dot( dX, AdjointFXY )))
- #
- msg = __ms%(i,amplitude,NormeX,NormeY,NormedX,Residu)
- msgs += "\n" + __marge + msg
- #
- self.StoredVariables["Residu"].store( Residu )
+ if self._parameters["ResiduFormula"] == "ScalarProduct":
+ TangentFXdX = numpy.ravel( Ht( (X0,dX) ) )
+ AdjointFXY = numpy.ravel( Ha( (X0,Yn) ) )
+ #
+ Residu = abs(float(numpy.dot( TangentFXdX, Yn ) - numpy.dot( dX, AdjointFXY )))
+ #
+ self.StoredVariables["Residu"].store( Residu )
+ ttsep = __ms%(i,amplitude,NormeX,NormeY,NormedX,Residu)
+ msgs += __marge + ttsep
#
- msgs += "\n" + __marge + "-"*__nbtirets
- print(msgs)
+ msgs += (__marge + "-"*__nbtirets + "\n\n")
+ msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name,self._parameters["ResiduFormula"]))
+ msgs += (__marge + "%s\n"%("-"*75,))
+ print(msgs) # 2
#
self._post_run(HO)
return 0
#
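+# A minimal usage sketch of the algorithm above through the ADAO builder (the
+# linear "DirectOperator" is an illustrative stand-in; its adjoint is then
+# approximated internally, and "Residu" is stored at each tested Alpha, as in
+# the loop above). The checking point matches the example excerpt earlier:
+from numpy import array
+from adao import adaoBuilder
+
+def DirectOperator(x):
+    # Hypothetical linear stand-in operator
+    return 2.0 * array(x)
+
+case = adaoBuilder.New()
+case.set( 'CheckingPoint', Vector = array([0., 1., 2.]) )
+case.set( 'ObservationOperator', OneFunction = DirectOperator )
+case.setAlgorithmParameters(
+    Algorithm='AdjointTest',
+    Parameters={
+        'EpsilonMinimumExponent' : -12,
+        },
+    )
+case.execute()
+residues = case.get('Residu')    # one stored residue per tested Alpha
+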
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
+import numpy, copy, logging
from daCore import BasicObjects, PlatformInfo
-import numpy, copy
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
#
Hm = HO["Direct"].appliedTo
#
- Xn = copy.copy( Xb )
+ X0 = copy.copy( Xb )
#
# ----------
__s = self._parameters["ShowElementarySummary"]
__p = self._parameters["NumberOfPrintedDigits"]
+ __r = self._parameters["NumberOfRepetition"]
#
- __marge = 5*u" "
+ __marge = 5*u" "
+ __flech = 3*"="+"> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs = ("\n")
msgs += (__marge + "====" + "="*len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
msgs += (__marge + "====" + "="*len(__rt) + "====\n")
else:
- msgs = ("\n")
- msgs += (" %s\n"%self._name)
- msgs += (" %s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%self._name)
+ msgs += (__marge + "%s\n"%("="*len(self._name),))
#
msgs += ("\n")
- msgs += (" This test allows to analyze the (repetition of) launch of some given\n")
- msgs += (" operator. It shows simple statistics related to its successful execution,\n")
- msgs += (" or related to the similarities of repetition of its execution.\n")
+    msgs += (__marge + "This test allows one to analyze the (repetition of the) launch of some\n")
+ msgs += (__marge + "given simulation operator F, applied to one single vector argument x,\n")
+ msgs += (__marge + "in a sequential way.\n")
+ msgs += (__marge + "The output shows simple statistics related to its successful execution,\n")
+ msgs += (__marge + "or related to the similarities of repetition of its execution.\n")
msgs += ("\n")
- msgs += ("===> Information before launching:\n")
- msgs += (" -----------------------------\n")
- msgs += (" Characteristics of input vector X, internally converted:\n")
- msgs += (" Type...............: %s\n")%type( Xn )
- msgs += (" Length of vector...: %i\n")%max(numpy.ravel( Xn ).shape)
- msgs += (" Minimum value......: %."+str(__p)+"e\n")%numpy.min( Xn )
- msgs += (" Maximum value......: %."+str(__p)+"e\n")%numpy.max( Xn )
- msgs += (" Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Xn, dtype=mfp )
- msgs += (" Standard error.....: %."+str(__p)+"e\n")%numpy.std( Xn, dtype=mfp )
- msgs += (" L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Xn )
- print(msgs)
+ msgs += (__flech + "Information before launching:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
+ msgs += (__marge + " Type...............: %s\n")%type( X0 )
+ msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
+ msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += ("\n")
+ msgs += (__marge + "%s\n\n"%("-"*75,))
#
- print(" %s\n"%("-"*75,))
if self._parameters["SetDebug"]:
CUR_LEVEL = logging.getLogger().getEffectiveLevel()
logging.getLogger().setLevel(logging.DEBUG)
- print("===> Beginning of repeated evaluation, activating debug\n")
+ if __r > 1:
+ msgs += (__flech + "Beginning of repeated evaluation, activating debug\n")
+ else:
+ msgs += (__flech + "Beginning of evaluation, activating debug\n")
else:
- print("===> Beginning of repeated evaluation, without activating debug\n")
+ if __r > 1:
+ msgs += (__flech + "Beginning of repeated evaluation, without activating debug\n")
+ else:
+ msgs += (__flech + "Beginning of evaluation, without activating debug\n")
+ print(msgs) # 1
#
# ----------
HO["Direct"].disableAvoidingRedundancy()
# ----------
Ys = []
- for i in range(self._parameters["NumberOfRepetition"]):
+ for i in range(__r):
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( numpy.ravel(Xn) )
+ self.StoredVariables["CurrentState"].store( X0 )
if __s:
- print(" %s\n"%("-"*75,))
- if self._parameters["NumberOfRepetition"] > 1:
- print("===> Repetition step number %i on a total of %i\n"%(i+1,self._parameters["NumberOfRepetition"]))
- print("===> Launching operator sequential evaluation\n")
+ msgs = (__marge + "%s\n"%("-"*75,)) # 2-1
+ if __r > 1:
+ msgs += ("\n")
+ msgs += (__flech + "Repetition step number %i on a total of %i\n"%(i+1,__r))
+ msgs += ("\n")
+ msgs += (__flech + "Launching operator sequential evaluation\n")
+ print(msgs) # 2-1
#
- Yn = Hm( Xn )
+ Yn = Hm( X0 )
#
if __s:
- print("\n===> End of operator sequential evaluation\n")
- #
- msgs = ("===> Information after evaluation:\n")
- msgs += ("\n Characteristics of simulated output vector Y=H(X), to compare to others:\n")
- msgs += (" Type...............: %s\n")%type( Yn )
- msgs += (" Length of vector...: %i\n")%max(numpy.ravel( Yn ).shape)
- msgs += (" Minimum value......: %."+str(__p)+"e\n")%numpy.min( Yn )
- msgs += (" Maximum value......: %."+str(__p)+"e\n")%numpy.max( Yn )
- msgs += (" Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Yn, dtype=mfp )
- msgs += (" Standard error.....: %."+str(__p)+"e\n")%numpy.std( Yn, dtype=mfp )
- msgs += (" L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Yn )
- print(msgs)
+ msgs = ("\n") # 2-2
+ msgs += (__flech + "End of operator sequential evaluation\n")
+ msgs += ("\n")
+ msgs += (__flech + "Information after evaluation:\n")
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of simulated output vector Y=F(X), to compare to others:\n")
+ msgs += (__marge + " Type...............: %s\n")%type( Yn )
+ msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Yn ).shape)
+ msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Yn )
+ msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Yn )
+ msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Yn, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Yn, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Yn )
+ print(msgs) # 2-2
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(Yn) )
#
HO["Direct"].enableAvoidingRedundancy()
# ----------
#
- print(" %s\n"%("-"*75,))
+ msgs = (__marge + "%s\n\n"%("-"*75,)) # 3
if self._parameters["SetDebug"]:
- print("===> End of repeated evaluation, deactivating debug if necessary\n")
+ if __r > 1:
+ msgs += (__flech + "End of repeated evaluation, deactivating debug if necessary\n")
+ else:
+ msgs += (__flech + "End of evaluation, deactivating debug if necessary\n")
logging.getLogger().setLevel(CUR_LEVEL)
else:
- print("===> End of repeated evaluation, without deactivating debug\n")
+ if __r > 1:
+ msgs += (__flech + "End of repeated evaluation, without deactivating debug\n")
+ else:
+ msgs += (__flech + "End of evaluation, without deactivating debug\n")
+ msgs += ("\n")
+ msgs += (__marge + "%s\n"%("-"*75,))
#
- if self._parameters["NumberOfRepetition"] > 1:
- print(" %s\n"%("-"*75,))
- print("===> Launching statistical summary calculation for %i states\n"%self._parameters["NumberOfRepetition"])
- msgs = (" %s\n"%("-"*75,))
- msgs += ("\n===> Statistical analysis of the outputs obtained through sequential repeated evaluations\n")
- msgs += ("\n (Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ if __r > 1:
+ msgs += ("\n")
+ msgs += (__flech + "Launching statistical summary calculation for %i states\n"%__r)
+ msgs += ("\n")
+ msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += ("\n")
+ msgs += (__flech + "Statistical analysis of the outputs obtained through sequential repeated evaluations\n")
+ msgs += ("\n")
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ msgs += ("\n")
Yy = numpy.array( Ys )
- msgs += ("\n Characteristics of the whole set of outputs Y:\n")
- msgs += (" Number of evaluations.........................: %i\n")%len( Ys )
- msgs += (" Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
- msgs += (" Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
- msgs += (" Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
- msgs += (" Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += (__marge + "Number of evaluations...........................: %i\n")%len( Ys )
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of the whole set of outputs Y:\n")
+ msgs += (__marge + " Size of each of the outputs...................: %i\n")%Ys[0].size
+ msgs += (__marge + " Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
+ msgs += (__marge + " Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
+ msgs += (__marge + " Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
+ msgs += (__marge + " Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += ("\n")
Ym = numpy.mean( numpy.array( Ys ), axis=0, dtype=mfp )
- msgs += ("\n Characteristics of the vector Ym, mean of the outputs Y:\n")
- msgs += (" Size of the mean of the outputs...............: %i\n")%Ym.size
- msgs += (" Minimum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.min( Ym )
- msgs += (" Maximum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.max( Ym )
- msgs += (" Mean of the mean of the outputs...............: %."+str(__p)+"e\n")%numpy.mean( Ym, dtype=mfp )
- msgs += (" Standard error of the mean of the outputs.....: %."+str(__p)+"e\n")%numpy.std( Ym, dtype=mfp )
+ msgs += (__marge + "Characteristics of the vector Ym, mean of the outputs Y:\n")
+ msgs += (__marge + " Size of the mean of the outputs...............: %i\n")%Ym.size
+ msgs += (__marge + " Minimum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.min( Ym )
+ msgs += (__marge + " Maximum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.max( Ym )
+ msgs += (__marge + " Mean of the mean of the outputs...............: %."+str(__p)+"e\n")%numpy.mean( Ym, dtype=mfp )
+ msgs += (__marge + " Standard error of the mean of the outputs.....: %."+str(__p)+"e\n")%numpy.std( Ym, dtype=mfp )
+ msgs += ("\n")
Ye = numpy.mean( numpy.array( Ys ) - Ym, axis=0, dtype=mfp )
- msgs += "\n Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n"
- msgs += (" Size of the mean of the differences...........: %i\n")%Ym.size
- msgs += (" Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Ye )
- msgs += (" Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Ye )
- msgs += (" Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Ye, dtype=mfp )
- msgs += (" Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Ye, dtype=mfp )
- msgs += ("\n %s\n"%("-"*75,))
- print(msgs)
+ msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n")
+ msgs += (__marge + " Size of the mean of the differences...........: %i\n")%Ye.size
+ msgs += (__marge + " Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Ye )
+ msgs += (__marge + " Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Ye )
+ msgs += (__marge + " Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Ye, dtype=mfp )
+ msgs += (__marge + " Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Ye, dtype=mfp )
+ msgs += ("\n")
+ msgs += (__marge + "%s\n"%("-"*75,))
+ #
+ msgs += ("\n")
+ msgs += (__marge + "End of the \"%s\" verification\n\n"%self._name)
+ msgs += (__marge + "%s\n"%("-"*75,))
+ print(msgs) # 3
#
self._post_run(HO)
return 0
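+# In essence, the statistical summary above reduces to the following check
+# (a standalone sketch; the deterministic operator F is hypothetical):
+import numpy
+
+def F(x):
+    # Hypothetical deterministic operator
+    return numpy.array(x)**2
+
+X0 = numpy.array([1., 1., 1.])
+Ys = [numpy.ravel( F(X0) ) for i in range(5)]       # 5 repeated evaluations
+Ym = numpy.mean( numpy.array(Ys), axis=0 )          # mean of the outputs Y
+Ye = numpy.mean( numpy.array(Ys) - Ym, axis=0 )     # mean of differences to Ym
+# For a satisfactory operator, Ye is numerically zero:
+print("Mean of the mean of the differences...: %.2e" % numpy.mean(Ye))
+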
import math, numpy
from daCore import BasicObjects, NumericObjects, PlatformInfo
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
+mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
message = "Graine fixée pour le générateur aléatoire",
)
self.defineRequiredParameter(
- name = "PlotAndSave",
- default = False,
- typecast = bool,
- message = "Trace et sauve les résultats",
- )
- self.defineRequiredParameter(
- name = "ResultFile",
- default = self._name+"_result_file",
- typecast = str,
- message = "Nom de base (hors extension) des fichiers de sauvegarde des résultats",
+ name = "NumberOfPrintedDigits",
+ default = 5,
+ typecast = int,
+ message = "Nombre de chiffres affichés pour les impressions de réels",
+ minval = 0,
)
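+        # For instance, with NumberOfPrintedDigits = 5, real values are printed
+        # through a format string built elsewhere as "%."+str(__p)+"e", e.g.:
+        #     ("%."+str(5)+"e") % 2.0**0.5  ->  '1.41421e+00'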
self.defineRequiredParameter(
name = "ResultTitle",
typecast = str,
message = "Label de la courbe tracée dans la figure",
)
+ self.defineRequiredParameter(
+ name = "ResultFile",
+ default = self._name+"_result_file",
+ typecast = str,
+ message = "Nom de base (hors extension) des fichiers de sauvegarde des résultats",
+ )
+ self.defineRequiredParameter(
+ name = "PlotAndSave",
+ default = False,
+ typecast = bool,
+ message = "Trace et sauve les résultats",
+ )
self.defineRequiredParameter(
name = "StoreSupplementaryCalculations",
default = [],
if self._parameters["ResiduFormula"] in ["Taylor", "TaylorOnNorm"]:
Ht = HO["Tangent"].appliedInXTo
#
+ X0 = numpy.ravel( Xb ).reshape((-1,1))
+ #
# ----------
+ __p = self._parameters["NumberOfPrintedDigits"]
+ #
+ __marge = 5*u" "
+ __flech = 3*"="+"> "
+ msgs = ("\n") # 1
+ if len(self._parameters["ResultTitle"]) > 0:
+ __rt = str(self._parameters["ResultTitle"])
+ msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + " " + __rt + "\n")
+ msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ else:
+ msgs += (__marge + "%s\n"%self._name)
+ msgs += (__marge + "%s\n"%("="*len(self._name),))
+ #
+ msgs += ("\n")
+    msgs += (__marge + "This test allows one to analyze the numerical stability of the gradient of some\n")
+ msgs += (__marge + "given simulation operator F, applied to one single vector argument x.\n")
+ msgs += (__marge + "The output shows simple statistics related to its stability for various\n")
+ msgs += (__marge + "increments, around an input checking point X.\n")
+ msgs += ("\n")
+ msgs += (__flech + "Information before launching:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
+ msgs += (__marge + " Type...............: %s\n")%type( X0 )
+ msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
+ msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += ("\n")
+ msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__flech + "Numerical quality indicators:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
+ msgs += (__marge + "following ratio or comparison:\n")
+ msgs += ("\n")
+ #
+ if self._parameters["ResiduFormula"] == "Taylor":
+ msgs += (__marge + " || F(X+Alpha*dX) - F(X) - Alpha * GradientF_X(dX) ||\n")
+ msgs += (__marge + " R(Alpha) = ----------------------------------------------------\n")
+ msgs += (__marge + " || F(X) ||\n")
+ msgs += ("\n")
+            msgs += (__marge + "If the residue decreases and if the decay goes as Alpha**2 with respect\n")
+            msgs += (__marge + "to Alpha, it means that the gradient is well calculated up to the stopping\n")
+            msgs += (__marge + "precision of the quadratic decay, and that F is not linear.\n")
+ msgs += ("\n")
+            msgs += (__marge + "If the residue decreases and if the decay goes as Alpha (linearly) with\n")
+            msgs += (__marge + "respect to Alpha, down to a certain threshold after which the residue is\n")
+            msgs += (__marge + "small and constant, it means that F is linear and that the residue\n")
+            msgs += (__marge + "decreases down to the level of the error made in calculating the\n")
+            msgs += (__marge + "GradientF_X term.\n")
+ #
+ __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
+ #
+ if self._parameters["ResiduFormula"] == "TaylorOnNorm":
+ msgs += (__marge + " || F(X+Alpha*dX) - F(X) - Alpha * GradientF_X(dX) ||\n")
+ msgs += (__marge + " R(Alpha) = ----------------------------------------------------\n")
+ msgs += (__marge + " Alpha**2\n")
+ msgs += ("\n")
+ msgs += (__marge + "It is a residue essentially similar to the classical Taylor criterion,\n")
+ msgs += (__marge + "but its behavior may differ depending on the numerical properties of\n")
+ msgs += (__marge + "the calculations of its various terms.\n")
+ msgs += ("\n")
+ msgs += (__marge + "If the residue is constant up to a certain threshold and increasing\n")
+ msgs += (__marge + "afterwards, it means that the gradient is well computed up to this\n")
+ msgs += (__marge + "stopping precision, and that F is not linear.\n")
+ msgs += ("\n")
+ msgs += (__marge + "If the residue is systematically increasing starting from a small\n")
+ msgs += (__marge + "value compared to ||F(X)||, it means that F is (quasi-)linear and that\n")
+ msgs += (__marge + "the calculation of the gradient is correct until the residue is of the\n")
+ msgs += (__marge + "order of magnitude of ||F(X)||.\n")
+ #
+ __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
+ #
+ if self._parameters["ResiduFormula"] == "Norm":
+ msgs += (__marge + " || F(X+Alpha*dX) - F(X) ||\n")
+ msgs += (__marge + " R(Alpha) = --------------------------\n")
+ msgs += (__marge + " Alpha\n")
+ msgs += ("\n")
+ msgs += (__marge + "which must remain constant until the accuracy of the calculation is\n")
+ msgs += (__marge + "reached.\n")
+ #
+ __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
+ #
+ msgs += ("\n")
+ msgs += (__marge + "We take dX0 = Normal(0,X) and dX = Alpha*dX0. F is the calculation code.\n")
+ msgs += ("\n")
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ print(msgs) # 1
+ #
Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
Perturbations.reverse()
#
- X = numpy.ravel( Xb ).reshape((-1,1))
- FX = numpy.ravel( Hm( X ) ).reshape((-1,1))
- NormeX = numpy.linalg.norm( X )
+ FX = numpy.ravel( Hm( X0 ) ).reshape((-1,1))
+ NormeX = numpy.linalg.norm( X0 )
NormeFX = numpy.linalg.norm( FX )
if NormeFX < mpr: NormeFX = mpr
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( X )
+ self.StoredVariables["CurrentState"].store( X0 )
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX )
#
dX0 = NumericObjects.SetInitialDirection(
self._parameters["InitialDirection"],
self._parameters["AmplitudeOfInitialDirection"],
- X,
+ X0,
)
#
if self._parameters["ResiduFormula"] in ["Taylor", "TaylorOnNorm"]:
- dX1 = float(self._parameters["AmplitudeOfTangentPerturbation"]) * dX0.reshape((-1,1))
- GradFxdX = Ht( (X, dX1) )
+ dX1 = float(self._parameters["AmplitudeOfTangentPerturbation"]) * dX0
+ GradFxdX = Ht( (X0, dX1) )
GradFxdX = numpy.ravel( GradFxdX ).reshape((-1,1))
GradFxdX = float(1./self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
#
- # Entete des resultats
- # --------------------
- __marge = 12*u" "
- __precision = u"""
- Remarque : les nombres inferieurs a %.0e (environ) representent un zero
- a la precision machine.\n"""%mpr
- if self._parameters["ResiduFormula"] == "Taylor":
- __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
- __msgdoc = u"""
- On observe le residu issu du developpement de Taylor de la fonction F,
- normalise par la valeur au point nominal :
-
- || F(X+Alpha*dX) - F(X) - Alpha * GradientF_X(dX) ||
- R(Alpha) = ----------------------------------------------------
- || F(X) ||
-
- Si le residu decroit et que la decroissance se fait en Alpha**2 selon Alpha,
- cela signifie que le gradient est bien calcule jusqu'a la precision d'arret
- de la decroissance quadratique, et que F n'est pas lineaire.
-
- Si le residu decroit et que la decroissance se fait en Alpha selon Alpha,
- jusqu'a un certain seuil apres lequel le residu est faible et constant, cela
- signifie que F est lineaire et que le residu decroit a partir de l'erreur
- faite dans le calcul du terme GradientF_X.
-
- On prend dX0 = Normal(0,X) et dX = Alpha*dX0. F est le code de calcul.\n""" + __precision
- if self._parameters["ResiduFormula"] == "TaylorOnNorm":
- __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
- __msgdoc = u"""
- On observe le residu issu du developpement de Taylor de la fonction F,
- rapporte au parametre Alpha au carre :
-
- || F(X+Alpha*dX) - F(X) - Alpha * GradientF_X(dX) ||
- R(Alpha) = ----------------------------------------------------
- Alpha**2
-
- C'est un residu essentiellement similaire au critere classique de Taylor,
- mais son comportement peut differer selon les proprietes numeriques des
- calculs de ses differents termes.
-
- Si le residu est constant jusqu'a un certain seuil et croissant ensuite,
- cela signifie que le gradient est bien calcule jusqu'a cette precision
- d'arret, et que F n'est pas lineaire.
-
- Si le residu est systematiquement croissant en partant d'une valeur faible
- par rapport a ||F(X)||, cela signifie que F est (quasi-)lineaire et que le
- calcul du gradient est correct jusqu'au moment ou le residu est de l'ordre de
- grandeur de ||F(X)||.
-
- On prend dX0 = Normal(0,X) et dX = Alpha*dX0. F est le code de calcul.\n""" + __precision
- if self._parameters["ResiduFormula"] == "Norm":
- __entete = u" i Alpha ||X|| ||F(X)|| ||F(X+dX)|| ||dX|| ||F(X+dX)-F(X)|| ||F(X+dX)-F(X)||/||dX|| R(Alpha) log( R )"
- __msgdoc = u"""
- On observe le residu, qui est base sur une approximation du gradient :
-
- || F(X+Alpha*dX) - F(X) ||
- R(Alpha) = ---------------------------
- Alpha
-
- qui doit rester constant jusqu'a ce que l'on atteigne la precision du calcul.
-
- On prend dX0 = Normal(0,X) et dX = Alpha*dX0. F est le code de calcul.\n""" + __precision
- #
- if len(self._parameters["ResultTitle"]) > 0:
- __rt = str(self._parameters["ResultTitle"])
- msgs = u"\n"
- msgs += __marge + "====" + "="*len(__rt) + "====\n"
- msgs += __marge + " " + __rt + "\n"
- msgs += __marge + "====" + "="*len(__rt) + "====\n"
- else:
- msgs = u""
- msgs += __msgdoc
- #
+        # Loop over the perturbations
+        # ---------------------------
__nbtirets = len(__entete) + 2
+ msgs = ("") # 2
msgs += "\n" + __marge + "-"*__nbtirets
msgs += "\n" + __marge + __entete
msgs += "\n" + __marge + "-"*__nbtirets
+ msgs += ("\n")
#
- # Boucle sur les perturbations
- # ----------------------------
NormesdX = []
NormesFXdX = []
NormesdFX = []
for i,amplitude in enumerate(Perturbations):
dX = amplitude * dX0.reshape((-1,1))
#
- FX_plus_dX = Hm( X + dX )
+ FX_plus_dX = Hm( X0 + dX )
FX_plus_dX = numpy.ravel( FX_plus_dX ).reshape((-1,1))
#
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( numpy.ravel(X + dX) )
+ self.StoredVariables["CurrentState"].store( numpy.ravel(X0 + dX) )
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(FX_plus_dX) )
#
elif self._parameters["ResiduFormula"] == "Norm":
Residu = NormedFXsAm
#
- msg = " %2i %5.0e %9.3e %9.3e %9.3e %9.3e %9.3e | %9.3e | %9.3e %4.0f"%(i,amplitude,NormeX,NormeFX,NormeFXdX,NormedX,NormedFX,NormedFXsdX,Residu,math.log10(max(1.e-99,Residu)))
- msgs += "\n" + __marge + msg
- #
self.StoredVariables["Residu"].store( Residu )
+ ttsep = " %2i %5.0e %9.3e %9.3e %9.3e %9.3e %9.3e | %9.3e | %9.3e %4.0f\n"%(i,amplitude,NormeX,NormeFX,NormeFXdX,NormedX,NormedFX,NormedFXsdX,Residu,math.log10(max(1.e-99,Residu)))
+ msgs += __marge + ttsep
#
- msgs += "\n" + __marge + "-"*__nbtirets
- msgs += "\n"
- #
- # ----------
- print("\nResults of gradient check by \"%s\" formula:"%self._parameters["ResiduFormula"])
- print(msgs)
+ msgs += (__marge + "-"*__nbtirets + "\n\n")
+ msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name,self._parameters["ResiduFormula"]))
+ msgs += (__marge + "%s\n"%("-"*75,))
+ print(msgs) # 2
#
if self._parameters["PlotAndSave"]:
f = open(str(self._parameters["ResultFile"])+".txt",'a')
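+# The expected behavior of the "Taylor" residue can be reproduced on a small
+# analytic case (a standalone sketch; F and its exact tangent are hypothetical):
+import math, numpy
+
+def F(x):
+    # Hypothetical nonlinear operator
+    return numpy.array([x[0]**2, x[1]**3, numpy.sin(x[2])])
+
+def TangentFX(x, dx):
+    # Exact tangent of F at x, applied to dx (diagonal Jacobian here)
+    return numpy.diag([2.*x[0], 3.*x[1]**2, numpy.cos(x[2])]) @ dx
+
+X0 = numpy.array([1., 1., 1.])
+dX0 = numpy.array([1., -1., 0.5])
+FX = F(X0)
+for i in range(9):
+    alpha = 10.0**-i
+    dX = alpha * dX0
+    R = numpy.linalg.norm( F(X0 + dX) - FX - TangentFX(X0, dX) ) / numpy.linalg.norm( FX )
+    print(" %2i  %5.0e  R = %9.3e  log10(R) = %5.1f"%(i, alpha, R, math.log10(max(1.e-99, R))))
+# log10(R) drops by about 2 per step until machine precision is reached,
+# which is the quadratic decay in Alpha expected for a correct gradient.
+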
import math, numpy
from daCore import BasicObjects, NumericObjects, PlatformInfo
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
+mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
)
+ self.defineRequiredParameter(
+ name = "NumberOfPrintedDigits",
+ default = 5,
+ typecast = int,
+ message = "Nombre de chiffres affichés pour les impressions de réels",
+ minval = 0,
+ )
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
if self._parameters["ResiduFormula"] in ["Taylor", "NominalTaylor", "NominalTaylorRMS"]:
Ht = HO["Tangent"].appliedInXTo
#
+ X0 = numpy.ravel( Xb ).reshape((-1,1))
+ #
+ # ----------
+ __p = self._parameters["NumberOfPrintedDigits"]
+ #
+ __marge = 5*u" "
+ __flech = 3*"="+"> "
+ msgs = ("\n") # 1
+ if len(self._parameters["ResultTitle"]) > 0:
+ __rt = str(self._parameters["ResultTitle"])
+ msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + " " + __rt + "\n")
+ msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ else:
+ msgs += (__marge + "%s\n"%self._name)
+ msgs += (__marge + "%s\n"%("="*len(self._name),))
+ #
+ msgs += ("\n")
+    msgs += (__marge + "This test allows one to analyze the linearity property of some given\n")
+ msgs += (__marge + "simulation operator F, applied to one single vector argument x.\n")
+ msgs += (__marge + "The output shows simple statistics related to its stability for various\n")
+ msgs += (__marge + "increments, around an input checking point X.\n")
+ msgs += ("\n")
+ msgs += (__flech + "Information before launching:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
+ msgs += (__marge + " Type...............: %s\n")%type( X0 )
+ msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
+ msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += ("\n")
+ msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__flech + "Numerical quality indicators:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
+ msgs += (__marge + "following ratio or comparison:\n")
+ msgs += ("\n")
+ #
+ if self._parameters["ResiduFormula"] == "CenteredDL":
+ msgs += (__marge + " || F(X+Alpha*dX) + F(X-Alpha*dX) - 2*F(X) ||\n")
+ msgs += (__marge + " R(Alpha) = --------------------------------------------\n")
+ msgs += (__marge + " || F(X) ||\n")
+ msgs += ("\n")
+            msgs += (__marge + "If the residue always remains very small compared to 1, the linearity\n")
+            msgs += (__marge + "assumption of F is verified.\n")
+ msgs += ("\n")
+ msgs += (__marge + "If the residue varies a lot, or is of the order of 1 or more, and is\n")
+ msgs += (__marge + "small only under a certain order of Alpha increment, the linearity\n")
+ msgs += (__marge + "assumption of F is not verified.\n")
+ msgs += ("\n")
+            msgs += (__marge + "If the residue decreases and if the decay goes as Alpha**2 with respect\n")
+            msgs += (__marge + "to Alpha, it means that the gradient is well calculated up to the\n")
+            msgs += (__marge + "stopping precision of the quadratic decay.\n")
+ #
+ __entete = u" i Alpha ||X|| ||F(X)|| | R(Alpha) log10( R )"
+ #
+ if self._parameters["ResiduFormula"] == "Taylor":
+ msgs += (__marge + " || F(X+Alpha*dX) - F(X) - Alpha * GradientF_X(dX) ||\n")
+ msgs += (__marge + " R(Alpha) = ----------------------------------------------------\n")
+ msgs += (__marge + " || F(X) ||\n")
+ msgs += ("\n")
+            msgs += (__marge + "If the residue always remains very small compared to 1, the linearity\n")
+            msgs += (__marge + "assumption of F is verified.\n")
+ msgs += ("\n")
+ msgs += (__marge + "If the residue varies a lot, or is of the order of 1 or more, and is\n")
+ msgs += (__marge + "small only under a certain order of Alpha increment, the linearity\n")
+ msgs += (__marge + "assumption of F is not verified.\n")
+ msgs += ("\n")
+            msgs += (__marge + "If the residue decreases and if the decay goes as Alpha**2 with respect\n")
+            msgs += (__marge + "to Alpha, it means that the gradient is well calculated up to the\n")
+            msgs += (__marge + "stopping precision of the quadratic decay.\n")
+ #
+ __entete = u" i Alpha ||X|| ||F(X)|| | R(Alpha) log10( R )"
+ #
+ if self._parameters["ResiduFormula"] == "NominalTaylor":
+ msgs += (__marge + " R(Alpha) = max(\n")
+ msgs += (__marge + " || F(X+Alpha*dX) - Alpha * F(dX) || / || F(X) ||,\n")
+ msgs += (__marge + " || F(X-Alpha*dX) + Alpha * F(dX) || / || F(X) ||,\n")
+ msgs += (__marge + " )\n")
+ msgs += ("\n")
+            msgs += (__marge + "If the residue always remains equal to 1 to within 2 or 3 percent\n")
+            msgs += (__marge + "(i.e. |R-1| stays below 2 or 3 percent), then the linearity assumption\n")
+            msgs += (__marge + "of F is verified.\n")
+            msgs += ("\n")
+            msgs += (__marge + "If the residue is equal to 1 only over a part of the variation range\n")
+            msgs += (__marge + "of the Alpha increment, then it is over this part that the linearity\n")
+            msgs += (__marge + "assumption of F is verified.\n")
+ #
+            __entete = u"  i   Alpha     ||X||      ||F(X)||   |   R(Alpha)   |R-1| in %"
+ #
+ if self._parameters["ResiduFormula"] == "NominalTaylorRMS":
+ msgs += (__marge + " R(Alpha) = max(\n")
+ msgs += (__marge + " RMS( F(X), F(X+Alpha*dX) - Alpha * F(dX) ) / || F(X) ||,\n")
+ msgs += (__marge + " RMS( F(X), F(X-Alpha*dX) + Alpha * F(dX) ) / || F(X) ||,\n")
+ msgs += (__marge + " )\n")
+ msgs += ("\n")
+            msgs += (__marge + "If the residue always remains equal to 0 to within 1 or 2 percent,\n")
+            msgs += (__marge + "then the linearity assumption of F is verified.\n")
+            msgs += ("\n")
+            msgs += (__marge + "If the residue is equal to 0 only over a part of the variation range\n")
+            msgs += (__marge + "of the Alpha increment, then it is over this part that the linearity\n")
+            msgs += (__marge + "assumption of F is verified.\n")
+ #
+            __entete = u"  i   Alpha     ||X||      ||F(X)||   |   R(Alpha)    |R| in %"
+ #
+ msgs += ("\n")
+ msgs += (__marge + "We take dX0 = Normal(0,X) and dX = Alpha*dX0. F is the calculation code.\n")
+ msgs += ("\n")
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ print(msgs) # 1
+ #
Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
Perturbations.reverse()
#
- Xn = numpy.ravel( Xb ).reshape((-1,1))
- FX = numpy.ravel( Hm( Xn ) ).reshape((-1,1))
- NormeX = numpy.linalg.norm( Xn )
+ FX = numpy.ravel( Hm( X0 ) ).reshape((-1,1))
+ NormeX = numpy.linalg.norm( X0 )
NormeFX = numpy.linalg.norm( FX )
if NormeFX < mpr: NormeFX = mpr
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( numpy.ravel(Xn) )
+ self.StoredVariables["CurrentState"].store( X0 )
if self._toStore("SimulatedObservationAtCurrentState"):
- self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(FX) )
+ self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX )
#
dX0 = NumericObjects.SetInitialDirection(
self._parameters["InitialDirection"],
self._parameters["AmplitudeOfInitialDirection"],
- Xn,
+ X0,
)
#
if self._parameters["ResiduFormula"] in ["Taylor", "NominalTaylor", "NominalTaylorRMS"]:
dX1 = float(self._parameters["AmplitudeOfTangentPerturbation"]) * dX0
- GradFxdX = Ht( (Xn, dX1) )
+ GradFxdX = Ht( (X0, dX1) )
GradFxdX = numpy.ravel( GradFxdX ).reshape((-1,1))
GradFxdX = float(1./self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
#
- # Entete des resultats
- # --------------------
- __marge = 12*u" "
- __precision = u"""
- Remarque : les nombres inferieurs a %.0e (environ) representent un zero
- a la precision machine.\n"""%mpr
- if self._parameters["ResiduFormula"] == "CenteredDL":
- __entete = u" i Alpha ||X|| ||F(X)|| | R(Alpha) log10( R )"
- __msgdoc = u"""
- On observe le residu provenant de la difference centree des valeurs de F
- au point nominal et aux points perturbes, normalisee par la valeur au
- point nominal :
-
- || F(X+Alpha*dX) + F(X-Alpha*dX) - 2*F(X) ||
- R(Alpha) = --------------------------------------------
- || F(X) ||
-
- S'il reste constamment tres faible par rapport a 1, l'hypothese de linearite
- de F est verifiee.
-
- Si le residu varie, ou qu'il est de l'ordre de 1 ou plus, et qu'il n'est
- faible qu'a partir d'un certain ordre d'increment, l'hypothese de linearite
- de F n'est pas verifiee.
-
- Si le residu decroit et que la decroissance se fait en Alpha**2 selon Alpha,
- cela signifie que le gradient est calculable jusqu'a la precision d'arret
- de la decroissance quadratique.
-
- On prend dX0 = Normal(0,X) et dX = Alpha*dX0. F est le code de calcul.\n""" + __precision
- if self._parameters["ResiduFormula"] == "Taylor":
- __entete = u" i Alpha ||X|| ||F(X)|| | R(Alpha) log10( R )"
- __msgdoc = u"""
- On observe le residu issu du developpement de Taylor de la fonction F,
- normalisee par la valeur au point nominal :
-
- || F(X+Alpha*dX) - F(X) - Alpha * GradientF_X(dX) ||
- R(Alpha) = ----------------------------------------------------
- || F(X) ||
-
- S'il reste constamment tres faible par rapport a 1, l'hypothese de linearite
- de F est verifiee.
-
- Si le residu varie, ou qu'il est de l'ordre de 1 ou plus, et qu'il n'est
- faible qu'a partir d'un certain ordre d'increment, l'hypothese de linearite
- de F n'est pas verifiee.
-
- Si le residu decroit et que la decroissance se fait en Alpha**2 selon Alpha,
- cela signifie que le gradient est bien calcule jusqu'a la precision d'arret
- de la decroissance quadratique.
-
- On prend dX0 = Normal(0,X) et dX = Alpha*dX0. F est le code de calcul.\n""" + __precision
- if self._parameters["ResiduFormula"] == "NominalTaylor":
- __entete = u" i Alpha ||X|| ||F(X)|| | R(Alpha) |R-1| en %"
- __msgdoc = u"""
- On observe le residu obtenu a partir de deux approximations d'ordre 1 de F(X),
- normalisees par la valeur au point nominal :
-
- R(Alpha) = max(
- || F(X+Alpha*dX) - Alpha * F(dX) || / || F(X) ||,
- || F(X-Alpha*dX) + Alpha * F(dX) || / || F(X) ||,
- )
-
- S'il reste constamment egal a 1 a moins de 2 ou 3 pourcents pres (c'est-a-dire
- que |R-1| reste egal a 2 ou 3 pourcents), c'est que l'hypothese de linearite
- de F est verifiee.
-
- S'il est egal a 1 sur une partie seulement du domaine de variation de
- l'increment Alpha, c'est sur cette partie que l'hypothese de linearite de F
- est verifiee.
-
- On prend dX0 = Normal(0,X) et dX = Alpha*dX0. F est le code de calcul.\n""" + __precision
- if self._parameters["ResiduFormula"] == "NominalTaylorRMS":
- __entete = u" i Alpha ||X|| ||F(X)|| | R(Alpha) |R| en %"
- __msgdoc = u"""
- On observe le residu obtenu a partir de deux approximations d'ordre 1 de F(X),
- normalisees par la valeur au point nominal :
-
- R(Alpha) = max(
- RMS( F(X), F(X+Alpha*dX) - Alpha * F(dX) ) / || F(X) ||,
- RMS( F(X), F(X-Alpha*dX) + Alpha * F(dX) ) / || F(X) ||,
- )
-
- S'il reste constamment egal a 0 a moins de 1 ou 2 pourcents pres, c'est
- que l'hypothese de linearite de F est verifiee.
-
- S'il est egal a 0 sur une partie seulement du domaine de variation de
- l'increment Alpha, c'est sur cette partie que l'hypothese de linearite de F
- est verifiee.
-
- On prend dX0 = Normal(0,X) et dX = Alpha*dX0. F est le code de calcul.\n""" + __precision
- #
- if len(self._parameters["ResultTitle"]) > 0:
- __rt = str(self._parameters["ResultTitle"])
- msgs = u"\n"
- msgs += __marge + "====" + "="*len(__rt) + "====\n"
- msgs += __marge + " " + __rt + "\n"
- msgs += __marge + "====" + "="*len(__rt) + "====\n"
- else:
- msgs = u""
- msgs += __msgdoc
- #
+ # Boucle sur les perturbations
+ # ----------------------------
__nbtirets = len(__entete) + 2
+ msgs = ("") # 2
msgs += "\n" + __marge + "-"*__nbtirets
msgs += "\n" + __marge + __entete
msgs += "\n" + __marge + "-"*__nbtirets
+ msgs += ("\n")
#
- # Boucle sur les perturbations
- # ----------------------------
for i,amplitude in enumerate(Perturbations):
dX = amplitude * dX0.reshape((-1,1))
#
if self._parameters["ResiduFormula"] == "CenteredDL":
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( Xn + dX )
- self.StoredVariables["CurrentState"].store( Xn - dX )
+ self.StoredVariables["CurrentState"].store( X0 + dX )
+ self.StoredVariables["CurrentState"].store( X0 - dX )
#
- FX_plus_dX = numpy.ravel( Hm( Xn + dX ) ).reshape((-1,1))
- FX_moins_dX = numpy.ravel( Hm( Xn - dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
+ FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1,1))
#
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX_plus_dX )
Residu = numpy.linalg.norm( FX_plus_dX + FX_moins_dX - 2 * FX ) / NormeFX
#
self.StoredVariables["Residu"].store( Residu )
- msg = " %2i %5.0e %9.3e %9.3e | %9.3e %4.0f"%(i,amplitude,NormeX,NormeFX,Residu,math.log10(max(1.e-99,Residu)))
- msgs += "\n" + __marge + msg
+ ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %4.0f\n"%(i,amplitude,NormeX,NormeFX,Residu,math.log10(max(1.e-99,Residu)))
+ msgs += __marge + ttsep
#
if self._parameters["ResiduFormula"] == "Taylor":
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( Xn + dX )
+ self.StoredVariables["CurrentState"].store( X0 + dX )
#
- FX_plus_dX = numpy.ravel( Hm( Xn + dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
#
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX_plus_dX )
Residu = numpy.linalg.norm( FX_plus_dX - FX - amplitude * GradFxdX ) / NormeFX
#
self.StoredVariables["Residu"].store( Residu )
- msg = " %2i %5.0e %9.3e %9.3e | %9.3e %4.0f"%(i,amplitude,NormeX,NormeFX,Residu,math.log10(max(1.e-99,Residu)))
- msgs += "\n" + __marge + msg
+ ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %4.0f\n"%(i,amplitude,NormeX,NormeFX,Residu,math.log10(max(1.e-99,Residu)))
+ msgs += __marge + ttsep
#
if self._parameters["ResiduFormula"] == "NominalTaylor":
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( Xn + dX )
- self.StoredVariables["CurrentState"].store( Xn - dX )
+ self.StoredVariables["CurrentState"].store( X0 + dX )
+ self.StoredVariables["CurrentState"].store( X0 - dX )
self.StoredVariables["CurrentState"].store( dX )
#
- FX_plus_dX = numpy.ravel( Hm( Xn + dX ) ).reshape((-1,1))
- FX_moins_dX = numpy.ravel( Hm( Xn - dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
+ FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1,1))
FdX = numpy.ravel( Hm( dX ) ).reshape((-1,1))
#
if self._toStore("SimulatedObservationAtCurrentState"):
)
#
self.StoredVariables["Residu"].store( Residu )
- msg = " %2i %5.0e %9.3e %9.3e | %9.3e %5i %s"%(i,amplitude,NormeX,NormeFX,Residu,100.*abs(Residu-1.),"%")
- msgs += "\n" + __marge + msg
+ ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %5i %s\n"%(i,amplitude,NormeX,NormeFX,Residu,100.*abs(Residu-1.),"%")
+ msgs += __marge + ttsep
#
if self._parameters["ResiduFormula"] == "NominalTaylorRMS":
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( Xn + dX )
- self.StoredVariables["CurrentState"].store( Xn - dX )
+ self.StoredVariables["CurrentState"].store( X0 + dX )
+ self.StoredVariables["CurrentState"].store( X0 - dX )
self.StoredVariables["CurrentState"].store( dX )
#
- FX_plus_dX = numpy.ravel( Hm( Xn + dX ) ).reshape((-1,1))
- FX_moins_dX = numpy.ravel( Hm( Xn - dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
+ FX_moins_dX = numpy.ravel( Hm( X0 - dX ) ).reshape((-1,1))
FdX = numpy.ravel( Hm( dX ) ).reshape((-1,1))
#
if self._toStore("SimulatedObservationAtCurrentState"):
)
#
self.StoredVariables["Residu"].store( Residu )
- msg = " %2i %5.0e %9.3e %9.3e | %9.3e %5i %s"%(i,amplitude,NormeX,NormeFX,Residu,100.*Residu,"%")
- msgs += "\n" + __marge + msg
- #
- msgs += "\n" + __marge + "-"*__nbtirets
- msgs += "\n"
+ ttsep = " %2i %5.0e %9.3e %9.3e | %9.3e %5i %s\n"%(i,amplitude,NormeX,NormeFX,Residu,100.*Residu,"%")
+ msgs += __marge + ttsep
#
- # Sorties eventuelles
- # -------------------
- print("\nResults of linearity check by \"%s\" formula:"%self._parameters["ResiduFormula"])
- print(msgs)
+ msgs += (__marge + "-"*__nbtirets + "\n\n")
+ msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name,self._parameters["ResiduFormula"]))
+ msgs += (__marge + "%s\n"%("-"*75,))
+ print(msgs) # 2
#
self._post_run(HO)
return 0
#
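
For readers who want to reproduce these residues outside ADAO, here is a
minimal standalone sketch. It assumes ``Normal(0,X)`` denotes a centered
normal draw scaled componentwise by X, and it substitutes a simple finite
difference for the tangent operator; the operator ``F`` and the amplitude
``eps`` are purely illustrative::

    # Minimal sketch (not ADAO code) of the "CenteredDL" and "Taylor"
    # residues, for a hypothetical mildly non-linear operator F.
    import numpy

    def F(x):
        return x + 0.01 * x**2       # illustrative non-linear operator

    X   = numpy.array([1., 2., 3.])
    dX0 = numpy.random.default_rng(0).normal(0., X)  # dX0 = Normal(0,X)
    FX  = F(X)
    eps = 1.e-10                     # assumed tangent perturbation amplitude
    GradFxdX = (F(X + eps * dX0) - FX) / eps         # ~ GradientF_X(dX0)
    for alpha in (1.e-1, 1.e-3, 1.e-5):
        dX = alpha * dX0                             # dX = Alpha*dX0
        Rc = numpy.linalg.norm(F(X + dX) + F(X - dX) - 2.*FX) / numpy.linalg.norm(FX)
        Rt = numpy.linalg.norm(F(X + dX) - FX - alpha * GradFxdX) / numpy.linalg.norm(FX)
        print("Alpha=%7.0e  CenteredDL=%9.3e  Taylor=%9.3e" % (alpha, Rc, Rt))

For this quadratic ``F`` both residues decay as Alpha**2, which is exactly
the signature described in the messages above.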
# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
-import logging
+import numpy, copy, logging
from daCore import BasicObjects, PlatformInfo
-import numpy, copy
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
#
Hm = HO["Direct"].appliedTo
#
- Xn = copy.copy( Xb )
+ X0 = copy.copy( Xb )
#
# ----------
__s = self._parameters["ShowElementarySummary"]
__p = self._parameters["NumberOfPrintedDigits"]
+ __r = self._parameters["NumberOfRepetition"]
#
- __marge = 5*u" "
+ __marge = 5*u" "
+ __flech = 3*"="+"> "
+ msgs = ("\n") # 1
if len(self._parameters["ResultTitle"]) > 0:
__rt = str(self._parameters["ResultTitle"])
- msgs = ("\n")
msgs += (__marge + "====" + "="*len(__rt) + "====\n")
msgs += (__marge + " " + __rt + "\n")
msgs += (__marge + "====" + "="*len(__rt) + "====\n")
else:
- msgs = ("\n")
- msgs += (" %s\n"%self._name)
- msgs += (" %s\n"%("="*len(self._name),))
+ msgs += (__marge + "%s\n"%self._name)
+ msgs += (__marge + "%s\n"%("="*len(self._name),))
#
msgs += ("\n")
- msgs += (" This test allows to analyze the (repetition of) launch of some given\n")
- msgs += (" operator. It shows simple statistics related to its successful execution,\n")
- msgs += (" or related to the similarities of repetition of its execution.\n")
+ msgs += (__marge + "This test allows to analyze the (repetition of the) launch of some\n")
+ msgs += (__marge + "given simulation operator F, applied to one single vector argument x,\n")
+ msgs += (__marge + "in a parallel way.\n")
+ msgs += (__marge + "The output shows simple statistics related to its successful execution,\n")
+ msgs += (__marge + "or related to the similarities of repetition of its execution.\n")
msgs += ("\n")
- msgs += ("===> Information before launching:\n")
- msgs += (" -----------------------------\n")
- msgs += (" Characteristics of input vector X, internally converted:\n")
- msgs += (" Type...............: %s\n")%type( Xn )
- msgs += (" Length of vector...: %i\n")%max(numpy.ravel( Xn ).shape)
- msgs += (" Minimum value......: %."+str(__p)+"e\n")%numpy.min( Xn )
- msgs += (" Maximum value......: %."+str(__p)+"e\n")%numpy.max( Xn )
- msgs += (" Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Xn, dtype=mfp )
- msgs += (" Standard error.....: %."+str(__p)+"e\n")%numpy.std( Xn, dtype=mfp )
- msgs += (" L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Xn )
- print(msgs)
- #
- print(" %s\n"%("-"*75,))
+ msgs += (__flech + "Information before launching:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
+ msgs += (__marge + " Type...............: %s\n")%type( X0 )
+ msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
+ msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += ("\n")
+ msgs += (__marge + "%s\n\n"%("-"*75,))
+ #
if self._parameters["SetDebug"]:
CUR_LEVEL = logging.getLogger().getEffectiveLevel()
logging.getLogger().setLevel(logging.DEBUG)
- print("===> Beginning of repeated evaluation, activating debug\n")
+ if __r > 1:
+ msgs += (__flech + "Beginning of repeated evaluation, activating debug\n")
+ else:
+ msgs += (__flech + "Beginning of evaluation, activating debug\n")
else:
- print("===> Beginning of repeated evaluation, without activating debug\n")
+ if __r > 1:
+ msgs += (__flech + "Beginning of repeated evaluation, without activating debug\n")
+ else:
+ msgs += (__flech + "Beginning of evaluation, without activating debug\n")
+ msgs += ("\n")
+ msgs += (__marge + "%s\n"%("-"*75,))
+ print(msgs) # 1
#
# ----------
HO["Direct"].disableAvoidingRedundancy()
# ----------
Ys = []
- print(" %s\n"%("-"*75,))
Xs = []
- for i in range(self._parameters["NumberOfRepetition"]):
+        msgs = (__marge + "Appending the input vector to the argument set to be evaluated in parallel\n") # 2-1
+ for i in range(__r):
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( numpy.ravel(Xn) )
- Xs.append( Xn )
- print("===> Launching operator parallel evaluation for %i states\n"%self._parameters["NumberOfRepetition"])
+ self.StoredVariables["CurrentState"].store( X0 )
+ Xs.append( X0 )
+ if __s:
+ if __r > 1:
+                    msgs += (__marge + "  Appending step number %i out of a total of %i\n"%(i+1,__r))
+ #
+ msgs += ("\n")
+ msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__flech + "Launching operator parallel evaluation for %i states\n"%__r)
+ print(msgs) # 2-1
#
Ys = Hm( Xs, argsAsSerie = True )
#
- print("\n===> End of operator parallel evaluation for %i states\n"%self._parameters["NumberOfRepetition"])
+ msgs = ("\n") # 2-2
+ msgs += (__flech + "End of operator parallel evaluation for %i states\n"%__r)
+ msgs += ("\n")
+ msgs += (__marge + "%s\n"%("-"*75,))
+ print(msgs) # 2-2
#
# ----------
HO["Direct"].enableAvoidingRedundancy()
# ----------
#
- print(" %s\n"%("-"*75,))
+ msgs = ("") # 3
if self._parameters["SetDebug"]:
- print("===> End of repeated evaluation, deactivating debug if necessary\n")
+ if __r > 1:
+ msgs += (__flech + "End of repeated evaluation, deactivating debug if necessary\n")
+ else:
+ msgs += (__flech + "End of evaluation, deactivating debug if necessary\n")
logging.getLogger().setLevel(CUR_LEVEL)
else:
- print("===> End of repeated evaluation, without deactivating debug\n")
+ if __r > 1:
+ msgs += (__flech + "End of repeated evaluation, without deactivating debug\n")
+ else:
+ msgs += (__flech + "End of evaluation, without deactivating debug\n")
#
if __s or self._toStore("SimulatedObservationAtCurrentState"):
for i in range(self._parameters["NumberOfRepetition"]):
if __s:
- print(" %s\n"%("-"*75,))
+ msgs += ("\n")
+ msgs += (__marge + "%s\n\n"%("-"*75,))
if self._parameters["NumberOfRepetition"] > 1:
- print("===> Repetition step number %i on a total of %i\n"%(i+1,self._parameters["NumberOfRepetition"]))
+                    msgs += (__flech + "Repetition step number %i out of a total of %i\n"%(i+1,self._parameters["NumberOfRepetition"]))
#
Yn = Ys[i]
if __s:
- msgs = ("===> Information after evaluation:\n")
- msgs += ("\n Characteristics of simulated output vector Y=H(X), to compare to others:\n")
- msgs += (" Type...............: %s\n")%type( Yn )
- msgs += (" Length of vector...: %i\n")%max(numpy.ravel( Yn ).shape)
- msgs += (" Minimum value......: %."+str(__p)+"e\n")%numpy.min( Yn )
- msgs += (" Maximum value......: %."+str(__p)+"e\n")%numpy.max( Yn )
- msgs += (" Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Yn, dtype=mfp )
- msgs += (" Standard error.....: %."+str(__p)+"e\n")%numpy.std( Yn, dtype=mfp )
- msgs += (" L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Yn )
- print(msgs)
+ msgs += ("\n")
+ msgs += (__flech + "Information after evaluation:\n")
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of simulated output vector Y=F(X), to compare to others:\n")
+ msgs += (__marge + " Type...............: %s\n")%type( Yn )
+ msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( Yn ).shape)
+ msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( Yn )
+ msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( Yn )
+ msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( Yn, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( Yn, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( Yn )
+ #
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( numpy.ravel(Yn) )
#
- if self._parameters["NumberOfRepetition"] > 1:
- print(" %s\n"%("-"*75,))
- print("===> Launching statistical summary calculation for %i states\n"%self._parameters["NumberOfRepetition"])
- msgs = (" %s\n"%("-"*75,))
- msgs += ("\n===> Statistical analysis of the outputs obtained through parallel repeated evaluations\n")
- msgs += ("\n (Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ msgs += ("\n")
+ msgs += (__marge + "%s\n"%("-"*75,))
+ #
+ if __r > 1:
+ msgs += ("\n")
+ msgs += (__flech + "Launching statistical summary calculation for %i states\n"%__r)
+ msgs += ("\n")
+ msgs += (__marge + "%s\n"%("-"*75,))
+ msgs += ("\n")
+ msgs += (__flech + "Statistical analysis of the outputs obtained through parallel repeated evaluations\n")
+ msgs += ("\n")
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ msgs += ("\n")
Yy = numpy.array( Ys )
- msgs += ("\n Characteristics of the whole set of outputs Y:\n")
- msgs += (" Number of evaluations.........................: %i\n")%len( Ys )
- msgs += (" Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
- msgs += (" Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
- msgs += (" Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
- msgs += (" Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += (__marge + "Number of evaluations...........................: %i\n")%len( Ys )
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of the whole set of outputs Y:\n")
+ msgs += (__marge + " Size of each of the outputs...................: %i\n")%Ys[0].size
+ msgs += (__marge + " Minimum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.min( Yy )
+ msgs += (__marge + " Maximum value of the whole set of outputs.....: %."+str(__p)+"e\n")%numpy.max( Yy )
+ msgs += (__marge + " Mean of vector of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.mean( Yy, dtype=mfp )
+ msgs += (__marge + " Standard error of the whole set of outputs....: %."+str(__p)+"e\n")%numpy.std( Yy, dtype=mfp )
+ msgs += ("\n")
Ym = numpy.mean( numpy.array( Ys ), axis=0, dtype=mfp )
- msgs += ("\n Characteristics of the vector Ym, mean of the outputs Y:\n")
- msgs += (" Size of the mean of the outputs...............: %i\n")%Ym.size
- msgs += (" Minimum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.min( Ym )
- msgs += (" Maximum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.max( Ym )
- msgs += (" Mean of the mean of the outputs...............: %."+str(__p)+"e\n")%numpy.mean( Ym, dtype=mfp )
- msgs += (" Standard error of the mean of the outputs.....: %."+str(__p)+"e\n")%numpy.std( Ym, dtype=mfp )
+ msgs += (__marge + "Characteristics of the vector Ym, mean of the outputs Y:\n")
+ msgs += (__marge + " Size of the mean of the outputs...............: %i\n")%Ym.size
+ msgs += (__marge + " Minimum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.min( Ym )
+ msgs += (__marge + " Maximum value of the mean of the outputs......: %."+str(__p)+"e\n")%numpy.max( Ym )
+ msgs += (__marge + " Mean of the mean of the outputs...............: %."+str(__p)+"e\n")%numpy.mean( Ym, dtype=mfp )
+ msgs += (__marge + " Standard error of the mean of the outputs.....: %."+str(__p)+"e\n")%numpy.std( Ym, dtype=mfp )
+ msgs += ("\n")
Ye = numpy.mean( numpy.array( Ys ) - Ym, axis=0, dtype=mfp )
- msgs += "\n Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n"
- msgs += (" Size of the mean of the differences...........: %i\n")%Ym.size
- msgs += (" Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Ye )
- msgs += (" Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Ye )
- msgs += (" Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Ye, dtype=mfp )
- msgs += (" Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Ye, dtype=mfp )
- msgs += ("\n %s\n"%("-"*75,))
- print(msgs)
+ msgs += (__marge + "Characteristics of the mean of the differences between the outputs Y and their mean Ym:\n")
+ msgs += (__marge + " Size of the mean of the differences...........: %i\n")%Ye.size
+ msgs += (__marge + " Minimum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.min( Ye )
+ msgs += (__marge + " Maximum value of the mean of the differences..: %."+str(__p)+"e\n")%numpy.max( Ye )
+ msgs += (__marge + " Mean of the mean of the differences...........: %."+str(__p)+"e\n")%numpy.mean( Ye, dtype=mfp )
+ msgs += (__marge + " Standard error of the mean of the differences.: %."+str(__p)+"e\n")%numpy.std( Ye, dtype=mfp )
+ msgs += ("\n")
+ msgs += (__marge + "%s\n"%("-"*75,))
+ #
+ msgs += ("\n")
+        msgs += (__marge + "End of the \"%s\" verification.\n\n"%self._name)
+ msgs += (__marge + "%s\n"%("-"*75,))
+ print(msgs) # 3
#
self._post_run(HO)
return 0
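
As an aside, the statistical summary assembled above (the set of outputs Y,
its mean Ym, and the mean Ye of the differences to the mean) can be
reproduced with a few lines of plain numpy; the deterministic operator
``F`` below is purely illustrative::

    # Sketch (not ADAO code) of the repetition statistics: evaluate the
    # same input several times and inspect the spread around the mean.
    import numpy

    def F(x):
        return numpy.sin(x)          # illustrative deterministic operator

    X0 = numpy.array([0., 1., 2.])
    Ys = [F(X0) for _ in range(5)]             # 5 repeated evaluations
    Yy = numpy.array(Ys)
    Ym = numpy.mean(Yy, axis=0)                # mean of the outputs Y
    Ye = numpy.mean(Yy - Ym, axis=0)           # mean of the differences
    print(numpy.std(Ye))                       # 0 to machine precision here

A non-negligible standard error of Ye is what reveals run-to-run
variability of the simulation code.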
import numpy
from daCore import BasicObjects, NumericObjects, PlatformInfo
mpr = PlatformInfo.PlatformInfo().MachinePrecision()
+mfp = PlatformInfo.PlatformInfo().MaximumPrecision()
# ==============================================================================
class ElementaryAlgorithm(BasicObjects.Algorithm):
typecast = numpy.random.seed,
message = "Graine fixée pour le générateur aléatoire",
)
+ self.defineRequiredParameter(
+ name = "NumberOfPrintedDigits",
+ default = 5,
+ typecast = int,
+ message = "Nombre de chiffres affichés pour les impressions de réels",
+ minval = 0,
+ )
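
A short note on this new parameter: the prints above build their formats as
``"%."+str(__p)+"e"``, so "NumberOfPrintedDigits" simply sets the mantissa
precision of the scientific notation, for instance::

    p = 5                                      # NumberOfPrintedDigits
    print(("Mean of vector.....: %." + str(p) + "e") % (2. / 3.))
    # prints: Mean of vector.....: 6.66667e-01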
self.defineRequiredParameter(
name = "ResultTitle",
default = "",
Hm = HO["Direct"].appliedTo
Ht = HO["Tangent"].appliedInXTo
#
- # Construction des perturbations
- # ------------------------------
+ X0 = numpy.ravel( Xb ).reshape((-1,1))
+ #
+ # ----------
+ __p = self._parameters["NumberOfPrintedDigits"]
+ #
+ __marge = 5*u" "
+ __flech = 3*"="+"> "
+ msgs = ("\n") # 1
+ if len(self._parameters["ResultTitle"]) > 0:
+ __rt = str(self._parameters["ResultTitle"])
+ msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ msgs += (__marge + " " + __rt + "\n")
+ msgs += (__marge + "====" + "="*len(__rt) + "====\n")
+ else:
+ msgs += (__marge + "%s\n"%self._name)
+ msgs += (__marge + "%s\n"%("="*len(self._name),))
+ #
+ msgs += ("\n")
+ msgs += (__marge + "This test allows to analyze the numerical stability of the tangent of some\n")
+ msgs += (__marge + "given simulation operator F, applied to one single vector argument x.\n")
+ msgs += (__marge + "The output shows simple statistics related to its stability for various\n")
+ msgs += (__marge + "increments, around an input checking point X.\n")
+ msgs += ("\n")
+ msgs += (__flech + "Information before launching:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Characteristics of input vector X, internally converted:\n")
+ msgs += (__marge + " Type...............: %s\n")%type( X0 )
+ msgs += (__marge + " Length of vector...: %i\n")%max(numpy.ravel( X0 ).shape)
+ msgs += (__marge + " Minimum value......: %."+str(__p)+"e\n")%numpy.min( X0 )
+ msgs += (__marge + " Maximum value......: %."+str(__p)+"e\n")%numpy.max( X0 )
+ msgs += (__marge + " Mean of vector.....: %."+str(__p)+"e\n")%numpy.mean( X0, dtype=mfp )
+ msgs += (__marge + " Standard error.....: %."+str(__p)+"e\n")%numpy.std( X0, dtype=mfp )
+ msgs += (__marge + " L2 norm of vector..: %."+str(__p)+"e\n")%numpy.linalg.norm( X0 )
+ msgs += ("\n")
+ msgs += (__marge + "%s\n\n"%("-"*75,))
+ msgs += (__flech + "Numerical quality indicators:\n")
+ msgs += (__marge + "-----------------------------\n")
+ msgs += ("\n")
+ msgs += (__marge + "Using the \"%s\" formula, one observes the residue R which is the\n"%self._parameters["ResiduFormula"])
+        msgs += (__marge + "ratio of increments using the tangent linear operator:\n")
+ msgs += ("\n")
+ #
+ if self._parameters["ResiduFormula"] == "Taylor":
+ msgs += (__marge + " || F(X+Alpha*dX) - F(X) ||\n")
+ msgs += (__marge + " R(Alpha) = -----------------------------\n")
+ msgs += (__marge + " || Alpha * TangentF_X * dX ||\n")
+ msgs += ("\n")
+        msgs += (__marge + "which must remain stable at 1+O(Alpha) until the accuracy of the\n")
+        msgs += (__marge + "calculation is reached.\n")
+        msgs += ("\n")
+        msgs += (__marge + "When |R-1|/Alpha remains less than or equal to a stable value as Alpha\n")
+        msgs += (__marge + "varies, the tangent is valid, until the accuracy of the calculation is\n")
+        msgs += (__marge + "reached.\n")
+        msgs += ("\n")
+ msgs += (__marge + "If |R-1|/Alpha is very small, the code F is likely linear or quasi-linear,\n")
+ msgs += (__marge + "and the tangent is valid until computational accuracy is reached.\n")
+ #
+ __entete = u" i Alpha ||X|| ||F(X)|| | R(Alpha) |R-1|/Alpha"
+ #
+ msgs += ("\n")
+ msgs += (__marge + "We take dX0 = Normal(0,X) and dX = Alpha*dX0. F is the calculation code.\n")
+ msgs += ("\n")
+ msgs += (__marge + "(Remark: numbers that are (about) under %.0e represent 0 to machine precision)\n"%mpr)
+ print(msgs) # 1
+ #
Perturbations = [ 10**i for i in range(self._parameters["EpsilonMinimumExponent"],1) ]
Perturbations.reverse()
#
- # Calcul du point courant
- # -----------------------
- Xn = numpy.ravel( Xb ).reshape((-1,1))
- FX = numpy.ravel( Hm( Xn ) ).reshape((-1,1))
- NormeX = numpy.linalg.norm( Xn )
+ FX = numpy.ravel( Hm( X0 ) ).reshape((-1,1))
+ NormeX = numpy.linalg.norm( X0 )
NormeFX = numpy.linalg.norm( FX )
+ if NormeFX < mpr: NormeFX = mpr
if self._toStore("CurrentState"):
- self.StoredVariables["CurrentState"].store( Xn )
+ self.StoredVariables["CurrentState"].store( X0 )
if self._toStore("SimulatedObservationAtCurrentState"):
self.StoredVariables["SimulatedObservationAtCurrentState"].store( FX )
#
dX0 = NumericObjects.SetInitialDirection(
self._parameters["InitialDirection"],
self._parameters["AmplitudeOfInitialDirection"],
- Xn,
+ X0,
)
#
- # Calcul du gradient au point courant X pour l'increment dX
+ # Calcul du gradient au point courant X pour l'incrément dX
# qui est le tangent en X multiplie par dX
# ---------------------------------------------------------
dX1 = float(self._parameters["AmplitudeOfTangentPerturbation"]) * dX0
- GradFxdX = Ht( (Xn, dX1) )
+ GradFxdX = Ht( (X0, dX1) )
GradFxdX = numpy.ravel( GradFxdX ).reshape((-1,1))
GradFxdX = float(1./self._parameters["AmplitudeOfTangentPerturbation"]) * GradFxdX
NormeGX = numpy.linalg.norm( GradFxdX )
if NormeGX < mpr: NormeGX = mpr
#
- # Entete des resultats
- # --------------------
- __marge = 12*u" "
- __precision = u"""
- Remarque : les nombres inferieurs a %.0e (environ) representent un zero
- a la precision machine.\n"""%mpr
- if self._parameters["ResiduFormula"] == "Taylor":
- __entete = u" i Alpha ||X|| ||F(X)|| | R(Alpha) |R-1|/Alpha"
- __msgdoc = u"""
- On observe le residu provenant du rapport d'increments utilisant le
- lineaire tangent :
-
- || F(X+Alpha*dX) - F(X) ||
- R(Alpha) = -----------------------------
- || Alpha * TangentF_X * dX ||
-
- qui doit rester stable en 1+O(Alpha) jusqu'a ce que l'on atteigne la
- precision du calcul.
-
- Lorsque |R-1|/Alpha est inferieur ou egal a une valeur stable
- lorsque Alpha varie, le tangent est valide, jusqu'a ce que l'on
- atteigne la precision du calcul.
-
- Si |R-1|/Alpha est tres faible, le code F est vraisemblablement
- lineaire ou quasi-lineaire, et le tangent est valide jusqu'a ce que
- l'on atteigne la precision du calcul.
-
- On prend dX0 = Normal(0,X) et dX = Alpha*dX0. F est le code de calcul.\n""" + __precision
- #
- if len(self._parameters["ResultTitle"]) > 0:
- __rt = str(self._parameters["ResultTitle"])
- msgs = u"\n"
- msgs += __marge + "====" + "="*len(__rt) + "====\n"
- msgs += __marge + " " + __rt + "\n"
- msgs += __marge + "====" + "="*len(__rt) + "====\n"
- else:
- msgs = u""
- msgs += __msgdoc
- #
+ # Boucle sur les perturbations
+ # ----------------------------
__nbtirets = len(__entete) + 2
+ msgs = ("") # 2
msgs += "\n" + __marge + "-"*__nbtirets
msgs += "\n" + __marge + __entete
msgs += "\n" + __marge + "-"*__nbtirets
- #
- # Boucle sur les perturbations
- # ----------------------------
+ msgs += ("\n")
for i,amplitude in enumerate(Perturbations):
dX = amplitude * dX0.reshape((-1,1))
#
if self._parameters["ResiduFormula"] == "Taylor":
- FX_plus_dX = numpy.ravel( Hm( Xn + dX ) ).reshape((-1,1))
+ FX_plus_dX = numpy.ravel( Hm( X0 + dX ) ).reshape((-1,1))
#
Residu = numpy.linalg.norm( FX_plus_dX - FX ) / (amplitude * NormeGX)
- #
- self.StoredVariables["Residu"].store( Residu )
- msg = " %2i %5.0e %9.3e %9.3e | %11.5e %5.1e"%(i,amplitude,NormeX,NormeFX,Residu,abs(Residu-1.)/amplitude)
- msgs += "\n" + __marge + msg
- #
- msgs += "\n" + __marge + "-"*__nbtirets
- msgs += "\n"
+ #
+ self.StoredVariables["Residu"].store( Residu )
+ ttsep = " %2i %5.0e %9.3e %9.3e | %11.5e %5.1e\n"%(i,amplitude,NormeX,NormeFX,Residu,abs(Residu-1.)/amplitude)
+ msgs += __marge + ttsep
#
- # Sorties eventuelles
- # -------------------
- print("\nResults of tangent check by \"%s\" formula:"%self._parameters["ResiduFormula"])
- print(msgs)
+ msgs += (__marge + "-"*__nbtirets + "\n\n")
+ msgs += (__marge + "End of the \"%s\" verification by the \"%s\" formula.\n\n"%(self._name,self._parameters["ResiduFormula"]))
+ msgs += (__marge + "%s\n"%("-"*75,))
+ print(msgs) # 2
#
self._post_run(HO)
return 0
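
As for the linearity test, the tangent residue can be sketched outside
ADAO. The operator ``F`` and the perturbation amplitude are illustrative
assumptions, and a finite difference stands in for TangentF_X (ADAO does
something similar, via "AmplitudeOfTangentPerturbation", when the tangent
is approximated rather than given exactly)::

    # Standalone sketch (not ADAO code) of the tangent residue R(Alpha).
    import numpy

    def F(x):
        return x + 0.01 * x**2       # illustrative operator

    X   = numpy.array([1., 2., 3.])
    dX0 = numpy.random.default_rng(0).normal(0., X)
    FX  = F(X)
    eps = 1.e-10                     # assumed tangent perturbation amplitude
    NormeGX = numpy.linalg.norm((F(X + eps * dX0) - FX) / eps)
    for i, alpha in enumerate((1.e-1, 1.e-3, 1.e-5)):
        R = numpy.linalg.norm(F(X + alpha * dX0) - FX) / (alpha * NormeGX)
        print(" %2i  %5.0e | %11.5e  %5.1e" % (i, alpha, R, abs(R - 1.)/alpha))

R tends to 1 as Alpha decreases and |R-1|/Alpha stays roughly stable, which
is the behaviour the messages above describe for a valid tangent.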
"LocalSensitivityTest",
"SamplingTest",
"ParallelFunctionTest",
+ "ControledFunctionTest",
"InputValuesTest",
"ObserverTest",
]
"CheckingPoint",
"ObservationOperator",
]
+AlgoDataRequirements["ControledFunctionTest"] = [
+ "CheckingPoint",
+ "ObservationOperator",
+ ]
AlgoDataRequirements["ObserverTest"] = [
"Observers",
]
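
For illustration, a hypothetical helper (not part of the ADAO API) showing
how such a requirements table can be consumed to validate a case before it
is run::

    # Hypothetical use of an AlgoDataRequirements-style table.
    AlgoDataRequirements = {
        "ControledFunctionTest": ["CheckingPoint", "ObservationOperator"],
        }

    def check_requirements(algo, given):
        missing = [k for k in AlgoDataRequirements.get(algo, []) if k not in given]
        if missing:
            raise ValueError("%s lacks required commands: %s"%(algo, ", ".join(missing)))

    check_requirements("ControledFunctionTest",
                       {"CheckingPoint": None, "ObservationOperator": None})  # passes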