ObservationOperator = F_ObservationOperator("o"),
EvolutionModel = F_EvolutionModel("f"),
EvolutionError = F_EvolutionError("f"),
+ ControlInput = F_ControlInput("f"),
AlgorithmParameters = F_AlgorithmParameters("f"),
UserDataInit = F_Init("f"),
UserPostAnalysis = F_UserPostAnalysis("f"),
.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
+.. index:: single: ControlInput
.. index:: single: Debug
.. index:: single: EvolutionError
.. index:: single: EvolutionModel
optimization algorithm chosen. The choices are limited and available through
the GUI. Examples are "3DVAR", "Blue", etc. See below the list of
algorithms and associated parameters in the following subsection `Options
- for algorithms`_.
+ and required commands for algorithms`_.
**AlgorithmParameters**
*Optional command*. This command allows one to add some optional parameters to
control the data assimilation or optimization algorithm. It is defined as a
"*Dict*" type object, that is, given as a script. See below the list of
algorithms and associated parameters in the following subsection `Options
- for algorithms`_.
+ and required commands for algorithms`_.
**Background**
*Required command*. This indicates the background or initial vector used,
previously noted as :math:`\mathbf{B}`. It is defined as a "*Matrix*" type
object, that is, given either as a string or as a script.
+**ControlInput**
+ *Optional command*. This indicates the control vector used to force the
+ evolution model at each step, usually noted as :math:`\mathbf{U}`. It is
+ defined as a "*Vector*" or a "*VectorSerie*" type object, that is, given
+ either as a string or as a script. When there is no control, it has to be an
+ empty string ''.
+
**Debug**
*Required command*. This defines the level of trace and intermediate debug
information. The choices are limited between 0 (for False) and 1 (for
noted :math:`M`, which describes a step of evolution. It is defined as a
"*Function*" type object, that is, given as a script. Different functional
forms can be used, as described in the following subsection `Requirements
- for functions describing an operator`_.
+ for functions describing an operator`_. If some control :math:`U` is present,
+ the operator has to be applied to the pair :math:`(X,U)`.
**InputVariables**
*Optional command*. This command allows one to indicate the name and size of
**Observation**
*Required command*. This indicates the observation vector used for data
assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It
- is defined as a "*Vector*" type object, that is, given either as a string or
- as a script.
+ is defined as a "*Vector*" or a "*VectorSerie*" type object, that is, given
+ either as a string or as a script.
**ObservationError**
*Required command*. This indicates the observation error covariance matrix,
optimization algorithm chosen. The choices are limited and available through
the GUI. Examples are "3DVAR", "Blue", etc. See below the list of
algorithms and associated parameters in the following subsection `Options
- for algorithms`_.
+ and required commands for algorithms`_.
**AlgorithmParameters**
*Optional command*. This command allows one to add some optional parameters to
control the data assimilation or optimization algorithm. It is defined as a
"*Dict*" type object, that is, given as a script. See below the list of
algorithms and associated parameters in the following subsection `Options
- for algorithms`_.
+ and required commands for algorithms`_.
**CheckingPoint**
*Required command*. This indicates the vector used,
*Optional command*. This command allows one to initialize some parameters or
data automatically before the data assimilation algorithm processing.
-Options for algorithms
-----------------------
+Options and required commands for algorithms
+--------------------------------------------
.. index:: single: 3DVAR
.. index:: single: Blue
.. index:: single: EnsembleBlue
.. index:: single: KalmanFilter
+.. index:: single: ExtendedKalmanFilter
.. index:: single: LinearLeastSquares
.. index:: single: NonLinearLeastSquares
.. index:: single: ParticleSwarmOptimization
This section describes the available options algorithm by algorithm. If an
option is specified for an algorithm that doesn't support it, the option is
simply left unused. The meaning of the acronyms or particular names can be found
-in the :ref:`genindex` or the :ref:`section_glossary`.
+in the :ref:`genindex` or the :ref:`section_glossary`. In addition, for each
+algorithm, the required commands are given; they are described in `List of
+commands and keywords for an ADAO calculation case`_.
**"Blue"**
+ *Required commands*
+ *"Background", "BackgroundError",
+ "Observation", "ObservationError",
+ "ObservationOperator"*
+
StoreSupplementaryCalculations
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
**"LinearLeastSquares"**
+ *Required commands*
+ *"Observation", "ObservationError",
+ "ObservationOperator"*
+
StoreSupplementaryCalculations
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
**"3DVAR"**
+ *Required commands*
+ *"Background", "BackgroundError",
+ "Observation", "ObservationError",
+ "ObservationOperator"*
+
Minimizer
This key allows one to choose the optimization minimizer. The default choice
is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
**"NonLinearLeastSquares"**
+ *Required commands*
+ *"Background",
+ "Observation", "ObservationError",
+ "ObservationOperator"*
+
Minimizer
This key allows one to choose the optimization minimizer. The default choice
is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained
**"EnsembleBlue"**
+ *Required commands*
+ *"Background", "BackgroundError",
+ "Observation", "ObservationError",
+ "ObservationOperator"*
+
SetSeed
This key allows one to give an integer in order to fix the seed of the random
generator used to generate the ensemble. A convenient value is for example
**"KalmanFilter"**
+ *Required commands*
+ *"Background", "BackgroundError",
+ "Observation", "ObservationError",
+ "ObservationOperator",
+ "EvolutionModel", "EvolutionError"*
+
+ StoreSupplementaryCalculations
+ This list indicates the names of the supplementary variables that can be
+ available at the end of the algorithm. It involves potentially costly
+ calculations. The default is an empty list, none of these variables being
+ calculated and stored by default. The possible names are in the following
+ list: ["APosterioriCovariance", "Innovation"].
+
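To make concrete what the "Innovation" and "APosterioriCovariance" variables refer to, one linear Kalman analysis step can be sketched as follows. This is a minimal standalone illustration, not the ADAO implementation; the matrix shapes and names are hypothetical:

```python
import numpy as np

def kalman_step(x, P, y, M, H, Q, R):
    """One forecast/analysis step of a linear Kalman filter (illustrative)."""
    # Forecast of the state and of its error covariance
    x_f = M @ x
    P_f = M @ P @ M.T + Q
    # Innovation: misfit between the observation and the predicted observation
    d = y - H @ x_f
    # Kalman gain, analysis, and a posteriori covariance
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    x_a = x_f + K @ d
    P_a = P_f - K @ H @ P_f
    return x_a, P_a, d
```

The innovation `d` and the a posteriori covariance `P_a` are exactly the quantities that the supplementary variables above would store at each step.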
+**"ExtendedKalmanFilter"**
+
+ *Required commands*
+ *"Background", "BackgroundError",
+ "Observation", "ObservationError",
+ "ObservationOperator",
+ "EvolutionModel", "EvolutionError",
+ "ControlInput"*
+
StoreSupplementaryCalculations
This list indicates the names of the supplementary variables that can be
available at the end of the algorithm. It involves potentially costly
**"ParticleSwarmOptimization"**
+ *Required commands*
+ *"Background", "BackgroundError",
+ "Observation", "ObservationError",
+ "ObservationOperator"*
+
MaximumNumberOfSteps
This key indicates the maximum number of iterations allowed for iterative
optimization. The default is 50, which is an arbitrary limit. It is then
**"QuantileRegression"**
+ *Required commands*
+ *"Background",
+ "Observation",
+ "ObservationOperator"*
+
Quantile
This key allows one to define the real value of the desired quantile, between
0 and 1. The default is 0.5, corresponding to the median.
The operators for observation and evolution are required to implement the data
assimilation or optimization procedures. They include the physical simulation
by numerical simulations, but also the filtering and restriction to compare the
-simulation to observation.
+simulation to observation. The evolution operator is considered here in its
+incremental form, representing the transition between two successive states, and
+is then similar to the observation operator.
Schematically, an operator has to give an output solution given the input
parameters. Part of the input parameters can be modified during the optimization
+++++++++++++++++++++++++++++++++++++++++++++++++++++
The second one consist in providing directly the three associated operators
-:math:`H`, :math:`\mathbf{H}` and :math:`\mathbf{H}^*`. This is done by using the
-keyword "*ScriptWithFunctions*" for the description of the chosen operator in
-the ADAO GUI. The user have to provide three functions in one script, with three
-mandatory names "*DirectOperator*", "*TangentOperator*" and "*AdjointOperator*".
-For example, the script can follow the template::
+:math:`H`, :math:`\mathbf{H}` and :math:`\mathbf{H}^*`. This is done by using
+the keyword "*ScriptWithFunctions*" for the description of the chosen operator
+in the ADAO GUI. The user has to provide three functions in one script, with
+three mandatory names "*DirectOperator*", "*TangentOperator*" and
+"*AdjointOperator*". For example, the script can follow the template::
def DirectOperator( X ):
""" Direct non-linear simulation operator """
result["errorMessage"] = ""
All further modifications can be made starting from this template.
+
+Special case of controlled evolution operator
++++++++++++++++++++++++++++++++++++++++++++++
+
+In some cases, the evolution operator has to be controlled by an external
+input control, given a priori. In this case, the generic form of the incremental
+evolution model is slightly modified as follows:
+
+.. math:: \mathbf{y} = H( \mathbf{x}, \mathbf{u})
+
+where :math:`\mathbf{u}` is the control over one state increment. In this case,
+the direct evolution operator has to be applied to a pair of variables
+:math:`(X,U)`. Schematically, the evolution operator has to be set as::
+
+ def DirectOperator( (X, U) ):
+ """ Direct non-linear simulation operator """
+ ...
+ ...
+ ...
+ return something like X(n+1)
+
+The tangent and adjoint operators have the same signature as previously, noting
+that the derivatives have to be taken only partially with respect to
+:math:`\mathbf{x}`.
+In such a case with explicit control, only the second functional form (using
+"*ScriptWithFunctions*") and third functional form (using "*ScriptWithSwitch*")
+can be used.
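As an illustration of this pair signature, here is a minimal controlled linear evolution operator x(n+1) = A x(n) + B u(n), written in plain NumPy and independent of ADAO; the matrices A and B are hypothetical, and the single tuple argument mirrors the template above:

```python
import numpy as np

# Hypothetical linear dynamics x(n+1) = A x(n) + B u(n); A and B are
# purely illustrative, not part of ADAO.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])

def DirectOperator(pair):
    """ Direct evolution operator applied to the pair (X, U) """
    X, U = pair
    X = np.asarray(X, dtype=float).reshape(-1, 1)
    if U is None:                       # no control for this step
        return A @ X
    U = np.asarray(U, dtype=float).reshape(-1, 1)
    return A @ X + B @ U
```

Since the derivative is taken only with respect to X, the corresponding tangent operator is simply A here, whatever the control U.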
assim_study.setCheckingPointStored(CheckingPointStored)
assim_study.setCheckingPoint(CheckingPoint)
-# BackgroundError
+# ControlInput
try:
- BackgroundError
+ ControlInput
except NameError:
pass
else:
- logging.debug("CREATE BackgroundError is set")
- logging.debug("CREATE BackgroundErrorStored is %s"%BackgroundErrorStored)
- assim_study.setBackgroundErrorStored(BackgroundErrorStored)
- assim_study.setBackgroundError(BackgroundError)
+ logging.debug("CREATE ControlInput is set")
+ logging.debug("CREATE ControlInputType is %s"%ControlInputType)
+ logging.debug("CREATE ControlInputStored is %s"%ControlInputStored)
+ assim_study.setControlInputType(ControlInputType)
+ assim_study.setControlInputStored(ControlInputStored)
+ assim_study.setControlInput(ControlInput)
# Observation
try:
assim_study.setObservationStored(ObservationStored)
assim_study.setObservation(Observation)
+# BackgroundError
+try:
+ BackgroundError
+except NameError:
+ pass
+else:
+ logging.debug("CREATE BackgroundError is set")
+ logging.debug("CREATE BackgroundErrorStored is %s"%BackgroundErrorStored)
+ assim_study.setBackgroundErrorStored(BackgroundErrorStored)
+ assim_study.setBackgroundError(BackgroundError)
+
# ObservationError
try:
ObservationError
<script><code><![CDATA[
import numpy, logging
logging.debug("CREATE Entering in CreateNumpyVectorSerieFromString")
-vector = numpy.matrix(vector_in_string)
+vector_in_list = eval(str(vector_in_string),{},{})
+vector = numpy.matrix(vector_in_list)
type = "VectorSerie"
logging.debug("VectorSerie is %s"%vector)
]]></code></script>
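For illustration, the conversion performed by this script can be reproduced standalone: a string such as "[[0, 1, 2], [3, 4, 5]]" becomes a matrix whose rows are the successive vectors of the series. The sketch below uses `ast.literal_eval` instead of a restricted `eval` as a safer equivalent; the function name is illustrative:

```python
import ast
import numpy

def vector_serie_from_string(vector_in_string):
    # Parse the textual representation into a nested list, then into a
    # matrix: one row per vector of the series.
    vector_in_list = ast.literal_eval(str(vector_in_string))
    return numpy.matrix(vector_in_list)

serie = vector_serie_from_string("[[0, 1, 2], [3, 4, 5]]")
```

`ast.literal_eval` only accepts Python literals, so arbitrary expressions in the input string are rejected rather than executed.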
--- /dev/null
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2013 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+# Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
+
+import logging
+from daCore import BasicObjects, PlatformInfo
+m = PlatformInfo.SystemUsage()
+import numpy
+
+# ==============================================================================
+class ElementaryAlgorithm(BasicObjects.Algorithm):
+ def __init__(self):
+ BasicObjects.Algorithm.__init__(self, "EXTENDEDKALMANFILTER")
+ self.defineRequiredParameter(
+ name = "StoreSupplementaryCalculations",
+ default = [],
+ typecast = tuple,
+ message = "List of supplementary calculations to store and/or perform",
+ listval = ["APosterioriCovariance", "Innovation"]
+ )
+
+ def run(self, Xb=None, Y=None, U=None, HO=None, EM=None, CM=None, R=None, B=None, Q=None, Parameters=None):
+ logging.debug("%s Starting"%self._name)
+ logging.debug("%s Memory used: %.1f MB"%(self._name, m.getUsedMemory("M")))
+ #
+ # Control parameters
+ # ------------------
+ self.setParameters(Parameters)
+ #
+ # Operators
+ # ---------
+ if B is None:
+ raise ValueError("Background error covariance matrix has to be properly defined!")
+ if R is None:
+ raise ValueError("Observation error covariance matrix has to be properly defined!")
+ #
+ H = HO["Direct"].appliedTo
+ #
+ M = EM["Direct"].appliedTo
+ #
+ # Number of Kalman steps equal to the number of observation steps
+ # ----------------------------------------------------------------
+ duration = Y.stepnumber()
+ #
+ # Initialization
+ # --------------
+ Xn = Xb
+ Pn = B
+ self.StoredVariables["Analysis"].store( Xn.A1 )
+ if "APosterioriCovariance" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["APosterioriCovariance"].store( Pn )
+ #
+ for step in range(duration-1):
+ Ynpu = numpy.asmatrix(numpy.ravel( Y[step+1] )).T
+ #
+ Ht = HO["Tangent"].asMatrix(ValueForMethodForm = Xn)
+ Ht = Ht.reshape(Ynpu.size,Xn.size) # ADAO & check shape
+ Ha = HO["Adjoint"].asMatrix(ValueForMethodForm = Xn)
+ Ha = Ha.reshape(Xn.size,Ynpu.size) # ADAO & check shape
+ #
+ Mt = EM["Tangent"].asMatrix(ValueForMethodForm = Xn)
+ Mt = Mt.reshape(Xn.size,Xn.size) # ADAO & check shape
+ Ma = EM["Adjoint"].asMatrix(ValueForMethodForm = Xn)
+ Ma = Ma.reshape(Xn.size,Xn.size) # ADAO & check shape
+ #
+ if U is not None:
+ if hasattr(U,"store") and len(U)>1:
+ Un = numpy.asmatrix(numpy.ravel( U[step] )).T
+ elif hasattr(U,"store") and len(U)==1:
+ Un = numpy.asmatrix(numpy.ravel( U[0] )).T
+ else:
+ Un = numpy.asmatrix(numpy.ravel( U )).T
+ else:
+ Un = None
+ #
+ Xn_predicted = M( (Xn, Un) )
+ Pn_predicted = Mt * Pn * Ma + Q
+ #
+ d = Ynpu - H( Xn_predicted )
+ K = Pn_predicted * Ha * (Ht * Pn_predicted * Ha + R).I
+ Xn = Xn_predicted + K * d
+ Pn = Pn_predicted - K * Ht * Pn_predicted
+ #
+ self.StoredVariables["Analysis"].store( Xn.A1 )
+ if "Innovation" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["Innovation"].store( numpy.ravel( d.A1 ) )
+ if "APosterioriCovariance" in self._parameters["StoreSupplementaryCalculations"]:
+ self.StoredVariables["APosterioriCovariance"].store( Pn )
+ #
+ logging.debug("%s Memory used: %.1f MB"%(self._name, m.getUsedMemory("M")))
+ logging.debug("%s Finished"%self._name)
+ #
+ return 0
+
+# ==============================================================================
+if __name__ == "__main__":
+ print '\n AUTODIAGNOSTIC \n'
raise ValueError("Background error covariance matrix has to be properly defined!")
if R is None:
raise ValueError("Observation error covariance matrix has to be properly defined!")
- Hm = HO["Tangent"].asMatrix(None)
+ #
+ Ht = HO["Tangent"].asMatrix(None)
Ha = HO["Adjoint"].asMatrix(None)
#
- Mm = EM["Tangent"].asMatrix(None)
- Mt = EM["Adjoint"].asMatrix(None)
+ Mt = EM["Tangent"].asMatrix(None)
+ Ma = EM["Adjoint"].asMatrix(None)
#
- if CM is not None and U is not None:
+ if CM is not None and CM.has_key("Tangent") and U is not None:
Cm = CM["Tangent"].asMatrix(None)
+ else:
+ Cm = None
#
# Nombre de pas du Kalman identique au nombre de pas d'observations
# -----------------------------------------------------------------
self.StoredVariables["APosterioriCovariance"].store( Pn )
#
for step in range(duration-1):
- if CM is not None and U is not None:
+ Ynpu = numpy.asmatrix(numpy.ravel( Y[step+1] )).T
+ #
+ if Cm is not None:
if hasattr(U,"store") and len(U)>1:
- Xn_predicted = Mm * Xn + Cm * numpy.asmatrix(numpy.ravel( U[step] )).T
+ Un = numpy.asmatrix(numpy.ravel( U[step] )).T
elif hasattr(U,"store") and len(U)==1:
- Xn_predicted = Mm * Xn + Cm * numpy.asmatrix(numpy.ravel( U[0] )).T
+ Un = numpy.asmatrix(numpy.ravel( U[0] )).T
else:
- Xn_predicted = Mm * Xn + Cm * numpy.asmatrix(numpy.ravel( U )).T
+ Un = numpy.asmatrix(numpy.ravel( U )).T
+ #
+ Xn_predicted = Mt * Xn + Cm * Un
else:
- Xn_predicted = Mm * Xn
- Pn_predicted = Mm * Pn * Mt + Q
+ Un = None
+ #
+ Xn_predicted = Mt * Xn
+ Pn_predicted = Mt * Pn * Ma + Q
#
- d = numpy.asmatrix(numpy.ravel( Y[step+1] )).T - Hm * Xn_predicted
- K = Pn_predicted * Ha * (Hm * Pn_predicted * Ha + R).I
+ d = Ynpu - Ht * Xn_predicted
+ K = Pn_predicted * Ha * (Ht * Pn_predicted * Ha + R).I
Xn = Xn_predicted + K * d
- Pn = Pn_predicted - K * Hm * Pn_predicted
+ Pn = Pn_predicted - K * Ht * Pn_predicted
#
self.StoredVariables["Analysis"].store( Xn.A1 )
if "Innovation" in self._parameters["StoreSupplementaryCalculations"]:
else:
self.__Y = numpy.matrix( asVector, numpy.float ).T
elif asPersistentVector is not None:
- if type( asPersistentVector ) is list or type( asPersistentVector ) is tuple:
- self.__Y = Persistence.OneVector("Observation", basetype=numpy.array)
- for y in asPersistentVector:
- self.__Y.store( y )
+ if type(asPersistentVector) in [type([]),type(()),type(numpy.array([])),type(numpy.matrix([]))]:
+ self.__Y = Persistence.OneVector("Observation", basetype=numpy.matrix)
+ for member in asPersistentVector:
+ self.__Y.store( numpy.matrix( numpy.asmatrix(member).A1, numpy.float ).T )
else:
self.__Y = asPersistentVector
else:
else:
self.__U = numpy.matrix( asVector, numpy.float ).T
elif asPersistentVector is not None:
- if isinstance(asPersistentVector,list) or isinstance( asPersistentVector,tuple):
- self.__U = Persistence.OneVector("ControlInput", basetype=numpy.array)
- for y in asPersistentVector:
- self.__U.store( y )
+ if type(asPersistentVector) in [type([]),type(()),type(numpy.array([])),type(numpy.matrix([]))]:
+ self.__U = Persistence.OneVector("ControlInput", basetype=numpy.matrix)
+ for member in asPersistentVector:
+ self.__U.store( numpy.matrix( numpy.asmatrix(member).A1, numpy.float ).T )
else:
self.__U = asPersistentVector
else:
self.add_data("EvolutionModel")
if "__"+self.type_of_study+"__EvolutionError__INPUT_TYPE" in self.dictMCVal.keys():
self.add_data("EvolutionError")
+ if "__"+self.type_of_study+"__ControlInput__INPUT_TYPE" in self.dictMCVal.keys():
+ self.add_data("ControlInput")
self.add_variables()
# Parametres optionnels
#--------------------------------------
+ def setControlInputType(self, Type):
+ if Type == "Vector" or Type == "VectorSerie":
+ self.ControlInputType = Type
+ else:
+ raise daError("[daStudy::setControlInputType] Type is unknown: " + Type + ". Types are: Vector, VectorSerie")
+
+ def setControlInputStored(self, Stored):
+ if Stored:
+ self.ControlInputStored = True
+ else:
+ self.ControlInputStored = False
+
+ def setControlInput(self, ControlInput):
+ try:
+ self.ControlInputType
+ self.ControlInputStored
+ except AttributeError:
+ raise daError("[daStudy::setControlInput] Type or Storage is not defined !")
+ if self.ControlInputType == "Vector":
+ self.ADD.setControlInput(asVector = ControlInput, toBeStored = self.ControlInputStored)
+ if self.ControlInputType == "VectorSerie":
+ self.ADD.setControlInput(asPersistentVector = ControlInput, toBeStored = self.ControlInputStored)
+
+ #--------------------------------------
+
def setObservationType(self, Type):
if Type == "Vector" or Type == "VectorSerie":
self.ObservationType = Type
"ObservationOperator",
"EvolutionModel", "EvolutionError",
"AlgorithmParameters",
- "CheckingPoint",
+ "CheckingPoint", "ControlInput",
]
AssimType = {}
AssimType["AlgorithmParameters"] = ["Dict"]
AssimType["UserDataInit"] = ["Dict"]
AssimType["CheckingPoint"] = ["Vector"]
+AssimType["ControlInput"] = ["Vector", "VectorSerie"]
FromNumpyList = {}
FromNumpyList["Vector"] = ["String", "Script"]
"Blue",
"EnsembleBlue",
"KalmanFilter",
+ "ExtendedKalmanFilter",
"LinearLeastSquares",
"NonLinearLeastSquares",
"QuantileRegression",
AlgoDataRequirements["KalmanFilter"] = [
"Background", "BackgroundError",
"Observation", "ObservationError",
+ "ObservationOperator",
"EvolutionModel", "EvolutionError",
+ ]
+AlgoDataRequirements["ExtendedKalmanFilter"] = [
+ "Background", "BackgroundError",
+ "Observation", "ObservationError",
"ObservationOperator",
+ "EvolutionModel", "EvolutionError",
+ "ControlInput",
]
AlgoDataRequirements["LinearLeastSquares"] = [
"Observation", "ObservationError",
AlgoType["Blue"] = "Optim"
AlgoType["EnsembleBlue"] = "Optim"
AlgoType["KalmanFilter"] = "Optim"
+AlgoType["ExtendedKalmanFilter"] = "Optim"
AlgoType["LinearLeastSquares"] = "Optim"
AlgoType["NonLinearLeastSquares"] = "Optim"
AlgoType["ParticleSwarmOptimization"] = "Optim"
AssimDataDict["AlgorithmParameters"] = ["Dict"]
AssimDataDict["UserDataInit"] = ["Dict"]
AssimDataDict["CheckingPoint"] = ["Vector"]
+AssimDataDict["ControlInput"] = ["Vector", "VectorSerie"]
AssimDataDefaultDict = {}
AssimDataDefaultDict["Background"] = "Vector"
AssimDataDefaultDict["AlgorithmParameters"] = "Dict"
AssimDataDefaultDict["UserDataInit"] = "Dict"
AssimDataDefaultDict["CheckingPoint"] = "Vector"
+AssimDataDefaultDict["ControlInput"] = "Vector"
StoredAssimData = ["Vector", "VectorSerie", "Matrix"]
node_script += """ __data = AdjointOperator((numpy.matrix( __Xcurrent ).T, numpy.matrix( __Ycurrent ).T))\n"""
node_script += """#\n"""
node_script += """logging.debug("ComputationFunctionNode: Formatting the output")\n"""
- node_script += """it = numpy.ravel(__data)\n"""
+ node_script += """__it = 1.*numpy.ravel(__data)\n"""
node_script += """outputValues = [[[[]]]]\n"""
- node_script += """for val in it:\n"""
- node_script += """ outputValues[0][0][0].append(val)\n"""
+ node_script += """outputValues[0][0][0] = list(__it)\n"""
node_script += """#\n"""
node_script += """result = {}\n"""
node_script += """result["outputValues"] = outputValues\n"""
node_script += """ __data = FDA.AdjointOperator((numpy.matrix( __Xcurrent ).T, numpy.matrix( __Ycurrent ).T))\n"""
node_script += """#\n"""
node_script += """logging.debug("ComputationFunctionNode: Formatting the output")\n"""
- node_script += """it = numpy.ravel(__data)\n"""
+ node_script += """__it = 1.*numpy.ravel(__data)\n"""
node_script += """outputValues = [[[[]]]]\n"""
- node_script += """for val in it:\n"""
- node_script += """ outputValues[0][0][0].append(val)\n"""
+ node_script += """outputValues[0][0][0] = list(__it)\n"""
node_script += """#\n"""
node_script += """result = {}\n"""
node_script += """result["outputValues"] = outputValues\n"""
node_script += """ raise ValueError("ComputationFunctionNode: mandatory DirectOperator not found in the imported user script file")\n"""
node_script += """ logging.debug("ComputationFunctionNode: Direct computation")\n"""
node_script += """ __Xcurrent = computation["inputValues"][0][0][0]\n"""
- node_script += """ __data = DirectOperator(numpy.matrix( __Xcurrent ).T)\n"""
+ node_script += """ if len(computation["inputValues"][0][0]) == 2:\n"""
+ node_script += """ __Ucurrent = computation["inputValues"][0][0][1]\n"""
+ node_script += """ __data = DirectOperator((numpy.matrix( __Xcurrent ).T, numpy.matrix( __Ucurrent ).T))\n"""
+ node_script += """ else:\n"""
+ node_script += """ __data = DirectOperator(numpy.matrix( __Xcurrent ).T)\n"""
node_script += """#\n"""
node_script += """if __method == "Tangent":\n"""
node_script += """ try:\n"""
node_script += """ __data = AdjointOperator((numpy.matrix( __Xcurrent ).T, numpy.matrix( __Ycurrent ).T))\n"""
node_script += """#\n"""
node_script += """logging.debug("ComputationFunctionNode: Formatting the output")\n"""
- node_script += """it = numpy.ravel(__data)\n"""
+ node_script += """__it = 1.*numpy.ravel(__data)\n"""
node_script += """outputValues = [[[[]]]]\n"""
- node_script += """for val in it:\n"""
- node_script += """ outputValues[0][0][0].append(val)\n"""
+ node_script += """outputValues[0][0][0] = list(__it)\n"""
node_script += """#\n"""
node_script += """result = {}\n"""
node_script += """result["outputValues"] = outputValues\n"""
node_script += """__data = []\n"""
node_script += """if __method == "Direct":\n"""
node_script += """ logging.debug("ComputationFunctionNode: Direct computation")\n"""
+ node_script += """ if len(computation["inputValues"][0][0]) == 2:\n"""
+ node_script += """ raise ValueError("ComputationFunctionNode: you have to explicitly build the controlled evolution model and its tangent and adjoint operators, instead of using approximated derivatives.")\n"""
node_script += """ __Xcurrent = computation["inputValues"][0][0][0]\n"""
node_script += """ __data = FDA.DirectOperator(numpy.matrix( __Xcurrent ).T)\n"""
node_script += """#\n"""
node_script += """ __data = FDA.AdjointOperator((numpy.matrix( __Xcurrent ).T, numpy.matrix( __Ycurrent ).T))\n"""
node_script += """#\n"""
node_script += """logging.debug("ComputationFunctionNode: Formatting the output")\n"""
- node_script += """it = numpy.ravel(__data)\n"""
+ node_script += """__it = 1.*numpy.ravel(__data)\n"""
node_script += """outputValues = [[[[]]]]\n"""
- node_script += """for val in it:\n"""
- node_script += """ outputValues[0][0][0].append(val)\n"""
+ node_script += """outputValues[0][0][0] = list(__it)\n"""
node_script += """#\n"""
node_script += """result = {}\n"""
node_script += """result["outputValues"] = outputValues\n"""