================================================================================
Tutorials on using the ADAO module
================================================================================

.. |eficas_new| image:: images/eficas_new.png
.. |eficas_save| image:: images/eficas_save.png
.. |eficas_saveas| image:: images/eficas_saveas.png
.. |eficas_yacs| image:: images/eficas_yacs.png
This section presents some examples of using the ADAO module in SALOME. The
first one shows how to build a simple data assimilation case, defining
explicitly all the required data through the GUI. The second one shows, on the
same case, how to define data using external sources through scripts.
Building a simple estimation case with explicit data definition
---------------------------------------------------------------

This simple example is a demonstration one, and describes how to set up a BLUE
estimation framework in order to get the *weighted least squares estimated
state* of a system from an observation of the state and from an *a priori*
knowledge (or background) of this state. In other words, we look for the
weighted middle between the observation and the background vectors. All the
numerical values of this example are arbitrary.
We choose to operate in a 3-dimensional space. 3D is chosen in order to restrict
the size of the numerical objects to be explicitly entered by the user, but the
problem is not dependent on the dimension, and could just as well be set in
dimension 1000. The observation :math:`\mathbf{y}^o` has a value of 1 in each
direction, so:

``Yo = [1 1 1]``
The background state :math:`\mathbf{x}^b`, which represents some *a priori*
knowledge or a regularization, has a value of 0 in each direction, that is:

``Xb = [0 0 0]``
Data assimilation requires information on the error covariances
:math:`\mathbf{R}` and :math:`\mathbf{B}`, respectively for the observation and
background variables. We choose here to have uncorrelated errors (that is,
diagonal matrices) and the same variance of 1 for all variables (that is,
identity matrices). We get:

``B = R = [1 0 0 ; 0 1 0 ; 0 0 1]``
Last, we need an observation operator :math:`\mathbf{H}` to convert the
background value into the space of observation values. Here, because the space
dimensions are the same, we can choose the identity as the observation
operator:

``H = [1 0 0 ; 0 1 0 ; 0 0 1]``
With such choices, the Best Linear Unbiased Estimator (BLUE) will be the average
vector between :math:`\mathbf{y}^o` and :math:`\mathbf{x}^b`, named the
*analysis* and denoted by :math:`\mathbf{x}^a`:

``Xa = [0.5 0.5 0.5]``
As an extension of this example, one can change the variances of
:math:`\mathbf{B}` or :math:`\mathbf{R}` independently, and the analysis will
move towards :math:`\mathbf{y}^o` or towards :math:`\mathbf{x}^b` in inverse
proportion to the variances in :math:`\mathbf{B}` and :math:`\mathbf{R}`. It is
also equivalent to search for the analysis through a BLUE algorithm or a 3DVAR
one.
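This BLUE analysis can be checked outside of SALOME with a few lines of NumPy.
The following standalone sketch (not the ADAO implementation itself) applies
the classical gain formula to the data of this example:

```python
import numpy as np

# Data of the example: observation, background, covariances, operator
yo = np.array([1., 1., 1.])   # observation
xb = np.zeros(3)              # background
B = np.eye(3)                 # background error covariance
R = np.eye(3)                 # observation error covariance
H = np.eye(3)                 # observation operator

# BLUE analysis: xa = xb + K (yo - H xb), with gain K = B H^T (H B H^T + R)^-1
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (yo - H @ xb)

print(xa)  # -> [0.5 0.5 0.5]
```

Multiplying ``R`` by a factor larger than 1 in this sketch moves the analysis
towards the background, as described above.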
Using the GUI to build the ADAO case
++++++++++++++++++++++++++++++++++++

First, you have to activate the ADAO module by choosing the appropriate module
button or menu of SALOME, and you will see:

.. image:: images/adao_activate.png

**Activating the module ADAO in SALOME**
Choose the "*New*" button in this window. You will directly get the EFICAS
interface for variables definition, along with the "*Object browser*". You can
then click on the "*New*" button |eficas_new| to create a new ADAO case, and you
will see:

.. image:: images/adao_viewer.png

**The EFICAS viewer for cases definition in module ADAO**
Then fill in the variables to build the ADAO case, using the experimental set up
described above. All the technical information given above will be directly
inserted in the ADAO case definition, by using the *String* type for all the
variables. When the case definition is ready, save it to a "*JDC (\*.comm)*"
native file somewhere in your path. Remember that other files will also be
created next to this first one, so it is better to make a specific directory for
your case, and to save the file inside it. The name of the file will appear in
the "*Object browser*" window, under the "*ADAO*" menu. The final case
definition looks like this:

.. _adao_jdcexample01:
.. image:: images/adao_jdcexample01.png

**Definition of the experimental set up chosen for the ADAO case**
To go further, we now need to generate the YACS scheme from the ADAO case
definition. In order to do that, right click on the name of the file case in the
"*Object browser*" window, and choose the "*Export to YACS*" sub-menu (or the
"*Export to YACS*" button |eficas_yacs|) as below:

.. _adao_exporttoyacs00:
.. image:: images/adao_exporttoyacs.png

**"Export to YACS" sub-menu to generate the YACS scheme from the ADAO case**
This command will generate the YACS scheme, activate the YACS module in SALOME,
and open the new scheme in the GUI of the YACS module [#]_. After reordering the
nodes by using the "*arrange local node*" sub-menu of the YACS graphical view of
the scheme, you get the following representation of the generated ADAO scheme:

.. _yacs_generatedscheme:
.. image:: images/yacs_generatedscheme.png

**YACS generated scheme from the ADAO case**
After that point, all the modifications, executions and post-processing of the
data assimilation scheme will be done in YACS. In order to check the result in a
simple way, we create here a new YACS node by using the "*in-line script node*"
sub-menu of the YACS graphical view, and we name it "*PostProcessing*".

This script will retrieve the data assimilation analysis from the
"*algoResults*" output port of the computation block (which gives access to a
SALOME Python Object), and will print it on the standard output.

To obtain this, the in-line script node needs to have an input port of type
"*pyobj*", named "*results*" for example, that has to be linked graphically to
the "*algoResults*" output port of the computation block. Then the code to fill
in the script node is::

    Xa = results.ADD.get("Analysis").valueserie(-1)

    print "Analysis =",Xa
The augmented YACS scheme can be saved (overwriting the generated scheme if the
simple "*Save*" command or button is used, or with a new name). Then,
classically in YACS, it has to be prepared for run, and then executed. After
completion, the printing on standard output is available in the "*YACS Container
Log*", obtained through the right click menu of the "*proc*" window in the YACS
scheme as shown below:

.. _yacs_containerlog:
.. image:: images/yacs_containerlog.png

**YACS menu for Container Log, and dialog window showing the log**
We verify that the result is correct by checking that the log dialog window
contains the following line::

    Analysis = [0.5, 0.5, 0.5]

as shown in the image above.
As a simple extension of this example, one can notice that the same problem
solved with a 3DVAR algorithm gives the same result. This algorithm can be
chosen at the ADAO case building step, before entering the YACS step. The
ADAO 3DVAR case will look completely similar to the BLUE algorithmic case, as
shown by the following figure:

.. _adao_jdcexample02:
.. image:: images/adao_jdcexample02.png

**Defining an ADAO 3DVAR case looks completely similar to a BLUE case**

There is only one command changing, with the "*3DVAR*" value instead of
"*Blue*".
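The equivalence between BLUE and 3DVAR on this linear case can also be checked
by hand: with linear operators and Gaussian errors, minimizing the quadratic
3DVAR cost function amounts to solving its normal equations. A standalone NumPy
sketch, independent of ADAO:

```python
import numpy as np

yo = np.array([1., 1., 1.])   # observation
xb = np.zeros(3)              # background
B_inv = np.eye(3)             # inverse background error covariance
R_inv = np.eye(3)             # inverse observation error covariance
H = np.eye(3)                 # observation operator

# The 3DVAR cost J(x) = (x-xb)^T B^-1 (x-xb) + (yo-Hx)^T R^-1 (yo-Hx)
# is minimized by the solution of its normal equations:
A = B_inv + H.T @ R_inv @ H
b = B_inv @ xb + H.T @ R_inv @ yo
xa = np.linalg.solve(A, b)

print(xa)  # -> [0.5 0.5 0.5], identical to the BLUE analysis
```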
Building a simple estimation case with external data definition by scripts
--------------------------------------------------------------------------

It is useful to get parts or all of the data from an external definition, using
Python script files to provide access to the data. As an example, we build here
an ADAO case representing the same experimental set up as in the above example
`Building a simple estimation case with explicit data definition`_, but using
data from a single external Python script file.
First, we write the following script file, using conventional names for the
desired variables. Here, all the input variables are defined in the script, but
the user can choose to split the file into several ones, or to mix explicit data
definition in the ADAO GUI and implicit data definition by external files. The
present script looks like::

    import numpy

    # Definition of the Background as a vector
    # ----------------------------------------
    Background = [0, 0, 0]

    # Definition of the Observation as a vector
    # -----------------------------------------
    Observation = "1 1 1"

    # Definition of the Background Error covariance as a matrix
    # ---------------------------------------------------------
    BackgroundError = numpy.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])

    # Definition of the Observation Error covariance as a matrix
    # ----------------------------------------------------------
    ObservationError = numpy.matrix("1 0 0 ; 0 1 0 ; 0 0 1")

    # Definition of the Observation Operator as a matrix
    # --------------------------------------------------
    ObservationOperator = numpy.identity(3)
The names of the Python variables above are mandatory, in order to define the
right case variables, but the Python script can be bigger and define classes,
functions, etc. with other names. It shows different ways to define arrays and
matrices, using a list, a string (as in Numpy or Octave), the Numpy array type,
the Numpy matrix type, and Numpy special functions. All of these syntaxes are
valid.
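For instance, the three syntaxes used above for the identity matrix can be
checked to be equivalent in a plain Python session, outside of SALOME (the
``numpy.matrix`` string form being the Octave-like one):

```python
import numpy

# Three equivalent ways to define the 3x3 identity, as in the script above
identity_from_list   = numpy.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
identity_from_string = numpy.matrix("1 0 0 ; 0 1 0 ; 0 0 1")  # Octave-like
identity_builtin     = numpy.identity(3)

# All three hold the same values
assert numpy.allclose(identity_from_list, identity_from_string)
assert numpy.allclose(identity_from_list, identity_builtin)
```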
After saving this script somewhere in your path (named here "*script.py*" for
the example), we use the GUI to build the ADAO case. The procedure to fill in
the case is similar, except that, instead of selecting the "*String*" option for
the "*FROM*" keyword, we select the "*Script*" one. This leads to a
"*SCRIPT_DATA/SCRIPT_FILE*" entry in the tree, allowing you to choose a file as
follows:

.. _adao_scriptentry01:
.. image:: images/adao_scriptentry01.png

**Defining an input value using an external script file**

Other steps and results are exactly the same as in the previous example
`Building a simple estimation case with explicit data definition`_.
In fact, this script methodology allows you to retrieve data from in-line or
previous calculations, from static files, from a database or from a stream, all
of them outside of SALOME. It also allows you to easily modify some input data,
for example for debugging purposes or for repetitive execution processes, and it
is the most versatile way to parametrize the input data. **But be careful,
script methodology is not a "safe" procedure, in the sense that erroneous data,
or errors in calculations, can be directly injected into the YACS scheme
execution.**
Adding parameters to control the data assimilation algorithm
------------------------------------------------------------

One can add some optional parameters to control the data assimilation algorithm
calculation. This is done by using the "*AlgorithmParameters*" keyword in the
definition of the ADAO case, which is a keyword of the ASSIMILATION_STUDY. This
keyword requires a Python dictionary, containing some key/value pairs. The list
of possible optional parameters is given in the subsection
:ref:`subsection_algo_options`.
If no bounds at all are required on the control variables, then one can choose
the "BFGS" or "CG" minimisation algorithms for the 3DVAR algorithm. For
constrained optimization, the minimizer "LBFGSB" is often more robust, but the
"TNC" one is sometimes more efficient.
This dictionary has to be defined, for example, in an external Python script
file, using the mandatory variable name "*AlgorithmParameters*" for the
dictionary. All the keys inside the dictionary are optional, they all have
default values, and can exist without being used. For example::

    AlgorithmParameters = {
        "Minimizer" : "CG", # Possible choices: "LBFGSB", "TNC", "CG", "BFGS"
        "MaximumNumberOfSteps" : 10,
        }
Then the script can be added to the ADAO case, in a file entry describing the
"*AlgorithmParameters*" keyword, as follows:

.. _adao_scriptentry02:
.. image:: images/adao_scriptentry02.png

**Adding parameters to control the algorithm**

Other steps and results are exactly the same as in the previous example
`Building a simple estimation case with explicit data definition`_. The
dictionary can also be directly given in the input field associated with the
keyword.
Building a complex case with external data definition by scripts
----------------------------------------------------------------

This more complex and complete example has to be considered as a framework for
user inputs, that needs to be tailored for each real application. Nevertheless,
the file skeletons are sufficiently general to have been used for various
applications in neutronics, fluid mechanics... Here, we will not focus on the
results, but rather on the user control of inputs and outputs in an ADAO case.
As previously, all the numerical values of this example are arbitrary.
The objective is to set up the input and output definitions of a physical case
by external Python scripts, using a general non-linear operator, adding control
on parameters, and so on. The complete framework scripts can be found in the
ADAO skeletons examples directory under the name
"*External_data_definition_by_scripts*".
We continue to operate in a 3-dimensional space, in order to restrict the size
of the numerical objects shown in the scripts, but the problem is not dependent
on the dimension.

We choose a twin experiment context, using a known true state
:math:`\mathbf{x}^t` of arbitrary values:

``Xt = [1 2 3]``
The background state :math:`\mathbf{x}^b`, which represents some *a priori*
knowledge of the true state, is built as a normal random perturbation of 20% of
the true state :math:`\mathbf{x}^t` for each component, that is:

``Xb = Xt + normal(0, 20%*Xt)``
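This perturbation can be sketched with NumPy as follows. It is a standalone
illustration with a fixed seed for reproducibility; the actual script given
later uses the same ``numpy.random.normal`` call without a seed:

```python
import numpy

numpy.random.seed(1000)  # fixed seed, only for a reproducible illustration

xt = numpy.array([1., 2., 3.])   # true state
sigma = 0.2 * xt                 # 20% standard deviation per component

# Perturbed background; abs() keeps the perturbation positive, as in the
# Script_Background_xb.py skeleton shown later
xb = xt + abs(numpy.random.normal(0., sigma, size=(len(xt),)))

print(xb)  # a random state close to [1, 2, 3]
```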
To describe the background error covariance matrix :math:`\mathbf{B}`, we make
as previously the hypothesis of uncorrelated errors (that is, a diagonal matrix,
of size 3x3 because :math:`\mathbf{x}^b` is of length 3), with the same variance
of 0.1 for all variables. We get:

``B = 0.1 * diagonal( length(Xb) )``
We suppose that there exists an observation operator :math:`\mathbf{H}`, which
can be non-linear. In real calibration procedures or inverse problems, the
physical simulation codes are embedded in the observation operator. We also need
to know its gradient with respect to each calibrated variable, which is rarely
known information with industrial codes. But we will see later how to obtain an
approximated gradient in this case.
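The approximation in question replaces each column of the gradient (the
Jacobian of :math:`\mathbf{H}`) by a finite difference of the direct
simulation. A minimal standalone sketch of this idea (the helper name and the
test operator ``A`` are illustrative only):

```python
import numpy as np

def finite_difference_jacobian(func, x, increment=0.01):
    """Approximate the Jacobian of func at x by forward finite differences."""
    x = np.asarray(x, dtype=float)
    fx = func(x)
    jacobian = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = increment * max(abs(x[i]), 1.)
        x_plus = x.copy()
        x_plus[i] += dx
        # Column i: variation of the simulation for a variation of component i
        jacobian[:, i] = (func(x_plus) - fx) / dx
    return jacobian

# For a linear operator H(x) = A x, the approximation recovers A exactly
A = np.array([[2., 0., 1.], [0., 3., 0.], [1., 0., 4.]])
J = finite_difference_jacobian(lambda x: A @ x, np.array([1., 2., 3.]))
print(np.allclose(J, A))  # -> True
```

The ``TangentHMatrix`` skeleton further below implements the same idea, with an
optional centered-difference variant.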
Being in twin experiments, the observation :math:`\mathbf{y}^o` and its error
covariance matrix :math:`\mathbf{R}` are generated by using the true state
:math:`\mathbf{x}^t` and the observation operator :math:`\mathbf{H}`:

``Yo = H( Xt )``

and, with an arbitrary standard deviation of 1% on each error component:

``R = 0.0001 * diagonal( length(Yo) )``

All the required data assimilation information is then defined.
Skeletons of the scripts describing the setup
+++++++++++++++++++++++++++++++++++++++++++++

We give here the essential parts of each script used afterwards to build the
ADAO case. Remember that using these scripts in real Python files requires
correctly defining the path to imported modules or codes (even if the module is
in the same directory as the importing Python file; we indicate the path
adjustment using the mention ``"# INSERT PHYSICAL SCRIPT PATH"``), the encoding
if necessary, etc. The indicated file names for the following scripts are
arbitrary. Examples of complete file scripts are available in the ADAO examples
standard directory.

We first define the true state :math:`\mathbf{x}^t` and some convenient matrix
building function, in a Python script file named
``Physical_data_and_covariance_matrices.py``::
    import numpy

    def True_state():
        """
        Arbitrary values and names, as a tuple of two series of same length
        """
        return (numpy.array([1, 2, 3]), ['Para1', 'Para2', 'Para3'])

    def Simple_Matrix( size, diagonal=None ):
        """
        Diagonal matrix, with either 1 or a given vector on the diagonal
        """
        if diagonal is not None:
            S = numpy.diag( diagonal )
        else:
            S = numpy.matrix(numpy.identity(int(size)))
        return S
We can then define the background state :math:`\mathbf{x}^b` as a random
perturbation of the true state, adding at the end of the script the definition
of a *required ADAO variable* in order to export the defined value. It is done
in a Python script file named ``Script_Background_xb.py``::

    from Physical_data_and_covariance_matrices import True_state
    import numpy

    xt, names = True_state()

    Standard_deviation = 0.2*xt # 20% for each variable

    xb = xt + abs(numpy.random.normal(0.,Standard_deviation,size=(len(xt),)))

    # Creating the required ADAO variable
    # -----------------------------------
    Background = list(xb)
In the same way, we define the background error covariance matrix
:math:`\mathbf{B}` as a diagonal matrix, of the same length as the background,
using the convenient function already defined. It is done in a Python script
file named ``Script_BackgroundError_B.py``::

    from Physical_data_and_covariance_matrices import True_state, Simple_Matrix

    xt, names = True_state()

    B = 0.1 * Simple_Matrix( size = len(xt) )

    # Creating the required ADAO variable
    # -----------------------------------
    BackgroundError = B
To continue, we need the observation operator :math:`\mathbf{H}` as a function
of the state. It is here defined in an external file named
``"Physical_simulation_functions.py"``, which should contain functions
conveniently named here ``"FunctionH"`` and ``"AdjointH"``. These functions are
user ones, representing as programming functions the :math:`\mathbf{H}` operator
and its adjoint. We suppose these functions are given by the user. A simple
skeleton is given in the Python script file ``Physical_simulation_functions.py``
of the ADAO examples standard directory. It can be used in the case where only
the non-linear direct physical simulation exists. The script is partly
reproduced here for convenience::
444 """ Direct non-linear simulation operator """
446 # --------------------------------------> EXAMPLE TO BE REMOVED
447 if type(XX) is type(numpy.matrix([])): # EXAMPLE TO BE REMOVED
448 HX = XX.A1.tolist() # EXAMPLE TO BE REMOVED
449 elif type(XX) is type(numpy.array([])): # EXAMPLE TO BE REMOVED
450 HX = numpy.matrix(XX).A1.tolist() # EXAMPLE TO BE REMOVED
451 else: # EXAMPLE TO BE REMOVED
452 HX = XX # EXAMPLE TO BE REMOVED
453 # --------------------------------------> EXAMPLE TO BE REMOVED
455 return numpy.array( HX )
457 def TangentHMatrix( X, increment = 0.01, centeredDF = False ):
458 """ Tangent operator (Jacobian) calculated by finite differences """
460 dX = increment * X.A1
465 for i in range( len(dX) ):
466 X_plus_dXi = numpy.array( X.A1 )
467 X_plus_dXi[i] = X[i] + dX[i]
468 X_moins_dXi = numpy.array( X.A1 )
469 X_moins_dXi[i] = X[i] - dX[i]
471 HX_plus_dXi = FunctionH( X_plus_dXi )
472 HX_moins_dXi = FunctionH( X_moins_dXi )
474 HX_Diff = ( HX_plus_dXi - HX_moins_dXi ) / (2.*dX[i])
476 Jacobian.append( HX_Diff )
481 for i in range( len(dX) ):
482 X_plus_dXi = numpy.array( X.A1 )
483 X_plus_dXi[i] = X[i] + dX[i]
485 HX_plus_dXi = FunctionH( X_plus_dXi )
487 HX_plus_dX.append( HX_plus_dXi )
492 for i in range( len(dX) ):
493 Jacobian.append( ( HX_plus_dX[i] - HX ) / dX[i] )
495 Jacobian = numpy.matrix( Jacobian )
500 """ Tangent operator """
501 _X = numpy.asmatrix(X).flatten().T
502 HtX = self.TangentHMatrix( _X ) * _X
505 def AdjointH( (X, Y) ):
506 """ Ajoint operator """
508 Jacobian = TangentHMatrix( X, centeredDF = False )
510 Y = numpy.asmatrix(Y).flatten().T
511 HaY = numpy.dot(Jacobian, Y)
We insist on the fact that this non-linear operator ``"FunctionH"``, the tangent
operator ``"TangentH"`` and the adjoint operator ``"AdjointH"`` come from the
physical knowledge, include the reference physical simulation code and its
possible adjoint, and have to be carefully set up by the data assimilation user.
The errors in, or misuses of, the operators cannot be detected or corrected by
the data assimilation framework alone.
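One classical manual verification the user can perform is the "dot-product
test": for a consistent tangent/adjoint pair, the identity
:math:`\langle H\,dx, dy\rangle = \langle dx, H^*\,dy\rangle` must hold for any
vectors :math:`dx` and :math:`dy`. A standalone sketch, with an arbitrary
matrix standing in for the tangent operator:

```python
import numpy as np

# Arbitrary tangent (Jacobian) matrix, mapping R^3 to R^5, and test vectors
rng = np.random.default_rng(42)
Ht = rng.standard_normal((5, 3))
dx = rng.standard_normal(3)
dy = rng.standard_normal(5)

# < H dx , dy > must equal < dx , H^T dy > for a consistent adjoint
lhs = np.dot(Ht @ dx, dy)
rhs = np.dot(dx, Ht.T @ dy)
print(abs(lhs - rhs) < 1e-10)  # -> True
```

A user-coded adjoint that fails this test for some random vectors is certainly
wrong, which makes the test a cheap safeguard before running the ADAO case.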
To operate in the ADAO module, it is required to define for ADAO these
different types of operators: the (potentially non-linear) standard observation
operator, named ``"Direct"``, its linearised approximation, named ``"Tangent"``,
and the adjoint operator, named ``"Adjoint"``. The Python script has to retrieve
an input parameter, found under the key "value", in a variable named
``"specificParameters"`` of the SALOME input data and parameters
``"computation"`` dictionary variable. If the operator is already linear, the
``"Direct"`` and ``"Tangent"`` functions are the same, as can be supposed here.
The following example Python script file, named
``Script_ObservationOperator_H.py``, illustrates the case::
    import Physical_simulation_functions
    import numpy, logging

    # -----------------------------------------------------------------------
    # SALOME input data and parameters: all information is in the required
    # input variable "computation", containing for example:
    # {'inputValues': [[[[0.0, 0.0, 0.0]]]],
    #  'inputVarList': ['adao_default'],
    #  'outputVarList': ['adao_default'],
    #  'specificParameters': [{'name': 'method', 'value': 'Direct'}]}
    # -----------------------------------------------------------------------

    # Recovering the type of computation: "Direct", "Tangent" or "Adjoint"
    # --------------------------------------------------------------------
    method = ""
    for param in computation["specificParameters"]:
        if param["name"] == "method":
            method = param["value"]
    logging.info("ComputationFunctionNode: Found method is \'%s\'"%method)

    # Loading the H operator functions from external definitions
    # ----------------------------------------------------------
    logging.info("ComputationFunctionNode: Loading operator functions")
    FunctionH = Physical_simulation_functions.FunctionH
    TangentH  = Physical_simulation_functions.TangentH
    AdjointH  = Physical_simulation_functions.AdjointH

    # Executing the possible computations
    # -----------------------------------
    if method == "Direct":
        logging.info("ComputationFunctionNode: Direct computation")
        Xcurrent = computation["inputValues"][0][0][0]
        data = FunctionH(numpy.matrix( Xcurrent ).T)

    if method == "Tangent":
        logging.info("ComputationFunctionNode: Tangent computation")
        Xcurrent = computation["inputValues"][0][0][0]
        data = TangentH(numpy.matrix( Xcurrent ).T)

    if method == "Adjoint":
        logging.info("ComputationFunctionNode: Adjoint computation")
        Xcurrent = computation["inputValues"][0][0][0]
        Ycurrent = computation["inputValues"][0][0][1]
        data = AdjointH((numpy.matrix( Xcurrent ).T, numpy.matrix( Ycurrent ).T))

    # Formatting the output
    # ---------------------
    logging.info("ComputationFunctionNode: Formatting the output")
    outputValues = [[[[]]]]
    for val in numpy.ravel(data):
        outputValues[0][0][0].append(val)

    # Creating the required ADAO variable
    # -----------------------------------
    result = {}
    result["outputValues"]        = outputValues
    result["specificOutputInfos"] = []
    result["returnCode"]          = 0
    result["errorMessage"]        = ""
As output, this script has to define a nested list variable, as shown above with
the ``"outputValues"`` variable, where the nesting levels describe the different
variables included in the state, then the different possible states at the same
time, then the different time steps. In this case, because there is only one
time step and one state, and all the variables are stored together, we only set
the innermost level of the lists.
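A standalone sketch of this formatting step, filling the innermost level from a
flat simulation result (the values are arbitrary):

```python
import numpy

data = numpy.array([1.5, 2.5, 3.5])  # result of one hypothetical simulation

# Nesting convention: [variable][state][time step] -> innermost list of values
outputValues = [[[[]]]]
for val in numpy.ravel(data):
    outputValues[0][0][0].append(float(val))

print(outputValues)  # -> [[[[1.5, 2.5, 3.5]]]]
```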
In this twin experiments framework, the observation :math:`\mathbf{y}^o` and its
error covariance matrix :math:`\mathbf{R}` can be generated. It is done in two
Python script files, the first one being named ``Script_Observation_yo.py``::

    from Physical_data_and_covariance_matrices import True_state
    from Physical_simulation_functions import FunctionH

    xt, names = True_state()

    yo = FunctionH( xt )

    # Creating the required ADAO variable
    # -----------------------------------
    Observation = list(yo)

and the second one named ``Script_ObservationError_R.py``::

    from Physical_data_and_covariance_matrices import True_state, Simple_Matrix
    from Physical_simulation_functions import FunctionH

    xt, names = True_state()

    yo = FunctionH( xt )

    R = 0.0001 * Simple_Matrix( size = len(yo) )

    # Creating the required ADAO variable
    # -----------------------------------
    ObservationError = R
As in previous examples, it can be useful to define some parameters for the data
assimilation algorithm. For example, if we use the standard 3DVAR algorithm, the
following parameters can be defined in a Python script file named
``Script_AlgorithmParameters.py``::

    # Creating the required ADAO variable
    # -----------------------------------
    AlgorithmParameters = {
        "Minimizer" : "TNC",         # Possible: "LBFGSB", "TNC", "CG", "BFGS"
        "MaximumNumberOfSteps" : 15, # Number of global iterative steps
        "Bounds" : [
            [ None, None ],          # Bound on the first parameter
            [ 0., 4. ],              # Bound on the second parameter
            [ 0., None ],            # Bound on the third parameter
            ],
        }
Finally, it is common to post-process the results, retrieving them after the
data assimilation phase in order to analyse, print or show them. This requires
using an intermediary Python script file in order to extract these results. The
following example Python script file, named ``Script_UserPostAnalysis.py``,
illustrates this::

    from Physical_data_and_covariance_matrices import True_state
    import numpy

    xt, names = True_state()
    xa        = ADD.get("Analysis").valueserie(-1)
    x_series  = ADD.get("CurrentState").valueserie()
    J         = ADD.get("CostFunctionJ").valueserie()

    # Verifying the results by printing
    # ---------------------------------
    print "xa = %s"%numpy.array(xa)
    print
    for i in range( len(x_series) ):
        print "Step %2i : J = %.5e et X = %s"%(i, J[i], x_series[i])
At the end, we get a description of the whole case setup through the following
set of files:

#. ``Physical_data_and_covariance_matrices.py``
#. ``Physical_simulation_functions.py``
#. ``Script_AlgorithmParameters.py``
#. ``Script_BackgroundError_B.py``
#. ``Script_Background_xb.py``
#. ``Script_ObservationError_R.py``
#. ``Script_ObservationOperator_H.py``
#. ``Script_Observation_yo.py``
#. ``Script_UserPostAnalysis.py``
We insist here on the fact that all these scripts are written by the user and
cannot be automatically tested. So the user is required to verify the scripts
(and in particular their input/output) in order to limit the difficulty of
debugging. We recall: **script methodology is not a "safe" procedure, in the
sense that erroneous data, or errors in calculations, can be directly injected
into the YACS scheme execution.**
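As a partial mitigation, a few cheap consistency assertions can be placed at
the end of the user scripts, before the data reach YACS. This is a hypothetical
standalone sketch (the variable values come from this example; the checks
themselves are suggestions, not an ADAO API):

```python
import numpy

# Hypothetical user inputs, as the scripts above would produce them
Background       = [0., 0., 0.]
Observation      = [1., 1., 1.]
BackgroundError  = 0.1 * numpy.identity(3)
ObservationError = 0.0001 * numpy.identity(3)

# Cheap consistency checks before handing the data to the ADAO case
assert len(Background) == BackgroundError.shape[0] == BackgroundError.shape[1]
assert len(Observation) == ObservationError.shape[0] == ObservationError.shape[1]
assert numpy.all(numpy.linalg.eigvalsh(BackgroundError) > 0), "B must be positive definite"
assert numpy.all(numpy.linalg.eigvalsh(ObservationError) > 0), "R must be positive definite"
print("input data consistent")
```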
Building the case with external data definition by scripts
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

All these scripts can then be used to define the ADAO case with external data
definition by Python script files. It is entirely similar to the method
described in the previous section `Building a simple estimation case with
external data definition by scripts`_. For each variable to be defined, we
select the "*Script*" option of the "*FROM*" keyword, which leads to a
"*SCRIPT_DATA/SCRIPT_FILE*" entry in the tree.

The other steps to build the ADAO case are exactly the same as in the previous
section `Building a simple estimation case with explicit data definition`_.
Using the simple linear operator :math:`\mathbf{H}` from the Python script file
``Physical_simulation_functions.py`` in the ADAO examples standard directory,
the results will look like::

    xa = [ 1.000014  2.000458  3.000390]

    Step  0 : J = 1.81750e+03 et X = [1.014011, 2.459175, 3.390462]
    Step  1 : J = 1.81750e+03 et X = [1.014011, 2.459175, 3.390462]
    Step  2 : J = 1.79734e+01 et X = [1.010771, 2.040342, 2.961378]
    Step  3 : J = 1.79734e+01 et X = [1.010771, 2.040342, 2.961378]
    Step  4 : J = 1.81909e+00 et X = [1.000826, 2.000352, 3.000487]
    Step  5 : J = 1.81909e+00 et X = [1.000826, 2.000352, 3.000487]
    Step  6 : J = 1.81641e+00 et X = [1.000247, 2.000651, 3.000156]
    Step  7 : J = 1.81641e+00 et X = [1.000247, 2.000651, 3.000156]
    Step  8 : J = 1.81569e+00 et X = [1.000015, 2.000432, 3.000364]
    Step  9 : J = 1.81569e+00 et X = [1.000015, 2.000432, 3.000364]
    Step 10 : J = 1.81568e+00 et X = [1.000013, 2.000458, 3.000390]

The state at the first step is the randomly generated background state
:math:`\mathbf{x}^b`. After completion, this printing on standard output is
available in the "*YACS Container Log*", obtained through the right click menu
of the "*proc*" window in the YACS scheme.
.. [#] For more information on YACS, see the *YACS module User's Guide* available in the main "*Help*" menu of the SALOME GUI.