================================================================================
Tutorials on using the ADAO module
================================================================================

.. |eficas_new| image:: images/eficas_new.png
.. |eficas_save| image:: images/eficas_save.png
.. |eficas_yacs| image:: images/eficas_yacs.png

This section presents some examples of using the ADAO module in SALOME. The
first one shows how to build a simple data assimilation case by defining
explicitly all the required data through the GUI. The second one shows, on the
same case, how to define data using external sources through scripts.

Building a simple estimation case with explicit data definition
---------------------------------------------------------------

This simple example is a demonstration one, and describes how to set up a BLUE
estimation framework in order to get the *weighted least square estimated
state* of a system from an observation of the state and from an *a priori*
knowledge (or background) of this state. In other words, we look for the
weighted average of the observation and background vectors. All the numerical
values of this example are arbitrary.

We choose to operate in a 3-dimensional space. 3D is chosen in order to
restrict the size of the numerical objects the user has to enter explicitly,
but the problem does not depend on the dimension and could equally be set in
dimension 1000... The observation :math:`\mathbf{y}^o` has a value of 1 in
each direction, so:

``Yo = [1 1 1]``

The background state :math:`\mathbf{x}^b`, which represents some *a priori*
knowledge or a regularization, has a value of 0 in each direction, which is:

``Xb = [0 0 0]``

Data assimilation requires information on the error covariances
:math:`\mathbf{R}` and :math:`\mathbf{B}`, respectively for the observation
and background variables. We choose here to have uncorrelated errors (that is,
diagonal matrices) and the same variance of 1 for all variables (that is,
identity matrices). We get:

``B = R = [1 0 0 ; 0 1 0 ; 0 0 1]``

Last, we need an observation operator :math:`\mathbf{H}` to map the background
values into the space of observations. Here, because the space dimensions are
the same, we can choose the identity as the observation operator:

``H = [1 0 0 ; 0 1 0 ; 0 0 1]``

With such choices, the Best Linear Unbiased Estimator (BLUE) will be the
average vector between :math:`\mathbf{y}^o` and :math:`\mathbf{x}^b`, named
the *analysis* and denoted by :math:`\mathbf{x}^a`:

``Xa = [0.5 0.5 0.5]``

As an extension of this example, one can change the variances of
:math:`\mathbf{B}` or :math:`\mathbf{R}` independently, and the analysis will
move toward :math:`\mathbf{y}^o` or toward :math:`\mathbf{x}^b` in inverse
proportion to the variances in :math:`\mathbf{B}` and :math:`\mathbf{R}`.
Searching for the analysis through a BLUE algorithm or a 3DVAR one is also
equivalent.

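The behaviour described above can be checked outside of ADAO with a minimal
stand-alone sketch of the BLUE formula
:math:`\mathbf{x}^a = \mathbf{x}^b + \mathbf{B}\mathbf{H}^T(\mathbf{H}\mathbf{B}\mathbf{H}^T+\mathbf{R})^{-1}(\mathbf{y}^o-\mathbf{H}\mathbf{x}^b)`.
This is not the ADAO API, only a NumPy illustration of the estimator:

```python
import numpy

# Data of the example: identity covariances and operator
yo = numpy.array([1., 1., 1.])   # observation
xb = numpy.array([0., 0., 0.])   # background
B  = numpy.identity(3)           # background error covariance
R  = numpy.identity(3)           # observation error covariance
H  = numpy.identity(3)           # observation operator

# BLUE analysis: xa = xb + K (yo - H xb), with gain K = B H^T (H B H^T + R)^-1
K  = B @ H.T @ numpy.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (yo - H @ xb)      # gives [0.5, 0.5, 0.5]

# Dividing the observation variances by 100 pulls the analysis toward yo
R2  = 0.01 * numpy.identity(3)
K2  = B @ H.T @ numpy.linalg.inv(H @ B @ H.T + R2)
xa2 = xb + K2 @ (yo - H @ xb)
```

With equal variances the analysis is the exact average of the two vectors;
with a much smaller :math:`\mathbf{R}` it almost coincides with the
observation, as stated above.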
Using the GUI to build the ADAO case
++++++++++++++++++++++++++++++++++++

First, you have to activate the ADAO module by choosing the appropriate module
button or menu of SALOME, and you will see:

.. image:: images/adao_activate.png

**Activating the module ADAO in SALOME**

Choose the "*New*" button in this window. You will directly get the EFICAS
interface for variables definition, along with the "*Object browser*". You can
then click on the "*New*" button |eficas_new| to create a new ADAO case, and
you will see:

.. image:: images/adao_viewer.png

**The EFICAS viewer for cases definition in module ADAO**

Then fill in the variables to build the ADAO case, using the experimental set
up described above. All the technical information given above will be directly
inserted in the ADAO case definition, using the *String* type for all the
variables. When the case definition is ready, save it to a "*JDC (\*.comm)*"
native file somewhere in your path. Remember that other files will also be
created alongside this first one, so it is better to make a specific directory
for your case and to save the file inside it. The name of the file will appear
in the "*Object browser*" window, under the "*ADAO*" menu. The final case
definition looks like this:

.. _adao_jdcexample01:
.. image:: images/adao_jdcexample01.png

**Definition of the experimental set up chosen for the ADAO case**

To go further, we now need to generate the YACS scheme from the ADAO case
definition. In order to do that, right click on the name of the file case in
the "*Object browser*" window, and choose the "*Export to YACS*" sub-menu (or
the "*Export to YACS*" button |eficas_yacs|) as below:

.. _adao_exporttoyacs00:
.. image:: images/adao_exporttoyacs.png

**"Export to YACS" sub-menu to generate the YACS scheme from the ADAO case**

This command will generate the YACS scheme, activate the YACS module in
SALOME, and open the new scheme in the GUI of the YACS module [#]_. After
reordering the nodes using the "*arrange local node*" sub-menu of the YACS
graphical view of the scheme, you get the following representation of the
generated ADAO scheme:

.. _yacs_generatedscheme:
.. image:: images/yacs_generatedscheme.png

**YACS generated scheme from the ADAO case**

After that point, all the modifications, executions and post-processing of the
data assimilation scheme will be done in YACS. In order to check the result in
a simple way, we create here a new YACS node using the "*in-line script node*"
sub-menu of the YACS graphical view, and we name it "*PostProcessing*".

This script will retrieve the data assimilation analysis from the
"*algoResults*" output port of the computation block (which gives access to a
SALOME Python Object), and will print it on the standard output.

To obtain this, the in-line script node needs to have an input port of type
"*pyobj*", named "*results*" for example, which has to be linked graphically
to the "*algoResults*" output port of the computation block. Then the code to
fill in the script node is::

    Xa = results.ADD.get("Analysis").valueserie(-1)

    print "Analysis =",Xa

The augmented YACS scheme can be saved (overwriting the generated scheme if
the simple "*Save*" command or button is used, or with a new name). Then,
classically in YACS, it has to be prepared for run, and then executed. After
completion, the printing on standard output is available in the "*YACS
Container Log*", obtained through the right click menu of the "*proc*" window
in the YACS scheme as shown below:

.. _yacs_containerlog:
.. image:: images/yacs_containerlog.png

**YACS menu for Container Log, and dialog window showing the log**

We verify that the result is correct by checking that the log dialog window
contains the following line::

    Analysis = [0.5, 0.5, 0.5]

as shown in the image above.

As a simple extension of this example, one can notice that the same problem
solved with a 3DVAR algorithm gives the same result. This algorithm can be
chosen at the ADAO case building step, before entering the YACS step. The ADAO
3DVAR case will look completely similar to the BLUE algorithmic case, as shown
by the following figure:

.. _adao_jdcexample02:
.. image:: images/adao_jdcexample02.png

**Defining an ADAO 3DVAR case looks completely similar to a BLUE case**

There is only one command changing, with the "*3DVAR*" value instead of
"*Blue*".

Building a simple estimation case with external data definition by scripts
--------------------------------------------------------------------------

It is useful to get parts or all of the data from external definitions, using
Python script files to provide access to the data. As an example, we build
here an ADAO case representing the same experimental set up as in the above
example `Building a simple estimation case with explicit data definition`_,
but using data from a single external Python script file.

First, we write the following script file, using conventional names for the
desired variables. Here, all the input variables are defined in the script,
but the user can choose to split the file into several ones, or to mix
explicit data definition in the ADAO GUI and implicit data definition by
external files. The present script looks like::

    import numpy

    # Definition of the Background as a vector
    # ----------------------------------------
    Background = [0, 0, 0]

    # Definition of the Observation as a vector
    # -----------------------------------------
    Observation = "1 1 1"

    # Definition of the Background Error covariance as a matrix
    # ---------------------------------------------------------
    BackgroundError = numpy.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])

    # Definition of the Observation Error covariance as a matrix
    # ----------------------------------------------------------
    ObservationError = numpy.matrix("1 0 0 ; 0 1 0 ; 0 0 1")

    # Definition of the Observation Operator as a matrix
    # --------------------------------------------------
    ObservationOperator = numpy.identity(3)

The names of the Python variables above are mandatory, in order to define the
right ADAO variables, but the Python script can be bigger and define classes,
functions, etc. with other names. This script shows different ways to define
arrays and matrices, using a list, a string (as in Numpy or Octave), the Numpy
array type, the Numpy matrix type, and Numpy special functions. All of these
syntaxes are valid.

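As a quick illustration that these syntaxes are interchangeable, the following
sketch (not required by ADAO, just a NumPy check) builds the same 3x3 identity
with each of the notations used in the script above:

```python
import numpy

A1 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]                      # list of lists
A2 = numpy.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])  # Numpy array
A3 = numpy.matrix("1 0 0 ; 0 1 0 ; 0 0 1")                   # Octave-like string
A4 = numpy.identity(3)                                       # special function

# All four objects represent the same matrix
same = all(numpy.allclose(numpy.asarray(M), A4) for M in (A1, A2, A3))
```

Whatever notation is chosen, ADAO receives the same numerical content.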
After saving this script somewhere in your path (named here "*script.py*" for
the example), we use the GUI to build the ADAO case. The procedure to fill in
the case is similar, except that, instead of selecting the "*String*" option
for the "*FROM*" keyword, we select the "*Script*" one. This leads to a
"*SCRIPT_DATA/SCRIPT_FILE*" entry in the tree, allowing a file to be chosen
as:

.. _adao_scriptentry01:
.. image:: images/adao_scriptentry01.png

**Defining an input value using an external script file**

Other steps and results are exactly the same as in the previous example
`Building a simple estimation case with explicit data definition`_.

In fact, this script methodology makes it possible to retrieve data from
in-line or previous calculations, from static files, from a database or from
a data stream, all of them outside of SALOME. It also allows input data to be
modified easily, for example for debugging purposes or for repetitive
executions, and it is the most versatile method to parametrize the input data.
**But be careful, script methodology is not a "safe" procedure, in the sense
that erroneous data, or errors in calculations, can be directly injected into
the YACS scheme execution.**

Adding parameters to control the data assimilation algorithm
------------------------------------------------------------

One can add some optional parameters to control the data assimilation
algorithm calculation. This is done by using the "*AlgorithmParameters*"
keyword in the definition of the ADAO case, which is a keyword of the
ASSIMILATION_STUDY. This keyword requires a Python dictionary, containing some
key/value pairs. The list of possible optional parameters is given in the
subsection :ref:`subsection_algo_options`.

If no bounds at all are required on the control variables, then one can choose
the "BFGS" or "CG" minimisation algorithms for the 3DVAR algorithm. For
constrained optimization, the minimizer "LBFGSB" is often more robust, but
"TNC" sometimes gives better performance.

This dictionary has to be defined, for example, in an external Python script
file, using the mandatory variable name "*AlgorithmParameters*" for the
dictionary. All the keys inside the dictionary are optional, they all have
default values, and can exist without being used. For example::

    AlgorithmParameters = {
        "Minimizer" : "CG", # Possible choices: "LBFGSB", "TNC", "CG", "BFGS"
        "MaximumNumberOfSteps" : 10,
        }

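Since mistyped keys or values in this dictionary are easy to produce, a small
sanity check can be run on it before it is handed to ADAO. The check below is
only a sketch: the accepted minimizer names simply restate the comment of the
example, and the function itself is not part of ADAO:

```python
# Example dictionary, as in the script above
AlgorithmParameters = {
    "Minimizer" : "CG",
    "MaximumNumberOfSteps" : 10,
}

def check_parameters(params):
    """Return True if the known keys hold acceptable values (sketch only)."""
    minimizers = ("LBFGSB", "TNC", "CG", "BFGS")
    if "Minimizer" in params and params["Minimizer"] not in minimizers:
        return False
    if "MaximumNumberOfSteps" in params and params["MaximumNumberOfSteps"] < 0:
        return False
    return True

ok = check_parameters(AlgorithmParameters)
```

Such a check catches, for example, a misspelled minimizer name before the
YACS execution starts.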
Then the script can be added to the ADAO case, in a file entry describing the
"*AlgorithmParameters*" keyword, as follows:

.. _adao_scriptentry02:
.. image:: images/adao_scriptentry02.png

**Adding parameters to control the algorithm**

Other steps and results are exactly the same as in the previous example
`Building a simple estimation case with explicit data definition`_. The
dictionary can also be directly given in the input field associated with the
keyword.

Building a complex case with external data definition by scripts
----------------------------------------------------------------

This more complex and complete example has to be considered as a framework for
user inputs, which needs to be tailored for each real application.
Nevertheless, the file skeletons are sufficiently general to have been used
for various applications in neutronics, fluid mechanics... Here, we will not
focus on the results, but more on the user control of inputs and outputs in an
ADAO case. As previously, all the numerical values of this example are
arbitrary.

The objective is to set up the input and output definitions of a physical case
by external Python scripts, using a general non-linear operator, adding
control on parameters and so on... The complete framework scripts can be found
in the ADAO skeletons examples directory under the name
"*External_data_definition_by_scripts*".

We continue to operate in a 3-dimensional space, in order to restrict the size
of the numerical objects shown in the scripts, but the problem does not depend
on the dimension.

We choose a twin experiment context, using a known true state
:math:`\mathbf{x}^t` of arbitrary values:

``Xt = [1 2 3]``

The background state :math:`\mathbf{x}^b`, which represents some *a priori*
knowledge of the true state, is built as a normal random perturbation of 20%
of the true state :math:`\mathbf{x}^t` for each component, which is:

``Xb = Xt + normal(0, 20%*Xt)``

To describe the background error covariance matrix :math:`\mathbf{B}`, we make
as previously the hypothesis of uncorrelated errors (that is, a diagonal
matrix, of size 3x3 because :math:`\mathbf{x}^b` is of length 3), with the
same variance of 0.1 for all variables. We get:

``B = 0.1 * diagonal( length(Xb) )``

We suppose that there exists an observation operator :math:`\mathbf{H}`, which
can be non-linear. In a real calibration procedure or inverse problem, the
physical simulation codes are embedded in the observation operator. We also
need to know its gradient with respect to each calibrated variable, which is
information rarely available with industrial codes. But we will see later how
to obtain an approximated gradient in this case.

Being in twin experiments, the observation :math:`\mathbf{y}^o` and its error
covariance matrix :math:`\mathbf{R}` are generated by using the true state
:math:`\mathbf{x}^t` and the observation operator :math:`\mathbf{H}`:

``Yo = H( Xt )``

and, with an arbitrary standard deviation of 1% on each error component:

``R = 0.0001 * diagonal( length(Yo) )``

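The twin-experiment data generation described above can be sketched in a few
NumPy lines. This is only an illustration: the identity stands in for the
(possibly non-linear) operator :math:`\mathbf{H}`, and the seed is fixed here
purely for reproducibility:

```python
import numpy

numpy.random.seed(1000)  # arbitrary seed, for reproducibility of the sketch

xt = numpy.array([1., 2., 3.])                        # true state
xb = xt + abs(numpy.random.normal(0., 0.2*xt))        # 20% perturbed background
B  = 0.1    * numpy.identity(len(xb))                 # background covariance
yo = xt.copy()                                        # yo = H(xt), H = identity
R  = 0.0001 * numpy.identity(len(yo))                 # (1%)^2 variances
```

Note that the 1% standard deviation gives the variance
:math:`0.01^2 = 0.0001` on the diagonal of :math:`\mathbf{R}`.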
All the required data assimilation information is then defined.

Skeletons of the scripts describing the setup
+++++++++++++++++++++++++++++++++++++++++++++

We give here the essential parts of each script used afterwards to build the
ADAO case. Remember that using these scripts in real Python files requires
correctly defining the path to imported modules or codes (even if the module
is in the same directory as the importing Python file; we indicate the path
adjustment using the mention ``"# INSERT PHYSICAL SCRIPT PATH"``), the
encoding if necessary, etc. The indicated file names for the following scripts
are arbitrary. Examples of complete file scripts are available in the ADAO
examples standard directory.

We first define the true state :math:`\mathbf{x}^t` and some convenient matrix
building function, in a Python script file named
``Physical_data_and_covariance_matrices.py``::

    import numpy

    def True_state():
        """
        Arbitrary values and names, as a tuple of two series of same length
        """
        return (numpy.array([1, 2, 3]), ['Para1', 'Para2', 'Para3'])

    def Simple_Matrix( size, diagonal=None ):
        """
        Diagonal matrix, with either 1 or a given vector on the diagonal
        """
        if diagonal is not None:
            S = numpy.diag( diagonal )
        else:
            S = numpy.matrix(numpy.identity(int(size)))
        return S

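The helper behaves as follows (the function is repeated here so that the
snippet is self-contained; in the real setup it is imported from
``Physical_data_and_covariance_matrices.py``):

```python
import numpy

def Simple_Matrix( size, diagonal=None ):
    """Diagonal matrix, with either 1 or a given vector on the diagonal"""
    if diagonal is not None:
        S = numpy.diag( diagonal )
    else:
        S = numpy.matrix(numpy.identity(int(size)))
    return S

D1 = Simple_Matrix( 3 )                        # 3x3 identity
D2 = Simple_Matrix( 3, diagonal=[1., 2., 3.] ) # given diagonal
```

So ``0.1 * Simple_Matrix( size = len(xt) )`` used below is exactly the
``B = 0.1 * diagonal( length(Xb) )`` matrix of the setup.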
We can then define the background state :math:`\mathbf{x}^b` as a random
perturbation of the true state, adding at the end of the script the definition
of a *required ADAO variable* in order to export the defined value. It is done
in a Python script file named ``Script_Background_xb.py``::

    from Physical_data_and_covariance_matrices import True_state
    import numpy

    xt, names = True_state()

    Standard_deviation = 0.2*xt # 20% for each variable

    xb = xt + abs(numpy.random.normal(0.,Standard_deviation,size=(len(xt),)))

    # Creating the required ADAO variable
    # -----------------------------------
    Background = list(xb)

In the same way, we define the background error covariance matrix
:math:`\mathbf{B}` as a diagonal matrix, of the same diagonal length as the
background of the true state, using the convenient function already defined.
It is done in a Python script file named ``Script_BackgroundError_B.py``::

    from Physical_data_and_covariance_matrices import True_state, Simple_Matrix

    xt, names = True_state()

    B = 0.1 * Simple_Matrix( size = len(xt) )

    # Creating the required ADAO variable
    # -----------------------------------
    BackgroundError = B

To continue, we need the observation operator :math:`\mathbf{H}` as a function
of the state. It is here defined in an external file named
``"Physical_simulation_functions.py"``, which should contain functions
conveniently named here ``"FunctionH"`` and ``"AdjointH"``. These functions
are user ones, representing as programming functions the :math:`\mathbf{H}`
operator and its adjoint. We suppose these functions are given by the user. A
simple skeleton is given in the Python script file
``Physical_simulation_functions.py`` of the ADAO examples standard directory.
It can be used in the case where only the non-linear direct physical
simulation exists. The script is partly reproduced here for convenience::

    import numpy

    def FunctionH( XX ):
        """ Direct non-linear simulation operator """
        #
        # --------------------------------------> EXAMPLE TO BE REMOVED
        if type(XX) is type(numpy.matrix([])):  # EXAMPLE TO BE REMOVED
            HX = XX.A1.tolist()                 # EXAMPLE TO BE REMOVED
        elif type(XX) is type(numpy.array([])): # EXAMPLE TO BE REMOVED
            HX = numpy.matrix(XX).A1.tolist()   # EXAMPLE TO BE REMOVED
        else:                                   # EXAMPLE TO BE REMOVED
            HX = XX                             # EXAMPLE TO BE REMOVED
        # --------------------------------------> EXAMPLE TO BE REMOVED
        #
        return numpy.array( HX )

    def TangentHMatrix( X, increment = 0.01, centeredDF = False ):
        """ Tangent operator (Jacobian) calculated by finite differences """
        #
        dX = increment * X.A1
        #
        if centeredDF:
            #
            Jacobian = []
            for i in range( len(dX) ):
                X_plus_dXi     = numpy.array( X.A1 )
                X_plus_dXi[i]  = X[i] + dX[i]
                X_moins_dXi    = numpy.array( X.A1 )
                X_moins_dXi[i] = X[i] - dX[i]
                #
                HX_plus_dXi  = FunctionH( X_plus_dXi )
                HX_moins_dXi = FunctionH( X_moins_dXi )
                #
                HX_Diff = ( HX_plus_dXi - HX_moins_dXi ) / (2.*dX[i])
                #
                Jacobian.append( HX_Diff )
            #
        else:
            #
            HX_plus_dX = []
            for i in range( len(dX) ):
                X_plus_dXi    = numpy.array( X.A1 )
                X_plus_dXi[i] = X[i] + dX[i]
                #
                HX_plus_dXi = FunctionH( X_plus_dXi )
                #
                HX_plus_dX.append( HX_plus_dXi )
            #
            HX = FunctionH( X )
            #
            Jacobian = []
            for i in range( len(dX) ):
                Jacobian.append( ( HX_plus_dX[i] - HX ) / dX[i] )
        #
        Jacobian = numpy.matrix( Jacobian )
        #
        return Jacobian

    def TangentH( X ):
        """ Tangent operator """
        _X  = numpy.asmatrix(X).flatten().T
        HtX = TangentHMatrix( _X ) * _X
        return HtX.A1

    def AdjointH( (X, Y) ):
        """ Adjoint operator """
        #
        Jacobian = TangentHMatrix( X, centeredDF = False )
        #
        Y   = numpy.asmatrix(Y).flatten().T
        HaY = numpy.dot(Jacobian, Y)
        return HaY.A1

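The finite-difference approach used by ``TangentHMatrix`` can be checked on a
simple case: for a *linear* operator, one-sided differences recover the
operator matrix exactly. The following stand-alone sketch (with a hypothetical
linear ``FunctionH``, not the skeleton above) illustrates this:

```python
import numpy

# Hypothetical linear operator H(X) = A X, used only for this check
A = numpy.array([[2., 0., 0.], [0., 3., 0.], [0., 0., 4.]])

def FunctionH( X ):
    return A @ numpy.ravel(X)

def ApproximateJacobian( X, increment=0.01 ):
    """One-sided finite-difference Jacobian, column by column."""
    X  = numpy.ravel(X).astype(float)
    dX = increment * X
    HX = FunctionH( X )
    cols = []
    for i in range( len(dX) ):
        X_plus     = X.copy()
        X_plus[i] += dX[i]
        cols.append( (FunctionH(X_plus) - HX) / dX[i] )  # i-th column of A
    return numpy.array(cols).T

J = ApproximateJacobian( numpy.array([1., 1., 1.]) )
```

For a genuinely non-linear operator the approximation is only first-order
accurate in the increment, which is why the increment size may need tuning.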
We insist on the fact that this non-linear operator ``"FunctionH"``, the
tangent operator ``"TangentH"`` and the adjoint operator ``"AdjointH"`` come
from the physical knowledge, include the reference physical simulation code
and its possible adjoint, and have to be carefully set up by the data
assimilation user. Errors in, or misuses of, the operators cannot be detected
or corrected by the data assimilation framework alone.

To operate in the ADAO module, it is required to define for ADAO these
different types of operators: the (potentially non-linear) standard
observation operator, named ``"Direct"``, its linearised approximation, named
``"Tangent"``, and the adjoint operator, named ``"Adjoint"``. The Python
script has to retrieve an input parameter, found under the key "value", in a
variable named ``"specificParameters"`` of the SALOME input data and
parameters ``"computation"`` dictionary variable. If the operator is already
linear, the ``"Direct"`` and ``"Tangent"`` functions are the same, as can be
supposed here. The following example Python script file, named
``Script_ObservationOperator_H.py``, illustrates the case::

    import Physical_simulation_functions
    import numpy, logging

    # -----------------------------------------------------------------------
    # SALOME input data and parameters: all information is in the required
    # input variable "computation", containing for example:
    # {'inputValues': [[[[0.0, 0.0, 0.0]]]],
    #  'inputVarList': ['adao_default'],
    #  'outputVarList': ['adao_default'],
    #  'specificParameters': [{'name': 'method', 'value': 'Direct'}]}
    # -----------------------------------------------------------------------

    # Recovering the type of computation: "Direct", "Tangent" or "Adjoint"
    # --------------------------------------------------------------------
    method = ""
    for param in computation["specificParameters"]:
        if param["name"] == "method":
            method = param["value"]
    logging.info("ComputationFunctionNode: Found method is \'%s\'"%method)

    # Loading the H operator functions from external definitions
    # ----------------------------------------------------------
    logging.info("ComputationFunctionNode: Loading operator functions")
    FunctionH = Physical_simulation_functions.FunctionH
    TangentH  = Physical_simulation_functions.TangentH
    AdjointH  = Physical_simulation_functions.AdjointH

    # Executing the possible computations
    # -----------------------------------
    if method == "Direct":
        logging.info("ComputationFunctionNode: Direct computation")
        Xcurrent = computation["inputValues"][0][0][0]
        data = FunctionH(numpy.matrix( Xcurrent ).T)

    if method == "Tangent":
        logging.info("ComputationFunctionNode: Tangent computation")
        Xcurrent = computation["inputValues"][0][0][0]
        data = TangentH(numpy.matrix( Xcurrent ).T)

    if method == "Adjoint":
        logging.info("ComputationFunctionNode: Adjoint computation")
        Xcurrent = computation["inputValues"][0][0][0]
        Ycurrent = computation["inputValues"][0][0][1]
        data = AdjointH((numpy.matrix( Xcurrent ).T, numpy.matrix( Ycurrent ).T))

    # Formatting the output
    # ---------------------
    logging.info("ComputationFunctionNode: Formatting the output")
    outputValues = [[[[]]]]
    for val in numpy.ravel(data):
        outputValues[0][0][0].append(val)

    # Creating the required ADAO variable
    # -----------------------------------
    result = {}
    result["outputValues"]        = outputValues
    result["specificOutputInfos"] = []
    result["returnCode"]          = 0
    result["errorMessage"]        = ""

As output, this script has to define a nested list variable, as shown above
with the ``"outputValues"`` variable, where the nested levels describe the
different variables included in the state, then the different possible states
at the same time, then the different time steps. In this case, because there
is only one time step and one state, and all the variables are stored
together, we only set the innermost level of the lists.

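The required nesting can be illustrated outside of SALOME with stand-in data
(the ``data`` vector below is hypothetical, playing the role of a
``FunctionH`` result):

```python
import numpy

data = numpy.array([1., 2., 3.])  # hypothetical operator output

# One time step, one state, all variables together at the innermost level
outputValues = [[[[]]]]
for val in numpy.ravel(data):
    outputValues[0][0][0].append(val)
```

Several time steps or several simultaneous states would simply add entries at
the corresponding outer levels of the nesting.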
In this twin experiments framework, the observation :math:`\mathbf{y}^o` and
its error covariance matrix :math:`\mathbf{R}` can be generated. It is done in
two Python script files, the first one being named
``Script_Observation_yo.py``::

    from Physical_data_and_covariance_matrices import True_state
    from Physical_simulation_functions import FunctionH

    xt, names = True_state()

    yo = FunctionH( xt )

    # Creating the required ADAO variable
    # -----------------------------------
    Observation = list(yo)

and the second one named ``Script_ObservationError_R.py``::

    from Physical_data_and_covariance_matrices import True_state, Simple_Matrix
    from Physical_simulation_functions import FunctionH

    xt, names = True_state()

    yo = FunctionH( xt )

    R = 0.0001 * Simple_Matrix( size = len(yo) )

    # Creating the required ADAO variable
    # -----------------------------------
    ObservationError = R

As in previous examples, it can be useful to define some parameters for the
data assimilation algorithm. For example, if we use the standard 3DVAR
algorithm, the following parameters can be defined in a Python script file
named ``Script_AlgorithmParameters.py``::

    # Creating the required ADAO variable
    # -----------------------------------
    AlgorithmParameters = {
        "Minimizer" : "TNC",         # Possible : "LBFGSB", "TNC", "CG", "BFGS"
        "MaximumNumberOfSteps" : 15, # Number of global iterative steps
        "Bounds" : [
            [ None, None ],          # Bound on the first parameter
            [ 0., 4. ],              # Bound on the second parameter
            [ 0., None ],            # Bound on the third parameter
            ],
        }

Finally, it is common to post-process the results, retrieving them after the
data assimilation phase in order to analyse, print or show them. It requires
using an intermediate Python script file in order to extract these results.
The following example Python script file, named
``Script_UserPostAnalysis.py``, illustrates this::

    from Physical_data_and_covariance_matrices import True_state
    import numpy

    xt, names = True_state()

    xa       = ADD.get("Analysis").valueserie(-1)
    x_series = ADD.get("CurrentState").valueserie()
    J        = ADD.get("CostFunctionJ").valueserie()

    # Verifying the results by printing
    # ---------------------------------
    print "xa = %s"%numpy.array(xa)
    for i in range( len(x_series) ):
        print "Step %2i : J = %.5e et X = %s"%(i, J[i], x_series[i])

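The same post-analysis logic can be exercised offline. Inside ADAO, the
``ADD`` object provides the series; in the sketch below, hypothetical short
lists stand in for it, which lets one check, for example, that the iterative
minimisation never increased the cost function:

```python
# Hypothetical stand-ins for ADD.get("CostFunctionJ") / ADD.get("CurrentState")
J        = [1.8175e+03, 1.7973e+01, 1.8157e+00]
x_series = [[1.014, 2.459, 3.390], [1.011, 2.040, 2.961], [1.000, 2.000, 3.000]]

# The cost function should be non-increasing along the iterations
decreasing = all(J[i+1] <= J[i] for i in range(len(J) - 1))

# Formatting as in the post-analysis script
lines = ["Step %2i : J = %.5e et X = %s" % (i, J[i], x_series[i])
         for i in range(len(x_series))]
```

A non-monotonic ``J`` series in a real run is usually a sign of a problem in
the operators or in their gradients.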
At the end, we get a description of the whole case setup through the following
set of files:

#. ``Physical_data_and_covariance_matrices.py``
#. ``Physical_simulation_functions.py``
#. ``Script_AlgorithmParameters.py``
#. ``Script_BackgroundError_B.py``
#. ``Script_Background_xb.py``
#. ``Script_ObservationError_R.py``
#. ``Script_ObservationOperator_H.py``
#. ``Script_Observation_yo.py``
#. ``Script_UserPostAnalysis.py``

We insist here that all these scripts are written by the user and cannot be
automatically tested. So the user is required to verify the scripts (and in
particular their inputs and outputs) in order to limit the difficulty of
debugging. We recall: **script methodology is not a "safe" procedure, in the
sense that erroneous data, or errors in calculations, can be directly injected
into the YACS scheme execution.**

Building the case with external data definition by scripts
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

All these scripts can then be used to define the ADAO case with external data
definition by Python script files. It is entirely similar to the method
described in the previous section `Building a simple estimation case with
external data definition by scripts`_. For each variable to be defined, we
select the "*Script*" option of the "*FROM*" keyword, which leads to a
"*SCRIPT_DATA/SCRIPT_FILE*" entry in the tree.

The other steps to build the ADAO case are exactly the same as in the previous
section `Building a simple estimation case with explicit data definition`_.

Using the simple linear operator :math:`\mathbf{H}` from the Python script
file ``Physical_simulation_functions.py`` in the ADAO examples standard
directory, the results will look like::

    xa = [ 1.000014  2.000458  3.000390]

    Step  0 : J = 1.81750e+03 et X = [1.014011, 2.459175, 3.390462]
    Step  1 : J = 1.81750e+03 et X = [1.014011, 2.459175, 3.390462]
    Step  2 : J = 1.79734e+01 et X = [1.010771, 2.040342, 2.961378]
    Step  3 : J = 1.79734e+01 et X = [1.010771, 2.040342, 2.961378]
    Step  4 : J = 1.81909e+00 et X = [1.000826, 2.000352, 3.000487]
    Step  5 : J = 1.81909e+00 et X = [1.000826, 2.000352, 3.000487]
    Step  6 : J = 1.81641e+00 et X = [1.000247, 2.000651, 3.000156]
    Step  7 : J = 1.81641e+00 et X = [1.000247, 2.000651, 3.000156]
    Step  8 : J = 1.81569e+00 et X = [1.000015, 2.000432, 3.000364]
    Step  9 : J = 1.81569e+00 et X = [1.000015, 2.000432, 3.000364]
    Step 10 : J = 1.81568e+00 et X = [1.000013, 2.000458, 3.000390]

The state at the first step is the randomly generated background state
:math:`\mathbf{x}^b`. After completion, this printing on standard output is
available in the "*YACS Container Log*", obtained through the right click menu
of the "*proc*" window in the YACS scheme.

.. [#] For more information on YACS, see the *YACS module User's Guide* available in the main "*Help*" menu of the SALOME GUI.