..
   Copyright (C) 2008-2015 EDF R&D

   This file is part of SALOME ADAO module.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com

   Author: Jean-Philippe Argaud, jean-philippe.argaud@edf.fr, EDF R&D
.. index:: single: UnscentedKalmanFilter
.. _section_ref_algorithm_UnscentedKalmanFilter:

Calculation algorithm "*UnscentedKalmanFilter*"
-----------------------------------------------
This algorithm realizes an estimation of the state of a dynamic system by an
"unscented" Kalman Filter, avoiding the need to use the tangent and adjoint
versions of the observation and evolution operators, as required by the
simple or extended Kalman filters.
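To illustrate the principle (this is only a sketch of the standard scaled
unscented transform, not ADAO's internal implementation; the function and the
test values below are hypothetical), a small set of "sigma points" is
propagated through the nonlinear operator itself, and the mean and covariance
are rebuilt from their images, so no tangent or adjoint operator is needed:

```python
import numpy as np

def unscented_transform(x, P, h, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through a nonlinear operator h
    using the scaled unscented transform (illustrative sketch only)."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean, plus/minus the scaled columns of a square
    # root of the covariance matrix
    S = np.linalg.cholesky((n + lam) * P)
    sigmas = np.vstack([x, x + S.T, x - S.T])          # shape (2n+1, n)
    # Weights for the mean and for the covariance
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wm[0] = lam / (n + lam)
    Wc = Wm.copy()
    Wc[0] += 1.0 - alpha**2 + beta
    # Propagate each sigma point through h: only evaluations are needed,
    # never a tangent or adjoint version of h
    Y = np.array([h(s) for s in sigmas])
    y_mean = Wm @ Y
    dY = Y - y_mean
    y_cov = (Wc * dY.T) @ dY
    return y_mean, y_cov

# Usage with a hypothetical quadratic observation operator
x = np.array([1.0, 2.0])
P = 0.01 * np.eye(2)
ym, yc = unscented_transform(x, P, lambda s: s**2)
```

Because only evaluations of ``h`` are required, the same mechanism applies to
any nonlinear observation or evolution operator.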
Optional and required commands
++++++++++++++++++++++++++++++

.. index:: single: AlgorithmParameters
.. index:: single: Background
.. index:: single: BackgroundError
.. index:: single: Observation
.. index:: single: ObservationError
.. index:: single: ObservationOperator
.. index:: single: Bounds
.. index:: single: ConstrainedBy
.. index:: single: EstimationOf
.. index:: single: Alpha
.. index:: single: Beta
.. index:: single: Kappa
.. index:: single: Reconditioner
.. index:: single: StoreSupplementaryCalculations
The general required commands, available in the editing user interface, are the
following:

Background
  *Required command*. This indicates the background or initial vector used,
  previously noted as :math:`\mathbf{x}^b`. Its value is defined as a
  "*Vector*" or a "*VectorSerie*" type object.

BackgroundError
  *Required command*. This indicates the background error covariance matrix,
  previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*"
  type object, a "*ScalarSparseMatrix*" type object, or a
  "*DiagonalSparseMatrix*" type object.

Observation
  *Required command*. This indicates the observation vector used for data
  assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. Its
  value is defined as a "*Vector*" or a "*VectorSerie*" type object.

ObservationError
  *Required command*. This indicates the observation error covariance matrix,
  previously noted as :math:`\mathbf{R}`. Its value is defined as a "*Matrix*"
  type object, a "*ScalarSparseMatrix*" type object, or a
  "*DiagonalSparseMatrix*" type object.

ObservationOperator
  *Required command*. This indicates the observation operator, previously
  noted as :math:`H`, which transforms the input parameters :math:`\mathbf{x}`
  into results :math:`\mathbf{y}` to be compared to observations
  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
  a "*Matrix*" type one. In the case of "*Function*" type, different
  functional forms can be used, as described in the section
  :ref:`section_ref_operator_requirements`. If there is some control
  :math:`U` included in the observation, the operator has to be applied to a
  pair :math:`(X,U)`.
The general optional commands, available in the editing user interface, are
indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters
of the command "*AlgorithmParameters*" allow choosing the specific options,
described hereafter, of the algorithm. See
:ref:`section_ref_options_Algorithm_Parameters` for the good use of this
command.

The options of the algorithm are the following:
Bounds
  This key allows defining upper and lower bounds for every state variable
  being optimized. Bounds have to be given as a list of pairs of lower/upper
  bounds for each variable, with extreme values every time there is no bound
  (``None`` is not allowed when there is no bound).

  Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}``
ConstrainedBy
  This key allows choosing the method to take into account the bounds
  constraints. The only one available is "EstimateProjection", which projects
  the current state estimate on the bounds constraints.

  Example : ``{"ConstrainedBy":"EstimateProjection"}``
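As a sketch of what such a projection does (illustrative NumPy code, not
ADAO's internal implementation; the state values are hypothetical), each
component of the current estimate is simply clipped to its interval, using the
same bounds layout as the "*Bounds*" key:

```python
import numpy as np

# Hypothetical current estimate and bounds, in the "Bounds" key layout
x = np.array([6.0, 5.0e-3, -50.0, 0.7])
bounds = [[2., 5.], [1.e-2, 10.], [-30., 1.e99], [-1.e99, 1.e99]]

lower = np.array([b[0] for b in bounds])
upper = np.array([b[1] for b in bounds])

# "EstimateProjection": project the estimate onto the bounds
x_projected = np.clip(x, lower, upper)
# x_projected is now [5.0, 0.01, -30.0, 0.7]
```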
EstimationOf
  This key allows choosing the type of estimation to be performed. It can be
  either state estimation, with a value of "State", or parameter estimation,
  with a value of "Parameters". The default choice is "State".

  Example : ``{"EstimationOf":"Parameters"}``
Alpha, Beta, Kappa, Reconditioner
  These keys are internal scaling parameters. "Alpha" requires a value
  between 1.e-4 and 1. "Beta" has an optimal value of 2 for a Gaussian *a
  priori* distribution. "Kappa" requires an integer value, and the right
  default is obtained by setting it to 0. "Reconditioner" requires a value
  between 1.e-3 and 10; it defaults to 1.

  Example : ``{"Alpha":1,"Beta":2,"Kappa":0,"Reconditioner":1}``
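Purely as an illustration of how these scaling parameters act, the following
sketch computes the sigma-point weights they induce in the standard scaled
unscented parametrization (the function is hypothetical, and the effect of
"Reconditioner" inside ADAO is not reproduced here):

```python
import numpy as np

def sigma_weights(n, alpha=1.0, beta=2.0, kappa=0.0):
    """Mean/covariance weights of the 2n+1 scaled sigma points for a state
    of dimension n (standard formulas, shown only to illustrate the roles
    of Alpha, Beta and Kappa)."""
    lam = alpha**2 * (n + kappa) - n
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wm[0] = lam / (n + lam)
    Wc = Wm.copy()
    Wc[0] += 1.0 - alpha**2 + beta   # Beta corrects the central covariance weight
    return Wm, Wc

# With the defaults Alpha=1, Beta=2, Kappa=0 and n=4: lam=0, so the central
# mean weight vanishes and the 2n other points each carry weight 1/(2n)
Wm, Wc = sigma_weights(n=4)
```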
StoreSupplementaryCalculations
  This list indicates the names of the supplementary variables that can be
  available at the end of the algorithm. It involves potentially costly
  calculations or memory consumption. The default is an empty list, none of
  these variables being calculated and stored by default. The possible names
  are in the following list: ["APosterioriCorrelations",
  "APosterioriCovariance", "APosterioriStandardDeviations",
  "APosterioriVariances", "BMA", "CostFunctionJ", "CostFunctionJb",
  "CostFunctionJo", "CurrentState", "Innovation"].

  Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}``
Information and variables available at the end of the algorithm
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

At the output, after executing the algorithm, there are variables and
information originating from the calculation. The description of
:ref:`section_ref_output_variables` shows the way to obtain them by the method
named ``get`` of the variable "*ADD*" of the post-processing. The input
variables, available to the user at the output in order to facilitate the
writing of post-processing procedures, are described in the
:ref:`subsection_r_o_v_Inventaire`.
The unconditional outputs of the algorithm are the following:

Analysis
  *List of vectors*. Each element is an optimal state :math:`\mathbf{x}^*` in
  optimization or an analysis :math:`\mathbf{x}^a` in data assimilation.

  Example : ``Xa = ADD.get("Analysis")[-1]``
The conditional outputs of the algorithm are the following:
APosterioriCorrelations
  *List of matrices*. Each element is an *a posteriori* error correlation
  matrix of the optimal state.

  Example : ``C = ADD.get("APosterioriCorrelations")[-1]``
APosterioriCovariance
  *List of matrices*. Each element is an *a posteriori* error covariance
  matrix :math:`\mathbf{A}^*` of the optimal state.

  Example : ``A = ADD.get("APosterioriCovariance")[-1]``
APosterioriStandardDeviations
  *List of matrices*. Each element is an *a posteriori* error standard
  deviation matrix of the optimal state.

  Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]``
APosterioriVariances
  *List of matrices*. Each element is an *a posteriori* error variance
  matrix of the optimal state.

  Example : ``V = ADD.get("APosterioriVariances")[-1]``
BMA
  *List of vectors*. Each element is a vector of the difference between the
  background and the optimal state.

  Example : ``bma = ADD.get("BMA")[-1]``
CostFunctionJ
  *List of values*. Each element is a value of the error function :math:`J`.

  Example : ``J = ADD.get("CostFunctionJ")[:]``
CostFunctionJb
  *List of values*. Each element is a value of the error function
  :math:`J^b`, that is of the background difference part.

  Example : ``Jb = ADD.get("CostFunctionJb")[:]``
CostFunctionJo
  *List of values*. Each element is a value of the error function
  :math:`J^o`, that is of the observation difference part.

  Example : ``Jo = ADD.get("CostFunctionJo")[:]``
CurrentState
  *List of vectors*. Each element is a usual state vector used during the
  optimization algorithm procedure.

  Example : ``Xs = ADD.get("CurrentState")[:]``
Innovation
  *List of vectors*. Each element is an innovation vector, which is in static
  the difference between the optimal and the background, and in dynamic the
  evolution increment.

  Example : ``d = ADD.get("Innovation")[-1]``
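As an illustration with hypothetical values, in the classical Kalman filtering
setting the static innovation for a linear observation operator is the misfit
between the observation and the background mapped into observation space
(this sketch is not ADAO code, and the matrices below are invented):

```python
import numpy as np

# Hypothetical linear observation operator H, background xb, observation yo
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
xb = np.array([1.0, 2.0])
yo = np.array([1.5, 1.8, 3.5])

# Classical innovation: observation minus the background image in
# observation space
d = yo - H @ xb
# d = [0.5, -0.2, 0.5]
```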
See also
++++++++

References to other sections:
  - :ref:`section_ref_algorithm_KalmanFilter`
  - :ref:`section_ref_algorithm_ExtendedKalmanFilter`
Bibliographical references: