From: Jean-Philippe ARGAUD Date: Sat, 20 Jan 2018 21:48:18 +0000 (+0100) Subject: Documentation corrections and modular evolution (3 EN) X-Git-Tag: V8_5_0rc1~26 X-Git-Url: http://git.salome-platform.org/gitweb/?a=commitdiff_plain;h=bafcd99daccb1628112af3d4ca1addc71e8bd7b2;p=modules%2Fadao.git Documentation corrections and modular evolution (3 EN) --- diff --git a/doc/en/ref_algorithm_3DVAR.rst b/doc/en/ref_algorithm_3DVAR.rst index 824237c..88eed05 100644 --- a/doc/en/ref_algorithm_3DVAR.rst +++ b/doc/en/ref_algorithm_3DVAR.rst @@ -41,59 +41,18 @@ which is usually designed as the "*3D-VAR*" function (see for example Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: Minimizer -.. index:: single: Bounds -.. index:: single: MaximumNumberOfSteps -.. index:: single: CostDecrementTolerance -.. index:: single: ProjectedGradientTolerance -.. index:: single: GradientNormTolerance -.. index:: single: StoreSupplementaryCalculations -.. index:: single: Quantiles -.. index:: single: SetSeed -.. index:: single: NumberOfSamplesForQuantiles -.. index:: single: SimulationForQuantiles - The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. 
This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -105,6 +64,8 @@ command. The options of the algorithm are the following: Minimizer + .. index:: single: Minimizer + This key allows to choose the optimization minimizer. The default choice is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear @@ -112,53 +73,22 @@ The options of the algorithm are the following: (nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer). It is strongly recommended to stay with the default. 
- Example : ``{"Minimizer":"LBFGSB"}`` - - Bounds - This key allows to define upper and lower bounds for every state variable - being optimized. Bounds have to be given by a list of list of pairs of - lower/upper bounds for each variable, with possibly ``None`` every time - there is no bound. The bounds can always be specified, but they are taken - into account only by the constrained optimizers. - - Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}`` - - MaximumNumberOfSteps - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 15000, which is very similar to no limit on - iterations. It is then recommended to adapt this parameter to the needs on - real problems. For some optimizers, the effective stopping step can be - slightly different of the limit due to algorithm internal control - requirements. - - Example : ``{"MaximumNumberOfSteps":100}`` - - CostDecrementTolerance - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the cost function decreases less than - this tolerance at the last step. The default is 1.e-7, and it is - recommended to adapt it to the needs on real problems. + Example : + ``{"Minimizer":"LBFGSB"}`` - Example : ``{"CostDecrementTolerance":1.e-7}`` + .. include:: snippets/BoundsWithNone.rst - ProjectedGradientTolerance - This key indicates a limit value, leading to stop successfully the iterative - optimization process when all the components of the projected gradient are - under this limit. It is only used for constrained optimizers. The default is - -1, that is the internal default of each minimizer (generally 1.e-5), and it - is not recommended to change it. + .. include:: snippets/MaximumNumberOfSteps.rst - Example : ``{"ProjectedGradientTolerance":-1}`` + .. 
include:: snippets/CostDecrementTolerance.rst - GradientNormTolerance - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the norm of the gradient is under this - limit. It is only used for non-constrained optimizers. The default is - 1.e-5 and it is not recommended to change it. + .. include:: snippets/ProjectedGradientTolerance.rst - Example : ``{"GradientNormTolerance":1.e-5}`` + .. include:: snippets/GradientNormTolerance.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -175,45 +105,16 @@ The options of the algorithm are the following: "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum", "SimulationQuantiles"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` - - Quantiles - This list indicates the values of quantile, between 0 and 1, to be estimated - by simulation around the optimal state. The sampling uses a multivariate - Gaussian random sampling, directed by the *a posteriori* covariance matrix. - This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen. The default is a void list. - - Example : ``{"Quantiles":[0.1,0.9]}`` - - SetSeed - This key allow to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so use the default - initialization from the computer. - - Example : ``{"SetSeed":1000}`` + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` - NumberOfSamplesForQuantiles - This key indicates the number of simulation to be done in order to estimate - the quantiles. 
This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen. The default is 100, which is often - sufficient for correct estimation of common quantiles at 5%, 10%, 90% or - 95%. + .. include:: snippets/Quantiles.rst - Example : ``{"NumberOfSamplesForQuantiles":100}`` + .. include:: snippets/SetSeed.rst - SimulationForQuantiles - This key indicates the type of simulation, linear (with the tangent - observation operator applied to perturbation increments around the optimal - state) or non-linear (with standard observation operator applied to - perturbed states), one want to do for each perturbation. It changes mainly - the time of each elementary calculation, usually longer in non-linear than - in linear. This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen. The default value is "Linear", and - the possible choices are "Linear" and "NonLinear". + .. include:: snippets/NumberOfSamplesForQuantiles.rst - Example : ``{"SimulationForQuantiles":"Linear"}`` + .. include:: snippets/SimulationForQuantiles.rst Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -228,171 +129,59 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. + .. include:: snippets/Analysis.rst - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. include:: snippets/CostFunctionJ.rst - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. + .. include:: snippets/CostFunctionJb.rst - Example : ``J = ADD.get("CostFunctionJ")[:]`` - - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. 
- - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. - - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJo.rst The conditional outputs of the algorithm are the following: - APosterioriCorrelations - *List of matrices*. Each element is an *a posteriori* error correlations - matrix of the optimal state, coming from the :math:`\mathbf{A}*` covariance - matrix. - - Example : ``C = ADD.get("APosterioriCorrelations")[-1]`` - - APosterioriCovariance - *List of matrices*. Each element is an *a posteriori* error covariance - matrix :math:`\mathbf{A}*` of the optimal state. - - Example : ``A = ADD.get("APosterioriCovariance")[-1]`` - - APosterioriStandardDeviations - *List of matrices*. Each element is an *a posteriori* error standard - errors diagonal matrix of the optimal state, coming from the - :math:`\mathbf{A}*` covariance matrix. - - Example : ``S = ADD.get("APosterioriStandardDeviations")[-1]`` - - APosterioriVariances - *List of matrices*. Each element is an *a posteriori* error variance - errors diagonal matrix of the optimal state, coming from the - :math:`\mathbf{A}*` covariance matrix. - - Example : ``V = ADD.get("APosterioriVariances")[-1]`` - - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CostFunctionJAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J`. - At each step, the value corresponds to the optimal state found from the - beginning. - - Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]`` - - CostFunctionJbAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. 
At each step, the value - corresponds to the optimal state found from the beginning. - - Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]`` - - CostFunctionJoAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. At each step, the value - corresponds to the optimal state found from the beginning. - - Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]`` - - CurrentOptimum - *List of vectors*. Each element is the optimal state obtained at the current - step of the optimization algorithm. It is not necessarily the last state. - - Example : ``Xo = ADD.get("CurrentOptimum")[:]`` - - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - IndexOfOptimum - *List of integers*. Each element is the iteration index of the optimum - obtained at the current step the optimization algorithm. It is not - necessarily the number of the last iteration. - - Example : ``i = ADD.get("IndexOfOptimum")[-1]`` - - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. + .. include:: snippets/APosterioriCorrelations.rst - Example : ``d = ADD.get("Innovation")[-1]`` + .. include:: snippets/APosterioriCovariance.rst - InnovationAtCurrentState - *List of vectors*. Each element is an innovation vector at current state. + .. include:: snippets/APosterioriStandardDeviations.rst - Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]`` + .. include:: snippets/APosterioriVariances.rst - MahalanobisConsistency - *List of values*. Each element is a value of the Mahalanobis quality - indicator. + .. include:: snippets/BMA.rst - Example : ``m = ADD.get("MahalanobisConsistency")[-1]`` + .. 
include:: snippets/CostFunctionJAtCurrentOptimum.rst - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. + .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst - Example : ``oma = ADD.get("OMA")[-1]`` + .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst - OMB - *List of vectors*. Each element is a vector of difference between the - observation and the background state in the observation space. + .. include:: snippets/CurrentOptimum.rst - Example : ``omb = ADD.get("OMB")[-1]`` + .. include:: snippets/CurrentState.rst - SigmaObs2 - *List of values*. Each element is a value of the quality indicator - :math:`(\sigma^o)^2` of the observation part. + .. include:: snippets/IndexOfOptimum.rst - Example : ``so2 = ADD.get("SigmaObs")[-1]`` + .. include:: snippets/Innovation.rst - SimulatedObservationAtBackground - *List of vectors*. Each element is a vector of observation simulated from - the background :math:`\mathbf{x}^b`. + .. include:: snippets/InnovationAtCurrentState.rst - Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]`` + .. include:: snippets/MahalanobisConsistency.rst - SimulatedObservationAtCurrentOptimum - *List of vectors*. Each element is a vector of observation simulated from - the optimal state obtained at the current step the optimization algorithm, - that is, in the observation space. + .. include:: snippets/OMA.rst - Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]`` + .. include:: snippets/OMB.rst - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/SigmaObs2.rst - Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtBackground.rst - SimulatedObservationAtOptimum - *List of vectors*. 
Each element is a vector of observation simulated from - the analysis or optimal state :math:`\mathbf{x}^a`. + .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentState.rst - SimulationQuantiles - *List of vectors*. Each element is a vector corresponding to the observed - state which realize the required quantile, in the same order than the - quantiles required by the user. + .. include:: snippets/SimulatedObservationAtOptimum.rst - Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]`` + .. include:: snippets/SimulationQuantiles.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_4DVAR.rst b/doc/en/ref_algorithm_4DVAR.rst index 74065fa..5b721f9 100644 --- a/doc/en/ref_algorithm_4DVAR.rst +++ b/doc/en/ref_algorithm_4DVAR.rst @@ -50,56 +50,23 @@ filters, specially the :ref:`section_ref_algorithm_ExtendedKalmanFilter` or the Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: Bounds -.. index:: single: ConstrainedBy -.. index:: single: EstimationOf -.. index:: single: MaximumNumberOfSteps -.. index:: single: CostDecrementTolerance -.. index:: single: ProjectedGradientTolerance -.. index:: single: GradientNormTolerance -.. index:: single: StoreSupplementaryCalculations The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. 
This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/EvolutionError.rst + + .. include:: snippets/EvolutionModel.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -111,6 +78,8 @@ command. The options of the algorithm are the following: Minimizer + .. index:: single: Minimizer + This key allows to choose the optimization minimizer. 
The default choice is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear @@ -118,67 +87,26 @@ The options of the algorithm are the following: (nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer). It is strongly recommended to stay with the default. - Example : ``{"Minimizer":"LBFGSB"}`` - - Bounds - This key allows to define upper and lower bounds for every state variable - being optimized. Bounds have to be given by a list of list of pairs of - lower/upper bounds for each variable, with possibly ``None`` every time - there is no bound. The bounds can always be specified, but they are taken - into account only by the constrained optimizers. - - Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}`` - - ConstrainedBy - This key allows to choose the method to take into account the bounds - constraints. The only one available is the "EstimateProjection", which - projects the current state estimate on the bounds constraints. - - Example : ``{"ConstrainedBy":"EstimateProjection"}`` - - MaximumNumberOfSteps - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 15000, which is very similar to no limit on - iterations. It is then recommended to adapt this parameter to the needs on - real problems. For some optimizers, the effective stopping step can be - slightly different of the limit due to algorithm internal control - requirements. - - Example : ``{"MaximumNumberOfSteps":100}`` + Example : + ``{"Minimizer":"LBFGSB"}`` - CostDecrementTolerance - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the cost function decreases less than - this tolerance at the last step. The default is 1.e-7, and it is - recommended to adapt it to the needs on real problems. + .. include:: snippets/BoundsWithNone.rst - Example : ``{"CostDecrementTolerance":1.e-7}`` + .. 
include:: snippets/ConstrainedBy.rst - EstimationOf - This key allows to choose the type of estimation to be performed. It can be - either state-estimation, with a value of "State", or parameter-estimation, - with a value of "Parameters". The default choice is "State". + .. include:: snippets/MaximumNumberOfSteps.rst - Example : ``{"EstimationOf":"Parameters"}`` + .. include:: snippets/CostDecrementTolerance.rst - ProjectedGradientTolerance - This key indicates a limit value, leading to stop successfully the iterative - optimization process when all the components of the projected gradient are - under this limit. It is only used for constrained optimizers. The default is - -1, that is the internal default of each minimizer (generally 1.e-5), and it - is not recommended to change it. + .. include:: snippets/EstimationOf.rst - Example : ``{"ProjectedGradientTolerance":-1}`` + .. include:: snippets/ProjectedGradientTolerance.rst - GradientNormTolerance - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the norm of the gradient is under this - limit. It is only used for non-constrained optimizers. The default is - 1.e-5 and it is not recommended to change it. - - Example : ``{"GradientNormTolerance":1.e-5}`` + .. include:: snippets/GradientNormTolerance.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -203,76 +131,29 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. 
- - Example : ``Xa = ADD.get("Analysis")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. + .. include:: snippets/Analysis.rst - Example : ``J = ADD.get("CostFunctionJ")[:]`` + .. include:: snippets/CostFunctionJ.rst - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. + .. include:: snippets/CostFunctionJb.rst - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. - - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJo.rst The conditional outputs of the algorithm are the following: - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CostFunctionJAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J`. - At each step, the value corresponds to the optimal state found from the - beginning. - - Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]`` - - CostFunctionJbAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. At each step, the value - corresponds to the optimal state found from the beginning. - - Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]`` - - CostFunctionJoAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. At each step, the value - corresponds to the optimal state found from the beginning. - - Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]`` + .. include:: snippets/BMA.rst - CurrentOptimum - *List of vectors*. 
Each element is the optimal state obtained at the current - step of the optimization algorithm. It is not necessarily the last state. + .. include:: snippets/CostFunctionJAtCurrentOptimum.rst - Example : ``Xo = ADD.get("CurrentOptimum")[:]`` + .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. + .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst - Example : ``Xs = ADD.get("CurrentState")[:]`` + .. include:: snippets/CurrentOptimum.rst - IndexOfOptimum - *List of integers*. Each element is the iteration index of the optimum - obtained at the current step the optimization algorithm. It is not - necessarily the number of the last iteration. + .. include:: snippets/CurrentState.rst - Example : ``i = ADD.get("IndexOfOptimum")[-1]`` + .. include:: snippets/IndexOfOptimum.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_AdjointTest.rst b/doc/en/ref_algorithm_AdjointTest.rst index 5c91235..296ac27 100644 --- a/doc/en/ref_algorithm_AdjointTest.rst +++ b/doc/en/ref_algorithm_AdjointTest.rst @@ -47,33 +47,13 @@ take :math:`\mathbf{y} = F(\mathbf{x})`. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: CheckingPoint -.. index:: single: ObservationOperator -.. index:: single: AmplitudeOfInitialDirection -.. index:: single: EpsilonMinimumExponent -.. index:: single: InitialDirection -.. index:: single: SetSeed -.. index:: single: StoreSupplementaryCalculations The general required commands, available in the editing user interface, are the following: - CheckingPoint - *Required command*. This indicates the vector used as the state around which - to perform the required check, noted :math:`\mathbf{x}` and similar to the - background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object. - - ObservationOperator - *Required command*. 
This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control - :math:`U` included in the observation, the operator has to be applied to a - pair :math:`(X,U)`. + .. include:: snippets/CheckingPoint.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -84,39 +64,17 @@ command. The options of the algorithm are the following: - AmplitudeOfInitialDirection - This key indicates the scaling of the initial perturbation build as a vector - used for the directional derivative around the nominal checking point. The - default is 1, that means no scaling. - - Example : ``{"AmplitudeOfInitialDirection":0.5}`` - - EpsilonMinimumExponent - This key indicates the minimal exponent value of the power of 10 coefficient - to be used to decrease the increment multiplier. The default is -8, and it - has to be between 0 and -20. For example, its default value leads to - calculate the residue of the formula with a fixed increment multiplied from - 1.e0 to 1.e-8. - - Example : ``{"EpsilonMinimumExponent":-12}`` - - InitialDirection - This key indicates the vector direction used for the directional derivative - around the nominal checking point. It has to be a vector. If not specified, - this direction defaults to a random perturbation around zero of the same - vector size than the checking point. + .. include:: snippets/AmplitudeOfInitialDirection.rst - Example : ``{"InitialDirection":[0.1,0.1,100.,3}`` + .. 
include:: snippets/EpsilonMinimumExponent.rst - SetSeed - This key allow to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so use the default - initialization from the computer. + .. include:: snippets/InitialDirection.rst - Example : ``{"SetSeed":1000}`` + .. include:: snippets/SetSeed.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -124,7 +82,8 @@ The options of the algorithm are the following: are in the following list: ["CurrentState", "Residu", "SimulatedObservationAtCurrentState"]. - Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}`` + Example : + ``{"StoreSupplementaryCalculations":["CurrentState"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -139,25 +98,13 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Residu - *List of values*. Each element is the value of the particular residue - verified during a checking algorithm, in the order of the tests. - - Example : ``r = ADD.get("Residu")[:]`` + .. include:: snippets/Residu.rst The conditional outputs of the algorithm are the following: - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. 
include:: snippets/CurrentState.rst - Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentState.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_Blue.rst b/doc/en/ref_algorithm_Blue.rst index 37913f4..a15ad80 100644 --- a/doc/en/ref_algorithm_Blue.rst +++ b/doc/en/ref_algorithm_Blue.rst @@ -46,53 +46,19 @@ In case of non-linearity, even slightly marked, it will be easily preferred the Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: StoreSupplementaryCalculations -.. index:: single: Quantiles -.. index:: single: SetSeed -.. index:: single: NumberOfSamplesForQuantiles -.. index:: single: SimulationForQuantiles The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. 
It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -104,6 +70,8 @@ command. The options of the algorithm are the following: StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -116,45 +84,16 @@ The options of the algorithm are the following: "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` - - Quantiles - This list indicates the values of quantile, between 0 and 1, to be estimated - by simulation around the optimal state. The sampling uses a multivariate - Gaussian random sampling, directed by the *a posteriori* covariance matrix. 
- This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen. The default is a void list. - - Example : ``{"Quantiles":[0.1,0.9]}`` + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` - SetSeed - This key allows to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so the default - initialization from the computer is used. + .. include:: snippets/Quantiles.rst - Example : ``{"SetSeed":1000}`` + .. include:: snippets/SetSeed.rst - NumberOfSamplesForQuantiles - This key indicates the number of simulations to be done in order to estimate - the quantiles. This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen. The default is 100, which is often - sufficient for correct estimation of common quantiles at 5%, 10%, 90% or - 95%. + .. include:: snippets/NumberOfSamplesForQuantiles.rst - Example : ``{"NumberOfSamplesForQuantiles":100}`` - - SimulationForQuantiles - This key indicates the type of simulation, linear (with the tangent - observation operator applied to perturbation increments around the optimal - state) or non-linear (with the standard observation operator applied to - perturbed states), that one wants to do for each perturbation. It changes mainly - the time of each elementary calculation, usually longer in non-linear than - in linear. This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen. The default value is "Linear", and - the possible choices are "Linear" and "NonLinear". - - Example : ``{"SimulationForQuantiles":"Linear"}`` + ..
include:: snippets/SimulationForQuantiles.rst Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -169,116 +108,44 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. include:: snippets/Analysis.rst The conditional outputs of the algorithm are the following: - APosterioriCorrelations - *List of matrices*. Each element is an *a posteriori* error correlation - matrix of the optimal state. - - Example : ``C = ADD.get("APosterioriCorrelations")[-1]`` - - APosterioriCovariance - *List of matrices*. Each element is an *a posteriori* error covariance - matrix :math:`\mathbf{A}*` of the optimal state. - - Example : ``A = ADD.get("APosterioriCovariance")[-1]`` - - APosterioriStandardDeviations - *List of matrices*. Each element is an *a posteriori* error standard - deviation matrix of the optimal state. - - Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]`` - - APosterioriVariances - *List of matrices*. Each element is an *a posteriori* error variance matrix - of the optimal state. - - Example : ``V = ADD.get("APosterioriVariances")[-1]`` - - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. - - Example : ``J = ADD.get("CostFunctionJ")[:]`` - - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. - - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. 
Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. - - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` - - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. + .. include:: snippets/APosterioriCorrelations.rst - Example : ``d = ADD.get("Innovation")[-1]`` + .. include:: snippets/APosterioriCovariance.rst - MahalanobisConsistency - *List of values*. Each element is a value of the Mahalanobis quality - indicator. + .. include:: snippets/APosterioriStandardDeviations.rst - Example : ``m = ADD.get("MahalanobisConsistency")[-1]`` + .. include:: snippets/APosterioriVariances.rst - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. + .. include:: snippets/BMA.rst - Example : ``oma = ADD.get("OMA")[-1]`` + .. include:: snippets/CostFunctionJ.rst - OMB - *List of vectors*. Each element is a vector of difference between the - observation and the background state in the observation space. + .. include:: snippets/CostFunctionJb.rst - Example : ``omb = ADD.get("OMB")[-1]`` + .. include:: snippets/CostFunctionJo.rst - SigmaBck2 - *List of values*. Each element is a value of the quality indicator - :math:`(\sigma^b)^2` of the background part. + .. include:: snippets/Innovation.rst - Example : ``sb2 = ADD.get("SigmaBck")[-1]`` + .. include:: snippets/MahalanobisConsistency.rst - SigmaObs2 - *List of values*. Each element is a value of the quality indicator - :math:`(\sigma^o)^2` of the observation part. + .. include:: snippets/OMA.rst - Example : ``so2 = ADD.get("SigmaObs")[-1]`` + .. include:: snippets/OMB.rst - SimulatedObservationAtBackground - *List of vectors*. Each element is a vector of observation simulated from - the background :math:`\mathbf{x}^b`. + .. 
include:: snippets/SigmaBck2.rst - Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]`` + .. include:: snippets/SigmaObs2.rst - SimulatedObservationAtOptimum - *List of vectors*. Each element is a vector of observation simulated from - the analysis or optimal state :math:`\mathbf{x}^a`. + .. include:: snippets/SimulatedObservationAtBackground.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` + .. include:: snippets/SimulatedObservationAtOptimum.rst - SimulationQuantiles - *List of vectors*. Each element is a vector corresponding to the observed - state which realize the required quantile, in the same order than the - quantiles required by the user. + .. include:: snippets/SimulationQuantiles.rst - Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]`` See also ++++++++ diff --git a/doc/en/ref_algorithm_DerivativeFreeOptimization.rst b/doc/en/ref_algorithm_DerivativeFreeOptimization.rst index f137084..b369345 100644 --- a/doc/en/ref_algorithm_DerivativeFreeOptimization.rst +++ b/doc/en/ref_algorithm_DerivativeFreeOptimization.rst @@ -48,55 +48,18 @@ least squares function, classically used in data assimilation. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: Minimizer -.. index:: single: MaximumNumberOfSteps -.. index:: single: MaximumNumberOfFunctionEvaluations -.. index:: single: StateVariationTolerance -.. index:: single: CostDecrementTolerance -.. index:: single: QualityCriterion -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. 
Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -107,77 +70,23 @@ command. The options of the algorithm are the following: - Minimizer - This key allows to choose the optimization minimizer. 
The default choice is - "BOBYQA", and the possible ones are - "BOBYQA" (minimization with or without constraints by quadratic approximation [Powell09]_), - "COBYLA" (minimization with or without constraints by linear approximation [Powell94]_ [Powell98]_), - "NEWUOA" (minimization with or without constraints by iterative quadratic approximation [Powell04]_), - "POWELL" (unconstrained minimization using conjugate directions [Powell64]_), - "SIMPLEX" (minimization with or without constraints using the Nelder-Mead simplex algorithm [Nelder65]_), - "SUBPLEX" (minimization with or without constraints using Nelder-Mead on a sequence of subspaces [Rowan90]_). - Remark: the "POWELL" method performs a dual outer/inner loop optimization, - thus giving less control on the number of cost function evaluations because - it is the outer loop limit that is controlled. If precise control on this - number of cost function evaluations is required, choose another minimizer. - - Example : ``{"Minimizer":"BOBYQA"}`` - - Bounds - This key allows to define upper and lower bounds for every state variable - being optimized. Bounds have to be given as a list of pairs of - lower/upper bounds for each variable, with possibly ``None`` every time - there is no bound. The bounds can always be specified, but they are taken - into account only by the constrained optimizers. - - Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}`` - - MaximumNumberOfSteps - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 15000, which is very similar to no limit on - iterations. It is then recommended to adapt this parameter to the needs of - real problems. For some optimizers, the effective stopping step can be - slightly different from the limit due to algorithm internal control - requirements.
- - Example : ``{"MaximumNumberOfSteps":50}`` - - MaximumNumberOfFunctionEvaluations - This key indicates the maximum number of evaluation of the cost function to - be optimized. The default is 15000, which is an arbitrary limit. It is then - recommended to adapt this parameter to the needs on real problems. For some - optimizers, the effective number of function evaluations can be slightly - different of the limit due to algorithm internal control requirements. - - Example : ``{"MaximumNumberOfFunctionEvaluations":50}`` - - StateVariationTolerance - This key indicates the maximum relative variation of the state for stopping - by convergence on the state. The default is 1.e-4, and it is recommended to - adapt it to the needs on real problems. - - Example : ``{"StateVariationTolerance":1.e-4}`` - - CostDecrementTolerance - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the cost function decreases less than - this tolerance at the last step. The default is 1.e-7, and it is - recommended to adapt it to the needs on real problems. - - Example : ``{"CostDecrementTolerance":1.e-7}`` - - QualityCriterion - This key indicates the quality criterion, minimized to find the optimal - state estimate. The default is the usual data assimilation criterion named - "DA", the augmented weighted least squares. The possible criteria has to be - in the following list, where the equivalent names are indicated by the sign - "=": ["AugmentedWeightedLeastSquares"="AWLS"="DA", - "WeightedLeastSquares"="WLS", "LeastSquares"="LS"="L2", - "AbsoluteValue"="L1", "MaximumError"="ME"]. - - Example : ``{"QualityCriterion":"DA"}`` + .. include:: snippets/Minimizer_DFO.rst + + .. include:: snippets/BoundsWithNone.rst + + .. include:: snippets/MaximumNumberOfSteps.rst + + .. include:: snippets/MaximumNumberOfFunctionEvaluations.rst + + .. include:: snippets/StateVariationTolerance.rst + + .. include:: snippets/CostDecrementTolerance.rst + + .. 
include:: snippets/QualityCriterion.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -190,7 +99,8 @@ The options of the algorithm are the following: "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentOptimum", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -205,112 +115,41 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. - - Example : ``J = ADD.get("CostFunctionJ")[:]`` + .. include:: snippets/Analysis.rst - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. + .. include:: snippets/CostFunctionJ.rst - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` + .. include:: snippets/CostFunctionJb.rst - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. + .. include:: snippets/CostFunctionJo.rst - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` - - CurrentState - *List of vectors*. 
Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` + .. include:: snippets/CurrentState.rst The conditional outputs of the algorithm are the following: - CostFunctionJAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J`. - At each step, the value corresponds to the optimal state found from the - beginning. - - Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]`` - - CostFunctionJbAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. At each step, the value - corresponds to the optimal state found from the beginning. - - Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]`` - - CostFunctionJoAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. At each step, the value - corresponds to the optimal state found from the beginning. - - Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]`` - - CurrentOptimum - *List of vectors*. Each element is the optimal state obtained at the current - step of the optimization algorithm. It is not necessarily the last state. - - Example : ``Xo = ADD.get("CurrentOptimum")[:]`` - - IndexOfOptimum - *List of integers*. Each element is the iteration index of the optimum - obtained at the current step the optimization algorithm. It is not - necessarily the number of the last iteration. - - Example : ``i = ADD.get("IndexOfOptimum")[-1]`` - - InnovationAtCurrentState - *List of vectors*. Each element is an innovation vector at current state. - - Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]`` - - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. + .. 
include:: snippets/CostFunctionJAtCurrentOptimum.rst - Example : ``oma = ADD.get("OMA")[-1]`` + .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst - OMB - *List of vectors*. Each element is a vector of difference between the - observation and the background state in the observation space. + .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst - Example : ``omb = ADD.get("OMB")[-1]`` + .. include:: snippets/CurrentOptimum.rst - SimulatedObservationAtBackground - *List of vectors*. Each element is a vector of observation simulated from - the background :math:`\mathbf{x}^b`. + .. include:: snippets/IndexOfOptimum.rst - Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]`` + .. include:: snippets/InnovationAtCurrentState.rst - SimulatedObservationAtCurrentOptimum - *List of vectors*. Each element is a vector of observation simulated from - the optimal state obtained at the current step the optimization algorithm, - that is, in the observation space. + .. include:: snippets/OMA.rst - Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]`` + .. include:: snippets/OMB.rst - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/SimulatedObservationAtBackground.rst - Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst - SimulatedObservationAtOptimum - *List of vectors*. Each element is a vector of observation simulated from - the analysis or optimal state :math:`\mathbf{x}^a`. + .. include:: snippets/SimulatedObservationAtCurrentState.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` + .. 
include:: snippets/SimulatedObservationAtOptimum.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_EnsembleBlue.rst b/doc/en/ref_algorithm_EnsembleBlue.rst index c2fe883..1a65a65 100644 --- a/doc/en/ref_algorithm_EnsembleBlue.rst +++ b/doc/en/ref_algorithm_EnsembleBlue.rst @@ -43,49 +43,19 @@ linearity of the observation operator with the help of the Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: SetSeed The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. 
Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -96,7 +66,11 @@ command. The options of the algorithm are the following: + .. include:: snippets/SetSeed.rst + StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -105,15 +79,8 @@ The options of the algorithm are the following: "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. - Example : ``{"StoreSupplementaryCalculations":["CurrentState", "Innovation"]}`` - - SetSeed - This key allows to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so the default - initialization from the computer is used.
- - Example : ``{"SetSeed":1000}`` + Example : + ``{"StoreSupplementaryCalculations":["CurrentState", "Innovation"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -128,24 +95,11 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` - - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` + .. include:: snippets/Analysis.rst - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. + .. include:: snippets/CurrentState.rst - Example : ``d = ADD.get("Innovation")[-1]`` + .. include:: snippets/Innovation.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_ExtendedBlue.rst b/doc/en/ref_algorithm_ExtendedBlue.rst index 2b66df8..537043d 100644 --- a/doc/en/ref_algorithm_ExtendedBlue.rst +++ b/doc/en/ref_algorithm_ExtendedBlue.rst @@ -44,53 +44,18 @@ without being entirely equivalent. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: StoreSupplementaryCalculations -.. index:: single: Quantiles -.. index:: single: SetSeed -.. index:: single: NumberOfSamplesForQuantiles -.. 
index:: single: SimulationForQuantiles - The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. 
include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -102,6 +67,8 @@ command. The options of the algorithm are the following: StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -114,45 +81,16 @@ The options of the algorithm are the following: "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` - - Quantiles - This list indicates the values of the quantiles, between 0 and 1, to be estimated - by simulation around the optimal state. The sampling uses a multivariate - Gaussian random sampling, directed by the *a posteriori* covariance matrix. - This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen. The default is a void list. + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` - Example : ``{"Quantiles":[0.1,0.9]}`` + .. include:: snippets/Quantiles.rst - SetSeed - This key allows to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so the default - initialization from the computer is used. + .. include:: snippets/SetSeed.rst - Example : ``{"SetSeed":1000}`` + .. include:: snippets/NumberOfSamplesForQuantiles.rst - NumberOfSamplesForQuantiles - This key indicates the number of simulations to be done in order to estimate - the quantiles. This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen.
The default is 100, which is often - sufficient for correct estimation of common quantiles at 5%, 10%, 90% or - 95%. - - Example : ``{"NumberOfSamplesForQuantiles":100}`` - - SimulationForQuantiles - This key indicates the type of simulation, linear (with the tangent - observation operator applied to perturbation increments around the optimal - state) or non-linear (with the standard observation operator applied to - perturbed states), that one wants to do for each perturbation. It changes mainly - the time of each elementary calculation, usually longer in non-linear than - in linear. This option is useful only if the supplementary calculation - "SimulationQuantiles" has been chosen. The default value is "Linear", and - the possible choices are "Linear" and "NonLinear". - - Example : ``{"SimulationForQuantiles":"Linear"}`` + .. include:: snippets/SimulationForQuantiles.rst Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -167,116 +105,43 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. include:: snippets/Analysis.rst The conditional outputs of the algorithm are the following: - APosterioriCorrelations - *List of matrices*. Each element is an *a posteriori* error correlation - matrix of the optimal state. - - Example : ``C = ADD.get("APosterioriCorrelations")[-1]`` - - APosterioriCovariance - *List of matrices*. Each element is an *a posteriori* error covariance - matrix :math:`\mathbf{A}*` of the optimal state. - - Example : ``A = ADD.get("APosterioriCovariance")[-1]`` - - APosterioriStandardDeviations - *List of matrices*.
Each element is an *a posteriori* error standard - deviation matrix of the optimal state. - - Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]`` - - APosterioriVariances - *List of matrices*. Each element is an *a posteriori* error variance matrix - of the optimal state. - - Example : ``V = ADD.get("APosterioriVariances")[-1]`` - - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. - - Example : ``J = ADD.get("CostFunctionJ")[:]`` - - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. - - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. - - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` - - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. - - Example : ``d = ADD.get("Innovation")[-1]`` + .. include:: snippets/APosterioriCorrelations.rst - MahalanobisConsistency - *List of values*. Each element is a value of the Mahalanobis quality - indicator. + .. include:: snippets/APosterioriCovariance.rst - Example : ``m = ADD.get("MahalanobisConsistency")[-1]`` + .. include:: snippets/APosterioriStandardDeviations.rst - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. + .. include:: snippets/APosterioriVariances.rst - Example : ``oma = ADD.get("OMA")[-1]`` + .. include:: snippets/BMA.rst - OMB - *List of vectors*. 
Each element is a vector of difference between the - observation and the background state in the observation space. + .. include:: snippets/CostFunctionJ.rst - Example : ``omb = ADD.get("OMB")[-1]`` + .. include:: snippets/CostFunctionJb.rst - SigmaBck2 - *List of values*. Each element is a value of the quality indicator - :math:`(\sigma^b)^2` of the background part. + .. include:: snippets/CostFunctionJo.rst - Example : ``sb2 = ADD.get("SigmaBck")[-1]`` + .. include:: snippets/Innovation.rst - SigmaObs2 - *List of values*. Each element is a value of the quality indicator - :math:`(\sigma^o)^2` of the observation part. + .. include:: snippets/MahalanobisConsistency.rst - Example : ``so2 = ADD.get("SigmaObs")[-1]`` + .. include:: snippets/OMA.rst - SimulatedObservationAtBackground - *List of vectors*. Each element is a vector of observation simulated from - the background :math:`\mathbf{x}^b`. + .. include:: snippets/OMB.rst - Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]`` + .. include:: snippets/SigmaBck2.rst - SimulatedObservationAtOptimum - *List of vectors*. Each element is a vector of observation simulated from - the analysis or optimal state :math:`\mathbf{x}^a`. + .. include:: snippets/SigmaObs2.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` + .. include:: snippets/SimulatedObservationAtBackground.rst - SimulationQuantiles - *List of vectors*. Each element is a vector corresponding to the observed - state which realize the required quantile, in the same order than the - quantiles required by the user. + .. include:: snippets/SimulatedObservationAtOptimum.rst - Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]`` + .. 
include:: snippets/SimulationQuantiles.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_ExtendedKalmanFilter.rst b/doc/en/ref_algorithm_ExtendedKalmanFilter.rst index 97d1004..bd074c1 100644 --- a/doc/en/ref_algorithm_ExtendedKalmanFilter.rst +++ b/doc/en/ref_algorithm_ExtendedKalmanFilter.rst @@ -31,57 +31,34 @@ Description +++++++++++ This algorithm realizes an estimation of the state of a dynamic system by a -extended Kalman Filter, using a non-linear calculation of the state. +extended Kalman Filter, using a non-linear calculation of the state and the +incremental evolution (process). + +In case of strongly non-linear operators, one can instead use the +:ref:`section_ref_algorithm_EnsembleKalmanFilter` or the +:ref:`section_ref_algorithm_UnscentedKalmanFilter`, which are often far better +suited to non-linear behavior but more costly. One can verify the linearity of +the operators with the help of the :ref:`section_ref_algorithm_LinearityTest`. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: Bounds -.. index:: single: ConstrainedBy -.. index:: single: EstimationOf -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*.
This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/EvolutionError.rst + + .. include:: snippets/EvolutionModel.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -92,29 +69,15 @@ command. The options of the algorithm are the following: - Bounds - This key allows to define upper and lower bounds for every state variable - being optimized. Bounds have to be given by a list of list of pairs of - lower/upper bounds for each variable, with extreme values every time there - is no bound (``None`` is not allowed when there is no bound). 
- - Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}`` + .. include:: snippets/BoundsWithExtremes.rst - ConstrainedBy - This key allows to choose the method to take into account the bounds - constraints. The only one available is the "EstimateProjection", which - projects the current state estimate on the bounds constraints. + .. include:: snippets/ConstrainedBy.rst - Example : ``{"ConstrainedBy":"EstimateProjection"}`` - - EstimationOf - This key allows to choose the type of estimation to be performed. It can be - either state-estimation, with a value of "State", or parameter-estimation, - with a value of "Parameters". The default choice is "State". - - Example : ``{"EstimationOf":"Parameters"}`` + .. include:: snippets/EstimationOf.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -124,7 +87,8 @@ The options of the algorithm are the following: "APosterioriVariances", "BMA", "CostFunctionJ", "CostFunctionJb", "CostFunctionJo", "CurrentState", "Innovation"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -139,77 +103,34 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. 
include:: snippets/Analysis.rst The conditional outputs of the algorithm are the following: - APosterioriCorrelations - *List of matrices*. Each element is an *a posteriori* error correlation - matrix of the optimal state. - - Example : ``C = ADD.get("APosterioriCorrelations")[-1]`` - - APosterioriCovariance - *List of matrices*. Each element is an *a posteriori* error covariance - matrix :math:`\mathbf{A}*` of the optimal state. - - Example : ``A = ADD.get("APosterioriCovariance")[-1]`` - - APosterioriStandardDeviations - *List of matrices*. Each element is an *a posteriori* error standard - deviation matrix of the optimal state. - - Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]`` - - APosterioriVariances - *List of matrices*. Each element is an *a posteriori* error variance matrix - of the optimal state. - - Example : ``V = ADD.get("APosterioriVariances")[-1]`` - - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. + .. include:: snippets/APosterioriCorrelations.rst - Example : ``J = ADD.get("CostFunctionJ")[:]`` + .. include:: snippets/APosterioriCovariance.rst - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. + .. include:: snippets/APosterioriStandardDeviations.rst - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` + .. include:: snippets/APosterioriVariances.rst - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. + .. include:: snippets/BMA.rst - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJ.rst - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. + .. 
include:: snippets/CostFunctionJb.rst - Example : ``Xs = ADD.get("CurrentState")[:]`` + .. include:: snippets/CostFunctionJo.rst - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. + .. include:: snippets/CurrentState.rst - Example : ``d = ADD.get("Innovation")[-1]`` + .. include:: snippets/Innovation.rst See also ++++++++ References to other sections: - :ref:`section_ref_algorithm_KalmanFilter` + - :ref:`section_ref_algorithm_EnsembleKalmanFilter` - :ref:`section_ref_algorithm_UnscentedKalmanFilter` diff --git a/doc/en/ref_algorithm_FunctionTest.rst b/doc/en/ref_algorithm_FunctionTest.rst index c0a143c..820d999 100644 --- a/doc/en/ref_algorithm_FunctionTest.rst +++ b/doc/en/ref_algorithm_FunctionTest.rst @@ -43,32 +43,12 @@ of operator. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: CheckingPoint -.. index:: single: ObservationOperator -.. index:: single: NumberOfPrintedDigits -.. index:: single: NumberOfRepetition -.. index:: single: SetDebug -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - CheckingPoint - *Required command*. This indicates the vector used as the state around which - to perform the required check, noted :math:`\mathbf{x}` and similar to the - background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. 
In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/CheckingPoint.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -79,26 +59,15 @@ command. The options of the algorithm are the following: - NumberOfPrintedDigits - This key indicates the number of digits of precision for floating point - printed output. The default is 5, with a minimum of 0. - - Example : ``{"NumberOfPrintedDigits":5}`` - - NumberOfRepetition - This key indicates the number of time to repeat the function evaluation. The - default is 1. + .. include:: snippets/NumberOfPrintedDigits.rst - Example : ``{"NumberOfRepetition":3}`` + .. include:: snippets/NumberOfRepetition.rst - SetDebug - This key requires the activation, or not, of the debug mode during the - function evaluation. The default is "False", the choices are "True" or - "False". - - Example : ``{"SetDebug":False}`` + .. include:: snippets/SetDebug.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -106,7 +75,8 @@ The options of the algorithm are the following: are in the following list: ["CurrentState", "SimulatedObservationAtCurrentState"]. 
- Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}`` + Example : + ``{"StoreSupplementaryCalculations":["CurrentState"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -121,17 +91,9 @@ writing of post-processing procedures, are described in the The conditional outputs of the algorithm are the following: - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/CurrentState.rst - Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentState.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_GradientTest.rst b/doc/en/ref_algorithm_GradientTest.rst index bd4d2be..8676d89 100644 --- a/doc/en/ref_algorithm_GradientTest.rst +++ b/doc/en/ref_algorithm_GradientTest.rst @@ -102,21 +102,9 @@ Optional and required commands The general required commands, available in the editing user interface, are the following: - CheckingPoint - *Required command*. This indicates the vector used as the state around which - to perform the required check, noted :math:`\mathbf{x}` and similar to the - background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. 
In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control - :math:`U` included in the observation, the operator has to be applied to a - pair :math:`(X,U)`. + .. include:: snippets/CheckingPoint.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -127,31 +115,17 @@ command. The options of the algorithm are the following: - AmplitudeOfInitialDirection - This key indicates the scaling of the initial perturbation build as a vector - used for the directional derivative around the nominal checking point. The - default is 1, that means no scaling. - - Example : ``{"AmplitudeOfInitialDirection":0.5}`` - - EpsilonMinimumExponent - This key indicates the minimal exponent value of the power of 10 coefficient - to be used to decrease the increment multiplier. The default is -8, and it - has to be between 0 and -20. For example, its default value leads to - calculate the residue of the scalar product formula with a fixed increment - multiplied from 1.e0 to 1.e-8. + .. include:: snippets/AmplitudeOfInitialDirection.rst - Example : ``{"EpsilonMinimumExponent":-12}`` + .. include:: snippets/EpsilonMinimumExponent.rst - InitialDirection - This key indicates the vector direction used for the directional derivative - around the nominal checking point. It has to be a vector. If not specified, - this direction defaults to a random perturbation around zero of the same - vector size than the checking point. + .. include:: snippets/InitialDirection.rst - Example : ``{"InitialDirection":[0.1,0.1,100.,3}`` + .. include:: snippets/SetSeed.rst ResiduFormula + .. index:: single: ResiduFormula + This key indicates the residue formula that has to be used for the test. 
The default choice is "Taylor", and the possible ones are "Taylor" (normalized residue of the Taylor development of the operator, which has to decrease @@ -161,17 +135,12 @@ The options of the algorithm are the following: the norm of the Taylor development at zero order approximation, which approximate the gradient, and which has to remain constant). - Example : ``{"ResiduFormula":"Taylor"}`` - - SetSeed - This key allow to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so use the default - initialization from the computer. - - Example : ``{"SetSeed":1000}`` + Example : + ``{"ResiduFormula":"Taylor"}`` StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -179,7 +148,8 @@ The options of the algorithm are the following: are in the following list: ["CurrentState", "Residu", "SimulatedObservationAtCurrentState"]. - Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}`` + Example : + ``{"StoreSupplementaryCalculations":["CurrentState"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -194,25 +164,13 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Residu - *List of values*. Each element is the value of the particular residue - verified during a checking algorithm, in the order of the tests. - - Example : ``r = ADD.get("Residu")[:]`` + .. include:: snippets/Residu.rst The conditional outputs of the algorithm are the following: - CurrentState - *List of vectors*. 
Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/CurrentState.rst - Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentState.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_KalmanFilter.rst b/doc/en/ref_algorithm_KalmanFilter.rst index 59c2ec1..52ea3fd 100644 --- a/doc/en/ref_algorithm_KalmanFilter.rst +++ b/doc/en/ref_algorithm_KalmanFilter.rst @@ -35,60 +35,33 @@ Kalman Filter. It is theoretically reserved for observation and incremental evolution operators cases which are linear, even if it sometimes works in "slightly" non-linear -cases. One can verify the linearity of the observation operator with the help of +cases. One can verify the linearity of the operators with the help of the :ref:`section_ref_algorithm_LinearityTest`. In case of non-linearity, even slightly marked, it will be preferred the -:ref:`section_ref_algorithm_ExtendedKalmanFilter` or the -:ref:`section_ref_algorithm_UnscentedKalmanFilter`. +:ref:`section_ref_algorithm_ExtendedKalmanFilter`, or the +:ref:`section_ref_algorithm_EnsembleKalmanFilter` and the +:ref:`section_ref_algorithm_UnscentedKalmanFilter`, which are more powerful. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: EstimationOf -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - Background - *Required command*.
This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/EvolutionError.rst + + .. include:: snippets/EvolutionModel.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. 
Moreover, the parameters @@ -99,14 +72,11 @@ command. The options of the algorithm are the following: - EstimationOf - This key allows to choose the type of estimation to be performed. It can be - either state-estimation, with a value of "State", or parameter-estimation, - with a value of "Parameters". The default choice is "State". - - Example : ``{"EstimationOf":"Parameters"}`` + .. include:: snippets/EstimationOf.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -131,73 +101,29 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. include:: snippets/Analysis.rst The conditional outputs of the algorithm are the following: - APosterioriCorrelations - *List of matrices*. Each element is an *a posteriori* error correlation - matrix of the optimal state. - - Example : ``C = ADD.get("APosterioriCorrelations")[-1]`` - - APosterioriCovariance - *List of matrices*. Each element is an *a posteriori* error covariance - matrix :math:`\mathbf{A}*` of the optimal state. - - Example : ``A = ADD.get("APosterioriCovariance")[-1]`` - - APosterioriStandardDeviations - *List of matrices*. Each element is an *a posteriori* error standard - deviation matrix of the optimal state. - - Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]`` - - APosterioriVariances - *List of matrices*. Each element is an *a posteriori* error variance matrix - of the optimal state. 
- - Example : ``V = ADD.get("APosterioriVariances")[-1]`` - - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. + .. include:: snippets/APosterioriCorrelations.rst - Example : ``J = ADD.get("CostFunctionJ")[:]`` + .. include:: snippets/APosterioriCovariance.rst - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. + .. include:: snippets/APosterioriStandardDeviations.rst - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` + .. include:: snippets/APosterioriVariances.rst - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. + .. include:: snippets/BMA.rst - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJ.rst - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. + .. include:: snippets/CostFunctionJb.rst - Example : ``Xs = ADD.get("CurrentState")[:]`` + .. include:: snippets/CostFunctionJo.rst - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. + .. include:: snippets/CurrentState.rst - Example : ``d = ADD.get("Innovation")[-1]`` + .. include:: snippets/Innovation.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_LinearLeastSquares.rst b/doc/en/ref_algorithm_LinearLeastSquares.rst index 00483b4..cd9a920 100644 --- a/doc/en/ref_algorithm_LinearLeastSquares.rst +++ b/doc/en/ref_algorithm_LinearLeastSquares.rst @@ -48,36 +48,14 @@ In all cases, it is recommanded to prefer at least the Optional and required commands ++++++++++++++++++++++++++++++ -.. 
index:: single: AlgorithmParameters -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -89,6 +67,8 @@ command. The options of the algorithm are the following: StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. 
It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -97,7 +77,8 @@ The options of the algorithm are the following: "CostFunctionJb", "CostFunctionJo", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. - Example : ``{"StoreSupplementaryCalculations":["OMA", "CurrentState"]}`` + Example : + ``{"StoreSupplementaryCalculations":["OMA", "CurrentState"]}`` *Tips for this algorithm:* @@ -119,42 +100,20 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. include:: snippets/Analysis.rst - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. + .. include:: snippets/CostFunctionJ.rst - Example : ``J = ADD.get("CostFunctionJ")[:]`` + .. include:: snippets/CostFunctionJb.rst - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. - - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. - - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJo.rst The conditional outputs of the algorithm are the following: - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. - - Example : ``oma = ADD.get("OMA")[-1]`` + .. include:: snippets/OMA.rst - SimulatedObservationAtOptimum - *List of vectors*. Each element is a vector of observation simulated from - the analysis or optimal state :math:`\mathbf{x}^a`. + .. 
include:: snippets/SimulatedObservationAtOptimum.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` See also ++++++++ diff --git a/doc/en/ref_algorithm_LinearityTest.rst b/doc/en/ref_algorithm_LinearityTest.rst index 8787cba..3731913 100644 --- a/doc/en/ref_algorithm_LinearityTest.rst +++ b/doc/en/ref_algorithm_LinearityTest.rst @@ -112,34 +112,12 @@ If it is equal to 0 only on part of the variation domain of increment Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: CheckingPoint -.. index:: single: ObservationOperator -.. index:: single: AmplitudeOfInitialDirection -.. index:: single: EpsilonMinimumExponent -.. index:: single: InitialDirection -.. index:: single: ResiduFormula -.. index:: single: SetSeed -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - CheckingPoint - *Required command*. This indicates the vector used as the state around which - to perform the required check, noted :math:`\mathbf{x}` and similar to the - background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control - :math:`U` included in the observation, the operator has to be applied to a - pair :math:`(X,U)`. + .. include:: snippets/CheckingPoint.rst + + .. 
include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -150,31 +128,17 @@ command. The options of the algorithm are the following: - AmplitudeOfInitialDirection - This key indicates the scaling of the initial perturbation build as a vector - used for the directional derivative around the nominal checking point. The - default is 1, that means no scaling. - - Example : ``{"AmplitudeOfInitialDirection":0.5}`` + .. include:: snippets/AmplitudeOfInitialDirection.rst - EpsilonMinimumExponent - This key indicates the minimal exponent value of the power of 10 coefficient - to be used to decrease the increment multiplier. The default is -8, and it - has to be between 0 and -20. For example, its default value leads to - calculate the residue of the scalar product formula with a fixed increment - multiplied from 1.e0 to 1.e-8. + .. include:: snippets/EpsilonMinimumExponent.rst - Example : ``{"EpsilonMinimumExponent":-12}`` + .. include:: snippets/InitialDirection.rst - InitialDirection - This key indicates the vector direction used for the directional derivative - around the nominal checking point. It has to be a vector. If not specified, - this direction defaults to a random perturbation around zero of the same - vector size than the checking point. - - Example : ``{"InitialDirection":[0.1,0.1,100.,3}`` + .. include:: snippets/SetSeed.rst ResiduFormula + .. index:: single: ResiduFormula + This key indicates the residue formula that has to be used for the test. The default choice is "CenteredDL", and the possible ones are "CenteredDL" (residue of the difference between the function at nominal point and the @@ -186,17 +150,12 @@ The options of the algorithm are the following: order 1 approximations of the operator, normalized by RMS to the nominal point, which has to stay close to 0). 
- Example : ``{"ResiduFormula":"CenteredDL"}`` - - SetSeed - This key allow to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so use the default - initialization from the computer. - - Example : ``{"SetSeed":1000}`` + Example : + ``{"ResiduFormula":"CenteredDL"}`` StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -204,7 +163,8 @@ The options of the algorithm are the following: are in the following list: ["CurrentState", "Residu", "SimulatedObservationAtCurrentState"]. - Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}`` + Example : + ``{"StoreSupplementaryCalculations":["CurrentState"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -219,25 +179,13 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Residu - *List of values*. Each element is the value of the particular residue - verified during a checking algorithm, in the order of the tests. - - Example : ``r = ADD.get("Residu")[:]`` + .. include:: snippets/Residu.rst The conditional outputs of the algorithm are the following: - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. 
include:: snippets/CurrentState.rst - Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentState.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_NonLinearLeastSquares.rst b/doc/en/ref_algorithm_NonLinearLeastSquares.rst index 4063b3b..916da9b 100644 --- a/doc/en/ref_algorithm_NonLinearLeastSquares.rst +++ b/doc/en/ref_algorithm_NonLinearLeastSquares.rst @@ -45,48 +45,16 @@ for its stability as for its behavior during optimization. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: Minimizer -.. index:: single: Bounds -.. index:: single: MaximumNumberOfSteps -.. index:: single: CostDecrementTolerance -.. index:: single: ProjectedGradientTolerance -.. index:: single: GradientNormTolerance -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. 
This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -98,6 +66,8 @@ command. The options of the algorithm are the following: Minimizer + .. index:: single: Minimizer + This key allows to choose the optimization minimizer. The default choice is "LBFGSB", and the possible ones are "LBFGSB" (nonlinear constrained minimizer, see [Byrd95]_, [Morales11]_ and [Zhu97]_), "TNC" (nonlinear @@ -105,52 +75,22 @@ The options of the algorithm are the following: (nonlinear unconstrained minimizer), "NCG" (Newton CG minimizer). It is strongly recommended to stay with the default. - Example : ``{"Minimizer":"LBFGSB"}`` - - Bounds - This key allows to define upper and lower bounds for every state variable - being optimized. Bounds have to be given by a list of list of pairs of - lower/upper bounds for each variable, with possibly ``None`` every time - there is no bound. The bounds can always be specified, but they are taken - into account only by the constrained optimizers. + Example : + ``{"Minimizer":"LBFGSB"}`` - Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}`` + .. 
include:: snippets/BoundsWithNone.rst - MaximumNumberOfSteps - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 15000, which is very similar to no limit on - iterations. It is then recommended to adapt this parameter to the needs on - real problems. For some optimizers, the effective stopping step can be - slightly different due to algorithm internal control requirements. + .. include:: snippets/MaximumNumberOfSteps.rst - Example : ``{"MaximumNumberOfSteps":100}`` + .. include:: snippets/CostDecrementTolerance.rst - CostDecrementTolerance - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the cost function decreases less than - this tolerance at the last step. The default is 1.e-7, and it is - recommended to adapt it to the needs on real problems. + .. include:: snippets/ProjectedGradientTolerance.rst - Example : ``{"CostDecrementTolerance":1.e-7}`` - - ProjectedGradientTolerance - This key indicates a limit value, leading to stop successfully the iterative - optimization process when all the components of the projected gradient are - under this limit. It is only used for constrained optimizers. The default is - -1, that is the internal default of each minimizer (generally 1.e-5), and it - is not recommended to change it. - - Example : ``{"ProjectedGradientTolerance":-1}`` - - GradientNormTolerance - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the norm of the gradient is under this - limit. It is only used for non-constrained optimizers. The default is - 1.e-5 and it is not recommended to change it. - - Example : ``{"GradientNormTolerance":1.e-5}`` + .. include:: snippets/GradientNormTolerance.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. 
It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -163,7 +103,8 @@ The options of the algorithm are the following: "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum", "SimulatedObservationAtCurrentOptimum"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` *Tips for this algorithm:* @@ -185,98 +126,45 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. + .. include:: snippets/Analysis.rst - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. include:: snippets/CostFunctionJ.rst - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. + .. include:: snippets/CostFunctionJb.rst - Example : ``J = ADD.get("CostFunctionJ")[:]`` - - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. - - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. - - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJo.rst The conditional outputs of the algorithm are the following: - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - IndexOfOptimum - *List of integers*. 
Each element is the iteration index of the optimum - obtained at the current step the optimization algorithm. It is not - necessarily the number of the last iteration. - - Example : ``i = ADD.get("IndexOfOptimum")[-1]`` - - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. - - Example : ``d = ADD.get("Innovation")[-1]`` + .. include:: snippets/BMA.rst - InnovationAtCurrentState - *List of vectors*. Each element is an innovation vector at current state. + .. include:: snippets/CostFunctionJAtCurrentOptimum.rst - Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]`` + .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. + .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst - Example : ``oma = ADD.get("OMA")[-1]`` + .. include:: snippets/CurrentOptimum.rst - OMB - *List of vectors*. Each element is a vector of difference between the - observation and the background state in the observation space. + .. include:: snippets/CurrentState.rst - Example : ``omb = ADD.get("OMB")[-1]`` + .. include:: snippets/IndexOfOptimum.rst - SimulatedObservationAtBackground - *List of vectors*. Each element is a vector of observation simulated from - the background :math:`\mathbf{x}^b`. + .. include:: snippets/Innovation.rst - Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]`` + .. include:: snippets/InnovationAtCurrentState.rst - SimulatedObservationAtCurrentOptimum - *List of vectors*. Each element is a vector of observation simulated from - the optimal state obtained at the current step the optimization algorithm, - that is, in the observation space. + .. include:: snippets/OMA.rst - Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]`` + .. 
include:: snippets/OMB.rst - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/SimulatedObservationAtBackground.rst - Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst - SimulatedObservationAtOptimum - *List of vectors*. Each element is a vector of observation simulated from - the analysis or optimal state :math:`\mathbf{x}^a`. + .. include:: snippets/SimulatedObservationAtCurrentState.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` + .. include:: snippets/SimulatedObservationAtOptimum.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_ObserverTest.rst b/doc/en/ref_algorithm_ObserverTest.rst index 9bf095d..eeed178 100644 --- a/doc/en/ref_algorithm_ObserverTest.rst +++ b/doc/en/ref_algorithm_ObserverTest.rst @@ -38,21 +38,10 @@ explicitly associated with the *observer* in the interface. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: CheckingPoint -.. index:: single: ObservationOperator -.. index:: single: Observers - The general required commands, available in the editing user interface, are the following: - Observers - *Optional command*. This command allows to set internal observers, that are - functions linked with a particular variable, which will be executed each - time this variable is modified. It is a convenient way to monitor variables - of interest during the data assimilation or optimization process, by - printing or plotting it, etc. Common templates are provided to help the user - to start or to quickly make his case. + .. include:: snippets/Observers.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. 
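The variational algorithms documented above (3DVAR, NonLinearLeastSquares) all minimize an error function :math:`J` through options such as "Minimizer", "Bounds", "MaximumNumberOfSteps", "CostDecrementTolerance" and "GradientNormTolerance". As a minimal standalone sketch of what those options control (this is *not* the ADAO API; the toy matrices, SciPy call and values below are assumptions for illustration only):

```python
# Illustrative sketch, NOT the ADAO interface: minimize the variational cost
#   J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (y-Hx)^T R^-1 (y-Hx)
# with the L-BFGS-B minimizer, mirroring the documented options.
# All matrices and vectors here are toy assumptions.
import numpy as np
from scipy.optimize import minimize

xb = np.array([0.0, 1.0, 2.0])   # background x^b
B_inv = np.eye(3)                # inverse of background covariance B
yo = np.array([0.5, 1.5, 2.5])   # observation y^o
R_inv = np.eye(3)                # inverse of observation covariance R
H = np.eye(3)                    # linear observation operator

def cost(x):
    dxb = x - xb
    dy = yo - H @ x
    return 0.5 * (dxb @ B_inv @ dxb + dy @ R_inv @ dy)

res = minimize(cost, xb, method="L-BFGS-B",        # cf. "Minimizer"
               bounds=[(None, None)] * 3,          # cf. "Bounds"
               options={"maxiter": 15000,          # cf. "MaximumNumberOfSteps"
                        "ftol": 1.e-7,             # cf. "CostDecrementTolerance"
                        "gtol": 1.e-5})            # cf. "GradientNormTolerance"
xa = res.x  # analysis x^a; with B = R = H = I it lies halfway between xb and yo
```

In ADAO itself these tolerances are passed through the "AlgorithmParameters" dictionary shown in the examples above (e.g. ``{"Minimizer":"LBFGSB"}``); the sketch only makes explicit which quantities they bound.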
diff --git a/doc/en/ref_algorithm_ParticleSwarmOptimization.rst b/doc/en/ref_algorithm_ParticleSwarmOptimization.rst index 569d7d5..3d0b406 100644 --- a/doc/en/ref_algorithm_ParticleSwarmOptimization.rst +++ b/doc/en/ref_algorithm_ParticleSwarmOptimization.rst @@ -43,57 +43,18 @@ least squares function, classically used in data assimilation. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: MaximumNumberOfSteps -.. index:: single: MaximumNumberOfFunctionEvaluations -.. index:: single: NumberOfInsects -.. index:: single: SwarmVelocity -.. index:: single: GroupRecallRate -.. index:: single: QualityCriterion -.. index:: single: BoxBounds -.. index:: single: SetSeed -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. 
It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -103,51 +64,38 @@ described hereafter, of the algorithm. See command. The options of the algorithm are the following: +.. index:: single: NumberOfInsects +.. index:: single: SwarmVelocity +.. index:: single: GroupRecallRate +.. index:: single: QualityCriterion +.. index:: single: BoxBounds - MaximumNumberOfSteps - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 50, which is an arbitrary limit. It is then - recommended to adapt this parameter to the needs on real problems. - - Example : ``{"MaximumNumberOfSteps":100}`` + .. include:: snippets/MaximumNumberOfSteps_50.rst - MaximumNumberOfFunctionEvaluations - This key indicates the maximum number of evaluation of the cost function to - be optimized. The default is 15000, which is an arbitrary limit. 
It is then - recommended to adapt this parameter to the needs on real problems. For some - optimizers, the effective number of function evaluations can be slightly - different of the limit due to algorithm internal control requirements. + .. include:: snippets/MaximumNumberOfFunctionEvaluations.rst - Example : ``{"MaximumNumberOfFunctionEvaluations":50}`` + .. include:: snippets/QualityCriterion.rst NumberOfInsects This key indicates the number of insects or particles in the swarm. The default is 100, which is a usual default for this algorithm. - Example : ``{"NumberOfInsects":100}`` + Example : + ``{"NumberOfInsects":100}`` SwarmVelocity This key indicates the part of the insect velocity which is imposed by the swarm. It is a positive floating point value. The default value is 1. - Example : ``{"SwarmVelocity":1.}`` + Example : + ``{"SwarmVelocity":1.}`` GroupRecallRate This key indicates the recall rate at the best swarm insect. It is a floating point value between 0 and 1. The default value is 0.5. - Example : ``{"GroupRecallRate":0.5}`` - - QualityCriterion - This key indicates the quality criterion, minimized to find the optimal - state estimate. The default is the usual data assimilation criterion named - "DA", the augmented weighted least squares. The possible criteria has to be - in the following list, where the equivalent names are indicated by the sign - "=": ["AugmentedWeightedLeastSquares"="AWLS"="DA", - "WeightedLeastSquares"="WLS", "LeastSquares"="LS"="L2", - "AbsoluteValue"="L1", "MaximumError"="ME"]. - - Example : ``{"QualityCriterion":"DA"}`` + Example : + ``{"GroupRecallRate":0.5}`` BoxBounds This key allows to define upper and lower bounds for *increments* on every @@ -157,17 +105,14 @@ The options of the algorithm are the following: (``None`` is not allowed when there is no bound). This key is required and there is no default values. 
- Example : ``{"BoxBounds":[[-0.5,0.5], [0.01,2.], [0.,1.e99], [-1.e99,1.e99]]}`` + Example : + ``{"BoxBounds":[[-0.5,0.5], [0.01,2.], [0.,1.e99], [-1.e99,1.e99]]}`` - SetSeed - This key allow to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so use the default - initialization from the computer. - - Example : ``{"SetSeed":1000}`` + .. include:: snippets/SetSeed.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -177,7 +122,8 @@ The options of the algorithm are the following: "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -192,79 +138,31 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. - - Example : ``J = ADD.get("CostFunctionJ")[:]`` - - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. + .. include:: snippets/Analysis.rst - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` + .. 
include:: snippets/CostFunctionJ.rst - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. + .. include:: snippets/CostFunctionJb.rst - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJo.rst The conditional outputs of the algorithm are the following: - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. - - Example : ``d = ADD.get("Innovation")[-1]`` - - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. - - Example : ``oma = ADD.get("OMA")[-1]`` - - OMB - *List of vectors*. Each element is a vector of difference between the - observation and the background state in the observation space. + .. include:: snippets/BMA.rst - Example : ``omb = ADD.get("OMB")[-1]`` + .. include:: snippets/CurrentState.rst - SimulatedObservationAtBackground - *List of vectors*. Each element is a vector of observation simulated from - the background :math:`\mathbf{x}^b`. + .. include:: snippets/Innovation.rst - Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]`` + .. include:: snippets/OMA.rst - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/OMB.rst - Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. 
include:: snippets/SimulatedObservationAtBackground.rst - SimulatedObservationAtOptimum - *List of vectors*. Each element is a vector of observation simulated from - the analysis or optimal state :math:`\mathbf{x}^a`. + .. include:: snippets/SimulatedObservationAtCurrentState.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` + .. include:: snippets/SimulatedObservationAtOptimum.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_QuantileRegression.rst b/doc/en/ref_algorithm_QuantileRegression.rst index 7f3c04f..38847b7 100644 --- a/doc/en/ref_algorithm_QuantileRegression.rst +++ b/doc/en/ref_algorithm_QuantileRegression.rst @@ -38,40 +38,14 @@ the model parameters that satisfy to the quantiles conditions. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: Observation -.. index:: single: ObservationOperator -.. index:: single: Quantile -.. index:: single: Minimizer -.. index:: single: MaximumNumberOfSteps -.. index:: single: CostDecrementTolerance -.. index:: single: Bounds -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. 
Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -82,38 +56,17 @@ command. The options of the algorithm are the following: - Quantile - This key allows to define the real value of the desired quantile, between - 0 and 1. The default is 0.5, corresponding to the median. - - Example : ``{"Quantile":0.5}`` - - MaximumNumberOfSteps - This key indicates the maximum number of iterations allowed for iterative - optimization. The default is 15000, which is very similar to no limit on - iterations. It is then recommended to adapt this parameter to the needs on - real problems. + .. include:: snippets/Quantile.rst - Example : ``{"MaximumNumberOfSteps":100}`` + .. include:: snippets/MaximumNumberOfSteps.rst - CostDecrementTolerance - This key indicates a limit value, leading to stop successfully the - iterative optimization process when the cost function or the surrogate - decreases less than this tolerance at the last step. The default is 1.e-6, - and it is recommended to adapt it to the needs on real problems. + .. include:: snippets/CostDecrementTolerance_6.rst - Example : ``{"CostDecrementTolerance":1.e-7}`` - - Bounds - This key allows to define upper and lower bounds for every state variable - being optimized. Bounds have to be given by a list of list of pairs of - lower/upper bounds for each variable, with possibly ``None`` every time - there is no bound. 
The bounds can always be specified, but they are taken - into account only by the constrained optimizers. - - Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}`` + .. include:: snippets/BoundsWithNone.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -123,7 +76,8 @@ The options of the algorithm are the following: "SimulatedObservationAtBackground", "SimulatedObservationAtCurrentState", "SimulatedObservationAtOptimum"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` *Tips for this algorithm:* @@ -145,79 +99,31 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. + .. include:: snippets/Analysis.rst - Example : ``J = ADD.get("CostFunctionJ")[:]`` + .. include:: snippets/CostFunctionJ.rst - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. + .. include:: snippets/CostFunctionJb.rst - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. - - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. 
include:: snippets/CostFunctionJo.rst The conditional outputs of the algorithm are the following: - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. - - Example : ``d = ADD.get("Innovation")[-1]`` - - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. - - Example : ``oma = ADD.get("OMA")[-1]`` - - OMB - *List of vectors*. Each element is a vector of difference between the - observation and the background state in the observation space. + .. include:: snippets/BMA.rst - Example : ``omb = ADD.get("OMB")[-1]`` + .. include:: snippets/CurrentState.rst - SimulatedObservationAtBackground - *List of vectors*. Each element is a vector of observation simulated from - the background :math:`\mathbf{x}^b`. + .. include:: snippets/Innovation.rst - Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]`` + .. include:: snippets/OMA.rst - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/OMB.rst - Example : ``Ys = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtBackground.rst - SimulatedObservationAtOptimum - *List of vectors*. Each element is a vector of observation simulated from - the analysis or optimal state :math:`\mathbf{x}^a`. + .. 
include:: snippets/SimulatedObservationAtCurrentState.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` + .. include:: snippets/SimulatedObservationAtOptimum.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_SamplingTest.rst b/doc/en/ref_algorithm_SamplingTest.rst index 5c0f463..76c5a00 100644 --- a/doc/en/ref_algorithm_SamplingTest.rst +++ b/doc/en/ref_algorithm_SamplingTest.rst @@ -55,56 +55,18 @@ in SALOME. Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: CheckingPoint -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: SampleAsnUplet -.. index:: single: SampleAsExplicitHyperCube -.. index:: single: SampleAsMinMaxStepHyperCube -.. index:: single: SampleAsIndependantRandomVariables -.. index:: single: QualityCriterion -.. index:: single: SetDebug -.. index:: single: SetSeed -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - CheckingPoint - *Required command*. This indicates the vector used as the state around which - to perform the required check, noted :math:`\mathbf{x}` and similar to the - background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. 
This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/CheckingPoint.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -114,12 +76,17 @@ described hereafter, of the algorithm. See command. The options of the algorithm are the following: +.. index:: single: SampleAsnUplet +.. index:: single: SampleAsExplicitHyperCube +.. index:: single: SampleAsMinMaxStepHyperCube +.. index:: single: SampleAsIndependantRandomVariables SampleAsnUplet This key describes the calculations points as a list of n-uplets, each n-uplet being a state. 
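As an aside, the "*SampleAsnUplet*" value described above is a plain list of states, each state being one n-uplet. The short Python sketch below illustrates the structure and a simple consistency check; the variable names are illustrative, not part of the ADAO API.

```python
# Hedged sketch (not ADAO API): the "SampleAsnUplet" value is just a
# plain list of states, each state being one n-uplet.
sample_as_n_uplet = [[0, 1, 2, 3], [4, 3, 2, 1], [-2, 3, -4, 5]]

# Consistency check: every state must share the same dimension.
dimensions = {len(state) for state in sample_as_n_uplet}
assert dimensions == {4}, "all states must have the same dimension"

# This list is what would be passed under the "SampleAsnUplet" key.
algorithm_parameters = {"SampleAsnUplet": sample_as_n_uplet}
print(len(algorithm_parameters["SampleAsnUplet"]))  # 3 calculation points
```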
- Example : ``{"SampleAsnUplet":[[0,1,2,3],[4,3,2,1],[-2,3,-4,5]]}`` for 3 points in a state space of dimension 4 + Example : + ``{"SampleAsnUplet":[[0,1,2,3],[4,3,2,1],[-2,3,-4,5]]}`` for 3 points in a state space of dimension 4 SampleAsExplicitHyperCube This key describes the calculations points as an hyper-cube, from a given @@ -134,7 +101,8 @@ The options of the algorithm are the following: That is then a list of the same size than the one of the state. The bounds are included. - Example : ``{"SampleAsMinMaxStepHyperCube":[[0.,1.,0.25],[-1,3,1]]}`` for a state space of dimension 2 + Example : + ``{"SampleAsMinMaxStepHyperCube":[[0.,1.,0.25],[-1,3,1]]}`` for a state space of dimension 2 SampleAsIndependantRandomVariables This key describes the calculations points as an hyper-cube, for which the @@ -146,34 +114,18 @@ The options of the algorithm are the following: 'uniform' of parameters (low,high), or 'weibull' of parameter (shape). That is then a list of the same size than the one of the state. - Example : ``{"SampleAsIndependantRandomVariables":[ ['normal',[0.,1.],3], ['uniform',[-2,2],4]]`` for a state space of dimension 2 - - QualityCriterion - This key indicates the quality criterion, used to find the state estimate. - The default is the usual data assimilation criterion named "DA", the - augmented weighted least squares. The possible criteria has to be in the - following list, where the equivalent names are indicated by the sign "=": - ["AugmentedWeightedLeastSquares"="AWLS"="DA", "WeightedLeastSquares"="WLS", - "LeastSquares"="LS"="L2", "AbsoluteValue"="L1", "MaximumError"="ME"]. - - Example : ``{"QualityCriterion":"DA"}`` + Example : + ``{"SampleAsIndependantRandomVariables":[ ['normal',[0.,1.],3], ['uniform',[-2,2],4]]}`` for a state space of dimension 2 - SetDebug - This key requires the activation, or not, of the debug mode during the - function evaluation. The default is "True", the choices are "True" or - "False". + ..
include:: snippets/QualityCriterion.rst - Example : ``{"SetDebug":False}`` + .. include:: snippets/SetDebug.rst - SetSeed - This key allow to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so use the default - initialization from the computer. - - Example : ``{"SetSeed":1000}`` + .. include:: snippets/SetSeed.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -182,7 +134,8 @@ The options of the algorithm are the following: "CostFunctionJo", "CurrentState", "InnovationAtCurrentState", "SimulatedObservationAtCurrentState"]. - Example : ``{"StoreSupplementaryCalculations":["CostFunctionJ", "SimulatedObservationAtCurrentState"]}`` + Example : + ``{"StoreSupplementaryCalculations":["CostFunctionJ", "SimulatedObservationAtCurrentState"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -197,41 +150,19 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. - - Example : ``J = ADD.get("CostFunctionJ")[:]`` + .. include:: snippets/CostFunctionJ.rst - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. + .. include:: snippets/CostFunctionJb.rst - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. 
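For orientation, the error function :math:`J` mentioned above splits into the background part :math:`J^b` and the observation part :math:`J^o`. The sketch below evaluates the usual augmented weighted least squares ("DA") criterion at a state; all numerical values are illustrative, and the observation operator is assumed linear (a matrix) for simplicity.

```python
import numpy as np

# Hedged sketch of the usual data assimilation ("DA") criterion, the
# augmented weighted least squares: J(x) = J^b(x) + J^o(x), with J^b the
# background part and J^o the observation part. Values are illustrative.
xb = np.array([0., 0.])                        # background state
B  = np.eye(2)                                 # background error covariance
yo = np.array([1., 2., 3.])                    # observation vector
R  = np.eye(3)                                 # observation error covariance
H  = np.array([[1., 0.], [0., 1.], [1., 1.]])  # linear observation operator

def cost(x):
    db = x - xb
    do = yo - H @ x
    Jb = 0.5 * db @ np.linalg.solve(B, db)
    Jo = 0.5 * do @ np.linalg.solve(R, do)
    return Jb, Jo, Jb + Jo

Jb, Jo, J = cost(xb)  # at the background itself, J^b vanishes
print(Jb, Jo, J)
```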
- - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJo.rst The conditional outputs of the algorithm are the following: - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - InnovationAtCurrentState - *List of vectors*. Each element is an innovation vector at current state. - - Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]`` + .. include:: snippets/CurrentState.rst - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/InnovationAtCurrentState.rst - Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentState.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_TangentTest.rst b/doc/en/ref_algorithm_TangentTest.rst index 0297784..1088756 100644 --- a/doc/en/ref_algorithm_TangentTest.rst +++ b/doc/en/ref_algorithm_TangentTest.rst @@ -55,33 +55,12 @@ One take :math:`\mathbf{dx}_0=Normal(0,\mathbf{x})` and Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: CheckingPoint -.. index:: single: ObservationOperator -.. index:: single: AmplitudeOfInitialDirection -.. index:: single: EpsilonMinimumExponent -.. index:: single: InitialDirection -.. index:: single: SetSeed -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - CheckingPoint - *Required command*. This indicates the vector used as the state around which - to perform the required check, noted :math:`\mathbf{x}` and similar to the - background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object. - - ObservationOperator - *Required command*. 
This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control - :math:`U` included in the observation, the operator has to be applied to a - pair :math:`(X,U)`. + .. include:: snippets/CheckingPoint.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. Moreover, the parameters @@ -92,39 +71,17 @@ command. The options of the algorithm are the following: - AmplitudeOfInitialDirection - This key indicates the scaling of the initial perturbation build as a vector - used for the directional derivative around the nominal checking point. The - default is 1, that means no scaling. - - Example : ``{"AmplitudeOfInitialDirection":0.5}`` - - EpsilonMinimumExponent - This key indicates the minimal exponent value of the power of 10 coefficient - to be used to decrease the increment multiplier. The default is -8, and it - has to be between 0 and -20. For example, its default value leads to - calculate the residue of the scalar product formula with a fixed increment - multiplied from 1.e0 to 1.e-8. - - Example : ``{"EpsilonMinimumExponent":-12}`` + .. include:: snippets/AmplitudeOfInitialDirection.rst - InitialDirection - This key indicates the vector direction used for the directional derivative - around the nominal checking point. It has to be a vector. If not specified, - this direction defaults to a random perturbation around zero of the same - vector size than the checking point. + .. 
include:: snippets/EpsilonMinimumExponent.rst - Example : ``{"InitialDirection":[0.1,0.1,100.,3}`` + .. include:: snippets/InitialDirection.rst - SetSeed - This key allow to give an integer in order to fix the seed of the random - generator used to generate the ensemble. A convenient value is for example - 1000. By default, the seed is left uninitialized, and so use the default - initialization from the computer. - - Example : ``{"SetSeed":1000}`` + .. include:: snippets/SetSeed.rst StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -132,7 +89,8 @@ The options of the algorithm are the following: are in the following list: ["CurrentState", "Residu", "SimulatedObservationAtCurrentState"]. - Example : ``{"StoreSupplementaryCalculations":["CurrentState"]}`` + Example : + ``{"StoreSupplementaryCalculations":["CurrentState"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -147,25 +105,13 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Residu - *List of values*. Each element is the value of the particular residue - verified during a checking algorithm, in the order of the tests. - - Example : ``r = ADD.get("Residu")[:]`` + .. include:: snippets/Residu.rst The conditional outputs of the algorithm are the following: - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. 
include:: snippets/CurrentState.rst - Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentState.rst See also ++++++++ diff --git a/doc/en/ref_algorithm_UnscentedKalmanFilter.rst b/doc/en/ref_algorithm_UnscentedKalmanFilter.rst index 7cf1086..3a54557 100644 --- a/doc/en/ref_algorithm_UnscentedKalmanFilter.rst +++ b/doc/en/ref_algorithm_UnscentedKalmanFilter.rst @@ -35,59 +35,36 @@ This algorithm realizes an estimation of the state of a dynamic system by a operators for the observation and evolution operators, as in the simple or extended Kalman filter. +It applies to non-linear observation and incremental evolution (process) +operators with excellent robustness and performance qualities. It can be +compared to the :ref:`section_ref_algorithm_EnsembleKalmanFilter`, whose +qualities are similar for non-linear systems. + +In case of linear or "slightly" non-linear operators, one can easily use the +:ref:`section_ref_algorithm_ExtendedKalmanFilter` or even the +:ref:`section_ref_algorithm_KalmanFilter`, which are often far less expensive +to evaluate on small systems. One can verify the linearity of the operators +with the help of the :ref:`section_ref_algorithm_LinearityTest`. + Optional and required commands ++++++++++++++++++++++++++++++ -.. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator -.. index:: single: Bounds -.. index:: single: ConstrainedBy -.. index:: single: EstimationOf -.. index:: single: Alpha -.. index:: single: Beta -.. index:: single: Kappa -.. index:: single: Reconditioner -.. index:: single: StoreSupplementaryCalculations - The general required commands, available in the editing user interface, are the following: - Background - *Required command*. 
This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" or a *VectorSerie*" type object. - - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. - - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Background.rst + + .. include:: snippets/BackgroundError.rst + + .. include:: snippets/EvolutionError.rst + + .. include:: snippets/EvolutionModel.rst + + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst The general optional commands, available in the editing user interface, are indicated in :ref:`section_ref_assimilation_keywords`. 
Moreover, the parameters @@ -98,38 +75,30 @@ command. The options of the algorithm are the following: - Bounds - This key allows to define upper and lower bounds for every state variable - being optimized. Bounds have to be given by a list of list of pairs of - lower/upper bounds for each variable, with extreme values every time there - is no bound (``None`` is not allowed when there is no bound). - - Example : ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}`` + .. include:: snippets/BoundsWithExtremes.rst - ConstrainedBy - This key allows to choose the method to take into account the bounds - constraints. The only one available is the "EstimateProjection", which - projects the current state estimate on the bounds constraints. + .. include:: snippets/ConstrainedBy.rst - Example : ``{"ConstrainedBy":"EstimateProjection"}`` - - EstimationOf - This key allows to choose the type of estimation to be performed. It can be - either state-estimation, with a value of "State", or parameter-estimation, - with a value of "Parameters". The default choice is "State". - - Example : ``{"EstimationOf":"Parameters"}`` + .. include:: snippets/EstimationOf.rst Alpha, Beta, Kappa, Reconditioner + .. index:: single: Alpha + .. index:: single: Beta + .. index:: single: Kappa + .. index:: single: Reconditioner + These keys are internal scaling parameters. "Alpha" requires a value between 1.e-4 and 1. "Beta" has an optimal value of 2 for Gaussian *a priori* distribution. "Kappa" requires an integer value, and the right default is obtained by setting it to 0. "Reconditioner" requires a value between 1.e-3 and 10, it defaults to 1. - Example : ``{"Alpha":1,"Beta":2,"Kappa":0,"Reconditioner":1}`` + Example : + ``{"Alpha":1,"Beta":2,"Kappa":0,"Reconditioner":1}`` StoreSupplementaryCalculations + .. index:: single: StoreSupplementaryCalculations + This list indicates the names of the supplementary variables that can be available at the end of the algorithm. 
It involves potentially costly calculations or memory consumptions. The default is a void list, none of @@ -139,7 +108,8 @@ The options of the algorithm are the following: "APosterioriVariances", "BMA", "CostFunctionJ", "CostFunctionJb", "CostFunctionJo", "CurrentState", "Innovation"]. - Example : ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` + Example : + ``{"StoreSupplementaryCalculations":["BMA", "Innovation"]}`` Information and variables available at the end of the algorithm +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ @@ -154,73 +124,29 @@ writing of post-processing procedures, are described in the The unconditional outputs of the algorithm are the following: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. include:: snippets/Analysis.rst The conditional outputs of the algorithm are the following: - APosterioriCorrelations - *List of matrices*. Each element is an *a posteriori* error correlation - matrix of the optimal state. - - Example : ``C = ADD.get("APosterioriCorrelations")[-1]`` - - APosterioriCovariance - *List of matrices*. Each element is an *a posteriori* error covariance - matrix :math:`\mathbf{A}*` of the optimal state. - - Example : ``A = ADD.get("APosterioriCovariance")[-1]`` - - APosterioriStandardDeviations - *List of matrices*. Each element is an *a posteriori* error standard - deviation matrix of the optimal state. - - Example : ``E = ADD.get("APosterioriStandardDeviations")[-1]`` - - APosterioriVariances - *List of matrices*. Each element is an *a posteriori* error variance matrix - of the optimal state. - - Example : ``V = ADD.get("APosterioriVariances")[-1]`` - - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. 
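Two bounds conventions appear in these pages: ``None`` for a missing bound in some algorithms, and extreme values such as ``1.e99`` here, where ``None`` is not allowed. The hedged helper below (an illustrative function, not an ADAO one) translates the first convention into the second.

```python
# Hedged helper (illustrative, not an ADAO function): translate bounds
# written with None for "no bound" into the extreme-value convention
# required here, where None is not allowed.
NO_BOUND = 1.e99  # plays the role of "no bound", as in the documented example

def bounds_with_extremes(bounds_with_none):
    converted = []
    for low, high in bounds_with_none:
        converted.append([
            -NO_BOUND if low is None else low,
            NO_BOUND if high is None else high,
        ])
    return converted

# Reproduces the documented example value for "Bounds":
bounds = bounds_with_extremes([[2., 5.], [1.e-2, 10.], [-30., None], [None, None]])
print(bounds)
```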
- - Example : ``bma = ADD.get("BMA")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. + .. include:: snippets/APosterioriCorrelations.rst - Example : ``J = ADD.get("CostFunctionJ")[:]`` + .. include:: snippets/APosterioriCovariance.rst - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. + .. include:: snippets/APosterioriStandardDeviations.rst - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` + .. include:: snippets/APosterioriVariances.rst - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. + .. include:: snippets/BMA.rst - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` + .. include:: snippets/CostFunctionJ.rst - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. + .. include:: snippets/CostFunctionJb.rst - Example : ``Xs = ADD.get("CurrentState")[:]`` + .. include:: snippets/CostFunctionJo.rst - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. + .. include:: snippets/CurrentState.rst - Example : ``d = ADD.get("Innovation")[-1]`` + .. include:: snippets/Innovation.rst See also ++++++++ diff --git a/doc/en/ref_assimilation_keywords.rst b/doc/en/ref_assimilation_keywords.rst index 2431dbb..eb49c26 100644 --- a/doc/en/ref_assimilation_keywords.rst +++ b/doc/en/ref_assimilation_keywords.rst @@ -29,16 +29,9 @@ List of commands and keywords for an ADAO calculation case .. index:: single: Algorithm .. index:: single: AlgorithmParameters -.. index:: single: Background -.. index:: single: BackgroundError .. index:: single: ControlInput .. index:: single: Debug -.. index:: single: EvolutionError -.. index:: single: EvolutionModel .. 
index:: single: InputVariables -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator .. index:: single: Observer .. index:: single: Observers .. index:: single: Observer Template @@ -72,16 +65,9 @@ The different commands are the following: :ref:`section_ref_options_Algorithm_Parameters` for the detailed use of this command part. - Background - *Required command*. This indicates the background or initial vector used, - previously noted as :math:`\mathbf{x}^b`. Its value is defined as a - "*Vector*" type object. + .. include:: snippets/Background.rst - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. + .. include:: snippets/BackgroundError.rst ControlInput *Optional command*. This indicates the control vector used to force the @@ -94,47 +80,20 @@ The different commands are the following: information. The choices are limited between 0 (for False) and 1 (for True). - EvolutionError - *Optional command*. This indicates the evolution error covariance matrix, - usually noted as :math:`\mathbf{Q}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - EvolutionModel - *Optional command*. This indicates the evolution model operator, usually - noted :math:`M`, which describes an elementary step of evolution. Its value - is defined as a "*Function*" type object or a "*Matrix*" type one. In the - case of "*Function*" type, different functional forms can be used, as - described in the section :ref:`section_ref_operator_requirements`. If there - is some control :math:`U` included in the evolution model, the operator has - to be applied to a pair :math:`(X,U)`. + .. include:: snippets/EvolutionError.rst + + .. 
include:: snippets/EvolutionModel.rst InputVariables *Optional command*. This command allows to indicates the name and size of physical variables that are bundled together in the state vector. This information is dedicated to data processed inside an algorithm. - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control - :math:`U` included in the observation, the operator has to be applied to a - pair :math:`(X,U)`. + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst Observers *Optional command*. This command allows to set internal observers, that are diff --git a/doc/en/ref_checking_keywords.rst b/doc/en/ref_checking_keywords.rst index 7b40f75..f381607 100644 --- a/doc/en/ref_checking_keywords.rst +++ b/doc/en/ref_checking_keywords.rst @@ -29,12 +29,7 @@ List of commands and keywords for an ADAO checking case .. index:: single: Algorithm .. index:: single: AlgorithmParameters -.. index:: single: CheckingPoint -.. index:: single: BackgroundError .. 
index:: single: Debug -.. index:: single: Observation -.. index:: single: ObservationError -.. index:: single: ObservationOperator .. index:: single: Observer .. index:: single: Observers .. index:: single: Observer Template @@ -64,43 +59,20 @@ The different commands are the following: :ref:`section_ref_options_Algorithm_Parameters` for the detailed use of this command part. - CheckingPoint - *Required command*. This indicates the vector used as the state around which - to perform the required check, noted :math:`\mathbf{x}` and similar to the - background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" type object. + .. include:: snippets/CheckingPoint.rst - BackgroundError - *Required command*. This indicates the background error covariance matrix, - previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" - type object, a "*ScalarSparseMatrix*" type object, or a - "*DiagonalSparseMatrix*" type object. + .. include:: snippets/BackgroundError.rst Debug *Optional command*. This define the level of trace and intermediary debug information. The choices are limited between 0 (for False) and 1 (for True). - Observation - *Required command*. This indicates the observation vector used for data - assimilation or optimization, previously noted as :math:`\mathbf{y}^o`. It - is defined as a "*Vector*" or a *VectorSerie* type object. - - ObservationError - *Required command*. This indicates the observation error covariance matrix, - previously noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type - object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" - type object. - - ObservationOperator - *Required command*. This indicates the observation operator, previously - noted :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to - results :math:`\mathbf{y}` to be compared to observations - :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or - a "*Matrix*" type one. 
In the case of "*Function*" type, different - functional forms can be used, as described in the section - :ref:`section_ref_operator_requirements`. If there is some control :math:`U` - included in the observation, the operator has to be applied to a pair - :math:`(X,U)`. + .. include:: snippets/Observation.rst + + .. include:: snippets/ObservationError.rst + + .. include:: snippets/ObservationOperator.rst Observers *Optional command*. This command allows to set internal observers, that are diff --git a/doc/en/ref_output_variables.rst b/doc/en/ref_output_variables.rst index 7ce3dc0..60a42ef 100644 --- a/doc/en/ref_output_variables.rst +++ b/doc/en/ref_output_variables.rst @@ -158,234 +158,82 @@ boolean "* * Stored" associated with it in the edition of the ADAO case. Inventory of potentially available information at the output ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -.. index:: single: Dry -.. index:: single: Forecast - -The set of potentially available information at the output is listed here -regardless of algorithms, for inventory. +The main set of potentially available information at the output is listed here +regardless of algorithms, for inventory. One has to look directly at the algorithm +details to get the full inventory. The optimal state is an information that is always naturally available after an optimization or a data assimilation calculation. It is indicated by the following keywords: - Analysis - *List of vectors*. Each element is an optimal state :math:`\mathbf{x}*` in - optimization or an analysis :math:`\mathbf{x}^a` in data assimilation. - - Example : ``Xa = ADD.get("Analysis")[-1]`` + .. include:: snippets/Analysis.rst The following variables are input variables. They are made available to the user at the output in order to facilitate the writing of post-processing procedures, and are conditioned by a user request using a boolean "*Stored*" -at the input. 
- - Background - *Vector*, whose availability is conditioned by "*Stored*" at the input. It - is the background vector :math:`\mathbf{x}^b`. - - Example : ``Xb = ADD.get("Background")`` - - BackgroundError - *Matrix*, whose availability is conditioned by "*Stored*" at the input. It - is the matrix :math:`\mathbf{B}` of *a priori* background errors - covariances. +at the input. All these returned input variables can be obtained with the +standard command ".get(...)", which returns the unique object given as input. - Example : ``B = ADD.get("BackgroundError")`` + .. include:: snippets/Background.rst - EvolutionError - *Matrix*, whose availability is conditioned by "*Stored*" at the input. It - is the matrix :math:`\mathbf{M}` of *a priori* evolution errors covariances. + .. include:: snippets/BackgroundError.rst - Example : ``M = ADD.get("EvolutionError")`` + .. include:: snippets/EvolutionError.rst - Observation - *Vector*, whose availability is conditioned by "*Stored*" at the input. It - is the observation vector :math:`\mathbf{y}^o`. + .. include:: snippets/Observation.rst - Example : ``Yo = ADD.get("Observation")`` - - ObservationError - *Matrix*, whose availability is conditioned by "*Stored*" at the input. It - is the matrix :math:`\mathbf{R}` of *a priori* observation errors - covariances. - - Example : ``R = ADD.get("ObservationError")`` + .. include:: snippets/ObservationError.rst All other information are conditioned by the algorithm and/or the user requests -of availability. They are the following, in alphabetical order: - - APosterioriCorrelations - *List of matrices*. Each element is an *a posteriori* error correlations - matrix of the optimal state, coming from the :math:`\mathbf{A}*` covariance - matrix. - - Example : ``C = ADD.get("APosterioriCorrelations")[-1]`` - - APosterioriCovariance - *List of matrices*. Each element is an *a posteriori* error covariance - matrix :math:`\mathbf{A}*` of the optimal state. 
- - Example : ``A = ADD.get("APosterioriCovariance")[-1]`` - - APosterioriStandardDeviations - *List of matrices*. Each element is an *a posteriori* error standard errors - diagonal matrix of the optimal state, coming from the :math:`\mathbf{A}*` - covariance matrix. - - Example : ``S = ADD.get("APosterioriStandardDeviations")[-1]`` - - APosterioriVariances - *List of matrices*. Each element is an *a posteriori* error variances - diagonal matrix of the optimal state, coming from the :math:`\mathbf{A}*` - covariance matrix. - - Example : ``V = ADD.get("APosterioriVariances")[-1]`` - - BMA - *List of vectors*. Each element is a vector of difference between the - background and the optimal state. - - Example : ``bma = ADD.get("BMA")[-1]`` - - CostFunctionJ - *List of values*. Each element is a value of the error function :math:`J`. - - Example : ``J = ADD.get("CostFunctionJ")[:]`` - - CostFunctionJb - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. - - Example : ``Jb = ADD.get("CostFunctionJb")[:]`` - - CostFunctionJo - *List of values*. Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. - - Example : ``Jo = ADD.get("CostFunctionJo")[:]`` - - CostFunctionJAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J`. - At each step, the value corresponds to the optimal state found from the - beginning. - - Example : ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]`` - - CostFunctionJbAtCurrentOptimum - *List of values*. Each element is a value of the error function :math:`J^b`, - that is of the background difference part. At each step, the value - corresponds to the optimal state found from the beginning. - - Example : ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]`` - - CostFunctionJoAtCurrentOptimum - *List of values*. 
Each element is a value of the error function :math:`J^o`, - that is of the observation difference part. At each step, the value - corresponds to the optimal state found from the beginning. - - Example : ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]`` - - CurrentOptimum - *List of vectors*. Each element is the optimal state obtained at the current - step of the optimization algorithm. It is not necessarily the last state. - - Example : ``Xo = ADD.get("CurrentOptimum")[:]`` - - CurrentState - *List of vectors*. Each element is a usual state vector used during the - optimization algorithm procedure. - - Example : ``Xs = ADD.get("CurrentState")[:]`` - - IndexOfOptimum - *List of integers*. Each element is the iteration index of the optimum - obtained at the current step the optimization algorithm. It is not - necessarily the number of the last iteration. - - Example : ``i = ADD.get("MahalanobisConsistency")[-1]`` - - Innovation - *List of vectors*. Each element is an innovation vector, which is in static - the difference between the optimal and the background, and in dynamic the - evolution increment. - - Example : ``d = ADD.get("Innovation")[-1]`` +of availability. The main ones are the following, in alphabetical order: - InnovationAtCurrentState - *List of vectors*. Each element is an innovation vector at current state. + .. include:: snippets/APosterioriCorrelations.rst - Example : ``ds = ADD.get("InnovationAtCurrentState")[-1]`` + .. include:: snippets/APosterioriCovariance.rst - MahalanobisConsistency - *List of values*. Each element is a value of the Mahalanobis quality - indicator. + .. include:: snippets/APosterioriStandardDeviations.rst - Example : ``m = ADD.get("MahalanobisConsistency")[-1]`` + .. include:: snippets/APosterioriVariances.rst - OMA - *List of vectors*. Each element is a vector of difference between the - observation and the optimal state in the observation space. + .. 
include:: snippets/BMA.rst - Example : ``oma = ADD.get("OMA")[-1]`` + .. include:: snippets/CostFunctionJ.rst - OMB - *List of vectors*. Each element is a vector of difference between the - observation and the background state in the observation space. + .. include:: snippets/CostFunctionJb.rst - Example : ``omb = ADD.get("OMB")[-1]`` + .. include:: snippets/CostFunctionJo.rst - Residu - *List of values*. Each element is the value of the particular residu - verified during a checking algorithm, in the order of the tests. + .. include:: snippets/CostFunctionJAtCurrentOptimum.rst - Example : ``r = ADD.get("Residu")[:]`` + .. include:: snippets/CostFunctionJbAtCurrentOptimum.rst - SigmaBck2 - *List of values*. Each element is a value of the quality indicator - :math:`(\sigma^b)^2` of the background part. + .. include:: snippets/CostFunctionJoAtCurrentOptimum.rst - Example : ``sb2 = ADD.get("SigmaBck")[-1]`` + .. include:: snippets/CurrentOptimum.rst - SigmaObs2 - *List of values*. Each element is a value of the quality indicator - :math:`(\sigma^o)^2` of the observation part. + .. include:: snippets/CurrentState.rst - Example : ``so2 = ADD.get("SigmaObs")[-1]`` + .. include:: snippets/IndexOfOptimum.rst - SimulatedObservationAtBackground - *List of vectors*. Each element is a vector of observation simulated from - the background :math:`\mathbf{x}^b`. It is the forecast using the - background, and it is sometimes called "*Dry*". + .. include:: snippets/Innovation.rst - Example : ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]`` + .. include:: snippets/InnovationAtCurrentState.rst - SimulatedObservationAtCurrentOptimum - *List of vectors*. Each element is a vector of observation simulated from - the optimal state obtained at the current step the optimization algorithm, - that is, in the observation space. + .. include:: snippets/OMA.rst - Example : ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]`` + .. 
include:: snippets/OMB.rst - SimulatedObservationAtCurrentState - *List of vectors*. Each element is an observed vector at the current state, - that is, in the observation space. + .. include:: snippets/Residu.rst - Example : ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]`` + .. include:: snippets/SimulatedObservationAtBackground.rst - SimulatedObservationAtOptimum - *List of vectors*. Each element is a vector of observation simulated from - the analysis or the optimal state :math:`\mathbf{x}^a`. It is the forecast - using the analysis or the optimal state, and it is sometimes called - "*Forecast*". + .. include:: snippets/SimulatedObservationAtCurrentOptimum.rst - Example : ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]`` + .. include:: snippets/SimulatedObservationAtCurrentState.rst - SimulationQuantiles - *List of vectors*. Each element is a vector corresponding to the observed - state which realize the required quantile, in the same order than the - quantiles required by the user. + .. include:: snippets/SimulatedObservationAtOptimum.rst - Example : ``sQuantiles = ADD.get("SimulationQuantiles")[:]`` + .. include:: snippets/SimulationQuantiles.rst .. [#] For more information on PARAVIS, see the *PARAVIS module* and its integrated help available from the main menu *Help* of the SALOME platform. diff --git a/doc/en/snippets/APosterioriCorrelations.rst b/doc/en/snippets/APosterioriCorrelations.rst new file mode 100644 index 0000000..eba27eb --- /dev/null +++ b/doc/en/snippets/APosterioriCorrelations.rst @@ -0,0 +1,9 @@ +.. index:: single: APosterioriCorrelations + +APosterioriCorrelations + *List of matrices*. Each element is an *a posteriori* error correlations + matrix of the optimal state, coming from the :math:`\mathbf{A}*` covariance + matrix. 
+ + Example : + ``C = ADD.get("APosterioriCorrelations")[-1]`` diff --git a/doc/en/snippets/APosterioriCovariance.rst b/doc/en/snippets/APosterioriCovariance.rst new file mode 100644 index 0000000..5c13b59 --- /dev/null +++ b/doc/en/snippets/APosterioriCovariance.rst @@ -0,0 +1,8 @@ +.. index:: single: APosterioriCovariance + +APosterioriCovariance + *List of matrices*. Each element is an *a posteriori* error covariance + matrix :math:`\mathbf{A}*` of the optimal state. + + Example : + ``A = ADD.get("APosterioriCovariance")[-1]`` diff --git a/doc/en/snippets/APosterioriStandardDeviations.rst b/doc/en/snippets/APosterioriStandardDeviations.rst new file mode 100644 index 0000000..5ec2ddb --- /dev/null +++ b/doc/en/snippets/APosterioriStandardDeviations.rst @@ -0,0 +1,9 @@ +.. index:: single: APosterioriStandardDeviations + +APosterioriStandardDeviations + *List of matrices*. Each element is an *a posteriori* error standard + deviations diagonal matrix of the optimal state, coming from the + :math:`\mathbf{A}*` covariance matrix. + + Example : + ``S = ADD.get("APosterioriStandardDeviations")[-1]`` diff --git a/doc/en/snippets/APosterioriVariances.rst b/doc/en/snippets/APosterioriVariances.rst new file mode 100644 index 0000000..89e3141 --- /dev/null +++ b/doc/en/snippets/APosterioriVariances.rst @@ -0,0 +1,9 @@ +.. index:: single: APosterioriVariances + +APosterioriVariances + *List of matrices*. Each element is an *a posteriori* error variances + diagonal matrix of the optimal state, coming from the + :math:`\mathbf{A}*` covariance matrix. + + Example : + ``V = ADD.get("APosterioriVariances")[-1]`` diff --git a/doc/en/snippets/AmplitudeOfInitialDirection.rst b/doc/en/snippets/AmplitudeOfInitialDirection.rst new file mode 100644 index 0000000..b473d1c --- /dev/null +++ b/doc/en/snippets/AmplitudeOfInitialDirection.rst @@ -0,0 +1,9 @@ +.. 
index:: single: AmplitudeOfInitialDirection + +AmplitudeOfInitialDirection + This key indicates the scaling of the initial perturbation built as a vector + used for the directional derivative around the nominal checking point. The + default is 1, that means no scaling. + + Example : + ``{"AmplitudeOfInitialDirection":0.5}`` diff --git a/doc/en/snippets/Analysis.rst b/doc/en/snippets/Analysis.rst new file mode 100644 index 0000000..1e1d9de --- /dev/null +++ b/doc/en/snippets/Analysis.rst @@ -0,0 +1,9 @@ +.. index:: single: Analysis + +Analysis + *List of vectors*. Each element of this variable is an optimal state + :math:`\mathbf{x}*` in optimization or an analysis :math:`\mathbf{x}^a` in + data assimilation. + + Example : + ``Xa = ADD.get("Analysis")[-1]`` diff --git a/doc/en/snippets/BMA.rst b/doc/en/snippets/BMA.rst new file mode 100644 index 0000000..69b0ec8 --- /dev/null +++ b/doc/en/snippets/BMA.rst @@ -0,0 +1,8 @@ +.. index:: single: BMA + +BMA + *List of vectors*. Each element is a vector of difference between the + background and the optimal state. + + Example : + ``bma = ADD.get("BMA")[-1]`` diff --git a/doc/en/snippets/Background.rst b/doc/en/snippets/Background.rst new file mode 100644 index 0000000..41fdb54 --- /dev/null +++ b/doc/en/snippets/Background.rst @@ -0,0 +1,7 @@ +.. index:: single: Background + +Background + *Required command*. The variable indicates the background or initial vector + used, previously noted as :math:`\mathbf{x}^b`. Its value is defined as a + "*Vector*" or a "*VectorSerie*" type object. Its availability in output is + conditioned by the boolean "*Stored*" associated with input. diff --git a/doc/en/snippets/BackgroundError.rst b/doc/en/snippets/BackgroundError.rst new file mode 100644 index 0000000..b086de1 --- /dev/null +++ b/doc/en/snippets/BackgroundError.rst @@ -0,0 +1,9 @@ +.. index:: single: BackgroundError + +BackgroundError + *Required command*. 
This indicates the background error covariance matrix, + previously noted as :math:`\mathbf{B}`. Its value is defined as a "*Matrix*" + type object, a "*ScalarSparseMatrix*" type object, or a + "*DiagonalSparseMatrix*" type object, as described in detail in the section + :ref:`section_ref_covariance_requirements`. Its availability in output is + conditioned by the boolean "*Stored*" associated with input. diff --git a/doc/en/snippets/BoundsWithExtremes.rst b/doc/en/snippets/BoundsWithExtremes.rst new file mode 100644 index 0000000..cc7259e --- /dev/null +++ b/doc/en/snippets/BoundsWithExtremes.rst @@ -0,0 +1,10 @@ +.. index:: single: Bounds + +Bounds + This key allows to define upper and lower bounds for every state variable + being optimized. Bounds have to be given by a list of list of pairs of + lower/upper bounds for each variable, with extreme values every time there + is no bound (``None`` is not allowed when there is no bound). + + Example : + ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}`` diff --git a/doc/en/snippets/BoundsWithNone.rst b/doc/en/snippets/BoundsWithNone.rst new file mode 100644 index 0000000..1fb608c --- /dev/null +++ b/doc/en/snippets/BoundsWithNone.rst @@ -0,0 +1,11 @@ +.. index:: single: Bounds + +Bounds + This key allows to define upper and lower bounds for every state variable + being optimized. Bounds have to be given by a list of list of pairs of + lower/upper bounds for each variable, with possibly ``None`` every time + there is no bound. The bounds can always be specified, but they are taken + into account only by the constrained optimizers. + + Example : + ``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}`` diff --git a/doc/en/snippets/CheckingPoint.rst b/doc/en/snippets/CheckingPoint.rst new file mode 100644 index 0000000..4a5895d --- /dev/null +++ b/doc/en/snippets/CheckingPoint.rst @@ -0,0 +1,8 @@ +.. index:: single: CheckingPoint + +CheckingPoint + *Required command*. 
The variable indicates the vector used as the state + around which to perform the required check, noted :math:`\mathbf{x}` and + similar to the background :math:`\mathbf{x}^b`. It is defined as a "*Vector*" + type object. Its availability in output is conditioned by the boolean + "*Stored*" associated with input. diff --git a/doc/en/snippets/ConstrainedBy.rst b/doc/en/snippets/ConstrainedBy.rst new file mode 100644 index 0000000..e422edc --- /dev/null +++ b/doc/en/snippets/ConstrainedBy.rst @@ -0,0 +1,9 @@ +.. index:: single: ConstrainedBy + +ConstrainedBy + This key allows to choose the method to take into account the bounds + constraints. The only one available is the "EstimateProjection", which + projects the current state estimate on the bounds constraints. + + Example : + ``{"ConstrainedBy":"EstimateProjection"}`` diff --git a/doc/en/snippets/CostDecrementTolerance.rst b/doc/en/snippets/CostDecrementTolerance.rst new file mode 100644 index 0000000..e3846ef --- /dev/null +++ b/doc/en/snippets/CostDecrementTolerance.rst @@ -0,0 +1,10 @@ +.. index:: single: CostDecrementTolerance + +CostDecrementTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when the cost function decreases less than + this tolerance at the last step. The default is 1.e-7, and it is + recommended to adapt it to the needs on real problems. + + Example : + ``{"CostDecrementTolerance":1.e-7}`` diff --git a/doc/en/snippets/CostDecrementTolerance_6.rst b/doc/en/snippets/CostDecrementTolerance_6.rst new file mode 100644 index 0000000..80db1b0 --- /dev/null +++ b/doc/en/snippets/CostDecrementTolerance_6.rst @@ -0,0 +1,9 @@ +.. index:: single: CostDecrementTolerance + +CostDecrementTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when the cost function decreases less than + this tolerance at the last step. 
The default is 1.e-6, and it is + recommended to adapt it to the needs on real problems. + + Example : ``{"CostDecrementTolerance":1.e-6}`` diff --git a/doc/en/snippets/CostFunctionJ.rst b/doc/en/snippets/CostFunctionJ.rst new file mode 100644 index 0000000..17e479c --- /dev/null +++ b/doc/en/snippets/CostFunctionJ.rst @@ -0,0 +1,8 @@ +.. index:: single: CostFunctionJ + +CostFunctionJ + *List of values*. Each element is a value of the chosen error function + :math:`J`. + + Example : + ``J = ADD.get("CostFunctionJ")[:]`` diff --git a/doc/en/snippets/CostFunctionJAtCurrentOptimum.rst b/doc/en/snippets/CostFunctionJAtCurrentOptimum.rst new file mode 100644 index 0000000..62b01c0 --- /dev/null +++ b/doc/en/snippets/CostFunctionJAtCurrentOptimum.rst @@ -0,0 +1,9 @@ +.. index:: single: CostFunctionJAtCurrentOptimum + +CostFunctionJAtCurrentOptimum + *List of values*. Each element is a value of the error function :math:`J`. + At each step, the value corresponds to the optimal state found from the + beginning. + + Example : + ``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]`` diff --git a/doc/en/snippets/CostFunctionJb.rst b/doc/en/snippets/CostFunctionJb.rst new file mode 100644 index 0000000..14c586a --- /dev/null +++ b/doc/en/snippets/CostFunctionJb.rst @@ -0,0 +1,9 @@ +.. index:: single: CostFunctionJb + +CostFunctionJb + *List of values*. Each element is a value of the error function :math:`J^b`, + that is of the background difference part. If this part does not exist in the + error function, its value is zero. + + Example : + ``Jb = ADD.get("CostFunctionJb")[:]`` diff --git a/doc/en/snippets/CostFunctionJbAtCurrentOptimum.rst b/doc/en/snippets/CostFunctionJbAtCurrentOptimum.rst new file mode 100644 index 0000000..5dd8e5a --- /dev/null +++ b/doc/en/snippets/CostFunctionJbAtCurrentOptimum.rst @@ -0,0 +1,10 @@ +.. index:: single: CostFunctionJbAtCurrentOptimum + +CostFunctionJbAtCurrentOptimum + *List of values*. 
Each element is a value of the error function :math:`J^b`. At + each step, the value corresponds to the optimal state found from the + beginning. If this part does not exist in the error function, its value is + zero. + + Example : + ``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]`` diff --git a/doc/en/snippets/CostFunctionJo.rst b/doc/en/snippets/CostFunctionJo.rst new file mode 100644 index 0000000..f18e35a --- /dev/null +++ b/doc/en/snippets/CostFunctionJo.rst @@ -0,0 +1,8 @@ +.. index:: single: CostFunctionJo + +CostFunctionJo + *List of values*. Each element is a value of the error function :math:`J^o`, + that is of the observation difference part. + + Example : + ``Jo = ADD.get("CostFunctionJo")[:]`` diff --git a/doc/en/snippets/CostFunctionJoAtCurrentOptimum.rst b/doc/en/snippets/CostFunctionJoAtCurrentOptimum.rst new file mode 100644 index 0000000..27cd6e4 --- /dev/null +++ b/doc/en/snippets/CostFunctionJoAtCurrentOptimum.rst @@ -0,0 +1,9 @@ +.. index:: single: CostFunctionJoAtCurrentOptimum + +CostFunctionJoAtCurrentOptimum + *List of values*. Each element is a value of the error function :math:`J^o`, + that is of the observation difference part. At each step, the value + corresponds to the optimal state found from the beginning. + + Example : + ``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]`` diff --git a/doc/en/snippets/CurrentOptimum.rst b/doc/en/snippets/CurrentOptimum.rst new file mode 100644 index 0000000..7e92f52 --- /dev/null +++ b/doc/en/snippets/CurrentOptimum.rst @@ -0,0 +1,8 @@ +.. index:: single: CurrentOptimum + +CurrentOptimum + *List of vectors*. Each element is the optimal state obtained at the current + step of the optimization algorithm. It is not necessarily the last state. + + Example : + ``Xo = ADD.get("CurrentOptimum")[:]`` diff --git a/doc/en/snippets/CurrentState.rst b/doc/en/snippets/CurrentState.rst new file mode 100644 index 0000000..37720a4 --- /dev/null +++ b/doc/en/snippets/CurrentState.rst @@ -0,0 +1,8 @@ +.. 
index:: single: CurrentState + +CurrentState + *List of vectors*. Each element is a usual state vector used during the + iterative algorithm procedure. + + Example : + ``Xs = ADD.get("CurrentState")[:]`` diff --git a/doc/en/snippets/EpsilonMinimumExponent.rst b/doc/en/snippets/EpsilonMinimumExponent.rst new file mode 100644 index 0000000..bbe3158 --- /dev/null +++ b/doc/en/snippets/EpsilonMinimumExponent.rst @@ -0,0 +1,11 @@ +.. index:: single: EpsilonMinimumExponent + +EpsilonMinimumExponent + This key indicates the minimal exponent value of the power of 10 coefficient + to be used to decrease the increment multiplier. The default is -8, and it + has to be between 0 and -20. For example, its default value leads to + calculate the residue of the scalar product formula with a fixed increment + multiplied from 1.e0 to 1.e-8. + + Example : + ``{"EpsilonMinimumExponent":-12}`` diff --git a/doc/en/snippets/EstimationOf.rst b/doc/en/snippets/EstimationOf.rst new file mode 100644 index 0000000..a999b54 --- /dev/null +++ b/doc/en/snippets/EstimationOf.rst @@ -0,0 +1,9 @@ +.. index:: single: EstimationOf + +EstimationOf + This key allows to choose the type of estimation to be performed. It can be + either state-estimation, with a value of "State", or parameter-estimation, + with a value of "Parameters". The default choice is "State". + + Example : + ``{"EstimationOf":"Parameters"}`` diff --git a/doc/en/snippets/EvolutionError.rst b/doc/en/snippets/EvolutionError.rst new file mode 100644 index 0000000..6721362 --- /dev/null +++ b/doc/en/snippets/EvolutionError.rst @@ -0,0 +1,9 @@ +.. index:: single: EvolutionError + +EvolutionError + *Matrix*. The variable indicates the evolution error covariance matrix, + usually noted as :math:`\mathbf{Q}`. It is defined as a "*Matrix*" type + object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*" + type object, as described in detail in the section + :ref:`section_ref_covariance_requirements`. 
Its availability in output is + conditioned by the boolean "*Stored*" associated with input. diff --git a/doc/en/snippets/EvolutionModel.rst b/doc/en/snippets/EvolutionModel.rst new file mode 100644 index 0000000..acca1cb --- /dev/null +++ b/doc/en/snippets/EvolutionModel.rst @@ -0,0 +1,10 @@ +.. index:: single: EvolutionModel + +EvolutionModel + *Operator*. The variable indicates the evolution model operator, usually + noted :math:`M`, which describes an elementary step of evolution. Its value + is defined as a "*Function*" type object or a "*Matrix*" type one. In the + case of "*Function*" type, different functional forms can be used, as + described in the section :ref:`section_ref_operator_requirements`. If there + is some control :math:`U` included in the evolution model, the operator has + to be applied to a pair :math:`(X,U)`. diff --git a/doc/en/snippets/GradientNormTolerance.rst b/doc/en/snippets/GradientNormTolerance.rst new file mode 100644 index 0000000..e77c7ee --- /dev/null +++ b/doc/en/snippets/GradientNormTolerance.rst @@ -0,0 +1,10 @@ +.. index:: single: GradientNormTolerance + +GradientNormTolerance + This key indicates a limit value, leading to stop successfully the + iterative optimization process when the norm of the gradient is under this + limit. It is only used for non-constrained optimizers. The default is + 1.e-5 and it is not recommended to change it. + + Example : + ``{"GradientNormTolerance":1.e-5}`` diff --git a/doc/en/snippets/IndexOfOptimum.rst b/doc/en/snippets/IndexOfOptimum.rst new file mode 100644 index 0000000..785b173 --- /dev/null +++ b/doc/en/snippets/IndexOfOptimum.rst @@ -0,0 +1,9 @@ +.. index:: single: IndexOfOptimum + +IndexOfOptimum + *List of integers*. Each element is the iteration index of the optimum + obtained at the current step of the optimization algorithm. It is not + necessarily the number of the last iteration. 
+ + Example : + ``i = ADD.get("IndexOfOptimum")[-1]`` diff --git a/doc/en/snippets/InitialDirection.rst b/doc/en/snippets/InitialDirection.rst new file mode 100644 index 0000000..b1df33d --- /dev/null +++ b/doc/en/snippets/InitialDirection.rst @@ -0,0 +1,10 @@ +.. index:: single: InitialDirection + +InitialDirection + This key indicates the vector direction used for the directional derivative + around the nominal checking point. It has to be a vector. If not specified, + this direction defaults to a random perturbation around zero of the same + vector size as the checking point. + + Example : + ``{"InitialDirection":[0.1,0.1,100.,3.]}`` diff --git a/doc/en/snippets/Innovation.rst b/doc/en/snippets/Innovation.rst new file mode 100644 index 0000000..5a09a99 --- /dev/null +++ b/doc/en/snippets/Innovation.rst @@ -0,0 +1,9 @@ +.. index:: single: Innovation + +Innovation + *List of vectors*. Each element is an innovation vector, which is, in the + static case, the difference between the optimal state and the background + and, in the dynamic case, the evolution increment. + + Example : + ``d = ADD.get("Innovation")[-1]`` diff --git a/doc/en/snippets/InnovationAtCurrentState.rst b/doc/en/snippets/InnovationAtCurrentState.rst new file mode 100644 index 0000000..0ace580 --- /dev/null +++ b/doc/en/snippets/InnovationAtCurrentState.rst @@ -0,0 +1,7 @@ +.. index:: single: InnovationAtCurrentState + +InnovationAtCurrentState + *List of vectors*. Each element is an innovation vector at current state. + + Example : + ``ds = ADD.get("InnovationAtCurrentState")[-1]`` diff --git a/doc/en/snippets/MahalanobisConsistency.rst b/doc/en/snippets/MahalanobisConsistency.rst new file mode 100644 index 0000000..006b920 --- /dev/null +++ b/doc/en/snippets/MahalanobisConsistency.rst @@ -0,0 +1,8 @@ +.. index:: single: MahalanobisConsistency + +MahalanobisConsistency + *List of values*. Each element is a value of the Mahalanobis quality + indicator. 
+ + Example : + ``m = ADD.get("MahalanobisConsistency")[-1]`` diff --git a/doc/en/snippets/MaximumNumberOfFunctionEvaluations.rst b/doc/en/snippets/MaximumNumberOfFunctionEvaluations.rst new file mode 100644 index 0000000..fbe814b --- /dev/null +++ b/doc/en/snippets/MaximumNumberOfFunctionEvaluations.rst @@ -0,0 +1,11 @@ +.. index:: single: MaximumNumberOfFunctionEvaluations + +MaximumNumberOfFunctionEvaluations + This key indicates the maximum number of evaluations of the cost function to + be optimized. The default is 15000, which is an arbitrary limit. It is then + recommended to adapt this parameter to the needs on real problems. For some + optimizers, the effective number of function evaluations can be slightly + different from the limit due to algorithm internal control requirements. + + Example : + ``{"MaximumNumberOfFunctionEvaluations":50}`` diff --git a/doc/en/snippets/MaximumNumberOfSteps.rst b/doc/en/snippets/MaximumNumberOfSteps.rst new file mode 100644 index 0000000..1f67c8a --- /dev/null +++ b/doc/en/snippets/MaximumNumberOfSteps.rst @@ -0,0 +1,12 @@ +.. index:: single: MaximumNumberOfSteps + +MaximumNumberOfSteps + This key indicates the maximum number of iterations allowed for iterative + optimization. The default is 15000, which is very similar to no limit on + iterations. It is then recommended to adapt this parameter to the needs on + real problems. For some optimizers, the effective stopping step can be + slightly different from the limit due to algorithm internal control + requirements. + + Example : + ``{"MaximumNumberOfSteps":100}`` diff --git a/doc/en/snippets/MaximumNumberOfSteps_50.rst b/doc/en/snippets/MaximumNumberOfSteps_50.rst new file mode 100644 index 0000000..d64ae08 --- /dev/null +++ b/doc/en/snippets/MaximumNumberOfSteps_50.rst @@ -0,0 +1,9 @@ +.. index:: single: MaximumNumberOfSteps + +MaximumNumberOfSteps + This key indicates the maximum number of iterations allowed for iterative + optimization. 
The default is 50, which is an arbitrary limit. It is then + recommended to adapt this parameter to the needs on real problems. + + Example : + ``{"MaximumNumberOfSteps":50}`` diff --git a/doc/en/snippets/Minimizer_DFO.rst b/doc/en/snippets/Minimizer_DFO.rst new file mode 100644 index 0000000..9b59de8 --- /dev/null +++ b/doc/en/snippets/Minimizer_DFO.rst @@ -0,0 +1,18 @@ +.. index:: single: Minimizer + +Minimizer + This key allows to choose the optimization minimizer. The default choice is + "BOBYQA", and the possible ones are + "BOBYQA" (minimization with or without constraints by quadratic approximation [Powell09]_), + "COBYLA" (minimization with or without constraints by linear approximation [Powell94]_ [Powell98]_), + "NEWUOA" (minimization with or without constraints by iterative quadratic approximation [Powell04]_), + "POWELL" (unconstrained minimization using conjugate directions [Powell64]_), + "SIMPLEX" (minimization with or without constraints using the Nelder-Mead simplex algorithm [Nelder65]_), + "SUBPLEX" (minimization with or without constraints using Nelder-Mead on a sequence of subspaces [Rowan90]_). + Remark: the "POWELL" method performs a dual outer/inner loop optimization, + leading to less control on the number of cost function evaluations because + it is the outer loop limit that is controlled. If precise control on this + number of cost function evaluations is required, choose another minimizer. + + Example : + ``{"Minimizer":"BOBYQA"}`` diff --git a/doc/en/snippets/NumberOfMembers.rst b/doc/en/snippets/NumberOfMembers.rst new file mode 100644 index 0000000..429af00 --- /dev/null +++ b/doc/en/snippets/NumberOfMembers.rst @@ -0,0 +1,9 @@ +.. index:: single: NumberOfMembers + +NumberOfMembers + This key indicates the number of members used to realize the ensemble method. + The default is 100, and it is recommended to adapt it to the needs on real + problems. 
+ + Example : + ``{"NumberOfMembers":100}`` diff --git a/doc/en/snippets/NumberOfPrintedDigits.rst b/doc/en/snippets/NumberOfPrintedDigits.rst new file mode 100644 index 0000000..deb9077 --- /dev/null +++ b/doc/en/snippets/NumberOfPrintedDigits.rst @@ -0,0 +1,8 @@ +.. index:: single: NumberOfPrintedDigits + +NumberOfPrintedDigits + This key indicates the number of digits of precision for floating point + printed output. The default is 5, with a minimum of 0. + + Example : + ``{"NumberOfPrintedDigits":5}`` diff --git a/doc/en/snippets/NumberOfRepetition.rst b/doc/en/snippets/NumberOfRepetition.rst new file mode 100644 index 0000000..55a05e7 --- /dev/null +++ b/doc/en/snippets/NumberOfRepetition.rst @@ -0,0 +1,8 @@ +.. index:: single: NumberOfRepetition + +NumberOfRepetition + This key indicates the number of times to repeat the function evaluation. The + default is 1. + + Example : + ``{"NumberOfRepetition":3}`` diff --git a/doc/en/snippets/NumberOfSamplesForQuantiles.rst b/doc/en/snippets/NumberOfSamplesForQuantiles.rst new file mode 100644 index 0000000..a125f64 --- /dev/null +++ b/doc/en/snippets/NumberOfSamplesForQuantiles.rst @@ -0,0 +1,11 @@ +.. index:: single: NumberOfSamplesForQuantiles + +NumberOfSamplesForQuantiles + This key indicates the number of simulations to be done in order to estimate + the quantiles. This option is useful only if the supplementary calculation + "SimulationQuantiles" has been chosen. The default is 100, which is often + sufficient for correct estimation of common quantiles at 5%, 10%, 90% or + 95%. + + Example : + ``{"NumberOfSamplesForQuantiles":100}`` diff --git a/doc/en/snippets/OMA.rst b/doc/en/snippets/OMA.rst new file mode 100644 index 0000000..fa3713a --- /dev/null +++ b/doc/en/snippets/OMA.rst @@ -0,0 +1,8 @@ +.. index:: single: OMA + +OMA + *List of vectors*. Each element is a vector of difference between the + observation and the optimal state in the observation space. 
+
+  Example :
+  ``oma = ADD.get("OMA")[-1]``
diff --git a/doc/en/snippets/OMB.rst b/doc/en/snippets/OMB.rst
new file mode 100644
index 0000000..163946b
--- /dev/null
+++ b/doc/en/snippets/OMB.rst
@@ -0,0 +1,8 @@
+.. index:: single: OMB
+
+OMB
+  *List of vectors*. Each element is a vector of the difference between the
+  observation and the background state in the observation space.
+
+  Example :
+  ``omb = ADD.get("OMB")[-1]``
diff --git a/doc/en/snippets/Observation.rst b/doc/en/snippets/Observation.rst
new file mode 100644
index 0000000..dffb564
--- /dev/null
+++ b/doc/en/snippets/Observation.rst
@@ -0,0 +1,8 @@
+.. index:: single: Observation
+
+Observation
+  *Vector*. The variable indicates the observation vector used for data
+  assimilation or optimization, usually noted as :math:`\mathbf{y}^o`. Its
+  value is defined as a "*Vector*" or a "*VectorSerie*" type object. Its
+  availability in output is conditioned by the boolean "*Stored*" associated
+  with input.
diff --git a/doc/en/snippets/ObservationError.rst b/doc/en/snippets/ObservationError.rst
new file mode 100644
index 0000000..058aa4e
--- /dev/null
+++ b/doc/en/snippets/ObservationError.rst
@@ -0,0 +1,9 @@
+.. index:: single: ObservationError
+
+ObservationError
+  *Matrix*. The variable indicates the observation error covariance matrix,
+  usually noted as :math:`\mathbf{R}`. It is defined as a "*Matrix*" type
+  object, a "*ScalarSparseMatrix*" type object, or a "*DiagonalSparseMatrix*"
+  type object, as described in detail in the section
+  :ref:`section_ref_covariance_requirements`. Its availability in output is
+  conditioned by the boolean "*Stored*" associated with input.
diff --git a/doc/en/snippets/ObservationOperator.rst b/doc/en/snippets/ObservationOperator.rst
new file mode 100644
index 0000000..e34a592
--- /dev/null
+++ b/doc/en/snippets/ObservationOperator.rst
@@ -0,0 +1,12 @@
+.. index:: single: ObservationOperator
+
+ObservationOperator
+  *Operator*. The variable indicates the observation operator, usually noted
+  as :math:`H`, which transforms the input parameters :math:`\mathbf{x}` to
+  results :math:`\mathbf{y}` to be compared to observations
+  :math:`\mathbf{y}^o`. Its value is defined as a "*Function*" type object or
+  a "*Matrix*" type one. In the case of "*Function*" type, different
+  functional forms can be used, as described in the section
+  :ref:`section_ref_operator_requirements`. If there is some control
+  :math:`U` included in the observation, the operator has to be applied to a
+  pair :math:`(X,U)`.
diff --git a/doc/en/snippets/Observers.rst b/doc/en/snippets/Observers.rst
new file mode 100644
index 0000000..1e7bf19
--- /dev/null
+++ b/doc/en/snippets/Observers.rst
@@ -0,0 +1,9 @@
+.. index:: single: Observers
+
+Observers
+  *List of functions linked to variables*. This command allows to set internal
+  observers, that are functions linked with a particular variable, which will
+  be executed each time this variable is modified. It is a convenient way to
+  monitor variables of interest during the data assimilation or optimization
+  process, by printing or plotting them, etc. Common templates are provided to
+  help the user to start or to quickly make his case.
diff --git a/doc/en/snippets/ProjectedGradientTolerance.rst b/doc/en/snippets/ProjectedGradientTolerance.rst
new file mode 100644
index 0000000..4f078ea
--- /dev/null
+++ b/doc/en/snippets/ProjectedGradientTolerance.rst
@@ -0,0 +1,11 @@
+.. index:: single: ProjectedGradientTolerance
+
+ProjectedGradientTolerance
+  This key indicates a limit value, leading to the successful stop of the
+  iterative optimization process when all the components of the projected
+  gradient are under this limit. It is only used for constrained optimizers.
+  The default is -1, which is the internal default of each minimizer
+  (generally 1.e-5), and it is not recommended to change it.
+
+  Example :
+  ``{"ProjectedGradientTolerance":-1}``
diff --git a/doc/en/snippets/QualityCriterion.rst b/doc/en/snippets/QualityCriterion.rst
new file mode 100644
index 0000000..2d5261a
--- /dev/null
+++ b/doc/en/snippets/QualityCriterion.rst
@@ -0,0 +1,13 @@
+.. index:: single: QualityCriterion
+
+QualityCriterion
+  This key indicates the quality criterion, minimized to find the optimal
+  state estimate. The default is the usual data assimilation criterion named
+  "DA", the augmented weighted least squares. The possible criteria have to
+  be in the following list, where the equivalent names are indicated by the
+  sign "<=>": ["AugmentedWeightedLeastSquares"<=>"AWLS"<=>"DA",
+  "WeightedLeastSquares"<=>"WLS", "LeastSquares"<=>"LS"<=>"L2",
+  "AbsoluteValue"<=>"L1", "MaximumError"<=>"ME"].
+
+  Example :
+  ``{"QualityCriterion":"DA"}``
diff --git a/doc/en/snippets/Quantile.rst b/doc/en/snippets/Quantile.rst
new file mode 100644
index 0000000..574c754
--- /dev/null
+++ b/doc/en/snippets/Quantile.rst
@@ -0,0 +1,8 @@
+.. index:: single: Quantile
+
+Quantile
+  This key allows to define the real value of the desired quantile, between
+  0 and 1. The default is 0.5, corresponding to the median.
+
+  Example :
+  ``{"Quantile":0.5}``
diff --git a/doc/en/snippets/Quantiles.rst b/doc/en/snippets/Quantiles.rst
new file mode 100644
index 0000000..f9c41c7
--- /dev/null
+++ b/doc/en/snippets/Quantiles.rst
@@ -0,0 +1,11 @@
+.. index:: single: Quantiles
+
+Quantiles
+  This list indicates the quantile values, between 0 and 1, to be estimated
+  by simulation around the optimal state. The sampling uses a multivariate
+  Gaussian random sampling, directed by the *a posteriori* covariance matrix.
+  This option is useful only if the supplementary calculation
+  "SimulationQuantiles" has been chosen. The default is an empty list.
+
+  Example :
+  ``{"Quantiles":[0.1,0.9]}``
diff --git a/doc/en/snippets/Residu.rst b/doc/en/snippets/Residu.rst
new file mode 100644
index 0000000..feb8cd7
--- /dev/null
+++ b/doc/en/snippets/Residu.rst
@@ -0,0 +1,8 @@
+.. index:: single: Residu
+
+Residu
+  *List of values*. Each element is the value of the particular residue
+  checked during a checking algorithm, in the order of the tests.
+
+  Example :
+  ``r = ADD.get("Residu")[:]``
diff --git a/doc/en/snippets/SetDebug.rst b/doc/en/snippets/SetDebug.rst
new file mode 100644
index 0000000..36fd5a7
--- /dev/null
+++ b/doc/en/snippets/SetDebug.rst
@@ -0,0 +1,9 @@
+.. index:: single: SetDebug
+
+SetDebug
+  This key allows to activate, or not, the debug mode during the function or
+  operator evaluation. The default is "False", and the possible choices are
+  "True" or "False".
+
+  Example :
+  ``{"SetDebug":False}``
diff --git a/doc/en/snippets/SetSeed.rst b/doc/en/snippets/SetSeed.rst
new file mode 100644
index 0000000..16258cb
--- /dev/null
+++ b/doc/en/snippets/SetSeed.rst
@@ -0,0 +1,12 @@
+.. index:: single: SetSeed
+
+SetSeed
+  This key allows to give an integer in order to fix the seed of the random
+  generator used in the algorithm. A simple convenient value is for example
+  1000. By default, the seed is left uninitialized, and so uses the default
+  initialization of the computer, which then changes at each study. To ensure
+  the reproducibility of results involving random samples, it is strongly
+  advised to initialize the seed.
+
+  Example :
+  ``{"SetSeed":1000}``
diff --git a/doc/en/snippets/SigmaBck2.rst b/doc/en/snippets/SigmaBck2.rst
new file mode 100644
index 0000000..fce70d9
--- /dev/null
+++ b/doc/en/snippets/SigmaBck2.rst
@@ -0,0 +1,8 @@
+.. index:: single: SigmaBck2
+
+SigmaBck2
+  *List of values*. Each element is a value of the quality indicator
+  :math:`(\sigma^b)^2` of the background part.
+
+  Example :
+  ``sb2 = ADD.get("SigmaBck2")[-1]``
diff --git a/doc/en/snippets/SigmaObs2.rst b/doc/en/snippets/SigmaObs2.rst
new file mode 100644
index 0000000..f05499f
--- /dev/null
+++ b/doc/en/snippets/SigmaObs2.rst
@@ -0,0 +1,8 @@
+.. index:: single: SigmaObs2
+
+SigmaObs2
+  *List of values*. Each element is a value of the quality indicator
+  :math:`(\sigma^o)^2` of the observation part.
+
+  Example :
+  ``so2 = ADD.get("SigmaObs2")[-1]``
diff --git a/doc/en/snippets/SimulatedObservationAtBackground.rst b/doc/en/snippets/SimulatedObservationAtBackground.rst
new file mode 100644
index 0000000..7cbb7bc
--- /dev/null
+++ b/doc/en/snippets/SimulatedObservationAtBackground.rst
@@ -0,0 +1,10 @@
+.. index:: single: SimulatedObservationAtBackground
+.. index:: single: Dry
+
+SimulatedObservationAtBackground
+  *List of vectors*. Each element is a vector of observation simulated by the
+  observation operator from the background :math:`\mathbf{x}^b`. It is the
+  forecast from the background, and it is sometimes called "*Dry*".
+
+  Example :
+  ``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
diff --git a/doc/en/snippets/SimulatedObservationAtCurrentOptimum.rst b/doc/en/snippets/SimulatedObservationAtCurrentOptimum.rst
new file mode 100644
index 0000000..1e4d5a7
--- /dev/null
+++ b/doc/en/snippets/SimulatedObservationAtCurrentOptimum.rst
@@ -0,0 +1,9 @@
+.. index:: single: SimulatedObservationAtCurrentOptimum
+
+SimulatedObservationAtCurrentOptimum
+  *List of vectors*. Each element is a vector of observation simulated from
+  the optimal state obtained at the current step of the optimization
+  algorithm, that is, in the observation space.
+
+  Example :
+  ``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
diff --git a/doc/en/snippets/SimulatedObservationAtCurrentState.rst b/doc/en/snippets/SimulatedObservationAtCurrentState.rst
new file mode 100644
index 0000000..e0616fc
--- /dev/null
+++ b/doc/en/snippets/SimulatedObservationAtCurrentState.rst
@@ -0,0 +1,9 @@
+.. index:: single: SimulatedObservationAtCurrentState
+
+SimulatedObservationAtCurrentState
+  *List of vectors*. Each element is an observed vector simulated by the
+  observation operator from the current state, that is, in the observation
+  space.
+
+  Example :
+  ``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
diff --git a/doc/en/snippets/SimulatedObservationAtOptimum.rst b/doc/en/snippets/SimulatedObservationAtOptimum.rst
new file mode 100644
index 0000000..c10f5da
--- /dev/null
+++ b/doc/en/snippets/SimulatedObservationAtOptimum.rst
@@ -0,0 +1,11 @@
+.. index:: single: SimulatedObservationAtOptimum
+.. index:: single: Forecast
+
+SimulatedObservationAtOptimum
+  *List of vectors*. Each element is a vector of observation simulated by the
+  observation operator from the analysis or optimal state
+  :math:`\mathbf{x}^a`. It is the forecast from the analysis or the optimal
+  state, and it is sometimes called "*Forecast*".
+
+  Example :
+  ``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
diff --git a/doc/en/snippets/SimulationForQuantiles.rst b/doc/en/snippets/SimulationForQuantiles.rst
new file mode 100644
index 0000000..e446a4c
--- /dev/null
+++ b/doc/en/snippets/SimulationForQuantiles.rst
@@ -0,0 +1,14 @@
+.. index:: single: SimulationForQuantiles
+
+SimulationForQuantiles
+  This key indicates the type of simulation, linear (with the tangent
+  observation operator applied to perturbation increments around the optimal
+  state) or non-linear (with the standard observation operator applied to
+  perturbed states), one wants to perform for each perturbation. It mainly
+  changes the time of each elementary calculation, usually longer in
+  non-linear than in linear. This option is useful only if the supplementary
+  calculation "SimulationQuantiles" has been chosen. The default value is
+  "Linear", and the possible choices are "Linear" and "NonLinear".
+
+  Example :
+  ``{"SimulationForQuantiles":"Linear"}``
diff --git a/doc/en/snippets/SimulationQuantiles.rst b/doc/en/snippets/SimulationQuantiles.rst
new file mode 100644
index 0000000..188a641
--- /dev/null
+++ b/doc/en/snippets/SimulationQuantiles.rst
@@ -0,0 +1,9 @@
+.. index:: single: SimulationQuantiles
+
+SimulationQuantiles
+  *List of vectors*. Each element is a vector corresponding to the observed
+  state which realizes the required quantile, in the same order as the
+  quantile values required by the user.
+
+  Example :
+  ``sQuantiles = ADD.get("SimulationQuantiles")[:]``
diff --git a/doc/en/snippets/StateVariationTolerance.rst b/doc/en/snippets/StateVariationTolerance.rst
new file mode 100644
index 0000000..9e2d774
--- /dev/null
+++ b/doc/en/snippets/StateVariationTolerance.rst
@@ -0,0 +1,9 @@
+.. index:: single: StateVariationTolerance
+
+StateVariationTolerance
+  This key indicates the maximum relative variation of the state for stopping
+  by convergence on the state. The default is 1.e-4, and it is recommended to
+  adapt it to the needs of real problems.
+
+  Example :
+  ``{"StateVariationTolerance":1.e-4}``
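To make the quantile-related keys above concrete, here is a small self-contained Python sketch, independent of the ADAO implementation itself: it mimics what "SetSeed", "NumberOfSamplesForQuantiles", "Quantiles" and the "Linear" choice of "SimulationForQuantiles" describe. The state, covariance and operator values are illustrative assumptions, not ADAO defaults.

```python
# Sketch (not ADAO code) of quantile estimation by multivariate Gaussian
# sampling around an optimal state, as described by the keys above.
# xa, A and H below are illustrative values, not ADAO defaults.
import numpy

parameters = {
    "SetSeed": 1000,                      # fixes the random generator seed
    "NumberOfSamplesForQuantiles": 100,   # sampling size
    "Quantiles": [0.1, 0.9],              # quantile levels to estimate
}

numpy.random.seed(parameters["SetSeed"])  # reproducible sampling

xa = numpy.array([1.0, 2.0])              # optimal state (illustrative)
A = numpy.array([[0.04, 0.0],             # a posteriori covariance (illustrative)
                 [0.0, 0.09]])
H = numpy.array([[1.0, 1.0]])             # linear(ized) observation operator

# Multivariate Gaussian sampling around the optimal state, directed by the
# a posteriori covariance matrix, then propagation through H (this mimics
# the "Linear" choice of "SimulationForQuantiles").
states = numpy.random.multivariate_normal(
    xa, A, size=parameters["NumberOfSamplesForQuantiles"])
simulated = states @ H.T                  # simulated observations, one per sample

# Empirical quantiles of the simulated observations, one observed value
# per requested quantile level.
quantiles = numpy.quantile(simulated, parameters["Quantiles"], axis=0)
print(quantiles.shape)
```

Fixing the seed makes the sampled quantiles identical from one run to the next, which is exactly why "SetSeed" is advised whenever random samples are involved.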