matrix of the optimal state, coming from the :math:`\mathbf{A}*` covariance
matrix.
- Example :
+ Example:
``C = ADD.get("APosterioriCorrelations")[-1]``
*List of matrices*. Each element is an *a posteriori* error covariance
matrix :math:`\mathbf{A}*` of the optimal state.
- Example :
+ Example:
``A = ADD.get("APosterioriCovariance")[-1]``
errors diagonal matrix of the optimal state, coming from the
:math:`\mathbf{A}*` covariance matrix.
- Example :
+ Example:
``S = ADD.get("APosterioriStandardDeviations")[-1]``
errors diagonal matrix of the optimal state, coming from the
:math:`\mathbf{A}*` covariance matrix.
- Example :
+ Example:
``V = ADD.get("APosterioriVariances")[-1]``
used for the directional derivative around the nominal checking point. The
default is 1, which means no scaling.
- Example :
+ Example:
``{"AmplitudeOfInitialDirection":0.5}``
:math:`\mathbf{x}*` in optimization or an analysis :math:`\mathbf{x}^a` in
data assimilation.
- Example :
+ Example:
``Xa = ADD.get("Analysis")[-1]``
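As a purely illustrative sketch, assuming a user post-processing script in which
the ``ADD`` object of the examples above is available, the analysis and the
*a posteriori* covariance can be combined as follows; the variable names are
hypothetical::

    import numpy

    # Optimal state and a posteriori error covariance, retrieved as in the
    # examples of this section
    Xa = numpy.ravel( ADD.get("Analysis")[-1] )
    A  = numpy.array( ADD.get("APosterioriCovariance")[-1] )

    # Standard deviations recomputed from the covariance diagonal; they should
    # correspond to the stored "APosterioriStandardDeviations" values
    sigma_a = numpy.sqrt( numpy.diag( A ) )

    print("Optimal state       :", Xa)
    print("Standard deviations :", sigma_a)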
*List of vectors*. Each element is a vector of difference between the
background and the optimal state.
- Example :
+ Example:
``bma = ADD.get("BMA")[-1]``
lower/upper bounds for each variable, with extreme values whenever there
is no bound (``None`` is not allowed when there is no bound).
- Example :
+ Example:
``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,1.e99],[-1.e99,1.e99]]}``
there is no bound. The bounds can always be specified, but they are taken
into account only by the constrained optimizers.
- Example :
+ Example:
``{"Bounds":[[2.,5.],[1.e-2,10.],[-30.,None],[None,None]]}``
constraints. The only one available is "EstimateProjection", which
projects the current state estimate onto the bounds constraints.
- Example :
+ Example:
``{"ConstrainedBy":"EstimateProjection"}``
this tolerance at the last step. The default is 1.e-7, and it is
recommended to adapt it to the needs of real problems.
- Example :
+ Example:
``{"CostDecrementTolerance":1.e-7}``
this tolerance at the last step. The default is 1.e-6, and it is
recommended to adapt it to the needs of real problems.
- Example : ``{"CostDecrementTolerance":1.e-6}``
+ Example:
+ ``{"CostDecrementTolerance":1.e-6}``
*List of values*. Each element is a value of the chosen error function
:math:`J`.
- Example :
+ Example:
``J = ADD.get("CostFunctionJ")[:]``
At each step, the value corresponds to the optimal state found from the
beginning.
- Example :
+ Example:
``JACO = ADD.get("CostFunctionJAtCurrentOptimum")[:]``
that is of the background difference part. If this part does not exist in the
error function, its value is zero.
- Example :
+ Example:
``Jb = ADD.get("CostFunctionJb")[:]``
beginning. If this part does not exist in the error function, its value is
zero.
- Example :
+ Example:
``JbACO = ADD.get("CostFunctionJbAtCurrentOptimum")[:]``
*List of values*. Each element is a value of the error function :math:`J^o`,
that is of the observation difference part.
- Example :
+ Example:
``Jo = ADD.get("CostFunctionJo")[:]``
that is of the observation difference part. At each step, the value
corresponds to the optimal state found from the beginning.
- Example :
+ Example:
``JoACO = ADD.get("CostFunctionJoAtCurrentOptimum")[:]``
*List of vectors*. Each element is the optimal state obtained at the current
step of the optimization algorithm. It is not necessarily the last state.
- Example :
+ Example:
``Xo = ADD.get("CurrentOptimum")[:]``
*List of vectors*. Each element is a usual state vector used during the
iterative algorithm procedure.
- Example :
+ Example:
``Xs = ADD.get("CurrentState")[:]``
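As a hedged convergence-tracking sketch built only from the accessors shown in
this section, the stored cost function values and visited states can be
inspected as follows; the computed index is illustrative and should be
consistent with the stored "IndexOfOptimum" values::

    import numpy

    # Histories of the cost function and of the evaluated states
    J  = ADD.get("CostFunctionJ")[:]
    Xs = ADD.get("CurrentState")[:]

    # Step at which the smallest cost value was found
    iBest = int( numpy.argmin( J ) )

    print("Number of evaluated states :", len(Xs))
    print("Best cost value            :", J[iBest], "found at step", iBest)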
calculate the residue of the scalar product formula with a fixed increment
multiplied from 1.e0 to 1.e-8.
- Example :
+ Example:
``{"EpsilonMinimumExponent":-12}``
either state-estimation, with a value of "State", or parameter-estimation,
with a value of "Parameters". The default choice is "State".
- Example :
+ Example:
``{"EstimationOf":"Parameters"}``
limit. It is only used for non-constrained optimizers. The default is
1.e-5 and it is not recommended to change it.
- Example :
+ Example:
``{"GradientNormTolerance":1.e-5}``
obtained at the current step of the optimization algorithm. It is not
necessarily the number of the last iteration.
- Example :
+ Example:
``i = ADD.get("IndexOfOptimum")[-1]``
this direction defaults to a random perturbation around zero of the same
vector size as the checking point.
- Example :
+ Example:
``{"InitialDirection":[0.1,0.1,100.,3}``
the difference between the optimal and the background, and in dynamic the
evolution increment.
- Example :
+ Example:
``d = ADD.get("Innovation")[-1]``
InnovationAtCurrentState
*List of vectors*. Each element is an innovation vector at the current state.
- Example :
+ Example:
``ds = ADD.get("InnovationAtCurrentState")[-1]``
*List of values*. Each element is a value of the Mahalanobis quality
indicator.
- Example :
+ Example:
``m = ADD.get("MahalanobisConsistency")[-1]``
optimizers, the effective number of function evaluations can be slightly
different from the limit due to algorithm internal control requirements.
- Example :
+ Example:
``{"MaximumNumberOfFunctionEvaluations":50}``
slightly different from the limit due to algorithm internal control
requirements.
- Example :
+ Example:
``{"MaximumNumberOfSteps":100}``
optimization. The default is 50, which is an arbitrary limit. It is then
recommended to adapt this parameter to the needs of real problems.
- Example :
+ Example:
``{"MaximumNumberOfSteps":50}``
it is the outer loop limit that is controlled. If precise control of this
number of cost function evaluations is required, choose another minimizer.
- Example :
+ Example:
``{"Minimizer":"BOBYQA"}``
The default is 100, and it is recommended to adapt it to the needs of real
problems.
- Example :
+ Example:
``{"NumberOfMembers":100}``
This key indicates the number of digits of precision for floating point
printed output. The default is 5, with a minimum of 0.
- Example :
+ Example:
``{"NumberOfPrintedDigits":5}``
This key indicates the number of times to repeat the function evaluation. The
default is 1.
- Example :
+ Example:
``{"NumberOfRepetition":3}``
sufficient for correct estimation of common quantiles at 5%, 10%, 90% or
95%.
- Example :
+ Example:
``{"NumberOfSamplesForQuantiles":100}``
*List of vectors*. Each element is a vector of difference between the
observation and the optimal state in the observation space.
- Example :
+ Example:
``oma = ADD.get("OMA")[-1]``
*List of vectors*. Each element is a vector of difference between the
observation and the background state in the observation space.
- Example :
+ Example:
``omb = ADD.get("OMB")[-1]``
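As a hedged illustration, the stored difference vectors "Innovation", "OMB" and
"OMA" can be summarized with simple root mean square values in a
post-processing script; the small helper below is hypothetical::

    import numpy

    def rms( v ):
        "Root mean square of a difference vector (illustrative helper)"
        v = numpy.ravel( v )
        return float( numpy.sqrt( numpy.mean( v**2 ) ) )

    d   = ADD.get("Innovation")[-1]
    omb = ADD.get("OMB")[-1]
    oma = ADD.get("OMA")[-1]

    print("RMS of the innovation               :", rms(d))
    print("RMS of observation minus background :", rms(omb))
    print("RMS of observation minus analysis   :", rms(oma))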
-1, which is the internal default of each minimizer (generally 1.e-5), and it
is not recommended to change it.
- Example :
+ Example:
``{"ProjectedGradientTolerance":-1}``
"WeightedLeastSquares"<=>"WLS", "LeastSquares"<=>"LS"<=>"L2",
"AbsoluteValue"<=>"L1", "MaximumError"<=>"ME"].
- Example :
+ Example:
``{"QualityCriterion":"DA"}``
This key allows one to define the real value of the desired quantile, between
0 and 1. The default is 0.5, corresponding to the median.
- Example :
+ Example:
``{"Quantile":0.5}``
This option is useful only if the supplementary calculation
"SimulationQuantiles" has been chosen. The default is an empty list.
- Example :
+ Example:
``{"Quantiles":[0.1,0.9]}``
*List of values*. Each element is the value of the particular residue
verified during a checking algorithm, in the order of the tests.
- Example :
+ Example:
``r = ADD.get("Residu")[:]``
function or operator evaluation. The default is "False", and the choices are
"True" or "False".
- Example :
+ Example:
``{"SetDebug":False}``
the reproducibility of results involving random samples, it is strongly
advised to initialize the seed.
- Example :
+ Example:
``{"SetSeed":1000}``
*List of values*. Each element is a value of the quality indicator
:math:`(\sigma^b)^2` of the background part.
- Example :
+ Example:
``sb2 = ADD.get("SigmaBck")[-1]``
*List of values*. Each element is a value of the quality indicator
:math:`(\sigma^o)^2` of the observation part.
- Example :
+ Example:
``so2 = ADD.get("SigmaObs")[-1]``
observation operator from the background :math:`\mathbf{x}^b`. It is the
forecast from the background, and it is sometimes called "*Dry*".
- Example :
+ Example:
``hxb = ADD.get("SimulatedObservationAtBackground")[-1]``
the optimal state obtained at the current step of the optimization algorithm,
that is, in the observation space.
- Example :
+ Example:
``hxo = ADD.get("SimulatedObservationAtCurrentOptimum")[-1]``
observation operator from the current state, that is, in the observation
space.
- Example :
+ Example:
``hxs = ADD.get("SimulatedObservationAtCurrentState")[-1]``
It is the forecast from the analysis or the optimal state, and it is
sometimes called "*Forecast*".
- Example :
+ Example:
``hxa = ADD.get("SimulatedObservationAtOptimum")[-1]``
"SimulationQuantiles" has been chosen. The default value is "Linear", and
the possible choices are "Linear" and "NonLinear".
- Example :
+ Example:
``{"SimulationForQuantiles":"Linear"}``
state which realizes the required quantile, in the same order as the
quantile values required by the user.
- Example :
+ Example:
``sQuantiles = ADD.get("SimulationQuantiles")[:]``
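The quantile-related keys documented above work together: "Quantiles" selects
the levels, "NumberOfSamplesForQuantiles" the sample size,
"SimulationForQuantiles" the operator mode, and the result is read back through
"SimulationQuantiles". A minimal sketch, assuming the corresponding
supplementary calculation has been requested when setting up the case (not
shown here)::

    # Quantile-related options grouped for illustration, reusing the example
    # values given in this section
    Parameters = {
        "Quantiles"                   : [0.1, 0.9],
        "NumberOfSamplesForQuantiles" : 100,
        "SimulationForQuantiles"      : "Linear",
    }

    # After execution: one simulated state per requested quantile, in the same
    # order as the "Quantiles" list
    sQuantiles = ADD.get("SimulationQuantiles")[:]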
by convergence on the state. The default is 1.e-4, and it is recommended to
adapt it to the needs of real problems.
- Example :
+ Example:
``{"StateVariationTolerance":1.e-4}``