+which is named the "*3D-VAR*" function. It can be seen as an extended form of
+*least squares minimization*, obtained by adding a regularizing term using
+:math:`\mathbf{x}-\mathbf{x}^b`, and by weighting the differences through the
+two covariance matrices :math:`\mathbf{B}` and :math:`\mathbf{R}`. The
+minimization of the :math:`J` function leads to the *best* state estimation.
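+
+As an illustration, this :math:`J` function can be sketched in a few lines of
+Python. This is a minimal sketch only, assuming a linear observation operator
+given as a matrix ``H``, observations ``yo``, and the conventional factor of
+one half; the names are illustrative and not those of the ADAO module::
+
+    import numpy as np
+
+    def J(x, xb, B, yo, H, R):
+        """3D-VAR goal function: background term plus observation term."""
+        dxb = x - xb       # departure from the background state xb
+        dyo = yo - H @ x   # departure from the observations yo
+        return 0.5 * dxb @ np.linalg.solve(B, dxb) \
+             + 0.5 * dyo @ np.linalg.solve(R, dyo)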
+
+The state estimation possibilities can be extended, by using optimization
+methods and their properties more explicitly, in two ways.
+
+First, classical optimization methods involve various gradient-based
+minimization procedures. They are extremely efficient at finding a single
+local minimum. But they require the goal function :math:`J` to be sufficiently
+regular and differentiable, and they are not able to capture global properties
+of the minimization problem, for example: the global minimum, sets of
+equivalent solutions due to over-parametrization, multiple local minima, etc.
+**A way to extend estimation possibilities is then to use a whole range of
+optimizers, allowing global minimization, various robust search properties,
+etc**. There are many minimization methods, such as stochastic ones,
+evolutionary ones, heuristics and meta-heuristics for real-valued problems,
+etc. They can handle partially irregular or noisy functions :math:`J`, can
+characterize local minima, etc. Their main drawbacks are a greater numerical
+cost to find state estimates, and no guarantee of convergence in finite time.
+Here, we only mention the following methods, as they are available in the ADAO
+module: *Quantile regression* [WikipediaQR]_ and *Particle swarm optimization*
+[WikipediaPSO]_. A minimal sketch of the latter is given below.
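+
+To give a concrete flavor of such gradient-free global methods, here is a
+minimal particle swarm sketch. It is illustrative only, not the ADAO
+implementation, and the function and parameter names are assumptions made for
+the example::
+
+    import numpy as np
+
+    def pso(J, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
+        """Minimal particle swarm minimization of a goal function J.
+        bounds: array of shape (dim, 2) of lower/upper limits per component."""
+        rng = np.random.default_rng(0)
+        lo, hi = bounds[:, 0], bounds[:, 1]
+        x = rng.uniform(lo, hi, (n_particles, len(lo)))  # particle positions
+        v = np.zeros_like(x)                             # particle velocities
+        pbest, pcost = x.copy(), np.array([J(p) for p in x])
+        gbest = pbest[np.argmin(pcost)].copy()           # best swarm position
+        for _ in range(n_iter):
+            r1, r2 = rng.random((2,) + x.shape)
+            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
+            x = np.clip(x + v, lo, hi)
+            cost = np.array([J(p) for p in x])
+            better = cost < pcost
+            pbest[better], pcost[better] = x[better], cost[better]
+            gbest = pbest[np.argmin(pcost)].copy()
+        return gbest
+
+    # A multimodal test function (Rastrigin), hard for local gradient search
+    J = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)
+    print(pso(J, np.array([[-5.12, 5.12]] * 2)))  # near the global minimum at 0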
+
+Secondly, optimization methods usually try to minimize quadratic measures of
+errors, as the natural properties of such goal functions are well suited for
+classical gradient-based optimization. But other measures of errors can be
+better adapted to real physical simulation problems. Then, **another way to
+extend estimation possibilities is to use other measures of errors to be
+reduced**. For example, we can cite the *absolute error value*, the *maximum
+error value*, etc. These error measures are not differentiable, but some
+optimization methods can deal with them: heuristics and meta-heuristics for
+real-valued problems, etc. As previously, the main drawbacks remain a greater
+numerical cost to find state estimates, and no guarantee of convergence in
+finite time. Here, we also mention the following method, as it is available in
+the ADAO module: *Particle swarm optimization* [WikipediaPSO]_.
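+
+As a small sketch of this second idea, such non-quadratic error measures can
+be minimized with a derivative-free method, for example the Nelder-Mead
+simplex from SciPy. The observation operator and values below are toy
+assumptions made for the example::
+
+    import numpy as np
+    from scipy.optimize import minimize
+
+    H = lambda x: np.array([x[0] + x[1], x[0] * x[1]])  # toy observation operator
+    yo = np.array([3.0, 2.0])                           # toy observations
+
+    # Non-quadratic error measures: absolute (L1) and maximum (L-infinity)
+    J_abs = lambda x: np.sum(np.abs(yo - H(x)))
+    J_max = lambda x: np.max(np.abs(yo - H(x)))
+
+    # Nelder-Mead needs no gradient, so it tolerates these non-smooth functions
+    for J in (J_abs, J_max):
+        print(minimize(J, x0=np.array([0.5, 0.5]), method="Nelder-Mead").x)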
+
+The reader interested in the subject of optimization can look at [WikipediaMO]_
+as a general entry point.