In the early nineties we developed the derandomized approach to self-adaptation in evolution strategies (see e.g. the references below). At an advanced level of derandomization, the adaptation takes place without any independent stochastic variation of the strategy parameters: the selected points (more specifically, the selected mutation steps) in object parameter space are used directly for strategy parameter adjustment. We compared three different approaches to self-adaptation with respect to their capability to adapt arbitrary normal mutation distributions and to meet the demands identified there:
When the mutation operator was chosen carefully, all three approaches could reliably adapt arbitrary normal mutation distributions, independently of the chosen coordinate system. (This cannot be achieved with the conventional mutation operator.) The following table shows rather coarse performance data for the three approaches:
|serial progress||1 / n³||0.04 / n||0.1 / n|
Here n is the problem dimension and the following applies:
|population size||Roughly the minimal population size λ that allows a reliable adaptation within this strategy variant.|
|serial progress||Progress rate per generation divided by the population size (i.e., φ/λ) on the sphere model at a distance to the optimum of one. The serial progress φ/λ can be interpreted as the relative reduction of the distance to the optimum per function evaluation. The highly competitive (1+1)-ES with optimal step size achieves a serial progress rate of 0.2/n: it reduces the distance to the optimum by approximately 20% within n steps. Larger progress rates are preferable.|
|adaptation costs||Number of function evaluations needed to adapt from a spherical distribution to an elliptical distribution, or vice versa. The elliptical target distribution has n different axis lengths and a maximal axis length ratio of 1000. Small adaptation costs are preferable.|
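The reference point of 0.2/n for the (1+1)-ES can be checked empirically. The following minimal sketch (not taken from the cited papers; the near-optimal normalized step size σ ≈ 1.224·R/n is a standard textbook value) measures the serial progress on the sphere model:

```python
import numpy as np

# Sketch: a (1+1)-ES on the sphere model f(x) = ||x||^2, with the step
# size held at the theoretically near-optimal value sigma = 1.224 * R / n,
# where R is the current distance to the optimum.
rng = np.random.default_rng(1)
n, evals = 20, 1000
x = np.zeros(n)
x[0] = 1.0                            # start at distance R = 1
for _ in range(evals):
    sigma = 1.224 * np.linalg.norm(x) / n
    y = x + sigma * rng.standard_normal(n)
    if y @ y <= x @ x:                # plus-selection: keep the better point
        x = y

# Empirical serial progress: relative log-reduction of R per evaluation.
rate = -np.log(np.linalg.norm(x)) / evals
print(f"serial progress ~ {rate:.4f}  (theory: 0.2/n = {0.2 / n:.4f})")
```

For moderate n the measured rate comes out slightly below the asymptotic 0.2/n, which is the expected finite-dimension effect.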
We can conclude from the data that, for problem dimensions greater than three, only the derandomized approach is of practical relevance. Only the CMA-ES reliably adapts arbitrary normal mutation distributions (independently of the chosen coordinate system) with reasonable efficiency. Moreover, it exhibits very competitive serial progress rates.
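What "arbitrary normal mutation distribution" means in practice can be illustrated with a short sketch (my construction, not code from the cited papers): a full covariance matrix C represents a rotated ellipsoid, sampled via its matrix square root, whereas the conventional axis-parallel operator can only realize diag(C).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

# An elliptical distribution of the kind used for the adaptation-cost
# measurements: n different axis lengths, maximal axis ratio 1000,
# rotated out of the coordinate axes by a random orthogonal matrix Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
axes = np.logspace(0, 3, n)              # axis lengths 1, ..., 1000
C = Q @ np.diag(axes**2) @ Q.T           # full covariance matrix

# A general normal mutation step is step = C^(1/2) z with z ~ N(0, I);
# here via Cholesky factor L with C = L L^T. (CMA-ES adapts C itself;
# this sketch only samples from a fixed C.)
L = np.linalg.cholesky(C)
steps = rng.standard_normal((100_000, n)) @ L.T

# The empirical covariance of the sampled steps reproduces C, including
# the off-diagonal correlations that an axis-parallel operator cannot
# represent in this rotated coordinate system.
emp = np.cov(steps, rowvar=False)
```

Because C is defined through an arbitrary rotation Q, any adaptation mechanism restricted to the coordinate axes is bound to fail here; this is precisely the coordinate-system dependence criticized above.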
Hansen, N. and Ostermeier, A. (2001). Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation, 9(2), pp. 159-195.
Ostermeier, A., Gawelczyk, A. and Hansen, N. (1994). A Derandomized Approach to Self-Adaptation of Evolution Strategies. Evolutionary Computation, 2(4), pp. 369-380.
Ostermeier, A. and Hansen, N. (1999). An Evolution Strategy with Coordinate System Invariant Adaptation of Arbitrary Normal Mutation Distributions Within the Concept of Mutative Strategy Parameter Control. In W. Banzhaf, J. Daida, A. Eiben, M. H. Garzon, V. Honavar, M. Jakiela and R. E. Smith (Eds.), GECCO-99: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 902-909. San Francisco: Morgan Kaufmann Publishers.
Hansen, N. (2000). Invariance, Self-Adaptation and Correlated Mutations in Evolution Strategies. In Proceedings of the Sixth International Conference on Parallel Problem Solving from Nature (PPSN VI), pp. 355-364.