Module cma

Module cma implements the CMA-ES (Covariance Matrix Adaptation Evolution Strategy).

CMA-ES is a stochastic optimizer for robust non-linear non-convex derivative- and function-value-free numerical optimization.

This implementation can be used with Python versions >= 2.6, namely 2.6, 2.7, 3.3, 3.4.

CMA-ES searches for a minimizer (a solution x in R^n) of an objective function f (cost function), such that f(x) is minimal. Regarding f, only a passably reliable ranking of the candidate solutions in each iteration is necessary. Neither the function values themselves nor the gradient of f need to be available or matter (as is also the case in the downhill simplex Nelder-Mead algorithm). Some termination criteria, however, depend on actual f-values.

Two interfaces are provided:

  • the function fmin(objective_function, x0, sigma0, ...), a functional interface that runs a complete minimization in a single call, and
  • the class CMAEvolutionStrategy, whose ask-and-tell interface leaves control over the iteration loop with the user.

Used packages: numpy is required; matplotlib.pyplot is used by the plotting functions plot, savefig and show and is optional.

Install

The file cma.py only needs to be visible in the python path (e.g. in the current working directory).

The preferred way of (system-wide) installation is calling

pip install cma

from the command line.

The cma.py file can also be installed from the system shell terminal command line by:

python cma.py --install

which solely calls the setup function from the standard distutils.core package for installation. If the setup.py file has been provided along with cma.py, the standard call is

python setup.py cma

Both calls need to see cma.py in the current working directory and might need to be preceded with sudo.

To upgrade the currently installed version from the Python Package Index, and also for first time installation, type in the system shell:

pip install --upgrade cma

Testing

From the system shell:

python cma.py --test

or from the Python shell ipython:

run cma.py --test

or from any Python shell:

import cma
cma.main('--test')

runs doctest.testmod(cma) showing only exceptions (and not the tests that fail due to small differences in the output) and should run without complaints in roughly 20 to 100 seconds.

Example

From a python shell:

import cma
help(cma)  # "this" help message, use cma? in ipython
help(cma.fmin)
help(cma.CMAEvolutionStrategy)
help(cma.CMAOptions)
cma.CMAOptions('tol')  # display 'tolerance' termination options
cma.CMAOptions('verb') # display verbosity options
res = cma.fmin(cma.Fcts.tablet, 15 * [1], 1)
res[0]  # best evaluated solution
res[5]  # mean solution, presumably better with noise
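
The results of such a run can be inspected graphically (this requires matplotlib); a minimal sketch using the plotting helpers documented further below:

cma.plot()                  # plot data written by the default CMADataLogger
cma.show()                  # show the figure, if it is not shown already
cma.savefig('myfirstrun')   # savefig from matplotlib.pyplot, saves the figure as png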

See Also: fmin(), CMAOptions, CMAEvolutionStrategy

Author: Nikolaus Hansen, 2008-2015

License: BSD 3-Clause, see below.

Version: 1.1.06 $Revision: 4129 $ $Date: 2015-01-23 20:13:51 +0100 (Fri, 23 Jan 2015) $

Classes
  basestring
str(object='') -> string
  MetaParameters
meta parameters are either "modifiable constants" or refer to options from CMAOptions or are arguments to fmin or to the NoiseHandler class constructor.
  _BlancClass
blanc container class for having a collection of attributes, that might/should at some point become a more tailored class
  DerivedDictBase
for conveniently adding "features" to a dictionary. The actual dictionary is in self.data. Copy-paste and modify setitem, getitem, and delitem, if necessary.
  SolutionDict
dictionary with computation of a hash key.
  CMASolutionDict
a hack to get most code examples running
  BestSolution
container to keep track of the best solution seen
  BoundaryHandlerBase
hacked base class
  BoundNone
  BoundTransform
Handles boundary by a smooth, piecewise linear and quadratic transformation into the feasible domain.
  BoundPenalty
Computes the boundary penalty. Must be updated each iteration, using the update method.
  BoxConstraintsTransformationBase
Implements a transformation into boundaries and is used for boundary handling:
  _BoxConstraintsTransformationTemplate
Implements a transformation into boundaries and is used for boundary handling:
  BoxConstraintsLinQuadTransformation
implements a bijective, monotonous transformation between [lb - al, ub + au] and [lb, ub] which is the identity (and therefore linear) in [lb + al, ub - au] (typically about 90% of the interval) and quadratic in [lb - 3*al, lb + al] and in [ub - au, ub + 3*au]. The transformation is periodically expanded beyond the limits (somewhat resembling the shape sin(x-pi/2)) with a period of 2 * (ub - lb + al + au).
  GenoPheno
Genotype-phenotype transformation.
  OOOptimizer
"abstract" base class for an Object Oriented Optimizer interface.
  CMAAdaptSigmaBase
step-size adaptation base class, implementing hsig functionality via an isotropic evolution path.
  CMAAdaptSigmaNone
  CMAAdaptSigmaDistanceProportional
artificial setting of sigma for test purposes, e.g. to simulate optimal progress rates.
  CMAAdaptSigmaCSA
  CMAAdaptSigmaMedianImprovement
Compares median fitness against a fitness percentile of the previous iteration, see Ait ElHara et al, GECCO 2013.
  CMAAdaptSigmaTPA
two point adaptation for step-size sigma. Relies on a specific sampling of the first two offspring, whose objective function value ranks are used to decide on the step-size change.
  CMAEvolutionStrategy
CMA-ES stochastic optimizer class with ask-and-tell interface.
  CMAOptions
CMAOptions() returns a dictionary with the available options and their default values for class CMAEvolutionStrategy.
  _CMAStopDict
keep and update a termination condition dictionary, which is "usually" empty and returned by CMAEvolutionStrategy.stop(). The class methods entirely depend on CMAEvolutionStrategy class attributes.
  _CMAParameters
strategy parameters like population size and learning rates.
  BaseDataLogger
"abstract" base class for a data logger that can be used with an OOOptimizer
  CMADataLogger
data logger for class CMAEvolutionStrategy. The logger is identified by its name prefix and (over-)writes or reads according data files. Therefore, the logger must be considered as global variable with unpredictable side effects, if two loggers with the same name and on the same working folder are used at the same time.
  NoiseHandler
Noise handling according to [Hansen et al 2009, A Method for Handling Uncertainty in Evolutionary Optimization...]
  Sections
plot sections through an objective function.
  _Error
generic exception of cma module
  ElapsedTime
using time.clock with overflow handling to measure CPU time.
  Misc
  Mh
static convenience math helper functions; if the function name is preceded with an "a", a numpy array is returned.
  ConstRandnShift
ConstRandnShift()(x) adds a fixed realization of stddev * randn(len(x)) to the vector x.
  Rotation
Rotation class that implements an orthogonal linear transformation, one for each dimension.
  FFWrapper
A collection of (yet experimental) classes to implement fitness transformations and wrappers. Aliased to FF2 below.
  FF2
alias for FFWrapper: a collection of (yet experimental) classes to implement fitness transformations and wrappers.
  FitnessFunctions
versatile container for test objective functions
Functions

xrange(stop)
range(start, stop[, step]) -> list of integers

raw_input(prompt=...)
Equivalent to eval(raw_input(prompt)).

savefig(*args, **kwargs)
Save the current figure.

closefig(*args)
Close a figure window.

show()

rglen(ar)
shortcut for the iterator xrange(len(ar))

is_feasible(x, f)
default to check feasibility, see also cma_default_options

_print_warning(msg, method_name=None, class_name=None, iteration=None, verbose=None)

unitdoctest()
is used to describe test cases and might in future become helpful as an experimental tutorial as well. The main testing feature at the moment is by doctest with cma._test() or conveniently by python cma.py --test. With the --verbose option added, the results will always slightly differ and many "failed" test cases might be reported.

fmin(objective_function, x0, sigma0, options=None, args=(), gradf=None, restarts=0, restart_from_best=u'False', incpopsize=2, eval_initial_x=False, noise_handler=None, noise_change_sigma_exponent=1, noise_kappa_exponent=0, bipop=False)
functional interface to the stochastic optimizer CMA-ES for non-convex function minimization.

plot(name=None, fig=None, abscissa=1, iteridx=None, plot_mean=False, foffset=1e-19, x_opt=None, fontsize=9)
plot data from files written by a CMADataLogger, the call cma.plot(name, **argsdict) is a shortcut for cma.CMADataLogger(name).plot(**argsdict)

disp(name=None, idx=None)
displays selected data from (files written by) the class CMADataLogger.

_fileToMatrix(file_name)
rudimentary method to read in data from a file

pprint(to_be_printed)
nicely formatted print

pp(to_be_printed)
nicely formatted print

felli(x)
unbound test function, needed to test multiprocessor

_test(module=None)

process_doctest_output(stream=None)

main(argv=None)
to install and/or test from the command line use:
Variables
  __author__ = u'Nikolaus Hansen'
  use_archives = True
speed up for very large population size. use_archives prevents the need for an inverse gp-transformation, relies on collections module, not sure what happens if set to False.
  meta_parameters = MetaParameters()
  global_verbosity = 1
  _experimental = False
  new_injections = True
  cma_default_options = {u'AdaptSigma': u'CMAAdaptSigmaCSA # or...
  last_figure_number = 324
  rotate = Rotation()
  fcts = FitnessFunctions()
  Fcts = FitnessFunctions()
  FF = FitnessFunctions()
  __package__ = None
Function Details

xrange(stop)

 

range(start, stop[, step]) -> list of integers

Return a list containing an arithmetic progression of integers. range(i, j) returns [i, i+1, i+2, ..., j-1]; start (!) defaults to 0. When step is given, it specifies the increment (or decrement). For example, range(4) returns [0, 1, 2, 3]. The end point is omitted! These are exactly the valid indices for a list of 4 elements.

Returns: list of integers

savefig(*args, **kwargs)

 
Save the current figure.

Call signature::

  savefig(fname, dpi=None, facecolor='w', edgecolor='w',
          orientation='portrait', papertype=None, format=None,
          transparent=False, bbox_inches=None, pad_inches=0.1,
          frameon=None)

The output formats available depend on the backend being used.

Arguments:

  *fname*:
    A string containing a path to a filename, or a Python
    file-like object, or possibly some backend-dependent object
    such as :class:`~matplotlib.backends.backend_pdf.PdfPages`.

    If *format* is *None* and *fname* is a string, the output
    format is deduced from the extension of the filename. If
    the filename has no extension, the value of the rc parameter
    ``savefig.format`` is used.

    If *fname* is not a string, remember to specify *format* to
    ensure that the correct backend is used.

Keyword arguments:

  *dpi*: [ *None* | ``scalar > 0`` ]
    The resolution in dots per inch.  If *None* it will default to
    the value ``savefig.dpi`` in the matplotlibrc file.

  *facecolor*, *edgecolor*:
    the colors of the figure rectangle

  *orientation*: [ 'landscape' | 'portrait' ]
    not supported on all backends; currently only on postscript output

  *papertype*:
    One of 'letter', 'legal', 'executive', 'ledger', 'a0' through
    'a10', 'b0' through 'b10'. Only supported for postscript
    output.

  *format*:
    One of the file extensions supported by the active
    backend.  Most backends support png, pdf, ps, eps and svg.

  *transparent*:
    If *True*, the axes patches will all be transparent; the
    figure patch will also be transparent unless facecolor
    and/or edgecolor are specified via kwargs.
    This is useful, for example, for displaying
    a plot on top of a colored background on a web page.  The
    transparency of these patches will be restored to their
    original values upon exit of this function.

  *frameon*:
    If *True*, the figure patch will be colored, if *False*, the
    figure background will be transparent.  If not provided, the
    rcParam 'savefig.frameon' will be used.

  *bbox_inches*:
    Bbox in inches. Only the given portion of the figure is
    saved. If 'tight', try to figure out the tight bbox of
    the figure.

  *pad_inches*:
    Amount of padding around the figure when bbox_inches is
    'tight'.

  *bbox_extra_artists*:
    A list of extra artists that will be considered when the
    tight bbox is calculated.

closefig(*args)

 

Close a figure window.

``close()`` by itself closes the current figure

``close(h)`` where *h* is a :class:`Figure` instance, closes that figure

``close(num)`` closes figure number *num*

``close(name)`` where *name* is a string, closes figure with that label

``close('all')`` closes all the figure windows

unitdoctest()


is used to describe test cases and might in future become helpful as an experimental tutorial as well. The main testing feature at the moment is by doctest with cma._test() or conveniently by python cma.py --test. With the --verbose option added, the results will always slightly differ and many "failed" test cases might be reported.

A simple first overall test:
>>> import cma
>>> res = cma.fmin(cma.fcts.elli, 3*[1], 1,
...                {'CMA_diagonal':2, 'seed':1, 'verb_time':0})
(3_w,7)-CMA-ES (mu_w=2.3,w_1=58%) in dimension 3 (seed=1)
   Covariance matrix is diagonal for 2 iterations (1/ccov=7.0)
Iterat #Fevals   function value     axis ratio  sigma   minstd maxstd min:sec
    1       7 1.453161670768570e+04 1.2e+00 1.08e+00  1e+00  1e+00
    2      14 3.281197961927601e+04 1.3e+00 1.22e+00  1e+00  2e+00
    3      21 1.082851071704020e+04 1.3e+00 1.24e+00  1e+00  2e+00
  100     700 8.544042012075362e+00 1.4e+02 3.18e-01  1e-03  2e-01
  200    1400 5.691152415221861e-12 1.0e+03 3.82e-05  1e-09  1e-06
  220    1540 3.890107746209078e-15 9.5e+02 4.56e-06  8e-11  7e-08
termination on tolfun : 1e-11
final/bestever f-value = 3.89010774621e-15 2.52273602735e-15
mean solution:  [ -4.63614606e-08  -3.42761465e-10   1.59957987e-11]
std deviation: [  6.96066282e-08   2.28704425e-09   7.63875911e-11]

Test on the Rosenbrock function with 3 restarts. The first trial only finds the local optimum, which happens in about 20% of the cases.

>>> import cma
>>> res = cma.fmin(cma.fcts.rosen, 4*[-1], 1,
...                options={'ftarget':1e-6, 'verb_time':0,
...                    'verb_disp':500, 'seed':3},
...                restarts=3)
(4_w,8)-CMA-ES (mu_w=2.6,w_1=52%) in dimension 4 (seed=3)
Iterat #Fevals   function value     axis ratio  sigma   minstd maxstd min:sec
    1       8 4.875315645656848e+01 1.0e+00 8.43e-01  8e-01  8e-01
    2      16 1.662319948123120e+02 1.1e+00 7.67e-01  7e-01  8e-01
    3      24 6.747063604799602e+01 1.2e+00 7.08e-01  6e-01  7e-01
  184    1472 3.701428610430019e+00 4.3e+01 9.41e-07  3e-08  5e-08
termination on tolfun : 1e-11
final/bestever f-value = 3.70142861043 3.70142861043
mean solution:  [-0.77565922  0.61309336  0.38206284  0.14597202]
std deviation: [  2.54211502e-08   3.88803698e-08   4.74481641e-08   3.64398108e-08]
(8_w,16)-CMA-ES (mu_w=4.8,w_1=32%) in dimension 4 (seed=4)
Iterat #Fevals   function value     axis ratio  sigma   minstd maxstd min:sec
    1    1489 2.011376859371495e+02 1.0e+00 8.90e-01  8e-01  9e-01
    2    1505 4.157106647905128e+01 1.1e+00 8.02e-01  7e-01  7e-01
    3    1521 3.548184889359060e+01 1.1e+00 1.02e+00  8e-01  1e+00
  111    3249 6.831867555502181e-07 5.1e+01 2.62e-02  2e-04  2e-03
termination on ftarget : 1e-06
final/bestever f-value = 6.8318675555e-07 1.18576673231e-07
mean solution:  [ 0.99997004  0.99993938  0.99984868  0.99969505]
std deviation: [ 0.00018973  0.00038006  0.00076479  0.00151402]
>>> assert res[1] <= 1e-6

Notice the different termination conditions. Termination on the target function value ftarget prevents further restarts.

Test of scaling_of_variables option

>>> import cma
>>> opts = cma.CMAOptions()
>>> opts['seed'] = 456
>>> opts['verb_disp'] = 0
>>> opts['CMA_active'] = 1
>>> # rescaling of third variable: for searching in  roughly
>>> #   x0 plus/minus 1e3*sigma0 (instead of plus/minus sigma0)
>>> opts['scaling_of_variables'] = [1, 1, 1e3, 1]
>>> res = cma.fmin(cma.fcts.rosen, 4 * [0.1], 0.1, opts)
termination on tolfun : 1e-11
final/bestever f-value = 2.68096173031e-14 1.09714829146e-14
mean solution:  [ 1.00000001  1.00000002  1.00000004  1.00000007]
std deviation: [  3.00466854e-08   5.88400826e-08   1.18482371e-07   2.34837383e-07]

The printed std deviations reflect the actual value in the parameters of the function (not the one in the internal representation which can be different).

Test of CMA_stds scaling option.

>>> import cma
>>> opts = cma.CMAOptions()
>>> s = 5 * [1]
>>> s[0] = 1e3
>>> opts.set('CMA_stds', s)
>>> opts.set('verb_disp', 0)
>>> res = cma.fmin(cma.fcts.cigar, 5 * [0.1], 0.1, opts)
>>> assert res[1] < 1800

See Also: cma.main(), cma._test()

fmin(objective_function, x0, sigma0, options=None, args=(), gradf=None, restarts=0, restart_from_best=u'False', incpopsize=2, eval_initial_x=False, noise_handler=None, noise_change_sigma_exponent=1, noise_kappa_exponent=0, bipop=False)


functional interface to the stochastic optimizer CMA-ES for non-convex function minimization.

Calling Sequences

fmin(objective_function, x0, sigma0)
minimizes objective_function starting at x0 and with standard deviation sigma0 (step-size)
fmin(objective_function, x0, sigma0, options={'ftarget': 1e-5})
minimizes objective_function up to target function value 1e-5, which is typically useful for benchmarking.
fmin(objective_function, x0, sigma0, args=('f',))
minimizes objective_function called with an additional argument 'f'.
fmin(objective_function, x0, sigma0, options={'ftarget':1e-5, 'popsize':40})
uses additional options ftarget and popsize
fmin(objective_function, esobj, None, options={'maxfevals': 1e5})
uses the CMAEvolutionStrategy object instance esobj to optimize objective_function, similar to esobj.optimize().

Arguments

objective_function
function to be minimized. Called as objective_function(x, *args). x is a one-dimensional numpy.ndarray. objective_function can return numpy.NaN, which is interpreted as outright rejection of solution x and invokes an immediate resampling and (re-)evaluation of a new solution not counting as function evaluation.
x0
list or numpy.ndarray, initial guess of minimum solution before the application of the geno-phenotype transformation according to the transformation option. It can also be a string holding a Python expression that is evaluated to yield the initial guess - this is important in case restarts are performed so that they start from different places. Otherwise x0 can also be a cma.CMAEvolutionStrategy object instance, in that case sigma0 can be None.
sigma0
scalar, initial standard deviation in each coordinate. sigma0 should be about 1/4th of the search domain width (where the optimum is to be expected). The variables in objective_function should be scaled such that they presumably have similar sensitivity. See also option scaling_of_variables.
options
a dictionary with additional options passed to the constructor of class CMAEvolutionStrategy, see cma.CMAOptions() for a list of available options.
args=()
arguments to be used to call the objective_function
gradf
gradient of f, where len(gradf(x, *args)) == len(x). gradf is called once in each iteration if gradf is not None.
restarts=0
number of restarts with increasing population size, see also parameter incpopsize, implementing the IPOP-CMA-ES restart strategy, see also parameter bipop; to restart from different points (recommended), pass x0 as a string.
restart_from_best=False
which point to restart from
incpopsize=2
multiplier for increasing the population size popsize before each restart
eval_initial_x=None
evaluate initial solution, for None only with elitist option
noise_handler=None
a NoiseHandler instance or None, a simple usecase is cma.fmin(f, 6 * [1], 1, noise_handler=cma.NoiseHandler(6)) see help(cma.NoiseHandler).
noise_change_sigma_exponent=1
exponent for sigma increment for additional noise treatment
noise_evaluations_as_kappa
instead of applying reevaluations, the "number of evaluations" is (ab)used as scaling factor kappa (experimental).
bipop
if True, run as BIPOP-CMA-ES; BIPOP is a special restart strategy switching between two population sizings - small (like the default CMA, but with more focused search) and large (progressively increased as in IPOP). This makes the algorithm perform well both on functions with many regularly or irregularly arranged local optima (the latter by frequently restarting with small populations). For the bipop parameter to actually take effect, also select non-zero number of (IPOP) restarts; the recommended setting is restarts<=9 and x0 passed as a string. Note that small-population restarts do not count into the total restart count.

Optional Arguments

All values in the options dictionary are evaluated if they are of type str, besides verb_filenameprefix, see class CMAOptions for details. The full list is available via cma.CMAOptions().

>>> import cma
>>> cma.CMAOptions()

Subsets of options can be displayed, for example like cma.CMAOptions('tol'), or cma.CMAOptions('bound'), see also class CMAOptions.
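
As a minimal illustration of the string-evaluation rule above, the following two calls set the same target function value (the elli test function is used only as an example):

cma.fmin(cma.fcts.elli, 10 * [1], 1, {'ftarget': 1e-8})
cma.fmin(cma.fcts.elli, 10 * [1], 1, {'ftarget': '1e-8'})  # the string is evaluated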

Return

Return the list provided by CMAEvolutionStrategy.result() appended with termination conditions, an OOOptimizer and a BaseDataLogger:

res = es.result() + (es.stop(), es, logger)
where
  • res[0] (xopt) -- best evaluated solution
  • res[1] (fopt) -- respective function value
  • res[2] (evalsopt) -- respective number of function evaluations
  • res[3] (evals) -- number of overall conducted objective function evaluations
  • res[4] (iterations) -- number of overall conducted iterations
  • res[5] (xmean) -- mean of the final sample distribution
  • res[6] (stds) -- effective stds of the final sample distribution
  • res[-3] (stop) -- termination condition(s) in a dictionary
  • res[-2] (cmaes) -- class CMAEvolutionStrategy instance
  • res[-1] (logger) -- class CMADataLogger instance
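
A hedged sketch of how this return value is typically unpacked (the test function and option value are only illustrative):

res = cma.fmin(cma.fcts.elli, 4 * [1], 1, {'verb_disp': 0})
xopt, fopt = res[0], res[1]     # best evaluated solution and its function value
stop_dict = res[-3]             # termination condition(s)
es, logger = res[-2], res[-1]   # CMAEvolutionStrategy and CMADataLogger instances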

Details

This function is an interface to the class CMAEvolutionStrategy. The latter class should be used when full control over the iteration loop of the optimizer is desired.
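
A minimal sketch of such an explicit iteration loop with the ask-and-tell interface (the test function and the option value are only illustrative):

es = cma.CMAEvolutionStrategy(8 * [0.1], 0.5, {'verb_disp': 0})
while not es.stop():
    X = es.ask()                                # sample a new population of candidate solutions
    es.tell(X, [cma.fcts.rosen(x) for x in X])  # pass the corresponding function values back
res = es.result()                               # analogous to the first entries of fmin's return value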

Examples

The following example calls fmin optimizing the Rosenbrock function in 10-D with initial solution 0.1 and initial step-size 0.5. The options are specified for the usage with the doctest module.

>>> import cma
>>> # cma.CMAOptions()  # returns all possible options
>>> options = {'CMA_diagonal':100, 'seed':1234, 'verb_time':0}
>>>
>>> res = cma.fmin(cma.fcts.rosen, [0.1] * 10, 0.5, options)
(5_w,10)-CMA-ES (mu_w=3.2,w_1=45%) in dimension 10 (seed=1234)
   Covariance matrix is diagonal for 10 iterations (1/ccov=29.0)
Iterat #Fevals   function value     axis ratio  sigma   minstd maxstd min:sec
    1      10 1.264232686260072e+02 1.1e+00 4.40e-01  4e-01  4e-01
    2      20 1.023929748193649e+02 1.1e+00 4.00e-01  4e-01  4e-01
    3      30 1.214724267489674e+02 1.2e+00 3.70e-01  3e-01  4e-01
  100    1000 6.366683525319511e+00 6.2e+00 2.49e-02  9e-03  3e-02
  200    2000 3.347312410388666e+00 1.2e+01 4.52e-02  8e-03  4e-02
  300    3000 1.027509686232270e+00 1.3e+01 2.85e-02  5e-03  2e-02
  400    4000 1.279649321170636e-01 2.3e+01 3.53e-02  3e-03  3e-02
  500    5000 4.302636076186532e-04 4.6e+01 4.78e-03  3e-04  5e-03
  600    6000 6.943669235595049e-11 5.1e+01 5.41e-06  1e-07  4e-06
  650    6500 5.557961334063003e-14 5.4e+01 1.88e-07  4e-09  1e-07
termination on tolfun : 1e-11
final/bestever f-value = 5.55796133406e-14 2.62435631419e-14
mean solution:  [ 1.          1.00000001  1.          1.
    1.          1.00000001  1.00000002  1.00000003 ...]
std deviation: [ 3.9193387e-09  3.7792732e-09  4.0062285e-09  4.6605925e-09
    5.4966188e-09   7.4377745e-09   1.3797207e-08   2.6020765e-08 ...]
>>>
>>> print('best solutions fitness = %f' % (res[1]))
best solutions fitness = 2.62435631419e-14
>>> assert res[1] < 1e-12

The above call is pretty much equivalent to the slightly more verbose call:

es = cma.CMAEvolutionStrategy([0.1] * 10, 0.5,
            options=options).optimize(cma.fcts.rosen)

The following example calls fmin optimizing the Rastrigin function in 3-D with a random initial solution in [-1,1], initial step-size 0.5 and the BIPOP restart strategy (that progressively increases population). The options are specified for the usage with the doctest module.

>>> import cma
>>> # cma.CMAOptions()  # returns all possible options
>>> options = {'seed':12345, 'verb_time':0, 'ftarget': 1e-8}
>>>
>>> res = cma.fmin(cma.fcts.rastrigin, '2. * np.random.rand(3) - 1', 0.5,
...                options, restarts=9, bipop=True)
(3_w,7)-aCMA-ES (mu_w=2.3,w_1=58%) in dimension 3 (seed=12345)
Iterat #Fevals   function value    axis ratio  sigma  minstd maxstd min:sec
    1       7 1.633489455763566e+01 1.0e+00 4.35e-01  4e-01  4e-01
    2      14 9.762462950258016e+00 1.2e+00 4.12e-01  4e-01  4e-01
    3      21 2.461107851413725e+01 1.4e+00 3.78e-01  3e-01  4e-01
  100     700 9.949590571272680e-01 1.7e+00 5.07e-05  3e-07  5e-07
  123     861 9.949590570932969e-01 1.3e+00 3.93e-06  9e-09  1e-08
termination on tolfun=1e-11
final/bestever f-value = 9.949591e-01 9.949591e-01
mean solution: [  9.94958638e-01  -7.19265205e-10   2.09294450e-10]
std deviation: [  8.71497860e-09   8.58994807e-09   9.85585654e-09]
[...]
(4_w,9)-aCMA-ES (mu_w=2.8,w_1=49%) in dimension 3 (seed=12349)
Iterat #Fevals   function value    axis ratio  sigma  minstd maxstd min:sec
    1  5342.0 2.114883315350800e+01 1.0e+00 3.42e-02  3e-02  4e-02
    2  5351.0 1.810102940125502e+01 1.4e+00 3.79e-02  3e-02  4e-02
    3  5360.0 1.340222457448063e+01 1.4e+00 4.58e-02  4e-02  6e-02
   50  5783.0 8.631491965616078e-09 1.6e+00 2.01e-04  8e-06  1e-05
termination on ftarget=1e-08 after 4 restarts
final/bestever f-value = 8.316963e-09 8.316963e-09
mean solution: [ -3.10652459e-06   2.77935436e-06  -4.95444519e-06]
std deviation: [  1.02825265e-05   8.08348144e-06   8.47256408e-06]

In either case, the method:

cma.plot();

(based on matplotlib.pyplot) produces a plot of the run and, if necessary:

cma.show()

shows the plot in a window. Finally:

cma.savefig('myfirstrun')  # savefig from matplotlib.pyplot

will save the figure in a png.

We can use the gradient like this:

>>> import cma
>>> res = cma.fmin(cma.fcts.rosen, np.zeros(10), 0.1,
...             options = {'ftarget':1e-8,},
...             gradf=cma.fcts.grad_rosen,
...         )
>>> assert cma.fcts.rosen(res[0]) < 1e-8
>>> assert res[2] < 3600  # 1% are > 3300
>>> assert res[3] < 3600  # 1% are > 3300

plot(name=None, fig=None, abscissa=1, iteridx=None, plot_mean=False, foffset=1e-19, x_opt=None, fontsize=9)


plot data from files written by a CMADataLogger, the call cma.plot(name, **argsdict) is a shortcut for cma.CMADataLogger(name).plot(**argsdict)

Arguments

name
name of the logger, filename prefix, None evaluates to the default 'outcmaes'
fig
filename or figure number, or both as a tuple (any order)
abscissa
0==plot versus iteration count, 1==plot versus function evaluation number
iteridx
iteration indices to plot

Return None

Examples

cma.plot();  # the optimization might be still
             # running in a different shell
cma.savefig('fig325.png')
cma.closefig()

cdl = cma.CMADataLogger().downsampling().plot()
# in case the file sizes are large

Details

Data from codes in other languages (C, Java, Matlab, Scilab) have the same format and can be plotted just the same.

disp(name=None, idx=None)


displays selected data from (files written by) the class CMADataLogger.

The call cma.disp(name, idx) is a shortcut for cma.CMADataLogger(name).disp(idx).

Arguments

name
name of the logger, filename prefix, None evaluates to the default 'outcmaes'
idx
indices corresponding to rows in the data file; by default the first five, then every 100-th, and the last 10 rows. Too large index values are removed.

Examples

import cma, numpy
# assume some data are available from previous runs
cma.disp(None,numpy.r_[0,-1])  # first and last
cma.disp(None,numpy.r_[0:1e9:100,-1]) # every 100-th and last
cma.disp(idx=numpy.r_[0,-10:0]) # first and ten last
cma.disp(idx=numpy.r_[0:1e9:1e3,-10:0])

main(argv=None)


to install and/or test from the command line use:

python cma.py [options | func dim sig0 [optkey optval][optkey optval]...]

with options being

--test (or -t) to run the doctest, --test -v to get (much) verbosity.

install to install cma.py (uses setup from distutils.core).

--doc for more info.

Or start Python or (even better) ipython and:

import cma
cma.main('--test')
help(cma)
help(cma.fmin)
res = cma.fmin(cma.fcts.rosen, 10 * [0], 1)
cma.plot()

Examples

Testing with the local python distribution from a command line in a folder where cma.py can be found:

python cma.py --test

And a single run on the Rosenbrock function:

python cma.py rosen 10 1  # dimension initial_sigma
python cma.py plot

In the python shell:

import cma
cma.main('--test')

Variables Details

cma_default_options

Value:
{u'AdaptSigma': u'CMAAdaptSigmaCSA  # or any other CMAAdaptSigmaBase class e.g. CMAAdaptSigmaTPA',
 u'CMA_active': u'True  # negative update, conducted after the original update',
 u'CMA_cmean': u'1  # learning rate for the mean value',
 u'CMA_const_trace': u'False  # normalize trace, value CMA_const_trace=2 normalizes sum log eigenvalues to zero',
 u'CMA_dampsvec_fac': u'np.Inf  # tentative and subject to changes, 0.
 ...