Improved Stochastic Ranking Evolution Strategy optimizer.
Improved Stochastic Ranking Evolution Strategy (ISRES) is an algorithm for nonlinearly constrained global optimization. It includes heuristics for escaping local optima, although convergence to a global optimum is not guaranteed. The evolution strategy is based on a combination of a mutation rule and differential variation. For problems without nonlinear constraints, fitness ranking uses the objective function directly; when nonlinear constraints are present, the stochastic ranking proposed by Runarsson and Yao is employed. This method supports arbitrary nonlinear inequality and equality constraints, in addition to bound constraints.
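As a sketch of the stochastic ranking idea (illustrative names, not the library's implementation): a bubble-sort-like pass over the population in which adjacent individuals are compared by objective value whenever both are feasible, or otherwise by objective value with probability `p_f` and by constraint violation the rest of the time.

```python
import random

def stochastic_rank(objectives, penalties, p_f=0.45, rng=None):
    """Rank population indices in the style of Runarsson & Yao's
    stochastic ranking: compare adjacent pairs by objective when both
    are feasible (penalty == 0) or with probability p_f; otherwise
    compare by constraint-violation penalty."""
    rng = rng or random.Random()
    n = len(objectives)
    order = list(range(n))
    for _ in range(n):
        swapped = False
        for i in range(n - 1):
            a, b = order[i], order[i + 1]
            both_feasible = penalties[a] == 0 and penalties[b] == 0
            if both_feasible or rng.random() < p_f:
                out_of_order = objectives[a] > objectives[b]
            else:
                out_of_order = penalties[a] > penalties[b]
            if out_of_order:
                order[i], order[i + 1] = b, a
                swapped = True
        if not swapped:  # early exit once a sweep makes no swaps
            break
    return order
```

With all individuals feasible the ranking reduces to a plain sort by objective; with `p_f = 0` and infeasible individuals it reduces to a sort by penalty.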
NLopt global optimizer, derivative-free. For further detail, please refer to http://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#isres-improved-stochastic-ranking-evolution-strategy
max_evals (int) – Maximum allowed number of function evaluations.
MissingOptionalLibraryError – if the NLopt library is not installed.
Returns the bounds support level
Returns the gradient support level
Returns the initial point support level
Returns whether bounds are ignored
Returns whether bounds are required
Returns whether bounds are supported
Returns whether the gradient is ignored
Returns whether the gradient is required
Returns whether the gradient is supported
Returns whether the initial point is ignored
Returns whether the initial point is required
Returns whether the initial point is supported
Returns the NLopt optimizer type
Returns the support level dictionary
static gradient_num_diff(x_center, f, epsilon, max_evals_grouped=None)
Compute the gradient around the point x_center by numeric differentiation, evaluating the function calls in parallel batches.
- x_center (ndarray) – point around which the gradient is computed.
- f (func) – the function whose gradient is to be computed.
- epsilon (float) – the epsilon used in the numeric differentiation.
- max_evals_grouped (int) – maximum number of evaluations grouped together, defaults to 1 (i.e. no batching).
The computed gradient.
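A minimal sketch of forward-difference gradient estimation in the spirit of this method, in plain NumPy and without the batching machinery (the function name here is illustrative):

```python
import numpy as np

def gradient_num_diff_sketch(x_center, f, epsilon):
    """Estimate the gradient of f at x_center by forward differences:
    grad_i ~ (f(x + epsilon * e_i) - f(x)) / epsilon."""
    x_center = np.asarray(x_center, dtype=float)
    f0 = f(x_center)  # baseline evaluation, reused for every component
    grad = np.zeros_like(x_center)
    for i in range(x_center.size):
        x_step = x_center.copy()
        x_step[i] += epsilon  # perturb one coordinate at a time
        grad[i] = (f(x_step) - f0) / epsilon
    return grad
```

For example, for f(x) = x·x at the point (1, 2), this yields approximately (2, 4). The batched variant would group the perturbed points and evaluate several of them per call to f, which is what max_evals_grouped controls.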
minimize(fun, x0, jac=None, bounds=None)
Minimize the scalar function.
- fun (Callable[[POINT], float]) – The scalar function to minimize.
- x0 (POINT) – The initial point for the minimization.
- jac (Callable[[POINT], POINT] | None) – The gradient of the scalar function.
- bounds (list[tuple[float, float]] | None) – Bounds for the variables of fun. This argument might be ignored if the optimizer does not support bounds.
The result of the optimization, containing e.g. the optimal point and value as attributes.
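To illustrate the call signature, here is a toy optimizer sharing the same minimize interface (a hypothetical stand-in, not ISRES itself): seeded random search within the given bounds, which like ISRES is derivative-free.

```python
import random

def minimize_random_search(fun, x0, jac=None, bounds=None,
                           max_evals=500, seed=7):
    """Toy minimizer with the minimize(fun, x0, jac, bounds) signature:
    random search inside bounds. jac is accepted but unused, as for a
    derivative-free method."""
    rng = random.Random(seed)
    best_x, best_val = list(x0), fun(x0)
    for _ in range(max_evals):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        val = fun(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Usage: minimize the 2-D sphere function over [-1, 1]^2.
x_opt, f_opt = minimize_random_search(
    lambda x: x[0] ** 2 + x[1] ** 2,
    x0=[1.0, 1.0],
    bounds=[(-1.0, 1.0), (-1.0, 1.0)],
)
```

The real optimizer returns a result object rather than a tuple, but the shape of the call — objective, initial point, optional gradient, optional bounds — is the same.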
Print algorithm-specific options.
Sets the maximum number of function evaluations to group together.
Sets or updates values in the options dictionary.
The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.
kwargs (dict) – options, given as name=value.
static wrap_function(function, args)
Wrap the function to implicitly inject the args at each call of the function.
- function (func) – the target function.
- args (tuple) – the args to be injected.
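A plausible sketch of such a wrapper (the actual implementation may differ in details):

```python
def wrap_function(function, args):
    """Return a wrapper that calls `function` with the fixed `args`
    appended after the wrapper's own arguments."""
    def wrapped(*wrapper_args):
        return function(*wrapper_args, *args)
    return wrapped

# Usage: fix scale=10 so the wrapped function only takes x.
scaled = wrap_function(lambda x, scale: scale * x, (10,))
```

This is essentially a partial application from the right, useful for adapting a multi-argument objective to an optimizer that expects a function of the parameters alone.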