
Optimizer

class qiskit.algorithms.optimizers.Optimizer

GitHub

Bases: ABC

Base class for optimization algorithms.

Initialize the optimization algorithm: set the support levels for _gradient_support_level, _bounds_support_level, and _initial_point_support_level, and create an empty options dictionary.


Attributes

bounds_support_level

Returns the bounds support level.

gradient_support_level

Returns the gradient support level.

initial_point_support_level

Returns the initial point support level.

is_bounds_ignored

Returns True if bounds are ignored.

is_bounds_required

Returns True if bounds are required.

is_bounds_supported

Returns True if bounds are supported.

is_gradient_ignored

Returns True if the gradient is ignored.

is_gradient_required

Returns True if the gradient is required.

is_gradient_supported

Returns True if the gradient is supported.

is_initial_point_ignored

Returns True if the initial point is ignored.

is_initial_point_required

Returns True if an initial point is required.

is_initial_point_supported

Returns True if an initial point is supported.

setting

Returns a formatted string describing the optimizer and its current settings.

settings

The optimizer settings in a dictionary format.

The settings can, for instance, be used for JSON serialization (provided all settings are serializable, which by default does not hold for callables, for example), so that the optimizer object can be reconstructed as

settings = optimizer.settings
# JSON serialize and send to another server
optimizer = OptimizerClass(**settings)
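
As a concrete sketch (assuming SPSA, one of the optimizers shipped in qiskit.algorithms.optimizers, and an illustrative option value):

from qiskit.algorithms.optimizers import SPSA

optimizer = SPSA(maxiter=100)
settings = optimizer.settings          # e.g. {'maxiter': 100, ...}
# JSON serialize, send elsewhere, then reconstruct:
reconstructed = SPSA(**settings)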

Methods

get_support_level

abstract get_support_level()

Returns the support level dictionary.
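
Concrete subclasses return a dictionary mapping "gradient", "bounds", and "initial_point" to OptimizerSupportLevel values. A minimal sketch, assuming a hypothetical subclass:

from qiskit.algorithms.optimizers import (
    Optimizer,
    OptimizerResult,
    OptimizerSupportLevel,
)

class MyOptimizer(Optimizer):  # hypothetical subclass for illustration
    def get_support_level(self):
        return {
            "gradient": OptimizerSupportLevel.ignored,
            "bounds": OptimizerSupportLevel.ignored,
            "initial_point": OptimizerSupportLevel.required,
        }

    def minimize(self, fun, x0, jac=None, bounds=None):
        # Trivial placeholder: evaluate the initial point once.
        result = OptimizerResult()
        result.x = x0
        result.fun = fun(x0)
        result.nfev = 1
        return result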

gradient_num_diff

static gradient_num_diff(x_center, f, epsilon, max_evals_grouped=None)

Compute the gradient around the point x_center by numeric differentiation, evaluating the perturbed points in parallel batches.

Parameters

  • x_center (ndarray) – the point around which the gradient is computed.
  • f (callable) – the function whose gradient is to be computed.
  • epsilon (float) – the epsilon used in the numeric differentiation.
  • max_evals_grouped (int) – the maximum number of evaluations grouped into one batch; defaults to 1 (i.e. no batching).

Returns

the computed gradient

Return type

numpy.ndarray
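
For example (a minimal sketch; the quadratic objective f below is an illustrative assumption):

import numpy as np
from qiskit.algorithms.optimizers import Optimizer

def f(x):  # f(x) = x0**2 + x1**2, whose analytic gradient is 2*x
    return float(np.sum(x ** 2))

x_center = np.array([1.0, -2.0])
grad = Optimizer.gradient_num_diff(x_center, f, epsilon=1e-6)
print(grad)  # approximately [ 2. -4.]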

minimize

abstract minimize(fun, x0, jac=None, bounds=None)

Minimize the scalar function.

Parameters

  • fun (Callable) – The scalar function to minimize.
  • x0 (POINT) – The initial point for the minimization.
  • jac (Callable | None) – The gradient of the scalar function fun.
  • bounds (list[tuple[float, float]] | None) – Bounds for the variables of fun. This argument might be ignored if the optimizer does not support bounds.

Returns

The result of the optimization, containing, for example, the optimal point as the attribute x.

Return type

OptimizerResult
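
Since Optimizer is abstract, minimize is called on a concrete subclass. A minimal sketch using COBYLA (shipped in qiskit.algorithms.optimizers) with an illustrative objective:

import numpy as np
from qiskit.algorithms.optimizers import COBYLA

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

optimizer = COBYLA(maxiter=200)
result = optimizer.minimize(fun=objective, x0=np.array([0.0, 0.0]))
print(result.x)    # approximately [ 1. -2.]
print(result.fun)  # approximately 0.0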

print_options

print_options()

Print algorithm-specific options.

set_max_evals_grouped

set_max_evals_grouped(limit)

Set the maximum number of function evaluations that may be grouped into a single batch.
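
For example (assuming optimizer is a previously constructed concrete optimizer):

# Allow up to 4 objective evaluations to be grouped into one batch,
# e.g. so the caller can evaluate them in parallel.
optimizer.set_max_evals_grouped(4)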

set_options

set_options(**kwargs)

Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

Parameters

kwargs (dict) – options, given as name=value.
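
For example (the option names maxiter and tol are illustrative; which options exist depends on the concrete optimizer):

# optimizer is a previously constructed concrete optimizer
optimizer.set_options(maxiter=500, tol=1e-6)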

wrap_function

static wrap_function(function, args)

Wrap a function so that the given args are implicitly appended to its arguments whenever the wrapped function is called.

Parameters

  • function (func) – the target function.
  • args (tuple) – the args to be injected.

Returns

wrapper

Return type

function_wrapper
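
For example (a minimal sketch; the loss function with extra fixed parameters is an illustrative assumption):

import numpy as np
from qiskit.algorithms.optimizers import Optimizer

def loss(x, scale, offset):  # extra args to be injected: scale, offset
    return scale * float(np.sum(x ** 2)) + offset

wrapped = Optimizer.wrap_function(loss, (2.0, 1.0))
print(wrapped(np.array([1.0, 1.0])))  # loss(x, 2.0, 1.0) = 2.0*2.0 + 1.0 = 5.0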
