This page is from an old version of Qiskit SDK and does not exist in the latest version. We recommend you migrate to the latest version. See the release notes for more information.

qiskit.aqua.components.optimizers.SPSA

class SPSA(maxiter=1000, save_steps=1, last_avg=1, c0=0.6283185307179586, c1=0.1, c2=0.602, c3=0.101, c4=0, skip_calibration=False, max_trials=None)


Simultaneous Perturbation Stochastic Approximation (SPSA) optimizer.

SPSA is an algorithmic method for optimizing systems with multiple unknown parameters. It is well suited to large-scale population models, adaptive modeling, and simulation optimization.

See also

Many examples are presented on the SPSA website, https://www.jhuapl.edu/SPSA.

SPSA is a descent method capable of finding global minima, a property it shares with methods such as simulated annealing. Its main feature is the gradient approximation, which requires only two measurements of the objective function, regardless of the dimension of the optimization problem.
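
A minimal sketch of that two-evaluation gradient estimate, assuming the standard Bernoulli (+/-1) simultaneous perturbation; the names f, theta, and c are illustrative, and this is not the class's internal code:

    import numpy as np

    def spsa_gradient_estimate(f, theta, c, rng=None):
        """Two-evaluation SPSA gradient estimate of f at theta.

        The same pair of function values feeds every component, so the
        cost stays at two evaluations regardless of len(theta).
        """
        rng = rng or np.random.default_rng()
        # Random +/-1 (Bernoulli) simultaneous perturbation direction.
        delta = rng.choice([-1.0, 1.0], size=len(theta))
        f_plus = f(theta + c * delta)    # first measurement
        f_minus = f(theta - c * delta)   # second measurement
        # Component-wise estimate; every entry shares the same numerator.
        return (f_plus - f_minus) / (2.0 * c * delta)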

Note

SPSA can be used in the presence of noise, which makes it well suited to finding a minimum under the measurement uncertainty inherent in quantum computation. If you are executing a variational algorithm on a Quantum ASseMbly Language (QASM) simulator or a real device, SPSA is the recommended choice among the optimizers provided here.

The optimization process includes a calibration phase, which requires additional function evaluations.

For further details, please refer to https://arxiv.org/pdf/1704.05018v2.pdf#section*.11 (Supplementary information Section IV.)

Parameters

  • maxiter (int) – Maximum number of iterations to perform.
  • save_steps (int) – Save intermediate info every save_steps step. It has a min. value of 1.
  • last_avg (int) – Averaged parameters over the last_avg iterations. If last_avg = 1, only the last iteration is considered. It has a min. value of 1.
  • c0 (float) – The initial a; the step size used to update the parameters.
  • c1 (float) – The initial c; the step size used to approximate the gradient.
  • c2 (float) – The alpha from the paper; used to adjust a (c0) at each iteration.
  • c3 (float) – The gamma from the paper; used to adjust c (c1) at each iteration.
  • c4 (float) – A parameter that also controls a; see the sketch after this list for how c0–c4 typically combine.
  • skip_calibration (bool) – Skip calibration and use provided c(s) as is.
  • max_trials (Optional[int]) – Deprecated, use maxiter.
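
In the SPSA literature these constants typically combine into power-law gain sequences; the sketch below shows that standard schedule and is an assumption about the exact formulas, not a copy of this class's internals:

    def gain_sequences(k, c0=0.6283185307179586, c1=0.1, c2=0.602, c3=0.101, c4=0):
        """Standard SPSA power-law gains at iteration k (0-based).

        a_k scales the parameter update; c_k scales the perturbation
        used in the gradient approximation.
        """
        a_k = c0 / (k + 1 + c4) ** c2  # update step size, damped by c4
        c_k = c1 / (k + 1) ** c3       # perturbation (gradient) step size
        return a_k, c_k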

__init__

__init__(maxiter=1000, save_steps=1, last_avg=1, c0=0.6283185307179586, c1=0.1, c2=0.602, c3=0.101, c4=0, skip_calibration=False, max_trials=None)

Parameters

  • maxiter (int) – Maximum number of iterations to perform.
  • save_steps (int) – Save intermediate info every save_steps step. It has a min. value of 1.
  • last_avg (int) – Averaged parameters over the last_avg iterations. If last_avg = 1, only the last iteration is considered. It has a min. value of 1.
  • c0 (float) – The initial a; the step size used to update the parameters.
  • c1 (float) – The initial c; the step size used to approximate the gradient.
  • c2 (float) – The alpha from the paper; used to adjust a (c0) at each iteration.
  • c3 (float) – The gamma from the paper; used to adjust c (c1) at each iteration.
  • c4 (float) – A parameter that also controls a.
  • skip_calibration (bool) – Skip calibration and use provided c(s) as is.
  • max_trials (Optional[int]) – Deprecated, use maxiter.
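
A minimal construction example (the argument values are illustrative):

    from qiskit.aqua.components.optimizers import SPSA

    # Default calibration constants; the calibration phase will run.
    spsa = SPSA(maxiter=500, last_avg=4)

    # If good values for c0..c4 are already known, skip calibration.
    tuned = SPSA(c0=0.5, c1=0.05, skip_calibration=True)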

Methods

__init__([maxiter, save_steps, last_avg, …]) – Initialize the optimizer.
get_support_level() – Return support level dictionary.
gradient_num_diff(x_center, f, epsilon[, …]) – Compute the gradient numerically, in parallel, around the point x_center.
optimize(num_vars, objective_function[, …]) – Perform optimization.
print_options() – Print algorithm-specific options.
set_max_evals_grouped(limit) – Set max evals grouped.
set_options(**kwargs) – Sets or updates values in the options dictionary.
wrap_function(function, args) – Wrap the function to implicitly inject the args at the call of the function.

Attributes

bounds_support_level – Returns the bounds support level.
gradient_support_level – Returns the gradient support level.
initial_point_support_level – Returns the initial point support level.
is_bounds_ignored – Returns whether bounds are ignored.
is_bounds_required – Returns whether bounds are required.
is_bounds_supported – Returns whether bounds are supported.
is_gradient_ignored – Returns whether the gradient is ignored.
is_gradient_required – Returns whether the gradient is required.
is_gradient_supported – Returns whether the gradient is supported.
is_initial_point_ignored – Returns whether the initial point is ignored.
is_initial_point_required – Returns whether the initial point is required.
is_initial_point_supported – Returns whether the initial point is supported.
setting – Returns the optimizer settings.

bounds_support_level

Returns the bounds support level.

get_support_level

get_support_level()

Returns the support level dictionary.

gradient_num_diff

static gradient_num_diff(x_center, f, epsilon, max_evals_grouped=1)

Compute the gradient numerically, in parallel, around the point x_center.

Parameters

  • x_center (ndarray) – The point around which the gradient is computed.
  • f (func) – The function whose gradient is to be computed.
  • epsilon (float) – The epsilon used in the numeric differentiation.
  • max_evals_grouped (int) – The maximum number of point evaluations to batch into a single call to f.

Returns

the gradient computed

Return type

grad
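
A small usage sketch of this static method (the quadratic objective and the evaluation point are illustrative):

    import numpy as np
    from qiskit.aqua.components.optimizers import SPSA

    def objective(x):
        # With the default max_evals_grouped=1, f receives one point at a time.
        return float(np.sum(x ** 2))

    grad = SPSA.gradient_num_diff(np.array([1.0, -2.0]), objective, epsilon=1e-6)
    print(grad)  # approximately [2.0, -4.0]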

gradient_support_level

Returns the gradient support level.

initial_point_support_level

Returns the initial point support level.

is_bounds_ignored

Returns whether bounds are ignored.

is_bounds_required

Returns whether bounds are required.

is_bounds_supported

Returns whether bounds are supported.

is_gradient_ignored

Returns whether the gradient is ignored.

is_gradient_required

Returns whether the gradient is required.

is_gradient_supported

Returns whether the gradient is supported.

is_initial_point_ignored

Returns whether the initial point is ignored.

is_initial_point_required

Returns whether the initial point is required.

is_initial_point_supported

Returns whether the initial point is supported.

optimize

optimize(num_vars, objective_function, gradient_function=None, variable_bounds=None, initial_point=None)

Perform optimization.

Parameters

  • num_vars (int) – Number of parameters to be optimized.
  • objective_function (callable) – A function that computes the objective function.
  • gradient_function (callable) – A function that computes the gradient of the objective function, or None if not available.
  • variable_bounds (list[(float, float)]) – List of variable bounds, given as pairs (lower, upper). None means unbounded.
  • initial_point (numpy.ndarray[float]) – Initial point.

Returns

point, value, nfev

  • point – A 1D numpy.ndarray[float] containing the solution.
  • value – A float with the objective function value.
  • nfev – The number of objective function calls made, or None if not available.

Raises

ValueError – invalid input
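
A hedged end-to-end sketch (the deterministic quadratic objective stands in for what would normally be a noisy expectation value from a variational circuit):

    import numpy as np
    from qiskit.aqua.components.optimizers import SPSA

    def objective(x):
        return float(np.sum((x - 1.0) ** 2))

    optimizer = SPSA(maxiter=200)
    point, value, nfev = optimizer.optimize(
        num_vars=2,
        objective_function=objective,
        initial_point=np.array([0.0, 0.0]),
    )
    print(point, value)  # point should end up near [1.0, 1.0]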

print_options

print_options()

Print algorithm-specific options.

set_max_evals_grouped

set_max_evals_grouped(limit)

Set max evals grouped

set_options

set_options(**kwargs)

Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

Parameters

kwargs (dict) – options, given as name=value.
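
For example (which keys are honored depends on the optimizer; save_steps is chosen here to mirror this class's constructor argument and is an assumption):

    spsa = SPSA(maxiter=200)
    # Stores the key/value in the options dictionary; 'save_steps' is a
    # hypothetical option key used purely for illustration.
    spsa.set_options(save_steps=5)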

setting

Returns the optimizer settings.

wrap_function

static wrap_function(function, args)

Wrap the function to implicitly inject the args at the call of the function.

Parameters

  • function (func) – the target function
  • args (tuple) – the args to be injected

Returns

wrapper

Return type

function_wrapper
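
A usage sketch (the function and the injected args are illustrative):

    from qiskit.aqua.components.optimizers import SPSA

    def shifted_norm(x, offset):
        return sum((v - offset) ** 2 for v in x)

    # Bind offset=1.0 so the wrapper has the (x) -> float shape
    # an optimizer expects.
    wrapped = SPSA.wrap_function(shifted_norm, (1.0,))
    print(wrapped([1.0, 2.0]))  # 1.0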
