AQGD

class AQGD(maxiter=1000, eta=3.0, tol=1e-06, disp=False, momentum=0.25)

Analytic Quantum Gradient Descent (AQGD) optimizer.

Performs gradient descent optimization with a momentum term and analytic gradients for parametrized quantum gates, i.e., Pauli rotations. See, for example:

  • K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. (2018). Quantum circuit learning. Phys. Rev. A 98, 032309. https://arxiv.org/abs/1803.00745
  • Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, Nathan Killoran. (2019). Evaluating analytic gradients on quantum hardware. Phys. Rev. A 99, 032331. https://arxiv.org/abs/1811.11184

for further details on analytic gradients of parametrized quantum gates.

Gradients are computed “analytically” using the quantum circuit when evaluating the objective function.

Parameters

  • maxiter (int) – Maximum number of iterations; each iteration evaluates the gradient.
  • eta (float) – The coefficient of the gradient update. Increasing this value results in larger step sizes: param = previous_param - eta * deriv
  • tol (float) – The convergence criterion that must be met before stopping. Optimization stops when: absolute(loss - previous_loss) < tol
  • disp (bool) – Set to True to display convergence messages.
  • momentum (float) – Bias towards the previous gradient momentum in the current update. Must be within the bounds [0, 1).
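
As a hedged usage sketch, constructing the optimizer might look like the following. The import path is an assumption and depends on the SDK version (older releases shipped AQGD under qiskit.aqua.components.optimizers, later ones under qiskit.algorithms.optimizers):

    # Import path is an assumption for this (old) SDK version; adjust to
    # where AQGD lives in your release.
    from qiskit.algorithms.optimizers import AQGD

    # A smaller step size than the default eta=3.0, with convergence
    # messages enabled; momentum stays within the required [0, 1) range.
    optimizer = AQGD(maxiter=500, eta=0.5, tol=1e-6, disp=True, momentum=0.25)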

Attributes

bounds_support_level

Returns the bounds support level

gradient_support_level

Returns the gradient support level

initial_point_support_level

Returns the initial point support level

is_bounds_ignored

Returns whether bounds are ignored

is_bounds_required

Returns whether bounds are required

is_bounds_supported

Returns whether bounds are supported

is_gradient_ignored

Returns whether the gradient is ignored

is_gradient_required

Returns whether the gradient is required

is_gradient_supported

Returns whether the gradient is supported

is_initial_point_ignored

Returns whether the initial point is ignored

is_initial_point_required

Returns whether an initial point is required

is_initial_point_supported

Returns whether an initial point is supported

setting

Returns the optimizer settings


Methods

converged

AQGD.converged(objval, n=2)

Determines whether the objective function has converged by comparing the current value against the previous n values.

Parameters

  • objval (float) – Current value of the objective function.
  • n (int) – Number of previous steps which must be within the convergence criteria in order to be considered converged. Using a larger number will prevent the optimizer from stopping early.

Returns

Whether or not the optimization has converged.

Return type

bool
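
Conceptually, the test keeps a short history of objective values and declares convergence only when the current value differs from each of the previous n values by less than tol. A minimal standalone sketch in that spirit (not the SDK's exact bookkeeping):

    from collections import deque

    # Illustrative windowed convergence test; AQGD's internal state
    # handling may differ from this sketch.
    class ConvergenceWindow:
        def __init__(self, tol=1e-6, n=2):
            self.tol = tol
            self.n = n
            self.previous = deque(maxlen=n)  # the last n objective values

        def converged(self, objval):
            done = (len(self.previous) == self.n and
                    all(abs(objval - p) < self.tol for p in self.previous))
            self.previous.append(objval)
            return done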

deriv

AQGD.deriv(j, params, obj)

Obtains the analytical quantum derivative of the objective function with respect to the jth parameter.

Parameters

  • j (int) – Index of the parameter to compute the derivative of.
  • params (array) – Current value of the parameters to evaluate the objective function at.
  • obj (callable) – Objective function.

Returns

The derivative of the objective function with respect to the jth parameter.

Return type

float
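
For Pauli-rotation parameters the analytic derivative follows the parameter-shift rule: shifting the jth parameter by ±π/2 and halving the difference of the two objective values gives the exact derivative. A minimal standalone sketch (parameter_shift_deriv is an illustrative name, not the SDK method):

    import numpy as np

    def parameter_shift_deriv(j, params, obj):
        # Exact derivative w.r.t. params[j] for objectives of the form
        # a*cos(theta_j) + b*sin(theta_j) + c, as produced by Pauli rotations.
        shift = np.zeros_like(params, dtype=float)
        shift[j] = np.pi / 2
        return (obj(params + shift) - obj(params - shift)) / 2.0

    # Sanity check against a known derivative: d/dx cos(x) = -sin(x).
    params = np.array([0.3, 1.1])
    obj = lambda p: float(np.sum(np.cos(p)))
    print(parameter_shift_deriv(0, params, obj), -np.sin(0.3))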

get_support_level

AQGD.get_support_level()

Return support level dictionary

gradient_num_diff

static AQGD.gradient_num_diff(x_center, f, epsilon, max_evals_grouped=1)

Computes the gradient numerically, in parallel, around the point x_center.

Parameters

  • x_center (ndarray) – The point around which the gradient is computed.
  • f (func) – The function whose gradient is to be computed.
  • epsilon (float) – The step size used in the numeric differentiation.
  • max_evals_grouped (int) – The maximum number of function evaluations to group for parallel execution.

Returns

The computed gradient.

Return type

numpy.ndarray
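
Conceptually this is a forward-difference approximation evaluated component by component; the SDK additionally batches the shifted points, up to max_evals_grouped at a time, so backends can evaluate them together. An illustrative un-batched sketch:

    import numpy as np

    def numeric_gradient(x_center, f, epsilon):
        # Forward-difference gradient of f around x_center; f maps a 1-D
        # ndarray to a float. Batching across components is omitted here.
        f0 = f(x_center)
        grad = np.zeros_like(x_center, dtype=float)
        for i in range(x_center.size):
            x = x_center.copy()
            x[i] += epsilon
            grad[i] = (f(x) - f0) / epsilon
        return grad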

optimize

AQGD.optimize(num_vars, objective_function, gradient_function=None, variable_bounds=None, initial_point=None)

Perform optimization.

Parameters

  • num_vars (int) – Number of parameters to be optimized.
  • objective_function (callable) – A function that computes the objective function.
  • gradient_function (callable) – A function that computes the gradient of the objective function, or None if not available.
  • variable_bounds (list[(float, float)]) – List of variable bounds, given as pairs (lower, upper). None means unbounded.
  • initial_point (numpy.ndarray[float]) – Initial point.

Returns

point, value, nfev

  • point: a 1D numpy.ndarray[float] containing the solution
  • value: a float with the objective function value
  • nfev: the number of objective function calls made, or None if not available

Raises

ValueError – invalid input
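
A hedged end-to-end sketch of a call to optimize; the cosine objective below is a stand-in for an expectation value built from Pauli rotations, so AQGD's analytic gradient is exact for it (import path as assumed above):

    import numpy as np
    from qiskit.algorithms.optimizers import AQGD  # assumed path, see above

    def objective(params):
        # Each parameter enters as a single cos term, mimicking the
        # dependence of a Pauli-rotation expectation value.
        return float(np.sum(np.cos(params)))

    optimizer = AQGD(maxiter=200, eta=0.5, tol=1e-6)
    point, value, nfev = optimizer.optimize(
        num_vars=2,
        objective_function=objective,
        initial_point=np.array([0.1, -0.2]),
    )
    # point should approach [pi, pi] (up to periodicity), where value = -2.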

print_options

AQGD.print_options()

Print algorithm-specific options.

set_max_evals_grouped

AQGD.set_max_evals_grouped(limit)

Set the maximum number of function evaluations to group for parallel execution.

set_options

AQGD.set_options(**kwargs)

Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

Parameters

kwargs (dict) – options, given as name=value.
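
A brief usage sketch; whether a given key changes AQGD's behavior depends on the SDK version, and unrecognized keys are simply stored in the dictionary:

    from qiskit.algorithms.optimizers import AQGD  # assumed path, see above

    optimizer = AQGD()
    optimizer.set_options(disp=True)  # stored in the options dictionary
    optimizer.print_options()         # prints the algorithm-specific options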

update

AQGD.update(j, params, deriv, mprev)

Updates the jth parameter based on the derivative and the previous momentum.

Parameters

  • j (int) – Index of the parameter to update.
  • params (array) – Current values of the parameters.
  • deriv (float) – Value of the derivative w.r.t. the jth parameter.
  • mprev (array) – Array containing all of the parameter momenta.

Returns

params, new momenta

Return type

tuple
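
In spirit this is classical momentum: the new momentum blends the previous momentum with the fresh derivative, and the parameter steps against it, scaled by eta. A minimal sketch under that assumption (the SDK's exact blending and bookkeeping may differ):

    def momentum_update(j, params, deriv, mprev, eta=3.0, momentum=0.25):
        # Illustrative momentum-assisted update of params[j]; not the
        # SDK's exact implementation.
        mnew = momentum * mprev[j] + (1.0 - momentum) * deriv
        params[j] = params[j] - eta * mnew
        mprev[j] = mnew
        return params, mprev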

wrap_function

static AQGD.wrap_function(function, args)

Wraps the function so that the given args are implicitly injected when the wrapper is called.

Parameters

  • function (func) – the target function
  • args (tuple) – the args to be injected

Returns

wrapper

Return type

function_wrapper
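
A minimal sketch of what such a wrapper typically looks like (illustrative, not the SDK source):

    def wrap_function(function, args):
        # Return a single-argument wrapper that calls function(x, *args),
        # so the extra args are fixed up front.
        def function_wrapper(x):
            return function(x, *args)
        return function_wrapper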
