
ADAM

class qiskit.algorithms.optimizers.ADAM(maxiter=10000, tol=1e-06, lr=0.001, beta_1=0.9, beta_2=0.99, noise_factor=1e-08, eps=1e-10, amsgrad=False, snapshot_dir=None)


Bases: Optimizer

Adam and AMSGRAD optimizers.

Adam [1] is a gradient-based optimization algorithm that relies on adaptive estimates of lower-order moments. The algorithm requires little memory and is invariant to diagonal rescaling of the gradients. Furthermore, it copes well with non-stationary objective functions and with noisy and/or sparse gradients.

AMSGRAD [2] (a variant of Adam) uses a ‘long-term memory’ of past gradients and, thereby, improves convergence properties.
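For reference, a sketch of the update rule from [1], written with this class's lr, beta_1, beta_2 and noise_factor in place of the usual constants (illustrative, not taken verbatim from the implementation):

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
\hat{m}_t = m_t / (1 - \beta_1^t), \quad \hat{v}_t = v_t / (1 - \beta_2^t)
\theta_t = \theta_{t-1} - \mathrm{lr} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \mathrm{noise\_factor})

AMSGRAD replaces \hat{v}_t by \max(\hat{v}_{t-1}, v_t), which is the 'long-term memory' of past gradients mentioned above.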

References

[1]: Kingma, Diederik & Ba, Jimmy (2014), Adam: A Method for Stochastic Optimization. arXiv:1412.6980

[2]: Reddi, Sashank J., Kale, Satyen & Kumar, Sanjiv (2018), On the Convergence of Adam and Beyond. arXiv:1904.09237

Note

Some functions of this component are randomized. To reproduce behavior, set the random number generator seed in the algorithm_globals (qiskit.utils.algorithm_globals.random_seed = seed).
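A minimal sketch of fixing the seed (the seed value is illustrative):

from qiskit.utils import algorithm_globals

# Fix the global seed so any randomized behavior is reproducible.
algorithm_globals.random_seed = 42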

Parameters

  • maxiter (int) – Maximum number of iterations.
  • tol (float) – Tolerance for termination.
  • lr (float) – Learning rate; value >= 0.
  • beta_1 (float) – Value in the range 0 to 1; generally close to 1.
  • beta_2 (float) – Value in the range 0 to 1; generally close to 1.
  • noise_factor (float) – Noise factor; value >= 0.
  • eps (float) – Value >= 0; epsilon used for finite differences if no analytic gradient method is given.
  • amsgrad (bool) – If True, use AMSGRAD; otherwise use standard Adam.
  • snapshot_dir (str | None) – If not None, save the optimizer's parameters to the given directory after every step.
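A minimal construction sketch; the hyperparameter values and the snapshot directory below are illustrative, not recommendations:

from qiskit.algorithms.optimizers import ADAM

# AMSGRAD variant with a larger learning rate; the optimizer's parameters
# are written to the given directory (assumed to exist and be writable).
adam = ADAM(maxiter=5000, lr=0.01, amsgrad=True, snapshot_dir="/tmp/adam_snapshots")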

Attributes

bounds_support_level

Returns bounds support level

gradient_support_level

Returns gradient support level

initial_point_support_level

Returns initial point support level

is_bounds_ignored

Returns is bounds ignored

is_bounds_required

Returns is bounds required

is_bounds_supported

Returns is bounds supported

is_gradient_ignored

Returns is gradient ignored

is_gradient_required

Returns is gradient required

is_gradient_supported

Returns is gradient supported

is_initial_point_ignored

Returns is initial point ignored

is_initial_point_required

Returns is initial point required

is_initial_point_supported

Returns is initial point supported

setting

Return setting

settings


Methods

get_support_level

get_support_level()

Return support level dictionary

gradient_num_diff

static gradient_num_diff(x_center, f, epsilon, max_evals_grouped=None)

Compute the gradient of the function via numeric differentiation, in a parallel fashion, around the point x_center.

Parameters

  • x_center (ndarray) – point around which we compute the gradient
  • f (func) – the function of which the gradient is to be computed.
  • epsilon (float) – the epsilon used in the numeric differentiation.
  • max_evals_grouped (int) – maximum number of evaluations grouped into one call; defaults to 1 (i.e. no batching).

Returns

the gradient computed

Return type

grad
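A short sketch of the numeric-differentiation helper on a simple quadratic (the function and the point are illustrative):

import numpy as np
from qiskit.algorithms.optimizers import ADAM

def f(x):
    # f(x) = x_0^2 + x_1^2, so the gradient at (1, -2) is roughly (2, -4).
    return float(np.sum(x ** 2))

grad = ADAM.gradient_num_diff(np.array([1.0, -2.0]), f, epsilon=1e-6)
print(grad)  # approximately [2., -4.]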

load_params

load_params(load_dir)

Load iteration parameters from a file called adam_params.csv.

Parameters

load_dir (str) – The directory containing adam_params.csv.

minimize

minimize(fun, x0, jac=None, bounds=None, objective_function=None, initial_point=None, gradient_function=None)

Minimize the scalar function.

Deprecated since version 0.19.0

qiskit.algorithms.optimizers.adam_amsgrad.ADAM.minimize()’s argument gradient_function is deprecated as of qiskit-terra 0.19.0. It will be removed no earlier than 3 months after the release date. Instead, use the argument jac, which behaves identically.

Deprecated since version 0.19.0

qiskit.algorithms.optimizers.adam_amsgrad.ADAM.minimize()’s argument initial_point is deprecated as of qiskit-terra 0.19.0. It will be removed no earlier than 3 months after the release date. Instead, use the argument x0, which behaves identically.

Deprecated since version 0.19.0

qiskit.algorithms.optimizers.adam_amsgrad.ADAM.minimize()’s argument objective_function is deprecated as of qiskit-terra 0.19.0. It will be removed no earlier than 3 months after the release date. Instead, use the argument fun, which behaves identically.

Parameters

  • fun (Callable[[POINT], float]) – The scalar function to minimize.
  • x0 (POINT) – The initial point for the minimization.
  • jac (Callable[[POINT], POINT] | None) – The gradient of the scalar function fun.
  • bounds (list[tuple[float, float]] | None) – Bounds for the variables of fun. This argument might be ignored if the optimizer does not support bounds.
  • objective_function (Callable[[np.ndarray], float] | None) – DEPRECATED. A function handle to the objective function.
  • initial_point (np.ndarray | None) – DEPRECATED. The initial iteration point.
  • gradient_function (Callable[[np.ndarray], float] | None) – DEPRECATED. A function handle to the gradient of the objective function.

Returns

The result of the optimization, containing e.g. the result as attribute x.

Return type

OptimizerResult
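A minimal usage sketch with the non-deprecated arguments fun, x0 and jac (the objective and its gradient below are illustrative):

import numpy as np
from qiskit.algorithms.optimizers import ADAM

def objective(x):
    # Simple convex objective with its minimum at (2, 0).
    return (x[0] - 2.0) ** 2 + x[1] ** 2

def gradient(x):
    # Analytic gradient of the objective, supplied via jac.
    return np.array([2.0 * (x[0] - 2.0), 2.0 * x[1]])

adam = ADAM(maxiter=1000, lr=0.1)
result = adam.minimize(fun=objective, x0=np.array([0.0, 1.0]), jac=gradient)
print(result.x)  # final point, expected to end up close to [2., 0.]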

print_options

print_options()

Print algorithm-specific options.

save_params

save_params(snapshot_dir)

Save the current iteration parameters to a file called adam_params.csv.

Note

The current parameters are appended to the file, if it exists already. The file is not overwritten.

Parameters

snapshot_dir (str) – The directory to store the file in.
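A small sketch of persisting and reloading the iteration parameters; the directory is illustrative and assumed to exist and be writable:

from qiskit.algorithms.optimizers import ADAM

adam = ADAM()
snapshot_dir = "/tmp/adam_snapshots"
adam.save_params(snapshot_dir)  # appends the current parameters to adam_params.csv
adam.load_params(snapshot_dir)  # restores them from the same file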

set_max_evals_grouped

set_max_evals_grouped(limit)

Set max evals grouped

set_options

set_options(**kwargs)

Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

Parameters

kwargs (dict) – options, given as name=value.
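A brief sketch; the key below is illustrative, and whether a given key is actually consumed depends on the optimizer:

from qiskit.algorithms.optimizers import ADAM

adam = ADAM()
adam.set_options(tol=1e-08)  # stored in the options dictionary as name=value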

wrap_function

static wrap_function(function, args)

Wrap the function so that the given args are implicitly injected when the wrapped function is called.

Parameters

  • function (func) – the target function
  • args (tuple) – the args to be injected

Returns

wrapper

Return type

function_wrapper
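A small sketch, assuming the injected args are appended after the arguments supplied at call time:

from qiskit.algorithms.optimizers import ADAM

def scaled_norm(x, scale):
    # Sum of squares, scaled by a constant injected via args.
    return scale * sum(v * v for v in x)

wrapped = ADAM.wrap_function(scaled_norm, (2.0,))
print(wrapped([1.0, 2.0]))  # 2.0 * (1 + 4) = 10.0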
