GradientDescent
class GradientDescent(maxiter=100, learning_rate=0.01, tol=1e-07, callback=None, perturbation=None)
Bases: qiskit.algorithms.optimizers.optimizer.Optimizer
The gradient descent minimization routine.
For a function $f$ and an initial point $\vec\theta_0$, the standard (or "vanilla") gradient descent method is an iterative scheme to find the minimum $\vec\theta^*$ of $f$ by updating the parameters in the direction of the negative gradient of $f$,

$$\vec\theta_{n+1} = \vec\theta_n - \eta \nabla f(\vec\theta_n),$$

for a small learning rate $\eta > 0$.
You can either provide the analytic gradient as gradient_function in the optimize method, or, if you do not provide it, use a finite difference approximation of the gradient. To adapt the size of the perturbation in the finite difference gradients, set the perturbation property in the initializer.
This optimizer supports a callback function. If provided in the initializer, the optimizer will call the callback in each iteration with the following information, in this order: current number of function evaluations, current parameters, current function value, norm of current gradient.
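For instance, a callback that records the optimization history could look like the sketch below (the names history and callback are illustrative, not part of the API):

```python
history = []

def callback(nfevs, parameters, value, gradient_norm):
    # Invoked once per iteration when passed via the ``callback``
    # argument of the GradientDescent initializer.
    history.append((nfevs, parameters.copy(), value, gradient_norm))
```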
Examples
A minimal example that will use finite difference gradients with a default perturbation of 0.01 and a default learning rate of 0.01.
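A sketch of such an example, using a simple norm-based objective for illustration:

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def f(x):
    # Illustrative objective: squared deviation of the norm of x from 1.
    return (np.linalg.norm(x) - 1) ** 2

initial_point = np.array([1.0, 0.5, -0.2])

optimizer = GradientDescent(maxiter=100)
x_opt, fx_opt, nfevs = optimizer.optimize(
    initial_point.size, f, initial_point=initial_point
)

print(f"Found minimum {x_opt} at a value of {fx_opt} using {nfevs} evaluations.")
```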
An example where the learning rate is an iterator and we supply the analytic gradient. Note how much faster this converges (i.e. fewer nfevs) compared to the previous example.
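Again as a sketch, with an illustrative decaying learning-rate generator and the analytic gradient of the same objective:

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def f(x):
    return (np.linalg.norm(x) - 1) ** 2

def grad_f(x):
    # Analytic gradient of f.
    return 2 * (np.linalg.norm(x) - 1) * x / np.linalg.norm(x)

def learning_rate():
    # Callable returning an iterator of learning rates; the power-law
    # decay used here is just one illustrative choice of schedule.
    def powerlaw():
        n = 1
        while True:
            yield 0.1 / (n ** 0.6)
            n += 1

    return powerlaw()

initial_point = np.array([1.0, 0.5, -0.2])

optimizer = GradientDescent(maxiter=100, learning_rate=learning_rate)
x_opt, fx_opt, nfevs = optimizer.optimize(
    initial_point.size, f, gradient_function=grad_f, initial_point=initial_point
)

print(f"Found minimum {x_opt} at a value of {fx_opt} using {nfevs} evaluations.")
```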
Parameters
- maxiter (int) – The maximum number of iterations.
- learning_rate (Union[float, Callable[[], Iterator]]) – A constant or generator yielding learning rates for the parameter updates. See the docstring for an example.
- tol (float) – If the norm of the parameter update is smaller than this threshold, the optimizer is converged.
- callback (Optional[Callable]) – A callback function called in each iteration; see the class description for the arguments it receives.
- perturbation (Optional[float]) – If no gradient is passed to GradientDescent.optimize, the gradient is approximated with a symmetric finite difference scheme with a perturbation of perturbation in both directions (defaults to 1e-2 if required). Ignored if a gradient callable is passed to GradientDescent.optimize.
Methods
get_support_level
GradientDescent.get_support_level()
Get the support level dictionary.
gradient_num_diff
static GradientDescent.gradient_num_diff(x_center, f, epsilon, max_evals_grouped=1)
Compute the gradient of the function f around the point x_center using numeric differentiation, grouping evaluations for parallel execution where possible.
Parameters
- x_center (ndarray) – point around which we compute the gradient
- f (func) – the function of which the gradient is to be computed.
- epsilon (float) – the epsilon used in the numeric differentiation.
- max_evals_grouped (int) – the maximum number of point evaluations grouped into a single call to f
Returns
the gradient computed
Return type
grad
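As a sketch, this static helper could be called directly on an illustrative objective:

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def f(x):
    return (np.linalg.norm(x) - 1) ** 2

x_center = np.array([1.0, 0.5, -0.2])

# Finite-difference approximation of the gradient of f at x_center.
grad = GradientDescent.gradient_num_diff(x_center, f, epsilon=1e-2)
print(grad)
```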
minimize
GradientDescent.minimize(fun, x0, jac=None, bounds=None)
Minimize the scalar function.
Parameters
- fun (Callable[[Union[float, ndarray]], float]) – The scalar function to minimize.
- x0 (Union[float, ndarray]) – The initial point for the minimization.
- jac (Optional[Callable[[Union[float, ndarray]], Union[float, ndarray]]]) – The gradient of the scalar function fun.
- bounds (Optional[List[Tuple[float, float]]]) – Bounds for the variables of fun. This argument might be ignored if the optimizer does not support bounds.
Return type
OptimizerResult
Returns
The result of the optimization, containing e.g. the result as attribute x.
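A sketch of calling minimize directly, reusing the illustrative objective and gradient from the examples above (the returned OptimizerResult is assumed to expose the solution as x along with fun and nfev):

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def f(x):
    return (np.linalg.norm(x) - 1) ** 2

def grad_f(x):
    return 2 * (np.linalg.norm(x) - 1) * x / np.linalg.norm(x)

optimizer = GradientDescent(maxiter=100)
result = optimizer.minimize(fun=f, x0=np.array([1.0, 0.5, -0.2]), jac=grad_f)

print(result.x)     # solution point
print(result.fun)   # objective value at the solution
print(result.nfev)  # number of function evaluations
```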
optimize
GradientDescent.optimize(num_vars, objective_function, gradient_function=None, variable_bounds=None, initial_point=None)
Perform optimization.
Parameters
- num_vars (int) – Number of parameters to be optimized.
- objective_function (callable) – A function that computes the objective function.
- gradient_function (callable) – A function that computes the gradient of the objective function, or None if not available.
- variable_bounds (list[(float, float)]) – List of variable bounds, given as pairs (lower, upper). None means unbounded.
- initial_point (numpy.ndarray[float]) – Initial point.
Returns
point, value, nfev
- point: a 1D numpy.ndarray[float] containing the solution
- value: a float with the objective function value
- nfev: the number of objective function calls made, if available, or None
Raises
ValueError – invalid input
print_options
GradientDescent.print_options()
Print algorithm-specific options.
set_max_evals_grouped
GradientDescent.set_max_evals_grouped(limit)
Set the maximum number of evaluations grouped into a single call.
set_options
GradientDescent.set_options(**kwargs)
Sets or updates values in the options dictionary.
The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.
Parameters
kwargs (dict) – options, given as name=value.
wrap_function
static GradientDescent.wrap_function(function, args)
Wrap the function to implicitly inject the args at the call of the function.
Parameters
- function (func) – the target function
- args (tuple) – the args to be injected
Returns
wrapper
Return type
function_wrapper
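A sketch of using this static helper to bind an extra argument to an objective (the function and values here are illustrative):

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def shifted_norm(x, offset):
    # Objective that needs an extra argument besides the parameters x.
    return np.linalg.norm(x - offset)

offset = np.array([1.0, 2.0])
wrapped = GradientDescent.wrap_function(shifted_norm, (offset,))

# The wrapper injects ``offset``: wrapped(x) calls shifted_norm(x, offset).
print(wrapped(np.array([0.0, 0.0])))
```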
Attributes
bounds_support_level
Returns bounds support level
gradient_support_level
Returns gradient support level
initial_point_support_level
Returns initial point support level
is_bounds_ignored
Returns is bounds ignored
is_bounds_required
Returns is bounds required
is_bounds_supported
Returns is bounds supported
is_gradient_ignored
Returns is gradient ignored
is_gradient_required
Returns is gradient required
is_gradient_supported
Returns is gradient supported
is_initial_point_ignored
Returns is initial point ignored
is_initial_point_required
Returns is initial point required
is_initial_point_supported
Returns is initial point supported
setting
Return setting
settings
Return type
Dict[str, Any]