NFT
class qiskit.algorithms.optimizers.NFT(maxiter=None, maxfev=1024, disp=False, reset_interval=32, options=None, **kwargs)
Bases: SciPyOptimizer
Nakanishi-Fujii-Todo algorithm.
See https://arxiv.org/abs/1903.12166
Built on the SciPy framework; for details, please refer to https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html.
Parameters
- maxiter (int | None) – Maximum number of iterations to perform.
- maxfev (int) – Maximum number of function evaluations to perform.
- disp (bool) – Whether to print convergence messages.
- reset_interval (int) – The minimum is re-estimated by a direct function evaluation once every reset_interval iterations, which suppresses the accumulation of statistical error.
- options (dict | None) – A dictionary of solver options.
- kwargs – additional kwargs for scipy.optimize.minimize.
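A minimal construction sketch (the parameter values are illustrative, not recommendations):
from qiskit.algorithms.optimizers import NFT

# Re-estimate the minimum by a direct evaluation every 32 iterations
# to limit the build-up of statistical error.
optimizer = NFT(maxiter=100, maxfev=1024, disp=False, reset_interval=32)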
Notes
In this optimization method, the objective function has to satisfy the three conditions written in [1].
References
[1]
K. M. Nakanishi, K. Fujii, and S. Todo. 2019. Sequential minimal optimization for quantum-classical hybrid algorithms. arXiv preprint arXiv:1903.12166.
Attributes
bounds_support_level
Returns bounds support level
gradient_support_level
Returns gradient support level
initial_point_support_level
Returns initial point support level
is_bounds_ignored
Returns is bounds ignored
is_bounds_required
Returns is bounds required
is_bounds_supported
Returns is bounds supported
is_gradient_ignored
Returns is gradient ignored
is_gradient_required
Returns is gradient required
is_gradient_supported
Returns is gradient supported
is_initial_point_ignored
Returns is initial point ignored
is_initial_point_required
Returns is initial point required
is_initial_point_supported
Returns is initial point supported
setting
Return setting
settings
The optimizer settings in a dictionary format.
Methods
get_support_level
get_support_level()
Return support level dictionary
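A quick sketch of how this might be inspected; the exact dictionary keys are an assumption based on the bounds/gradient/initial-point attributes listed above:
from qiskit.algorithms.optimizers import NFT

# Keys assumed: 'gradient', 'bounds', 'initial_point'.
levels = NFT().get_support_level()
print(levels)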
gradient_num_diff
static gradient_num_diff(x_center, f, epsilon, max_evals_grouped=None)
Compute the gradient of f around the point x_center using numeric differentiation, with the function evaluations batched so they can be run in parallel.
Parameters
- x_center (ndarray) – point around which we compute the gradient
- f (func) – the function of which the gradient is to be computed.
- epsilon (float) – the epsilon used in the numeric differentiation.
- max_evals_grouped (int) – the maximum number of function evaluations to group into one batch; defaults to 1 (i.e. no batching).
Returns
the gradient computed
Return type
numpy.ndarray
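As an illustration, the static method can be called on a plain NumPy function; the quadratic objective below is an arbitrary stand-in:
import numpy as np
from qiskit.algorithms.optimizers import NFT

# Forward-difference gradient of f(x) = x0^2 + x1^2 at (1, 2);
# the exact gradient is (2, 4).
grad = NFT.gradient_num_diff(
    x_center=np.array([1.0, 2.0]),
    f=lambda x: float(np.sum(x**2)),
    epsilon=1e-6,
)
print(grad)  # approximately [2. 4.]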
minimize
minimize(fun, x0, jac=None, bounds=None)
Minimize the scalar function.
Parameters
- fun (Callable[[POINT], float]) – The scalar function to minimize.
- x0 (POINT) – The initial point for the minimization.
- jac (Callable[[POINT], POINT] | None) – The gradient of the scalar function fun.
- bounds (list[tuple[float, float]] | None) – Bounds for the variables of fun. This argument might be ignored if the optimizer does not support bounds.
Returns
The result of the optimization, containing e.g. the optimal point as the attribute x.
Return type
OptimizerResult
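A minimal end-to-end sketch; the cosine-sum cost is a toy stand-in for a variational-circuit expectation value, chosen because its dependence on each parameter is sinusoidal, as the conditions in the Notes require:
import numpy as np
from qiskit.algorithms.optimizers import NFT

# Toy objective, sinusoidal in each parameter; minimized when each
# parameter is a multiple of 2*pi, where the value is -2.0.
def cost(x):
    return -float(np.sum(np.cos(x)))

optimizer = NFT(maxiter=200, reset_interval=32)
result = optimizer.minimize(fun=cost, x0=np.array([0.4, -1.1]))
print(result.x, result.fun)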
print_options
print_options()
Print algorithm-specific options.
set_max_evals_grouped
set_max_evals_grouped(limit)
Set the maximum number of function evaluations that may be grouped into a single batch.
set_options
set_options(**kwargs)
Sets or updates values in the options dictionary.
The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.
Parameters
kwargs (dict) – options, given as name=value.
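For example, to raise the evaluation budget and enable convergence messages on an existing instance:
from qiskit.algorithms.optimizers import NFT

optimizer = NFT()
# These keys match the constructor's options and are forwarded to the
# underlying SciPy-style minimizer.
optimizer.set_options(maxfev=4096, disp=True)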
wrap_function
static wrap_function(function, args)
Wrap the function so that the given args are implicitly injected into every call of the function.
Parameters
- function (func) – the target function
- args (tuple) – the args to be injected
Returns
wrapper
Return type
function_wrapper
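An illustrative sketch (the scale argument is hypothetical): the returned wrapper appends the given args to every call, so wrapped(x) is equivalent to scaled_cost(x, -1.0):
import numpy as np
from qiskit.algorithms.optimizers import NFT

def scaled_cost(x, scale):
    return scale * float(np.sum(np.cos(x)))

wrapped = NFT.wrap_function(scaled_cost, (-1.0,))
print(wrapped(np.array([0.0, 0.0])))  # prints -2.0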