GradientDescentState
class GradientDescentState(x, fun, jac, nfev, njev, nit, stepsize, learning_rate)
Bases: qiskit.algorithms.optimizers.steppable_optimizer.OptimizerState
State of GradientDescent.
Dataclass with all the information of an optimizer plus the learning_rate and the stepsize.
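A minimal sketch of constructing a GradientDescentState by hand, assuming the import paths follow the module names shown on this page. In practice the state is created and updated internally by GradientDescent; the quadratic objective and its gradient below are illustrative placeholders.

```python
import numpy as np
from qiskit.algorithms.optimizers.gradient_descent import GradientDescentState
from qiskit.algorithms.optimizers.optimizer_utils import LearningRate


def fun(x):
    return float(np.sum(x**2))  # illustrative objective: f(x) = ||x||^2


def jac(x):
    return 2 * x  # gradient of the objective above


state = GradientDescentState(
    x=np.array([1.0, -0.5]),          # current optimization parameters
    fun=fun,                          # function being optimized
    jac=jac,                          # Jacobian of the function
    nfev=0,                           # function evaluations so far
    njev=0,                           # Jacobian evaluations so far
    nit=0,                            # optimization steps so far
    stepsize=None,                    # no step taken yet
    learning_rate=LearningRate(0.1),  # constant learning rate
)
```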
Attributes
stepsize
Type: Optional[float]
Norm of the gradient on the last step.
learning_rate
Type: qiskit.algorithms.optimizers.optimizer_utils.learning_rate.LearningRate
Learning rate at the current step of the optimization process.
It behaves like a generator: use next(learning_rate) to get the learning rate for the next step, and learning_rate.current to read the current one (see the sketch after the attribute list).
x
Type: Union[float, numpy.ndarray]
Current optimization parameters.
fun
Type: Optional[Callable[[Union[float, numpy.ndarray]], float]]
Function being optimized.
jac
Type: Optional[Callable[[Union[float, numpy.ndarray]], Union[float, numpy.ndarray]]]
Jacobian of the function being optimized.
nfev
Type: Optional[int]
Number of function evaluations so far in the optimization.
njev
Type: Optional[int]
Number of Jacobian evaluations so far in the optimization.
nit
Type: Optional[int]
Number of optimization steps performed so far in the optimization.
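A minimal sketch of the generator-like LearningRate behavior described above, assuming a constant rate is passed to the constructor; a list, array, or generator factory can schedule a varying rate instead.

```python
from qiskit.algorithms.optimizers.optimizer_utils import LearningRate

learning_rate = LearningRate(0.1)

eta = next(learning_rate)      # advance the generator to get the rate for the next step
print(eta)                     # 0.1
print(learning_rate.current)   # 0.1 -- the rate most recently emitted by next()
```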