GradientDescentState
class qiskit.algorithms.optimizers.GradientDescentState(x, fun, jac, nfev, njev, nit, stepsize, learning_rate)
Bases: OptimizerState
State of GradientDescent.

Dataclass with all the information of an optimizer, plus the learning_rate and the stepsize.
Attributes
stepsize
Type: float | None
Norm of the gradient on the last step.
learning_rate
Type: LearningRate
Learning rate at the current step of the optimization process. It behaves like a generator (use next(learning_rate) to get the learning rate for the next step), but it can also return the current learning rate with learning_rate.current.
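The generator-plus-current behavior described above can be illustrated with a minimal sketch. Note that ConstantLearningRate below is a hypothetical stand-in written for this example, not Qiskit's LearningRate class; it only mimics the documented interface.

```python
# Illustrative sketch (an assumption, not the Qiskit implementation):
# an iterator that yields a learning rate on each next(...) call and
# remembers the most recent value in .current.
class ConstantLearningRate:
    def __init__(self, eta):
        self._eta = eta
        self.current = None  # learning rate of the current step

    def __iter__(self):
        return self

    def __next__(self):
        self.current = self._eta
        return self.current


lr = ConstantLearningRate(0.1)
step_rate = next(lr)           # learning rate for the next step
print(step_rate, lr.current)   # prints: 0.1 0.1
```

The same pattern extends to schedules (e.g. a decaying rate) by making `__next__` compute a new value each call while still caching it in `current`.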
x
Type: POINT
Current optimization parameters.
fun
Type: Callable[[POINT], float] | None
Function being optimized.
jac
Type: Callable[[POINT], POINT] | None
Jacobian of the function being optimized.
nfev
Type: int | None
Number of function evaluations so far in the optimization.
njev
Type: int | None
Number of Jacobian evaluations so far in the optimization.
nit
Type: int | None
Number of optimization steps performed so far in the optimization.
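To show how these fields fit together, here is a hedged sketch of a plain gradient-descent loop updating a stand-in dataclass with the same attributes. The State class, f, and grad below are hypothetical illustrations, not Qiskit code, and learning_rate is simplified to a float rather than a LearningRate generator.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional
import math

# Hypothetical stand-in for GradientDescentState (an assumption, not the
# Qiskit class): same fields, updated by a plain gradient-descent loop.
@dataclass
class State:
    x: List[float]                  # current optimization parameters
    fun: Optional[Callable]         # function being optimized
    jac: Optional[Callable]         # Jacobian of the function
    nfev: Optional[int]             # number of function evaluations
    njev: Optional[int]             # number of Jacobian evaluations
    nit: Optional[int]              # number of optimization steps
    stepsize: Optional[float]       # norm of the gradient on the last step
    learning_rate: float            # simplified to a float for this sketch


def f(x):
    return sum(xi * xi for xi in x)     # simple quadratic objective

def grad(x):
    return [2.0 * xi for xi in x]       # its gradient


state = State(x=[1.0, -2.0], fun=f, jac=grad,
              nfev=0, njev=0, nit=0, stepsize=None, learning_rate=0.1)

for _ in range(50):
    g = state.jac(state.x)
    state.njev += 1
    state.x = [xi - state.learning_rate * gi for xi, gi in zip(state.x, g)]
    state.stepsize = math.sqrt(sum(gi * gi for gi in g))
    state.nit += 1
```

After the loop, state.x is close to the minimizer at the origin, and nit, njev, and stepsize record the bookkeeping that the real GradientDescentState exposes during an optimization run.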