IBM Quantum Documentation
This page is from an old version of Qiskit SDK and does not exist in the latest version. We recommend you migrate to the latest version. See the release notes for more information.

GradientDescentState

class qiskit.algorithms.optimizers.GradientDescentState(x, fun, jac, nfev, njev, nit, stepsize, learning_rate)


Bases: OptimizerState

State of GradientDescent.

A dataclass holding all the information of an optimizer state, plus the learning_rate and the stepsize.
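The field layout can be pictured with a minimal stand-in dataclass. This is an illustrative sketch, not the SDK's definition: `POINT` is redefined locally (in the SDK it is a union of float and ndarray), and `learning_rate` is typed loosely since the real LearningRate class lives in `qiskit.algorithms.optimizers`.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Union

import numpy as np

# In the SDK, POINT is a union of float and ndarray; redefined here so the
# sketch is self-contained.
POINT = Union[float, np.ndarray]


@dataclass
class StateSketch:
    """Illustrative mirror of the documented GradientDescentState fields."""

    x: POINT                                 # current optimization parameters
    fun: Optional[Callable[[POINT], float]]  # function being optimized
    jac: Optional[Callable[[POINT], POINT]]  # Jacobian of that function
    nfev: Optional[int]                      # function evaluations so far
    njev: Optional[int]                      # Jacobian evaluations so far
    nit: Optional[int]                       # optimization steps so far
    stepsize: Optional[float]                # norm of the gradient on the last step
    learning_rate: object                    # LearningRate generator (loosely typed here)


state = StateSketch(
    x=np.array([1.0]),
    fun=lambda p: float(p @ p),   # f(x) = ||x||^2
    jac=lambda p: 2 * p,          # its gradient
    nfev=0,
    njev=0,
    nit=0,
    stepsize=None,
    learning_rate=None,
)
```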


Attributes

stepsize

Type: float | None

Norm of the gradient on the last step.

learning_rate

Type: LearningRate

Learning rate at the current step of the optimization process.

It behaves like a generator: use next(learning_rate) to get the learning rate for the next step, or learning_rate.current to read the current one.
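That dual behavior can be mimicked with a small stand-in class. This is a hedged sketch of the documented interface, not the SDK's LearningRate implementation:

```python
class LearningRateSketch:
    """Illustrative stand-in for LearningRate: an iterator over step sizes
    that also remembers the most recently yielded value in ``current``."""

    def __init__(self, rates):
        self._it = iter(rates)
        self.current = None  # no rate consumed yet

    def __iter__(self):
        return self

    def __next__(self):
        # Advance to the rate for the next step and remember it.
        self.current = next(self._it)
        return self.current


lr = LearningRateSketch([0.1, 0.05, 0.025])
eta = next(lr)  # learning rate for the next step: 0.1
# lr.current now also holds 0.1, without advancing the iterator again
```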

x

Type: POINT

Current optimization parameters.

fun

Type: Callable[[POINT], float] | None

Function being optimized.

jac

Type: Callable[[POINT], POINT] | None

Jacobian of the function being optimized.

nfev

Type: int | None

Number of function evaluations so far in the optimization.

njev

Type: int | None

Number of Jacobian evaluations so far in the optimization.

nit

Type: int | None

Number of optimization steps performed so far in the optimization.
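How these fields evolve together can be sketched with one hand-rolled gradient-descent update. This is an assumption-laden illustration (plain dict, helper function `step`, constant learning rate), not the SDK's actual step logic:

```python
import numpy as np


def step(state, learning_rate):
    """One illustrative gradient-descent update of the documented
    state fields (not the SDK's implementation)."""
    grad = state["jac"](state["x"])          # evaluate the Jacobian once
    eta = next(learning_rate)                # learning rate for this step
    state["x"] = state["x"] - eta * grad     # move against the gradient
    state["njev"] += 1                       # one more Jacobian evaluation
    state["nit"] += 1                        # one more optimization step
    state["stepsize"] = float(np.linalg.norm(grad))  # norm of the last gradient
    return state


# Minimize f(x) = x^2 from x = 4 with a constant learning rate of 0.25.
state = {"x": np.array([4.0]), "jac": lambda x: 2 * x,
         "njev": 0, "nit": 0, "stepsize": None}
state = step(state, iter([0.25]))
# grad = 8, so x moves to 4 - 0.25 * 8 = 2 and stepsize = 8.0
```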
