IBM Quantum Documentation
This page is from an old version of Qiskit SDK and does not exist in the latest version. We recommend you migrate to the latest version. See the release notes for more information.

GradientDescentState

class GradientDescentState(x, fun, jac, nfev, njev, nit, stepsize, learning_rate)


Bases: qiskit.algorithms.optimizers.steppable_optimizer.OptimizerState

State of GradientDescent.

Dataclass holding all the information of an optimizer state, plus the learning_rate and the stepsize.
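The fields this dataclass carries can be pictured with a minimal pure-Python sketch. This is a hypothetical, simplified stand-in for illustration only, not the actual Qiskit class (which inherits x, fun, jac, nfev, njev, and nit from OptimizerState and wraps the learning rate in a dedicated LearningRate object):

```python
from dataclasses import dataclass
from typing import Callable, Optional, Union

import numpy as np


@dataclass
class GradientDescentStateSketch:
    # Hypothetical simplified stand-in; field names mirror the documented attributes.
    x: Union[float, np.ndarray]          # current optimization parameters
    fun: Optional[Callable]              # function being optimized
    jac: Optional[Callable]              # Jacobian of the function
    nfev: Optional[int]                  # function evaluations so far
    njev: Optional[int]                  # Jacobian evaluations so far
    nit: Optional[int]                   # optimization steps so far
    stepsize: Optional[float]            # norm of the gradient on the last step
    learning_rate: float                 # simplified: a plain float here


def step(state: GradientDescentStateSketch) -> None:
    """Perform one gradient-descent update, mutating the state in place."""
    grad = state.jac(state.x)
    state.x = state.x - state.learning_rate * grad
    state.njev += 1
    state.nit += 1
    state.stepsize = float(np.linalg.norm(grad))


state = GradientDescentStateSketch(
    x=np.array([1.0, -2.0]),
    fun=lambda x: float(x @ x),   # f(x) = ||x||^2
    jac=lambda x: 2 * x,          # its gradient
    nfev=0, njev=0, nit=0,
    stepsize=None,
    learning_rate=0.1,
)
step(state)
```

After one step the state reflects the update: the parameters have moved against the gradient, the step counters have advanced, and stepsize records the norm of the last gradient.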


Attributes

stepsize

Type: Optional[float]

Norm of the gradient on the last step.

learning_rate

Type: qiskit.algorithms.optimizers.optimizer_utils.learning_rate.LearningRate

Learning rate at the current step of the optimization process.

It behaves like a generator: use next(learning_rate) to advance to and return the learning rate for the next step. The current learning rate is available as learning_rate.current.
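The generator-plus-current pattern can be sketched in plain Python. This is a hypothetical illustration of the interface described above, not Qiskit's actual LearningRate class:

```python
class LearningRateSketch:
    """Hypothetical illustration: next(...) yields the rate for the next step,
    while .current returns the most recently yielded value without advancing."""

    def __init__(self, rates):
        self._it = iter(rates)
        self._current = None

    def __iter__(self):
        return self

    def __next__(self):
        # Advance the underlying iterator and remember the value.
        self._current = next(self._it)
        return self._current

    @property
    def current(self):
        # Return the last yielded rate without advancing.
        return self._current


lr = LearningRateSketch([0.1, 0.05, 0.01])
first = next(lr)    # advances: 0.1
same = lr.current   # does not advance: still 0.1
second = next(lr)   # advances: 0.05
```

This mirrors the documented usage: call next(learning_rate) once per optimization step, and read learning_rate.current anywhere the rate of the current step is needed.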

x

Type: Union[float, numpy.ndarray]

Current optimization parameters.

fun

Type: Optional[Callable[[Union[float, numpy.ndarray]], float]]

Function being optimized.

jac

Type: Optional[Callable[[Union[float, numpy.ndarray]], Union[float, numpy.ndarray]]]

Jacobian of the function being optimized.

nfev

Type: Optional[int]

Number of function evaluations so far in the optimization.

njev

Type: Optional[int]

Number of Jacobian evaluations so far in the optimization.

nit

Type: Optional[int]

Number of optimization steps performed so far in the optimization.
