GradientDescentState
class GradientDescentState(x, fun, jac, nfev, njev, nit, stepsize, learning_rate)
Bases: qiskit.algorithms.optimizers.steppable_optimizer.OptimizerState
State of GradientDescent.
Dataclass with all the information of an optimizer plus the learning_rate and the stepsize.
Attributes
stepsize
Type: Optional[float]
Norm of the gradient on the last step.
learning_rate
Type: qiskit.algorithms.optimizers.optimizer_utils.learning_rate.LearningRate
Learning rate at the current step of the optimization process.
It behaves like a generator (use next(learning_rate) to get the learning rate for the next step), but the current learning rate can also be retrieved with learning_rate.current.
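A minimal sketch of the generator-like behavior described above, using a simplified stand-in class rather than the actual qiskit LearningRate (the stand-in's constructor and internals are assumptions for illustration only):

```python
import itertools


class LearningRate:
    """Simplified stand-in (NOT the qiskit class): an iterator over
    learning rates that also remembers the last value via ``current``."""

    def __init__(self, learning_rate):
        # Accept either a constant rate or an iterable of rates.
        if isinstance(learning_rate, (int, float)):
            self._gen = itertools.repeat(float(learning_rate))
        else:
            self._gen = iter(learning_rate)
        self._current = None

    def __iter__(self):
        return self

    def __next__(self):
        # Advance to the next learning rate and remember it.
        self._current = next(self._gen)
        return self._current

    @property
    def current(self):
        """Learning rate most recently produced by ``next``."""
        return self._current


# Exponentially decaying schedule: 0.1, 0.09, 0.081, ...
eta = LearningRate(learning_rate=(0.1 * 0.9**k for k in range(100)))
step0 = next(eta)   # rate for the first step
step1 = next(eta)   # rate for the second step
```

After each `next(eta)`, `eta.current` holds the rate just produced, so later code can reuse the same rate without advancing the schedule.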
x
Type: Union[float, numpy.ndarray]
Current optimization parameters.
fun
Type: Optional[Callable[[Union[float, numpy.ndarray]], float]]
Function being optimized.
jac
Type: Optional[Callable[[Union[float, numpy.ndarray]], Union[float, numpy.ndarray]]]
Jacobian of the function being optimized.
nfev
Type: Optional[int]
Number of function evaluations so far in the optimization.
njev
Type: Optional[int]
Number of jacobian evaluations so far in the optimization.
nit
Type: Optional[int]
Number of optimization steps performed so far in the optimization.
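To illustrate how these fields fit together, here is a hedged sketch of one gradient-descent update driven by such a state. The dataclass below is a simplified local stand-in (it uses a plain float learning rate instead of the LearningRate object, and is not the qiskit import):

```python
from dataclasses import dataclass
from typing import Callable, Optional, Union


@dataclass
class GradientDescentState:
    """Simplified sketch of the state dataclass (not the qiskit class)."""
    x: Union[float, list]                     # current parameters
    fun: Optional[Callable]                   # objective function
    jac: Optional[Callable]                   # gradient of the objective
    nfev: Optional[int]                       # function evaluations so far
    njev: Optional[int]                       # jacobian evaluations so far
    nit: Optional[int]                        # steps performed so far
    stepsize: Optional[float]                 # norm of the last gradient
    learning_rate: float                      # qiskit stores a LearningRate here


# Minimize f(x) = x^2, whose gradient is 2x.
state = GradientDescentState(
    x=1.0,
    fun=lambda x: x**2,
    jac=lambda x: 2 * x,
    nfev=0, njev=0, nit=0,
    stepsize=None,
    learning_rate=0.1,
)

# One gradient-descent step, updating the bookkeeping fields as it goes.
grad = state.jac(state.x)
state.x = state.x - state.learning_rate * grad
state.njev += 1
state.nit += 1
state.stepsize = abs(grad)   # norm of the gradient on this step
```

After the step, `x` has moved from 1.0 toward the minimum (to 0.8), `nit` and `njev` each record one evaluation, and `stepsize` holds the gradient norm 2.0.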