
UMDA

class qiskit.algorithms.optimizers.UMDA(maxiter=100, size_gen=20, alpha=0.5, callback=None)

Bases: Optimizer

Continuous Univariate Marginal Distribution Algorithm (UMDA).

UMDA [1] is a specific type of Estimation of Distribution Algorithm (EDA) in which new individuals are sampled from univariate normal distributions, and the distributions are updated in each iteration of the algorithm from the best individuals found in the previous iteration.

See also

This original implementation of the UMDA optimizer for Qiskit was inspired by the author's (Vicente P. Soloviev) work on the EDAspy Python package [2].

EDAs are stochastic search algorithms that belong to the family of evolutionary algorithms. The main difference is that EDAs maintain a probabilistic model which is updated in each iteration from the best individuals of the previous generation (elite selection). EDAs can be classified by the complexity of this probabilistic model; UMDA is a univariate EDA because its embedded probabilistic model is univariate.
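
To make this concrete, here is a minimal numpy sketch of a univariate EDA loop in the spirit of UMDA. It is an illustration only, not Qiskit's implementation; the lower bound on the standard deviations mirrors the role of the STD_BOUND attribute listed below.

import numpy as np

def umda_sketch(cost, dim, size_gen=20, alpha=0.5, maxiter=100, seed=None):
    """Toy univariate EDA loop (illustrative only, not Qiskit's UMDA)."""
    rng = np.random.default_rng(seed)
    means, stds = np.zeros(dim), np.ones(dim)  # one normal distribution per variable
    n_elite = max(1, int(alpha * size_gen))
    best_x, best_f = None, float("inf")
    for _ in range(maxiter):
        gen = rng.normal(means, stds, size=(size_gen, dim))  # sample a generation
        costs = np.array([cost(x) for x in gen])
        elite = gen[np.argsort(costs)[:n_elite]]             # elite selection
        means = elite.mean(axis=0)                           # refit the univariate model
        stds = np.maximum(elite.std(axis=0), 0.3)            # keep stds bounded below
        if costs.min() < best_f:
            best_f, best_x = float(costs.min()), gen[np.argmin(costs)]
    return best_x, best_f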

UMDA has been compared to some of the algorithms already implemented in the Qiskit library for optimizing the parameters of variational algorithms such as QAOA and VQE, and competitive results were obtained [1]. UMDA appears to provide very good solutions for circuits in which the number of layers is small.

The optimization process can be customized via the parameters chosen at initialization. The main parameter is the population size: the larger it is, the better the final result tends to be, but the complexity of the algorithm and its runtime grow accordingly. In [1], different experiments were performed with the population size set to 20 to 30.

Note

The UMDA implementation has additional parameters, but these have sensible default values. For example, the alpha parameter defaults to 0.5 and is the percentage of the population that is selected in each iteration to update the probabilistic model.

Example

This short example runs UMDA to optimize the parameters of a variational algorithm. Here we will use the same operator as used in the algorithms introduction, which was originally computed by Qiskit Nature for an H2 molecule. The minimum energy of the H2 Hamiltonian can be found quite easily, so we are able to set maxiter to a small value.

from qiskit.opflow import X, Z, I
from qiskit import Aer
from qiskit.algorithms.optimizers import UMDA
from qiskit.algorithms import QAOA
from qiskit.utils import QuantumInstance
 
 
H2_op = (-1.052373245772859 * I ^ I) + \
        (0.39793742484318045 * I ^ Z) + \
        (-0.39793742484318045 * Z ^ I) + \
        (-0.01128010425623538 * Z ^ Z) + \
        (0.18093119978423156 * X ^ X)
 
p = 2  # Toy example: 2 layers with 2 parameters in each layer: 4 variables
 
opt = UMDA(maxiter=100, size_gen=20)
 
backend = Aer.get_backend('statevector_simulator')
qaoa = QAOA(opt,
            quantum_instance=QuantumInstance(backend=backend),
            reps=p)

result = qaoa.compute_minimum_eigenvalue(operator=H2_op)
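
The estimated ground-state energy can then be read from the result object, for example via print(result.eigenvalue.real).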

To modify the percentage of individuals considered when updating the probabilistic model, the alpha parameter can be set explicitly. For example, here we select 60% of the population instead of the default 50%:

opt = UMDA(maxiter=100, size_gen=20, alpha=0.6)

backend = Aer.get_backend('statevector_simulator')
qaoa = QAOA(opt,
            quantum_instance=QuantumInstance(backend=backend),
            reps=p)

result = qaoa.compute_minimum_eigenvalue(operator=H2_op)

References

[1]: Vicente P. Soloviev, Pedro Larrañaga and Concha Bielza (2022, July). Quantum Parametric Circuit Optimization with Estimation of Distribution Algorithms. In Genetic and Evolutionary Computation Conference (GECCO 2022). DOI: https://doi.org/10.1145/3520304.3533963

[2]: Vicente P. Soloviev. Python package EDAspy. https://github.com/VicentePerezSoloviev/EDAspy

Parameters

  • maxiter (int) – Maximum number of iterations.
  • size_gen (int) – Population size of each generation.
  • alpha (float) – Percentage of the population that is selected in each iteration to update the probabilistic model.
  • callback (Callable | None) – A callback function passed information in each iteration step: the number of function evaluations, the parameters, and the best function value in this iteration.
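
For example, a minimal sketch of a callback that records progress during the optimization:

from qiskit.algorithms.optimizers import UMDA

history = []

def log_progress(nfev, parameters, value):
    # Store the evaluation count and the best value of this iteration.
    history.append((nfev, value))

opt = UMDA(maxiter=100, size_gen=20, callback=log_progress)
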
Attributes

ELITE_FACTOR

Default value: 0.4

STD_BOUND

Default value: 0.3

alpha

Returns the alpha parameter value (percentage of population selected to update probabilistic model)

bounds_support_level

Returns bounds support level

gradient_support_level

Returns gradient support level

initial_point_support_level

Returns initial point support level

is_bounds_ignored

Returns is bounds ignored

is_bounds_required

Returns is bounds required

is_bounds_supported

Returns is bounds supported

is_gradient_ignored

Returns is gradient ignored

is_gradient_required

Returns is gradient required

is_gradient_supported

Returns is gradient supported

is_initial_point_ignored

Returns is initial point ignored

is_initial_point_required

Returns is initial point required

is_initial_point_supported

Returns is initial point supported

maxiter

Returns the maximum number of iterations

setting

Return setting

settings

The optimizer settings in a dictionary format.

size_gen

Returns the size of the generations (number of individuals per generation)


Methods

get_support_level

get_support_level()

Get the support level dictionary.
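
For example, the returned dictionary can be inspected directly (a small sketch; the key names shown are those used by the Optimizer base class):

from qiskit.algorithms.optimizers import UMDA

opt = UMDA(maxiter=100, size_gen=20)
levels = opt.get_support_level()
print(levels)  # e.g. {'gradient': ..., 'bounds': ..., 'initial_point': ...}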

gradient_num_diff

static gradient_num_diff(x_center, f, epsilon, max_evals_grouped=None)

We compute the gradient with numeric differentiation, in parallel, around the point x_center.

Parameters

  • x_center (ndarray) – point around which we compute the gradient
  • f (func) – the function of which the gradient is to be computed.
  • epsilon (float) – the epsilon used in the numeric differentiation.
  • max_evals_grouped (int) – max evals grouped, defaults to 1 (i.e. no batching).

Returns

the gradient computed

Return type

grad
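
A quick sketch of this helper on a simple quadratic, where the forward-difference approximation should be close to the analytic gradient:

import numpy as np
from qiskit.algorithms.optimizers import UMDA

def f(x):
    # f(x) = x1^2 + x2^2, with analytic gradient 2*x.
    return float(np.sum(x ** 2))

grad = UMDA.gradient_num_diff(np.array([1.0, -2.0]), f, epsilon=1e-6)
print(grad)  # approximately [2.0, -4.0]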

minimize

minimize(fun, x0, jac=None, bounds=None)

Minimize the scalar function.

Parameters

  • fun (Callable) – the scalar function to minimize.
  • x0 (ndarray) – the initial point for the minimization.
  • jac (Callable | None) – the gradient of the scalar function fun.
  • bounds (list | None) – bounds for the variables of fun.

Returns

The result of the optimization, containing e.g. the result as attribute x.

Return type

OptimizerResult
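
Beyond variational workflows, the optimizer can also be applied directly to a plain scalar function. A minimal sketch, assuming x0 is mainly used to fix the problem dimension since UMDA samples its own population:

import numpy as np
from qiskit.algorithms.optimizers import UMDA

def sphere(x):
    # Convex test function with its minimum (0) at the origin.
    return float(np.sum(np.asarray(x) ** 2))

opt = UMDA(maxiter=100, size_gen=20)
result = opt.minimize(fun=sphere, x0=np.zeros(4))
print(result.x, result.fun)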

print_options

print_options()

Print algorithm-specific options.

set_max_evals_grouped

set_max_evals_grouped(limit)

Set the maximum number of function evaluations that can be grouped into a single batch.

set_options

set_options(**kwargs)

Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

Parameters

kwargs (dict) – options, given as name=value.

wrap_function

static wrap_function(function, args)

Wrap the function to implicitly inject the args at the call of the function.

Parameters

  • function (func) – the target function to wrap.
  • args (tuple) – the arguments to be injected into the function at call time.

Returns

wrapper

Return type

function_wrapper
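
A short sketch of what the wrapper does, assuming the injected args are appended after the wrapper's own arguments:

from qiskit.algorithms.optimizers import UMDA

def scaled_norm(x, scale):
    return scale * sum(xi ** 2 for xi in x)

# Inject scale=2.0 so the wrapped function only takes x.
g = UMDA.wrap_function(scaled_norm, (2.0,))
print(g([1.0, 2.0]))  # 10.0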
