# GSLS

Bases: Optimizer

Gaussian-smoothed Line Search.

An implementation of the line search algorithm described in https://arxiv.org/pdf/1905.01332.pdf, using gradient approximation based on Gaussian-smoothed samples on a sphere.
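The underlying gradient-estimation idea can be sketched in plain NumPy: sample random unit directions, take a directional finite difference along each, and average. This is an illustration of the technique only, not the Qiskit implementation; the function name and signature below are hypothetical.

```python
import numpy as np

def smoothed_gradient(f, x, num_samples=100, radius=1e-2, rng=None):
    """Estimate the gradient of f at x from samples on a sphere.

    Illustrative sketch only, not the Qiskit implementation. The
    estimator averages directional finite differences over random unit
    directions, which approximates the gradient of a Gaussian-smoothed
    version of f.
    """
    rng = np.random.default_rng(rng)
    n = x.size
    # Normalized Gaussian vectors are uniform directions on the sphere.
    directions = rng.normal(size=(num_samples, n))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    f_x = f(x)
    # One directional finite difference per sampled direction.
    diffs = np.array([(f(x + radius * u) - f_x) / radius for u in directions])
    # Scaling by n makes the estimator consistent for linear f.
    return n * np.mean(diffs[:, None] * directions, axis=0)

# On a quadratic, the true gradient at x is 2*x.
x = np.array([1.0, -2.0])
grad = smoothed_gradient(lambda v: np.sum(v**2), x, num_samples=5000, rng=0)
```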

Note

This component contains functions that are normally random. If you want to reproduce behavior, set the random number generator seed in algorithm_globals (`qiskit.utils.algorithm_globals.random_seed = seed`).

Parameters

## Attributes

### bounds_support_level

Returns bounds support level

### initial_point_support_level

Returns initial point support level

### is_bounds_ignored

Returns is bounds ignored

### is_bounds_required

Returns is bounds required

### is_bounds_supported

Returns is bounds supported

### is_initial_point_ignored

Returns is initial point ignored

### is_initial_point_required

Returns is initial point required

### is_initial_point_supported

Returns is initial point supported

### settings

Return setting

## Methods

### get_support_level

get_support_level()

Return support level dictionary.

Returns

A dictionary containing the support levels for different options.

Return type

dict

### gradient_approximation

gradient_approximation(n, x, x_value, directions, sample_set_x, sample_set_y)

Construct gradient approximation from given sample.

Parameters

Returns

Gradient approximation at x, as a 1D array.

Return type

ndarray

### gradient_num_diff

gradient_num_diff(x_center, f, epsilon, max_evals_grouped=1)

Compute the gradient of f around the point x_center using parallel numeric differentiation.

Parameters

• x_center (ndarray) – point around which we compute the gradient.
• f (func) – the function of which the gradient is to be computed.
• epsilon (float) – the epsilon used in the numeric differentiation.
• max_evals_grouped (int) – max evals grouped, defaults to 1 (i.e. no batching).

Returns

The gradient computed, as a 1D array.

Return type

ndarray
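The batched finite-difference scheme can be sketched as follows. This is a simplified stand-in assuming a forward-difference rule and a scalar objective, not the actual gradient_num_diff code; all shifted points are built first and then evaluated in chunks of max_evals_grouped, so a vectorized objective could process each chunk at once.

```python
import numpy as np

def numeric_gradient(f, x_center, epsilon=1e-6, max_evals_grouped=1):
    """Forward-difference gradient with batched evaluation points.

    Illustrative sketch of the batching idea, not the Qiskit method.
    """
    n = x_center.size
    # One forward-shifted point per coordinate, plus the center itself.
    points = [x_center] + [x_center + epsilon * np.eye(n)[i] for i in range(n)]
    values = []
    # Evaluate in groups of max_evals_grouped; a batch-aware objective
    # could evaluate each chunk in a single call.
    for start in range(0, len(points), max_evals_grouped):
        chunk = points[start:start + max_evals_grouped]
        values.extend(f(p) for p in chunk)
    f0, shifted = values[0], np.array(values[1:])
    return (shifted - f0) / epsilon
```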

### ls_optimize

ls_optimize(n, obj_fun, initial_point, var_lb, var_ub)

Run the line search optimization.

Parameters

Returns

Final iterate as a vector, corresponding objective function value, number of evaluations, and norm of the gradient estimate.

Raises

ValueError – If the number of dimensions does not match the size of the initial point or the length of the lower or upper bound.

Return type

Tuple[ndarray, float, int, float]
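A toy version of such a line search loop, written in plain NumPy purely for illustration: each iteration estimates a gradient from sphere samples, tries a step along the negative estimate, and shrinks the step size when the objective does not decrease. The names, step-size rule, and stopping behavior are simplified stand-ins, not the actual GSLS logic.

```python
import numpy as np

def toy_line_search(f, x0, maxiter=200, num_samples=20, radius=1e-2,
                    step=0.5, rng=None):
    """Toy sphere-sampling descent loop (illustration only)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(maxiter):
        # Estimate the gradient from samples on a sphere around x.
        u = rng.normal(size=(num_samples, n))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        diffs = np.array([(f(x + radius * d) - fx) / radius for d in u])
        grad = n * np.mean(diffs[:, None] * u, axis=0)
        # Trial step along the negative gradient estimate.
        x_new = x - step * grad
        f_new = f(x_new)
        if f_new < fx:      # accept the step
            x, fx = x_new, f_new
        else:               # reject and shrink the step size
            step *= 0.5
    return x, fx

# Minimize a shifted quadratic with minimum at (1, 1).
x_final, f_final = toy_line_search(lambda v: np.sum((v - 1.0)**2),
                                   [3.0, -2.0], rng=0)
```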

### minimize

minimize(fun, x0, jac=None, bounds=None)

Minimize the scalar function.

Parameters

Returns

The result of the optimization, containing e.g. the result as attribute x.

Return type

OptimizerResult

### print_options

print_options()

Print algorithm-specific options.

### sample_points

sample_points(n, x, num_points)

Sample num_points points around x on the n-sphere of specified radius.

Parameters

Returns

A tuple containing the sampling points and the directions.

Return type

Tuple[ndarray, ndarray]
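The standard recipe for uniform sampling on a sphere is to normalize Gaussian vectors, then scale by the radius and shift by x. The sketch below returns a (points, directions) pair mirroring the docstring; it is a hypothetical helper, not the actual sample_points implementation.

```python
import numpy as np

def sample_on_sphere(x, num_points, radius, rng=None):
    """Sample points around x on a sphere of the given radius.

    Illustrative sketch: normalized Gaussian vectors are uniformly
    distributed on the unit sphere.
    """
    rng = np.random.default_rng(rng)
    directions = rng.normal(size=(num_points, x.size))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    points = x + radius * directions
    return points, directions
```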

### sample_set

sample_set(n, x, var_lb, var_ub, num_points)

Construct sample set of given size.

Parameters

Returns

Matrices of (unit-norm) sample directions and sample points, one per row. Both matrices are 2D arrays of floats.

Raises

RuntimeError – If not enough samples could be generated within the bounds.

Return type

Tuple[ndarray, ndarray]

### set_max_evals_grouped

set_max_evals_grouped(limit)

Set the maximum number of function evaluations that may be grouped into a single batch.

### set_options

set_options(**kwargs)

Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

Parameters

kwargs (dict) – options, given as name=value.
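The options-dictionary pattern described above can be sketched with a minimal stand-in class; this is a hypothetical illustration, not the Qiskit base class.

```python
class OptionsMixin:
    """Minimal sketch of the options-dictionary pattern.

    Options set at construction time can later be overridden or
    extended by name via set_options.
    """

    def __init__(self, **options):
        # Initial key/value pairs supplied when the object is built.
        self._options = dict(options)

    def set_options(self, **kwargs):
        # Merge new name=value pairs into the existing options.
        self._options.update(kwargs)


opt = OptionsMixin(maxiter=100)
opt.set_options(maxiter=200, tol=1e-6)
```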

### wrap_function

static wrap_function(function, args)

Wrap the function to implicitly inject the args at the call of the function.

Parameters

Returns

wrapper

Return type

function_wrapper
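The documented contract can be sketched with a simple closure that appends the fixed args at call time; this mirrors the described behavior but is not necessarily the exact Qiskit code.

```python
def wrap_function(function, args):
    """Bind extra args into a closure (illustrative sketch).

    Calling the returned wrapper with the variable arguments
    implicitly appends the fixed args.
    """
    def function_wrapper(*wrapper_args):
        return function(*(wrapper_args + tuple(args)))
    return function_wrapper


def add3(x, a, b):
    return x + a + b

g = wrap_function(add3, (2, 3))
result = g(1)  # equivalent to add3(1, 2, 3)
```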