AQGD#

class qiskit.algorithms.optimizers.AQGD(maxiter=1000, eta=1.0, tol=1e-06, momentum=0.25, param_tol=1e-06, averaging=10)[source]#

Bases: Optimizer

Analytic Quantum Gradient Descent (AQGD) with Epochs optimizer. Performs gradient descent optimization with a momentum term, analytic gradients, and a customized step-length schedule for parameterized quantum gates, i.e. Pauli rotations. See, for example:

  • K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. (2018). Quantum circuit learning. Phys. Rev. A 98, 032309. https://arxiv.org/abs/1803.00745

  • Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, Nathan Killoran. (2019). Evaluating analytic gradients on quantum hardware. Phys. Rev. A 99, 032331. https://arxiv.org/abs/1811.11184

for further details on analytic gradients of parameterized quantum gates.

Gradients are computed "analytically" using the quantum circuit when evaluating the objective function.
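For a parameterized Pauli rotation, the analytic gradient in question is the parameter-shift rule discussed in the references above; as a sketch, for an objective f that depends on a single rotation angle θ,

∂f/∂θ = (f(θ + π/2) − f(θ − π/2)) / 2.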


Parameters:
  • maxiter (int | list[int]) – Maximum number of iterations (full gradient steps)

  • eta (float | list[float]) – The coefficient of the gradient update. Increasing this value results in larger step sizes: param = previous_param - eta * deriv

  • tol (float) – Tolerance for change in windowed average of objective values. Convergence occurs when either objective tolerance is met OR parameter tolerance is met.

  • momentum (float | list[float]) – Bias towards the previous gradient momentum in the current update. Must be within the range [0, 1)

  • param_tol (float) – Tolerance for change in norm of parameters.

  • averaging (int) – Length of window over which to average objective values for objective convergence criterion

Raises:

AlgorithmError – If the lengths of maxiter, momentum, and eta are not the same (see the example below).
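When maxiter, eta, and momentum are given as equal-length lists, each entry appears to define one epoch of the descent schedule. A minimal construction sketch, assuming only the parameters documented above:

```python
from qiskit.algorithms.optimizers import AQGD

# Single-epoch form: scalar hyperparameters.
optimizer = AQGD(maxiter=1000, eta=1.0, momentum=0.25)

# Multi-epoch form: equal-length lists, one entry per epoch.
# Mismatched lengths raise AlgorithmError.
scheduled = AQGD(
    maxiter=[1000, 500, 250],
    eta=[1.0, 0.5, 0.1],
    momentum=[0.0, 0.25, 0.5],
)
```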

Attributes

bounds_support_level#

Returns bounds support level

gradient_support_level#

Returns gradient support level

initial_point_support_level#

Returns initial point support level

is_bounds_ignored#

Returns is bounds ignored

is_bounds_required#

Returns is bounds required

is_bounds_supported#

Returns is bounds supported

is_gradient_ignored#

Returns is gradient ignored

is_gradient_required#

Returns is gradient required

is_gradient_supported#

Returns is gradient supported

is_initial_point_ignored#

Returns is initial point ignored

is_initial_point_required#

Returns is initial point required

is_initial_point_supported#

Returns is initial point supported

setting#

Return setting

settings#

Return settings

Methods

get_support_level()[source]#

Support level dictionary

Returns:

gradient, bounds, and initial point support information that is ignored/required.

Return type:

Dict[str, int]

static gradient_num_diff(x_center, f, epsilon, max_evals_grouped=None)#

Compute the gradient of f by numeric differentiation around the point x_center, evaluating points in parallel where possible.

Parameters:
  • x_center (ndarray) – point around which we compute the gradient

  • f (func) – the function of which the gradient is to be computed.

  • epsilon (float) – the epsilon used in the numeric differentiation.

  • max_evals_grouped (int) – max evals grouped, defaults to 1 (i.e. no batching).

Returns:

the gradient computed

Return type:

grad
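A small sketch of this static helper on a plain NumPy objective; the quadratic f and the sample point are illustrative, not from the source:

```python
import numpy as np
from qiskit.algorithms.optimizers import AQGD

def f(x):
    # Illustrative objective: f(x) = x0^2 + x1^2.
    return float(np.sum(x ** 2))

# Numeric gradient around [1, -2] with step epsilon.
grad = AQGD.gradient_num_diff(np.array([1.0, -2.0]), f, epsilon=1e-6)
print(grad)  # approximately [ 2., -4.]
```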

minimize(fun, x0, jac=None, bounds=None)[source]#

Minimize the scalar function.

Parameters:
  • fun (Callable[[POINT], float]) – The scalar function to minimize.

  • x0 (POINT) – The initial point for the minimization.

  • jac (Callable[[POINT], POINT] | None) – The gradient of the scalar function fun.

  • bounds (list[tuple[float, float]] | None) – Bounds for the variables of fun. This argument might be ignored if the optimizer does not support bounds.

Returns:

The result of the optimization, containing e.g. the result as attribute x.

Return type:

OptimizerResult
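A minimal end-to-end sketch. The classical stand-in objective below is illustrative: each cosine term behaves like the expectation value of a single Pauli rotation, so the analytic parameter-shift gradient applies to it exactly:

```python
import numpy as np
from qiskit.algorithms.optimizers import AQGD

def objective(params):
    # Illustrative stand-in for a circuit expectation value.
    return float(np.sum(np.cos(params)))

optimizer = AQGD(maxiter=50)
result = optimizer.minimize(fun=objective, x0=np.array([0.8, -0.4]))
print(result.x)    # components approach pi and -pi, a minimum
print(result.fun)  # approaches -2.0
```

Since AQGD computes gradients analytically from the objective itself, supplying jac is unnecessary.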

print_options()#

Print algorithm-specific options.

set_max_evals_grouped(limit)#

Set max evals grouped

set_options(**kwargs)#

Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

Parameters:

kwargs (dict) – options, given as name=value.
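A short sketch; the key save_steps is illustrative, and a stored key only takes effect if the underlying optimizer actually consults it:

```python
from qiskit.algorithms.optimizers import AQGD

optimizer = AQGD()
# Store a free-form name=value pair in the options dictionary.
# The key below is illustrative; unrecognized keys are simply kept.
optimizer.set_options(save_steps=10)
```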

static wrap_function(function, args)#

Wrap the function to implicitly inject the args at the call of the function.

Parameters:
  • function (func) – the target function

  • args (tuple) – the args to be injected

Returns:

wrapper

Return type:

function_wrapper
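A minimal sketch; the shifted_norm function and the bound offset are illustrative:

```python
import numpy as np
from qiskit.algorithms.optimizers import AQGD

def shifted_norm(point, offset):
    # Illustrative target taking one extra argument.
    return float(np.sum((point - offset) ** 2))

# Bind the offset so the wrapper takes only the variable point.
wrapped = AQGD.wrap_function(shifted_norm, (np.array([1.0, 2.0]),))
print(wrapped(np.zeros(2)))  # 5.0
```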