qiskit.algorithms.optimizers.ADAM
class ADAM(maxiter=10000, tol=1e-06, lr=0.001, beta_1=0.9, beta_2=0.99, noise_factor=1e-08, eps=1e-10, amsgrad=False, snapshot_dir=None)

Adam and AMSGRAD optimizers.

Adam [1] is a gradient-based optimization algorithm that relies on adaptive estimates of lower-order moments. The algorithm requires little memory and is invariant to diagonal rescaling of the gradients. Furthermore, it is able to cope with non-stationary objective functions and noisy and/or sparse gradients.

AMSGRAD [2] (a variant of Adam) uses a 'long-term memory' of past gradients and thereby improves convergence properties.
References

- [1]: Kingma, Diederik & Ba, Jimmy (2014), Adam: A Method for Stochastic Optimization.
- [2]: Sashank J. Reddi, Satyen Kale and Sanjiv Kumar (2018), On the Convergence of Adam and Beyond. arXiv:1904.09237
Parameters

- maxiter (int) – Maximum number of iterations
- tol (float) – Tolerance for termination
- lr (float) – Value >= 0, learning rate.
- beta_1 (float) – Value in range 0 to 1, generally close to 1.
- beta_2 (float) – Value in range 0 to 1, generally close to 1.
- noise_factor (float) – Value >= 0, noise factor
- eps (float) – Value >= 0, epsilon to be used for finite differences if no analytic gradient method is given.
- amsgrad (bool) – True to use AMSGRAD, False if not
- snapshot_dir (Optional[str]) – If not None, save the optimizer's parameters after every step to the given directory
__init__(maxiter=10000, tol=1e-06, lr=0.001, beta_1=0.9, beta_2=0.99, noise_factor=1e-08, eps=1e-10, amsgrad=False, snapshot_dir=None)

Initialize the optimizer. The parameters are the same as for the class above.
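For orientation, a minimal construction sketch based on the parameters above; the values chosen here (learning rate, iteration cap, snapshot directory) are illustrative rather than recommendations, and the snapshot directory is assumed to already exist.

    from qiskit.algorithms.optimizers import ADAM

    # Illustrative settings: AMSGRAD variant with a custom learning rate and iteration cap.
    adam = ADAM(maxiter=1000, lr=0.01, amsgrad=True)

    # Optionally persist the iteration parameters after every step
    # (path is illustrative; the directory is assumed to exist).
    adam_with_snapshots = ADAM(maxiter=1000, lr=0.01, snapshot_dir="/tmp/adam_snapshots")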
Methods

__init__([maxiter, tol, lr, beta_1, beta_2, …]) – Initialize the optimizer.
get_support_level() – Return support level dictionary
gradient_num_diff(x_center, f, epsilon[, …]) – Compute the gradient with numeric differentiation, in parallel, around the point x_center.
load_params(load_dir) – Load iteration parameters from a file called adam_params.csv.
minimize(objective_function, initial_point, …) – Run the minimization.
optimize(num_vars, objective_function[, …]) – Perform optimization.
print_options() – Print algorithm-specific options.
save_params(snapshot_dir) – Save the current iteration parameters to a file called adam_params.csv.
set_max_evals_grouped(limit) – Set max evals grouped
set_options(**kwargs) – Sets or updates values in the options dictionary.
wrap_function(function, args) – Wrap the function to implicitly inject the args at the call of the function.
Attributes

bounds_support_level – Returns bounds support level
gradient_support_level – Returns gradient support level
initial_point_support_level – Returns initial point support level
is_bounds_ignored – Returns is bounds ignored
is_bounds_required – Returns is bounds required
is_bounds_supported – Returns is bounds supported
is_gradient_ignored – Returns is gradient ignored
is_gradient_required – Returns is gradient required
is_gradient_supported – Returns is gradient supported
is_initial_point_ignored – Returns is initial point ignored
is_initial_point_required – Returns is initial point required
is_initial_point_supported – Returns is initial point supported
setting – Return setting
property bounds_support_level
Returns bounds support level
static gradient_num_diff(x_center, f, epsilon, max_evals_grouped=1)

Compute the gradient with numeric differentiation, in parallel, around the point x_center.

Parameters
- x_center (ndarray) – point around which we compute the gradient
- f (func) – the function of which the gradient is to be computed.
- epsilon (float) – the epsilon used in the numeric differentiation.
- max_evals_grouped (int) – max evals grouped

Returns
the gradient computed

Return type
grad
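A small sketch of this static helper applied to a plain NumPy function; the quadratic objective and the finite-difference step are illustrative.

    import numpy as np
    from qiskit.algorithms.optimizers import ADAM

    def f(x):
        # Illustrative objective: f(x) = sum(x_i^2), so the exact gradient is 2 * x.
        return np.sum(x ** 2)

    x0 = np.array([1.0, -0.5])
    grad = ADAM.gradient_num_diff(x0, f, epsilon=1e-6)
    print(grad)  # numerically close to 2 * x0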
property gradient_support_level
Returns gradient support level

property initial_point_support_level
Returns initial point support level

property is_bounds_ignored
Returns is bounds ignored

property is_bounds_required
Returns is bounds required

property is_bounds_supported
Returns is bounds supported

property is_gradient_ignored
Returns is gradient ignored

property is_gradient_required
Returns is gradient required

property is_gradient_supported
Returns is gradient supported

property is_initial_point_ignored
Returns is initial point ignored

property is_initial_point_required
Returns is initial point required

property is_initial_point_supported
Returns is initial point supported
load_params(load_dir)

Load iteration parameters from a file called adam_params.csv.

Parameters
load_dir (str) – The directory containing adam_params.csv.

Return type
None
minimize(objective_function, initial_point, gradient_function)

Run the minimization.

Parameters
- objective_function (Callable[[ndarray], float]) – A function handle to the objective function.
- initial_point (ndarray) – The initial iteration point.
- gradient_function (Callable[[ndarray], float]) – A function handle to the gradient of the objective function.

Return type
Tuple[ndarray, float, int]

Returns
A tuple of (optimal parameters, optimal value, number of iterations).
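A hedged sketch of calling minimize directly with an analytic gradient; the objective, gradient, and hyperparameters below are illustrative.

    import numpy as np
    from qiskit.algorithms.optimizers import ADAM

    def objective(x):
        return float(np.sum(x ** 2))

    def gradient(x):
        # Analytic gradient of the illustrative objective above.
        return 2 * x

    adam = ADAM(maxiter=100, lr=0.05)
    params, value, niter = adam.minimize(objective, np.array([1.0, -1.0]), gradient)
    print(params, value, niter)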
optimize(num_vars, objective_function, gradient_function=None, variable_bounds=None, initial_point=None)

Perform optimization.

Parameters
- num_vars (int) – Number of parameters to be optimized.
- objective_function (Callable[[ndarray], float]) – Handle to a function that computes the objective function.
- gradient_function (Optional[Callable[[ndarray], float]]) – Handle to a function that computes the gradient of the objective function.
- variable_bounds (Optional[List[Tuple[float, float]]]) – deprecated
- initial_point (Optional[ndarray]) – The initial point for the optimization.

Return type
Tuple[ndarray, float, int]

Returns
A tuple (point, value, nfev) where

- point: is a 1D numpy.ndarray[float] containing the solution
- value: is a float with the objective function value
- nfev: is the number of objective function calls
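A minimal end-to-end sketch of optimize on a plain NumPy objective; since no gradient_function is supplied, the gradient is approximated by finite differences using eps. The objective and starting point are illustrative.

    import numpy as np
    from qiskit.algorithms.optimizers import ADAM

    def objective(x):
        # Illustrative convex objective with its minimum at (2, 0).
        return float((x[0] - 2.0) ** 2 + x[1] ** 2)

    adam = ADAM(maxiter=200, lr=0.1)
    point, value, nfev = adam.optimize(
        num_vars=2,
        objective_function=objective,
        initial_point=np.array([0.0, 0.0]),
    )
    print(point, value, nfev)  # point should approach [2, 0]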
print_options()

Print algorithm-specific options.
save_params(snapshot_dir)

Save the current iteration parameters to a file called adam_params.csv.

Note: The current parameters are appended to the file if it exists already. The file is not overwritten.

Parameters
snapshot_dir (str) – The directory to store the file in.

Return type
None
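A hedged sketch of persisting and restoring iteration parameters via adam_params.csv; the objective and the directory path are illustrative, and the directory is assumed to already exist.

    import numpy as np
    from qiskit.algorithms.optimizers import ADAM

    def objective(x):
        return float(np.sum(x ** 2))

    adam = ADAM(maxiter=100)
    adam.optimize(num_vars=2, objective_function=objective, initial_point=np.array([0.5, -0.5]))
    adam.save_params("/tmp/adam_state")      # appends the current iteration parameters

    restored = ADAM(maxiter=100)
    restored.load_params("/tmp/adam_state")  # reads them back from adam_params.csv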
set_max_evals_grouped(limit)

Set max evals grouped
set_options(**kwargs)

Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

Parameters
kwargs (dict) – options, given as name=value.
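For illustration, a tiny sketch of recording an extra key/value pair; as described above, set_options only stores the pair in the optimizer's options dictionary, and the key used here is illustrative.

    from qiskit.algorithms.optimizers import ADAM

    adam = ADAM()
    # Records tol=1e-08 in the options dictionary (illustrative key/value).
    adam.set_options(tol=1e-08)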
property setting
Return setting
static wrap_function(function, args)

Wrap the function to implicitly inject the args at the call of the function.

Parameters
- function (func) – the target function
- args (tuple) – the args to be injected

Returns
wrapper

Return type
function_wrapper
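A small sketch of this static helper, assuming the injected args are appended after the call-time arguments (as the "implicitly inject" description suggests); the distance function is illustrative.

    from qiskit.algorithms.optimizers import ADAM

    def distance(x, offset):
        return abs(x - offset)

    # The tuple (3.0,) is injected when the wrapper is called.
    wrapped = ADAM.wrap_function(distance, (3.0,))
    print(wrapped(1.0))  # 2.0, i.e. distance(1.0, 3.0)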