P_BFGS

class P_BFGS(maxfun=1000, factr=10, iprint=-1, max_processes=None)

Parallelized Limited-memory BFGS optimizer.

P-BFGS is a parallelized version of L_BFGS_B and shares the same parameters. It can be useful when the target hardware is a quantum simulator running on a classical machine, since multiple processes can then run simulations concurrently and potentially reach a minimum faster. The parallelization may also help the optimizer avoid getting stuck in local optima.

Uses scipy.optimize.fmin_l_bfgs_b. For further details, refer to https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fmin_l_bfgs_b.html
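A minimal usage sketch, assuming the legacy Optimizer interface (the optimize method listed under Methods below) and the qiskit.aqua.components.optimizers import path; the keyword names and the (point, value, nfev) return triple are assumptions that may differ between releases:

    import numpy as np

    # The import path below is one known location of this class; it may differ
    # between Qiskit releases (an assumption of this sketch).
    from qiskit.aqua.components.optimizers import P_BFGS

    # Simple convex objective: ||x - TARGET||^2, minimized at TARGET.
    TARGET = np.array([1.0, -2.0, 0.5])

    def objective(x):
        return float(np.sum((np.asarray(x) - TARGET) ** 2))

    if __name__ == "__main__":  # guard recommended, since P_BFGS spawns worker processes
        optimizer = P_BFGS(maxfun=1000, factr=10, max_processes=2)

        # Keyword names and return triple are assumptions (see note above).
        point, value, nfev = optimizer.optimize(
            num_vars=3,
            objective_function=objective,
            initial_point=np.zeros(3),
        )
        print(point, value, nfev)

Because the worker processes must be able to pickle the objective, a module-level function (rather than a lambda or closure) is the safer choice.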

Parameters
  • maxfun (int) – Maximum number of function evaluations.

  • factr (float) – The iteration stops when (f^k - f^{k+1}) / max{|f^k|, |f^{k+1}|, 1} <= factr * eps, where eps is the machine precision, which is automatically generated by the code. Typical values for factr are: 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy. See the Notes section of the SciPy documentation for the relationship to ftol, which is exposed (instead of factr) by the scipy.optimize.minimize interface to L-BFGS-B; a short example follows this parameter list.

  • iprint (int) – Controls the frequency of output. iprint < 0 means no output; iprint = 0 prints only one line at the last iteration; 0 < iprint < 99 also prints f and |proj g| every iprint iterations; iprint = 99 prints details of every iteration except n-vectors; iprint = 100 also prints the changes of active set and final x; iprint > 100 prints details of every iteration including x and g.

  • max_processes (Optional[int]) – Maximum number of processes allowed. If not None, the value must be at least 1.
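To make the factr/ftol relationship mentioned above concrete: SciPy documents the conversion as ftol = factr * eps, with eps the machine precision. A small illustration:

    import numpy as np

    factr = 10.0               # P_BFGS / fmin_l_bfgs_b tolerance parameter
    eps = np.finfo(float).eps  # machine precision, about 2.22e-16
    ftol = factr * eps         # equivalent ftol for scipy.optimize.minimize(method="L-BFGS-B")
    print(f"factr={factr:g}  ->  ftol={ftol:.3e}")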

Attributes

P_BFGS.bounds_support_level

Returns bounds support level

P_BFGS.gradient_support_level

Returns gradient support level

P_BFGS.initial_point_support_level

Returns initial point support level

P_BFGS.is_bounds_ignored

Returns is bounds ignored

P_BFGS.is_bounds_required

Returns is bounds required

P_BFGS.is_bounds_supported

Returns is bounds supported

P_BFGS.is_gradient_ignored

Returns is gradient ignored

P_BFGS.is_gradient_required

Returns is gradient required

P_BFGS.is_gradient_supported

Returns is gradient supported

P_BFGS.is_initial_point_ignored

Returns is initial point ignored

P_BFGS.is_initial_point_required

Returns is initial point required

P_BFGS.is_initial_point_supported

Returns is initial point supported

P_BFGS.setting

Return setting

Methods

P_BFGS.get_support_level()

Return support level dictionary.

P_BFGS.gradient_num_diff(x_center, f, epsilon)

Compute the gradient of the function around the point x_center using numerical differentiation, evaluating the sample points in parallel. A conceptual sketch of this idea appears after this method list.

P_BFGS.optimize(num_vars, objective_function)

Perform optimization.

P_BFGS.print_options()

Print algorithm-specific options.

P_BFGS.set_max_evals_grouped(limit)

Set max evals grouped.

P_BFGS.set_options(**kwargs)

Sets or updates values in the options dictionary.

P_BFGS.wrap_function(function, args)

Wrap the function to implicitly inject args when the function is called.
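As a conceptual sketch of the parallel numerical differentiation described by gradient_num_diff above, the helper below computes a forward-difference gradient with a process pool. The function name parallel_num_gradient and its arguments are hypothetical, for illustration only; this is not the library's actual implementation.

    from concurrent.futures import ProcessPoolExecutor

    import numpy as np

    def rosenbrock(x):
        # Standard test objective, defined at module level so workers can pickle it.
        x = np.asarray(x)
        return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

    def parallel_num_gradient(f, x_center, epsilon=1e-6, max_workers=4):
        """Forward-difference gradient; the shifted points are evaluated in parallel."""
        x_center = np.asarray(x_center, dtype=float)
        shifted = [x_center + epsilon * e for e in np.eye(len(x_center))]
        f0 = f(x_center)
        with ProcessPoolExecutor(max_workers=max_workers) as pool:
            f_shifted = list(pool.map(f, shifted))
        return (np.array(f_shifted) - f0) / epsilon

    if __name__ == "__main__":
        print(parallel_num_gradient(rosenbrock, np.zeros(4)))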