TNC
- class TNC(maxiter=100, disp=False, accuracy=0, ftol=-1, xtol=-1, gtol=-1, tol=None, eps=1e-08)
Truncated Newton (TNC) optimizer.
TNC uses a truncated Newton algorithm to minimize a function with variables subject to bounds. The algorithm uses gradient information and is also called Newton Conjugate-Gradient. It differs from the CG method in that it wraps a C implementation and allows each variable to be given upper and lower bounds.
This optimizer uses scipy.optimize.minimize with the TNC method. For further detail, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html. A minimal usage sketch follows the parameter list below.
- Parameters
  - maxiter (int) – Maximum number of function evaluations.
  - disp (bool) – Set to True to print convergence messages.
  - accuracy (float) – Relative precision for finite difference calculations. If <= machine_precision, it is set to sqrt(machine_precision). Defaults to 0.
  - ftol (float) – Precision goal for the value of f in the stopping criterion. If ftol < 0.0, ftol is set to 0.0. Defaults to -1.
  - xtol (float) – Precision goal for the value of x in the stopping criterion (after applying x scaling factors). If xtol < 0.0, xtol is set to sqrt(machine_precision). Defaults to -1.
  - gtol (float) – Precision goal for the value of the projected gradient in the stopping criterion (after applying x scaling factors). If gtol < 0.0, gtol is set to 1e-2 * sqrt(accuracy). Setting it to 0.0 is not recommended. Defaults to -1.
  - tol (Optional[float]) – Tolerance for termination.
  - eps (float) – Step size used for numerical approximation of the Jacobian.
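The following is a minimal usage sketch, not an excerpt from the library: it assumes the qiskit.algorithms.optimizers import path and the optimize() interface listed under Methods below, both of which may differ across Qiskit versions.

```python
import numpy as np
from qiskit.algorithms.optimizers import TNC  # import path assumed; may vary by Qiskit version

# Simple convex objective with minimum at (2, -1): f(x) = (x0 - 2)^2 + (x1 + 1)^2
def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

optimizer = TNC(maxiter=200, tol=1e-6, eps=1e-8)

# optimize() is assumed to return the found point, the objective value there,
# and the number of objective evaluations used.
point, value, nfev = optimizer.optimize(
    num_vars=2,
    objective_function=objective,
    variable_bounds=[(-5.0, 5.0), (-5.0, 5.0)],  # per-variable (lower, upper) bounds
    initial_point=np.array([0.0, 0.0]),
)
print(point, value, nfev)  # expected: point close to [2, -1], value close to 0
```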
Attributes
  - bounds_support_level – Returns bounds support level.
  - gradient_support_level – Returns gradient support level.
  - initial_point_support_level – Returns initial point support level.
  - is_bounds_ignored – Returns is bounds ignored.
  - is_bounds_required – Returns is bounds required.
  - is_bounds_supported – Returns is bounds supported.
  - is_gradient_ignored – Returns is gradient ignored.
  - is_gradient_required – Returns is gradient required.
  - is_gradient_supported – Returns is gradient supported.
  - is_initial_point_ignored – Returns is initial point ignored.
  - is_initial_point_required – Returns is initial point required.
  - is_initial_point_supported – Returns is initial point supported.
  - setting – Return setting.
Methods
  - TNC.get_support_level() – Return support level dictionary.
  - TNC.gradient_num_diff(x_center, f, epsilon) – Compute the gradient by numeric differentiation, in parallel, around the point x_center (a usage sketch follows this list).
  - TNC.optimize(num_vars, objective_function[, …]) – Perform optimization.
  - TNC.print_options() – Print algorithm-specific options.
  - TNC.set_max_evals_grouped(limit) – Set max evals grouped.
  - TNC.set_options(**kwargs) – Sets or updates values in the options dictionary.
  - TNC.wrap_function(function, args) – Wrap the function to implicitly inject the args at the call of the function (a usage sketch follows this list).
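A sketch of the numeric-gradient helper, assuming gradient_num_diff is a static method taking x_center, f, and epsilon as its signature above suggests:

```python
import numpy as np
from qiskit.algorithms.optimizers import TNC  # import path assumed; may vary by Qiskit version

def f(x):
    # f(x) = sum_i x_i^2, whose exact gradient at x is 2 * x.
    return float(np.sum(np.asarray(x) ** 2))

x_center = np.array([1.0, -2.0, 0.5])
# Numeric differentiation around x_center with step size epsilon.
grad = TNC.gradient_num_diff(x_center, f, epsilon=1e-6)
print(grad)  # approximately [2.0, -4.0, 1.0]
```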
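And a sketch of wrap_function, assuming it is a static helper returning a callable that appends the given args to every call, as its description suggests:

```python
from qiskit.algorithms.optimizers import TNC  # import path assumed; may vary by Qiskit version

def shifted_norm(x, offset, scale):
    # A toy objective with extra parameters beyond the point x.
    return scale * sum((xi - offset) ** 2 for xi in x)

# wrapped(x) is assumed to behave like shifted_norm(x, 1.0, 2.0): the extra
# args are injected implicitly at every call, so fixed hyperparameters can be
# bound before handing the objective to the optimizer.
wrapped = TNC.wrap_function(shifted_norm, (1.0, 2.0))
print(wrapped([1.0, 1.0]))  # 0.0
```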