17.1.1.13. pysisyphus.optimizers package

17.1.1.13.1. Submodules

17.1.1.13.2. pysisyphus.optimizers.BFGS module

17.1.1.13.3. pysisyphus.optimizers.BacktrackingOptimizer module

17.1.1.13.4. pysisyphus.optimizers.ConjugateGradient module

17.1.1.13.5. pysisyphus.optimizers.CubicNewton module

17.1.1.13.6. pysisyphus.optimizers.FIRE module

17.1.1.13.7. pysisyphus.optimizers.HessianOptimizer module

17.1.1.13.8. pysisyphus.optimizers.LBFGS module

17.1.1.13.9. pysisyphus.optimizers.LayerOpt module

17.1.1.13.10. pysisyphus.optimizers.MicroOptimizer module

17.1.1.13.11. pysisyphus.optimizers.NCOptimizer module

17.1.1.13.12. pysisyphus.optimizers.Optimizer module

17.1.1.13.13. pysisyphus.optimizers.PreconLBFGS module

17.1.1.13.14. pysisyphus.optimizers.PreconSteepestDescent module

17.1.1.13.15. pysisyphus.optimizers.QuickMin module

17.1.1.13.16. pysisyphus.optimizers.RFOptimizer module

17.1.1.13.17. pysisyphus.optimizers.RSA module

17.1.1.13.18. pysisyphus.optimizers.StabilizedQNMethod module

17.1.1.13.19. pysisyphus.optimizers.SteepestDescent module

17.1.1.13.20. pysisyphus.optimizers.StringOptimizer module

17.1.1.13.21. pysisyphus.optimizers.closures module

pysisyphus.optimizers.closures.bfgs_multiply(s_list, y_list, vector, beta=1, P=None, logger=None, gamma_mult=True, mu_reg=None, inds=None, cur_size=None)[source]

Matrix-vector product H·v.

Multiplies the given vector with the inverse Hessian, obtained from repeated BFGS updates calculated from the steps in 's_list' and the gradient differences in 'y_list'.

Based on algorithm 7.4 in Nocedal, Numerical Optimization, p. 178.
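A minimal sketch of that two-loop recursion; the preconditioner P, the mu_reg regularization and the inds/cur_size bookkeeping of the actual function are omitted, and the interpretation of beta as the scale of the fallback initial Hessian is an assumption:

    import numpy as np


    def two_loop_hv(s_list, y_list, vector, beta=1.0, gamma_mult=True):
        """Sketch of the two-loop recursion: approximate H^-1 @ vector from
        the stored steps (s_list) and gradient differences (y_list)."""
        q = vector.copy()
        rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
        alphas = []
        # First loop, from the newest to the oldest pair.
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            alpha = rho * s.dot(q)
            alphas.append(alpha)
            q = q - alpha * y
        # Initial inverse Hessian guess H0 = gamma * I; using 1/beta as the
        # fallback scale is an assumption about the meaning of 'beta'.
        if gamma_mult and s_list:
            gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
        else:
            gamma = 1.0 / beta
        r = gamma * q
        # Second loop, from the oldest to the newest pair.
        for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
            beta_ = rho * y.dot(r)
            r = r + (alpha - beta_) * s
        return r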

pysisyphus.optimizers.closures.get_update_mu_reg(mu_min=0.001, gamma_1=0.1, gamma_2=5.0, eta_1=0.01, eta_2=0.9, logger=None)[source]

See section 5.1 in [1].

pysisyphus.optimizers.closures.lbfgs_closure(force_getter, M=10, beta=1, restrict_step=None)[source]
pysisyphus.optimizers.closures.modified_broyden_closure(force_getter, M=5, beta=1, restrict_step=None)[source]

See https://doi.org/10.1006/jcph.1996.0059. There, F corresponds to the residual, i.e., the gradient, so after calling force_getter the force is multiplied by -1 to obtain the gradient.
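Both closures above follow the same pattern: a factory captures the optimizer history in its enclosing scope and returns a function that computes one step per call. A self-contained toy version of the pattern (steepest descent standing in for the actual LBFGS/Broyden logic; all names hypothetical), including the force-to-gradient sign flip just described:

    import numpy as np


    def toy_closure(force_getter, restrict_step=None):
        """Toy closure factory in the style of lbfgs_closure()/
        modified_broyden_closure(): state lives in the enclosing scope, the
        returned function computes one step per call."""
        state = {"iteration": 0}

        def get_step(coords):
            forces = force_getter(coords)
            gradient = -forces      # F is the residual gradient: flip the sign
            step = -0.1 * gradient  # steepest descent stands in for LBFGS/Broyden
            if restrict_step is not None:
                step = restrict_step(step)
            state["iteration"] += 1
            return step, forces

        return get_step


    # Usage on a toy quadratic potential E = |r|^2, whose force is -2r.
    get_step = toy_closure(lambda c: -2.0 * c)
    coords = np.array([1.0, -0.5])
    for _ in range(50):
        step, forces = get_step(coords)
        coords = coords + step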

pysisyphus.optimizers.closures.small_lbfgs_closure(history=5, gamma_mult=True)[source]

Compact LBFGS closure.

The returned function takes two arguments: forces and prev_step. forces are the forces at the current iterate and prev_step is the previous step that led us to the current iterate. In this way step restriction/line search can be done outside of the lbfgs function.
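A usage sketch along the lines of this description; the toy potential, the step restriction threshold and the assumption that prev_step may be None on the first call are illustrative, not part of the documented API:

    import numpy as np
    from pysisyphus.optimizers.closures import small_lbfgs_closure

    lbfgs = small_lbfgs_closure(history=5)

    coords = np.array([1.0, -0.5])
    prev_step = None                     # assumed to be accepted on the first call
    for _ in range(25):
        forces = -2.0 * coords           # toy quadratic potential; force = -gradient
        if np.linalg.norm(forces) < 1e-10:
            break
        step = lbfgs(forces, prev_step)  # signature as described above
        # Step restriction happens here, outside of the lbfgs function.
        max_comp = np.abs(step).max()
        if max_comp > 0.3:
            step *= 0.3 / max_comp
        coords = coords + step
        prev_step = step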

17.1.1.13.22. pysisyphus.optimizers.cls_map module

17.1.1.13.23. pysisyphus.optimizers.exceptions module

exception pysisyphus.optimizers.exceptions.OptimizationError[source]

Bases: Exception

exception pysisyphus.optimizers.exceptions.ZeroStepLength[source]

Bases: Exception

17.1.1.13.24. pysisyphus.optimizers.gdiis module

class pysisyphus.optimizers.gdiis.DIISResult(coeffs, coords, forces, energy, N, prefix)[source]

Bases: object

N: int
coeffs: ndarray
coords: ndarray
energy: float
forces: ndarray
prefix: str
property type
pysisyphus.optimizers.gdiis.diis_result(coeffs, coords, forces, energy=None, prefix='')[source]
pysisyphus.optimizers.gdiis.from_coeffs(vec, coeffs)[source]
pysisyphus.optimizers.gdiis.gdiis(err_vecs, coords, forces, ref_step, max_vecs=5, test_direction=True, logger=None)[source]
pysisyphus.optimizers.gdiis.gediis(coords, energies, forces, hessian=None, max_vecs=3, logger=None)[source]
pysisyphus.optimizers.gdiis.valid_diis_direction(diis_step, ref_step, use)[source]
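The coefficient determination at the heart of (G)DIIS minimizes the norm of the combined error vectors under the constraint that the coefficients sum to one. A generic sketch of that solve (textbook DIIS; the actual gdiis() additionally limits the history to max_vecs and tests the resulting direction against ref_step):

    import numpy as np


    def diis_coeffs(err_vecs):
        """Minimize |sum_i c_i e_i|^2 subject to sum_i c_i = 1 via a Lagrange
        multiplier: solve [[B, 1], [1^T, 0]] @ (c, lam) = (0, 1) with the
        overlap matrix B_ij = e_i . e_j."""
        err_vecs = np.asarray(err_vecs)
        n = len(err_vecs)
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = err_vecs @ err_vecs.T  # overlap matrix B
        A[:n, n] = A[n, :n] = 1.0
        rhs = np.zeros(n + 1)
        rhs[n] = 1.0
        return np.linalg.solve(A, rhs)[:n]


    # from_coeffs() then interpolates, e.g. new_coords = coeffs @ coords_array.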

17.1.1.13.25. pysisyphus.optimizers.guess_hessians module

17.1.1.13.26. pysisyphus.optimizers.hessian_updates module

pysisyphus.optimizers.hessian_updates.bfgs_update(H, dx, dg)[source]
pysisyphus.optimizers.hessian_updates.bofill_update(H, dx, dg)[source]
pysisyphus.optimizers.hessian_updates.curvature_at_image(index, energies, coords)[source]

Curvature at the given index of the given reaction path.

Eq. (5) in [10]. Can be used to calculate the curvature at the HEI of a COS, to construct a suitable Hessian for a TS optimization.

Parameters:
  • index (int) -- Integer > 0. Index of an image in the energies & coords arrays.

  • energies (ndarray) -- 1d array of shape (nimages, ). Contains image energies.

  • coords (ndarray) -- 2d array of shape (nimages, ncoords). Contains the image coordinates, where the energies were calculated.

Return type:

float
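Eq. (5) of [10] is not reproduced in this documentation. As a stand-in, the following sketch estimates d²E/ds² at an image by a non-uniform central finite difference over the neighboring images, with s the arc length along the path; treat it as an illustration of the idea, not as the published formula:

    import numpy as np


    def curvature_fd(index, energies, coords):
        """Finite-difference estimate of d^2 E / d s^2 at image 'index' on a
        non-uniform grid; s is the arc length along the path. Sketch only."""
        h1 = np.linalg.norm(coords[index] - coords[index - 1])  # spacing to the left
        h2 = np.linalg.norm(coords[index + 1] - coords[index])  # spacing to the right
        e_prev, e_cur, e_next = energies[index - 1:index + 2]
        return 2.0 * (h2 * e_prev - (h1 + h2) * e_cur + h1 * e_next) / (
            h1 * h2 * (h1 + h2)
        )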

pysisyphus.optimizers.hessian_updates.curvature_tangent_update(hessian, C, tangent)[source]

Introduce a direction with a certain curvature into the Hessian.

Can be used to construct a suitable starting Hessian for a TS optimization in a COS. See eq. (6) in [10].

Parameters:
  • hessian (ndarray) -- Cartesian Hessian of shape (3N, 3N), with N denoting the number of atoms.

  • C (float) -- Curvature.

  • tangent (ndarray) -- 1d array. Tangent vector of shape (3N, ) for which the curvature C was calculated.

Return type:

tuple[ndarray, str]

Returns:

  • dH -- Hessian update.

  • label -- Kind of update.
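One way to realize such an update, shown as a sketch and not necessarily identical to eq. (6) in [10], is a symmetric rank-1 correction along the normalized tangent, which forces the curvature along t to equal C while leaving directions orthogonal to t untouched:

    import numpy as np


    def curvature_rank1_update(hessian, C, tangent):
        """Symmetric rank-1 correction dH with t^T (H + dH) t == C for the
        normalized tangent t; for u orthogonal to t, u^T dH u == 0."""
        t = tangent / np.linalg.norm(tangent)
        dH = (C - t @ hessian @ t) * np.outer(t, t)
        return dH, "curvature rank-1 (sketch)"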

pysisyphus.optimizers.hessian_updates.damped_bfgs_update(H, dx, dg)[source]

See [5]

pysisyphus.optimizers.hessian_updates.double_damp(s, y, H=None, s_list=None, y_list=None, mu_1=0.2, mu_2=0.2, logger=None)[source]

Double damped step 's' and gradient differences 'y'.

H is the inverse Hessian!

See [6]. Potentially updates s and y. y is only updated if mu_2 is not None.

Parameters:
  • s (np.array, shape (N, ), floats) -- Coordinate differences/step.

  • y (np.array, shape (N, ), floats) -- Gradient differences.

  • H (np.array, shape (N, N), floats, optional) -- Inverse Hessian.

  • s_list (list of np.array, shape (K, N), optional) -- List of K previous steps. If H is not supplied but s_list and y_list are given, the matrix-vector product Hy will be calculated through the two-loop LBFGS recursion.

  • y_list (list of np.array, shape (K, N), optional) -- List of K previous gradient differences. See s_list.

  • mu_1 (float, optional) -- Parameter for 's' damping.

  • mu_2 (float, optional) -- Parameter for 'y' damping.

  • logger (logging.Logger, optional) -- Logger to be used.

Returns:

  • s (np.array, shape (N, ), floats) -- Damped coordinate differences/step.

  • y (np.array, shape (N, ), floats) -- Damped gradient differences.
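A usage sketch with made-up numbers, illustrating the typical case where damping rescues a step/gradient-difference pair with negative curvature:

    import numpy as np
    from pysisyphus.optimizers.hessian_updates import double_damp

    # Step s and gradient difference y with a broken curvature condition:
    # s.dot(y) < 0 would spoil a plain (inverse) BFGS update.
    s = np.array([0.5, -0.2, 0.1])
    y = np.array([-0.1, 0.05, -0.02])
    H_inv = np.eye(3)  # remember: H is the *inverse* Hessian

    s_damped, y_damped = double_damp(s, y, H=H_inv, mu_1=0.2, mu_2=0.2)
    # The damped pair satisfies s_damped.dot(y_damped) > 0 and can be fed
    # into an (inverse) BFGS update or appended to an L-BFGS history.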

pysisyphus.optimizers.hessian_updates.flowchart_update(H, dx, dg)[source]
pysisyphus.optimizers.hessian_updates.mod_flowchart_update(H, dx, dg)[source]
pysisyphus.optimizers.hessian_updates.psb_update(z, dx)[source]
pysisyphus.optimizers.hessian_updates.sr1_update(z, dx)[source]
pysisyphus.optimizers.hessian_updates.ts_bfgs_update(H, dx, dg)[source]

As described in [7]

pysisyphus.optimizers.hessian_updates.ts_bfgs_update_org(H, dx, dg)[source]

Do not use! Implemented as described in the 1998 Bofill paper [8].

This does not seem to work too well.

pysisyphus.optimizers.hessian_updates.ts_bfgs_update_revised(H, dx, dg)[source]

TS-BFGS update as described in [9].

Better than the original formula of Bofill, worse than the implementation in [7]. a is calculated as described in footnote 1 on page 38. Eq. (8) looks suspicious, as it contains the inverse of a vector?! As also outlined in the paper, abs(a) is used (|a| in the paper).

17.1.1.13.27. pysisyphus.optimizers.poly_fit module

class pysisyphus.optimizers.poly_fit.FitResult(x, y, polys)

Bases: tuple

polys

Alias for field number 2

x

Alias for field number 0

y

Alias for field number 1

pysisyphus.optimizers.poly_fit.cubic_fit(e0, e1, g0, g1)[source]
pysisyphus.optimizers.poly_fit.gen_solutions()[source]

Given two energies (e0, e1) and corresponding gradients (g0, g1) we can (try to) fit a quartic polynomial

f(x) = a0 + a1*x + a2*x**2 + a3*x**3 + a4*x**4

s.t. the constraint f''(x) >= 0, with the equality being fulfilled at only one point. There are five unknowns (a0 - a4) to be determined. Four equations can be derived from f(x) and its first derivative

f'(x) = a1 + 2*a2*x + 3*a3*x**2 + 4*a4*x**3 .

With (e0, g0) being given at x=0 and (e1, g1) being given at x=1 we can set up the following equations:

f (0) = a0   (1)
f'(0) = a1   (2)

using e0 and g0 at x=0, and

f (1) = a0 + a1 + a2 + a3 + a4        (3)
f'(1) = a1 + 2*a2 + 3*a3 + 4*a4 .     (4)

The missing last equation can be derived from the constraint. The second derivative of f(x) is

f''(x) = 2*a2 + 6*a3*x + 12*a4*x**2

and shall be positive except at one point where it is allowed to be 0, i.e., its two roots (f''(x) = 0) must be degenerate. This is fulfilled when the discriminant D of the quadratic polynomial a*x**2 + b*x + c is zero.

D = b**2 - 4*a*c = 0

With

a = 12*a4
b = 6*a3
c = 2*a2

we get

0 = (6*a3)**2 - 4*12*a4*2*a2
0 = 36*a3**2 - 96*a4*a2
0 = 3*a3**2 - 8*a4*a2   (5)

or

a4 = 3/8 * a3**2 / a2

Using (1) - (5) we can solve the set of equations for a0 - a4.
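gen_solutions() presumably performs this symbolic solve once; the following SymPy sketch reproduces the derivation, with equations (1)-(5) exactly as above:

    import sympy as sym

    a0, a1, a2, a3, a4, x = sym.symbols("a0 a1 a2 a3 a4 x", real=True)
    e0, e1, g0, g1 = sym.symbols("e0 e1 g0 g1", real=True)

    f = a0 + a1 * x + a2 * x**2 + a3 * x**3 + a4 * x**4
    fp = f.diff(x)

    equations = (
        sym.Eq(f.subs(x, 0), e0),            # (1)
        sym.Eq(fp.subs(x, 0), g0),           # (2)
        sym.Eq(f.subs(x, 1), e1),            # (3)
        sym.Eq(fp.subs(x, 1), g1),           # (4)
        sym.Eq(3 * a3**2 - 8 * a4 * a2, 0),  # (5), vanishing discriminant of f''
    )
    solutions = sym.solve(equations, (a0, a1, a2, a3, a4), dict=True)
    # Several solution branches exist; only those with a4 > 0 (f'' opening
    # upwards) give the desired convex quartic.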

pysisyphus.optimizers.poly_fit.get_maximum(poly)[source]
pysisyphus.optimizers.poly_fit.get_minimum(poly)[source]
pysisyphus.optimizers.poly_fit.poly_line_search(cur_energy, prev_energy, cur_grad, prev_grad, prev_step, cubic_max_x=2.0, quartic_max_x=4.0, logger=None)[source]

Generate directional gradients by projecting them on the previous step.

pysisyphus.optimizers.poly_fit.quartic_fit(e0, e1, g0, g1, maximize=False)[source]

See gen_solutions() for derivation.

pysisyphus.optimizers.poly_fit.quintic_fit(e0, e1, g0, g1, H0, H1)[source]

17.1.1.13.28. pysisyphus.optimizers.precon module

17.1.1.13.29. pysisyphus.optimizers.restrict_step module

pysisyphus.optimizers.restrict_step.get_scale_max(max_element)[source]
pysisyphus.optimizers.restrict_step.restrict_step(steps, max_step)[source]
pysisyphus.optimizers.restrict_step.scale_by_max_step(steps, max_step)[source]
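The implementations are not documented here; judging by the names, scale_by_max_step scales the whole step vector so that its largest absolute component does not exceed max_step. A sketch under that assumption:

    import numpy as np


    def scale_by_max_step_sketch(steps, max_step):
        """Scale the step vector down uniformly, so its largest absolute
        component equals max_step; the step direction is preserved."""
        max_comp = np.abs(steps).max()
        if max_comp > max_step:
            steps = steps * (max_step / max_comp)
        return steps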

17.1.1.13.30. Module contents