17.1.1.13. pysisyphus.optimizers package

17.1.1.13.1. Submodules

17.1.1.13.2. pysisyphus.optimizers.BFGS module

class pysisyphus.optimizers.BFGS.BFGS(geometry, *args, update='bfgs', **kwargs)[source]

Bases: Optimizer

bfgs_update(s, y)[source]
damped_bfgs_update(s, y, mu_1=0.2)[source]

Damped BFGS update of inverse Hessian.

Potentially updates s. See Section 3.2 of [2], Eqs. (30)-(33). Note that there is a typo in the reference; it should be

H_{k+1} = V_k H_k V_k^T + ...

instead of

H_{k+1} = V_k^T H_k V_k + ...
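
As an illustration, a minimal NumPy sketch of the plain (undamped) inverse-Hessian update in the corrected form above. This is not the exact pysisyphus implementation; the damped variants additionally modify s (and y) before applying it:

    import numpy as np

    def bfgs_update_inv(H, s, y):
        # H_{k+1} = V_k H_k V_k^T + rho_k s_k s_k^T,
        # with V_k = I - rho_k s_k y_k^T and rho_k = 1 / (y_k^T s_k).
        rho = 1.0 / y.dot(s)
        V = np.eye(s.size) - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)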

double_damped_bfgs_update(s, y, mu_1=0.2, mu_2=0.2)[source]

Double damped BFGS update of inverse Hessian.

See [3]. Potentially updates s and y.

property eye
optimize()[source]
prepare_opt()[source]

17.1.1.13.3. pysisyphus.optimizers.BacktrackingOptimizer module

class pysisyphus.optimizers.BacktrackingOptimizer.BacktrackingOptimizer(geometry, alpha, bt_force=5, dont_skip_after=2, bt_max_scale=4, bt_disable=False, **kwargs)[source]

Bases: Optimizer

backtrack(cur_forces, prev_forces, reset_hessian=None)[source]

Accelerated backtracking line search.

reset()[source]

17.1.1.13.4. pysisyphus.optimizers.ConjugateGradient module

class pysisyphus.optimizers.ConjugateGradient.ConjugateGradient(geometry, alpha=0.1, formula='FR', dont_skip=True, **kwargs)[source]

Bases: BacktrackingOptimizer

get_beta(cur_forces, prev_forces)[source]
optimize()[source]
prepare_opt()[source]
reset()[source]
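
With formula='FR' the β coefficient follows Fletcher-Reeves. A textbook sketch in terms of forces (the negative gradients, so the squared norms are unchanged); the Polak-Ribière variant is shown for comparison and is an assumption, not necessarily what pysisyphus implements:

    import numpy as np

    def beta_fletcher_reeves(cur_forces, prev_forces):
        # beta_FR = |g_k|^2 / |g_{k-1}|^2; the new direction is then
        # d_k = f_k + beta * d_{k-1}.
        return cur_forces.dot(cur_forces) / prev_forces.dot(prev_forces)

    def beta_polak_ribiere(cur_forces, prev_forces):
        # Often clamped at 0 to preserve a descent direction.
        beta = (cur_forces.dot(cur_forces - prev_forces)
                / prev_forces.dot(prev_forces))
        return max(0.0, beta)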

17.1.1.13.5. pysisyphus.optimizers.CubicNewton module

class pysisyphus.optimizers.CubicNewton.CubicNewton(geometry, **kwargs)[source]

Bases: HessianOptimizer

optimize()[source]
postprocess_opt()[source]

17.1.1.13.6. pysisyphus.optimizers.FIRE module

class pysisyphus.optimizers.FIRE.FIRE(geometry, dt=0.1, dt_max=1, N_acc=2, f_inc=1.1, f_acc=0.99, f_dec=0.5, n_reset=0, a_start=0.1, **kwargs)[source]

Bases: Optimizer

optimize()[source]
reset()[source]
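
The constructor arguments map onto the textbook FIRE scheme (Bitzek et al., PRL 97, 170201, 2006): mix the velocity toward the force direction, accelerate after N_acc downhill steps, and reset on uphill motion. A sketch mirroring the parameter names above, not the exact optimize() implementation:

    import numpy as np

    def fire_step(v, forces, dt, a, n_pos, *, dt_max=1.0, N_acc=2,
                  f_inc=1.1, f_acc=0.99, f_dec=0.5, a_start=0.1):
        if forces.dot(v) > 0.0:
            # Moving downhill: steer velocity toward the force direction.
            v = (1.0 - a) * v + a * np.linalg.norm(v) * forces / np.linalg.norm(forces)
            n_pos += 1
            if n_pos > N_acc:
                dt = min(dt * f_inc, dt_max)  # speed up
                a *= f_acc
        else:
            # Uphill: stop, shorten the timestep and reset the mixing.
            v = np.zeros_like(v)
            dt *= f_dec
            a = a_start
            n_pos = 0
        v = v + dt * forces  # Euler velocity update (unit masses assumed)
        step = dt * v
        return step, v, dt, a, n_pos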

17.1.1.13.7. pysisyphus.optimizers.HessianOptimizer module

class pysisyphus.optimizers.HessianOptimizer.HessianOptimizer(geometry, trust_radius=0.5, trust_update=True, trust_min=0.1, trust_max=1, max_energy_incr=None, hessian_update='bfgs', hessian_init='fischer', hessian_recalc=None, hessian_recalc_adapt=None, hessian_xtb=False, hessian_recalc_reset=False, small_eigval_thresh=1e-08, line_search=False, alpha0=1.0, max_micro_cycles=25, rfo_overlaps=False, **kwargs)[source]

Bases: Optimizer

__init__(geometry, trust_radius=0.5, trust_update=True, trust_min=0.1, trust_max=1, max_energy_incr=None, hessian_update='bfgs', hessian_init='fischer', hessian_recalc=None, hessian_recalc_adapt=None, hessian_xtb=False, hessian_recalc_reset=False, small_eigval_thresh=1e-08, line_search=False, alpha0=1.0, max_micro_cycles=25, rfo_overlaps=False, **kwargs)[source]

Baseclass for optimizers utilizing Hessian information.

Parameters:
  • geometry (Geometry) -- Geometry to be optimized.

  • trust_radius (float, default: 0.5) -- Initial trust radius in whatever unit the optimization is carried out.

  • trust_update (bool, default: True) -- Whether to update the trust radius throughout the optimization.

  • trust_min (float, default: 0.1) -- Minimum trust radius.

  • trust_max (float, default: 1) -- Maximum trust radius.

  • max_energy_incr (Optional[float], default: None) -- Maximum allowed energy increase after a faulty step. Optimization is aborted when the threshold is exceeded.

  • hessian_update (Literal['none', None, False, 'bfgs', 'damped_bfgs', 'flowchart', 'bofill', 'ts_bfgs', 'ts_bfgs_org', 'ts_bfgs_rev'], default: 'bfgs') -- Type of Hessian update. Defaults to BFGS for minimizations and Bofill for saddle point searches.

  • hessian_init (Literal['calc', 'unit', 'fischer', 'lindh', 'simple', 'swart', 'xtb', 'xtb1', 'xtbff'], default: 'fischer') -- Type of initial model Hessian.

  • hessian_recalc (Optional[int], default: None) -- Recalculate exact Hessian every n-th cycle instead of updating it.

  • hessian_recalc_adapt (Optional[float], default: None) -- Use a more flexible scheme to determine Hessian recalculation. Undocumented.

  • hessian_xtb (bool, default: False) -- Recalculate the Hessian at the GFN2-XTB level of theory.

  • hessian_recalc_reset (bool, default: False) -- Whether to skip Hessian recalculation after reset. Undocumented.

  • small_eigval_thresh (float, default: 1e-08) -- Threshold for small eigenvalues. Eigenvectors belonging to eigenvalues below this threshold are discarded.

  • line_search (bool, default: False) -- Whether to carry out a line search. Not implemented by all subclassing optimizers.

  • alpha0 (float, default: 1.0) -- Initial alpha for restricted-step (RS) procedure.

  • max_micro_cycles (int, default: 25) -- Maximum number of RS iterations.

  • rfo_overlaps (bool, default: False) -- Enable mode-following in RS procedure.

  • **kwargs -- Keyword arguments passed to the Optimizer baseclass.
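
The trust_* arguments realize a conventional trust-region scheme: the ratio between the actual energy change and the change predicted by the quadratic model decides whether the radius shrinks or grows. A generic sketch of such logic (the actual coefficients used by set_new_trust_radius() may differ):

    def new_trust_radius(trust, coeff, last_step_norm, trust_min=0.1, trust_max=1.0):
        # coeff = actual / predicted energy change; see quadratic_model() below.
        if coeff < 0.25:
            # Poor agreement between model and reality: shrink.
            trust = max(trust_min, last_step_norm / 4.0)
        elif coeff > 0.75 and abs(last_step_norm - trust) <= 1e-6:
            # Good agreement and the step hit the boundary: grow.
            trust = min(trust_max, 2.0 * trust)
        return trust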

filter_small_eigvals(eigvals, eigvecs, mask=False)[source]
get_alpha_step(cur_alpha, rfo_eigval, step_norm, eigvals, gradient)[source]
get_augmented_hessian(eigvals, gradient, alpha=1.0)[source]
static get_newton_step(eigvals, eigvecs, gradient)[source]
get_newton_step_on_trust(eigvals, eigvecs, gradient, transform=True)[source]

Step on trust-radius.

See Nocedal 4.3 Iterative solutions of the subproblem
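
In the eigensystem of the Hessian the (shifted) Newton step has a simple closed form, and finding the shift that places the step exactly on the trust radius reduces to a one-dimensional root search. A sketch under these assumptions:

    import numpy as np

    def shifted_newton_step_trans(eigvals, gradient_trans, shift=0.0):
        # Step in the eigenvector basis: dx_i = -g_i / (lambda_i + shift).
        return -gradient_trans / (eigvals + shift)

    def radius_mismatch(eigvals, gradient_trans, shift, trust_radius):
        # ||dx(shift)|| decreases monotonically for shift > -min(eigvals),
        # so the root of this function can be found by bisection or Newton
        # iterations (Nocedal & Wright, Sec. 4.3).
        step = shifted_newton_step_trans(eigvals, gradient_trans, shift)
        return np.linalg.norm(step) - trust_radius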

get_rs_step(eigvals, eigvecs, gradient, name='RS')[source]
static get_shifted_step_trans(eigvals, gradient_trans, shift)[source]
get_step_func(eigvals, gradient, grad_rms_thresh=0.01)[source]
housekeeping()[source]

Calculate gradient and energy. Update the trust radius and the Hessian if needed. Return energy, gradient and Hessian for the current cycle.

log_negative_eigenvalues(eigvals, pre_str='')[source]
prepare_opt(hessian_init=None)[source]
property prev_eigvec_max
property prev_eigvec_min
static quadratic_model(gradient, hessian, step)[source]
reset()[source]
rfo_dict = {'max': (-1, 'max'), 'min': (0, 'min')}
static rfo_model(gradient, hessian, step)[source]
save_hessian()[source]
set_new_trust_radius(coeff, last_step_norm)[source]
solve_rfo(rfo_mat, kind='min', prev_eigvec=None)[source]
update_hessian()[source]
update_trust_radius()[source]
pysisyphus.optimizers.HessianOptimizer.dummy_hessian_update(H, dx, dg)[source]
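
get_augmented_hessian() and solve_rfo() together realize the rational function (RFO) step. As a compact textbook illustration with intermediate normalization (omitting the restricted-step alpha microiterations and mode following):

    import numpy as np

    def rfo_step(H, g):
        # Augmented Hessian [[H, g], [g^T, 0]]; for a minimization the
        # eigenvector of the lowest eigenvalue yields the step after
        # scaling its last component to 1.
        n = g.size
        H_aug = np.zeros((n + 1, n + 1))
        H_aug[:n, :n] = H
        H_aug[:n, n] = g
        H_aug[n, :n] = g
        eigvals, eigvecs = np.linalg.eigh(H_aug)
        vec = eigvecs[:, 0]
        return vec[:n] / vec[n]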

17.1.1.13.8. pysisyphus.optimizers.LBFGS module

class pysisyphus.optimizers.LBFGS.LBFGS(geometry, keep_last=7, beta=1, max_step=0.2, double_damp=True, gamma_mult=False, line_search=False, mu_reg=None, max_mu_reg_adaptions=10, control_step=True, **kwargs)[source]

Bases: Optimizer

__init__(geometry, keep_last=7, beta=1, max_step=0.2, double_damp=True, gamma_mult=False, line_search=False, mu_reg=None, max_mu_reg_adaptions=10, control_step=True, **kwargs)[source]

Limited-memory BFGS optimizer.

See [1] Nocedal, Wright - Numerical Optimization, 2006 for a general discussion of LBFGS. See pysisyphus.optimizers.hessian_updates for the references related to double damping and pysisyphus.optimizers.closures for references related to regularized LBFGS.

Parameters:
  • geometry (Geometry) -- Geometry to be optimized.

  • keep_last (int, default: 7) -- History size. Keep last 'keep_last' steps and gradient differences.

  • beta (float, default: 1) -- Force constant β in -(H + βI)⁻¹g.

  • max_step (float, default: 0.2) -- Upper limit for the absolute component of the step vector in whatever unit the optimization is carried out.

  • double_damp (bool, default: True) -- Use double damping procedure to modify steps s and gradient differences y to ensure sᵀy > 0.

  • gamma_mult (bool, default: False) -- Estimate β from previous cycle. Eq. (7.20) in [1]. See 'beta' argument.

  • line_search (bool, default: False) -- Enable implicit linesearches.

  • mu_reg (Optional[float], default: None) -- Initial guess for regularization constant in regularized LBFGS.

  • max_mu_reg_adaptions (int, default: 10) -- Maximum number of trial steps in regularized LBFGS.

  • control_step (bool, default: True) -- Whether to scale down the proposed step so that its biggest absolute component is equal to or below 'max_step'.

  • **kwargs -- Keyword arguments passed to the Optimizer baseclass.

get_lbfgs_step(forces)[source]
optimize()[source]
postprocess_opt()[source]
reset()[source]

17.1.1.13.9. pysisyphus.optimizers.LayerOpt module

class pysisyphus.optimizers.LayerOpt.LayerOpt(geometry, layers=None, **kwargs)[source]

Bases: Optimizer

check_convergence(*args, **kwargs)[source]

Check if we must use the model optimizer to signal convergence.

property layer_num: int
property model_opt

Return the persistent optimizer belonging to the model system. As this optimizer and its associated geometry are persistent, no coordinates have to be supplied to the optimizer of the most expensive layer.

optimize()[source]
Return type:

None

postprocess_opt()[source]
Return type:

None

print_opt_progress(*args, **kwargs)[source]

Pick the correct method to report opt_progress.

When the model optimizer signals convergence, we report optimization progress using its data, not the data from LayerOpt, where the total ONIOM gradient is stored.

class pysisyphus.optimizers.LayerOpt.Layers(geometry, opt_thresh, layers=None)[source]

Bases: object

classmethod from_oniom_calculator(geometry, oniom_calc=None, layers=None, **kwargs)[source]
pysisyphus.optimizers.LayerOpt.get_geom_kwargs(layer_ind, layer_mask)[source]
pysisyphus.optimizers.LayerOpt.get_opt_kwargs(opt_key, layer_ind, thresh)[source]

17.1.1.13.10. pysisyphus.optimizers.MicroOptimizer module

class pysisyphus.optimizers.MicroOptimizer.MicroOptimizer(geom, step='lbfgs', line_search=True, max_cycles=100000000, max_step=0.2, keep_last=10, rms_force=None, double_damp=True, dump=False, **kwargs)[source]

Bases: object

cg_step(forces)[source]
lbfgs_step(forces)[source]
log(msg)[source]
optimize()[source]
run()[source]
sd_step(forces)[source]
take_step(energy, forces, return_step=False)[source]

17.1.1.13.11. pysisyphus.optimizers.NCOptimizer module

class pysisyphus.optimizers.NCOptimizer.NCOptimizer(geometry, *args, freeze_modes=None, **kwargs)[source]

Bases: HessianOptimizer

optimize()[source]

17.1.1.13.12. pysisyphus.optimizers.Optimizer module

class pysisyphus.optimizers.Optimizer.ConvInfo(cur_cycle, energy_converged, max_force_converged, rms_force_converged, max_step_converged, rms_step_converged, desired_eigval_structure)[source]

Bases: object

cur_cycle: int
desired_eigval_structure: bool
energy_converged: bool
get_convergence()[source]
is_converged()[source]
max_force_converged: bool
max_step_converged: bool
rms_force_converged: bool
rms_step_converged: bool
class pysisyphus.optimizers.Optimizer.Optimizer(geometry, thresh='gau_loose', max_step=0.04, max_cycles=150, min_step_norm=1e-08, assert_min_step=True, rms_force=None, rms_force_only=False, max_force_only=False, force_only=False, converge_to_geom_rms_thresh=0.05, align=False, align_factor=1.0, dump=False, dump_restart=False, print_every=1, prefix='', reparam_thresh=0.001, reparam_check_rms=True, reparam_when='after', overachieve_factor=0.0, check_eigval_structure=False, restart_info=None, check_coord_diffs=True, coord_diff_thresh=0.01, fragments=None, monitor_frag_dists=0, out_dir='.', h5_fn='optimization.h5', h5_group_name='opt')[source]

Bases: object

__init__(geometry, thresh='gau_loose', max_step=0.04, max_cycles=150, min_step_norm=1e-08, assert_min_step=True, rms_force=None, rms_force_only=False, max_force_only=False, force_only=False, converge_to_geom_rms_thresh=0.05, align=False, align_factor=1.0, dump=False, dump_restart=False, print_every=1, prefix='', reparam_thresh=0.001, reparam_check_rms=True, reparam_when='after', overachieve_factor=0.0, check_eigval_structure=False, restart_info=None, check_coord_diffs=True, coord_diff_thresh=0.01, fragments=None, monitor_frag_dists=0, out_dir='.', h5_fn='optimization.h5', h5_group_name='opt')[source]

Optimizer baseclass. Meant to be subclassed.

Parameters:
  • geometry (Geometry) -- Geometry to be optimized.

  • thresh (Literal['gau_loose', 'gau', 'gau_tight', 'gau_vtight', 'baker', 'never'], default: 'gau_loose') -- Convergence threshold.

  • max_step (float, default: 0.04) -- Maximum absolute component of the allowed step vector. Utilized in optimizers that don't support a trust region or line search.

  • max_cycles (int, default: 150) -- Maximum number of allowed optimization cycles.

  • min_step_norm (float, default: 1e-08) -- Minimum norm of an allowed step. If the step norm drops below this value, a ZeroStepLength exception is raised. The unit depends on the coordinate system of the supplied geometry.

  • assert_min_step (bool, default: True) -- Flag that controls whether the norm of the proposed step is checked for being too small.

  • rms_force (Optional[float], default: None) -- Root-mean-square of the force from which user-defined thresholds are derived. When 'rms_force' is given 'thresh' is ignored.

  • rms_force_only (bool, default: False) -- When set, convergence is signalled only based on rms(forces).

  • max_force_only (bool, default: False) -- When set, convergence is signalled only based on max(|forces|).

  • force_only (bool, default: False) -- When set, convergence is signalled only based on max(|forces|) and rms(forces).

  • converge_to_geom_rms_thresh (float, default: 0.05) -- Threshold for the RMSD with another geometry. When the RMSD drops below this threshold convergence is signalled. Only used with Growing Newton trajectories.

  • align (bool, default: False) -- Flag that controls whether the geometry is aligned in every step onto the coordinates of the previous step. Must not be used with internal coordinates.

  • align_factor (float, default: 1.0) -- Factor that controls the strength of the alignment. 1.0 means full alignment, 0.0 means no alignment. The factor mixes the rotation matrix of the alignment with the identity matrix.

  • dump (bool, default: False) -- Flag to control dumping/writing of optimization progress to the filesystem.

  • dump_restart (bool, default: False) -- Flag to control whether restart information is dumped to the filesystem.

  • print_every (int, default: 1) -- Report optimization progress every nth cycle.

  • prefix (str, default: '') -- Short string that is prepended to several files created by the optimizer. Allows distinguishing several optimizations carried out in the same directory.

  • reparam_thresh (float, default: 0.001) -- Controls the minimal allowed similarity between coordinates after two successive reparametrizations. Convergence is signalled if the coordinates did not change significantly.

  • reparam_check_rms (bool, default: True) -- Whether to check for (too) similar coordinates after reparametrization.

  • reparam_when (Optional[Literal['before', 'after']], default: 'after') -- Reparametrize before or after calculating the step. Can also be turned off by setting it to None.

  • overachieve_factor (float, default: 0.0) -- Signal convergence when max(forces) and rms(forces) fall below the chosen threshold, divided by this factor. Convergence of max(step) and rms(step) is ignored.

  • check_eigval_structure (bool, default: False) -- Check the eigenvalues of the modes we maximize along. Convergence requires them to be negative. Useful if TS searches are started from geometries close to a minimum.

  • restart_info (default: None) -- Restart information. Undocumented.

  • check_coord_diffs (bool, default: True) -- Whether coordinates of chain-of-states images are checked for being too similar.

  • coord_diff_thresh (float, default: 0.01) -- Unitless threshold for similarity checking of COS image coordinates. The first image is assigned 0, the last image 1.

  • fragments (Optional[Tuple], default: None) -- Tuple of lists containing atom indices, defining two fragments.

  • monitor_frag_dists (int, default: 0) -- Monitor fragment distances for N cycles. The optimization is terminated when the interfragment distance falls below the initial value after N cycles.

  • out_dir (str, default: '.') -- String pointing to a directory where optimization progress is dumped.

  • h5_fn (str, default: 'optimization.h5') -- Basename of the HDF5 file used for dumping.

  • h5_group_name (str, default: 'opt') -- Groupname used for dumping of this optimization.

check_convergence(step=None, multiple=1.0, overachieve_factor=None)[source]

Check whether the current convergence of the optimization is equal to or below the required thresholds, or a multiple thereof. The latter may be used for initiating the climbing image.
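
The Gaussian-style thresholds comprise four criteria: max(|forces|), rms(forces), max(|step|) and rms(step). A simplified sketch of such a check, including the overachieve_factor shortcut described above (the real method additionally handles energy changes and the *_only modes):

    import numpy as np

    def converged(forces, step, threshs, overachieve_factor=0.0):
        max_force_t, rms_force_t, max_step_t, rms_step_t = threshs
        max_force = np.abs(forces).max()
        rms_force = np.sqrt(np.mean(forces**2))
        if overachieve_factor > 0 and (
            max_force <= max_force_t / overachieve_factor
            and rms_force <= rms_force_t / overachieve_factor
        ):
            # Overachieved force convergence; step criteria are ignored.
            return True
        return (
            max_force <= max_force_t
            and rms_force <= rms_force_t
            and np.abs(step).max() <= max_step_t
            and np.sqrt(np.mean(step**2)) <= rms_step_t
        )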

dump_restart_info()[source]
final_summary()[source]
fit_rigid(*, vectors=None, vector_lists=None, hessian=None)[source]
get_path_for_fn(fn, with_prefix=True)[source]
get_restart_info()[source]
log(message, level=50)[source]
make_conv_dict(key, rms_force=None, rms_force_only=False, max_force_only=False, force_only=False)[source]
abstract optimize()[source]
postprocess_opt()[source]
prepare_opt()[source]
print_opt_progress(conv_info)[source]
procrustes()[source]

Wrapper for procrustes that passes additional arguments along.

report_conv_thresholds()[source]
run()[source]
scale_by_max_step(steps)[source]
set_restart_info(restart_info)[source]
write_cycle_to_file()[source]
write_image_trjs()[source]
write_results()[source]
write_to_out_dir(out_fn, content, mode='w')[source]
pysisyphus.optimizers.Optimizer.get_data_model(geometry, is_cos, max_cycles)[source]

17.1.1.13.13. pysisyphus.optimizers.PreconLBFGS module

class pysisyphus.optimizers.PreconLBFGS.PreconLBFGS(geometry, alpha_init=1.0, history=7, precon=True, precon_update=1, precon_getter_update=None, precon_kind='full', max_step_element=None, line_search='armijo', c_stab=None, **kwargs)[source]

Bases: Optimizer

__init__(geometry, alpha_init=1.0, history=7, precon=True, precon_update=1, precon_getter_update=None, precon_kind='full', max_step_element=None, line_search='armijo', c_stab=None, **kwargs)[source]

Preconditioned limited-memory BFGS optimizer.

See pysisyphus.optimizers.precon for related references.

Parameters:
  • geometry (Geometry) -- Geometry to be optimized.

  • alpha_init (float, default: 1.0) -- Initial scaling factor for the first trial step in the explicit line search.

  • history (int, default: 7) -- History size. Keep last 'history' steps and gradient differences.

  • precon (bool, default: True) -- Whether to use preconditioning or not.

  • precon_update (int, default: 1) -- Recalculate preconditioner P in every n-th cycle with the same topology.

  • precon_getter_update (Optional[int], default: None) -- Recalculate topology for preconditioner P in every n-th cycle. It is usually sufficient to only determine the topology once at the beginning.

  • precon_kind (Literal['full', 'full_fast', 'bonds', 'bonds_bends'], default: 'full') -- What types of primitive internal coordinates to consider in the preconditioner.

  • max_step_element (Optional[float], default: None) -- Maximum component of the absolute step vector when no line search is carried out.

  • line_search (Literal['armijo', 'armijo_fg', 'strong_wolfe', 'hz', None, False], default: 'armijo') -- Whether to use explicit line searches and if so, which kind of line search.

  • c_stab (Optional[float], default: None) -- Regularization constant c in (H + cI)⁻¹ in atomic units.

  • **kwargs -- Keyword arguments passed to the Optimizer baseclass.

get_precon_getter()[source]
optimize()[source]
prepare_opt()[source]
scale_max_element(step, max_step_element)[source]

17.1.1.13.14. pysisyphus.optimizers.PreconSteepestDescent module

class pysisyphus.optimizers.PreconSteepestDescent.PreconSteepestDescent(geometry, alpha_init=0.5, **kwargs)[source]

Bases: PreconLBFGS

17.1.1.13.15. pysisyphus.optimizers.QuickMin module

class pysisyphus.optimizers.QuickMin.QuickMin(geometry, dt=0.35, **kwargs)[source]

Bases: Optimizer

optimize()[source]
prepare_opt()[source]
reset()[source]

17.1.1.13.16. pysisyphus.optimizers.RFOptimizer module

class pysisyphus.optimizers.RFOptimizer.RFOptimizer(geometry, line_search=True, gediis=False, gdiis=True, gdiis_thresh=0.0025, gediis_thresh=0.01, gdiis_test_direction=True, max_micro_cycles=25, adapt_step_func=False, **kwargs)[source]

Bases: HessianOptimizer

__init__(geometry, line_search=True, gediis=False, gdiis=True, gdiis_thresh=0.0025, gediis_thresh=0.01, gdiis_test_direction=True, max_micro_cycles=25, adapt_step_func=False, **kwargs)[source]

Rational function Optimizer.

Parameters:
  • geometry (Geometry) -- Geometry to be optimized.

  • line_search (bool, default: True) -- Whether to carry out implicit line searches.

  • gediis (bool, default: False) -- Whether to enable GEDIIS.

  • gdiis (bool, default: True) -- Whether to enable GDIIS.

  • gdiis_thresh (float, default: 0.0025) -- Threshold for rms(forces) to enable GDIIS.

  • gediis_thresh (float, default: 0.01) -- Threshold for rms(step) to enable GEDIIS.

  • gdiis_test_direction (bool, default: True) -- Whether to test the overlap of the RFO step and the GDIIS step.

  • max_micro_cycles (int, default: 25) -- Number of restricted-step microcycles. Disabled by default.

  • adapt_step_func (bool, default: False) -- Whether to switch between shifted Newton and RFO-steps.

  • **kwargs -- Keyword arguments passed to the Optimizer/HessianOptimizer baseclass.
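
A minimal usage sketch; the input file name is a placeholder and the choice of calculator is an example (geom_loader and the XTB calculator are assumed to be available):

    from pysisyphus.calculators.XTB import XTB
    from pysisyphus.helpers import geom_loader
    from pysisyphus.optimizers.RFOptimizer import RFOptimizer

    # Load a geometry in redundant internal coordinates and attach a calculator.
    geom = geom_loader("input.xyz", coord_type="redund")
    geom.set_calculator(XTB())

    # Keyword arguments correspond to the parameters documented above.
    opt = RFOptimizer(geom, thresh="gau", trust_radius=0.3, hessian_init="fischer")
    opt.run()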

optimize()[source]
postprocess_opt()[source]

17.1.1.13.17. pysisyphus.optimizers.RSA module

class pysisyphus.optimizers.RSA.RSA(geometry, trust_radius=0.5, trust_update=True, trust_min=0.1, trust_max=1, max_energy_incr=None, hessian_update='bfgs', hessian_init='fischer', hessian_recalc=None, hessian_recalc_adapt=None, hessian_xtb=False, hessian_recalc_reset=False, small_eigval_thresh=1e-08, line_search=False, alpha0=1.0, max_micro_cycles=25, rfo_overlaps=False, **kwargs)[source]

Bases: HessianOptimizer

See 'The Importance of Step Control in Optimization Methods', del Campo, 2009.

optimize()[source]

17.1.1.13.18. pysisyphus.optimizers.StabilizedQNMethod module

class pysisyphus.optimizers.StabilizedQNMethod.StabilizedQNMethod(geometry, alpha=0.5, alpha_max=1, alpha_stretch=0.5, alpha_stretch_max=1, eps=0.0001, hist_max=10, E_thresh=1e-06, bio=True, trust_radius=0.1, linesearch=True, **kwargs)[source]

Bases: Optimizer

adjust_alpha(gradient, precon_gradient)[source]
adjust_alpha_stretch()[source]
bio_mode(gradient)[source]
property n_hist
optimize()[source]
precondition_gradient(gradient, steps, grad_diffs, eps)[source]
prepare_opt()[source]

17.1.1.13.19. pysisyphus.optimizers.SteepestDescent module

class pysisyphus.optimizers.SteepestDescent.SteepestDescent(geometry, alpha=0.1, **kwargs)[source]

Bases: BacktrackingOptimizer

optimize()[source]
prepare_opt()[source]

17.1.1.13.20. pysisyphus.optimizers.StringOptimizer module

class pysisyphus.optimizers.StringOptimizer.StringOptimizer(geometry, max_step=0.1, stop_in_when_full=-1, keep_last=10, lbfgs_when_full=True, gamma_mult=False, double_damp=True, scale_step='global', **kwargs)[source]

Bases: Optimizer

check_convergence(*args, **kwargs)[source]

Check whether the current convergence of the optimization is equal to or below the required thresholds, or a multiple thereof. The latter may be used for initiating the climbing image.

optimize()[source]
prepare_opt()[source]
reset()[source]
restrict_step_components(steps)[source]

17.1.1.13.21. pysisyphus.optimizers.closures module

pysisyphus.optimizers.closures.bfgs_multiply(s_list, y_list, vector, beta=1, P=None, logger=None, gamma_mult=True, mu_reg=None, inds=None, cur_size=None)[source]

Matrix-vector product H·v.

Multiplies given vector with inverse Hessian, obtained from repeated BFGS updates calculated from steps in 's_list' and gradient differences in 'y_list'.

Based on Algorithm 7.4 in Nocedal, Num. Opt., p. 178.
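
A sketch of the plain two-loop recursion (Algorithm 7.4), omitting the preconditioner P, regularization and the index handling of the full function:

    import numpy as np

    def two_loop(s_list, y_list, vector, gamma_mult=True):
        q = vector.copy()
        rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
        alphas = []
        # First loop, newest to oldest pair.
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            alpha = rho * s.dot(q)
            q -= alpha * y
            alphas.append(alpha)
        # Initial H_0 = gamma * I; Eq. (7.20) in Nocedal & Wright.
        if gamma_mult and s_list:
            q *= s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
        # Second loop, oldest to newest pair.
        for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
            beta = rho * y.dot(q)
            q += (alpha - beta) * s
        return q  # approximates H·vector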

pysisyphus.optimizers.closures.get_update_mu_reg(mu_min=0.001, gamma_1=0.1, gamma_2=5.0, eta_1=0.01, eta_2=0.9, logger=None)[source]

See Section 5.1 in [1].

pysisyphus.optimizers.closures.lbfgs_closure(force_getter, M=10, beta=1, restrict_step=None)[source]
pysisyphus.optimizers.closures.modified_broyden_closure(force_getter, M=5, beta=1, restrict_step=None)[source]

See https://doi.org/10.1006/jcph.1996.0059. F corresponds to the residual, i.e., the gradient, so after calling force_getter we multiply the force by -1 to obtain the gradient.

pysisyphus.optimizers.closures.small_lbfgs_closure(history=5, gamma_mult=True)[source]

Compact LBFGS closure.

The returned function takes two arguments: forces and prev_step. forces are the forces at the current iterate and prev_step is the previous step that led us to the current iterate. In this way step restriction/line search can be done outside of the lbfgs function.

17.1.1.13.22. pysisyphus.optimizers.cls_map module

pysisyphus.optimizers.cls_map.get_opt_cls(opt_key)[source]
pysisyphus.optimizers.cls_map.key_is_tsopt(opt_key)[source]

17.1.1.13.23. pysisyphus.optimizers.exceptions module

exception pysisyphus.optimizers.exceptions.OptimizationError[source]

Bases: Exception

exception pysisyphus.optimizers.exceptions.ZeroStepLength[source]

Bases: Exception

17.1.1.13.24. pysisyphus.optimizers.gdiis module

class pysisyphus.optimizers.gdiis.DIISResult(coeffs, coords, forces, energy, N, type)

Bases: tuple

N

Alias for field number 4

coeffs

Alias for field number 0

coords

Alias for field number 1

energy

Alias for field number 3

forces

Alias for field number 2

type

Alias for field number 5

pysisyphus.optimizers.gdiis.diis_result(coeffs, coords, forces, energy=None, prefix='')[source]
pysisyphus.optimizers.gdiis.from_coeffs(vec, coeffs)[source]
pysisyphus.optimizers.gdiis.gdiis(err_vecs, coords, forces, ref_step, max_vecs=5, test_direction=True)[source]
pysisyphus.optimizers.gdiis.gediis(coords, energies, forces, hessian=None, max_vecs=3)[source]
pysisyphus.optimizers.gdiis.log(msg)[source]
pysisyphus.optimizers.gdiis.valid_diis_direction(diis_step, ref_step, use)[source]
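
DIIS determines coefficients that minimize the norm of the linearly combined error vectors under the constraint that the coefficients sum to one, which leads to a small linear system. A generic sketch of this step (gdiis()/gediis() above add further safeguards, e.g. the direction test against the reference step):

    import numpy as np

    def diis_coeffs(err_vecs):
        # err_vecs: (n, N) array of error vectors, e.g. forces of the
        # last n cycles.
        E = np.asarray(err_vecs)
        n = len(E)
        B = E @ E.T                    # overlap matrix of the error vectors
        A = np.zeros((n + 1, n + 1))
        A[:n, :n] = B
        A[:n, n] = A[n, :n] = -1.0     # Lagrange-multiplier border
        rhs = np.zeros(n + 1)
        rhs[n] = -1.0
        return np.linalg.solve(A, rhs)[:n]  # coefficients sum to 1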

17.1.1.13.25. pysisyphus.optimizers.guess_hessians module

pysisyphus.optimizers.guess_hessians.fischer_guess(geom)[source]
pysisyphus.optimizers.guess_hessians.get_guess_hessian(geometry, hessian_init, int_gradient=None, cart_gradient=None, h5_fn=None)[source]

Obtain/calculate (model) Hessian.

For hessian_init="calc" the Hessian will be in the coord_type of the geometry, otherwise a Hessian in primitive internals will be returned.

pysisyphus.optimizers.guess_hessians.get_lindh_alpha(atom1, atom2)[source]
pysisyphus.optimizers.guess_hessians.improved_guess(geom, bond_func, bend_func, dihedral_func)[source]
pysisyphus.optimizers.guess_hessians.lindh_guess(geom)[source]

Slightly modified Lindh model Hessian as described in [1].

Instead of using the tabulated r_ref,ij values from [1], we use the 'true' covalent radii, as pyberny does. The tabulated r_ref,ij value for two carbons (2nd period) is 2.87 Bohr; carbon's covalent radius is ~1.44 Bohr, so 2 * 1.44 Bohr = 2.88 Bohr, which fits nicely with the tabulated value. If values for elements beyond the 3rd period are requested, the alpha values of the 3rd period are (re)used.

pysisyphus.optimizers.guess_hessians.lindh_style_guess(geom, ks, rhos)[source]

Approximate force constants according to Lindh.[1]

Bonds:     k_ij   = k_r * rho_ij
Bends:     k_ijk  = k_b * rho_ij * rho_jk
Dihedrals: k_ijkl = k_d * rho_ij * rho_jk * rho_kl
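
The rho_ij factors decay exponentially with the squared interatomic distance, rho_ij = exp(alpha_ij * (r_ref,ij**2 - r_ij**2)), while k_r, k_b and k_d are global constants (0.45, 0.15 and 0.005 in Lindh's paper [1]). A sketch under these assumptions:

    import numpy as np

    K_R, K_B, K_D = 0.45, 0.15, 0.005  # Lindh's global force constants [1]

    def rho(alpha, r_ref, r):
        # Exponentially decaying distance measure.
        return np.exp(alpha * (r_ref**2 - r**2))

    def bond_k(alpha, r_ref, r):
        return K_R * rho(alpha, r_ref, r)

    def bend_k(al_ij, rr_ij, r_ij, al_jk, rr_jk, r_jk):
        return K_B * rho(al_ij, rr_ij, r_ij) * rho(al_jk, rr_jk, r_jk)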

pysisyphus.optimizers.guess_hessians.simple_guess(geom)[source]

Default force constants.

pysisyphus.optimizers.guess_hessians.swart_guess(geom)[source]
pysisyphus.optimizers.guess_hessians.ts_hessian(hessian, coord_inds, damp=0.25)[source]

According to [3]

pysisyphus.optimizers.guess_hessians.xtb_hessian(geom, gfn=None)[source]

17.1.1.13.26. pysisyphus.optimizers.hessian_updates module

pysisyphus.optimizers.hessian_updates.bfgs_update(H, dx, dg)[source]
pysisyphus.optimizers.hessian_updates.bofill_update(H, dx, dg)[source]
pysisyphus.optimizers.hessian_updates.damped_bfgs_update(H, dx, dg)[source]

See [5]

pysisyphus.optimizers.hessian_updates.double_damp(s, y, H=None, s_list=None, y_list=None, mu_1=0.2, mu_2=0.2, logger=None)[source]

Double damped step 's' and gradient differences 'y'.

H is the inverse Hessian! See [6]. Potentially updates s and y. y is only updated if mu_2 is not None.

Parameters:
  • s (np.array, shape (N, ), floats) -- Coordinate differences/step.

  • y (np.array, shape (N, ), floats) -- Gradient differences.

  • H (np.array, shape (N, N), floats, optional) -- Inverse Hessian.

  • s_list (list of np.array, shape (K, N), optional) -- List of K previous steps. If no H is supplied and y_list is given, the matrix-vector product Hy will be calculated through the two-loop LBFGS recursion.

  • y_list (list of np.array, shape (K, N), optional) -- List of K previous gradient differences. See s_list.

  • mu_1 (float, optional) -- Parameter for 's' damping.

  • mu_2 (float, optional) -- Parameter for 'y' damping.

  • logger (logging.Logger, optional) -- Logger to be used.

Returns:

  • s (np.array, shape (N, ), floats) -- Damped coordinate differences/step.

  • y (np.array, shape (N, ), floats) -- Damped gradient differences.
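
A sketch of the double-damping recipe under the conventions stated above (H being the inverse Hessian); the damping criteria follow [6], but the details of the actual implementation may differ:

    import numpy as np

    def double_damp_sketch(s, y, H, mu_1=0.2, mu_2=0.2):
        # First (Powell-like) damping: interpolate s towards Hy, so that
        # s^T y stays sufficiently positive relative to y^T H y.
        Hy = H @ y
        yHy = y.dot(Hy)
        sy = s.dot(y)
        if sy < mu_1 * yHy:
            theta = (1.0 - mu_1) * yHy / (yHy - sy)
            s = theta * s + (1.0 - theta) * Hy
            sy = s.dot(y)
        # Second damping: interpolate y towards s, so that s^T y stays
        # sufficiently positive relative to s^T s. Skipped if mu_2 is None.
        if mu_2 is not None and sy < mu_2 * s.dot(s):
            theta = (1.0 - mu_2) * s.dot(s) / (s.dot(s) - sy)
            y = theta * y + (1.0 - theta) * s
        return s, y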

pysisyphus.optimizers.hessian_updates.flowchart_update(H, dx, dg)[source]
pysisyphus.optimizers.hessian_updates.mod_flowchart_update(H, dx, dg)[source]
pysisyphus.optimizers.hessian_updates.psb_update(z, dx)[source]
pysisyphus.optimizers.hessian_updates.sr1_update(z, dx)[source]
pysisyphus.optimizers.hessian_updates.ts_bfgs_update(H, dx, dg)[source]

As described in [7]

pysisyphus.optimizers.hessian_updates.ts_bfgs_update_org(H, dx, dg)[source]

Do not use! Implemented as described in the 1998 Bofill paper [8].

This does not seem to work too well.

pysisyphus.optimizers.hessian_updates.ts_bfgs_update_revised(H, dx, dg)[source]

TS-BFGS update as described in [9].

Better than the original formula of Bofill, worse than the implementation in [7]. a is calculated as described in footnote 1 on page 38. Eq. (8) looks suspicious, as it contains the inverse of a vector. As also outlined in the paper, abs(a) is used (|a| in the paper).

17.1.1.13.27. pysisyphus.optimizers.poly_fit module

class pysisyphus.optimizers.poly_fit.FitResult(x, y, polys)

Bases: tuple

polys

Alias for field number 2

x

Alias for field number 0

y

Alias for field number 1

pysisyphus.optimizers.poly_fit.cubic_fit(e0, e1, g0, g1)[source]
pysisyphus.optimizers.poly_fit.gen_solutions()[source]

Given two energies (e0, e1) and corresponding gradients (g0, g1) we can (try to) fit a quartic polynomial

f(x) = a0 + a1*x + a2*x**2 + a3*x**3 + a4*x**4

s.t. the constraint f''(x) >= 0, with the equality being fulfilled at only one point. There are five unknowns (a0 - a4) to be determined. Four equations can be derived from f(x) and its first derivative

f'(x) = a1 + 2*a2*x + 3*a3*x**2 + 4*a4*x**3 .

With (e0, g0) being given at x=0 and (e1, g1) being given at x=1 we can setup the following equations:

f (0) = a0    (1)
f'(0) = a1    (2)

using e0 and g0 at x=0, and

f (1) = a0 + a1 + a2 + a3 + a4       (3)
f'(1) = a1 + 2*a2 + 3*a3 + 4*a4 .    (4)

The missing last equation can be derived from the constraint. The second derivative of f(x) is

f''(x) = 2*a2 + 6*a3*x + 12*a4*x**2

and shall be positive except at one point where it is allowed to be 0, i.e., its two roots (f''(x) = 0) must be degenerate. This is fulfilled when the discriminant D of the quadratic polynomial a*x**2 + b*x + c is zero.

D = b**2 - 4*a*c = 0

With

a = 12*a4
b =  6*a3
c =  2*a2

we get

0 = (6*a3)**2 - 4*12*a4*2*a2
0 = 36*a3**2 - 96*a4*a2
0 = 3*a3**2 - 8*a4*a2    (5)

or

a4 = 3/8 * a3**2 / a2

Using (1) - (5) we can solve the set of equations for a0 - a4.
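
The system can also be solved symbolically. A small sketch with sympy (quartic_fit() may well solve it in closed form instead):

    import sympy as sym

    def quartic_coeffs(e0, e1, g0, g1):
        a0, a1 = e0, g0                          # Eqs. (1) and (2)
        a2, a3, a4 = sym.symbols("a2 a3 a4", real=True)
        sols = sym.solve(
            [
                a0 + a1 + a2 + a3 + a4 - e1,     # Eq. (3): f(1)  = e1
                a1 + 2*a2 + 3*a3 + 4*a4 - g1,    # Eq. (4): f'(1) = g1
                3*a3**2 - 8*a4*a2,               # Eq. (5): degenerate roots of f''
            ],
            (a2, a3, a4),
            dict=True,
        )
        return [(a0, a1, s[a2], s[a3], s[a4]) for s in sols]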

pysisyphus.optimizers.poly_fit.get_maximum(poly)[source]
pysisyphus.optimizers.poly_fit.get_minimum(poly)[source]

Generate directional gradients by projecting them on the previous step.

pysisyphus.optimizers.poly_fit.quartic_fit(e0, e1, g0, g1, maximize=False)[source]

See gen_solutions() for derivation.

pysisyphus.optimizers.poly_fit.quintic_fit(e0, e1, g0, g1, H0, H1)[source]

17.1.1.13.28. pysisyphus.optimizers.precon module

pysisyphus.optimizers.precon.get_lindh_k(atoms, coords3d, bonds=None, angles=None, torsions=None)[source]
pysisyphus.optimizers.precon.get_lindh_precon(atoms, coords, bonds=None, bends=None, dihedrals=None, c_stab=0.0103, logger=None)[source]

c_stab = 0.00103 hartree/bohr² corresponds to 0.1 eV/Å² as given in the paper.
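
The quoted value is simply the unit-converted stabilization constant; a quick check:

    # 0.1 eV/Å² expressed in hartree/bohr².
    EV2AU = 1 / 27.211386    # hartree per eV
    ANG2BOHR = 1 / 0.529177  # bohr per Å
    c_stab = 0.1 * EV2AU / ANG2BOHR**2
    print(f"{c_stab:.5f}")   # 0.00103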

pysisyphus.optimizers.precon.precon_getter(geom, c_stab=0.0103, kind='full', logger=None)[source]

17.1.1.13.29. pysisyphus.optimizers.restrict_step module

pysisyphus.optimizers.restrict_step.get_scale_max(max_element)[source]
pysisyphus.optimizers.restrict_step.restrict_step(steps, max_step)[source]
pysisyphus.optimizers.restrict_step.scale_by_max_step(steps, max_step)[source]
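
scale_by_max_step() corresponds to the step control described for several optimizers above (see, e.g., the control_step and max_step arguments): if the largest absolute step component exceeds max_step, the whole step vector is scaled down uniformly. A sketch:

    import numpy as np

    def scale_by_max_step(steps, max_step):
        max_comp = np.abs(steps).max()
        if max_comp > max_step:
            steps = steps * (max_step / max_comp)
        return steps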

17.1.1.13.30. Module contents

class pysisyphus.optimizers.CubicNewton(geometry, **kwargs)[source]

Bases: HessianOptimizer

optimize()[source]
postprocess_opt()[source]
class pysisyphus.optimizers.MicroOptimizer(geom, step='lbfgs', line_search=True, max_cycles=100000000, max_step=0.2, keep_last=10, rms_force=None, double_damp=True, dump=False, **kwargs)[source]

Bases: object

cg_step(forces)[source]
lbfgs_step(forces)[source]
log(msg)[source]
optimize()[source]
run()[source]
sd_step(forces)[source]
take_step(energy, forces, return_step=False)[source]