More than once in this course, we have discussed the fact that the power of the Python programming language comes from the vast ecosystem of libraries that we can utilise.
Previously, you have written your own minimisation algorithms in Python, using loops and functions. However, perhaps unsurprisingly, there is a library that can help us with minimisation, offering built-in algorithms far more powerful than those you have written.
This library is called `scipy` (pronounced "Sigh-Pie"). The `scipy` library has a huge variety of scientific computing utilities (including the `constants` module that you encountered previously), but in this section we will focus on the `minimize` function. This function lives in the `optimize` module of the `scipy` library, so to access it we must import it as follows.
```python
from scipy.optimize import minimize
```
Let us first investigate the documentation of the `minimize` function; this is available online, or it can be printed in a Jupyter Notebook with the following command.
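One way to print the documentation is Python's built-in `help` function:

```python
from scipy.optimize import minimize

# Print the full docstring for the minimize function.
help(minimize)
```

In a Jupyter Notebook, appending a question mark (`minimize?`) achieves the same thing.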
We will not reproduce the documentation directly in this course text, as it is very long, but we suggest that you study it. It is okay if you do not understand everything discussed; here we will go through some practical applications.
From the documentation, we can see that the `minimize` function has two required arguments:

- `fun`: the function that we want to minimise
- `x0`: our initial guess of the parameter used in the minimisation

The only optional argument we will look at from the documentation is `args`: the invariant parameters in our model that should be passed to our function. We will overlook the other optional arguments for now, but the interested reader may want to investigate these further.
We want to minimise the potential energy from the Lennard-Jones potential, \(E(r) = \frac{A}{r^{12}} - \frac{B}{r^6}\). This means writing a function for the potential energy; this is the `fun` discussed above.
```python
def lennard_jones(r, A, B):
    """
    Lennard-Jones potential energy.

    Args:
        r (float): the distances to find the energy for (Å).
        A (float): the repulsive prefactor (eV/Å^12).
        B (float): the attractive prefactor (eV/Å^6).

    Returns:
        (float): the potential energy at each r.
    """
    return A / r**12 - B / r**6
```
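As a quick sanity check, we can evaluate this function at a few separations (the prefactor values here match those used in the minimisation below):

```python
def lennard_jones(r, A, B):
    """Lennard-Jones potential energy (as defined above)."""
    return A / r**12 - B / r**6

# Prefactors: A in eV Å^12, B in eV Å^6.
A = 1e5
B = 40

# The energy is repulsive (positive) at short separations and
# attractive (negative) near the minimum.
for r in [3.5, 4.0, 4.5]:
    print(f"r = {r:.1f} Å  E = {lennard_jones(r, A, B):.4f} eV")
```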
Note that in the documentation, it says that the `minimize` function should be passed a function with the form

```
fun(x, *args) -> float
```

Our `lennard_jones` function above maps onto this form, where `r` is our `x` and `A, B` are the `*args` (note that the `*` means this can be any number of arguments). The `args` are not varied during the minimisation process; only `x` is. The `minimize` function then works as follows.
```python
A = 1e5
B = 40

result = minimize(lennard_jones, [3.5], args=(A, B))
print(result)
```
```
      fun: -0.003999999054071188
 hess_inv: array([[54.70574037]])
      jac: array([-5.64771472e-06])
  message: 'Optimization terminated successfully.'
     nfev: 18
      nit: 8
     njev: 9
   status: 0
  success: True
        x: array([4.13485048])
```
The object returned by the `minimize` function is a special object that gives a lot of information about the minimisation. The value we are interested in is the position of the minimum-energy separation, which is the `x` attribute of this object; the energy at that minimum is stored in the `fun` attribute.
This agrees well with the value that we found when writing our own minimisation methods.
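These attributes can be accessed directly on the result object; a short sketch (reusing the `lennard_jones` function and prefactor values from above):

```python
from scipy.optimize import minimize

def lennard_jones(r, A, B):
    """Lennard-Jones potential energy (as defined above)."""
    return A / r**12 - B / r**6

A = 1e5
B = 40
result = minimize(lennard_jones, [3.5], args=(A, B))

# x is an array (one element per varied parameter), so index it
# to obtain the equilibrium separation as a plain float.
r_min = result.x[0]
# fun holds the potential energy at the minimum.
e_min = result.fun
print(f"r_min = {r_min:.2f} Å, E_min = {e_min:.4f} eV")
```

For this potential the minimum can also be found analytically at \(r = (2A/B)^{1/6}\), which is a useful check on the numerical result.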
Write a function to model the following potential energy function; this function describes the potential energy of the Cl-C-C-Cl dihedral angle in a molecule of 1,2-dichloroethane, where \(A_1 = 55.229\) kJ/mol, \(A_2 = 3.3472\) kJ/mol, \(A_3 = -58.576\) kJ/mol and \(\phi\) is the dihedral angle.
Plot this function between \(-\pi\) and \(\pi\); remember to label your axes.
Then use `minimize` to test different initial values of `x0` for \(\phi\). Consider why the result is so dependent on the starting position for this function (it may help to plot your result on the function).
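To illustrate why the starting position matters, here is a sketch using a simple stand-in function with several local minima (this is *not* the dihedral potential above, just a hypothetical example):

```python
import numpy as np
from scipy.optimize import minimize

def bumpy(phi):
    """A stand-in function with several local minima between -pi and pi."""
    return np.cos(3 * phi) + 0.5 * np.cos(phi)

# Different starting guesses converge to different local minima,
# because minimize only follows the local gradient downhill.
for x0 in [-2.5, 0.5, 2.5]:
    result = minimize(bumpy, [x0])
    print(f"x0 = {x0:+.1f} -> phi = {result.x[0]:+.3f}, E = {result.fun:.3f}")
```

A gradient-based minimiser like this has no way of knowing that a deeper minimum exists elsewhere, which is exactly the behaviour you should observe with the dihedral potential.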