# Gradient Descent Method
A more efficient approach is to use information about the slope of the potential energy surface to guide our search.
The gradient descent method (also called the steepest descent method) works by analogy to releasing a ball on a hill and letting it roll to the bottom. At any point on our potential energy surface, the gradient tells us which direction is “uphill” and how steep the surface is at this point. With this information, we can step in the opposite direction (i.e., downhill), then recalculate the gradient at our new position, and repeat until we reach a point where the gradient is \(\sim0\).
The simplest implementation of this method is to move a fixed distance every step.
## Algorithm
1. Start at an initial guess position \(r_0\).
2. Calculate the gradient \(U^\prime = \frac{dU}{dr}\) at this point.
3. Move in the direction opposite to the gradient (i.e., downhill) by some amount \(\Delta r\).
4. Repeat until the gradient is close to zero.
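As an illustration of these steps in code, a minimal fixed-step sketch might look something like the following. The toy `dUdr` function, starting position, step length, and tolerance are all assumptions for illustration; the exercise below builds the real version for your harmonic potential.

```python
import math

def dUdr(r):
    """Placeholder gradient dU/dr: a toy potential with its minimum at r = 1.5 (assumed)."""
    return 2.0 * (r - 1.5)

r = 1.0      # initial guess position (assumed value)
dr = 0.05    # fixed step length (assumed value)

for step in range(50):                         # cap on the number of iterations
    if abs(dUdr(r)) < 1e-3:                    # gradient ~ 0: we have reached the minimum
        break
    r = r - dr * math.copysign(1.0, dUdr(r))   # move a fixed distance dr downhill

print(r)
```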
## Exercise:
Write a function to calculate the first derivative of a harmonic potential:
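For a harmonic potential \(U(r) = \frac{k}{2}(r - r_0)^2\), the derivative is \(\frac{dU}{dr} = k(r - r_0)\). A minimal sketch of such a function is shown below; the default values of `k` and `r0` are placeholders, so swap in the parameters of your own harmonic potential.

```python
def first_derivative(r, k=36.0, r0=0.74):
    """Return dU/dr for a harmonic potential U(r) = 0.5 * k * (r - r0)**2.

    k (force constant) and r0 (equilibrium bond length) are placeholder
    values; replace them with the parameters of your harmonic potential.
    """
    return k * (r - r0)
```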
Using this function, write code to perform a gradient descent search to find the minimum of your harmonic potential energy surface. Use \(r=1.0\) Å as your starting position and a step size of \(\Delta r=0.1\) Å.
Note: You should not need to run for more than 10 steps.
What happens using this algorithm when you get close to the minimum? What happens if you decrease the step size to \(\Delta r=0.01\) Å?
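One possible shape for this search is sketched below, reusing the `first_derivative` function from the sketch above; the loop layout and the `print` call are just illustrative choices.

```python
r = 1.0    # starting position in Å
dr = 0.1   # fixed step size in Å

for step in range(10):
    slope = first_derivative(r)   # defined in the sketch above
    if slope > 0:                 # step a fixed distance dr downhill
        r = r - dr
    else:
        r = r + dr
    print(step, round(r, 2))
```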
## Rescaling the Step Size
Using a fixed step size makes our method very sensitive to the choice of step size. Large step sizes will overshoot the minimum and then oscillate back and forth. Small step sizes will get closer to the minimum, but at the cost of needing to perform many more calculations.

A common approach designed to address this problem is to rescale the step size at each iteration, based on how far we think we are from the minimum. A simple model is that we can expect the gradient to be steep if we are a long way from the minimum, but shallow if we are already close to the minimum, so we make our step size proportional to the (negative) local gradient:

$$\Delta r_i = -\alpha \left.\frac{dU}{dr}\right|_{r_i}$$

or

$$\Delta r_i = \alpha F(r_i),$$

where \(F(r_i)\) is the force (i.e., the negative gradient of the energy) at \(r_i\), and \(\alpha\) is a proportionality constant that sets the overall scale of each step.
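Expressed in code, this is only a small change to the update step; a minimal sketch, assuming a `force` function returning \(F(r)\) is available:

```python
def descent_step(r, force, alpha=0.01):
    """One rescaled step: the move is proportional to the local force, so it
    is large far from the minimum and shrinks as the minimum is approached."""
    return r + alpha * force(r)
```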
## Exercise:
Write a new version of your steepest descent code that rescales the step to be proportional to the local force, with \(\alpha = 0.01\). You should write this as a loop that iteratively updates the current position, i.e.,

$$r_{i+1} = r_i + \alpha F(r_i).$$
By combining a suitable `if` statement with `break`, have your code stop and report the predicted equilibrium bond length when \(\left|U\right| < 0.001\).
How does changing \(\alpha\) affect your rate of convergence? Experiment with larger and smaller values of \(\alpha\).
Note: You might want to set a maximum number of iterations, e.g., 30.
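As a rough guide to how these pieces might fit together, here is one possible sketch; the helper functions, parameter values, and exact convergence test are illustrative assumptions rather than a definitive solution.

```python
def harmonic_energy(r, k=36.0, r0=0.74):
    """Harmonic potential energy; k and r0 are placeholder values."""
    return 0.5 * k * (r - r0) ** 2

def harmonic_force(r, k=36.0, r0=0.74):
    """Force F = -dU/dr for the harmonic potential."""
    return -k * (r - r0)

alpha = 0.01     # step-size scaling factor
r = 1.0          # starting position in Å
max_steps = 30   # maximum number of iterations

for step in range(max_steps):
    r = r + alpha * harmonic_force(r)      # rescaled downhill step
    if abs(harmonic_energy(r)) < 0.001:    # convergence test from the exercise
        print(f"Converged after {step + 1} steps: r = {r:.3f} Å")
        break
```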