# Worked Examples: Newton-Raphson Method
These worked solutions correspond to the exercises on the Newton-Raphson Method page.
How to use this notebook:

- Try each exercise yourself first before looking at the solution
- The code cells show both the code and its output
- Download this notebook if you want to run and experiment with the code yourself
- Your solution might look different - that’s fine as long as it gives the correct answer!
## Exercise: Newton-Raphson Method for Harmonic Potential
Problem: Implement the Newton-Raphson method to find the minimum of the harmonic potential energy surface. The Newton-Raphson update rule is:

\[
r_{\text{new}} = r - \frac{U'(r)}{U''(r)}
\]
We’ll explore the remarkable property that for a harmonic potential, this method converges to the exact minimum in a single step, regardless of the starting position.
### Part 1: Define the second derivative
First, we need to define the second derivative of the harmonic potential:

\[
\frac{d^2U}{dr^2} = k
\]

For the harmonic potential \(U(r) = \frac{k}{2}(r - r_0)^2\), the second derivative is simply the force constant \(k\); it’s a constant that doesn’t depend on \(r\).
```python
import numpy as np


def harmonic_potential(r, k, r_0):
    """Calculate the harmonic potential energy.

    Args:
        r: Bond length (Å). Can be a float or np.ndarray.
        k: Force constant (eV Å⁻²).
        r_0: Equilibrium bond length (Å).

    Returns:
        Potential energy (eV). Type matches input r.
    """
    return 0.5 * k * (r - r_0) ** 2


def harmonic_gradient(r, k, r_0):
    """Calculate the first derivative of the harmonic potential.

    Args:
        r: Bond length (Å). Can be a float or np.ndarray.
        k: Force constant (eV Å⁻²).
        r_0: Equilibrium bond length (Å).

    Returns:
        First derivative dU/dr (eV Å⁻¹). Type matches input r.
    """
    return k * (r - r_0)


def harmonic_second_derivative(r, k, r_0):
    """Calculate the second derivative of the harmonic potential.

    Args:
        r: Bond length (Å). Can be a float or np.ndarray.
        k: Force constant (eV Å⁻²).
        r_0: Equilibrium bond length (Å).

    Returns:
        Second derivative d²U/dr² (eV Å⁻²). Type matches input r.
    """
    return k  # Constant, independent of r


# Parameters for H₂
k = 36.0  # eV Å⁻²
r_0 = 0.74  # Å

# Test the second derivative function
r_test = 1.0  # Å
second_derivative = harmonic_second_derivative(r_test, k, r_0)
print(f"Second derivative at r = {r_test} Å: U''(r) = {second_derivative:.3f} eV Å⁻²")
```

```
Second derivative at r = 1.0 Å: U''(r) = 36.000 eV Å⁻²
```
### Part 2: Demonstrate one-step convergence from r = 1.0 Å
Let’s apply the Newton-Raphson equation once and see what happens:
```python
# Starting position
r_start = 1.0  # Å

# Calculate derivatives at starting position
gradient = harmonic_gradient(r_start, k, r_0)
second_deriv = harmonic_second_derivative(r_start, k, r_0)

print(f"Starting from r = {r_start} Å\n")
print(f"First derivative: U'(r) = {gradient:.6f} eV Å⁻¹")
print(f"Second derivative: U''(r) = {second_deriv:.6f} eV Å⁻²")

# Apply Newton-Raphson equation once
r_new = r_start - gradient / second_deriv

print("\nAfter one Newton-Raphson step:")
print(f"  New position: r = {r_new:.6f} Å")
print(f"  Analytical solution: r_0 = {r_0} Å")
print(f"  Difference: {abs(r_new - r_0):.10f} Å")
```

```
Starting from r = 1.0 Å

First derivative: U'(r) = 9.360000 eV Å⁻¹
Second derivative: U''(r) = 36.000000 eV Å⁻²

After one Newton-Raphson step:
  New position: r = 0.740000 Å
  Analytical solution: r_0 = 0.74 Å
  Difference: 0.0000000000 Å
```
Explanation:
The Newton-Raphson method converges to the exact minimum in a single step! Starting from \(r = 1.0\) Å, we arrive at \(r = 0.74\) Å after just one iteration.
Let’s understand why this happens. For the harmonic potential:

\[
U'(r) = k(r - r_0), \qquad U''(r) = k
\]

Substituting into the Newton-Raphson equation:

\[
r_{\text{new}} = r - \frac{U'(r)}{U''(r)} = r - \frac{k(r - r_0)}{k}
\]

The \(k\) terms cancel:

\[
r_{\text{new}} = r - (r - r_0) = r_0
\]
So regardless of where we start, a single Newton-Raphson step takes us directly to \(r_0\)!
Why does this work? The Newton-Raphson method assumes the function is locally quadratic. For a harmonic potential, the function is exactly quadratic everywhere, so the “local” approximation is actually perfect globally. The method doesn’t need iteration because its fundamental assumption is exactly satisfied.
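This cancellation can also be checked numerically. The sketch below applies a single Newton-Raphson step, using the same harmonic derivative formulas as above, for a few arbitrary parameter combinations (these values are illustrative, not taken from the exercises):

```python
# One Newton-Raphson step on a harmonic potential lands on r_0
# for any force constant k, equilibrium length r_0, and start r.
def newton_raphson_step(r, k, r_0):
    gradient = k * (r - r_0)   # U'(r)
    curvature = k              # U''(r), constant for a harmonic potential
    return r - gradient / curvature

# Arbitrary illustrative parameter combinations (k in eV Å⁻², lengths in Å)
for k, r_0, r_start in [(36.0, 0.74, 1.0), (5.0, 1.10, -3.0), (450.0, 0.96, 10.0)]:
    r_new = newton_raphson_step(r_start, k, r_0)
    print(f"k = {k:6.1f}, start = {r_start:5.1f} -> r_new = {r_new:.10f} (r_0 = {r_0})")
```

In every case the new position equals \(r_0\) to floating-point precision, regardless of how far away the starting position is.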
### Part 3: Testing different starting positions
Let’s verify that one-step convergence works from any starting position:
```python
# Test three different starting positions
starting_positions = [0.5, 1.0, 1.5]  # Å

print("Testing Newton-Raphson from different starting positions:\n")
print("Starting r (Å) | U'(r) (eV Å⁻¹) | Final r (Å) | Error (Å)")
print("-" * 65)

for r_start in starting_positions:
    # Calculate derivatives
    gradient = harmonic_gradient(r_start, k, r_0)
    second_deriv = harmonic_second_derivative(r_start, k, r_0)

    # Apply Newton-Raphson equation once
    r_final = r_start - gradient / second_deriv
    error = abs(r_final - r_0)

    print(f"{r_start:14.2f} | {gradient:15.6f} | {r_final:11.8f} | {error:.2e}")

print(f"\nAnalytical minimum: r_0 = {r_0} Å")
```

```
Testing Newton-Raphson from different starting positions:

Starting r (Å) | U'(r) (eV Å⁻¹) | Final r (Å) | Error (Å)
-----------------------------------------------------------------
          0.50 |       -8.640000 |  0.74000000 | 0.00e+00
          1.00 |        9.360000 |  0.74000000 | 0.00e+00
          1.50 |       27.360000 |  0.74000000 | 0.00e+00

Analytical minimum: r_0 = 0.74 Å
```
Explanation:
All three starting positions converge to exactly \(r_0 = 0.74\) Å in a single step:

- r = 0.5 Å (below the minimum):
  - Gradient is negative (−8.64 eV Å⁻¹), pointing left/down
  - Newton-Raphson steps right to reach 0.74 Å
- r = 1.0 Å (above the minimum):
  - Gradient is positive (+9.36 eV Å⁻¹), pointing right/up
  - Newton-Raphson steps left to reach 0.74 Å
- r = 1.5 Å (well above the minimum):
  - Gradient is strongly positive (+27.36 eV Å⁻¹)
  - Takes a large step left to reach 0.74 Å
The size and direction of each step are automatically calibrated by the ratio \(\frac{U'(r)}{U''(r)}\) to land exactly at the minimum.
### Comparison with gradient descent
Let’s compare Newton-Raphson to grid search and gradient descent for this problem:
| Method | Iterations to converge | Key feature |
|---|---|---|
| Grid search | N/A (discrete) | Limited by grid spacing |
| Gradient descent (fixed Δr = 0.01 Å) | 26 | Simple but slow |
| Gradient descent (adaptive α = 0.01) | 21 | Faster, but needs tuning |
| Newton-Raphson | 1 | Perfect for quadratic functions |
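The contrast can be reproduced with a minimal sketch. The fixed step size and convergence threshold below are illustrative choices rather than the exact settings of the earlier exercises, so the printed iteration count need not match the table exactly:

```python
# Illustrative comparison on the harmonic potential (k = 36 eV Å⁻², r_0 = 0.74 Å).
k, r_0 = 36.0, 0.74

# Fixed-step gradient descent: move a fixed distance against the gradient's sign
r = 1.0           # starting position (Å)
step = 0.01       # fixed step size (Å), an illustrative choice
threshold = 1e-2  # stop when |U'(r)| drops below this (eV Å⁻¹)
gd_iterations = 0
while abs(k * (r - r_0)) > threshold and gd_iterations < 1000:
    r -= step if k * (r - r_0) > 0 else -step
    gd_iterations += 1

# Newton-Raphson: a single curvature-informed step
r_nr = 1.0 - (k * (1.0 - r_0)) / k

print(f"Gradient descent: {gd_iterations} iterations, final r = {r:.4f} Å")
print(f"Newton-Raphson:   1 iteration,  final r = {r_nr:.4f} Å")
```

Gradient descent needs tens of small steps to creep toward the minimum; Newton-Raphson reaches it in one.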
Why is Newton-Raphson so effective here?

1. Exact for quadratic functions: The harmonic potential is exactly quadratic, and Newton-Raphson assumes a quadratic function, so the approximation is perfect.
2. Uses curvature information: By including the second derivative, Newton-Raphson knows how “curved” the function is and can predict exactly where the minimum lies.
3. Automatic step size: The ratio \(\frac{U'(r)}{U''(r)}\) automatically determines the optimal step size; no tuning of \(\alpha\) or \(\Delta r\) is required.
### Will this work for other potentials?
Important question: Will Newton-Raphson always converge in one step for other potential energy functions?
Answer: No. The one-step convergence property is special to exactly quadratic (harmonic) functions.
For the Lennard-Jones potential:

\[
U(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]
\]
This is not quadratic, so:

- The local quadratic approximation is only approximate
- Multiple iterations will be needed
- Convergence is not guaranteed from all starting positions
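To see the iterative behaviour, here is a minimal sketch of Newton-Raphson on a Lennard-Jones potential with analytic derivatives. The \(\varepsilon\), \(\sigma\), and starting-guess values are arbitrary illustrations in reduced units, not parameters from the exercises:

```python
# Newton-Raphson on a Lennard-Jones potential: several iterations are
# needed because the potential is not quadratic.
epsilon, sigma = 1.0, 1.0  # illustrative values, reduced units


def lj_gradient(r):
    # dU/dr for U(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)
    return 4 * epsilon * (-12 * sigma**12 / r**13 + 6 * sigma**6 / r**7)


def lj_second_derivative(r):
    # d²U/dr² of the same potential
    return 4 * epsilon * (156 * sigma**12 / r**14 - 42 * sigma**6 / r**8)


r = 1.15  # starting guess, inside the region of positive curvature
for iteration in range(1, 21):
    step = lj_gradient(r) / lj_second_derivative(r)
    r -= step
    if abs(step) < 1e-10:
        break

print(f"Converged to r = {r:.6f} after {iteration} iterations")
print(f"Analytical minimum: 2^(1/6) * sigma = {2 ** (1 / 6):.6f}")
```

Convergence is rapid once the iterate is near the minimum, but a starting guess beyond the inflection point (where \(U''(r)\) turns negative) would send the method in the wrong direction.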
A critical limitation: Newton-Raphson can fail or even diverge if:

- The second derivative is negative (negative curvature: the function is locally concave down rather than concave up)
- The second derivative is very small (nearly flat region)
- The starting position is very far from the minimum
The method works best when:

- The function is approximately quadratic near the minimum
- You have a reasonable starting guess
- The region around your starting point has positive curvature (\(U''(r) > 0\))
What you’ll discover: In the synoptic exercise, you’ll explore how Newton-Raphson performs on the Lennard-Jones potential, where it can converge very rapidly from some starting positions but may struggle or fail from others. You’ll see that the choice of starting position matters much more for Newton-Raphson than for gradient descent, and you’ll learn to recognize when each method is most appropriate.