Physics 366x


Learning goals

The goals of Physics 366x build on those of Physics 365x. We aim to teach you to use computational methods to solve real-world physics problems, and at the same time to help improve and solidify the material you are learning in the paradigms. Along the way, you should learn:

  1. Time-dependent Schroedinger’s equation
  2. Tunneling through and reflection from a potential barrier
  3. Wave packets
  4. Eigenfunctions and the energy eigenvalue equation
Numerical methods:
  1. Verlet’s method of integration applied to space as well as time
  2. Shooting algorithm
  3. The bisection method
Programming in python:
  1. Use of complex arrays

Time-dependent Schroedinger’s equation

In this module, we will solve the problem of a particle confined to a one-dimensional box with an arbitrary potential within the box in the time domain. To do this, we will solve the /time-dependent/ Schroedinger’s equation:

\mathcal{H}\psi(x,t) &= i\hbar \frac{\partial \psi(x,t)}{\partial
t} \\
-\frac{\hbar^2}{2m}\frac{\partial^2 \psi(x,t)}{\partial x^2} +
 V(x)\psi(x,t) &= i\hbar \frac{\partial \psi(x,t)}{\partial t}

This equation may not have been covered yet in the paradigms, but they will catch up with it by the end of the term.

Integrating Schroedinger’s equation

Schroedinger’s equation isn’t suitable for Verlet’s method, since it is only a first-order differential equation with respect to time. In addition, it is complicated by the fact that it is inherently complex. If the wavefunction starts out real, then it will immediately gain an imaginary part. We will integrate this equation using an approach similar to the Yee lattice that is used in the Finite Difference Time Domain (FDTD) method in photonics.

The key is that we always want to perform centered derivatives. Provided all our derivatives are centered, things will be okay.

We begin by writing the complex differential equation as two coupled real equations:

\psi(x,t) &= \psi_R(x,t) + i\psi_I(x,t) \\
-\frac{\hbar^2}{2m}\frac{\partial^2 \psi_R(x,t)}{\partial x^2} +
 V(x)\psi_R(x,t) &= -\hbar \frac{\partial \psi_I(x,t)}{\partial t} \\
-\frac{\hbar^2}{2m}\frac{\partial^2 \psi_I(x,t)}{\partial x^2} +
 V(x)\psi_I(x,t) &= \hbar \frac{\partial \psi_R(x,t)}{\partial t}

Now we need to rewrite the derivatives as centered finite differences. Let’s begin with the first time derivative.

-\frac{\hbar^2}{2m}\frac{\partial^2 \psi_R(x,t)}{\partial x^2} +
 V(x)\psi_R(x,t) &\approx -\hbar
  \frac{\psi_I(x,t+\frac{\Delta t}{2}) - \psi_I(x,t-\frac{\Delta t}{2})}{\Delta t}

At this point, we notice that we need to know \psi_I at half-integral time steps. Of course, we could instead have used whole time steps and divided by 2\Delta t, but the half-step approach is more conventional. So now we rewrite the equation for the imaginary part using a centered finite difference, keeping in mind that we need its value at half-integral timesteps.

-\frac{\hbar^2}{2m}\frac{\partial^2 \psi_I(x,t+\frac{\Delta t}{2})}{\partial x^2} +
 V(x)\psi_I(x,t+\frac{\Delta t}{2}) &\approx \hbar
  \frac{\psi_R(x,t+\Delta t) - \psi_R(x,t)}{\Delta t}

So now we see that things are looking good: both equations involve the real part at integral timesteps and the imaginary part at half-integral timesteps. We now just need to convert the spatial derivatives into finite differences, and then we’ll be almost done.

-\frac{\hbar^2}{2m}\frac{\psi_R(x+\Delta x,t) + \psi_R(x-\Delta
x,t) - 2\psi_R(x,t)}{\Delta x^2} +
 V(x)\psi_R(x,t) &\approx -\hbar
  \frac{\psi_I(x,t+\frac{\Delta t}{2}) - \psi_I(x,t-\frac{\Delta t}{2})}{\Delta t}

-\frac{\hbar^2}{2m}\frac{\psi_I(x+\Delta x,t+\frac{\Delta t}{2}) + \psi_I(x-\Delta x,t+\frac{\Delta t}{2})
      - 2\psi_I(x,t+\frac{\Delta t}{2})}
{\Delta x^2} +
 V(x)\psi_I(x,t+\frac{\Delta t}{2}) &\approx \hbar
  \frac{\psi_R(x,t+\Delta t) - \psi_R(x,t)}{\Delta t}

At this point, we can see the light at the end of the tunnel. If we know the imaginary part at time t-\frac{\Delta t}2 and the real part at time t, then we can solve the first equation to find the imaginary part at time t+\frac{\Delta t}2. Similarly, if we know the real part at time t and the imaginary part at time t+\frac{\Delta t}2, we can solve for the real part at time t+\Delta t.

\psi_I\left(x,t+\frac{\Delta t}{2}\right) &=
  \psi_I\left(x,t-\frac{\Delta t}{2}\right)
  + \frac{\hbar\Delta t}{2m}\left(
      \frac{\psi_R(x+\Delta x,t) + \psi_R(x-\Delta x,t)
            - 2\psi_R(x,t)}{\Delta x^2}\right)
  - \frac{\Delta t}{\hbar} V(x)\psi_R(x,t)

\psi_R\left(x,t+\Delta t\right) &= \psi_R(x,t)
-\frac{\hbar\Delta t}{2m}
\frac{\psi_I(x+\Delta x,t+\frac{\Delta t}{2}) + \psi_I(x-\Delta x,t+\frac{\Delta t}{2})
      - 2\psi_I(x,t+\frac{\Delta t}{2})}
{\Delta x^2} +
 \frac{\Delta t}{\hbar}V(x)\psi_I\left(x,t+\frac{\Delta t}{2}\right)

Write a program that accepts a function V(x) and a complex array whose real part equals \psi_R(t) and whose imaginary part equals \psi_I(t-\Delta t/2), and updates the array to hold \psi_R(t+\Delta t) and \psi_I(t+\Delta t/2).
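As a concrete starting point, here is a minimal sketch of such an update function in Python, following the two update equations above. It assumes natural units in which \hbar = m = 1 by default, takes the potential as an array sampled on the grid (you could equally pass a function and evaluate it), and pins the wavefunction to zero at the box walls; the name time_step and its exact signature are just one possible choice.

```python
import numpy as np

def time_step(psi, V, dx, dt, hbar=1.0, m=1.0):
    """Advance psi by one leapfrog time step, in place.

    On entry psi.real holds psi_R(t) and psi.imag holds psi_I(t - dt/2);
    on exit psi.real holds psi_R(t + dt) and psi.imag holds psi_I(t + dt/2).
    V is an array of the potential sampled on the same grid.
    """
    psi_R = psi.real.copy()
    psi_I = psi.imag.copy()
    # centered second spatial difference of psi_R at the interior points
    d2R = np.zeros_like(psi_R)
    d2R[1:-1] = (psi_R[2:] + psi_R[:-2] - 2*psi_R[1:-1]) / dx**2
    # update the imaginary part: psi_I(t - dt/2) -> psi_I(t + dt/2)
    psi_I += (hbar*dt/(2*m))*d2R - (dt/hbar)*V*psi_R
    psi_I[0] = psi_I[-1] = 0       # wavefunction vanishes at the walls
    # centered second spatial difference of the updated psi_I
    d2I = np.zeros_like(psi_I)
    d2I[1:-1] = (psi_I[2:] + psi_I[:-2] - 2*psi_I[1:-1]) / dx**2
    # update the real part: psi_R(t) -> psi_R(t + dt)
    psi_R += -(hbar*dt/(2*m))*d2I + (dt/hbar)*V*psi_I
    psi_R[0] = psi_R[-1] = 0
    psi[:] = psi_R + 1j*psi_I
```

A quick sanity check is that repeatedly stepping an eigenmode of the empty box should leave the total probability essentially unchanged.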

Creating animations

Once you’ve written the above function, you’ll want to see what is going on. To do so, write a wrapper function, which calls the above function to find out how the wavefunction changes, and then plots it as an animation. You could google for matplotlib animation, but to save you some time (and because there isn’t as good documentation out there as I’d like), you can use the following example:

import pylab

pylab.ion() # this turns on interaction---needed for animation

x = pylab.arange(0,7,0.01)            # x-array
line, = pylab.plot(x,pylab.sin(x))
for i in pylab.arange(1,100):
    line.set_ydata(pylab.sin(x+i/10.0))  # update the data
    pylab.draw()                         # redraw the canvas
    pylab.pause(0.01)                    # give the GUI time to refresh

I should note that the speed of this animation will depend on the speed of your computer. A simple, hokey way to slow things down in your simulation is to either increase your resolution in x or decrease your timestep \Delta t.

The above approach is what I recommend using for most purposes. It allows you to quickly run your program and see its results. However, sometimes (like when creating this webpage), you’d like to save your animation as a file. This is a bit more tedious, as you have to create a whole bunch of files and then join them together to create an animation file. Here is a simple example that does this. It isn’t strictly analogous to the previous program, but does something that is quite similar.

import os, pylab

os.system("rm -f _tmp*.png") # clean up any preexisting png files
x = pylab.arange(0,7,0.01)            # x-array
omega = 2 # radian/second
k = 1 # radian/meter
for t in pylab.arange(0,2*pylab.pi/omega,0.2):  # run for one period
   pylab.cla() # clear everything
   pylab.plot(x, pylab.sin(k*x-omega*t)) # plot a fresh copy
   fname = '_tmp%06.2f.png'%t # create a file name for this frame
   pylab.savefig(fname) # save this frame to disk
# I use the "convert" utility, which is part of the imagemagick
# package that is readily available on linux systems, to create an
# animated gif file.
os.system("convert  -delay 50 _tmp*.png movie.gif") # make the movie
os.system("rm _tmp*.png") # clean up all these png files we created

This is the sample movie output.

Creating a wave packet

We often like to work with plane waves:

\psi(x,t) &= e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}

but for many purposes they aren’t really very convenient. A plane wave describes a particle with a precisely known momentum \hbar
\mathbf{k}, but that is equally likely to be anywhere in space. At the other extreme, we could work with the eigenstate of position:

\psi(x,t=0) &= \delta(\mathbf{r})

which describes a particle that is precisely known to be located at the origin. This has a couple of problems associated with it. One is that we can’t very effectively represent a delta function on a grid with finite resolution. Another is that because of the uncertainty principle, a particle whose position is known so precisely could have any momentum, which means its position won’t stay well known for very long.

A standard approach for dealing with these issues is to construct a wave packet. A wave packet is a compromise, allowing us to describe a particle with a reasonably well-known position and a reasonably well-known momentum. We generally use Gaussian wave packets, which provide the best compromise in terms of total certainty, and correspond to a product of a Gaussian with a plane wave:

\psi(x,t) &\approx e^{-\frac{(\mathbf{r}-\mathbf{r}_0)^2}{2\sigma^2}}
    e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}

This describes a particle that is located at position \mathbf{r}_0 and is moving with momentum \hbar\mathbf{k}. The time dependence in the above formula is only approximate, since the momentum still has some uncertainty.

Your first application for your time-stepping code will be to construct a wave packet and observe its time dependence. Once you have done so, try seeing what happens when you modify the wave packet in different ways. You can try changing the packet width or the momentum, and you can try using the complex conjugate, or the real or imaginary part, of a wave packet. This is a chance to have fun! To start with, we can leave the potential V(x) at zero, so we’ll be looking at a simple particle confined to a box.
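One detail the leapfrog scheme forces on us is that the initial real part should be sampled at t = 0 and the initial imaginary part at t = -\Delta t/2. A minimal sketch of constructing such a packet (assuming natural units \hbar = m = 1 by default, the free-particle dispersion \omega = \hbar k^2/2m, and a made-up function name) might look like:

```python
import numpy as np

def gaussian_packet(x, x0, sigma, k, dt, hbar=1.0, m=1.0):
    """Gaussian wave packet centered at x0, moving with momentum hbar*k.

    The real part is sampled at t = 0 and the imaginary part at
    t = -dt/2, as the leapfrog time-stepping scheme requires.
    """
    omega = hbar*k**2/(2*m)        # free-particle dispersion relation
    envelope = np.exp(-(x - x0)**2/(2*sigma**2))
    psi_R = envelope*np.cos(k*x)               # Re psi at t = 0
    psi_I = envelope*np.sin(k*x + omega*dt/2)  # Im psi at t = -dt/2
    psi = psi_R + 1j*psi_I
    psi[0] = psi[-1] = 0           # enforce the box boundary conditions
    return psi
```

Pick x0 and sigma so the packet fits comfortably inside the box, with negligible amplitude at the walls.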


This is to give you an idea of what a wave packet might look like. You can do better than this.

Numerical stability

An algorithm is numerically unstable if small errors grow exponentially. In the particular case of our finite difference integration of Schroedinger’s equation, numerical stability is determined by the relationship between the resolution in space and in time, \Delta x and \Delta t. To understand numerical stability physically, it is often helpful to think about the relevant physical scales of the problem.

In this case, it is useful to ask how quickly the wave function is going to change, and then to ensure that our \Delta t is considerably smaller than that value. The Schroedinger equation governs how quickly the wave function will change:

-\frac{\hbar^2}{2m}\frac{\partial^2 \psi(x,t)}{\partial x^2} +
 V(x)\psi(x,t) &= i\hbar \frac{\partial \psi(x,t)}{\partial t}

We can now ask ourselves how quickly might the wave function be expected to change. The maximum magnitude of the second derivative \frac{\partial^2 \psi}{\partial x^2} will be approximately \frac{\psi}{\Delta x^2}, so we can write that

\Delta \psi \approx
   \frac{1}{\hbar}\left(\frac{\hbar^2}{2m}\frac{\psi}{\Delta x^2}
   + V \psi
   \right)\Delta t

so the maximum possible fractional change in \psi over one time step is given by

\left|\frac{\Delta \psi}{\psi}\right|_{max} \approx
   \frac{1}{\hbar}\left(\frac{\hbar^2}{2m}\frac{1}{\Delta x^2}
   + \left|V\right|_{max}
   \right)\Delta t

We expect that there will be problems if in one time step the wave function changes by anything close to 100%, so we have an upper limit on \Delta t.

\Delta t < \frac{\hbar}{\frac{\hbar^2}{2m}\frac{1}{\Delta
   x^2} + V_{max}}

This gives us an approximate value for the largest possible \Delta t. If we use too large a value for \Delta t, the most likely result is that our wave function will develop huge oscillations that grow exponentially. But we do want to pick a value for \Delta t that is as large as possible, so as to enable our simulation to run quickly.
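In practice it is convenient to compute this bound once, with a safety factor well below one. A small helper along these lines (a sketch with \hbar = m = 1 defaults; the name max_timestep and the choice of safety factor are made up) might be:

```python
def max_timestep(dx, Vmax, hbar=1.0, m=1.0, safety=0.1):
    """Rough upper bound on dt from the stability estimate above,
    multiplied by a safety factor well below one."""
    return safety*hbar/(hbar**2/(2*m*dx**2) + Vmax)
```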

Potential barriers

Now that you’ve got a wave packet, let’s try bouncing it off of something. Create a potential that looks like:

V(x) = V_{max} e^{-\frac{(x-x_0)^2}{d^2}}

Or alternatively, you could create a square well potential barrier. See what happens when you bounce the wave packet off this barrier. See what happens when you try setting V_{max} either greater or less than the kinetic energy of your wave packet. Note that you should pick d and x_0 such that the initial value of the wave packet doesn’t overlap with the barrier, and such that the barrier completely fits inside of your box.
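For instance, a minimal helper for the Gaussian barrier above (the name gaussian_barrier is made up) could be:

```python
import numpy as np

def gaussian_barrier(x, Vmax, x0, d):
    """Gaussian potential barrier of height Vmax, centered at x0,
    with characteristic width d."""
    return Vmax*np.exp(-(x - x0)**2/d**2)
```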


Here is an example of a wave packet bouncing off a barrier.

Energy eigenvalue equation

In this module, we will solve for the eigenstates of a particle confined to a one-dimensional box with an arbitrary potential within the box. To do this, we will solve the energy eigenvalue equation:

\mathcal{H}\psi_n(x) &= E_n \psi_n(x) \\
-\frac{\hbar^2}{2m}\frac{\partial^2 \psi_n(x)}{\partial x^2} +
 V(x)\psi_n(x) &= E_n \psi_n(x)

Unlike the previous module, this is an eigenvalue equation, which means that rather than solving for how the wave function evolves over time, we’re going to be looking for particular energies and particular wavefunctions that satisfy this equation. There will be an infinite number of solutions, so we’ll need to take some care to specify which solution we’re looking for. The conventional approach is to define n as the number of nodes in \psi_n(x).

I should note here that the solution to Schroedinger’s eigenvalue equation may be taken to be real for any bound state, provided there are no magnetic fields involved.


If we already knew the energy E_n, we could integrate Schroedinger’s equation just like we integrated F = ma using Verlet’s method. The code will look a little different because we will want to store \psi_n(x) in an array just like we did in the previous module.

In order to integrate using Verlet’s method, we need to know the value of \psi for two consecutive values of x. The first is easy: we know that the wave function must go to zero at the boundaries of a box. The second one is slightly more subtle, but turns out to be just as easy: we can pick any non-zero value for the value of \psi at the second point (x = \Delta x). The reason we can do this is that any function proportional to an eigenfunction is also an eigenfunction, which is a property of a linear differential equation.

So we can write the second derivative in Schroedinger’s eigenvalue equation as a finite difference and solve for \psi(x+\Delta x); that gives us an equation for the next point. We can continue finding the next point until we reach the other end of the box, at which time we will know \psi(x) everywhere within the box.

Write a function that does this. Given a function V(x), an array \psi(x) (to be filled in), an energy, and a box size, your function should solve for \psi(x). Plot its output for an interesting potential (you could use a sinusoidal potential, for instance) and a few different energies.
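Rewriting the eigenvalue equation as \psi'' = (2m/\hbar^2)(V - E)\psi, the finite-difference recurrence is \psi(x+\Delta x) = 2\psi(x) - \psi(x-\Delta x) + \Delta x^2 (2m/\hbar^2)(V(x)-E)\psi(x). A sketch of this integration in Python (natural units \hbar = m = 1 by default; the function name and the choice of returning a fresh array rather than filling one in are just one possible interface):

```python
import numpy as np

def integrate_eigenfunction(V, E, L, N, hbar=1.0, m=1.0):
    """Integrate the energy eigenvalue equation across a box of width L.

    V is a function of position, E a trial energy, and N the number of
    grid points.  psi(0) = 0 at the wall, and psi(dx) is set to an
    arbitrary small nonzero value, since eigenfunctions are only
    defined up to an overall scale.  Returns the grid x and psi.
    """
    x = np.linspace(0, L, N)
    dx = x[1] - x[0]
    psi = np.zeros(N)
    psi[1] = 1e-3   # arbitrary nonzero starting value
    for i in range(1, N-1):
        psi[i+1] = (2*psi[i] - psi[i-1]
                    + dx**2*(2*m/hbar**2)*(V(x[i]) - E)*psi[i])
    return x, psi
```

With V = 0 and E equal to a square-well eigenenergy n^2\pi^2\hbar^2/2mL^2, the result should come back to (nearly) zero at the far wall.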

One nice way to plot wave functions is to plot the wave function and the potential on the same plot, offsetting the wave function by its corresponding energy and scaling it arbitrarily (or normalizing it arbitrarily) so things are all visible. Write some code to do this, if you have time.

Aiming and bisection

When you looked at the solution for \psi(x) from the code you wrote when you plugged in some random energy, you hopefully didn’t like it very much. The wave function was zero at one of the two boundaries, but most likely wasn’t zero at the other boundary! The reason was that you probably didn’t use an eigenvalue of the Hamiltonian for your energy. If you did pick an eigenvalue, please try another energy and see what happens.

So while you’ve solved the differential equation, you haven’t yet solved the eigenvalue equation. Eigenvalue equations are funny that way. Fortunately, given the above function, it isn’t too hard to find the eigenvalue. Before you program anything, spend a little time running your function for different energies and try to find a few eigenvalues of the energy. Try looking for particular eigenstates, such as n = 5 or n = 3.

Once you’ve gotten the hang of finding eigenenergies by hand, write a function that solves for a particular specified eigenvalue and eigenfunction.
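One way to structure this is to treat the wavefunction’s value at the far wall as a function of the trial energy, and bisect on the energy until that boundary value crosses zero. The sketch below (natural units \hbar = m = 1 by default; all names made up) assumes you have bracketed a single eigenvalue, i.e. the boundary value has opposite signs at the two ends of the energy bracket:

```python
import numpy as np

def boundary_value(V, E, L, N, hbar=1.0, m=1.0):
    """psi at the far wall, after integrating the eigenvalue equation
    from psi(0) = 0, psi(dx) = small, with trial energy E."""
    x = np.linspace(0, L, N)
    dx = x[1] - x[0]
    psi = np.zeros(N)
    psi[1] = 1e-3
    for i in range(1, N-1):
        psi[i+1] = (2*psi[i] - psi[i-1]
                    + dx**2*(2*m/hbar**2)*(V(x[i]) - E)*psi[i])
    return psi[-1]

def find_eigenvalue(V, L, N, Elow, Ehigh, tol=1e-8):
    """Bisect on energy until the wavefunction vanishes at both walls.

    Assumes boundary_value changes sign between Elow and Ehigh, so the
    bracket contains an odd number of eigenvalues (ideally exactly one).
    """
    flow = boundary_value(V, Elow, L, N)
    while Ehigh - Elow > tol:
        Emid = 0.5*(Elow + Ehigh)
        fmid = boundary_value(V, Emid, L, N)
        if flow*fmid <= 0:       # sign change in the lower half
            Ehigh = Emid
        else:                    # sign change in the upper half
            Elow, flow = Emid, fmid
    return 0.5*(Elow + Ehigh)
```

To target a particular eigenstate n, you can count the nodes of the resulting wavefunction and adjust the bracket until the node count matches.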

Systems with barriers

Try creating a potential within your box such that there are multiple wells.

Simple harmonic oscillator

Try creating a simple harmonic oscillator potential within your box, and see what the energy eigenvalues are. Try finding a very high energy state (many nodes), and see where such a particle is likely to be found. Is it more likely to be found in a state with low potential energy, or high potential energy? Why? How does this relate to the classical limit, which we expect to show up at large quantum numbers, according to the Correspondence Principle?

The time dependence of an eigenfunction

Take an eigenfunction you calculated, and use it as the initial condition for your Schroedinger-equation solver.

Spherically symmetric potentials

When a potential is spherically symmetric (i.e. for central forces), the three-dimensional energy eigenvalue equation can be reduced to a one-dimensional radial equation:

-\frac{\hbar^2}{2m}\frac{d^2u(r)}{dr^2} + \left(\frac{\hbar^2
 l(l+1)}{2mr^2} + V(r)\right)u(r) &= E u(r) \\
-\frac{\hbar^2}{2m}\frac{d^2u(r)}{dr^2} + V_{eff}(r)u(r) &= E u(r)

where I have assumed that one of the two particles in question is far more massive than the other, and thus that the reduced mass is approximately equal to the mass of the lighter particle (the electron, in the case of the hydrogen atom). As you can see, the radial equation looks just like the one-dimensional energy eigenvalue equation, with the single change that the potential is replaced with an angular-momentum-dependent effective potential. Therefore, we can easily modify our code to compute the eigenstates and eigenvalues for any central force.
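Since only the potential changes, the modification can be as small as wrapping the potential in the centrifugal term (a sketch with \hbar = m = 1 defaults; the name effective_potential is made up):

```python
def effective_potential(V, l, hbar=1.0, m=1.0):
    """Return the effective radial potential for angular momentum l:
    the true potential plus the centrifugal barrier."""
    def Veff(r):
        return V(r) + hbar**2*l*(l+1)/(2*m*r**2)
    return Veff
```

The returned function can then be passed straight to the one-dimensional eigenvalue solver, with u(r) in place of \psi(x).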

Three-dimensional simple harmonic oscillator

V(r) = \frac12 k r^2

This is the three-dimensional simple harmonic oscillator. It is a particularly interesting potential, because we can perform separation of variables either in Cartesian coordinates, or in spherical coordinates. Therefore, you will be able to relate your solutions to the one-dimensional simple harmonic oscillator solutions you found previously.

Hydrogen atom

V(r) = -\frac{e^2}{4\pi\epsilon_0 r}

Try solving for the eigenenergies of the hydrogen atom.