The goals of Physics 366x build on those of Physics 365x. We aim to teach you to use computational methods to solve real-world physics problems, and at the same time to help improve and solidify the material you are learning in the paradigms. Along the way, you should learn:

- Physics:
  - The time-dependent Schroedinger equation
  - Tunneling through and reflection from a potential barrier
  - Wave packets
  - Eigenfunctions and the energy eigenvalue equation

- Numerical methods:
  - Verlet's method of integration applied to space as well as time
  - The shooting algorithm
  - The bisection method

- Programming in python:
  - Use of complex arrays

In this module, we will solve the problem of a particle confined to a one-dimensional box with an arbitrary potential within the box in the time domain. To do this, we will solve the /time-dependent/ Schroedinger's equation:

\[ i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\,\psi \]

This equation may not have been covered yet in the paradigms, but the paradigms will catch up with it by the end of the term.

Schroedinger’s equation isn’t suitable for Verlet’s method, since it
is only a *first-order* differential equation with respect to time.
In addition, it is complicated by the fact that the wavefunction is
inherently complex: even if it starts out real, it will
immediately gain an imaginary part. We will integrate this equation
using an approach similar to the *Yee lattice* that is used in the
Finite Difference Time Domain (FDTD) method in photonics.

The key is that we always want to perform *centered derivatives*.
Provided all our derivatives are centered, our errors will be second
order in the step size.
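To see why centering matters, compare a one-sided difference with a centered one for a function whose derivative we know exactly (a quick illustration, not part of the module itself):

```python
import numpy as np

f, fprime = np.sin, np.cos   # test function with a known derivative
x, h = 1.0, 0.01

forward = (f(x + h) - f(x))/h            # one-sided: error of order h
centered = (f(x + h) - f(x - h))/(2*h)   # centered: error of order h**2

print(abs(forward - fprime(x)), abs(centered - fprime(x)))
```

The centered difference error is smaller by roughly a factor of h, which is why we insist on centering every derivative in the scheme below.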

We begin by writing the complex differential equation as two coupled real equations, with ψ = ψ_R + iψ_I:

\[ \hbar\frac{\partial\psi_R}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi_I}{\partial x^2} + V(x)\,\psi_I \]

\[ \hbar\frac{\partial\psi_I}{\partial t} = \frac{\hbar^2}{2m}\frac{\partial^2\psi_R}{\partial x^2} - V(x)\,\psi_R \]

Now we need to rewrite the derivatives as centered finite differences. Let's begin with the first time derivative:

\[ \frac{\partial\psi_R}{\partial t} \approx \frac{\psi_R(x, t+\Delta t) - \psi_R(x, t)}{\Delta t} \]

This difference is centered at time t + Δt/2. At this point, we
notice that we need to know ψ_I at *half-integral* time steps, since
the right-hand side must be evaluated at the time where the derivative
is centered. Of course, we could have differenced across two steps and
divided by 2Δt, but this approach is more conventional. So now we
rewrite the time derivative of the imaginary part using a centered
finite difference, keeping in mind that we need to know its value at
half-integral timesteps:

\[ \frac{\partial\psi_I}{\partial t} \approx \frac{\psi_I(x, t+\Delta t/2) - \psi_I(x, t-\Delta t/2)}{\Delta t} \]

This difference is centered at time t, where we know the real part.

So now we see that things are looking good: both equations involve the real part at integral timesteps and the imaginary part at half-integral timesteps. We now just need to convert the spatial derivatives into centered finite differences:

\[ \frac{\partial^2\psi}{\partial x^2} \approx \frac{\psi(x+\Delta x) - 2\psi(x) + \psi(x-\Delta x)}{\Delta x^2} \]

and then we'll be almost done.

At this point, we can see the light at the end of the tunnel. If we know the imaginary part at time t − Δt/2 and the real part at time t, then we can use the equation for ∂ψ_I/∂t to find the imaginary part at time t + Δt/2. Similarly, if we know the real part at time t and the imaginary part at time t + Δt/2, we can use the equation for ∂ψ_R/∂t to solve for the real part at time t + Δt.

Write a program that accepts a potential function V(x) and a complex array with real part equal to ψ_R(x, t) and imaginary part equal to ψ_I(x, t − Δt/2), and updates the array to hold ψ_R(x, t + Δt) and ψ_I(x, t + Δt/2).
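As a hint, one staggered update of the kind described above might look like the following sketch. Natural units (ℏ = m = 1), a uniform grid, and hard walls at the box boundaries are my assumptions here; the function name and conventions are illustrative, not prescribed:

```python
import numpy as np

hbar = m = 1.0  # natural units (an assumption, not part of the assignment)

def step(psi, V, dx, dt):
    """Advance the wavefunction one staggered time step.

    On entry, psi.real holds the real part at time t and psi.imag holds
    the imaginary part at t - dt/2; on exit, they hold the values at
    t + dt and t + dt/2.  The endpoints (the box walls) stay zero.
    """
    R, I = psi.real.copy(), psi.imag.copy()
    # H acting on R, using the centered second difference in space
    HR = np.zeros_like(R)
    HR[1:-1] = (-hbar**2/(2*m)*(R[2:] - 2*R[1:-1] + R[:-2])/dx**2
                + V[1:-1]*R[1:-1])
    I -= dt/hbar*HR        # advance the imaginary part to t + dt/2
    # H acting on the freshly updated imaginary part
    HI = np.zeros_like(I)
    HI[1:-1] = (-hbar**2/(2*m)*(I[2:] - 2*I[1:-1] + I[:-2])/dx**2
                + V[1:-1]*I[1:-1])
    R += dt/hbar*HI        # advance the real part to t + dt
    psi.real, psi.imag = R, I
```

Calling such a function repeatedly evolves the wavefunction; a useful sanity check is that the norm ∑|ψ|²Δx stays approximately constant when Δt is small enough.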

Once you’ve written the above function, you’ll want to see what is
going on. To do so, write a wrapper function, which calls the above
function to find out how the wavefunction changes, and then plots it
as an animation. You could google for *matplotlib animation*, but to
save you some time (and because the documentation out there isn't as
good as I'd like), you can use the following example:

```
import pylab
pylab.ion() # this turns on interaction---needed for animation
x = pylab.arange(0,7,0.01) # x-array
line, = pylab.plot(x,pylab.sin(x))
pylab.xlabel('x')
pylab.ylabel(r'$\psi(x)$')
for i in pylab.arange(1,100):
    line.set_ydata(pylab.sin(x+i/10.0)) # update the data
    pylab.draw() # redraw the canvas
```

I should note that the speed of this animation will depend on the speed of your computer. A simple, hokey way to slow things down in your simulation is to either increase your resolution in x or decrease your timestep Δt.

The above approach is what I recommend using for most purposes. It allows you to quickly run your program and see its results. However, sometimes (like when creating this webpage), you’d like to save your animation as a file. This is a bit more tedious, as you have to create a whole bunch of files and then join them together to create an animation file. Here is a simple example that does this. It isn’t strictly analogous to the previous program, but does something that is quite similar.

```
import os, pylab
os.system("rm -f _tmp*.png") # clean up any preexisting png files
x = pylab.arange(0,7,0.01) # x-array
omega = 2 # radian/second
k = 1 # radian/meter
for t in pylab.arange(0,2*pylab.pi/omega,0.2): # run for one period
    pylab.cla() # clear everything
    pylab.plot(x, pylab.sin(k*x-omega*t)) # plot a fresh copy
    fname = '_tmp%06.2f.png'%t # create a file name for this frame
    pylab.savefig(fname)
# I use the "convert" utility, which is part of the imagemagick
# package that is readily available on linux systems, to create an
# animated gif file.
os.system("convert -delay 50 _tmp*.png movie.gif") # make the movie
os.system("rm _tmp*.png") # clean up all these png files we created
```

We often like to work with plane waves:

\[ \psi(x,t) = e^{i(kx - \omega t)} \]

but for many purposes they aren't really very convenient. A plane wave describes a particle with a precisely known momentum p = ℏk, but one that is equally likely to be found anywhere in space. At the other extreme, we could work with the eigenstate of position:

\[ \psi(x) = \delta(x) \]

which describes a particle that is precisely known to be located at the origin. This has a couple of problems associated with it. One is that we can't very effectively represent a delta function on a grid with finite resolution. Another is that, because of the uncertainty principle, a particle whose position is known so precisely could have any momentum, which means its position won't stay well known for very long.

A standard approach for dealing with these issues is to construct a
*wave packet*. A wave packet is a compromise, allowing us to describe
a particle with a reasonably well-known position and a reasonably
well-known momentum. We generally use *Gaussian* wave packets, which
provide the best compromise, minimizing the product of the position
and momentum uncertainties, and correspond to a product of a Gaussian
with a plane wave:

\[ \psi(x,t) \approx e^{i(k_0 x - \omega_0 t)}\, e^{-(x - x_0 - vt)^2/2\sigma^2} \]

This describes a particle of width σ that is located at position x₀ at t = 0 and is moving with momentum ℏk₀, so its velocity is v = ℏk₀/m. The time dependence in the above formula is only approximate, since the momentum still has some uncertainty, which causes the packet to spread as it moves.
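On the grid, such a packet can be constructed directly; in this snippet the particular values of x0, sigma, and k0 are arbitrary choices of mine, and the grid matches the animation example above:

```python
import numpy as np

x = np.arange(0, 7, 0.01)        # same grid as the animation example
x0, sigma, k0 = 2.0, 0.3, 20.0   # center, width, wavenumber (arbitrary)

# Gaussian envelope times a plane wave, normalized on the grid
psi = np.exp(-(x - x0)**2/(2*sigma**2))*np.exp(1j*k0*x)
psi /= np.sqrt((abs(psi)**2).sum()*0.01)
```

The real and imaginary parts of psi then serve as the initial conditions for the time-stepping code.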

Your first application for your time-stepping code will be to construct a wave packet and observe its time dependence. Once you have done so, see what happens when you modify the wave packet in different ways. You can try changing the packet width or the momentum, and you can try using the complex conjugate or the real or imaginary part of a wave packet. This is a chance to have fun! To start with, we can leave the potential at zero, so we'll be looking at a simple particle confined to a box.

An algorithm is numerically unstable if small errors grow exponentially. In the particular case of our finite difference integration of Schroedinger's equation, numerical stability is determined by the relationship between the resolutions in space and time, Δx and Δt. To understand numerical stability physically, it is often helpful to consider the scales on which the relevant physical quantities behave.

In this case, it is useful to ask how quickly the wave function is going to change, and then to ensure that our Δt is considerably smaller than that time scale. The Schroedinger equation governs how quickly the wave function will change:

\[ \frac{\partial\psi}{\partial t} = \frac{i\hbar}{2m}\frac{\partial^2\psi}{\partial x^2} - \frac{i}{\hbar}V\psi \]

We can now ask ourselves how quickly the wave function might be expected to change. On our grid, the maximum magnitude of the second derivative will be approximately |ψ|/Δx², so (neglecting the potential term) we can write that

\[ \left|\frac{\partial\psi}{\partial t}\right| \lesssim \frac{\hbar}{2m\,\Delta x^2}\,|\psi| \]

so the maximum possible fractional change in ψ over one time step is given by

\[ \frac{|\Delta\psi|}{|\psi|} \approx \frac{\hbar\,\Delta t}{2m\,\Delta x^2} \]

We expect that there will be problems if in one time step the wave function changes by anything close to 100%, so we have an upper limit on Δt:

\[ \Delta t \lesssim \frac{2m\,\Delta x^2}{\hbar} \]

This gives us an approximate value for the largest possible Δt. If we use too large a value for Δt, the most likely result is that our wave function will develop huge oscillations that grow exponentially. But we do want to pick a value for Δt that is as large as possible, so as to enable our simulation to run quickly.
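You can check this bound numerically: evolve a tiny random wavefunction with a time step well below the estimate and then well above it, and watch whether the noise stays small or blows up. This is a sketch in natural units (ℏ = m = 1) with V = 0; the helper name is made up:

```python
import numpy as np

hbar = m = 1.0
N, dx = 200, 0.01
dt_est = 2*m*dx**2/hbar   # the rough upper limit derived above

def max_amplitude(dt, steps):
    """Evolve small random noise with V = 0 and return its largest magnitude."""
    rng = np.random.default_rng(0)
    R = 1e-6*rng.standard_normal(N); R[0] = R[-1] = 0.0
    I = 1e-6*rng.standard_normal(N); I[0] = I[-1] = 0.0
    for _ in range(steps):
        HR = np.zeros(N)
        HR[1:-1] = -hbar**2/(2*m)*(R[2:] - 2*R[1:-1] + R[:-2])/dx**2
        I[1:-1] -= dt/hbar*HR[1:-1]      # imaginary part, half-step times
        HI = np.zeros(N)
        HI[1:-1] = -hbar**2/(2*m)*(I[2:] - 2*I[1:-1] + I[:-2])/dx**2
        R[1:-1] += dt/hbar*HI[1:-1]      # real part, integral times
    return max(abs(R).max(), abs(I).max())
```

With a step like 0.1*dt_est the noise stays at its initial tiny level, while a step above the limit makes it grow exponentially, exactly the huge oscillations described above.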

Now that you’ve got a wave packet, let’s try bouncing it off of something. Create a potential that looks like:

Or alternatively, you could create a square potential barrier (or well). See what happens when you bounce the wave packet off this barrier. See what happens when you set V₀ either greater or less than the kinetic energy of your wave packet. Note that you should place the packet and choose the barrier position and width such that the initial wave packet doesn't overlap with the barrier, and such that the barrier completely fits inside of your box.

In this module, we will solve for the eigenstates of a particle confined to a one-dimensional box with an arbitrary potential within the box. To do this, we will solve the energy eigenvalue equation:

\[ -\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\,\psi = E\,\psi \]

Unlike the previous module, this is an eigenvalue equation, which means that rather than solving for how the wave function evolves over time, we're going to be looking for particular energies and particular wavefunctions that satisfy this equation. There will be an infinite number of solutions, so we'll need to take some care to specify which solution we're looking for. The conventional approach is to define n as the number of nodes in the wavefunction ψₙ.

I should note here that the solution to Schroedinger’s eigenvalue equation may be taken to be real for any bound state, provided there are no magnetic fields involved.

If we already knew the energy E, we could integrate Schroedinger's equation just as we integrated the equations of motion using Verlet's method. The code will look a little different, because we will want to store ψ in an array, just as we did in the previous module.

In order to integrate using Verlet's method, we need to know the value of ψ at two consecutive values of x. The first is easy: we know that the wave function must go to zero at the boundaries of the box. The second one is slightly more subtle, but turns out to be just as easy: we can pick any non-zero value for ψ at the second point (x = Δx). The reason we can do this is that any function proportional to an eigenfunction is also an eigenfunction, which is a property of any linear differential equation.

So we can write the second derivative in Schroedinger's eigenvalue
equation as a finite difference and solve for ψ(x + Δx), which gives
us an equation for the *next* point:

\[ \psi(x+\Delta x) = 2\psi(x) - \psi(x-\Delta x) + \Delta x^2\,\frac{2m}{\hbar^2}\bigl(V(x) - E\bigr)\psi(x) \]

We can continue finding the *next* point until we reach the other end
of the box, at which time we will know ψ everywhere within the box.

Write a function that does this. Given a potential function V(x), an array (to be filled in with ψ), an energy E, and a box size L, your function should solve for ψ(x). Plot its output for an interesting potential (you could use a sinusoidal potential, for instance) and a few different energies.
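One way such a function might look is sketched below, in natural units (ℏ = m = 1); the name shoot, the grid size, and the starting value 1e-3 are all arbitrary choices of mine:

```python
import numpy as np

hbar = m = 1.0   # natural units (an assumption)

def shoot(V, E, L, N=1000):
    """Integrate the energy eigenvalue equation from x = 0 to x = L.

    V is a function of position and E a trial energy.  Returns the grid
    x and the (unnormalized) wavefunction psi.
    """
    x = np.linspace(0, L, N)
    dx = x[1] - x[0]
    psi = np.zeros(N)
    psi[0] = 0.0     # the wavefunction vanishes at the wall
    psi[1] = 1e-3    # any nonzero value works: the overall scale is free
    for n in range(1, N - 1):
        # Verlet-style recurrence for the next point
        psi[n+1] = (2*psi[n] - psi[n-1]
                    + dx**2*(2*m/hbar**2)*(V(x[n]) - E)*psi[n])
    return x, psi
```

For the empty box (V = 0) of length L, the trial energy E = (nπℏ/L)²/2m brings ψ back to (nearly) zero at x = L; other energies overshoot or undershoot, which is the behavior discussed in the next section.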

One nice way to plot wave functions is to plot the wave function and the potential on the same plot, offsetting the wave function by its corresponding energy and scaling it arbitrarily (or normalizing it arbitrarily) so things are all visible. Write some code to do this, if you have time.

When you looked at the solution for ψ(x) from the code you
wrote when you plugged in some random energy, you hopefully didn’t
like it very much. The wave function was zero at one of the two
boundaries, but most likely *wasn’t* zero at the other boundary! The
reason was that you probably didn’t use an eigenvalue of the
Hamiltonian for your energy. If you did pick an eigenvalue, please
try another energy and see what happens.

So while you’ve solved the *differential equation*, you haven’t yet
solved the *eigenvalue equation*. Eigenvalue equations are funny that
way. Fortunately, given the above function, it isn’t too hard to find
the eigenvalue. Before you program anything, spend a little time
running your function for different energies and try to find a few
eigenvalues of the energy. Try looking for particular eigenstates,
such as n = 0 or n = 1.

Once you’ve gotten the hang of finding eigenenergies by hand, write a function that solves for a particular specified eigenvalue and eigenfunction.

Try creating a potential within your box such that there are multiple wells.

Try creating a simple harmonic oscillator potential within your box, and see what the energy eigenvalues are. Try finding a very high energy state (many nodes), and see where such a particle is likely to be found. Is it more likely to be found in a state with low potential energy, or high potential energy? Why? How does this relate to the classical limit, which we expect to show up at large quantum numbers, according to the Correspondence Principle?

Take the eigenfunction you calculated, and use it as the initial conditions for your Schroedinger-equation solver. A true eigenstate should keep its shape as it evolves, with only its complex phase rotating in time.

When a potential is spherically symmetric (i.e. central forces), the three-dimensional energy eigenvalue equation can be reduced to a one-dimensional radial equation:

\[ -\frac{\hbar^2}{2m}\frac{d^2 u}{dr^2} + \left(V(r) + \frac{\hbar^2\,\ell(\ell+1)}{2m\,r^2}\right)u = E\,u \]

where u(r) = r R(r), with R(r) the radial part of the wavefunction, and where I have assumed that one of the two particles in question is far more massive than the other, and thus that the reduced mass is approximately equal to the mass of the lighter particle (the electron, in the case of the hydrogen atom). As you can see, the radial equation looks just like the one-dimensional energy eigenvalue equation, with the single change that the potential is replaced with an angular-momentum-dependent effective potential. Therefore, we can easily modify our code to compute the eigenstates and eigenvalues for any central force.
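Concretely, the only change to the shooting code is to hand it the effective potential instead of V itself. A one-line sketch in natural units (the helper name effective_potential is made up):

```python
hbar = m = 1.0   # natural units (an assumption)

def effective_potential(V, l):
    """Central potential V(r) plus the centrifugal barrier for angular momentum l."""
    return lambda r: V(r) + hbar**2*l*(l + 1)/(2*m*r**2)
```

You would pass effective_potential(V, l) to the same one-dimensional eigenvalue code, integrating u(r) outward from a point just above r = 0 (where u must vanish).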

Try the potential V(r) = ½mω²r². This is the three-dimensional simple harmonic oscillator. It is a particularly interesting potential, because we can perform separation of variables either in Cartesian coordinates or in spherical coordinates. Therefore, you will be able to relate your solutions to the one-dimensional simple harmonic oscillator solutions you found previously.