In this article, we explore the implications of the Fourier Transform for wavefunctions. We start with the simple example of a free particle and show the mathematical equivalence between the position and momentum space wavefunctions. The wavefunction of a free particle with momentum p' is an eigenfunction of the operator $\hat{p}$, given by

(1) $\psi_{p'}(x) = A\, e^{ip'x/\hbar}$

Obviously this is not a normalizable wavefunction, but

(2) $\int_{-\infty}^{\infty} \psi_{p}^{*}(x)\, \psi_{p'}(x)\, dx = 2\pi\hbar\, |A|^2\, \delta(p - p')$

which clearly blows up at p = p'. The energy is given by

(3) $E = \frac{p'^2}{2m}$

The point is that a free particle wavefunction is also an eigenfunction of $\hat{p}^2$, which is just $\hat{p}$ applied twice. This means that a free particle has the distinction of having not only a well defined momentum but also a well defined energy. Naturally, its position is indefinite (it's "free", not under any potential). The Fourier Transform of the wavefunction $\psi(x)$ is sometimes denoted by $\psi(p)$ and is given by

(4) $\psi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int_{-\infty}^{\infty} \psi(x)\, e^{-ipx/\hbar}\, dx$

Plugging (1) into this, we get

(5) $\psi(p) = A\sqrt{2\pi\hbar}\, \delta(p - p')$

In p space, the wavefunction is just a spike at p = p' and zero everywhere else, consistent with the fact that there is just one momentum component. Of course, if you replace the complex exponential by a sinusoid, you get two spikes, but that's just because the system is equivalent to a right-going particle superimposed (easy to say!) on a left-going particle: still the same magnitude of momentum.
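If you want to see the one-spike-versus-two-spikes picture concretely, here is a small numerical sketch using NumPy's FFT. The grid size and wavenumber below are my own arbitrary choices, not anything forced by the physics:

```python
import numpy as np

# Sample exp(i*k0*x) and cos(k0*x) on a periodic grid and compare their
# discrete spectra.  k0 is chosen to sit exactly on a DFT bin so that the
# spikes are clean.
N = 256
x = np.arange(N) * 2 * np.pi / N
k0 = 10

spec_exp = np.abs(np.fft.fft(np.exp(1j * k0 * x))) / N
spec_cos = np.abs(np.fft.fft(np.cos(k0 * x))) / N

print(np.count_nonzero(spec_exp > 1e-9))   # 1 spike (at bin k0)
print(np.count_nonzero(spec_cos > 1e-9))   # 2 spikes (bins k0 and N - k0)
```

The cosine's two bins at k0 and N - k0 are just the DFT's way of writing +k0 and -k0: the right-going and left-going components.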

Note that the constants are important from the point of view of Parseval's Theorem, but that theorem applies only to normalizable functions, where both sides are finite. Here the wavefunction isn't even normalizable, so there's no point worrying too much about the constants. In a general quantum system, the normalization constants matter a great deal, because they carry the probability information in position space.

A periodicity in position space leads to discreteness in the momentum space representation, even if that comes at the expense of introducing a singularity "function". But the amount of information in both representations is the same: if you were to hypothetically "store" this information, you would require infinite space for both the delta function representation and the complex exponential representation. For the exponential this is obvious, because it extends to infinity on either side. For the delta function, you need to know precisely how fast, and how exactly, the wavefunction blows up near p = p'. This is generally done by considering a sequence of functions that converge to the singularity function as some parameter (say the skirt width) tends to zero, sending the height to infinity. But every member of such a sequence is a well behaved continuous function, which would itself require infinite storage space, even if it has compact support.
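The limiting procedure described above can be sketched numerically with a family of normalized Gaussians whose width parameter plays the role of the skirt width. The widths and grid below are arbitrary illustrative choices:

```python
import numpy as np

# Normalized Gaussians of shrinking width eps approximate delta(p - p0):
# the area stays pinned at 1 while the peak height diverges as eps -> 0.
p = np.linspace(-5.0, 5.0, 200001)
p0 = 1.0

heights, areas = [], []
for eps in (1.0, 0.1, 0.01):
    g = np.exp(-((p - p0) / eps) ** 2) / (eps * np.sqrt(np.pi))
    heights.append(g.max())
    areas.append(np.trapz(g, p))
    print(f"eps={eps}: peak={heights[-1]:.1f}, area={areas[-1]:.4f}")
```

Every member of the family is smooth with unit area; only the limit is singular.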

Now, to the simple harmonic oscillator. I will write A for the normalization constant. The Hamiltonian of a 1D oscillator is simply

(6) $\hat{H} = \frac{\hat{p}^2}{2m} + \frac{1}{2}m\omega^2 x^2$

Let's work with the ground state wavefunction. It's just the solution

(7) $\psi_0(x) = A\, e^{-m\omega x^2/2\hbar}$

This is basically a Gaussian, and so is the probability density. We know that the Fourier Transform of a Gaussian is another Gaussian, so it's natural to expect no singularity functions in p-space. Before we get down to the math, let's look at this for a minute. The absence of singularity functions in p-space means that the oscillator does not have a well defined momentum. Mathematically, this means that the wavefunction is not an eigenfunction of $\hat{p}$. But it does have a well defined energy, given by

(8) $E_0 = \frac{1}{2}\hbar\omega$

and we know it's an eigenfunction of H (that's how we got it!). So this is a wavefunction that is not a momentum eigenfunction, but is an energy eigenfunction. If you think there's a problem here, you're probably thinking that equation (3) should apply. That's wrong, because this is not a free particle: it's a particle in a parabolic potential, which makes it an oscillator. For a free particle, an eigenfunction of momentum was also an eigenfunction of momentum squared, which made it an eigenfunction of energy. But the converse does not have to hold. In fact, this is a classic example of the point in mathematical logic that

$P \implies Q$

does not say anything about whether P holds given Q. The problem is that the Hamiltonian here is not just momentum squared: it involves a potential energy term, and that makes all the difference. If we could somehow distort the well and make it vanish altogether, we would have to solve the problem all over again, and the solutions would be complex exponentials.
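A quick numerical check of the oscillator claims, in units where $\hbar = m = \omega = 1$ (a simplification I am assuming just for this sketch):

```python
import numpy as np

# With hbar = m = omega = 1, the ground state is psi ~ exp(-x^2/2).
# Check that H psi = (1/2) psi, while p psi = -i psi' is clearly not
# proportional to psi.
x = np.linspace(-6, 6, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)

d2psi = np.gradient(np.gradient(psi, dx), dx)
H_psi = -0.5 * d2psi + 0.5 * x**2 * psi      # H = p^2/2 + x^2/2

core = slice(500, 3501)                      # stay away from the grid edges
energy = H_psi[core] / psi[core]
print(energy.min(), energy.max())            # both hug E0 = 0.5

# By contrast, psi'/psi = -x is nowhere near constant: no p-eigenvalue.
```

The ratio H&psi;/&psi; is flat at the ground state energy, which is exactly what "energy eigenfunction but not momentum eigenfunction" means in numbers.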

But a better example is the one dimensional box. The Hamiltonian is just the first term, the so called Kinetic Energy term. The wavefunctions are thus eigenfunctions of momentum squared. But they are not eigenfunctions of momentum, consistent with the fact that the momentum of a particle in a 1D box is not well defined. If it were, position would be completely indefinite, and the particle could not be confined to the box in the first place. Contradiction! This kind of reductio ad absurdum argument is aesthetically pleasing to the physicist, but the reasoning involves dealing with infinities in a manner that not all mathematicians may feel comfortable with. At the heart of these arguments is the Uncertainty Principle, whose mathematical roots lie in the "band-limitedness" of a "physically meaningful wavefunction". We have reason to believe that physically realizable quantum systems have well behaved wavefunctions, because we don't think things blow up to infinity the way they did a few paragraphs ago.
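The box claims can be checked the same way. Here is a sketch for the ground mode, with $\hbar = 1$ and $L = 1$ (illustrative choices again):

```python
import numpy as np

# The box eigenfunction sin(pi*x) (hbar = 1, L = 1, n = 1) returns an
# eigenvalue under p^2 = -d^2/dx^2, but p psi / psi ~ cot(pi*x) is
# manifestly not constant, so it is no eigenfunction of p itself.
x = np.linspace(0.05, 0.95, 1901)   # interior of the box, away from walls
dx = x[1] - x[0]
k = np.pi                           # n*pi/L with n = 1, L = 1
psi = np.sin(k * x)

p2_ratio = -np.gradient(np.gradient(psi, dx), dx)[5:-5] / psi[5:-5]
print(p2_ratio.min(), p2_ratio.max())   # both hug k^2 = pi^2

p_ratio = np.gradient(psi, dx) / psi    # proportional to cot(k x): varies wildly
print(p_ratio.min(), p_ratio.max())
```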

In fact this criterion is powerful enough to be important in two areas that appear distinct but are mathematically equivalent and physically unified. The first is Shannon's sampling theorem, which states that a bandlimited signal (one whose Fourier Transform has compact support) can be exactly reconstructed from its values at equally spaced points (samples). The second is the fact that bandlimitedness in p-space leads directly to the strong form of the Uncertainty Principle for conjugate observables, from which the "everyday form" we are all familiar with ($\sigma_{p}\sigma_{x} \geq \hbar/2$) can be derived.
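As a sanity check on the "everyday form", here is a sketch (with $\hbar = 1$, an assumption made only for the sketch) showing that the Gaussian ground state saturates the bound:

```python
import numpy as np

# With hbar = 1, the normalized Gaussian has sigma_x * sigma_p = 1/2,
# the minimum the Uncertainty Principle allows.  Moments are computed on
# a finite grid, so the product comes out at 1/2 only up to discretization.
x = np.linspace(-8, 8, 8001)
dx = x[1] - x[0]
psi = np.pi ** -0.25 * np.exp(-x**2 / 2)   # normalized, <x> = <p> = 0

sigma_x = np.sqrt(np.trapz(x**2 * psi**2, x))
# for real psi, <p^2> = integral of (psi')^2 dx  (integration by parts)
sigma_p = np.sqrt(np.trapz(np.gradient(psi, dx) ** 2, x))

print(sigma_x * sigma_p)                   # ~ 0.5, the minimum allowed
```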

At first sight, the proof of Shannon's theorem (also known as the Nyquist-Shannon sampling theorem) does not appear to be a formal mathematical proof. The proof is almost algorithmic: given a bandlimited signal, sample it in the time domain, compute the Fourier Transform of the sampled signal, and then use a brick wall low pass filter to crop out the portion of the spectrum outside the main band. Taking the inverse transform yields the precise input signal, provided sampling has not led to aliasing. Along the way you find a criterion for unaliased sampling/reconstruction, and thus you can say that if this criterion holds, exact reconstruction is possible in theory. The impulse train sampling method is termed "ideal" because in real life impulse train sampling is not possible: first of all, impulse trains don't "exist", and second, signal processing is done on computers, which can only handle a finite number of discrete samples. In fact, with the emergence of the Fast Fourier Transform algorithm for the discrete transform, digital signal processing has almost completely taken over. So the rich mathematical features available for exploration in the continuous time domain have been lost in some ways, since their significance is mostly theoretical. Sure enough, the sampling theorem is still important, but it's the discrete time version that matters in practice.
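The reconstruction half of the argument is easy to demonstrate numerically. Here is a sketch using the Whittaker-Shannon interpolation formula; the test signal, band limit and sampling rate are all arbitrary illustrative choices, and the infinite sum is necessarily truncated:

```python
import numpy as np

# A signal bandlimited to B = 3 Hz, sampled at fs = 10 Hz > 2B, is rebuilt
# from its samples as  sum_n f(nT) * sinc(t/T - n).  Because we keep only
# finitely many samples, a small truncation residual remains.
B, fs = 3.0, 10.0
T = 1.0 / fs
n = np.arange(-200, 201)                    # finite sample window

f = lambda t: np.cos(2 * np.pi * t) + 0.5 * np.sin(2 * np.pi * B * t)

t = np.linspace(-1.0, 1.0, 101)             # points well inside the window
recon = np.array([np.sum(f(n * T) * np.sinc(ti / T - n)) for ti in t])

err = np.max(np.abs(recon - f(t)))
print(err)   # small: truncating the infinite sum is the only error source
```

Note that `np.sinc` is the normalized sinc, $\sin(\pi x)/(\pi x)$, which is exactly the interpolation kernel the theorem calls for.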

In all these cases, the bandlimitedness of the Fourier Transform of an input signal or wavefunction has come to our rescue. There are several more examples that draw upon this property of the transform when it holds, and several other properties can be derived from it. But this shows how fundamental an integral transform can be, and how much useful information we can extract from just one of its properties: bandlimitedness.