David Mumford

Archive for Reprints, Notes, Talks, and Blog

Professor Emeritus
Brown and Harvard Universities
David_Mumford@brown.edu


An Easy Case of Feynman's Path Integrals

November 1, 2014

Like many pure mathematicians, I have been puzzled over the meaning of Feynman's path integrals and put them in the category of weird ideas I wished I understood. This year, reading Folland's excellent book Quantum Field Theory -- A Tourist Guide for Mathematicians, I got a glimmer of what was going on. In a seminar on quantum computers with my good friend John Myers a few years ago, I had played with finite-dimensional quantum systems, so it was natural to work out Feynman path integrals in the finite-dimensional case. What emerged was so clean, and even so undergraduate-linear-algebra ready, that I want to put this rigorous and simple result in my blog. A similar path was taken by Ben Rudiak-Gould in a recent arXiv submission, "The sum-over-histories formulation of quantum mechanics".

To start, suppose \( A \) is an \( n\times n \) matrix (real or complex, doesn't matter). Of course, its powers \( A^m \) are given by the usual formula, here for \( m=3 \): $$ (A^3)_{i,\ell} = \sum_{j,k} A_{i,j} \cdot A_{j,k} \cdot A_{k,\ell}. $$ One can think of this in a new way: let \( S=\{1,2,\cdots,n\} \). Then \( \{i,j,k,\ell\} \) is a discrete path in the set \( S \) from \( i \) to \( \ell \), and the matrix coefficients of the power are sums of terms, one for each path from the row index \( i \) to the column index \( \ell \).
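To see this concretely, here is a quick numerical check (a sketch in NumPy, using 0-based indices and an arbitrary \( 3\times 3 \) example matrix):

```python
import numpy as np
from itertools import product

# An arbitrary 3x3 example matrix (0-based indices here).
A = np.arange(1.0, 10.0).reshape(3, 3)
n = A.shape[0]

# (A^3)_{i,l} the usual way...
A3 = np.linalg.matrix_power(A, 3)

# ...and as a sum over discrete paths i -> j -> k -> l in S = {0, ..., n-1}:
def path_sum(i, l):
    return sum(A[i, j] * A[j, k] * A[k, l]
               for j, k in product(range(n), repeat=2))

path_A3 = np.array([[path_sum(i, l) for l in range(n)] for i in range(n)])
# The two computations agree entry by entry.
```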

Let's make the problem harder: instead of powers of a matrix, let's consider a one-parameter group of matrices obtained by exponentiating a fixed matrix \( H \), namely \( U_t = e^{tH} \). Then one might expect that the matrix coefficients of \( U_t \) are sums (or better, integrals) over continuous paths in \( S \). Since \( S \) is discrete, a path in \( S \) means a sequence of constant intervals interspersed with jumps, like a frog jumping on lily pads.
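As a sanity check on this setup, \( U_t \) really is a composition of many small steps. A sketch in NumPy (with an arbitrary \( 2\times 2 \) matrix \( H \) as the example), comparing \( e^{tH} \), computed here by a plain Taylor series, with the \( N \)-step product \( (I+(t/N)H)^N \):

```python
import numpy as np

def expm_taylor(A, terms=40):
    # matrix exponential via a plain Taylor series (adequate for small ||A||)
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

H = np.array([[0.3, -0.8], [0.5, -0.2]])  # arbitrary example
t, N = 1.0, 200

U_t = expm_taylor(t * H)
steps = np.linalg.matrix_power(np.eye(2) + (t / N) * H, N)
err = np.abs(U_t - steps).max()  # O(1/N), shrinks as N grows
```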

This is a finite version of what Feynman introduced in his path integral formalism for quantum mechanics. Note that in quantum mechanics the vectors in the space \( \mathbb R^n \) on which \( A \) operates are called the states of the system. (In QM, the states must actually be complex vectors, not real.) Feynman was dealing not with a matrix giving a linear operator on \( \mathbb R^n \) but with operators on a Hilbert space \( \mathcal H \). In the simplest case, \( \mathcal H \) could be \( L^2(\mathbb R) \); the states are then complex-valued functions on \( \mathbb R \), and \( U_t \) could be an integral operator given by convolution with a kernel \( K(x,y,t) \). His goal was then to write \( K(x,y,t) \) as a sum over all paths in \( \mathbb R \) from \( x \) to \( y \) of an expression involving the path and \( t \). He thought of these as paths of an underlying classical particle moving in \( \mathbb R \). Of course, the set of paths is an infinite-dimensional manifold, so to sum over all paths one needs a measure on the set of these paths with respect to which one can integrate. Finding the appropriate measure is one problem, and showing that the integrand he needs is in some sense integrable turned out to be even harder.

I want to develop his approach for finite-dimensional \( U_t \), where everything is quite elementary. This is useful because finite-dimensional quantum systems have come into prominence in recent decades as the setting for quantum computing. And the path integral formalism is the right one to use when you treat the interaction of this elementary system with the external world, from which it can never be totally insulated.

Start by fixing a large integer \( N \). Then: $$ \begin{align*} (U_t)_{a,b} &= \left((U_{t/N})^N\right)_{a,b} \\ &= \sum_{a=k_0,k_1,\cdots,k_N=b} \prod_{i=1}^{i=N} (U_{t/N})_{k_{i-1},k_i} \end{align*} $$ Now if \( N\gg 0 \), \( U_{t/N}=e^{tH/N} \) is approximately equal to \( I+(t/N)H \). Thus if at some \( i \), \( k_{i-1}=k_i \), the term in the product is near 1, while otherwise it is a bounded number divided by \( N \), hence very small. From this we see that the more jumps the sequence \( k_i \) makes, the smaller the corresponding term in the product. So let \( J \) be the number of jumps and consider the sparser sequence of values \( a=k_0, k_1, \cdots, k_J=b \) where now \( k_{i-1} \ne k_i \) for all \( i \). The jumps take place at particular 'times' \( \ell_i/N \) (with the conventions \( \ell_0=0 \) and \( \ell_{J+1}=N \)) and we reformulate the above expression as: $$ \approx \sum_{J=0}^{\infty} \left( \frac{t}{N}\right)^J \!\! \sum_{\begin{array}{c} a=k_0\ne k_1\ne \cdots \ne k_J=b \\ 1 \le \ell_1 < \cdots <\ell_J \le N \end{array}} \prod_{i=1}^{i=J} H_{k_{i-1},k_i} \cdot e^{\sum_{i = 0}^{i = J} \frac{t(\ell_{i+1}-\ell_{i})}{N} H_{k_i,k_i}} $$ It shouldn't be hard to quantify the approximation error here, but let's skip this and pass quickly to the limit as \( N\rightarrow \infty \), where the expression becomes exact again. This leaves the \( k \) sequence alone, but now the \( t\ell_i/N \)'s are replaced by intermediate times \( t_i \) in the interval \( [0,t] \) where the jumps take place (with the conventions \( t_0=0 \) and \( t_{J+1}=t \)); the sum over the \( \ell \)'s is replaced by an integral over the \( t \)'s, and the factor \( (t/N)^J \) is exactly the constant needed when the sum over the \( \ell \)'s is viewed as a Riemann sum for the integral over the \( t \)'s. What comes out is: $$ =\sum_{J=0}^{\infty} \! 
\underset{\begin{array}{c} a=k_0 \ne k_1 \ne \cdots \ne k_J=b \\ 0 < t_1 < \cdots < t_J < t \end{array}}{\sum\int} \prod_{i=1}^{i=J} H_{k_{i-1},k_i} \cdot e^{\sum_{i = 0}^{i = J} (t_{i+1}-t_i) H_{k_i,k_i}} dt_1\cdots dt_J $$ Note that the integrand is bounded by a constant to the power \( J \) and the integral is over a simplex with volume \( t^J/J! \), hence the sum over \( J \) converges. Going a step further, let \( X \) be the path space of piecewise constant functions \( f:[0,t]\rightarrow \{1,2,\cdots,n\} \) with a finite number of jumps. \( X \) breaks up into pieces \( X_J \) according to the number of jumps, these break up further into pieces depending on the sequence \( \vec k \) of values of \( f \), and what finally remains are simplices in \( \mathbb R^J \). We have the Euclidean measure on these components, hence a finite measure \( \mu_X \) on \( X \). We may write a point of \( X \) as a pair of vectors \( (\vec k, \vec t) \) describing its values and jump times. Let \( X(a,b) \) be the paths that begin at \( a \) and end at \( b \). Then we have the final theorem:
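Before passing to the limit, the discrete picture can be checked directly. A sketch in NumPy, with an arbitrary \( 2\times 2 \) example \( H \) and \( N \) small enough that all \( 2^{N+1} \) discrete paths can be enumerated: group the paths by their jump count \( J \), confirm the grouped sums add back up to \( ((U_{t/N})^N)_{a,b} \), and observe the contributions shrinking as \( J \) grows.

```python
import numpy as np
from itertools import product

H = np.array([[0.2, 0.7], [0.4, -0.3]])  # arbitrary 2x2 example
t, N = 1.0, 10
a, b = 0, 1

# one-step matrix U_{t/N} = e^{(t/N)H} via a Taylor series
step, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ ((t / N) * H) / k
    step = step + term

# enumerate all discrete paths a = k_0, ..., k_N = b, grouped by jump count J
by_jumps = {}
for path in product(range(2), repeat=N + 1):
    if path[0] != a or path[-1] != b:
        continue
    weight = 1.0
    for i in range(N):
        weight *= step[path[i], path[i + 1]]
    J = sum(path[i] != path[i + 1] for i in range(N))
    by_jumps[J] = by_jumps.get(J, 0.0) + weight

total = sum(by_jumps.values())                 # all paths together...
exact = np.linalg.matrix_power(step, N)[a, b]  # ...give ((U_{t/N})^N)_{a,b}
# |by_jumps[J]| shrinks rapidly as J grows
```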

Theorem. For any \( n \times n \) matrix \( H \), the matrix entries of \( e^{tH} \) are given by: $$ \left(e^{tH}\right)_{a,b} = \int_{X(a,b)} \prod_{i=1}^{i=J} H_{k_{i-1},k_i} \cdot e^{\sum_{i = 0}^{i = J} (t_{i+1}-t_i) H_{k_i,k_i}} d\mu_X(\vec k, \vec t) $$ (with the conventions \( t_0=0 \) and \( t_{J+1}=t \)).
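The theorem can be tested numerically. A sketch, assuming NumPy and an arbitrary \( 2\times 2 \) example matrix \( H \): if \( D \) and \( V \) denote the diagonal and off-diagonal parts of \( H \), the sum over \( J \)-jump \( k \)-sequences combined with the simplex integral is exactly the time-ordered recursion \( T_J(t)=\int_0^t e^{(t-u)D}\,V\,T_{J-1}(u)\,du \) with \( T_0(t)=e^{tD} \); we evaluate it with the trapezoid rule on a grid and compare the sum over \( J \) with \( e^{tH} \).

```python
import numpy as np

def expm_taylor(A, terms=40):
    # reference matrix exponential via a plain Taylor series
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def path_integral_expm(H, t, max_jumps=8, grid=200):
    # D = diagonal part of H (kept as a vector d), V = off-diagonal part
    n = H.shape[0]
    d = np.diag(H)
    V = H - np.diag(d)
    s = np.linspace(0.0, t, grid + 1)
    ds = s[1] - s[0]
    eD = np.exp(np.outer(s, d))                 # eD[i] = diagonal of e^{s_i D}
    T = np.array([np.diag(row) for row in eD])  # J = 0 term: T_0(s_i) = e^{s_i D}
    total = T[-1].copy()
    for J in range(1, max_jumps + 1):
        VT = np.einsum('ab,jbc->jac', V, T)     # V @ T_{J-1}(s_j) at every grid time
        newT = np.zeros_like(T)
        for i in range(1, grid + 1):
            # T_J(s_i) = \int_0^{s_i} e^{(s_i-u)D} V T_{J-1}(u) du  (trapezoid rule)
            contrib = eD[i::-1][:, :, None] * VT[:i + 1]
            contrib[0] *= 0.5
            contrib[-1] *= 0.5
            newT[i] = contrib.sum(axis=0) * ds
        T = newT
        total = total + T[-1]
    return total

H = np.array([[0.5, 0.7], [0.3, -0.4]])  # arbitrary 2x2 example
t = 1.0
err = np.abs(expm_taylor(t * H) - path_integral_expm(H, t)).max()
```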

I'm not sure one can convince college teachers of this, but this result fits easily into the curriculum of undergrad linear algebra courses!

There is a definite reason why this way of writing \( U_t \) is important, which we now sketch. Consider a quantum computer or any other quantum effect that is modeled by a finite-dimensional space. The space is now always a complex vector space with a Hermitian inner product, that is, a finite-dimensional Hilbert space \( \mathcal H_\text{fin} \). But the rest of the world always intrudes to some extent, and this is modeled by a tensor product \( \mathcal H_\text{fin} \otimes \mathcal H_\text{heat} \). The second factor is another Hilbert space, usually referred to as a heat bath because it is often assumed to be in thermodynamic equilibrium. The evolution will be described by a joint Hamiltonian operator \( H = H_\text{fin} \otimes I_\text{heat} + I_\text{fin} \otimes H_\text{heat} + H_\text{inter} \), where \( H_\text{inter} \) is the interaction term. Then the system evolves according to \( U_t = e^{itH} \) (imaginary powers here -- note that this doesn't affect anything we did before).
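To illustrate the last remark: when \( H \) is Hermitian, \( e^{itH} \) is unitary, so nothing in the path expansion changes while the evolution now preserves the Hermitian inner product. A sketch in NumPy with an arbitrary Hermitian example:

```python
import numpy as np

H = np.array([[1.0, 0.5 - 0.2j],
              [0.5 + 0.2j, -0.3]])  # an arbitrary Hermitian example
t = 0.7

# e^{itH} via a Taylor series; the imaginary exponent changes nothing structurally
U, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
for k in range(1, 40):
    term = term @ (1j * t * H) / k
    U = U + term

# U should be unitary: U* U = I
unitarity_defect = np.abs(U.conj().T @ U - np.eye(2)).max()
```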

A classic 1963 paper of Feynman and Vernon showed how to describe the perturbation of the finite system that is caused by its coupling with the heat bath. Start by describing the evolution of \( \mathcal H_\text{fin} \) by integrating over paths \( f(t) \in X \). For each fixed path \( f \), the effect of the coupling with \( \mathcal H_\text{fin} \) on the heat bath is to add to its native Hamiltonian the term: $$ H_f(t):\mathcal H_\text{heat} \rightarrow \mathcal H_\text{heat}, \text{ where } \langle y, H_f(t)\,x \rangle = \langle f(t)\otimes y, H_\text{inter}(f(t) \otimes x)\rangle.$$ For simple models of heat baths, this \( H_f \) looks like adding an external force field to the heat bath, and it turns out that the composite Hamiltonian can be integrated by an explicit formula. Using this formula, we find that the evolution of the joint system can be described by the path integral in \( \mathcal H_\text{fin} \).

But what happens now is that states of the finite system get entangled with the heat bath, and this is not a useful description. We need to 'trace out' the heat bath in order to describe its effects on the finite system. This is done by retreating a bit from describing the system by a single state and accepting that we need to describe it as a mixed state. A mixed state is a probabilistic combination of many states, described by a density matrix. If the mixture is made up of a set of vectors \( \vec x^{(a)} \in \mathbb C^n \), each with a probability \( p(a) \), so that \( \sum_a p(a)=1 \), then one defines the density matrix describing this mixed state by the Hermitian matrix: $$ \rho_{i,j} = \sum_a p(a) \bar x_i^{(a)} x_j^{(a)}.$$ It's a sad fact of life that any system entangled with the world needs to be described by these \( \rho \)'s and is never 'pure' anymore. What Feynman and Vernon did was to show that at least the density matrix of the finite system coupled to a heat bath could be described by path integrals for the finite system, if one adds a factor called the influence functional determined by the heat bath. Because we're dealing with density matrices, we need to integrate over not one but two piecewise constant paths \( (f,f') \) of the finite system. Let the integrand in the above theorem be denoted by \( H(f) \) for a path \( f \in X \). For the simplest case of a two-state finite system, with values \( \{+1,-1\} \) now, we get $$ \begin{align*} (\rho_{\mathcal H_\text{fin}} \! (t))_{a,b} \! &= \! \! \! \iint \!\! (\rho_{\mathcal H_\text{fin}}(0))_{f(0),f'(0)} \! \! {\mathcal F}(f,f') H(f) \overline{H(f')} d\mu(f) d\mu(f') \\ \log({\mathcal F}(f,f')) \! &= \! \! \! \iint_{ 0 < r < s < t } \! \! \! iL_1(s-r)(f(s)-f'(s))(f(r)+f'(r))\\ & \qquad - L_2(s-r)(f(s)-f'(s))(f(r)-f'(r)) \, dr\, ds \end{align*} $$ where the double integral is over pairs of paths ending at \( f(t)=a \) and \( f'(t)=b \), and \( L_1, L_2 \) are determined by the temperature, coupling, and frequency spectrum of the heat bath. No one ever said physics was easy.
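The density matrix formula above is easy to play with. A sketch in NumPy, mixing two (arbitrary, hypothetical) unit vectors in \( \mathbb C^2 \) and checking the basic properties: \( \rho \) is Hermitian, has trace 1, and a genuinely mixed state has \( \mathrm{tr}(\rho^2) < 1 \), whereas a pure state would give exactly 1.

```python
import numpy as np

# rho_{i,j} = sum_a p(a) * conj(x_i^{(a)}) * x_j^{(a)}
states = [np.array([1.0, 0.0], dtype=complex),
          np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)]
p = [0.25, 0.75]  # probabilities summing to 1

rho = sum(pa * np.outer(x.conj(), x) for pa, x in zip(p, states))

trace = np.trace(rho).real               # 1 for unit vectors
herm_defect = np.abs(rho - rho.conj().T).max()
purity = np.trace(rho @ rho).real        # strictly < 1 for a mixed state
```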