Peskin & Schroeder 2

Quantum Field Theory: Perturbation Series

At the end of the last post introducing the physics of particle collisions and scattering, we introduced a formula for computing the cross section of a collision process. The essential idea of the formula is that the cross section is built from the geometry of a particle detector and the quantum mechanical amplitude of a particular outcome. Remember, we’re taking the outcomes of particle collisions to be the particles produced, along with where they end up in space. For example, in the electron-positron annihilation case, the outcomes are the possible configurations of muons produced and where they hit the detector in a hypothetical or actual experimental setup. Do they all hit a single point on the detector? Do they spread out and hit it in a sparse pattern? Also recall that the quantum mechanical amplitude of a process is a complex number whose squared magnitude gives the probability that a given particle leaves the collision, given that another entered it.

This makes conceptual sense, but how does one compute the quantum mechanical amplitude, M, so as to then compute the cross section? This is a situation in which we generally don’t know the exact “true” value of a key quantity, but there is a “best we can do” approach, which is to compute M as a perturbation series. The tedium of that calculation is the driving motivation for a pictographic alternative, called Feynman diagrams, devised by the physicist Richard Feynman. To fully appreciate Feynman diagrams, I’ll take you through the basics of perturbation series. Now, I happen to think that series in mathematics are wonderful and fun objects to work with, but they can present challenges that aren’t always worth dealing with on a regular basis.

First, a high-level summary of what perturbation series are about. Let’s start with the following question: What are the solutions of the equation x^5 - x - 1 = 0? It’s tempting to conclude that there must be some formula for degree-5 polynomials, much as there is for quadratics, that gives the solutions exactly. But that’s not correct! By the Abel–Ruffini theorem, no such formula in radicals exists for general polynomials of degree five or higher, and this particular quintic has no roots expressible in radicals. The best that we can do is approximate the values of the roots using techniques such as iterative methods: solving for more and more precise approximations of the values in discrete steps. Perturbation series are very much in this vein, but when we get to a mathematical expression that we can’t evaluate exactly, we’re going to take a particular workaround approach that is characteristic of this area of physics and mathematics.
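To make the “iterative approximation” idea concrete, here is a minimal sketch of one such iterative method, Newton’s method, applied to the quintic above (the starting guess and step count are illustrative choices, not part of the original discussion):

```python
# Newton's method applied to f(x) = x^5 - x - 1 = 0, a quintic
# with no solution expressible in radicals.
def f(x):
    return x**5 - x - 1

def df(x):
    return 5 * x**4 - 1

def newton(x, steps=20):
    # Each step refines the estimate: x -> x - f(x)/f'(x).
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

root = newton(1.5)
print(root)          # approx 1.1673, the quintic's one real root
print(abs(f(root)))  # residual: effectively zero
```

No finite formula in radicals can produce this root, but each iteration roughly doubles the number of correct digits, so the approximation gets arbitrarily close in discrete steps.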

One way to approach a difficult problem without a known solution is to formulate it as a variation of a problem that does have a known solution and to see how far that takes you. In physics, even at the quantum spatial scale, there are some situations where we can solve for the state of the system (i.e. specify values for all of its parameters) exactly. So whenever we think of, say, a particle collision about which few parameters, if any, are known, then we think about whether it shares any structural features with quantum-scale situations we can solve exactly, such as a single hydrogen atom.

Loosely speaking, we use a table of numbers called a matrix to record parameter values for the state of a system, where we consider a ‘state’ to be the collection of information about the system’s kinetic and potential energy (as we discussed in the earlier post on Hamiltonian mechanics). This matrix has been named the Hamiltonian, but you can think of it as the ‘system state matrix’. Solving a physical system, then, means completely filling in this table of numbers. If we could know the state of the entire universe, that would mean that, in principle, we could write down its Hamiltonian. But even for much simpler cases the corresponding Hamiltonian is unknown. A strategy for engaging with these cases is to borrow the Hamiltonian from a solved system and model the unknown situation after it based on known similarities between them.
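As a toy illustration of “the Hamiltonian is a table of numbers” (the numerical values below are invented, not tied to any real physical system), here is a made-up two-level system in Python; once the table is filled in, a standard matrix operation extracts the system’s allowed energies:

```python
import numpy as np

# A made-up Hamiltonian for a two-level system: just a 2x2 table of numbers.
H = np.array([[1.0, 0.2],
              [0.2, 3.0]])

# The allowed energies of the system are the eigenvalues of this matrix.
energies, states = np.linalg.eigh(H)
print(energies)  # two energy levels, shifted slightly from 1.0 and 3.0
```

“Solving” the system in this sketch means exactly this: the table is known, so the energies and states follow from routine linear algebra. The trouble begins when we can’t write the table down in the first place.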

Let’s say that I know the Hamiltonian for a single hydrogen atom smeared across space (this is a better picture of atoms than the orbiting-balls model). I’m going to call that Hamiltonian H. Appealing to our previous simple example of the electron-positron annihilation, I’m going to call the (unknown) Hamiltonian for that system H_{EP}. The statement that I’m going to give myself to start tinkering with is this: H_{EP} \approx H, which says that these tables of numbers are approximately the same. Okay, I can work with this! What does this mean in terms of matrix operations? If H is the Hamiltonian that is known exactly, and H_{EP} is the Hamiltonian for the system we’re interested in, then we can write: H_{EP} = H + \epsilon \hat{V}. Here, \epsilon is a parameter ranging over small values, and \hat{V} is a matrix that encodes a “perturbation”, or means of slightly altering the numbers stored in H.
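Here is a minimal numerical sketch of H_{EP} = H + \epsilon \hat{V}, with invented matrices standing in for the real (and in practice unknown) Hamiltonians: a known diagonal H whose energies can be read off, a small perturbation, and a comparison of the lowest-order perturbation-series estimate against exact diagonalization (possible here only because the matrices are tiny):

```python
import numpy as np

# Invented stand-ins: a known, exactly solved Hamiltonian H (diagonal,
# so its energies sit on the diagonal) and a perturbation matrix V_hat
# scaled by a small parameter eps, giving H_EP = H + eps * V_hat.
H = np.diag([1.0, 2.0, 4.0])
V_hat = np.array([[0.5, 0.3, 0.1],
                  [0.3, -0.2, 0.2],
                  [0.1, 0.2, 0.4]])
eps = 0.05

H_EP = H + eps * V_hat

# Exact energies of the perturbed system:
exact = np.linalg.eigvalsh(H_EP)

# First-order perturbation series: E_n ~ E_n^(0) + eps * V_nn,
# i.e. the known energies plus a small diagonal correction.
approx = np.diag(H) + eps * np.diag(V_hat)

print(exact)
print(approx)  # agrees with `exact` up to corrections of order eps^2
```

The first-order estimate is already close; adding higher-order terms of the series shrinks the error further, at the cost of more and more involved calculations, which is exactly the trade-off the next paragraph is about.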

Can we say anything more precise than the above formula? Using the time-independent Schrödinger equation for H and the techniques of perturbation series, yes, we can. The punchline is that pushing this approach through for particle scattering problems quickly turns into a mountain of calculations. In the interest of getting on with Feynman diagrams, for now I’ll just say that some folks would prefer to draw pictures instead of grinding through perturbation series, and get results that are equally correct. In the next couple of posts I’ll go into Feynman diagrams, and then do an optional post on the details of perturbation series for those few souls who are interested :-).

References

  1. https://www.phys.uconn.edu/~rozman/Courses/P2400_16S/downloads/perturbation-theory.pdf
  2. http://www.fulviofrisone.com/attachments/article/483/Peskin,%20Schroesder%20-%20An%20introduction%20To%20Quantum%20Field%20Theory(T).pdf
  3. https://www2.ph.ed.ac.uk/~ldeldebb/docs/QM/lect17.pdf
  4. https://www.feynmanlectures.caltech.edu/III_08.html