
Why Feynman Diagrams Are So Important - MaysonL
https://www.quantamagazine.org/20160705-feynman-diagrams-nature-of-empty-space/
======
ttctciyf
John Baez's _A Prehistory of n-Categorical Physics_ [1] has some interesting
nuggets about Feynman diagrams, in the "Feynman (1947)" section.

In particular, "The mathematics necessary for [interesting Feynman diagrams]
was formalized later, in Mac Lane's 1963 paper on monoidal categories (see
below) and Joyal and Street's 1980s work on 'string diagrams.'"

In other words, Feynman diagrams were (or at least can be taken as) an early
precursor of diagrammatic approaches in category-theoretic mathematical
physics.

The paper is really great for getting some context around these diagrams and a
sense of the underlying mathematics, even for a category-theory ignoramus like
myself, with a little "suspension of comprehension."

1:
[http://math.ucr.edu/home/baez/history.pdf](http://math.ucr.edu/home/baez/history.pdf)

------
gpsx
Maybe someone reading this can enlighten me on the utility of thinking about
virtual particles in the vacuum. This is not directly related to the article,
but it is mentioned in it.

Feynman diagrams are a perturbation theory expansion around a problem we can
solve: non-interacting free fields. The lines in the diagram represent the
particles in the free theory, not the interacting theory. Renormalization does
allow us to make some correspondence between the two, but that can only be
taken so far, at least to my understanding. (And perhaps this is my
limitation.)

There are lots of vacuum diagrams, for example. These represent the difference
between the vacuum in the free theory and the interacting theory. That does
not mean there is frothing going on in the interacting theory that is not in
the free theory. The vacuum wave function is still a combination of field
configurations, each with an associated amplitude density. (And, relating to
the article, of course, this creates an associated non-zero vacuum energy.) It
looks just like the vacuum in a particle Schrödinger equation model, but with
a lot more degrees of freedom.

What are the virtual particles? I'd appreciate it if someone could explain
what people mean, and why, when they talk about these.

~~~
KenoFischer
There are many ways to think about virtual particles. From a particle point of
view, one can think of it as quantum mechanics allowing you to create a
high-energy particle for a short amount of time, even if that would not be
allowed classically. From a field theory point of view, you can have
excitations in the fields that are not sufficient to be a particle, but can
still interact locally.
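To put a rough number on the first picture (a back-of-the-envelope sketch, not
from the comment above; the 80 GeV figure is the W boson's approximate rest
energy), the energy-time uncertainty relation ΔE·Δt ≳ ħ bounds how long an
energy fluctuation of a given size can persist:

```python
# Back-of-the-envelope: how long can a virtual particle of energy E exist?
# Uses Delta_E * Delta_t ~ hbar (an order-of-magnitude heuristic only).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
EV = 1.602176634e-19     # joules per electronvolt

def virtual_lifetime(energy_ev: float) -> float:
    """Rough upper bound on the lifetime of a fluctuation of this energy."""
    return HBAR / (energy_ev * EV)

# A virtual W boson "borrows" roughly its 80 GeV rest energy:
t_w = virtual_lifetime(80.4e9)
print(f"{t_w:.1e} s")  # around 1e-26 seconds
```

The heavier the virtual particle, the shorter the window, which is one
intuition for why heavy mediators give short-range forces.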

~~~
gpsx
I was looking for an explanation of why the lines of a Feynman diagram are
called virtual particles when they are really particles in the unperturbed (or
free) model. So why do you think of this as creation of a particle for a short
amount of time?

~~~
KenoFischer
Generally if you look at the Feynman diagram for an interaction, you'll have
some number of external legs which one thinks of as physical particles, as
well as a number of interactions. The lines between the interactions may be
particles whose mass exceeds the total amount of energy of the incoming
particles, so they obviously can't be real. They are thus generally called
virtual (or off-shell particles). How one can think of this is what I tried to
address in the original response.

~~~
gpsx
I appreciate you saying the external legs can be thought of as physical
particles, since they are not technically physical particles. The
renormalization group tells us that those look like physical particles.
Internal to the interaction this is not true, which is why they would be
called virtual particles. But I see them as a pure mathematical relic of the
perturbation calculation. Perhaps I am just not able to extend my intuition
here. And likely a more in-depth description from you or someone else is not
really practical here. I just come back to the fact that in the end, particles
are just different excitation states of the system. There is a vacuum state, a
bunch of states we would consider one-particle states, a bunch more we would
consider two-particle states, and so on. Doesn't this cover all configurations
of the system, so that there are no "in-between" or what you would call
off-shell states? I think these only make sense when we are making an
approximation around the non-interacting model.

------
jballanc
Well, for one thing, without them we wouldn't have penguin diagrams!
([https://en.wikipedia.org/wiki/Penguin_diagram](https://en.wikipedia.org/wiki/Penguin_diagram))

------
lisper
Another cool use of diagrams in abstract algebra:

[https://graphicallinearalgebra.net](https://graphicallinearalgebra.net)

~~~
mlpinit
This is a fantastic resource! The writing is really enjoyable.

------
beezle
A good read on the historical development and use of Feynman diagrams is
_Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar
Physics_:
[https://www.amazon.com/Drawing-Theories-Apart-Dispersion-Diagrams/dp/0226422674](https://www.amazon.com/Drawing-Theories-Apart-Dispersion-Diagrams/dp/0226422674)

------
vanderZwan
Maybe a strange question, but could one of the stumbling blocks be that we
still have a lot of work to do on the mathematics of infinities? I mean,
saying that a positive and negative infinity might "cancel" sounds like that's
where the maths gets murky.

~~~
ssivark
It sounds murky because we were being silly the first time around. The
physically relevant quantity was always the sum of all those contributions,
and we decided (arbitrarily, in the grand scheme of physics) to split things
up under some scheme. So, in hindsight, the positive and negative infinities
were artificially manufactured by our segregation scheme... there's not much
to be bothered about in that regard.

That might still mean that there's more to be understood regarding notions of
infinity or the convergence of amplitudes in quantum field theory.

~~~
darkmighty
As an example (not sure if applicable), you could do some calculation and get
an alternating series as the result:

R = 1 - 1/2 + 1/3 - 1/4 ...

If you "explain" the positive part as being contribution from field A, and the
negative part as a contribution from field B, both would be infinite, but the
reality might be that splitting them is nonsensical.
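A quick numeric sketch of this point (the "field A"/"field B" labels are just
the hypothetical split from the comment above):

```python
import math

# R = 1 - 1/2 + 1/3 - 1/4 + ... converges to ln(2), yet the positive terms
# ("field A") and the negative terms ("field B") each diverge on their own.
N = 1_000_000
total = a_part = b_part = 0.0
for n in range(1, N + 1):
    term = 1.0 / n
    if n % 2:             # odd n: positive contribution ("field A")
        a_part += term
        total += term
    else:                 # even n: negative contribution ("field B")
        b_part += term
        total -= term

print(total, math.log(2))  # the combined sum approaches ln(2) ~ 0.6931
print(a_part, b_part)      # each part alone grows without bound, like ln(N)/2
```

The finite answer lives only in the combined series; asking "how much did
field A contribute?" produces an infinity that was never physical.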

------
tlogan
One simplistic thing I heard about Feynman diagrams: Feynman diagrams are
important because they have 3 lines - not 4, 5, 6, 20, etc. The fact that they
have just 3 lines says something very, very important about the nature of the
world.

Is the above correct?

~~~
solaris999
I heard a good rule of thumb to follow when drawing Feynman diagrams: the
fewer lines there are, the more probable the occurrence of the event
represented by the diagram. So in the simplest case, a single line
representing an unchanging particle is far, far more likely than any
two-particle interaction (3 lines), and so on.

~~~
evanb
This is true if the system you're trying to describe is weakly coupled.

If the theory is strongly coupled, then in fact more and more complicated
diagrams count more and more, and the method of Feynman diagrams becomes
basically useless. In the case you describe, you know you can get most of the
answer from a simple calculation (of the simplest diagrams). But if more
complicated diagrams count more (as in strong coupling), then you don't have a
place to start, because whatever diagram you pick to start at, I can make a
more complicated one and be confident that my new diagram counts more than the
pieces you've computed.

In that case we need a different approach, the most generic of which is
lattice field theory:

[https://en.wikipedia.org/wiki/Coupling_constant#Weak_and_str...](https://en.wikipedia.org/wiki/Coupling_constant#Weak_and_strong_coupling)

[https://en.wikipedia.org/wiki/Lattice_field_theory](https://en.wikipedia.org/wiki/Lattice_field_theory)
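A toy numeric illustration of the weak/strong contrast (a made-up geometric
series, with each term g**n standing in for the n-th order of diagrams; this
is not an actual QCD calculation):

```python
def partial_sums(g: float, orders: int) -> list:
    """Partial sums of the toy 'perturbative series' sum_n g**n."""
    total, sums = 0.0, []
    for n in range(orders):
        total += g ** n
        sums.append(total)
    return sums

weak = partial_sums(0.1, 6)    # terms shrink fast: truncating early is fine
strong = partial_sums(3.0, 6)  # terms grow: any truncation is dominated by
                               # the "diagrams" left out
print(weak)    # settles quickly near 1/(1 - 0.1) ~ 1.111
print(strong)  # keeps exploding: 1, 4, 13, 40, 121, 364
```

For g < 1 the first couple of "orders" already give most of the answer; for
g > 1 every partial sum is smaller than the next term you ignored, which is
the sense in which there is no good place to start.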

------
tempodox
Being used to living in a world of cause and effect, I'm having a really hard
time getting my head around “random fluctuations” that seem to have an effect
but no cause. That's about as illogical to me as the idea of a magnetic
monopole. But then,

> ...particles are merely bubbles of froth, kicked up by underlying fields.
> Photons, for example, are disturbances in electromagnetic fields.

I can get behind this, even if only by way of misunderstanding. All deviation
from nonexistence (Void) is a disturbance. All existence is one giant software
bug in the fabric of Nothingness (if it could have something like fabric).

That's why I've got to love Physics. To me, it's the most exquisite collection
of brain-racking mysteries I can imagine (modulo the narrow bounds of my
imagination, of course).

~~~
russdill
Physics (as far as we know) doesn't care about the direction of time. Cause
and effect depends on a fixed direction of time.

~~~
rubidium
Physics does clearly "care" about the direction of time, because physics is a
description of the physical universe. Dissipation, entropy, quantum
measurement... etc. all have time as a key concept.

Math may not care (and it's true that most of the fundamental physics
equations are time reversible). But the universe (and every experiment to
date) clearly shows that time has a direction and the equations work in that
direction.

~~~
russdill
That's only because one time-end of the universe has very low entropy. If you
have a region at maximum entropy, there is no way to tell.

------
justinpombrio
Is there a list somewhere of all of the basic particle interactions we know
of, represented as Feynman diagrams? My understanding is that there is a
small, finite number of them; it would be neat to see them enumerated.

------
ape4
Honest question: why does it take a supercomputer to come up with the animated
simulation in the article? It doesn't seem that there are that many variables.

~~~
KenoFischer
The simulation in question was created using Lattice QCD (which many people in
the HPC world mostly know as a benchmark to optimize their compilers/systems
on, but it's also a useful physics technique).

Basically what you end up doing is discretizing spacetime on some grid, e.g.
100x100x100x100 (not much larger than that, last I checked, or the problem
becomes intractable even on very large computers), and then sampling field
configurations Monte Carlo style. (The way to do it is actually really
clever - it turns out there is a mathematical trick that can take you from the
description of your theory right to a distribution from which you can draw
your field configurations.) The number of fields is decently large (some
discretizations introduce extra auxiliary fields for every physical one), and
in general you need to consider the entire field configuration to compute an
observable. Then do that a few billion times (to bring down statistical error)
at a couple of different lattice spacings (to be able to extrapolate to the
continuum limit), and you quickly end up with a very large computation.

There are a number of clever ways to reduce the amount of computation
required, but in general it's a very difficult problem.

Hope that makes some sense. I had looked into this in detail about two years
ago, but looking back, I feel like I only know every other word in my write-up
;).

~~~
evanb
I do lattice QCD, and this is a pretty decent explanation.

A few minor things:

People typically use lattice sizes that have lots of factors of 2, because
they fit on the supercomputers in a better way. So rather than 100, people do
96 or 128. Only some calculations require lattices this large.

The field content is usually a few fermions, plus the gluons. On each site
there is (usually - it depends on the discretization, as you say) a Dirac
spinor (2 spins * 2 particle/antiparticle * 3 colors = 12 numbers) for each
fermion, and on each link (i.e., the edges) there is an SU(3) matrix (3x3
complex doubles) for the glue.

The field configuration is simply the glue. But this already is a large amount
of data: volume * (9 complex doubles per link) * (1 link per spacetime
direction) * (4D spacetime) ~= 1.2 gigs for a 32^3 * 64 spacetime volume. This
is effectively a giant (sparse) matrix, which we want to invert.
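The link-counting arithmetic works out as follows (a sketch of the
bookkeeping only, not actual lattice code):

```python
# Gauge-field storage: one SU(3) matrix (3x3 complex doubles) per link,
# one link per spacetime direction per site.
COMPLEX_DOUBLE = 16             # bytes
SU3_LINK = 9 * COMPLEX_DOUBLE   # 3x3 complex entries per link
DIRECTIONS = 4                  # one link per direction in 4D spacetime

def gauge_field_bytes(nx: int, ny: int, nz: int, nt: int) -> int:
    sites = nx * ny * nz * nt
    return sites * DIRECTIONS * SU3_LINK

size_gib = gauge_field_bytes(32, 32, 32, 64) / 2**30
print(f"{size_gib:.2f} GiB")  # about 1.1 GiB for a 32^3 x 64 lattice
```

And that is a single field configuration; thousands of them are needed for
the statistics described above.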

The mathematical trick is called importance sampling
([https://en.wikipedia.org/wiki/Importance_sampling](https://en.wikipedia.org/wiki/Importance_sampling)),
which people accomplish with the Metropolis algorithm
([https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_al...](https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm)).
If only we could do a few billion samples! People usually make a few thousand
to a few hundred thousand samples.
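A minimal toy version of a Metropolis update (sampling a single made-up
"field" variable with the toy action S(φ) = φ²/2, so the weight exp(-S) is
just a Gaussian; a real lattice sweep updates every link of the gauge field -
this is only the skeleton of the accept/reject step):

```python
import math
import random

random.seed(0)  # deterministic toy run

def action(phi: float) -> float:
    """Toy single-site action; exp(-S) is the sampling weight."""
    return 0.5 * phi * phi

def metropolis(n_samples: int, step: float = 1.0) -> list:
    phi, samples = 0.0, []
    for _ in range(n_samples):
        proposal = phi + random.uniform(-step, step)  # propose a local change
        d_s = action(proposal) - action(phi)
        # Accept if the action decreases, else with probability exp(-dS):
        if d_s <= 0 or random.random() < math.exp(-d_s):
            phi = proposal
        samples.append(phi)
    return samples

samples = metropolis(100_000)
mean_sq = sum(p * p for p in samples) / len(samples)
print(mean_sq)  # <phi^2> should come out near 1.0 for this Gaussian weight
```

Observables are then averages over the samples, exactly as the quark and
gluon observables are averages over sampled field configurations.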

~~~
semi-extrinsic
Having done some QFT, and now doing hydrodynamics, I never knew that lattice
people use a staggered grid for the fermions and gluons. Do you know if it's
for the same reason as staggering velocity and pressure in CFD: to avoid
checkerboarding of the spinors (the pressure)?

~~~
evanb
KenoFischer is right---it comes down to gauge invariance.

Gauge symmetry is a local symmetry that dictates a lot of the structure of
QCD. If your discretization breaks it (and say, only recovers it in the
continuum limit) you will have very bad renormalization and fine-tuning
problems. Putting the gauge on the links automatically guarantees that gauge
symmetry is respected, even at the discretized level, and jibes very well with
the fact that the gauge field / gauge connection describes, in some way, how
to perform parallel transport. So, it's very natural for the gauge to connect
the sites together in that you need to know the value of the link in order to
compare quantities on two adjacent spacetime sites.

[https://en.wikipedia.org/wiki/Gauge_theory](https://en.wikipedia.org/wiki/Gauge_theory)
[https://en.wikipedia.org/wiki/Connection_form](https://en.wikipedia.org/wiki/Connection_form)
[https://en.wikipedia.org/wiki/Parallel_transport](https://en.wikipedia.org/wiki/Parallel_transport)

------
tmaly
Having read books like _Surely You're Joking, Mr. Feynman!_, the first thing
that sprang to my mind was those diagrams he drew on napkins.

------
guard-of-terra
I'm kind of confused by Feynman diagrams where a stray gluon flies off. What
happens to it then? Where does it go?

~~~
nhatcher
That's just a part of a bigger diagram. Due to color confinement you can't
just have one gluon.

~~~
guard-of-terra
They have a diagram that emits a single gluon at the head of every particle
physics article on Wikipedia. Is that a make-believe diagram? I wouldn't
expect that from them.

~~~
nhatcher
It is not a make-believe diagram. It is a completely valid diagram, but it is
just part of something bigger. Also, the two quarks in the diagram need to
ultimately couple to something else; the "in" and "out" final states must be
colorless. QCD is an extremely complex theory: one usually computes a piece of
the puzzle and tries to extract physical information from that.

~~~
untilted
Just to add to this: the part that comes after this initial gluon emission
(_hadronization_) is not very well understood, since at this length scale the
perturbative (i.e., Feynman diagram) approach to QCD breaks down. See e.g.
[http://www.quantumdiaries.org/2010/12/11/when-feynman-diagrams-fail/](http://www.quantumdiaries.org/2010/12/11/when-feynman-diagrams-fail/)

------
madengr
What does he mean by "gravity responds to all kinds of energy"? What about
photons?

~~~
egjerlow
Yes, those too. Light is influenced by gravitational fields!

~~~
de_Selby
Is it not more a case of space-time being influenced rather than light?

~~~
madengr
Yes, I would think. Gravity warps space, and light just follows a line in
space, which is a curve in space-time. But since gravity can affect light,
does light (energy) affect gravity?

~~~
auntienomen
Yes. The gravitational field couples to any energy; it doesn't care what the
source is. Light carries energy, so the gravitational field responds to it.

