
Richard Feynman's Integral Trick - throwawaymath
https://medium.com/@jackebersole/richard-feynmans-integral-trick-e7afae85e25c
======
sillysaurus3
Here's a Feynman mystery that I've asked about for years:

[http://longnow.org/essays/richard-feynman-connection-machine/](http://longnow.org/essays/richard-feynman-connection-machine/)

 _By the end of that summer of 1983, Richard had completed his analysis of the
behavior of the router, and much to our surprise and amusement, he presented
his answer in the form of a set of partial differential equations. To a
physicist this may seem natural, but to a computer designer, treating a set of
boolean circuits as a continuous, differentiable system is a bit strange.
Feynman's router equations were in terms of variables representing continuous
quantities such as "the average number of 1 bits in a message address." I was
much more accustomed to seeing analysis in terms of inductive proof and case
analysis than taking the derivative of "the number of 1's" with respect to
time. Our discrete analysis said we needed seven buffers per chip; Feynman's
equations suggested that we only needed five. We decided to play it safe and
ignore Feynman._

How do you analyze boolean circuits using partial differential equations?

What else can you accomplish with this technique?

No one seems to know how he did it.

~~~
tlb
There's an idea used in the analysis of error correcting codes, to set up a
polynomial where the coefficients are the number of codewords with each
Hamming weight. It works out the same as adding up all the codewords, where
you replace each 0 by x and each 1 by y. So a code

    010 101 => xyx + yxy = x^2y + xy^2

[https://en.wikipedia.org/wiki/Enumerator_polynomial](https://en.wikipedia.org/wiki/Enumerator_polynomial)

Then you can use real (or even complex) analysis on these polynomials. There's
a thick book of things you can prove about sets of binary words using those
techniques: [https://www.elsevier.com/books/the-theory-of-error-correcting-codes/macwilliams/978-0-444-85193-2](https://www.elsevier.com/books/the-theory-of-error-correcting-codes/macwilliams/978-0-444-85193-2)

And Noam Elkies' course notes on it:
[http://www.math.harvard.edu/~elkies/M256.13/index.html](http://www.math.harvard.edu/~elkies/M256.13/index.html)
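
For a concrete sense of the construction, here's a minimal sketch in Python (my own illustration, not taken from the book or notes above) that builds the weight-distribution coefficients for a handful of binary words:

```python
from collections import Counter

def weight_enumerator(codewords):
    """Coefficient map {(i, j): count} of W(x, y) = sum over codewords
    of x^i * y^j, where j is the Hamming weight and i = n - j."""
    coeffs = Counter()
    for cw in codewords:
        j = cw.count("1")              # number of 1 bits
        coeffs[(len(cw) - j, j)] += 1  # one term x^(n-j) y^j per codeword
    return dict(coeffs)

# The two words from the example above: 010 and 101
print(weight_enumerator(["010", "101"]))
# -> {(2, 1): 1, (1, 2): 1}, i.e. x^2*y + x*y^2
```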

~~~
sillysaurus3
_mic drop_

Thank you. This was a legendary answer. I really have been hunting this
question for years:
[https://news.ycombinator.com/item?id=13764917](https://news.ycombinator.com/item?id=13764917)

This seems like how he did it. Error correcting codes are a natural domain for
questions like "What are the fewest gates required to transmit some
information?" But the missing puzzle piece was that you can analyze them with
polynomials.

~~~
graycat
Error correcting codes, coding theory, etc. is an old, serious, and deep
field. Once I took a grad course in it. The course was heavily abstract
algebra, especially finite fields.

A respected text is

W. Wesley Peterson and E. J. Weldon, Jr., _Error-Correcting Codes, Second
Edition_.

Last I heard, there has been some surprisingly good progress in the field.
Might look up, say, "turbo codes".

------
xamuel
The article points out that the trick can be applied to integrals to which it
doesn't seem applicable, by introducing a new parameter to the integrand which
was not there before.
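
As a sanity check of that tactic on a textbook instance (my example, not the article's integral): to evaluate the integral of e^(-x) sin(x)/x over [0, inf), introduce a parameter t via I(t) = integral of e^(-tx) sin(x)/x; then I'(t) = -1/(1+t^2), so I(t) = pi/2 - arctan(t) and I(1) = pi/4. A quick numeric confirmation with a midpoint rule:

```python
import math

def integrand(x):
    # e^{-x} * sin(x)/x, with the removable singularity filled in at x = 0
    return math.exp(-x) * math.sin(x) / x if x > 0 else 1.0

# Midpoint rule on [0, 40]; the e^{-x} factor makes the truncated tail negligible
a, b, n = 0.0, 40.0, 200_000
h = (b - a) / n
approx = h * sum(integrand(a + (k + 0.5) * h) for k in range(n))

print(approx, math.pi / 4)  # both ~0.785398
```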

In general, that kind of tactic gives me hope someday mankind might find
short, easy solutions to problems that currently seem hopeless (P=NP, Riemann
Hypothesis, the 3n+1 problem, etc.)

For example, maybe someone will define spaces P(x) and NP(x), depending on a
parameter x, with P(1)=P and NP(1)=NP, and then they'll show (in some simple
way that makes us all kick ourselves) that P(x)=NP(x) for all x>=sqrt(2) and
P(x)<>NP(x) for all x<sqrt(2). "And thus, P<>NP."

It makes you realize we really don't have a good sense of distances in the
space of mathematical proofs. It's so high-dimensional, a solution that seems
light-years away can actually be right under your nose, if you just turn your
nose in the correct direction.

~~~
no_identd
Are you already aware of
[https://en.wikipedia.org/wiki/Parameterized_complexity](https://en.wikipedia.org/wiki/Parameterized_complexity)
?

~~~
xamuel
I was not aware of that, but it looks pretty obviously related to what I
wrote, lol! Thanks :)

------
vinchuco
> A given problem, such as the integral we just computed, may appear to be
> intractable on its own. However, by stepping back and considering the
> problem not in isolation but as an individual member of an entire class of
> related problems, we can discern facts that were previously hidden from us.

A professor put it this way: "the more you assume, the more you can prove"

------
throwawaymath
While we're here, I think that anyone who enjoys this article will probably
like this answer to a question on math.stackexchange:
[https://math.stackexchange.com/questions/562694/integral-int-11-frac1x-sqrt-frac1x1-x-ln-left-frac2-x22-x1/565626#565626](https://math.stackexchange.com/questions/562694/integral-int-11-frac1x-sqrt-frac1x1-x-ln-left-frac2-x22-x1/565626#565626).

The asker has an integral for which Mathematica and Maple both failed to
resolve a closed form solution. Then they use a symmetry to break up the
integral from 0 to infinity into two parts, substitute, integrate, substitute
again and finally apply the residue theorem.

------
gbacon
“Differentiation is mechanics; integration is art.”

~~~
commandlinefan
As I was studying math, I noticed this as a trend: somebody defines addition,
and everything works fine until somebody comes along and suggests "inverse
addition" which introduces this whole new class of numbers that we call
"negative numbers". Somebody else defines multiplication and everything's
fine, it can even be worked to account for negative numbers... until somebody
suggests inverse multiplication which introduces the complexity of this whole
new classification of numbers called rational numbers. Somebody defines square
roots and now we have irrational and even _imaginary_ numbers. And, of course,
somebody defines differentiation and everything is completely straightforward
until somebody else defines inverse differentiation and we have this whole
class of, while not numbers, problems that are intractable.

~~~
rm445
It's a nice thought, but I recall reading elsewhere that integration was
developed separately to differentiation (to find areas and volumes) until they
were found to be related.

------
graycat
Differentiation under the integral sign needs a theorem with a proof. There
are several versions of the theorem. The best proofs I know of are from the
dominated convergence theorem of measure theory.
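
One statement of the rule with sufficient conditions, as the dominated convergence route gives them (my paraphrase, not a quotation from any particular text):

```latex
% If f(x,t) is integrable in x for each t, \partial f/\partial t exists,
% and |\partial f/\partial t(x,t)| \le g(x) for some integrable g, then
\frac{d}{dt}\int_a^b f(x,t)\,dx \;=\; \int_a^b \frac{\partial f}{\partial t}(x,t)\,dx .
```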

For the Putnam exam, yes, it's challenging. It's so challenging that doing the
work to do well on the exam is comparable to creating and publishing peer
reviewed original research in math, and such a publication will likely do much
more for a student's academic progress than the Putnam.

Indeed, soon enough in math graduate school at a good research university, a
student learns that everyone admits that no one can carry all the library
around between their ears and doesn't try to. Instead, the three most
important measures of progress are, in no particular order, research,
research, and research.

Finally, IIRC, how to do integration of algebraic expressions in closed form
has long been a solved problem and long ago was coded up in some software --
IIRC, the software will report the result or state that no closed form
solution exists.

~~~
imurray
_Finally, IIRC, how to do integration of algebraic expressions in closed form
has long been a solved problem and long ago was coded up in some software --
IIRC, the software will report the result or state that no closed form
solution exists._

Are you thinking of indefinite integrals?
[https://en.wikipedia.org/wiki/Risch_algorithm](https://en.wikipedia.org/wiki/Risch_algorithm)

This article is about a trick for definite integrals. If you can do the
indefinite integral you can do a definite one, but not necessarily vice versa.

~~~
graycat
You are no doubt ahead of me on this.

With a little review (the Internet, Google, and Wikipedia are a new world), I
recall that I was thinking of

Macsyma

with apparently a nice description at

[https://en.wikipedia.org/wiki/Macsyma](https://en.wikipedia.org/wiki/Macsyma)

Apparently the software is still available, likely better than the original
which apparently goes back to the MIT project MAC.

The Wikipedia description early on mentions indefinite integration.

IIRC the software implements algorithms that "solve" the problem of indefinite
integration, calculus course "anti-differentiation".

I'm plainly passing out what I remember, some of it confirmed by the Wikipedia
page, and memory of hearsay.

But likely now one could download a copy of the software with whatever
documentation they have and see how good it is, how much it claims, and that's
a lot more than I could do the last time I heard anything about Macsyma.

I'm not going to set my startup aside and rush to have fun with Macsyma, as
much fun as I have had with calculus, but I will make a note in my little
system for such chunks of information!

------
inp
Thanks for this great link! Richard Feynman, always so pleasant and inspiring.

~~~
throwawaymath
Thanks! I like to collect mathematical "tricks" like this. This was on the
/r/math subreddit a couple of days ago and many people were expressing
surprise that they'd never encountered it.

It seems most people encounter this either covered briefly in real analysis or
treated in full in mathematical physics or quantum field theory courses.

------
metaphor
> _It turns out [Leibniz integral rule is] not taught very much in the
> universities; they don’t emphasize it._

To be sure, this is typically touched on for the first time in advanced
calculus at the undergraduate level, which--at least at my alma mater--isn't
mandatory for anyone except math majors.

~~~
throwawaymath
Is that "advanced calculus" course part of the university's formal calculus
sequence, or is it more of an analysis course? If it's not an analysis course,
what topics are covered that aren't in the lower calculus courses?

~~~
metaphor
It's a 2-semester analysis requirement for math majors, with a dual elective
track for the rest of us _engineers and scientists_.

------
jordigh
So how about a situation in which the trick doesn't work? I think an integrand
with some singularities might do it. Some variation where the integrand ends
up being x^3y/(x^2 + y^2)^2 should do it.
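
A close cousin of that integrand does break the naive swap (my example, not necessarily the exact variation meant above). With f(x, y) = x*y^3/(x^2 + y^2)^2, one gets F(y) = integral over [0, 1] of f dx = y/2 - y^3/(2(1 + y^2)), so F'(0) = 1/2; but f(x, 0) = 0 and df/dy(x, 0) = 0 for every x > 0, so differentiating under the integral sign gives 0 instead. A numeric check of the difference quotient:

```python
import math

def f(x, y):
    # x * y^3 / (x^2 + y^2)^2 -- vanishes identically along y = 0 (for x > 0)
    return x * y**3 / (x**2 + y**2)**2

def F(y, n=200_000):
    # Midpoint rule for F(y) = integral of f(x, y) over x in [0, 1]
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h, y) for k in range(n))

y = 0.01
print(F(y) / y)  # ~0.5, yet integrating df/dy(x, 0) over x gives 0
```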

------
crb002
Is that the one where he bounds between exponentially growing trapezoids? I
recall it being used by his student Jack Lutz like crazy in his complexity
theory classes.

------
topologie
I'm not 100% sure why f(1)=0...

I ran it in Mathematica and it checks, but I just can't see why at first
glance... Any help here?

~~~
alanbernstein
I had the same question. Not sure, but if you look at the plot of the
indefinite integral, you can see it appears to have period 2pi
([http://www.wolframalpha.com/input/?i=integral+log(2-2cos(x))](http://www.wolframalpha.com/input/?i=integral+log\(2-2cos\(x\)\))).
The indefinite integral for log(3-3cos(x)), for example, is not periodic - the
value 2 inside the log seems to be a critical point, so I guess it was chosen
deliberately to force f(1)=0.
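
The identity behind it: 2 - 2cos(x) = 4 sin^2(x/2), so the integral over [0, pi] of log(2 - 2cos(x)) equals pi*log(4) + 4 * (integral over [0, pi/2] of log(sin(u)) du) = pi*log(4) - 2*pi*log(2) = 0, using the classical value -(pi/2)*log(2) for the log-sine integral. A quick numeric confirmation (midpoint rule, which avoids the log singularity at x = 0):

```python
import math

# f(1) = integral over [0, pi] of log(2 - 2cos(x)); should come out 0
n = 200_000
h = math.pi / n
total = h * sum(math.log(2 - 2 * math.cos((k + 0.5) * h)) for k in range(n))
print(total)  # ~0, up to quadrature error from the singularity at x = 0
```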

------
steventhedev
Offtopic, but when did medium start with a soft-paywall? 3 articles per month?

~~~
cristoperb
It looks like authors have been able to opt-in to the paywall since late 2017:

[http://www.niemanlab.org/2017/10/starting-today-anyone-who-publishes-something-on-medium-can-paywall-it/](http://www.niemanlab.org/2017/10/starting-today-anyone-who-publishes-something-on-medium-can-paywall-it/)

But I think this is the first paywalled Medium article I've actually seen (and
I think it was free when I came across it on r/math a few days ago)... but
I've developed a bit of a reflexive avoidance of Medium content anyway.

