Richard Feynman's Integral Trick (medium.com)
182 points by throwawaymath 7 months ago | 35 comments



Here's a Feynman mystery that I've asked about for years:

http://longnow.org/essays/richard-feynman-connection-machine...

By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman.

How do you analyze boolean circuits using partial differential equations?

What else can you accomplish with this technique?

No one seems to know how he did it.


There's an idea used in the analysis of error-correcting codes: set up a polynomial whose coefficients count the codewords of each Hamming weight. It works out the same as adding up all the codewords, with each 0 replaced by x and each 1 by y. So the code

  010 101 => xyx + yxy = x^2y + xy^2
https://en.wikipedia.org/wiki/Enumerator_polynomial

Then you can use real (or even complex) analysis on these polynomials. There's a thick book of things you can prove about sets of binary words using those techniques: https://www.elsevier.com/books/the-theory-of-error-correctin...

And Noam Elkies' course notes on it: http://www.math.harvard.edu/~elkies/M256.13/index.html
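
Here's a minimal sketch of the construction in Python with sympy (mine, just to make the idea concrete), using the toy code above plus the three-bit even-weight code:

  # Weight enumerator: replace each 0 with x, each 1 with y, sum over codewords.
  from sympy import symbols, expand

  x, y = symbols('x y')

  def enumerator(codewords):
      total = 0
      for word in codewords:
          term = 1
          for bit in word:
              term *= y if bit == '1' else x
          total += term
      return expand(total)

  print(enumerator(['010', '101']))                # x**2*y + x*y**2
  print(enumerator(['000', '011', '101', '110']))  # x**3 + 3*x*y**2

The coefficient of x^(n-w) y^w in the result counts the codewords of Hamming weight w, which is exactly the polynomial the analysis works with.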


mic drop

Thank you. This was a legendary answer. I really have been hunting this question for years: https://news.ycombinator.com/item?id=13764917

This seems like how he did it. Error correcting codes are a natural domain for questions like "What are the fewest gates required to transmit some information?" But the missing puzzle piece was that you can analyze them with polynomials.


Error correcting codes, coding theory, etc. is an old, serious, and deep field. Once I took a grad course in it. The course was heavily abstract algebra, especially finite fields.

A respected text is

W. Wesley Peterson and E. J. Weldon, Jr., Error-Correcting Codes, Second Edition.

Last I heard, there has been some surprisingly good progress in the field. Might look up, say, "turbo codes".


His analysis was on a machine with a very unusual architecture. I've generally assumed that the problem he was analyzing was fairly specific to that design.

The Thinking Machines computers he was analyzing were massively parallel: they had thousands of processors, but the processors were unbelievably primitive. They operated on a single bit and had a tiny number of instructions. The part Feynman analyzed was the routing between the processors.

You also missed the follow-up. When the design was nearing completion they realized that they wouldn't have the silicon for 7 buffers per chip, so the hardware was finalized with 5 buffers per chip. It was enough.


Here's a thread from last year which has a few insightful discussions on the topic: https://news.ycombinator.com/item?id=13762614

I imagine he used some of the "tools in his toolbox" he acquired from various fields of physics. Given that his answer was a set of PDEs, it was probably also an approximation.


You may call v the (time-dependent) density of 1s in the flow (hence 1 - v is the density of 0s). Now you simply assume the flow is a differentiable function.

As a matter of fact, it will resemble reality much better than many economic models, for example (if not all): there is a HUGE number of bits per second.
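
A toy illustration of that viewpoint (a sketch in Python/numpy, emphatically not Feynman's actual router equations): smooth a discrete bit stream into a density v(t), and it becomes something you can differentiate like any other signal.

  # Toy sketch only: treat "the average number of 1 bits" as a continuous,
  # differentiable quantity by smoothing a discrete bit stream.
  import numpy as np

  rng = np.random.default_rng(0)
  bits = rng.integers(0, 2, size=100_000)        # discrete 0/1 stream
  window = 1_000
  v = np.convolve(bits, np.ones(window) / window, mode='valid')  # density of 1s over time
  dv_dt = np.gradient(v)                         # now "the number of 1s" has a derivative

  print(v[:3], dv_dt[:3])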


(Cheers for asking! I've been wondering about that too, ever since reading it.)


I mean 1 - x * y can be interpreted as !(x & y) in a straightforward manner, and the NAND gate is universal, so you could certainly express any combinational logic. Obviously x and y can be time-dependent functions, and I think it would not be too hard to find a suitable equation for some kind of flip-flop, and then you've already got everything for building arbitrary finite state machines. I guess you could even introduce things like gate delays and more realistic impulse responses without too much trouble. There are certainly many other ways to model different aspects of digital logic, and I have no idea what exactly Feynman needed and used.
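
A minimal sketch of that encoding in Python (my own, just to show the idea): write the gates as polynomials and check that they agree with the Boolean truth tables on {0, 1}, while still making sense for "soft" values in between.

  # Boolean gates as real polynomials; on {0, 1} they match the truth tables.
  def NOT(x):     return 1 - x
  def AND(x, y):  return x * y
  def NAND(x, y): return 1 - x * y                 # the universal gate from above
  def OR(x, y):   return 1 - (1 - x) * (1 - y)
  def XOR(x, y):  return x + y - 2 * x * y

  for a in (0, 1):
      for b in (0, 1):
          assert NAND(a, b) == (0 if (a and b) else 1)
          assert XOR(a, b) == (a ^ b)

  print(NAND(0.9, 0.8))   # about 0.28 -- a "soft" NAND of two noisy inputs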


The article points out that the trick can be applied to integrals to which it doesn't seem applicable, by introducing a new parameter to the integrand which was not there before.
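
For anyone who hasn't seen the move, the standard textbook illustration (probably not the article's exact example) is to evaluate the integral of (x^2 - 1)/ln x from 0 to 1 by embedding it in a family:

  I(b) = \int_0^1 \frac{x^b - 1}{\ln x}\,dx,
  \qquad
  I'(b) = \int_0^1 x^b\,dx = \frac{1}{b+1},
  \qquad
  I(0) = 0 \;\Rightarrow\; I(b) = \ln(b+1),

so the original integral is I(2) = ln 3, even though the integrand on its own has no elementary antiderivative in x.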

In general, that kind of tactic gives me hope someday mankind might find short, easy solutions to problems that currently seem hopeless (P=NP, Riemann Hypothesis, the 3n+1 problem, etc.)

For example, maybe someone will define spaces P(x) and NP(x), depending on a parameter x, with P(1)=P and NP(1)=NP, and then they'll show (in some simple way that makes us all kick ourselves) that P(x)=NP(x) for all x>=sqrt(2) and P(x)<>NP(x) for all x<sqrt(2). "And thus, P<>NP."

It makes you realize we really don't have a good sense of distances in the space of mathematical proofs. It's so high-dimensional, a solution that seems light-years away can actually be right under your nose, if you just turn your nose in the correct direction.


Are you already aware of https://en.wikipedia.org/wiki/Parameterized_complexity ?


I was not aware of that, but it looks pretty obviously related to what I wrote, lol! Thanks :)


> A given problem, such as the integral we just computed, may appear to be intractable on its own. However, by stepping back and considering the problem not in isolation but as an individual member of an entire class of related problems, we can discern facts that were previously hidden from us.

A professor put it this way: "the more you assume, the more you can prove"


While we're here, I think that anyone who enjoys this article will probably like this answer to a question on math.stackexchange: https://math.stackexchange.com/questions/562694/integral-int....

The asker has an integral for which Mathematica and Maple both failed to find a closed-form solution. Then the answer uses a symmetry to break up the integral from 0 to infinity into two parts, substitutes, integrates, substitutes again, and finally applies the residue theorem.


“Differentiation is mechanics; integration is art.”


As I was studying math, I noticed this as a trend: somebody defines addition, and everything works fine until somebody comes along and suggests "inverse addition" which introduces this whole new class of numbers that we call "negative numbers". Somebody else defines multiplication and everything's fine, it can even be worked to account for negative numbers... until somebody suggests inverse multiplication which introduces the complexity of this whole new classification of numbers called rational numbers. Somebody defines square roots and now we have irrational and even _imaginary_ numbers. And, of course, somebody defines differentiation and everything is completely straightforward until somebody else defines inverse differentiation and we have this whole class of, while not numbers, problems that are intractable.


It's a nice thought, but I recall reading elsewhere that integration was developed separately from differentiation (to find areas and volumes) until the two were found to be related.


This is exactly right. You have to get creative sometimes.

I was excited as a kid when I was able to get integral(cos(x^(1/2))) - literally jumping up and down.


"Nature laughs at the difficulties of integration." [1]

[1] https://en.wikiquote.org/wiki/Pierre-Simon_Laplace


> Differentiation is mechanics; integration is art.

This is silly. In symbolic calculus, there are well-known algorithms for computing the antiderivative of an elementary expression (and for deciding whether an elementary antiderivative exists). In numerical calculus, integration is actually much easier than differentiation.


Differentiation under the integral sign needs a theorem with a proof. There are several versions of the theorem. The best proofs I know of are from the dominated convergence theorem of measure theory.
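
For reference, one common version along those lines (stated from memory; check a real analysis text for the precise hypotheses):

  \text{If } f(\cdot,t)\in L^1(\mu) \text{ for each } t,\;
  \partial_t f(x,t) \text{ exists, and } |\partial_t f(x,t)|\le g(x)
  \text{ for some } g\in L^1(\mu), \text{ then }
  \frac{d}{dt}\int f(x,t)\,d\mu(x) = \int \partial_t f(x,t)\,d\mu(x).

The dominating function g is exactly what lets dominated convergence push the difference quotient through the integral.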

For the Putnam exam, yes, it's challenging. It's so challenging that doing the work to do well on the exam is comparable with creating and publishing peer-reviewed original research in math, and such a publication will likely do much more for the academic progress of a student than the Putnam.

Indeed, soon enough in math graduate school at a good research university, a student learns that everyone admits that no one can carry all the library around between their ears and doesn't try to. Instead, the three most important measures of progress are, in no particular order, research, research, and research.

Finally, IIRC, how to do integration of algebraic expressions in closed form has long been a solved problem and long ago was coded up in some software -- IIRC, the software will report the result or state that no closed form solution exists.


> Finally, IIRC, how to do integration of algebraic expressions in closed form has long been a solved problem and long ago was coded up in some software -- IIRC, the software will report the result or state that no closed form solution exists.

Are you thinking of indefinite integrals? https://en.wikipedia.org/wiki/Risch_algorithm

This article is about a trick for definite integrals. If you can do the indefinite integral you can do a definite one, but not necessarily vice versa.
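
A quick illustration of that asymmetry (a sketch with sympy, not tied to the article's integral):

  # An integrand with no elementary antiderivative still has a clean definite
  # integral; sympy has to fall back on the special function erf for the
  # indefinite case.
  from sympy import symbols, exp, integrate, oo

  x = symbols('x')
  print(integrate(exp(-x**2), x))             # sqrt(pi)*erf(x)/2 -- not elementary
  print(integrate(exp(-x**2), (x, -oo, oo)))  # sqrt(pi)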


You are no doubt ahead of me on this.

With a little review (the Internet, Google, and Wikipedia are a new world), I recall that I was thinking of

Macsyma

with apparently a nice description at

https://en.wikipedia.org/wiki/Macsyma

Apparently the software is still available, likely better than the original which apparently goes back to the MIT project MAC.

The Wikipedia description early on mentions indefinite integration.

IIRC the software implements algorithms that "solve" the problem of indefinite integration, calculus course "anti-differentiation".

I'm plainly passing on what I remember, some of it confirmed by the Wikipedia page, and some of it memory of hearsay.

But likely one could now download a copy of the software with whatever documentation they have and see how good it is and how much it claims, and that's a lot more than I could do the last time I heard anything about Macsyma.

I'm not going to set my startup aside and rush to have fun with Macsyma, as much fun as I have had with calculus, but I will make a note in my little system for such chunks of information!


Thanks for this great link! Richard Feynman, always so pleasant and inspiring.


Thanks! I like to collect mathematical "tricks" like this. This was on the /r/math subreddit a couple of days ago and many people were expressing surprise that they'd never encountered it.

It seems most people encounter this either as brief coverage in real analysis or with full emphasis in mathematical physics or quantum field theory courses.


> It turns out [Leibniz integral rule is] not taught very much in the universities; they don’t emphasize it.

To be sure, this is typically touched on for the first time in advanced calculus at the undergraduate level, which--at least at my alma mater--isn't mandatory for anyone except math majors.


Is that "advanced calculus" course part of the university's formal calculus sequence, or is it more of an analysis course? If it's not an analysis course, what topics are covered that aren't in the lower calculus courses?


It's a 2-semester analysis requirement for math majors, with a dual elective track for the rest of us engineers and scientists.


My alma mater's undergrad physics curriculum required math through boundary value problems: basic calc, multivariate calc, vector calculus as a separate course with more in-depth material, diff eq, partial differential equations/BVP. The whole math curriculum was about 6 semesters. Basically any physics major auto-qualified for a math cert/minor.


So how about a situation in which the trick doesn't work? I think an integrand with some singularities might do it. Some variation where the integrand ends up being x^3y/(x^2 + y^2)^2 should do it.
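
Singularities can indeed break it. One standard counterexample in that family (possibly not the exact variant above, but the same flavor):

  F(y) = \int_0^1 \frac{y^3}{(x^2 + y^2)^2}\,dx
       = \frac{y}{2(1 + y^2)} + \frac{1}{2}\arctan\frac{1}{y} \quad (y > 0),

so F(y) -> pi/4 as y -> 0+ while F(0) = 0. F isn't even continuous at 0, yet integrating the y-derivative of the integrand at y = 0 gives 0; the Leibniz rule fails because there is no integrable dominating bound near the singularity at the origin.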


Is that the one where he bounds between exponentially growing trapezoids? I recall it being used by his student Jack Lutz like crazy in his complexity theory classes.


I'm not 100% sure why f(1)=0...

I ran it in Mathematica and it checks, but I just can't see why at first glance... Any help here?


I had the same question. Not sure, but if you look at the plot of the indefinite integral, you can see it appears to have period 2pi (http://www.wolframalpha.com/input/?i=integral+log(2-2cos(x))). The indefinite integral for log(3-3cos(x)), for example, is not periodic - the value 2 inside the log seems to be a critical point, so I guess it was chosen deliberately to force f(1)=0.
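
If the article's f is the family \int_0^\pi \ln(1 - 2b\cos x + b^2)\,dx (the plotted log(2 - 2cos x) is the b = 1 case), the identity 2 - 2cos x = 4 sin^2(x/2) explains the coincidence:

  f(1) = \int_0^\pi \ln(2 - 2\cos x)\,dx
       = \int_0^\pi \ln\!\bigl(4\sin^2\tfrac{x}{2}\bigr)\,dx
       = \pi\ln 4 + 4\int_0^{\pi/2} \ln\sin u\,du
       = 2\pi\ln 2 - 2\pi\ln 2 = 0,

using the classical value \int_0^{\pi/2} \ln\sin u\,du = -\tfrac{\pi}{2}\ln 2. So the 2 really is special: with ln(3 - 3cos x) you pick up an extra pi*ln(3/2) over [0, pi], which is also why its antiderivative fails to be periodic.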


Off topic, but when did Medium start with a soft paywall? 3 articles per month?


It looks like authors have been able to opt-in to the paywall since late 2017:

http://www.niemanlab.org/2017/10/starting-today-anyone-who-p...

But I think this is the first paywalled Medium article I've actually seen (and I think it was free when I came across it on r/math a few days ago)... but I've developed a bit of a reflexive avoidance of Medium content anyway.



