Hacker News
Differentiation Under Integral Sign (2015) [pdf] (williams.edu)
101 points by dualvectorfoil 20 days ago | 59 comments

Interesting! It wasn't immediately clear what exactly this "trick" is that Feynman was talking about. This document implies that the trick is to differentiate the integral according to another variable (in this case, 't'), and then see where that gets you.
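A minimal sketch of that process in SymPy (assuming it's available), applied to the classic ∫_0^∞ sin(x)/x dx with the standard regularizing factor e^(-tx):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# G(t) = ∫_0^∞ e^(-t x) sin(x)/x dx.  Differentiating under the
# integral sign with respect to t kills the awkward 1/x factor:
dG = sp.integrate(sp.diff(sp.exp(-t * x) * sp.sin(x) / x, t), (x, 0, sp.oo))
print(sp.simplify(dG))           # -1/(t**2 + 1)

# Integrate back in t; the constant is fixed by G(t) -> 0 as t -> oo.
G = sp.integrate(dG, t) + sp.pi / 2    # -atan(t) + pi/2
print(G.subs(t, 0))              # pi/2, i.e. ∫_0^∞ sin(x)/x dx
```

"See where that gets you" here means: the derivative G'(t) is an easy integral, and undoing the t-derivative recovers the hard one.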

Seeing this sort of creative mathematical process in action makes me think that maybe [1] is right, and math is sometimes more art than science.

[1] https://www.youtube.com/watch?v=Ws6qmXDJgwU&feature=emb_titl...

I used this "trick" in many contexts in grad school. Later on, I learned that my bible, Gradshteyn and Ryzhik, also uses similar techniques for some of the integrals. I don't have a reference for this; it was conveyed to me verbally by a professor.

I used this in my thesis, in comparing an analytical solution to a problem to a numerical solution, in order to determine some parameters of the numerical solution for idealized wavefunctions. My simulations needed non-idealized wavefunctions, and this mechanism enabled me to optimize parameters for this, and set approximate error bounds.

It (math) really is a science, but there is a strong aspect of artistry involved.

Oh man, Gradshteyn and Ryzhik!

My masters was about modifying potential flow singularities (singularities to cancel other singularities... ahem, I was young) to model vortices shed from blunt surfaces - part of fast/cheap performance prediction for wave energy converters. Didn’t work amazingly well physically, but I will never forget the fun I had that summer figuring out how to work with those singular integral equations. Working on a set of terms until at long last a form emerged that matched with G&R was such a breakthrough moment!

I saw this question posed a few weeks ago and it broke my brain for a few days pondering it:

Is math invented or discovered?

I think both! There are parts of mathematics that just look like truths that have been waiting the whole time to be discovered. On the other hand, people invent problem solving techniques which definitely feel more like inventions than discoveries. Then in the middle, there are made-up mathematical structures introduced to bridge between two “clearly discovered” canonical objects, but this made-up structure certainly has the invented flavour.

So I think it is a continuum, and really fantastic mathematics will feature ideas from all the way along the spectrum: “discoveries” for the beauty, “inventions” for the problem solving, and the “in between” for the subtlety and art.

How does math software such as Mathematica solve integrals?

Are maps invented or discovered?

Eh, this is just due to linguistic ambiguity. The syntax of mathematics (as a language) is used to describe relationships and properties that are discovered.

It is discovered. Two alien species could effectively communicate with math alone (after exchanging notation translation keys).

I agree with your logic but not your premise. Two alien species could effectively communicate if they happened to agree on a shared set of fundamental axioms. The axiom of choice is somewhat contentious here on earth, since it underlies the Banach-Tarski paradox, and it's not clear at all that a sophisticated alien society would have ever accepted the axiom of choice into their mathematics.

Is there a list of generally accepted root mathematical axioms?

This is an interesting thought experiment. The space of all consistent and potentially useful mathematical constructs is gigantic, so I think there would be a good chance that two alien species would share almost no mathematical constructs, and would require decades or centuries to discover each other's - so in this sense, there is a large element of invention to mathematics as a human endeavor.

Even for physics, there are often many mathematical theories that can be used to model the same physical observations (talking about equivalent structures, not about competing theories). For example, many problems can be described equivalently using vectors, complex numbers, or linear algebra. There is a good chance that there are many (perhaps infinitely many) other systems that we haven't thought about that could be used equivalently.

So, while I agree that ultimately the structures in mathematics exist independent of our use of them, so we are only discovering pre-existing structures, I would also say that new mathematical theories are developed using a process that is more similar to invention than to discovery (i.e. you can't explore the space of mathematical theories to discover new ones, as it is infinite in every direction - you can only explore the properties of a structure you essentially invent for yourself).

A flat head screwdriver was designed to insert and remove screws. But it can also be used to open a paint can! OMG! Is the flat head screwdriver invented or discovered?

Our mathematical system is an invented human language. We know that all symbolic systems with sufficient complexity are equivalent (see Turing machines.) Finding an arbitrary one to be useful and flexible is not evidence of magic.

That thing to which all symbolic systems with sufficient complexity are equivalent, discovered or invented?

(Yours is a "turtles all the way down" argument, I think.)

Just to explore this idea a little more: insofar as math involves inspiration from the natural world and the logical consequences of axioms, I would consider it a "science" (since these are sort of exploratory and discovered consequences of "facts"). Insofar as it involves redefining axioms, looking at them in a new way, or inventing new idealized objects/methods altogether, I would consider it an art.

It's almost like you're backing up a bit and taking a different route forward to see if it gets you around a roadblock.

These are standard for a (graduate) course of real analysis — see for instance section 2.3 and exercises after it in Folland, "Real analysis".

The reason they are not usually covered in calculus is that, to justify such differentiation, one needs the notions of the Lebesgue integral and measure. The Riemann integral from calculus courses is just not robust enough. Of course, if the function inside the integral is nice enough, nothing bad happens, and the differentiation is valid.

>one needs the notions of Lebesgue integral and measure. The Riemann integral from calculus courses is just not robust enough.

definitely not the case. leibniz's rule

    d/dx ∫_a^b f(x, t) dt = ∫_a^b f_x(x, t) dt

only requires fubini's theorem for exchanging order of integration

    ∫_a^b ∫_c^d g(x, y) dy dx = ∫_c^d ∫_a^b g(x, y) dx dy

which i'm pretty sure everyone learns in multivariable calc.

i personally learned it from apostol's calc (not analysis) books.

This is not true - it's not necessarily obvious that things should hold when you're dealing with infinity as one of the bounds. In fact, there are cases where interchanging the order is not allowed when you're in that situation depending on the integrand.
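This already bites on a finite square, not just with infinite bounds. A standard counterexample, sketched in SymPy (assuming it's available):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
# classic integrand whose two iterated integrals over [0,1]^2 disagree
f = (x**2 - y**2) / (x**2 + y**2)**2

I_yx = sp.integrate(f, (y, 0, 1), (x, 0, 1))   # dy first, then dx
I_xy = sp.integrate(f, (x, 0, 1), (y, 0, 1))   # dx first, then dy
print(I_yx, I_xy)   # pi/4 and -pi/4: the orders disagree
```

Fubini's theorem doesn't apply because f is not absolutely integrable near the origin, so the interchange genuinely fails.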

i'm at a loss.

>In fact, there are cases where interchanging the order is not allowed when you're in that situation depending on the integrand.

yup, exactly: those are the cases where fubini's theorem doesn't hold, and therefore the cases for which the differentiation theorem doesn't hold either.

Leibniz Theorem: Let f(x, t) be a function such that both f(x, t) and its partial derivative f_x(x, t) are continuous in t and x in some region of the (x, t)-plane, including a(x) ≤ t ≤ b(x), x0 ≤ x ≤ x1. Also suppose that the functions a(x) and b(x) are both continuous and both have continuous derivatives for x0 ≤ x ≤ x1...

Fubini's Theorem: If f(x,y) is a *continuous function* on a rectangle R=[a,b]×[c,d]...

Under certain technical conditions, differentiation under the integral sign also works for Riemann integration (see Marsden & Hoffman). There's no need to develop Lebesgue theory to demonstrate this technique in a calculus course, but uniform convergence must be understood.

Alternatively you can study physics and then you don't need to worry about these tiny details.

I took a course on "mathematical methods in physics" which covered some complex analysis, and my math friends were shocked at how non-rigorously we went through the theorems. Luckily for physicists, these techniques tend to be valid because functions from the real world are well-behaved. For me personally it was great fun to have a course where we did advanced mathematics for "practical" problems.

As someone who loved both math and physics, this was why I always found math a bit easier. Everything rests on a solid foundation and you can justify each step. When I got into higher physics, it was so riddled with intuitive arguments as opposed to rigor that I didn't fare so well. I'm sure one can find mathematical justifications for their methods, but it's not part of the curriculum, and almost none of the professors (in a top 10 physics school) knew them either.

Apart from the occasional Einstein and Newton every couple of centuries, physics seems to advance by throwing a semi-random selection of PhD dissertations at the real world and seeing if any of them happen to match experiment.

Case in point: Newton's work was not that rigorous. It was not till the 1800's that calculus was put on a firm foundation. Of course, things were all different back then.

It's all fun and games until you're trying to calculate trajectories over a Cantor set.

Or integrate over an interval of surreals.

The best place to learn about a lot of topics in math, including the necessary background for understanding the Feynman trick and related maneuvers, is Terence Tao's website:


Measure theory in particular:


I think part of the reason Feynman got as far as he did--apart from his unusual innate talents--was his skepticism of formality.

Measure theory will tell you exactly when this result is true, but it is possible to grok the result with only a basic understanding of differentiation and integration. Feynman called this "the Babylonian approach" to mathematics.

    F(t) = integral(a,b) f(t, x) dx
         ~ sum(i) f(t, xi) * Dx

    F'(t) ~ (F(t+Dt) - F(t)) / Dt
          = integral(a,b) (f(t+Dt, x) - f(t, x))/Dt dx
          ~ sum(i) (f(t+Dt, xi) - f(t, xi))/Dt * Dx
          ~ sum(i) f'(t, xi) * Dx
          ~ integral(a,b) f'(t, x) dx

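The Babylonian argument also checks out numerically; a quick sketch in plain Python (the integrand e^(-tx) on [0,1] is my own choice):

```python
import math

def riemann(g, a=0.0, b=1.0, n=100_000):
    # midpoint Riemann sum: the "sum(i) ... Dx" step
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

def F(t):                       # F(t) = ∫_0^1 e^(-t x) dx
    return riemann(lambda x: math.exp(-t * x))

t, dt = 2.0, 1e-5
lhs = (F(t + dt) - F(t - dt)) / (2 * dt)        # difference quotient of F
rhs = riemann(lambda x: -x * math.exp(-t * x))  # integral of the t-derivative
print(lhs, rhs)   # both ≈ -0.1485
```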
Feynman was brilliant but I don't think his skepticism of formality sets him apart from other good mathematicians or even a typical good mathematics student.


Terence Tao's measure theory text is one of the best undergraduate math textbooks I've ever read. It teaches you not just about the subject, but about how to approach it the way a mathematician does.

That is exactly why I recommended it along with his entire website. But thanks for adding the comment.

Suggested edit:

"... the way a world-class mathematician does."

(because it may have a lot to do with why his book is so much better than other texts on this topic like Rudin, Royden, and Cohn.)

Very neat trick! My first reaction was that this must be related to the Laplace transform, and looking through the Wikipedia article it seems that this is basically the same trick: https://en.wikipedia.org/wiki/Laplace_transform#Evaluating_i...:

    \int_0^\infty f(x) g(x) \, dx = \int_0^\infty L[f](t) \, L^{-1}[g](t) \, dt
In their sine wave example we have f(x)=sin(x) and g(x)=1/x, and luckily for us the inverse Laplace of 1/x is just 1 (at least from 0 to \infty). I've learnt that finding the inverse Laplace is practically impossible (no good algorithm), but the regular Laplace can often be found by "just" integrating. So I'm guessing this technique is mostly useful when we have a term with 1/x^n since the inverse is trivial.
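That sine wave example, worked via the Laplace identity in SymPy (assuming it's available):

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
# ∫_0^∞ sin(x)/x dx = ∫_0^∞ L[sin](s) * L⁻¹[1/x](s) ds, and L⁻¹[1/s] = 1
Lf = sp.laplace_transform(sp.sin(x), x, s, noconds=True)  # 1/(s**2 + 1)
val = sp.integrate(Lf, (s, 0, sp.oo))
print(val)   # pi/2
```

So the hard oscillatory integral becomes the easy arctangent integral on the transform side.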

I had no idea Laplace transforms could do this so this was a nice discovery!

That's indeed a neat property!

Re: "no good algorithm for inverse Laplace" -- there are certainly reasonably good numerical algorithms based on evaluating contour integrals (see, e.g., https://en.wikipedia.org/wiki/Inverse_Laplace_transform). The inversion formulas are not usually taught in undergrad courses anymore (at least not in the US) because complex function theory has been largely taken out of undergrad engineering curricula, and even for math majors is very much optional.

This trick can also be used with the Gaussian integral to compute similar integrals of a Gaussian multiplied with a power of the integration variable [1], e.g. the integral of x*exp(-x^2).

[1] https://en.wikipedia.org/wiki/Gaussian_integral#Integrals_of...

Yeah, I used this example in classes I taught. Very awesome application of this technique.

Of course the indefinite integral of x*exp(-x^2) is elementary.

I think parent meant to say _even_ powers like x^(2n) exp(-x^2), which can be done by differentiating exp(-Bx^2) and evaluating at B = 1.
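That B-differentiation, sketched in SymPy for the x² case (assuming SymPy is available):

```python
import sympy as sp

x, B = sp.symbols('x B', positive=True)
# start from the Gaussian integral ∫_{-∞}^{∞} e^(-B x²) dx = sqrt(pi/B) ...
base = sp.sqrt(sp.pi / B)
# ... then each d/dB under the integral sign brings down a factor of -x²
via_trick = -sp.diff(base, B)      # should equal ∫ x² e^(-B x²) dx
direct = sp.integrate(x**2 * sp.exp(-B * x**2), (x, -sp.oo, sp.oo))
print(sp.simplify(via_trick - direct))   # 0
```

Setting B = 1 at the end recovers the integral of x² e^(-x²); repeating the derivative handles x^(2n).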

I remember seeing someone use this trick to integrate an expression that integrals.wolfram.com couldn't.

Does anyone know if there is any symbolic math package that implements this 'trick'?

Is there an error on the second page where it says:

"This clearly converges for all t>=0, and our aim is to evaluate G(0)."

It clearly converges for all t>0, and it would be reasonable to do limit analysis, but I don't quite see how we could say it "clearly converges" for >=.

Break the integral from 0 to infinity into a sum over the intervals demarcated by zeros of sin(x)/x. Since the integrand is always positive or always negative on such an interval, and the area under the curve is decreasing in absolute value, you can use the alternating series test. (This is a bit clearer if you graph it with e.g. Wolfram Alpha.)

Note that it does not converge absolutely, however.
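The alternating-chunk picture is easy to see numerically (my own sketch, midpoint rule on each half-period):

```python
import math

def chunk(k, n=2000):
    # midpoint-rule estimate of ∫ sin(x)/x dx over [k*pi, (k+1)*pi]
    a = k * math.pi
    dx = math.pi / n
    return sum(math.sin(a + (i + 0.5) * dx) / (a + (i + 0.5) * dx)
               for i in range(n)) * dx

chunks = [chunk(k) for k in range(50)]
print(chunks[:4])                  # signs alternate, magnitudes shrink
print(sum(chunks), math.pi / 2)    # partial sums approach pi/2
```

Summing |chunk(k)| instead diverges like the harmonic series, which is the failure of absolute convergence mentioned above.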

“The story is told of G. H. Hardy (and of other people) that during a lecture he said ‘It is obvious… Is it obvious?’ left the room, and returned fifteen minutes later, saying ‘Yes, it's obvious.’ I was present once when Rogosinski asked Hardy whether the story were true. Hardy would admit only that he might have said ‘It is obvious… Is it obvious?’ (brief pause) ‘Yes, it's obvious.’”

The excellent book on advanced calculus by Edwin Bidwell Wilson also discusses this method.

While I'm not sure I've seen it in a mathematics textbook, they use this trick all the time in physics. And I'm pretty sure I've had some engineering professors show it in class as well.

The second equation of Section 1 says one can "easily" see how

    ∫0->∞ of x*e^-tx dx
equals -1/t², but I'm just not seeing it. Can somebody help me out here?

You differentiate both sides of the equation given above with respect to t. Differentiating 1/t yields -1/t^2. On the LHS you apply the differentiation inside the integral.

∫0->∞ x*e^-tx dx = -∂_t ∫0->∞ e^-tx dx = -∂_t (1/t)∫0->∞ e^-u du = -∂_t (1/t) = 1/t^2
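That chain can be checked directly in SymPy (assuming it's available):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
# left side: the integral itself; right side: differentiate ∫ e^(-tx) dx = 1/t
lhs = sp.integrate(x * sp.exp(-t * x), (x, 0, sp.oo))
rhs = -sp.diff(sp.integrate(sp.exp(-t * x), (x, 0, sp.oo)), t)
print(lhs, rhs)   # both t**(-2)
```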

Kevin Conrad has a more in-depth paper on this at roughly the same maturity level, also motivated by Feynman's story: https://kconrad.math.uconn.edu/blurbs/analysis/diffunderint....

See also the Feynman's Integral Trick thread from 8 days ago:


Can someone explain to me how Int[0,∞](t^{n+1} x^n e^{-tx} dx) = Int[0,∞](x^n e^{-x} dx)? That is, the last two equations of section 1.

Let t -> 1 which you can do because this entire trick is about interchanging limits (since both an integral and a derivative are limits)

Ah, thanks! So it's just "has to hold for all values of t > 0, including for t = 1"
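The t-independence can also be confirmed symbolically (a SymPy sketch, assuming positivity of all symbols):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
n = sp.Symbol('n', positive=True, integer=True)
# substituting u = t*x makes the t's cancel, so the value is the same for all t > 0
val = sp.integrate(t**(n + 1) * x**n * sp.exp(-t * x), (x, 0, sp.oo))
print(sp.simplify(val))   # gamma(n + 1), i.e. n! -- no t left, so t = 1 loses nothing
```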

So are there any good example where this trick gives you answers to otherwise very hard problems?

I don't know if this counts, but I solved [problem A5 on the 2005 Putnam exam](https://kskedlaya.org/putnam-archive/2005.pdf) using this method.

So what parameter did you introduce?

The academic oeuvre of Dr Feynman (Nobel laureate) himself, largely.

His academic work is not based on, nor essentially, a clever integral trick.

Multiple clever integral tricks...

