Seeing this sort of creative mathematical process in action makes me think that maybe  is right, and math is sometimes more art than science.
I used this in my thesis, comparing an analytical solution of a problem to a numerical one in order to determine some parameters of the numerical solution for idealized wavefunctions. My simulations needed non-idealized wavefunctions, and this mechanism let me optimize parameters for those and set approximate error bounds.
It (math) really is a science, but there is a strong aspect of artistry involved.
My master's was about modifying potential flow singularities (singularities to cancel other singularities... ahem, I was young) to model vortices shed from blunt surfaces - part of fast/cheap performance prediction for wave energy converters. It didn't work amazingly well physically, but I will never forget the fun I had that summer figuring out how to work with those singular integral equations. Working on a set of terms until, at long last, a form emerged that matched with R&G was such a breakthrough moment!
Is math invented or discovered?
So I think it is a continuum, and really fantastic mathematics will feature ideas from all the way along the spectrum: “discoveries” for the beauty, “inventions” for the problem solving, and the “in between” for the subtlety and art.
Even for physics, there are often many mathematical theories that can be used to model the same physical observations (talking about equivalent structures, not about competing theories). For example, many problems can be described equivalently using vectors, complex numbers, or linear algebra. There is a good chance that there are many (perhaps infinitely many) other systems that we haven't thought about that could be used equivalently.
So, while I agree that the structures of mathematics ultimately exist independent of our use of them, so that we are only discovering pre-existing structures, I would also say that new mathematical theories are developed by a process that is more like invention than discovery (i.e. you can't explore the space of mathematical theories to discover new ones, since it is infinite in every direction - you can only explore the properties of a structure you essentially invent for yourself).
Our mathematical system is an invented human language. We know that all symbolic systems of sufficient complexity are computationally equivalent (see Turing machines). Finding an arbitrary one to be useful and flexible is not evidence of magic.
(Yours is a "turtles all the way down" argument, I think.)
The reason they are not usually covered in calculus is, to justify such differentiation, one needs the notions of Lebesgue integral and measure. The Riemann integral from calculus courses is just not robust enough. Of course, if the function inside the integral is nice enough, nothing bad happens, and the differentiation is valid.
definitely not the case. leibniz's rule
only requires fubini's theorem for exchanging order of integration
which i'm pretty sure everyone learns in multivariable calc.
i personally learned it from apostol's calc (not analysis) books.
>In fact, there are cases where interchanging the order is not allowed when you're in that situation depending on the integrand.
yup, exactly: those are the cases where fubini's theorem doesn't hold, and therefore the cases where the leibniz rule doesn't hold either.
Leibniz Theorem: Let f(x, t) be a function such that both f(x, t) and its partial derivative f_x(x, t) are continuous in t and x in some region of the (x, t)-plane, including a(x) ≤ t ≤ b(x), x0 ≤ x ≤ x1. Also suppose that the functions a(x) and b(x) are both continuous and both have continuous derivatives for x0 ≤ x ≤ x1...
Fubini's Theorem: If f(x,y) is a *continuous function* on a rectangle R=[a,b]×[c,d]...
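To make the failure mode concrete (my example, not from the thread): the classic integrand (x² − y²)/(x² + y²)² on the unit square is not absolutely integrable, and its two iterated integrals disagree, so Fubini's conclusion fails. A SymPy sketch, assuming SymPy is available:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = (x**2 - y**2) / (x**2 + y**2)**2

# Integrate over the unit square in the two possible orders.
dy_first = sp.integrate(sp.integrate(f, (y, 0, 1)), (x, 0, 1))
dx_first = sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 1))

# The two orders disagree (pi/4 vs -pi/4), so Fubini's hypothesis
# of absolute integrability must fail here.
print(dy_first, dx_first)
```

Since the orders give different answers, any argument that silently swaps them (as the proof of the Leibniz rule does) breaks down for integrands like this.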
I took a course on "mathematical methods in physics" which covered some complex analysis, and my math friends were shocked at how non-rigorously we went through the theorems. Luckily for physicists, these techniques tend to be valid because functions from the real world are well-behaved. Personally, it was great fun to have a course where we did advanced mathematics for "practical" problems.
Measure theory in particular:
Measure theory will tell you exactly when this result is true, but it is possible to grok the result with only a basic understanding of differentiation and integration. Feynman called this "the Babylonian approach" to mathematics.
F(t) = integral(a,b) f(t, x) dx
     ~ sum(i) f(t, x_i) * Dx

F'(t) ~ (F(t+Dt) - F(t)) / Dt
      = integral(a,b) [f(t+Dt, x) - f(t, x)] / Dt dx
      ~ sum(i) [f(t+Dt, x_i) - f(t, x_i)] / Dt * Dx
      ~ sum(i) f_t(t, x_i) * Dx
      ~ integral(a,b) f_t(t, x) dx
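The sketch above can also be checked numerically. A minimal Python check, with f(t, x) = e^(t·x) on [0, 1] chosen purely for illustration (then F'(1) = integral(0,1) x·e^x dx = 1):

```python
import math

def riemann(g, a, b, n=20000):
    """Left Riemann sum: the 'sum(i) g(x_i) * Dx' step in the sketch."""
    dx = (b - a) / n
    return sum(g(a + i * dx) for i in range(n)) * dx

def F(t):
    # F(t) = integral(0,1) e^(t x) dx, approximated by a Riemann sum
    return riemann(lambda x: math.exp(t * x), 0.0, 1.0)

t, dt = 1.0, 1e-6
finite_diff = (F(t + dt) - F(t)) / dt                          # (F(t+Dt) - F(t)) / Dt
under_sign = riemann(lambda x: x * math.exp(t * x), 0.0, 1.0)  # integral of f_t(t, x)

# Both approximate F'(1) = 1, and they agree with each other even more closely.
print(finite_diff, under_sign)
```

The two numbers agree to several digits, which is exactly the "Babylonian" content of the derivation: for a well-behaved integrand, the finite-difference quotient of the sum and the sum of the difference quotients are the same thing.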
"... the way a world-class mathematician does."
(because it may have a lot to do with why his book is so much better than other texts on this topic like Rudin, Royden, and Cohn.)
\int_0^\infty f(x)\, g(x)\, dx = \int_0^\infty \mathcal{L}[f](t)\, \mathcal{L}^{-1}[g](t)\, dt
I had no idea Laplace transforms could do this so this was a nice discovery!
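For anyone who wants to see it in action, here is a SymPy sketch of the identity on the classic integral of sin(x)/x, taking f = sin(x) and g = 1/x (my choice of example; it assumes the identity over [0, ∞) as written above):

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
f = sp.sin(x)
g = 1 / x

# Left-hand side: integral(0, oo) sin(x)/x dx
lhs = sp.integrate(f * g, (x, 0, sp.oo))

# Right-hand side: integral(0, oo) L[f](s) * L^-1[g](s) ds.
# L[sin](s) = 1/(s^2 + 1); the inverse transform of 1/x is the
# constant 1 on s > 0 (Heaviside, which simplifies since s > 0).
Lf = sp.laplace_transform(f, x, s, noconds=True)
Linv_g = sp.inverse_laplace_transform(g, x, s)
rhs = sp.integrate(Lf * Linv_g, (s, 0, sp.oo))

print(lhs, rhs)
```

The oscillatory, slowly decaying integrand on the left becomes the trivially integrable 1/(s² + 1) on the right, which is the whole appeal of the trick.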
Re: "no good algorithm for inverse Laplace" -- there are certainly reasonably good numerical algorithms based on evaluating contour integrals (see, e.g., https://en.wikipedia.org/wiki/Inverse_Laplace_transform). The inversion formulas are not usually taught in undergrad courses anymore (at least not in the US) because complex function theory has been largely taken out of undergrad engineering curricula, and even for math majors is very much optional.
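For the curious, one such contour-based scheme, the fixed-Talbot method of Abate and Valkó, fits in a few lines of Python. This is a sketch in double precision; the contour constants follow the published recipe, and accuracy degrades for transforms whose originals are not smooth:

```python
import cmath
import math

def talbot(F, t, M=32):
    """Fixed-Talbot numerical inverse Laplace transform: f(t) from F = L[f].

    Approximates the Bromwich integral (1/(2*pi*i)) * int e^(s t) F(s) ds
    along Talbot's deformed contour s(theta) = r*theta*(cot(theta) + i).
    """
    r = 2.0 * M / (5.0 * t)                        # contour scale parameter
    acc = 0.5 * math.exp(r * t) * F(complex(r, 0.0)).real  # theta = 0 endpoint
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = r * theta * complex(cot, 1.0)          # point on the contour
        sigma = theta + (theta * cot - 1.0) * cot  # from ds/dtheta
        acc += (cmath.exp(s * t) * F(s) * complex(1.0, sigma)).real
    return (r / M) * acc

# Example: F(s) = 1/(s + 1) inverts to f(t) = e^(-t).
approx = talbot(lambda s: 1.0 / (s + 1.0), 1.0)
print(approx)
```

Larger M buys more digits in exact arithmetic, but in double precision the exp(r*t) factor eventually causes cancellation, so M around 20-40 is a typical sweet spot.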
Does anyone know if there is a symbolic math package that implements this 'trick'?
"This clearly converges for all t>=0, and our aim is to evaluate G(0)."
It clearly converges for all t > 0, and it would be reasonable to do a limit analysis, but I don't quite see how we could say it "clearly converges" for t ≥ 0.
Note that it does not converge absolutely, however.
∫_0^∞ x·e^(−tx) dx
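For what it's worth, SymPy evaluates that one in closed form directly, assuming t is strictly positive:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

# integral(0, oo) x e^(-t x) dx = 1/t^2 for t > 0
result = sp.integrate(x * sp.exp(-t * x), (x, 0, sp.oo))
print(result)  # 1/t**2
```

The 1/t² blows up as t → 0+, consistent with the observation that convergence at the t = 0 endpoint is a genuinely separate question.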