
Advances in number theory inspired by physics - digital55
https://www.quantamagazine.org/secret-link-uncovered-between-pure-math-and-physics-20171201/
======
ncmncm
He's not alone in applying physics math to number theory. One thing that makes
mathematicians leery of physics-inspired proofs is that physicists accept
mathematical transformations that produce correct experimental results but
that have not been proven axiomatically.

For example, here is a proof of the Riemann Hypothesis:

[http://aip.scitation.org/doi/10.1063/1.5012170](http://aip.scitation.org/doi/10.1063/1.5012170)

It uses operations on an analytic continuation of the zeta function. Are they
OK? I don't know. If they are, it's Fields Medal material.

But this is nothing new. Oliver Heaviside revolutionized differential equation
solving with the Heaviside Operator, which gives correct answers to
electromagnetic problems. Nowadays we call it the Laplace Transform, because
Laplace had earlier used an equivalent technique for other purposes.
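As a toy illustration of the operational method (my own example, not from the thread): the Laplace transform turns differentiation into multiplication by \(s\), so a linear ODE becomes algebra:

```latex
\mathcal{L}\{y'(t)\} = s\,Y(s) - y(0),
\quad\text{so for } y' + a\,y = 0,\; y(0) = y_0:

\bigl(s\,Y(s) - y_0\bigr) + a\,Y(s) = 0
\;\Longrightarrow\;
Y(s) = \frac{y_0}{s + a}
\;\Longrightarrow\;
y(t) = y_0\, e^{-a t}.
```

Heaviside manipulated the operator \(p = d/dt\) algebraically in exactly this way, long before the transform was rigorously justified.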

~~~
21
> One thing that makes mathematicians leery of physics-inspired proofs is that
> physicists accept mathematical transformations that produce correct
> experimental results but that have not been proven axiomatically.

I think it's way beyond that. Physicists are known to be fast and loose with
their maths.

For a period of time they were basically deleting infinities which appeared in
the equations and pretending everything was fine, because after this operation
the results agreed with experiments.

To quote Dirac: I must say that I am very dissatisfied with the situation,
because this so-called "good theory" does involve neglecting infinities which
appear in its equations, neglecting them in an arbitrary way. This is just not
sensible mathematics. Sensible mathematics involves neglecting a quantity when
it turns out to be small—not neglecting it just because it is infinitely great
and you do not want it!

Only later did mathematicians put this technique on proper footing:
renormalization.
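A toy numeric sketch (my own, and a smooth-cutoff regularization rather than QFT renormalization proper) of what "subtracting the infinity" looks like: with a cutoff, the divergent sum 1 + 2 + 3 + ... behaves like 1/eps^2 - 1/12, and discarding the divergent piece leaves the famous finite remainder:

```python
import math

def regulated_sum(eps, terms=200_000):
    """Smoothly cut-off version of 1 + 2 + 3 + ...: sum of n * exp(-eps * n)."""
    return sum(n * math.exp(-eps * n) for n in range(1, terms + 1))

eps = 0.01
total = regulated_sum(eps)         # blows up like 1/eps**2 as eps -> 0
finite_part = total - 1 / eps**2   # "throw away the infinity"
print(finite_part)                 # close to -1/12 ≈ -0.0833
```

The point of the demo: the divergence is entirely in the cutoff-dependent 1/eps^2 term, and what survives its removal is finite and cutoff-independent, which is roughly why the physicists' procedure kept giving consistent answers.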

~~~
ncmncm
In other words, physicists were entirely correct to toss the infinite values,
and mathematicians were later persuaded it was mathematically sound.

Nowadays, although physicists say they are "renormalizing", they still just
toss the infinite values. They're justified by results of experiments.
Mathematicians are limited because, typically, they have no handy universe to
run experiments in, never mind the budget for a supercollider.

~~~
wfo
Or to phrase this differently, physicists do not really understand what they
are doing. They do some operation because it sometimes happens to produce
results that match some physical experiments that they are trying to model.
Their derivation rules are due to habit (in the sense of Hume), not due to
inference or logic.

Almost certainly, a rigorous formulation will come about later and explain
when these derivations should actually hold (rather than just assuming they
hold "whenever necessary").

This is actually how a lot of mathematics works: researchers notice some
relationship empirically, which gives some intuition and suggests some
hypothesis, and then the serious mathematical work is trying to determine
exactly what assumptions are necessary and proving it.

~~~
andrepd
> Or to phrase this differently, physicists do not really understand what they
> are doing.

I think this is very unfair. I would remind you who it was that discovered the
renormalization group: Wilson, a physicist.

------
mathgenius
> “What I started out trying to find” was a least-action principle for the
> mathematical setting, he wrote in an email. “I still don’t quite have it.
> But I am pretty confident it’s there.”

CS people should think of Bellman's equation, Dijkstra's algorithm, etc.
Minhyong Kim is looking for the dynamic programming solution to "thickets of
paths emanating from rational points".
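Dijkstra's algorithm is exactly a greedy exploitation of Bellman's optimality principle on shortest paths. A minimal sketch (the graph and names are my own illustration, not anything from the article):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source, settling the cheapest frontier
    node first (Bellman's principle: optimal paths have optimal prefixes)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, already settled cheaper
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny weighted graph: adjacency lists of (neighbor, cost) pairs.
graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```

The "action" analogy: each relaxation step keeps only the minimum over incoming paths, which is the discrete least-action principle Kim seems to be after in the continuous setting.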

------
CurtMonash
Come to think of it -- calculus wasn't really rigorous until a LONG time after
Newton introduced it. Isn't Riemann sometimes credited as the guy who finally
straightened it out?

~~~
pitaj
Also, IIRC, many very important things like vector calculus, Fourier and
Laplace transforms, and other differential-equation machinery were added far later.

~~~
qubex
Stuff is still being added nowadays: non-integer derivatives and integrals
(“_fractional calculus_”), alternative formulations that rely not on the limit
of an addition but on the limit of a multiplier (“_L-calculus_”), stochastic
integration (Itō integrals), automatic differentiation (vital to estimating
weights for neural networks in machine learning), derivatives of integers
(???!!!!), and other things that may or may not prove important going forward
are all branches that have been developed since the 1960s.

~~~
somezero
What's "L-calculus"? And what does "alternative formulations that rely not on
the limit of the addition but on the limit of a multiplier" mean? Any
resources?

~~~
qubex
A derivative is basically how a small addition to the input value changes the
output value; this change in output value is also studied in terms of being
something “additional” (though it could also be a subtraction).

In L-calculus this approach is altered: how does multiplying the input value
by a tiny value greater than one change the amount by which the output value
gets multiplied?

Here’s the relevant Wikipedia page to get you started down the rabbit-hole:
[https://en.wikipedia.org/wiki/Multiplicative_calculus?wprov=...](https://en.wikipedia.org/wiki/Multiplicative_calculus?wprov=sfti1)
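The definition above is easy to check numerically. A small sketch (my own, following the multiplicative-derivative definition f*(x) = lim of (f(x+h)/f(x))^(1/h), which works out to exp(f'(x)/f(x))):

```python
import math

def mult_derivative(f, x, h=1e-6):
    """Numerical multiplicative derivative: how much f gets *multiplied*
    per unit multiplicative nudge of x, i.e. (f(x+h)/f(x)) ** (1/h)."""
    return (f(x + h) / f(x)) ** (1.0 / h)

# For f(x) = exp(x**2), the multiplicative derivative is exp(2x);
# at x = 1 that is e**2 ≈ 7.389.
approx = mult_derivative(lambda x: math.exp(x * x), 1.0)
print(approx)
```

For functions that grow multiplicatively (exponentials, compound interest), this derivative is a constant where the ordinary derivative is not, which is the main selling point of the formulation.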

------
crb002
Curious when Topology will become a core math subject.

------
empath75
tl;dr;

Basically he’s trying to find something analogous to ‘action’ in physics,
which, when minimized over the space of possible solutions, will let him find
rational solutions to Diophantine equations.

