
Feynman on Fermat's Last Theorem (2016) - balele
http://www.lbatalha.com/blog/feynman-on-fermats-last-theorem
======
ColinWright
Previously discussed:

[https://news.ycombinator.com/item?id=16041560](https://news.ycombinator.com/item?id=16041560)

[https://news.ycombinator.com/item?id=14940636](https://news.ycombinator.com/item?id=14940636)

[https://news.ycombinator.com/item?id=14355834](https://news.ycombinator.com/item?id=14355834)

[https://news.ycombinator.com/item?id=12018221](https://news.ycombinator.com/item?id=12018221)

... and previously submitted without discussion:

[https://news.ycombinator.com/item?id=17581023](https://news.ycombinator.com/item?id=17581023)

[https://news.ycombinator.com/item?id=15904199](https://news.ycombinator.com/item?id=15904199)

~~~
SilasX
Is the "past" link being deprecated or something?

------
x3tm
> Feynman concluded: “for my money Fermat’s theorem is true”.

> "the main job of theoretical physics is to prove yourself wrong as soon as
> possible."

Great example of the main difference between mathematicians and theoretical
physicists.

This reminds me of another magician, Enrico Fermi, who was also an extremely
good mathematician but didn't pursue rigor or precision for the sake of it:
20% was good enough precision for him for most cases.

~~~
JadeNB
> > Feynman concluded: “for my money Fermat’s theorem is true”.

> > "the main job of theoretical physics is to prove yourself wrong as soon as
> possible."

> Great example of the main difference between mathematicians and theoretical
> physicists.

Actually, I'm not sure I agree: even before Wiles's proof, almost every
mathematician would have been willing to wager, at least conversationally, on
the truth of FLT; and mathematicians also are in the business of proving
themselves wrong as soon as possible. The only catch is that we don't count an
inability to prove yourself wrong as a proof that you're right ….

~~~
x3tm
The difference lies in the fact that absolute rigor to assess truths is not as
fundamental in theoretical physics as it is in mathematics. Uncertainty is
accepted. Physics puts a premium on empirical results and intuition over the
more formal treatments common in mathematics (many important results/tools are
not mathematically well-defined e.g. Feynman path-integral in d > 1).

~~~
JadeNB
Agreed! I didn't mean to claim that there isn't a difference, for there is a
wide one; only that the two particular quotes chosen seemed (unlike most other
things Feynman said!) not to illustrate it.

~~~
x3tm
Fair enough. Agreed that the second quote doesn’t illustrate my point, unlike
the first one. Cheers!

------
jordigh
This proof (or "plausibility argument") bugs me so much. Just because
something thins out and becomes rare doesn't mean it doesn't exist.

As n gets bigger, the probability of n being a perfect square gets smaller and
smaller. In the limit, the probability is zero.

Does this mean square numbers don't exist?

~~~
fanzhang
By Feynman's argument, you can prove that square numbers almost certainly keep
on existing.

Roughly, it goes like this:

1) the probability of N being a perfect square is proportional to 1/sqrt(N).

2) For any N_0 arbitrarily high, if you integrate from N_0 to infinity the
expression (1/sqrt(N) dN), you get infinity.

3) The expression in 2) is the "Feynman equivalent" of the expected number of
square numbers above N_0.

So Feynman's non-proof actually turns out to give the right answer here too,
despite still not being a proof in this case either.
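This is easy to check numerically: the density of perfect squares near N is
about 1/(2*sqrt(N)), so integrating that density over a finite interval
predicts the count of squares in it. A rough sketch (the interval endpoints
are arbitrary choices of mine):

```python
import math

def square_count(lo, hi):
    # Exact number of perfect squares in [lo, hi].
    return math.isqrt(hi) - math.isqrt(lo - 1)

def heuristic_count(lo, hi):
    # Feynman-style estimate: the integral of the density 1/(2*sqrt(N))
    # over [lo, hi], which evaluates to sqrt(hi) - sqrt(lo).
    return math.sqrt(hi) - math.sqrt(lo)

lo, hi = 10**6, 10**8
exact = square_count(lo, hi)        # 9001
estimate = heuristic_count(lo, hi)  # 9000.0
```

The estimate is off by O(1), and since the same integral taken from any N_0 to
infinity diverges, the expected number of squares above any bound is infinite.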

~~~
jordigh
Okay, let's pick something rarer. Rational numbers.

If you integrate the characteristic function of the rational numbers over any
interval, you get zero because rational numbers are very rare.

So they don't exist either?

To be less glib, I don't see Feynman's argument as bringing anything new. We
already knew that counterexamples, if they existed, would be very rare,
because we had looked for them with computers and couldn't find any. But
rarity still doesn't prove anything.

Many of us were fooled by Skewes's number:

[https://en.wikipedia.org/wiki/Skewes%27s_number](https://en.wikipedia.org/wiki/Skewes%27s_number)

There's no way to conclude that this exists via brute calculation. It's just
inconceivably large and would have eluded any of Feynman's methods.

~~~
sweezyjeezy
Your rational numbers argument does indeed fail, but for a different reason.
It's not valid to go from a sum over the rationals to an integral over the
reals in that way; however, it _is_ perfectly valid to go from a sum over the
integers to an integral over the reals (in certain situations), if you only
want an estimate (see e.g. the Euler–Maclaurin formula
[https://en.wikipedia.org/wiki/Euler–Maclaurin_formula](https://en.wikipedia.org/wiki/Euler–Maclaurin_formula)
). This was fine in Feynman's argument.
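To see how tight the sum-to-integral step is for this particular density,
here's a quick numerical comparison (the endpoints are arbitrary choices of
mine):

```python
import math

N0, N1 = 100, 100_000
# Sum of 1/sqrt(n) over the integers in [N0, N1]...
s = sum(1 / math.sqrt(n) for n in range(N0, N1 + 1))
# ...versus the corresponding integral, whose antiderivative is 2*sqrt(N).
integral = 2 * (math.sqrt(N1) - math.sqrt(N0))
# Euler-Maclaurin says the error is about (f(N0) + f(N1)) / 2, tiny here.
```

The two agree to within a small constant, even though the sum has about a
hundred thousand terms.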

------
syn0byte
Is anyone still trying to come up with Fermat's original "truly marvelous
proof"? Or have math folk talked themselves out of its possible existence?

~~~
OneWordSoln
Well, I'm not 'math folk' (grey-beard programmer), but I love the Horizon
documentary on Andrew Wiles and his proof, and I'd love to hear from people
who know better than I do why my intuitive understanding is inapplicable.
(Note that this will not in any way be a proof, just the train of thought I
believe Fermat may have used to construct his proper mathematical proof.)

My idea here is based upon physical/visual intuition: starting with why it
works for n=2 (squares), then why it cannot work for n=3 (cubes), and then
why n>3 is necessarily more complex than n=3 and thus cannot work either.

[Note that I will use lowercase letters for the sides/roots and uppercase
letters to denote the areas or volumes. Thus, the full equation is Z = Y + X,
with X = x^n, resulting in z^n = y^n + x^n. I also use (for n=2) dy = z - y
and Dy = 2(y(dy)) + dy^2, and dx = z - x and Dx = 2(x(dx)) + dx^2. I'm sorry
my dx and dy conflict with calculus notation, but my dx means "the difference
between z and x", which is the same as "the length that must be added to x to
equal z", and Dx is "the total amount that must be added to X to get Z".
Therefore (for n=2), Dx = Y = 2(x(dx)) + dx^2, and Dy = X = 2(y(dy)) +
dy^2.]

For n=2, Z = Y + X works because X (which can be visualized as a square) can
be "smushed" evenly over two sides and the joining corner of the other square
Y, such that Z = Y + Dy = Y + 2(y(dy)) + dy^2. The term "2(y(dy))" is the
amount that must be added along the two sides, and the term "dy^2" is the
amount that must be added at the corner to complete the perfect square Z.

So, for example, 5^2 = 4^2 + 3^2 because both 3^2 = 9 = 2(4(1)) + 1^2 = 2(4) +
1 = 8 + 1, and 4^2 = 16 = 2(3(2)) + 2^2 = 2(6) + 4 = 12 + 4.
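The "smushing" identity is just the expansion of (y + dy)^2, and it holds for
any Pythagorean triple; here's a quick check in Python (the triples are
arbitrary examples of mine):

```python
# For z^2 = y^2 + x^2 with dy = z - y and dx = z - x, the "smushed" square
# satisfies x^2 == 2*y*dy + dy^2 (two side strips plus the corner),
# and symmetrically y^2 == 2*x*dx + dx^2.
for x, y, z in [(3, 4, 5), (5, 12, 13), (8, 15, 17), (20, 21, 29)]:
    dy, dx = z - y, z - x
    assert x * x == 2 * y * dy + dy * dy  # Dy = X
    assert y * y == 2 * x * dx + dx * dx  # Dx = Y
```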

Now, for n=3, we must visualize the situation where the cube X is smushed over
the cube Y's three faces and its joining corner. (Now X=x^3 and Y=y^3.)

The equations for dx and dy are the same, but Dx and Dy have expanded by a
dimension: Dx = Y = 3(x^2)(dx) + 3(x)(dx^2) + dx^3, and likewise Dy = X =
3(y^2)(dy) + 3(y)(dy^2) + dy^3.

The term "3(x^2)(dx)" is the amount that must be added to three faces of the
cube X, the term "3(x)(dx^2)" is the amount that must be added along the
three edges joining those three faces, and the term "dx^3" is the amount that
must be added at the corner.

Now, I haven't the maths to prove why Dx and Dy for n=3 won't have integer
solutions, but my intuition says it has something to do with the fact that
it's three dimensions and, therefore, a couple of odd numbers multiplying
around in there (the first two terms), and the fact that there are only two
cubes being smushed together to try to reconstitute another perfect cube. I
also imagine Fermat could actually mathematically prove why it's impossible.
Perhaps it can be shown that Dx and Dy cannot both have Diophantine
solutions. These are just guesses.
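For what it's worth, the absence of small counterexamples for n=3 is easy to
spot-check by brute force. This is only a search over a small range I picked,
not a proof (the actual proof for n=3 is usually attributed to Euler):

```python
# Tiny brute-force spot check: no x^3 + y^3 == z^3 with 1 <= x <= y <= 200.
LIMIT = 200
cubes = {z ** 3 for z in range(1, 400)}  # covers every possible x^3 + y^3 below
hits = [(x, y) for x in range(1, LIMIT + 1)
        for y in range(x, LIMIT + 1)
        if x ** 3 + y ** 3 in cubes]
assert hits == []  # no counterexamples in this range
```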

As for n>3, the terms (and physical/visual representations) will only become
more complex and there will be still only two terms with which to reconstitute
the hypercube.

Anyway, that's my intuition about the entire problem, and I have to imagine
that a proof Fermat could easily intuit, yet which is (a bit?) too large to
fit in the margin, must surely tread down a simple path, perhaps even one
that relies on a physical/visual interpretation of what the equations can be
likened to.

I look forward to this being eviscerated or flatly rejected, if appropriate,
or at least corrected for inconsistencies. If it serves anyone in their
exploration of this insidiously complex yet apparently simple-seeming problem,
my joy would only grow. And if my name were someday to appear in a
mathematical paper that a real mathematician produces as a result of this,
well, that would be out of this world for this poverty-stricken math wannabe.

[Edited to fix my n=3 equations.]

~~~
gowld
Your intuition is similar in spirit to Feynman's. For n>2, plausible
candidates are sparse, whereas for n<=2 there are many plausible candidates.

------
ox_n
_DON'T DOWN VOTE JUST BECAUSE YOU CAN'T DO MATH_

The proof Fermat hinted at was about differences of squares. Any whole number
raised to a power greater than two (e.g. n^3) can be represented as the
difference of two whole squares (x^2 - y^2). These differences can then be
written as sums of consecutive odd numbers:

    
    
    2^3 = 3^2 - 1^2   = (1+3+5) - (1) = 8
    3^3 = 6^2 - 3^2   = (1+3+5+7+9+11) - (1+3+5) = 27
    4^3 = 10^2 - 6^2  = (1+3+5+7+9+11+13+15+17+19) - (1+3+5+7+9+11) = 64
    5^3 = 15^2 - 10^2 = (21+23+25+27+29) = 125

When you examine the odd number series that results from each base, you'll
discover that there will always be a gap if you try to combine two odd number
series together, which explains Fermat's little joke about margins. The same
trick works for higher powers.
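The runs in the table above can be generated directly. Two closed forms (my
own gloss, not stated in the comment): the run for base k has exactly k terms
starting at k^2 - k + 1, and the square pair is the triangular numbers T_k
and T_{k-1}:

```python
def odd_run(k):
    # The k consecutive odd numbers summing to k^3, starting at k^2 - k + 1.
    start = k * k - k + 1
    return [start + 2 * i for i in range(k)]

for k in range(2, 10):
    t_k, t_km1 = k * (k + 1) // 2, (k - 1) * k // 2  # triangular numbers
    # e.g. k=5: odd_run(5) == [21, 23, 25, 27, 29] and 15^2 - 10^2 == 125
    assert sum(odd_run(k)) == k ** 3 == t_k ** 2 - t_km1 ** 2
```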

It's not that hard people. Stop believing everything you're told about how
"hard" something is.

_HINT:_ The number of odd numbers in the series exactly matches the base
number being cubed.

~~~
yters
Sounds interesting, but not sure what you mean by:

> you'll discover that there will always be a gap if you try and combine two
> odd number series together

Can you elaborate?

~~~
ox_n
Consecutive base numbers will necessarily alternate between even and odd. So
even the closest base numbers still have a gap between their resulting odd
number series, which only increases as the distance between base numbers
increases.

~~~
andrepd
Still don't get it. What do you mean by "base numbers" and what do you mean by
"alternate between even and odd"?

~~~
gowld
'ox_n appears to be trying to prove (by a pigeonhole argument) that the
equation x^n + y^n = z^n is unsolvable for _some z_. That's much weaker than
proving that it is unsatisfiable at _every_ z.

