
Counterexample to Euler's conjecture on sums of like powers - johnaston
http://fermatslibrary.com/s/counterexample-to-eulers-conjecture-on-sums-of-like-powers
======
fdej
The submitted paper is well-written and appears to be correct, but it is far
too short.

No reference is made to previous attempts to study Euler's conjecture or
similar conjectures (there are surely a dozen other papers that ought to be
cited in the introduction).

What led the authors to consider a computer search as opposed to any other
approach? Does the method generalize?

It is indicated that the CDC 6600 was used. How was the programming done? How
long did the execution take? The instance is claimed to be "the smallest". In
what norm? How can you be sure that the computer program did not skip any
smaller examples?

The most serious problem with the paper, however, is that the only
contribution is a purely computational statement which, though a
counterexample to a conjecture, does not advance mathematical theory in any
way.

Due to these shortcomings, my recommendation to the editor is to REJECT the
submission.

~~~
kmiroslav
> though a counterexample to a conjecture, does not advance mathematical
> theory in any way.

[I realize it's sarcasm but in case it's not clear to everyone, I'll speak my
mind about it]

I think it's a pretty serious advancement. Confirming a conjecture is just as
important as invalidating it with a counterexample. It doesn't matter if that
counterexample fits on one line.

~~~
vostok
Maybe my mind is warped, but I actually agree with that. The conjecture isn't
the point. There are plenty of interesting counterexamples, but a direct
search does nothing to advance mathematics.

~~~
ipsin
It advances mathematics by saving the time of people who might otherwise try
to prove a false conjecture?

------
gunnihinn
This is great from so many angles. The counterexample is solid mathematics,
the paper is wonderfully to the point, and the use of computers to
successfully investigate a pure math problem was innovative.

I only wish they'd described the method they used to find the counterexample.

~~~
CamperBob2
As a first attempt, I guess you'd create a table of 5th powers of integers
through (say) 1000, then look for combinations of two, three, or four entries
from the same array that added up to one of the same numbers. Or more likely,
walk through the array and subtract all combinations of one, two, three, or
four elements from each, looking for zeroes. The most naive search through
such a table could take on the order of a trillion trials to find a
counterexample, which is pretty painful, but I'm sure there are plenty of ways
to optimize it. Especially if you worked from high to low, you could skip
enormous chunks of the search space whose sums would already fall out of
range.
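
In Python, that table-and-combinations approach might look something like this
(the bounds, the function name, and the `terms` parameter are mine for
illustration, not from the paper):

```python
from itertools import combinations_with_replacement

def search_fifth_powers(limit, terms):
    """Look for a_1^5 + ... + a_k^5 = e^5 with k = terms and all a_i <= limit."""
    powers = {n**5: n for n in range(1, limit + 1)}   # fifth power -> its root
    hits = []
    for combo in combinations_with_replacement(range(1, limit + 1), terms):
        total = sum(n**5 for n in combo)
        if total in powers:
            hits.append(combo + (powers[total],))     # summands plus the root e
    return hits
```

With `limit=144` and `terms=4` this would turn up the paper's quintuple, but
walking every four-element combination is slow in pure Python at that bound,
which is exactly why the pruning matters.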

The CDC 6600 was a pretty hardcore machine; it was Seymour Cray's first design
at his Chippewa Falls skunk works, according to Wikipedia. Would definitely be
interesting to know how long their solution took.

~~~
Someone
The naive way:

- compute S0 = {n^5 | 1 < n <= 1320} (3960 multiplications)

- compute S1 = {n^5 | 1 < n <= 1000} (zero multiplications; it's a subset of S0)

- compute S2 = {s+t | s,t ∈ S1} (500,500 additions)

- compute S4 = {s+t | s,t ∈ S2} (around 10^11 additions)

- compute the intersection of S0 with S4

You don't have to store S4, so 2 megabytes of memory or so should suffice.
10^11 operations likely was out of reach for the CDC 6600, though.

To speed up things, do the above for numbers up to 100 or so using modulo
arithmetic. That gives you candidate solutions modulo your word size. That
way, you don't have to compute large parts of S4.
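
A sketch of that pair-sum idea in Python; instead of materializing S4, it
checks e^5 - s for membership in S2 (this is my own variant of the scheme
above, not necessarily what the paper's program did):

```python
def fifth_power_solutions(limit):
    """Find a^5 + b^5 + c^5 + d^5 = e^5 with all terms <= limit."""
    # S2: every sum of two fifth powers, remembering one pair that produces it.
    # Keeping only one pair per sum could in principle miss solutions, but no
    # collisions among small two-fifth-power sums are known.
    pair = {}
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            pair.setdefault(a**5 + b**5, (a, b))
    solutions = set()
    for e in range(2, limit + 1):
        target = e**5
        for s, (a, b) in pair.items():
            t = target - s
            if t >= s and t in pair:      # t >= s avoids checking each split twice
                c, d = pair[t]
                solutions.add(tuple(sorted((a, b, c, d))) + (e,))
    return sorted(solutions)
```

With `limit=144` this finds (27, 84, 110, 133, 144) in a second or two: the
inner loop is about limit × |S2| ≈ 1.5 million dictionary lookups rather than
the ~10^11 additions of the full S4.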

~~~
gshubert17
I computed an array of fifth powers, up to 200, then had four nested loops.
Using python3 on a 2.66-GHz Core 2 Duo, it took 48 seconds to find the
reported numbers. If there had been no solutions, it would have taken only a
couple of seconds longer.
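
Presumably something along these lines (my reconstruction; gshubert17 doesn't
show the loops, and the early `break` is the cheap high-end pruning mentioned
elsewhere in the thread):

```python
def four_loop_search(limit):
    """Brute force a^5 + b^5 + c^5 + d^5 = e^5 with 1 <= a <= b <= c <= d <= e <= limit."""
    p = [n**5 for n in range(limit + 1)]               # p[n] == n^5
    is_fifth_power = {p[n]: n for n in range(1, limit + 1)}
    found = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            for c in range(b, limit + 1):
                for d in range(c, limit + 1):
                    s = p[a] + p[b] + p[c] + p[d]
                    if s > p[limit]:
                        break                          # sums only grow with d
                    if s in is_fifth_power:
                        found.append((a, b, c, d, is_fifth_power[s]))
    return found
```

At `limit=144` this already finds the reported quintuple, though it takes tens
of seconds in pure Python; the pair-sum scheme in the parent comment does far
less work.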

~~~
Someone
That's modern hardware, where recomputing _n^5 + m^5_ when needed may on
average be faster than doing the lookup with its risk of a cache miss.

That CDC 6600 dates from a time when register access wasn't much faster than
reading from memory, if it was faster at all
([https://en.m.wikipedia.org/wiki/CDC_6600#Description](https://en.m.wikipedia.org/wiki/CDC_6600#Description):
_For instance, a processor might take 15 cycles to multiply two numbers, while
each memory access took only one or two_), and integer multiplication took >1
cycle
([https://en.m.wikipedia.org/wiki/CDC_6600#Description](https://en.m.wikipedia.org/wiki/CDC_6600#Description):
_in addition to the clock being faster, the simple processor executed
instructions in fewer clock cycles; for instance, the CPU could complete a
multiplication in ten cycles._)

------
sillysaurus3
The recent counterexample to Fermat's Last Theorem was pretty interesting,
too.

434437^15 + 588129^15 = 588544^15

[https://www.google.com/search?q=588129%5E15+%2B+434437%5E15+...](https://www.google.com/search?q=588129%5E15+%2B+434437%5E15+%2B+-588544%5E15)

~~~
teddyh
No?

434437¹⁵ + 588129¹⁵ =
352114612798389918432959038931874707477944791745072116429171623348641601872653349744942

588544¹⁵ =
352114612798389919037509191076162751408804961099244071160111294752887772695040228327424

It’s apparently close enough to fool Google, but it’s not correct.
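
The gap is easy to see with Python's arbitrary-precision integers, which is
presumably how digits like the above were produced:

```python
lhs = 434437**15 + 588129**15
rhs = 588544**15
assert lhs != rhs           # Fermat's Last Theorem is safe
rel = abs(lhs - rhs) / rhs  # relative gap: around 1e-18, tiny enough to
                            # fool anything working in floating point
```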

~~~
andrewstuart2
~10^-18 difference as a fraction of the latter power [0], but still an
enormous number in absolute terms (on the order of 10^68) [1].

[0]
[https://www.wolframalpha.com/input/?i=588129%5E15+%2B+434437...](https://www.wolframalpha.com/input/?i=588129%5E15+%2B+434437%5E15+%2B+-588544%5E15)

[1]
[https://www.wolframalpha.com/input/?i=(+588129%5E15+%2B+4344...](https://www.wolframalpha.com/input/?i=\(+588129%5E15+%2B+434437%5E15+%2B+-588544%5E15+\)+%2F+588544%5E15)

------
GFK_of_xmaspast
[1966]

(Also a nice early-ish use of supercomputers to solve pure math problems).

------
matheist
Is it known what the smallest power is for which three summands are
insufficient? Is there a smallest power? I imagine that would be hugely
difficult, but for all I know it falls to the same methods as Fermat. (I'm no
expert.)

Edit: I guess the answer might be "5" depending on what the paper means by
"smallest" example. (smallest largest number vs smallest smallest number.)

~~~
octonion
95800^4 + 217519^4 + 414560^4 = 422481^4
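
And that one is exact, as a quick check with arbitrary-precision arithmetic
confirms:

```python
# Smallest known counterexample to Euler's conjecture for fourth powers
assert 95800**4 + 217519**4 + 414560**4 == 422481**4
```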

------
andrewstuart2
A fine example of the fact that brute force is often a powerful heuristic. I
believe I recall hearing from the MIT OCW Algorithms course that they often
rely on computer time doing exhaustive search (i.e. brute force) to "not
disprove" some algorithm they develop before sinking too much time into
developing a mathematical proof.

~~~
waterhouse
I think "exhaustive search" is a misnomer here, as one obviously cannot
exhaustively search the integers. (Unless we might say the machine gets
exhausted.)

~~~
andrewstuart2
I think the particular quote I'm remembering was in the context of graph
algorithms, where exhaustive search is more appropriate a term. :-)

~~~
reubenmorais
I believe this is the quote you're referring to, by Erik Demaine on 6.006:

"I use BFS a lot, myself, to check mathematical conjectures. [...] You think
something is true, you can imagine some graph of all possible inputs to that
theorem and you'd need to check them. Typical way of doing that is BFS through
that entire graph of states. Usually we're testing finite special cases of a
general conjecture, but if we find a counterexample, we're done, don't have to
work on it anymore. If we don't find a counterexample then we have to do the
mathematics."

[https://www.youtube.com/watch?v=s-CYnVz-uh4&t=8m51s](https://www.youtube.com/watch?v=s-CYnVz-uh4&t=8m51s)

------
ipsin
Any good sources on the min number (>2) of terms of sums of positive powers of
n equaling another power of n, for each n?

I'm curious if there are low or high outliers.

------
ahazred8ta
27^5 + 34^5 + 110^5 + 133^5 = 144^5

~~~
Sniffnoy
You miscopied; it's 84, not 34.

