> They've verified that the Collatz conjecture holds for all numbers up to ~2^68, but that's precisely 0% of all the numbers that need to be checked.
This is the crux of what has me looking like a fool to every mathematician in the thread, but I don't mind: why is 2^68 0% of the "numbers that need to be checked"? From a physicist's standpoint, you can do a lot with numbers from 0 to 2^68. After all, 64-bit floats are quite useful. Is there 0% value in proving the Collatz conjecture for all possible numbers one might want to use in a normal programming language without big-number libraries?
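The kind of empirical check being asked about is easy to sketch. Here's a minimal version (the step limit of 10,000 is an arbitrary safeguard, not part of the conjecture): iterate the 3n+1 map and count steps until reaching 1.

```python
def collatz_steps(n: int, limit: int = 10_000):
    """Count iterations of the 3n+1 map until n reaches 1,
    or return None if the step limit is exceeded."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        if steps > limit:
            return None
    return steps

# Empirically verify every starting value below some small bound.
assert all(collatz_steps(n) is not None for n in range(1, 10_000))
```

This is exactly the sort of finite verification that reached ~2^68: it tells you the conjecture holds for every value tested, and nothing about any value beyond them.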
I know the question must sound pretty crude, but it's also a source of mystery. Mathematicians are so obsessed with exactness. Is there no room for empirical analysis in number theory?
In other words, number theory relies on certain assumptions. What if one of your assumptions is "a number system from 0 to 2^64"? Why is there no value in that?
Because it's uninteresting. The point of pure math is not to be "useful", it's to be interesting. The Collatz conjecture is (as yet) a completely useless result. Like I said, if God himself came down and told the world "the Collatz conjecture is true", all we'd get is a useless piece of trivia. "The Collatz conjecture is true for the first 2^68 natural numbers" is even more worthless than that. Maybe it'd be useful if we had an application for it, but for context, many pure mathematicians are quite derisive of the idea that their work should have practical applications.
Here's a digression, a simple math problem. If you take a checkerboard and remove two opposite corner squares, can you tile the remaining 62 squares with 31 dominoes?
You can probably write a program that can exhaustively churn through all the possible arrangements of dominoes in a checkerboard, and it'll spit out the answer (it's "no"). But is that interesting? No. This is a boring fact, "if you take a checkerboard and remove the two opposite corners you can't tile the remaining squares with 31 dominoes". No one cares about that.
But, here's a proof that this is true. If you look at the colors of each square on a checkerboard, there are 32 black squares and 32 white squares. When you remove the two opposite corner squares, you're removing two squares of the same color. So you have 30 black squares and 32 white squares left (or the converse). Meanwhile, every domino takes up one black square and one white square. So no matter how you place 31 dominoes, they should cover 31 black squares and 31 white squares. Therefore, we've proven the tiling is impossible.
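The counting argument above can be checked mechanically. This sketch (the `color_counts` helper is mine, not from the thread) colors each square by the parity of `row + col` and tallies what's left after removing two opposite corners:

```python
def color_counts(removed):
    """Count black and white squares on an 8x8 board after
    removing the squares in `removed`. A square's color is
    (row + col) % 2, just like a real checkerboard."""
    counts = [0, 0]
    for r in range(8):
        for c in range(8):
            if (r, c) not in removed:
                counts[(r + c) % 2] += 1
    return counts

# Opposite corners (0,0) and (7,7) share the same color,
# so the remaining counts are 30 and 32 -- but 31 dominoes
# would have to cover exactly 31 of each color.
print(color_counts({(0, 0), (7, 7)}))
```

Running the program just confirms the imbalance; the invariant argument is what explains it, which is the point being made here.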
That's somewhat interesting. You have an easily understandable argument for why the fact is true, and you have an application of a method (here, invariants) for looking at other math problems. Plus, it's kinda fun and satisfying and "elegant" to solve a problem like this. The proof is much, much more interesting than knowing the answer to the problem. Hopefully this helps convey the difference.