It shocked me at the time that there were algorithms like this.
I clearly remember all those research papers. After studying all of them, and after three complete rewrites of the engine, we finally implemented a classic two-step algorithm: a sweep-line decomposition of the polygon into monotone polygons, followed by triangulation of the monotone polygons.
We had really heavy headaches dealing with intersections, self-intersections, splitting edges, comparing and merging original vertices with the vertices created by splitting edges, and managing edge connectivity and topology.
After a few attempts we completely discarded the idea of writing it in floating-point arithmetic, preferring fixed-point integers instead (and that was a real turning point for robustness!).
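A minimal sketch of why exact integer (fixed-point) coordinates help, assuming a hypothetical snap-to-grid convention (the names SCALE, to_fixed and orient2d are illustrative, not the engine's actual code): the 2D orientation predicate at the heart of sweep-line triangulation is exact over integers, whereas in floating point it can return the wrong sign for nearly collinear points.

```python
SCALE = 1024  # hypothetical fixed-point resolution: 1/1024 of a unit

def to_fixed(x: float) -> int:
    """Snap a coordinate to the fixed-point grid."""
    return round(x * SCALE)

def orient2d(ax: int, ay: int, bx: int, by: int, cx: int, cy: int) -> int:
    """Sign of the cross product (b - a) x (c - a), computed exactly over ints:
    +1 for a left turn, -1 for a right turn, 0 for collinear."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (det > 0) - (det < 0)

# e.g. orient2d(to_fixed(0.0), to_fixed(0.0),
#               to_fixed(1.0), to_fixed(0.0),
#               to_fixed(0.5), to_fixed(0.5)) == 1  (left turn)
```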
Anyway, good times to remember!
In practice, it wins starting at around 10^4 bytes.
There aren't many non-trivial lower bounds out there, and I think they all use one of three ideas (diagonalization, crossing sequences or oracle reconstruction)!
The "vastly" bit is kind of an understatement.
Every atom of a given stable carbon isotope is indistinguishable from every other atom of that isotope. So you can probably encode a few bits per atom, assuming you can somehow read this data back. However, they are talking about vastly larger numbers of digits here. There are only ~10^80 atoms in the visible universe; bump that to 10^90 and it still doesn't help.
Sure, you might encode 10^6, or hell, I'll even give you 10^100 bits of data per atom, but that's not even close to helpful. There are still on the order of K atoms in the universe and you want to encode 10^400+ × K bits of data. So each atom needs to encode 10^400+ bits, and remember, every atom of a stable isotope is indistinguishable from every other atom of that isotope.
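To put rough numbers on it: even with a wildly generous 10^90 atoms each somehow holding 10^100 bits, you get 10^90 × 10^100 = 10^190 bits in total, nowhere near the 10^400+ bits the digits in question would need.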
8 binary bits, taken in order, can encode 256 states, as if each individual bit were capable of encoding 32 states.
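For contrast, a quick back-of-the-envelope comparison: 8 indistinguishable tokens can only encode 9 states (all you can vary is how many are present, 0 through 8), while 8 ordered bits encode 2^8 = 256 states. Ordering and position do almost all of the work.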
Which is why I suggest comparing it to the number of bits required to encode the universe. If all you wanted was to store a single arbitrary number and could read/write the whole universe, you could go out to about 2^(10^(80 * k)), where k is larger than 1 but, I suspect, below 100.
When people use this phrase they're not trying to claim that the number of atoms in the universe is directly related to the problem at hand; they're just trying to convey scale.
Another argument would perhaps be the energy required to do the computation. Maybe that relates more directly to the number of atoms in the universe, via Einstein's equation?
This is a pretty huge "well, except". When people talk about the information that can be stored in the universe, this is exactly the limitation they have in mind.
As for measuring distances, I suppose there could be numerous ways, like measuring gravitational or electrostatic attraction (not sure what it is called in English). I think only quantum theory says we cannot measure to arbitrary precision, or at least that if we do, there are other issues. Still, that would be a different argument than simply pointing at the number of atoms.
Why is this hyperlinked? I was hoping the link would help explain why 1729 and not some other number, but it’s just trivia...
> One immediate practical effect would be to earn the discoverer a million dollar prize from the Clay Mathematics Institute.
I mean, I can’t argue with the practicality of that.
BTW, who thinks the other person was really just baiting Ramanujan into saying something like this?
> I remember once going to see him when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. "No," Ramanujan replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways."
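For reference, the two representations Ramanujan had in mind: 1729 = 1^3 + 12^3 = 9^3 + 10^3.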
The real question is, did he know of this fact already or did he come up with it on the spot after hearing what Hardy said?
And 1729 is 1728 + 1. Not even kidding, that is a thing that happens.
> In Section 5 we establish (1.3) with the explicit constant K= 1728, and in Section 5.4 we list some optimisations that improve it to K= 8.
> In Section 5, we will simply take d:= 1729 (any constant larger than K would do)
If and when practical quantum computers are built, Peter's algorithm will be one of the first algorithms run on them. Right now it is a galactic algorithm. But perhaps it is the best example of the importance of galactic algorithms.
> As with these, it is so large that the observable universe is far too small to contain an ordinary digital representation of Graham's number, assuming that each digit occupies one Planck volume, possibly the smallest measurable space. But even the number of digits in this digital representation of Graham's number would itself be a number so large that its digital representation cannot be represented in the observable universe. Nor even can the number of digits of that number—and so forth, for a number of times far exceeding the total number of Planck volumes in the observable universe.
Just so I'm clear: they're saying that not only can I not fit Graham's number in all the Planck volumes of the universe, and not only can I not count the digits of GN and write that count in the Planck volumes of the universe (and so on), but the number of these "indirections" is itself so large it won't fit in the universe?
1. GN (can't fit).
2. Number of digits in GN (can't fit).
3. Number of digits in #2 (can't fit).
4. Number of digits in #3 (can't fit).
N. <-- The numbered list item itself won't fit.
(Where log = the base-10 logarithm; each "number of digits" step above is roughly one application of log.)
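A tiny sketch of that iteration, just to make the operation concrete. The name `indirections` and the cutoff are illustrative; ~10^185 is the approximate number of Planck volumes in the observable universe, and of course nothing like Graham's number can actually be passed in.

```python
def indirections(n: int, capacity: int = 10**185) -> int:
    """How many times must we replace n by its digit count (one log10 step)
    before the result fits within `capacity` symbols? For Graham's number,
    even this return value would not fit in the universe, per the quote above."""
    steps = 0
    while n > capacity:
        n = len(str(n))  # number of digits of n, i.e. floor(log10(n)) + 1
        steps += 1
    return steps

# Works for ordinary big numbers, e.g. indirections(10**1000) == 1,
# but Graham's number is far beyond anything representable here.
```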
Hofstadter talks a bit about this abstraction in his article 'On Number Numbness':
> If, perchance, you were to start dealing with numbers having millions or billions of digits, the numerals themselves (the colossal strings of digits) would cease to be visualizable, and your perceptual reality would be forced to take another leap upward in abstraction, to the number that counts the digits in the number that counts the digits in the number that counts the …
In practical algorithms, one does not ignore constants, one takes into account the actual hardware (particularly cache effects), and one does not ignore real-world distributions of inputs.
However, its runtime is only guaranteed to be polynomial if P = NP actually holds.
If P = NP, then there is some Turing machine that, whenever a satisfying assignment exists, outputs one within time t polynomially bounded in the input size. Since that Turing machine is a fixed constant (it does not depend on the input), the enumeration algorithm runs in polynomial time.
- start with a file size (let's say 4 kB) and a duration (let's say 1 hour).
- generate all possible files of the given size (there are 2^32768 of them), mark them as executable, and run them. If they don't finish within the given duration, kill them.
- check the output of each program that didn't crash. If one of them is a valid solution, great. If none is, try again with a longer duration and a larger size (a rough sketch of this loop follows below).
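A rough sketch of that enumerate-every-program idea, under obvious assumptions: candidate "programs" are raw byte strings exec'd as Python that are expected to define a hypothetical `solve(input)` function, and `is_valid_solution` stands in for whatever polynomial-time verifier the problem (e.g. SAT) provides. This is the textbook galactic construction, not something you would run past a few bytes of program size.

```python
import itertools
import multiprocessing

def run_candidate(source: str, problem_input: str, queue) -> None:
    """Exec a candidate program; it is expected to define solve(input) -> answer."""
    try:
        namespace = {}
        exec(source, namespace)           # most candidates simply fail to parse
        queue.put(namespace["solve"](problem_input))
    except Exception:
        queue.put(None)

def exhaustive_search(problem_input: str, is_valid_solution,
                      max_size: int = 4, timeout: float = 1.0):
    """Try every byte string up to max_size bytes as a program, doubling the
    size budget and the timeout until some candidate's output verifies."""
    while True:
        for size in range(1, max_size + 1):
            for candidate in itertools.product(range(256), repeat=size):
                source = bytes(candidate).decode("latin-1")
                queue = multiprocessing.Queue()
                proc = multiprocessing.Process(
                    target=run_candidate, args=(source, problem_input, queue))
                proc.start()
                proc.join(timeout)        # kill it if it runs too long
                if proc.is_alive():
                    proc.terminate()
                    proc.join()
                    continue
                output = queue.get() if not queue.empty() else None
                if output is not None and is_valid_solution(problem_input, output):
                    return output
        max_size *= 2
        timeout *= 2

# Call exhaustive_search(...) from under `if __name__ == "__main__":` so the
# worker processes can be spawned safely; expect it to take essentially forever.
```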
It doesn't just solve SAT. At galactic scales, it will either be the optimal solution to any problem, or be as fast as checking the answer, whichever is slower. If the whole code-generation step doesn't depend on the input size, it is constant time. That the constant is many times the age of the universe doesn't change the complexity class.
^ It would be funny, though, if P = NP but for no Turing machine does a proof exist that it solves SAT. That would have to be ruled out for the algorithm to work.