
Galactic Algorithm - jonbaer
https://en.wikipedia.org/wiki/Galactic_algorithm
======
gmoot
Ah. I still remember my computational geometry professor telling us about an
algorithm to do triangulation (decomposing a polygon into triangles) in linear
time: "It's very complex. I don't think anyone has actually implemented it".

It shocked me at the time that there were algorithms like this.

[https://en.wikipedia.org/wiki/Polygon_triangulation#Computat...](https://en.wikipedia.org/wiki/Polygon_triangulation#Computational_complexity)

~~~
carlmr
They mention O(n log* n) being quasi-linear time. I had to look up log* n
because I had never seen it before.
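
In case it helps anyone else: log* n (the iterated logarithm) counts how many
times you apply log before the result drops to 1 or below. A minimal Python
sketch, with base 2 assumed (the choice of base barely changes the answer):

    # log*(n): number of times log2 must be applied before the value is <= 1.
    from math import log2

    def log_star(n):
        count = 0
        while n > 1:
            n = log2(n)
            count += 1
        return count

    print(log_star(2))           # 1
    print(log_star(65536))       # 4
    print(log_star(2 ** 65536))  # 5 -- log* grows absurdly slowly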

~~~
pfdietz
And then there's O(n alpha(n)), where alpha(n) is the inverse Ackermann
function, which grows even more slowly than log* n. The famous union-find
algorithm has this complexity.
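
For the curious, a minimal union-find sketch in Python with union by rank and
path halving (a form of path compression); this is the structure whose
amortized cost per operation is O(alpha(n)):

    class UnionFind:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            # Path halving: point nodes on the path closer to the root as we go.
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra == rb:
                return
            # Union by rank: attach the shallower tree under the deeper one.
            if self.rank[ra] < self.rank[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            if self.rank[ra] == self.rank[rb]:
                self.rank[ra] += 1

    uf = UnionFind(5)
    uf.union(0, 1); uf.union(3, 4)
    print(uf.find(1) == uf.find(0))  # True
    print(uf.find(2) == uf.find(4))  # False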

------
cperciva
In addition to galactic algorithms, there are also galactically _proven_
algorithms. One example is the algorithm for matching with mismatches which I
presented in the first chapter of my doctoral thesis; I proved that it was
faster than other algorithms for inputs of at least ~10^30 bytes.

In practice, it wins starting at around 10^4 bytes.

~~~
Gehinnn
How do you prove that something is faster than everything else? Was it already
running in linear time?

There aren't many non-trivial lower bounds out there, and I think they all use
one of three ideas (diagonalization, crossing sequences or oracle
reconstruction)!

~~~
cperciva
Sorry, I was unclear. I proved that my algorithm was faster than other known
algorithms; I didn't prove that it was faster than all other _possible_
algorithms.

------
asdaddasdad
I've never understood the "number of the atoms in the universe" argument. The
number of states the universe can be in doesn't seem to be equal to the number
of atoms. For example, just two atoms could encode lots of numbers simply by
using their distance. Quantum physics would affect it, but I mean in
principle: we are not switching atoms on and off to encode state.

~~~
Retric
_It’s 2^1729 digits, (vastly) more digits than there are atoms in the
universe._

The vastly bit is kind of an understatement.

Each atom of a stable isotope (say, carbon-12) is indistinguishable from every
other atom of that isotope. So, you can probably encode a few bits per atom,
assuming you can somehow read this data back. However, they are talking about
vastly larger numbers of digits here. There are only ~10^80 atoms in the
visible universe, so call it ~10^80 digits' worth; bump that to 10^90 and it
still does not help.

Sure, you might encode 10^6 or, hell, I will give you 10^100 bits of data per
atom, but that's not even close to helpful. There are still only on the order
of K atoms in the universe and you want to encode 10^400+ * K bits of data. So
each atom needs to encode 10^400+ bits, and remember, atoms of the same stable
isotope are indistinguishable from one another.
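
A back-of-the-envelope version of that in Python, taking the quoted figure of
2^1729 digits at face value; the ~10^80 atom count is the usual estimate, and
10^100 bits per atom is purely a for-the-sake-of-argument assumption:

    from math import log10

    digits_needed = 1729 * log10(2)   # log10 of the digit count 2^1729: ~520
    storage       = 80 + 100          # log10 of (atoms * bits per atom): 180
    print(digits_needed, storage)     # ~520.4 vs 180, short by a factor of ~10^340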

~~~
nine_k
You forgot about ordering.

8 binary bits _in order_ can encode 256 states, as if every bit was capable of
encoding 32 states.

~~~
Retric
Yea, that specific number fits into 2 kilobytes of memory, but it’s so large
it’s hard to compare it to anything.

Which is why I suggest comparing it to the number of bits required to encode
the universe. If all you wanted was to store a single arbitrary number and you
could read/write the universe, you could get anything up to 2^(10^(80 * k)),
where k is larger than 1 but I suspect below 100.

------
thewarrior
Can’t wait to be told to implement some of these for my next whiteboard
interview

~~~
hrgiger
Perfect candidate for primality testing!
[https://www.mersenne.org/various/math.php#lucas-lehmer](https://www.mersenne.org/various/math.php#lucas-lehmer)
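
A short Python sketch of that test, for Mersenne numbers M_p = 2^p - 1 with p
an odd prime: start from s = 4, square and subtract 2 a total of p - 2 times
mod M_p, and M_p is prime exactly when you end on 0.

    def lucas_lehmer(p):
        m = (1 << p) - 1          # the Mersenne number 2^p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
    # [3, 5, 7, 13, 17, 19] -- 2^11 - 1 = 2047 = 23 * 89 is composite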

------
cornstalks
> _1729_

Why is this hyperlinked? I was hoping the link would help explain why 1729 and
not some other number, but it’s just trivia...

> _One immediate practical effect would be to earn the discoverer a million
> dollar prize from the Clay Mathematics Institute._

I mean, I can’t argue with the practicality of that.

~~~
edanm
I also wonder why that number. But as others said, it's a pretty famous number
all by itself, so it makes sense for _some_ notice to be paid to it; if there
were no hyperlink, I would've thought it was weird and maybe someone had made a
mistake.

~~~
nabla9
1729 has no special meaning for the multiplication algorithm. In the paper
Harvey writes:

> In Section 5 we establish (1.3) with the explicit constant K = 1728, and in
> Section 5.4 we list some optimisations that improve it to K = 8.

> In Section 5, we will simply take d := 1729 (any constant larger than K would
> do)

~~~
_kst_
So 1729 is the 42 of 4-digit numbers.

------
jakeogh
[https://rjlipton.wordpress.com/2010/10/23/galactic-algorithm...](https://rjlipton.wordpress.com/2010/10/23/galactic-algorithms/)

~~~
jonbaer
"The famous quantum factoring algorithm of Peter Shor may or may not be a
galactic algorithm. It is of course one of the great results in theory, ever.
It has sparked funding for research centers on quantum computation that have
promoted many other advances.

If and when practical quantum computers are built Peter’s algorithm will be
one of the first algorithms run on them. Right now it is a galactic algorithm.
But, perhaps it is the best example of the importance of galactic algorithms."

------
leeoniya
is Graham's Number a galactic number?

[https://en.m.wikipedia.org/wiki/Graham's_number](https://en.m.wikipedia.org/wiki/Graham's_number)

~~~
saagarjha
You could say so:

> As with these, it is so large that the observable universe is far too small
> to contain an ordinary digital representation of Graham's number, assuming
> that each digit occupies one Planck volume, possibly the smallest measurable
> space. But even the number of digits in this digital representation of
> Graham's number would itself be a number so large that its digital
> representation cannot be represented in the observable universe. Nor even
> can the number of digits of that number—and so forth, for a number of times
> far exceeding the total number of Planck volumes in the observable universe.

~~~
function_seven
> _for a number of times far exceeding the total number of Planck volumes in
> the observable universe._

Just so I'm clear: they're saying that not only can I not fit Graham's Number
in all the Planck volumes of the universe, and not only can I not count the
digits of GN and write _that_ in the Planck volumes of the universe (and so
on), but the _number_ of "indirections" is itself so large as to not fit in the
universe?

Like:

    1. GN (can't fit).
    2. Number of digits in GN (can't fit).
    3. Number of digits in #2 (can't fit).
    4. Number of digits in #3 (can't fit).
    ...
    N. <-- The numbered list item itself won't fit.

Am I understanding that right?

~~~
Gunax
Yes. log(log(log(... GN ...))) applied X times (where X is the number of
Planck volumes in the universe) is still greater than X.

Where log = the base-10 logarithm.
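
A toy version of the indirection, on a number small enough to actually hold in
memory; for Graham's number, even the step count printed here wouldn't fit in
the universe:

    def digit_count(n):
        return len(str(n))

    n = 10 ** 1000               # a 1001-digit number
    steps = 0
    while n >= 10:
        n = digit_count(n)
        steps += 1
    print(steps)                 # 2 -- ordinary "huge" numbers collapse almost instantly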

Hofstadter talks a bit about this abstraction in his article 'On Number
Numbness'

> If, perchance, you were to start dealing with numbers having millions or
> billions of digits, the numerals themselves (the colossal strings of digits)
> would cease to be visualizable, and your perceptual reality would be forced
> to take another leap upward in abstraction, to the number that counts the
> digits in the number that counts the digits in the number that counts the
> objects concerned.

~~~
saalweachter
And despite that, Graham's Number is still in the countable set. :-)

------
odomojuli
Related discussion:
[https://news.ycombinator.com/item?id=21151646](https://news.ycombinator.com/item?id=21151646)

------
mormegil
Just yesterday, I read the comments on the Physical Polynomial-Time complexity
class on Complexity Zoo:
[https://complexityzoo.uwaterloo.ca/Complexity_Zoo:P#php](https://complexityzoo.uwaterloo.ca/Complexity_Zoo:P#php)

------
pfdietz
This sort of thing highlights the difference between theoretical and practical
algorithms.

In practical algorithms, one does not ignore constants, one takes into account
actual hardware (particularly cache effects), and one does not ignore real
world distributions of inputs.

------
elil17
This reminds me of the spaghetti sort algorithm:
[https://en.m.wikipedia.org/wiki/Spaghetti_sort](https://en.m.wikipedia.org/wiki/Spaghetti_sort)

~~~
imglorp
Same thing, but a little more practical for computing in case your computer
doesn't have a hand and a table:
[https://rosettacode.org/wiki/Sorting_algorithms/Sleep_sort](https://rosettacode.org/wiki/Sorting_algorithms/Sleep_sort)
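
A minimal sleep sort sketch in Python, with the usual caveats: it only "works"
for small non-negative numbers, and the scheduler makes no real ordering
guarantees.

    import threading
    import time

    def sleep_sort(values):
        result = []
        lock = threading.Lock()

        def worker(v):
            time.sleep(v * 0.1)          # sleep proportionally to the value
            with lock:
                result.append(v)

        threads = [threading.Thread(target=worker, args=(v,)) for v in values]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return result

    print(sleep_sort([3, 1, 4, 1, 5]))   # [1, 1, 3, 4, 5] (usually)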

------
Gehinnn
The godfather of galactic algorithms is the known (!) algorithm that solves
SAT in polytime.

However, its runtime is only guaranteed to be polynomial if P = NP.

~~~
Gehinnn
Before an unbeliever comments (as I would do myself if I wouldn't know the
algorithm): This algorithm involves enumerating f(M, t) where M is the Gödel
number of a turing machine T, t an integer and f the output of T after t steps
on the original input. This output is interpreted as variable assignment. If
it satisfies the formula, the SAT instance is satisfiable. Otherwise the
enumeration continues.

If P=NP there is a turing machine that always outputs a satisfying assignment
if one exists with t being polynomial bounded by the input size. As this
turing machine is constant, the algorithm runs in poly-time.
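
A rough sketch of that enumeration in Python. One big hedge: a faithful
version would simulate the M-th Turing machine for t steps; here run_machine
is a toy stand-in (it just decodes M into a bit string) so the skeleton
actually runs, but the dovetailing structure is the point.

    from itertools import count

    def satisfies(clauses, assignment):
        # CNF check: clauses are lists of signed 1-based literals (DIMACS style).
        return all(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    def run_machine(m, t, n_vars):
        # Toy stand-in for "output of Turing machine m after t steps".
        return [bool((m >> i) & 1) for i in range(n_vars)]

    def universal_search_sat(clauses, n_vars, max_budget=10**6):
        # Dovetail over (machine, step budget) pairs. If P = NP, some *fixed*
        # machine finds a satisfying assignment within polynomially many steps,
        # so the whole loop is polynomial -- with an absurd constant.
        for budget in count(1):
            if budget > max_budget:
                return None              # give up; the real argument never needs to
            for m in range(budget):
                candidate = run_machine(m, budget - m, n_vars)
                if satisfies(clauses, candidate):
                    return candidate

    # (x1 or not x2) and (x2 or x3)
    print(universal_search_sat([[1, -2], [2, 3]], n_vars=3))   # [True, True, False]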

~~~
GuB-42
So, in simpler terms (for me):

- start with a file size (let's say 4 kB) and a duration (let's say 1 hour).

- generate all possible files of the given size (there are 2^32768 of them),
mark them as executable and run them. If they don't finish within the given
duration, kill them.

- check the output of each program that didn't crash. If one is a valid
solution, OK. If not, try again with a longer duration and a larger size.

It doesn't just solve SAT. At galactic scales, it will either be the optimal
solution to any problem, or be as fast as checking the answer, whichever is
slower. If all that code-generation business doesn't depend on the input size,
it is constant time. That the constant is many times the age of the universe
doesn't change the complexity class.

~~~
Gehinnn
Yeah, you got the point ;) This, however, only works for problems where you
can validate a solution in polynomial time. There is still the question of
what happens when a solution does not exist: even in that case, the algorithm
must terminate in polynomial time.

------
angel_j
If all that is required is a super large constant (as in some cases), why not
assume the large constant, do the calculation, then factor out the large
constant?

~~~
nabdab
Because the “faster” algorithm only beats the “slow” algorithm beyond the
range that huge constant implies; for the inputs you actually care about, it
is not faster than just applying the “slow” algorithm directly. So the only
thing you would achieve is to slow down the calculation overall.

