The 8000th Busy Beaver number eludes ZF set theory (2016) (scottaaronson.com)
190 points by czhu0217 on May 3, 2018 | 110 comments



The limit has since been pushed lower, to 1919 states: https://github.com/sorear/metamath-turing-machines


> You’d certainly never run across such a program by chance—not even if you had a computer the size of the observable universe, trying one random program after another for billions of years in a “primordial soup”!

This is a remarkable claim, given that we did encounter this program by random chance, and it only took about 10 billion years (the estimated age of the universe) give or take a few billion years.


We didn't encounter it by chance though; we encountered it by complex goal-directed physical processes that are known to produce extremely high distinctiveness-to-Kolmogorov-probability outputs.


But chance ‘built’ the machine that built the machine that… …that built the machine that can run such complex goal-directed processes.


I think it's clear that the author is simply referring to the odds of finding this program by trying random programs one after the other, not making any philosophical claims about whether the Universe is deterministic.


It's random, but the probability distribution is not uniform.

Just because you have a random process choosing between outcomes A and B doesn't mean that there is a 50/50 chance you'll see either outcome.

For a related and interesting result, see the recent paper where they analysed the Drake equation using probability distributions.


The universe doesn't try random states sequentially; it's governed by things like gravity that tend to lead to states where optimization processes can form.


> But chance ‘built’ the machine that built the machine that…

Or perhaps this consists of (slight) evidence that favourable conditions for the evolution of intelligence are more common than we currently suspect.


IMHO Amazing: There's a boy in his room playing with his relatively large computer. He dreams up quantum mechanics and general relativity, types in the equations, clicks on the push button Big Bang, zooms forward 12 billion years of simulated time, and sees what he gets.

This was not his first try. His other efforts all ended in "Poof".

But this effort yielded life that figured out the equations! So, the equations generated life smart enough to figure out the equations.

So, the equations were both (1) general enough to generate life and (2) simple enough that the generated life could figure out the equations.

IMHO, amazing juxtaposition!

Is this set of equations essentially the only possible set of equations with both (1) and (2)?


No. Chance + selection. VERY important difference.


Is selection here not a byproduct of chance?


If I aim a gun and pull the trigger, the results are not the byproduct of chance, excepting that in some sense everything is.

This was consciously aiming a gun at an explicitly-named remote target, ricocheting the bullet off an unprecedentedly-reasonable number of explicitly-chosen elements, and hitting the target.

Don't let semantics muddy the water about this achievement


No. The root of all selection is selection for reproductive fitness. That is decidedly non-random.


There is no selection for reproductive fitness until there is a replicator. Once a random process has stumbled upon a replicator, selection for reproductive fitness is inevitable, so long as some of the replicator's descendants survive. Reproductive fitness, therefore, is a feature of sufficiently large, complex and long-running random processes, not something additional that is imposed from outside. This is what Dennett called "Darwin's dangerous idea".


No. The "dangerous idea" is that design might not require a designer.

As for the first replicator, we simply do not yet know how (or even when) that came about, though random chance does seem likely. So I guess if you add enough layers of "built the machine that..." then you do end up with pure randomness (probably). But I would conjecture that there's a lot less "informational distance" between the cosmos and the first replicator than there is between the first replicator and Adam Yedidia.


It might not be what Dennett meant, but I don't think issues of informational distance refute the claim that we can expect selection for reproductive fitness to be a feature in any random process that gives rise to replicators. After the appearance of the replicator, the random process continues to operate under the same rules (laws of physics) as before.


After the appearance (and sufficient success) of the replicator, it stops being a random process. Edit: or at least not a completely/primarily random process.


Oh, I certainly agree with that.


I think the selection is a byproduct of the underlying laws, though one must rely on chance until a system complicated enough to engage in selection emerges. Now, are those underlying laws reliant upon chance? This requires understanding beyond that of the current universe, so maybe.


There are etymological arguments for either position. Regardless, I wouldn't claim the distinction exists, and would find it hard to justify that it's VERY important.


It's the difference between draw poker and a fresh hand every time.


It's not really that way.

The universe has produced things that basically must exist given the laws of physics (when certain chemicals exist together under certain conditions we know they will react in certain ways, etc).

We are one of those things, and the things we make are things that we are compelled to make by our very nature.

So it's not chance. Our universe as constructed necessarily gives rise to the things we see and experience.


So the universe is deterministic and nothing happens by chance? I think this is essentially arguing definitions of terms.


It doesn't need to be deterministic, it just needs to be biased.


It's more akin to arguing for the anthropic principle.


I'm not arguing about anything, I'm just stating facts


>we encountered it by complex goal-directed physical processes

If you mean evolution, yes. But nothing about the big bang for example makes it a "goal directed process".

(In fact even evolution only looks that way from hindsight -- there's not some end goal there either).


Actually, the primary complex goal-directed physical process I was talking about was Scott Aaronson. Also various other mathematicians, and mathematics/mathematical research collectively. Evolution is goal-directed with regard to locally maximizing inclusive reproductive fitness, but it's not particularly involved by this point.


If a machine arises by chance that creates that goal directed process, that doesn’t seem any less interesting to me than if the output arose by chance instead. In some ways, it seems even more remarkable.


I'll play!

This distinction, between chance and goal-directed physical processes, is artificial, though.

It's artificial because it was produced by humans.

:-)


Ultimately, the thing that makes the complex goal-directed physical processes which make up life on this planet actually work is the inexorable, unavoidable increase of entropy. In some sense, living organisms are just a lucky arrangement that happen to make the direction in which entropy increases the one where life thrives, reproduces and grows in complexity. So I guess on some level maybe this is all the result of chance?


At an observation scale of, say, our local supercluster, could the electromechanical activity of the processes employed be distinguishable from random physical processes throughout the rest of the observable universe? Honest question, if the question makes sense. If not distinguishable, then perhaps the parent of your comment has a good point.


It depends if you believe in free will, or if you think we are just entropic particles and waves.

The more precise statement is that you'd never encounter this number by uniform sampling among objects of (some reasonable choice of) finite size.


No, it doesn't. "Random chance" in this context clearly implies the exclusion of goal-oriented interference from intelligent entities, whether you ascribe any special mystical meaning to "intelligence" or not.


I think the author intends to say something like: You'd never† run across such a program by uniform random sampling of programs of the appropriate length.

† or, rather, you'd have a vanishingly small probability of doing so
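
To put rough numbers on that (a back-of-envelope sketch, not from the post; the counting convention and the "trial budget" below are my own round-number assumptions):

  #include <stdio.h>
  #include <math.h>

  int main(void) {
      /* An n-state, 2-symbol machine has 2n (state, symbol) table entries,
         each choosing a symbol to write (2), a move (2), and a next state
         including halt (n + 1): (4(n+1))^(2n) machines in total. */
      double n = 8000.0;
      double log10_machines = 2.0 * n * log10(4.0 * (n + 1.0));

      /* Generous trial budget: ~1e80 atoms in the observable universe,
         ~4e17 seconds since the Big Bang, 1e9 trials per atom per second. */
      double log10_trials = 80.0 + log10(4e17) + 9.0;

      printf("log10(# of 8000-state machines) ~ %.0f\n", log10_machines); /* ~72083 */
      printf("log10(# of random trials)       ~ %.0f\n", log10_trials);   /* ~107   */
      return 0;
  }

The gap is tens of thousands of orders of magnitude, which is the point of the quoted sentence.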


If you shuffle a deck of 52 cards and look at their sequence you're almost certainly the first person to see this particular sequence and, if you keep it a secret, almost certainly the last person to see it.
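
For scale (a quick sketch; the "shuffles ever performed" estimate is a deliberately generous made-up number):

  #include <stdio.h>

  int main(void) {
      /* 52! orderings of a deck; the product fits comfortably in a double. */
      double orderings = 1.0;
      for (int k = 2; k <= 52; k++)
          orderings *= k;

      /* Wildly generous: 1e10 people shuffling once per second
         for 1e10 years (~3.15e7 seconds per year). */
      double shuffles_ever = 1e10 * 3.15e7 * 1e10;

      printf("52!           ~ %.3e\n", orderings);     /* ~8.066e67 */
      printf("shuffles ever ~ %.2e\n", shuffles_ever); /* ~3.15e27  */
      return 0;
  }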


Wasn't really by chance; people put in a concerted effort to find it.


> To preempt the inevitable question in the comments section: yes, we did run these Turing machines for a while, and no, none of them had halted after a day or so. But before you interpret that as evidence in favor of Goldbach, Riemann, and the consistency of ZFC, you should probably know that a Turing machine to test whether all perfect squares are less than 5, produced using Laconic, needed to run for more than an hour before it found the first counterexample (namely, 3² = 9) and halted. Laconic Turing machines are optimized only for the number of states, not for speed, to put it mildly.

Does anyone know of any speed-optimized tests of ZFC currently running? I'd expect that they're not using Turing machines. . .


I mean, there might not be a point in optimizing a mechanical search of all possible contradictions in ZFC. Surely any contradiction would be ridiculously complex, and probably doesn't exist. No human-achievable amount of optimization will distinguish between those two.


That'd make for a bad-ass proof of work.


How would you create an arbitrary hard version of the problem? Or scale its hardness?


Shh, coins !


Well 1 coin. If you are lucky. Unspendable.


The coins in Bitcoin's genesis block are also unspendable.


It’s a perfectly well-defined integer function, and yet once you fix the axioms of mathematics, only finitely many values of the function can ever be proved, even in principle.

Statements like this make me suspicious that one day we will redefine the axioms of mathematics so that such a function is no longer considered "perfectly well-defined". Just feels like it will lead to paradox, like the set of all sets which do not contain themselves. I am not a professional mathematician though ;-)


You need not be suspicious, for mathematicians have long since already done that. What you're reading as a "math" statement is actually a meta-math statement. It doesn't say "For this set of axioms, only finitely many values of the function can be proved." It says "For any given axiomatization of mathematics, only finitely many values can be proved." The number of provable values may differ for different axiom systems, but it will always be finite.

There isn't "a" mathematics; at this level there's lots of them, and you need to state which one you are using, which is why the title mentions ZF set theory by name. ZF is a very popular one, and is more-or-less the underpinnings of what most people learn in K-12 and even into college [1], but it is not the only one.

We can be confident about this because, if some axiom set "claims" to be able to prove all of them, we can produce a contradiction and thereby prove the axiom set inconsistent.

[1]: I wouldn't say this is 100% true, but more because "school" mathematics simply accepts some contradictions and/or fuzziness in it in the interests of keeping it simple. For instance, in the probably-about-three-day intro to set theory you got in high school, Russell's Paradox ("the set of all sets that don't contain themselves") is probably best thought of as not so much being a paradox as simply ill-defined; school set theory isn't strong enough to hold up such a statement long enough for it to contradict itself, so to speak. If you take a good calculus course in high school you can also run up against some limitations of the somewhat informal definition of "real numbers" that you tend to get, but then you end up just sort of backing away, because the next step up in rigor from there is generally college-level-math-specialist (math major, comp sci masters, etc.)... even if a given student could handle it, it's not something we can ask for from our math teachers at scale.


Many mathematicians have similar attitudes. The general philosophy is known as Constructivism[0]. Broadly, the idea is that we should only consider mathematical objects that we can actually "get our hands on".

[0] https://en.wikipedia.org/wiki/Constructivism_(mathematics)


Well, with Gödel's Incompleteness Theorems, such behavior appears to be an unavoidable property of any mathematical or computational system that can do anything of note. We may or may not be able to build systems where such properties are further removed from the 'practical' aspects... although such work would necessarily build upon the work in the linked post, which is finding just how far removed it is in the current system.


Not quite "anything of note": it has to be able to do arithmetic. There are very simple systems that are useful but not powerful enough for Gödel's trick, and so we think those would be safe.


Geometry, for example.


If your axioms of Geometry allow statements about the length of line segments, it's probably possible to implement integer arithmetic.



How does one write down those axioms?


Well there's Euclid's, but the famous modern one is Hilbert's. https://en.m.wikipedia.org/wiki/Hilbert%27s_axioms?wprov=sfl...


Maybe I should only believe in the existence of functions that can be implemented by a terminating computer program.


I implemented Busy Beaver the other day. Here's the code and nice looking graphs:

http://www.catonmat.net/blog/busy-beaver/
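
For anyone who wants the idea in miniature, here is a rough brute-force sketch (my own, not the code from the linked post) that recovers BB(2) = 4 by enumerating every 2-state, 2-symbol machine and simulating each one under a step cap:

  #include <stdio.h>

  /* Each of the 4 (state, symbol) table entries picks a symbol to write (0/1),
     a move (L/R), and a next state (A, B, or Halt): 12 options per entry,
     so 12^4 = 20736 machines. The step cap of 1000 is far above the known
     S(2) = 6, so no halting machine is misclassified. */

  #define STATES   2
  #define STEP_CAP 1000
  #define TAPE     (2 * STEP_CAP + 3)

  typedef struct { int write, move, next; } Rule;   /* next == STATES means halt */

  static int run(Rule tm[STATES][2]) {
      unsigned char tape[TAPE] = {0};
      int head = TAPE / 2, state = 0;

      for (int step = 0; step < STEP_CAP; step++) {
          Rule *r = &tm[state][tape[head]];
          tape[head] = (unsigned char)r->write;
          head += r->move;
          state = r->next;
          if (state == STATES) {                    /* halted: count the 1s */
              int ones = 0;
              for (int i = 0; i < TAPE; i++) ones += tape[i];
              return ones;
          }
      }
      return -1;                                    /* treated as non-halting */
  }

  int main(void) {
      int best = 0;
      for (long code = 0; code < 12L * 12 * 12 * 12; code++) {
          Rule tm[STATES][2];
          long c = code;
          for (int s = 0; s < STATES; s++)
              for (int sym = 0; sym < 2; sym++) {
                  int e = (int)(c % 12); c /= 12;
                  tm[s][sym].write = e % 2;
                  tm[s][sym].move  = (e / 2 % 2) ? 1 : -1;
                  tm[s][sym].next  = e / 4;         /* 0, 1, or 2 (= halt) */
              }
          int ones = run(tm);
          if (ones > best) best = ones;
      }
      printf("max 1s among halting 2-state machines: %d\n", best);  /* 4 */
      return 0;
  }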


"The other day" in 2009? :-)

Why did you say that the 6th busy beaver number "shall never be known"? The best refinement of the formal proof by Aaronson's student puts the 1919th value absolutely out of reach (per a link from HN user panic), but that's a ways away from 6...

I know some authors, I think including Martin Gardner and maybe Donald Knuth, have said that they don't necessarily expect humanity to establish the 6th busy beaver number with current mathematical methods, but "shall never be found" seems a bit too strong to me.


Impossible to know in principle is different from impossible to know in the physical universe.


Dude wrote his own language which compiles to an intermediate language, then Turing machines.

He then managed to test it on a simple statement about square numbers.

Sounds unimaginably complex.

How many people could help you write unit tests for such a thing?


Sounds pretty straightforward. This is a normal way of working in functional languages: you write your business logic in a high-level representation (aka an "embedded DSL") and a succession of interpreters that interpret slightly higher-level representations into slightly lower-level representations, until you get down to one that you can implement directly. It's a nice structure for testing because you can test each interpreter separately on each of its instructions and then the composition behaviour is guaranteed.


Sounds like an interesting assignment at a good undergrad compilers course.


Unit tests are mostly worthless for machines like this, you need formal proofs.


BB(1) ?


> As I stressed there, if you’re in a biggest-number-naming contest, and you write “BB(10000),” you’ll destroy any opponent—however otherwise mathematically literate they are—who’s innocent of computability theory. For BB(n) grows faster than any computable sequence of integers: indeed, if it didn’t, then one could use that fact to solve the halting problem, contradicting Turing’s theorem.

I don't get this. You write "BB(10000)" and they write "BB(10000)+1". Why don't they win?

Yes, under the hypothesis they are innocent of computability theory. That just means they have no idea what "BB(10000)" means, but all they need to know for the biggest-number-naming-contest is that (1) it is a number, and (2) adding 1 to any number produces a bigger number.

I think that even little kids figure out #2 and will use it when confronted with a number they do not understand in such a contest. You might be able to challenge them on #1 and if they are young they might not realize that if their number is disqualified for not being a number then so is your number.

Or are we assuming you had the foresight to specify in the rules that players can only use numbers that they can personally explain, thus allowing you to use BB(10000) because of your familiarity with computability theory, but forbidding them from doing so because they only know it is something that you claim represents a big number?


Basically, the game doesn't make sense when you take turns; instead, contestants write down their answers and show them simultaneously.

See https://www.scottaaronson.com/writings/bignumbers.html .


I think what the author is trying to get across is that BB(10000) would beat any large number generated through "conventional" means.

From the article:
> BB(5) ≥ 47,176,870
> BB(23) > Graham's number[1]

[1] https://en.wikipedia.org/wiki/Graham%27s_number


How about BB(BB(10000))? I would call that conventional means, well-defined notation, etc. And I can keep going in this direction, creating ever larger numbers. (In fact, if space permits, I could define this sequence and then take the BB(10000)th element of this sequence.)


Of course you can go up the hierarchy, but BBⁿ(10000) is not "generated through conventional means" unless you take BB(10000) as given. And the point was that you're not given that.


My guess at an explanation -- have to be more clear on the "contest":

(1) You don't know about the BB function.

(2) We put our money in the pot.

(3) Secretly you write down the digits of the largest number you can think of.

(4) Secretly I see what BB(10,000) is and write down its digits. Uh, we are able to write quickly and have a lot of paper!

(5) We show our numbers, and the person with the largest number wins.


There are ways to determine the relative size of numbers even when you cannot write down the digits. It's how we know TREE(3) is larger than Ackermann(20) (or maybe larger than any description of simple recursive uses of Ackermann using only Ackermann(20) symbols -- so even [It's Ackermann sub 6 applied Ackermann sub 5 times to Ackermann sub 5 and so on, where Ackermann sub 1 is the Ackermann function] doesn't get close). Notice that claim is strong enough it doesn't matter what number you start with. 20, 3, a googolplex. It's all basically the same size.


Sure.


An equivalent response would be to say that "innocent of computability theory" here means "has no useful exposure to it at all (including how to use its notation in this contest)" rather than "doesn't understand its content".


> if you’re in a biggest-number-naming contest, and you write “BB(10000),” you’ll destroy any opponent—however otherwise mathematically literate they are—who’s innocent of computability theory. For BB(n) grows faster than any computable sequence of integers

So is BB(10000) computable or no? If not, does "greater than" still have meaning?


The Busy Beaver _function_ BB() is non-computable. Computability is a property functions have. BB(10000) is a number, although we don't and never will know which one. It doesn't mean anything to ask whether particular numbers are "computable".


Oh, I see, I think. So for the sake of argument BB(10000) could be, like, 5. But there's no way for us to know that for sure because BB is non-computable above a certain N. 5 itself remains just a number. Takes a bit of the drama out of the topic.

(And while "Aha: BB(10000) is NOT 5", one could make some function BBOBB(N) := BB(N)/BB(N-1), and maybe BBOBB(N) for whatever reason is 5 for some large N, but is not computable and thus can't be proven so. Or something like that. Which also takes some of the drama away).


> Oh, I see, I think. So for the sake of argument BB(10000) could be, like, 5. But there's no way for us to know that for sure because BB is non-computable above a certain N. 5 itself remains just a number. Takes a bit of the drama out of the topic.

Sure, although BB(3) is already larger than 5 and every BB number is much larger than the previous one, so the specific estimate of 5 is probably going to be more of an understatement than almost anything anyone has ever said. :-)

> (And while "Aha: BB(10000) is NOT 5", one could make some function BBOBB(N) := BB(N)/BB(N-1), and maybe BBOBB(N) for whatever reason is 5 for some large N, but is not computable and thus can't be proven so. Or something like that. Which also takes some of the drama away).

BBOBB(N) also grows faster than any computable function, and for N≥4, BB(N+1)>BB(N)². So also it's not going to be 5 for any large N.


Well so BBOBBSQUARED or whatever. The point being that it's conceivable to define a simple function that is not computable yet has a simple result.

Though really that's what (computable.bb)(n) already is, in a far more interesting fashion. So really, never mind this.


I'm not quite sure where you're going with this. The BB numbers are definitely regular integers in the same way that twelve or a googol is, but the uncomputability and fast growth of the function tend to "infect" other transformations and related functions, so that it also becomes very hard to say specific things about them.


Yeah I don't really know either. Something like if you name a number by saying BB(10000), but it's not computable, then have you actually named a number? But really that's just a boring question of how you referee the game.


Of course. The number of Turing machines is countable, so you can number them. HALT(n) is 1 when Turing machine n halts and 0 when it doesn't.

Incomputable but always 0 or 1.


Except if you doubt that "when it doesn't halt" is an actual condition.


There's an error here - I said that computability isn't a property of numbers. That's not really fair. Real numbers on the whole can be non-computable, but all the integers are computable, and all these BB() family functions have integer results, so those numbers would always be computable; we just have no idea what they are exactly.

Chaitin's Constants are not computable. They're some fraction (between 0 and 1) which is the probability of a randomly chosen program for some Turing equivalent system halting. This constant will vary depending on how the system works, but we can't compute it for any system at all for obvious reasons.
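
For reference, the usual definition, for a fixed prefix-free universal machine U, sums over the programs p on which U halts:

  \Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}

The value depends on the choice of U, which is the "vary depending on how the system works" point above.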


It is possible to generate a lower bound, so it can be compared with usual numbers, like g_64 etc.


Vaguely related, I implemented Busy Beavers in C#. It also has some support for automated proofs (i.e. making it run exponentially faster). It's open source at https://github.com/nerai/Nibbler


It's a minor shame that they named their new machine Z. There's already a virtual machine called Z - the one used to run Zork. Imagine if you got the two mixed up! That could make for some very confusing adventuring and/or mathematics indeed.


If you mixed those up, the probability of being eaten by a grue is very high.


I find this article hard to understand since there is so much about this topic space (set theory, computational complexity, etc.) that I don't know! How do I get to a place where I can whiz through this kind of stuff?

Any recommendations for books or youtube videos?


If you changed the question to something like "process X uses a tape no longer than all the atoms in the universe and will terminate before the heat death of the universe" then it becomes finite, so ZF set theory can handle it fine in principle, via simulation.

So it seems like this paper is about a very small Turing machine, and so seems very concrete, but it's also about very long-running behaviors that are likely to be indistinguishable from an infinite loop in practice, so it avoids any practical result?


What's the smallest known busy beaver with this property?


The one described in Scott's next post [0] is the current champion, I think, coming in at 1919 states and halting iff ZFC is inconsistent.

[0] https://www.scottaaronson.com/blog/?p=2741


The one described in the post.


(2016) but still a great analysis.

The site is slow to load, probably from traffic, but it explicitly blocks the Internet Archive. Who does that?

  User-agent: ia_archiver
  Disallow: /


Scott has had some, shall we say, controversial posts in the past. I'd guess that he's hoping that if he ever posts something and then regrets it and retracts it a minute later, he might actually be able to remove it without it staying up forever.

At least, that's the only thing I could guess :/


What were those controversial posts?


If you email hn@ycombinator.com Subject: (2016) with a link to this comment, they'll fix it.


Thanks! We've updated the headline.


The title is misleading in the sense that the particular number depends on the model of TMs (one- or two-way infinite? what alphabet? is a left-move from the 0th cell a no-op or a halt? etc.)

The author kind of acknowledges this when he says: `Theoretical computer scientists might object that this is “merely a question of constants.”'. But then he brushes that aside with some irrelevant non-sequitur about the origin of life in our universe.

It's like if you found a zero-day exploit in Windows, typed up an implementation in QBasic, printed that out on size A4 paper (one-sided, double-spaced), and then held a press conference that "80 pages of code suffices to hijack Windows"


He also includes this paragraph, which gives some motivation for why one should care about this formalism in particular:

> Some people might wonder “why Turing machines,” as opposed to a more reasonable programming language like C or Python. Well, first of all, we needed a language that could address an unlimited amount of memory. Also, the BB function is traditionally defined in terms of Turing machines. But the most important issue is that we wanted there to be no suspicion whatsoever that our choice of programming language was artificially helping to make our machine small. And hopefully everyone can agree that one-tape, two-symbol Turing machines aren’t designed for anyone’s convenience!


A fun fact that may appear surprising at first is somewhat implicit in this last paragraph: C is not Turing-complete!

The reason is precisely that C cannot "address an unlimited amount of memory", even if an unlimited amount were available. For more information, see:

https://cs.stackexchange.com/questions/60965/is-c-actually-t...

This is in contrast to languages such as Lisp and Prolog: Conceptually, they can reason about arbitrarily large structures such as lists and terms, and can faithfully model any Turing machine.
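
A minimal illustration of where the bound comes from (my own sketch, not from the linked answer): pointer and size_t widths are fixed, finite quantities, so the number of distinct bytes the language can address is bounded no matter how much memory the hardware has.

  #include <stdio.h>
  #include <limits.h>
  #include <stdint.h>

  int main(void) {
      /* Every object you can point at must have an address representable
         in a pointer, and sizeof any object fits in size_t, so the
         addressable space the language can describe is finite. */
      printf("pointer width : %zu bits\n", (size_t)(CHAR_BIT * sizeof(void *)));
      printf("SIZE_MAX      : %ju\n", (uintmax_t)SIZE_MAX);
      return 0;
  }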


C can allocate memory whose address isn't taken, in the form of new stack frames. Local variables whose address is never taken as a pointer are not constrained to have an address which fits into a pointer.

That, by itself, though, gets us only as far as a push-down automaton (PDA), not a universal Turing machine.

If we regard the stack as the machine's tape, the problem is that the tape gets erased when the machine rewinds (terminations of scopes obliterate local variables).

The C language also describes an I/O facility: streams. Streams have no inherent bound on their length. If the functions `fgetpos` and `fsetpos` are not used, but only relative positioning, then the stdio stream gives us a tape that is infinite, in terms of language semantics.
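
A tiny sketch of that "stream as tape" idea (my own illustration; a real construction needs more care around EOF handling and extending the tape):

  #include <stdio.h>

  static int read_cell(FILE *tape) {
      int c = fgetc(tape);
      if (c == EOF) { c = 0; }                 /* blank cell past the end */
      else { fseek(tape, -1L, SEEK_CUR); }     /* put the head back on the cell */
      return c;
  }

  static void write_cell(FILE *tape, int sym) {
      fputc(sym, tape);
      fseek(tape, -1L, SEEK_CUR);              /* stay on the cell just written */
  }

  int main(void) {
      FILE *tape = tmpfile();                  /* binary update stream */
      if (!tape) return 1;
      write_cell(tape, 1);
      fseek(tape, 1L, SEEK_CUR);               /* move the head one cell right */
      write_cell(tape, 1);
      fseek(tape, -1L, SEEK_CUR);              /* move the head one cell left */
      printf("cell under head: %d\n", read_cell(tape));   /* prints 1 */
      return 0;
  }

Only relative seeks are used, matching the "no fgetpos/fsetpos" caveat above.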


This is all true, and a good summary of the situation. These points are also explained in more detail in the post I linked to.

If we use streams as an oracle, then we can represent an unbounded tape, and C together with such streams lets us model any Turing machine. However, this is not sufficient to prove that C is Turing-complete: to show that (in this context), one also has to express the semantics of such streams within C! It is at this point that the mentioned limitation manifests itself with this approach.


How is it not within C? FILE * streams are defined by the C language. When you write a byte into a file, and there is no external meddling, you can read the same byte back. A stream is just a language-defined dynamic data structure which retains the last value that was stored into it, like a list or hash table.

C also has declarations and for loops. If you use them to build your Universal Turing Machine, why doesn't the same objection apply to those features? I mean: "but you're not expressing declarations or for loops in C, you're just invoking these ready-made features by their convenient, ready-made syntax!"


These are all good points. However, there are now assumptions that do not follow from the C standard, such as that you can read the same byte back from a file after writing it. As the post I linked to mentions, you can indeed model an unbounded tape if you assume the existence of streams with such strong properties and rely on their semantics in your solution.

This assumes facilities that are not prescribed by the C standard though, such as a data structure whose semantics cannot be expressed in C in general. This is a rather striking difference to other constructs such as declarations and for loops: If pressed, we could express their semantics within C while relying exclusively on features that are prescribed by the standard.


C99, 7.19.2 Streams, ¶3: Data read in from a binary stream shall compare equal to the data that were earlier written out to that stream, under the same implementation. Such a stream may, however, have an implementation-defined number of null characters appended to the end of the stream.


Again a very good point! The key assumption that breaks down in general, that is, unless you assume an unbounded storage system, is that the data can be written at all. As also the post I linked to states: C together with an unbounded storage system is definitely Turing-complete!

How to specify such a system with C remains unsolved though, unless you assume its existence and availability via streams in the first place. This is in contrast to the other languages I mentioned, where you can specify the semantics of an unbounded storage system via built-in mechanisms and data structures of these languages.


> unless you assume its existence and availability via streams in the first place

That's given by a conforming, hosted implementation of ISO C (the usual kind). You seem to have been writing about freestanding implementations. We are then severely restricted; there might not be a malloc.

C99 4. Conformance ¶6: "The two forms of conforming implementation are hosted and freestanding. A conforming hosted implementation shall accept any strictly conforming program. A conforming freestanding implementation shall accept any strictly conforming program that does not use complex types and in which the use of the features specified in the library clause (clause 7) is confined to the contents of the standard headers <float.h>, <iso646.h>, <limits.h>, <stdarg.h>, <stdbool.h>, <stddef.h>, and <stdint.h>."

If we are talking hosted, then <stdio.h> streams have to be present as a required feature.

> you can specify the semantics of an unbounded storage system via built-in mechanisms and data structures of these languages.

Which language states that programs must be successfully processed by an implementation, regardless of the resources that they require? The next cons call in your Lisp image could fail. That's no different from a stream-related resource problem.

The thought experiment behind "can this programming language express Turing computation" requires us to imagine, for all languages, that the resource constraints don't exist. The question is whether there are some limitations in the language semantics which cannot be thus hand-waved away. Freestanding C (without streams) has those limitations in the abstract semantics; hosted C doesn't. The abstract semantics of streams is available, and if we imagine that to be free of platform-related resource limitation, then we are in the same league as the other languages that we have imagined to be free of platform-related resource limitations.


Wouldn't lambda calculus have been more tractable and at least as mathematically sound?

#deathtoturingmachines


Indeed; lambda calculus comes up several times in the comments below the post, e.g. in

https://www.scottaaronson.com/blog/?p=2725#comment-1084317

https://www.scottaaronson.com/blog/?p=2725#comment-1085117


I find it fascinating that he doesn’t feel like he understands lambda calculus. It’s so simple in comparison to what he’s doing, at least at the base level.


It is telling, though, that even someone like Aaronson, who presumably understands Kolmogorov complexity a thousand times better than I ever will, doesn't have a more rigorous way of establishing the "look, no fudging here" criterion.



