The very laws of physics imply that artificial intelligence must be possible (aeon.co)
23 points by gregschlom on Oct 1, 2014 | 4 comments



Seriously, how is this getting so few points? I would upvote a thousand times if I could.

Truly interesting read about the real 'Next Big Thing'. The future is not about someone making yet another cheap hosting service, another programming language, or another cat-website startup.

It's right here: in the ideas.

> just a single idea stands between us and the breakthrough

And yet, no one seems to care.


Conway's Law: prov.

    The rule that the organization of the software and the organization of the software team will be congruent; commonly stated as “If you have four groups working on a compiler, you'll get a 4-pass compiler”. The original statement was more general, “Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.” This first appeared in the April 1968 issue of Datamation. Compare SNAFU principle.

    The law was named after Melvin Conway, an early proto-hacker who wrote an assembler for the Burroughs 220 called SAVE. (The name ‘SAVE’ didn't stand for anything; it was just that you lost fewer card decks and listings because they all had SAVE written on them.) There is also Tom Cheatham's amendment of Conway's Law: “If a group of N persons implements a COBOL compiler, there will be N-1 passes. Someone in the group has to be the manager.”


> This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.

But here's the thing: arbitrarily fine detail may not be good enough. So much of physics ends up being chaotic, to the point where, well, "A consequence of sensitivity to initial conditions is that if we start with only a finite amount of information about the system (as is usually the case in practice), then beyond a certain time the system will no longer be predictable." [Quick quote from Wikipedia - it explains it better than I can.] The brute-force "just simulate everything" approach only works if you can get information about the system at a deep enough level to mitigate the sensitivity to initial conditions - and if there is no such "deep enough" level...
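
To make that concrete, here is a minimal sketch (my own toy example, not from the article or the Wikipedia quote) using the logistic map, a standard demonstration of chaos: two simulations that start a tiny distance apart become completely decorrelated after a few dozen steps, so any finite-precision snapshot of the initial state stops being predictive beyond some horizon.

    # Sensitivity to initial conditions with the logistic map (r = 4, chaotic regime).
    # Two trajectories start 1e-12 apart; the gap grows roughly geometrically,
    # and by around step 40-50 the two runs no longer agree at all.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    x, y = 0.3, 0.3 + 1e-12   # almost identical initial conditions
    for step in range(1, 61):
        x, y = logistic(x), logistic(y)
        if step % 10 == 0:
            print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")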


> Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

It was the journalists and futurists who were and are prophesying it. Nobody in the field said that; they all said the opposite: there is no way it's going to happen anytime soon.



