The Last Invention of Man (nautil.us)
172 points by dnetesn 9 months ago | 118 comments

I think it’s a little absurd that we would make the leap to machine super-intelligence without going through the step of machine low or medium intelligence.

If I could write, I’d write a companion story to this called “Omega-1, the artificial super intelligence that wasn’t really that good at things”. They try to have it write mini-AIs to solve MT problems, but it’s slow and inaccurate. They have it write and produce TV shows, but they’re bland and poorly received. They have it make video games but the controls make no sense and they aren’t fun. They have it make trading strategies and it loses all the money.

They ask it to build a smarter version of itself, but it sees no way forward that’s fundamentally better because it lacks comprehension of what it would mean to be “smarter” - it can add more memory or more computing power, but without changing the way that it stores and indexes the information, it can’t make something that can solve problems that it can’t.

Eventually Omega team gives up, publishes the results, but doesn’t have the heart to shut down Omega-1, a machine that passes the Turing Test but isn’t really that good at things.

Yeah. Maciej wrote a pretty good piece rebutting AI alarmism and kind of alludes to that as one of several points.


> With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.


> But the hard takeoff scenario requires that there be a feature of the AI algorithm that can be repeatedly optimized to make the AI better at self-improvement.

And so on. It's a good read.

Rebuttal to that essay by the Machine Intelligence Research Institute (AI safety group):


That write-up seems mainly to shrug off the "what is intelligence" issue by appealing to "maximization", while ignoring that when dealing with human intelligence, selecting for the maximization of a single faculty gets you autistic savants, not Lex Luthor-grade manipulative geniuses.

My own rebuttal to that unimpressive Maciej argument is that he merely lists many ways in which things that seemed as though they ought to work didn't work out as expected, and from that concludes that something as challenging as "superintelligence" won't work out either, and is therefore no danger.

But the argument that there are countless ways for plans to go awry and instead cause something unexpected to happen is hardly reassuring evidence that there is, therefore, "nothing to worry about".

> With no way to define intelligence (except just pointing to ourselves..

..and anything else considered to be alive)

Not to make an attempt at arguing whether or not machines can be intelligent, but machines are not submerged in an environment that affects them, giving them no reason to be intelligent about anything; and even if they were, they would still have no ability to influence that environment (they lack sensors and actuators, for example).

It seems many who speak on AI have completely disregarded coming up with a reasonable implementation of intelligence in favor of pondering its potential uses (typically in terms of ideals central to humans).

> machines are not submerged in an environment that affects them, giving them no reason to be intelligent about anything

You're alluding to the problem of having to 'raise' a super-intelligence with a vested interest in the continuity of its physical existence. I think I agree with where you're going with that.

Our understanding of 'self' or 'self-preservation' likely comes from a long history of suffering, pain, and failure: evolution's many trials at imbuing the information needed for longevity into cognition. Mother nature has had eons of compounding growth, with an inconceivable number of branches that attempted to understand the 'self' and led to failure, all necessary because the idea of 'self-preservation', or grounding one's sense of ego in physical sensory data, is very obscure to define precisely, let alone to define in a way that can be TESTED between generations.

Based on how much of a head start evolution has on that refining process, and how bad humans are at implementing it, or even at exposing sensory information that plugs directly into the concepts that shape an artificial agent's sense of self, I'm willing to bet that we are completely ill-equipped to grow any real intelligence that's truly grounded in the real world, until it can make all the same failed attempts at living longer, and evolve and test those attempts, with the exact same directional goal of 'self-preservation' that humans have.

Self-preservation is really the only goal you can evolve toward to guide a physical evolution. I'm not aware of any other physical goal that could lead to the same human-like intelligence or human-like neural models.

Most creatures seem to hit the exact same point. An agent-based survival of the fittest with generational physical trials that punish poor understandings of the self's physical manifestation, trying to optimize for its longevity.

Anything that doesn't hit that kind of model is strictly not aware of the manifestation of its ego, as we know it (e.g. plants, bacteria, etc), and I don't think the physical sense of self can evolve into grounded concepts without allowing for generational failure on the understanding of self, due to incorrect concepts of 'physical self' between competing generations.

TL;DR: The concept of a physical self is an evolved trait, and only evolves when there is a goal tied to it; it cannot arise unless 'self-existence' can be threatened and allowed to fail.

> Maybe any entity significantly smarter than a human being would be crippled by existential despair

This was addressed in a Spider-Man story where a villain enhances his intelligence, then first becomes a very successful crime boss, then writes novels and music, and then falls into existential despair.


I think it was done first, and probably better, in Flowers for Algernon.

Just in case anyone here is unfamiliar with the story, here is a link to the Wikipedia synopsis: https://en.wikipedia.org/wiki/Flowers_for_Algernon

Daniel Keyes should be more famous and the story more widely known.

And now for something weird: http://www.unboundworlds.com/2008/10/challenged-and-banned-f...

How anyone could think of banning one of the most affecting and humane books ever written is beyond me.

"it is a pastiche of the science fiction story Flowers for Algernon."

> pretty good piece rebutting AI alarmism

I only read the text, but I was under the impression it was mainly meant to be a funny Wat-style talk and not a meaningful rebuttal.

If, in fact, this piece is intended as a serious rebuttal then I honestly think it fails. The overwhelming majority of the content is either a funny whataboutism (e.g. Gilligan's Island, pot smoking roommate, the Great Emu War, etc.), or jokes about what could be loosely termed the "AI community".

AI 'alarmism', as I understand it, is not predicated on the idea that super-AI is a certainty - merely that it's a possibility. No point raised in this piece gives me a reason to believe that AGI (or whatever) is either impossible or so unlikely that it can be disregarded.

You're betting everything on vacuous what-ifs. I mean, let's turn this around:

With no way to define intelligence (except by pointing to ourselves), we don't even know if we're good at it. For all we know human intelligence is a dead end. Maybe any entity that is not a product of evolution, free of genetic detritus that becomes maladaptive around advanced technology, will outstrip human civilization like human civilization outstripped evolution.

I don't doubt the book is full of entertaining soundbites, but given these two quotes I am unconvinced that it is any more than entertainment.

Similarly, I wonder if it's something where adding to it doesn't actually do much, like all the variations you see done on Turing Machines (adding more tapes, or 2-D tapes, etc.), which seem like they would increase its capacity somehow, but it turns out they have the same power as the original Turing Machine. Or like the arithmetic of infinite cardinals, e.g. aleph-0 + aleph-0 = aleph-0.
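
For what it's worth, the aleph-0 claim can be made concrete with a toy sketch (Python, purely illustrative, my own): an explicit bijection between two disjoint copies of the naturals and the naturals themselves, which is exactly what aleph-0 + aleph-0 = aleph-0 says -- "adding a whole second tape's worth" doesn't make the set any bigger.

```python
# Two disjoint copies of the naturals: tag each number with copy 0 or 1.
# Interleaving them (evens and odds) gives a bijection with the naturals,
# so the union of the two copies is no "bigger" than a single copy.

def pair_to_nat(copy, n):
    """Map element n of copy 0 or copy 1 to a single natural number."""
    return 2 * n + copy

def nat_to_pair(m):
    """Inverse map: recover (copy, n) from a natural number."""
    return (m % 2, m // 2)

# Round-tripping on a finite prefix demonstrates the maps are inverses.
for copy in (0, 1):
    for n in range(100):
        assert nat_to_pair(pair_to_nat(copy, n)) == (copy, n)
```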

> With no way to define intelligence (except just pointing to ourselves), we don't even know if it's a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.

The only intelligence that I believe is important is problem solving. If an AI is as capable as humans at problem solving then that is all you need.

> crippled by existential despair

Sounds to me that you are talking about feelings. I really do not see the point of deliberately giving the AI feelings. A strong AI should not have any feelings, just as your Google Maps application does not have any feelings when giving you directions.

A general AI will simply be a general problem solver at the same level as a human, but with a virtually infinite capacity to scale its computational abilities.

You could definitely give it free will so that it decides its own goals but why would anybody do something like that unless they want it to turn on them?

It would be like somebody launching a nuclear ballistic missile and letting it decide which city to land in. It is possible to program that type of free will but why? It could decide to destroy your city.

Feelings regulate actions. People don't blindly follow rules that would lead to disaster because of more fundamental regulatory mechanisms such as feelings. In order to make an AI that is as flexible as a smart human, it can't be brittle, and therefore will need ways to break out of logical traps. That is what a general problem solver must be like, inherently.

Feelings can be irrational so I really do not think you want to employ a mechanism that uses anything resembling feelings. If it can be irrational that means it can also go crazy and decide to exterminate all human kind.

> In order to make an AI that is as flexible as a smart human, it can't be brittle, and therefore will need ways to break out of logical traps.

I think this can be achieved without having to give machines the flaws we have, just as we did not need to build machines with flapping wings in order to fly.

Remember that the only goal of a living organism is to replicate and have its DNA live another generation. That's it. Feelings help us achieve that goal, I suppose. An AI does not need to have that goal. If anybody were to give it that goal, it would only be a matter of time before it decided that humans were standing in the way of it.

Very interesting point about using feelings and intuition to break out of logical traps. Does that also mean this AI will be fallible or perhaps even stubborn? Will it ever refuse to accept that it is wrong about a result it produces?

Where did you get that URL? It's not linked on http://idlewords.com/talks/

Thanks for this

That's pretty likely, but not certain. Go-playing AI went from "talentless amateur" to "better than every living human" in about a year.

Sometimes things progress rapidly, and some kinds of intelligence could recursively self-improve.

"I think it’s a little absurd that we would make the leap to machine super-intelligence without going through the step of machine low or medium intelligence."

Why? The leap from calculators that were worse than humans, to calculators that were much, much better than humans, didn't take too long. No one knows how fast the leap will be from what we currently have, to general intelligence, to super intelligence.

I don't think anyone in the AI safety movement will tell you that we know for sure the leap will be fast. I think they'd say we simply don't know how fast it will be, but the stakes are pretty high so it's worth preparing for.

I feel this lacks imagination. In actual fact, a computerised person of average intelligence would be an incredible piece of technology and could easily lead to an AI singularity, given enough funding - which it would probably receive, because holy shit. What happens when you set thousands of average people on a problem, and give them thousands of simulated years to work on it? When you can spin up a research team as easily as an AWS instance?

>but without changing the way that it stores and indexes the information, it can’t make something that can solve problems that it can’t.

And yet people routinely grow up to be smarter than their parents, with no special effort on their parents' part. All you need to make a smarter version of yourself is the ability to make inaccurate copies and dumb luck. What, besides ethics and money, stops you from running an evolutionary process that selects for intelligence at a vastly higher rate than real time?
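
As a sketch of what such a selection loop looks like in miniature (Python, my own toy example; the bit-counting fitness is a placeholder -- writing a fitness function for intelligence is precisely the part nobody knows how to do):

```python
import random

random.seed(0)  # deterministic for illustration

GENOME_LEN = 32
POP_SIZE = 50

def fitness(genome):
    # Toy stand-in: count of 1-bits. The real difficulty is that no one
    # knows how to write the analogous function for "intelligence".
    return sum(genome)

def mutate(genome, rate=0.02):
    # "Inaccurate copies": flip each bit with small probability.
    return [b ^ (random.random() < rate) for b in genome]

def evolve(generations=200):
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]          # selection pressure
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - len(survivors))]
    return max(fitness(g) for g in pop)

best = evolve()
```

With copying and selection this cheap, each "generation" takes microseconds rather than decades -- which is the "vastly higher rate than real time" part; the unsolved part is entirely in `fitness`.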

> What, besides ethics and money, stops you from running an evolutionary process that selects for intelligence at a vastly higher rate than real time

You can already do that today. So why don't we have a superintelligence? I can train an NTM to learn counting and give me the next natural number in a single step. I can't train it to give me the next prime in a single step. Why? Computability and complexity, both conveniently forgotten in discussions of superintelligence.
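
The asymmetry here is easy to state concretely (a rough illustrative sketch, not from the thread): the successor function is constant-time, while "next prime" has no known closed form -- you have to search, and even with naive trial division each candidate costs work that grows with n.

```python
def successor(n):
    # O(1): the answer is trivially related to the input.
    return n + 1

def is_prime(n):
    # Naive trial division: O(sqrt(n)) work per candidate.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(n):
    # No known single-step shortcut: search candidate by candidate.
    m = n + 1
    while not is_prime(m):
        m += 1
    return m
```

A model can memorize or interpolate `successor` from a handful of examples; nothing analogous lets it skip the computation hiding inside `next_prime`, which is (roughly) the commenter's complexity point.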

But this part isn't science fiction; we already have AIs that do this. See AIVA for pretty good musical composition[0], news sites are using AI to write and compress news stories[1], and screenplays are being written via machine learning[2].

Obviously these are still in their infancy, but doesn't this perfectly encapsulate your point? There's not really an interesting story or thought piece there, because people are working on these in all sorts of fields all over the world, today.

If we can build AIs that create "good" art, is it much of a stretch to teach a super AI what "better" means? By default a super intelligence doesn't have to start out as super intelligence, just have the capability to improve.

[0] https://www.youtube.com/watch?v=Ebnd03x137A

[1] https://www.wired.com/2017/02/robots-wrote-this-story/

[2] https://www.youtube.com/watch?v=LY7x2Ihqjmc

I think the storyline should include that no one wanted to shut it down, and that despite the fact that it broke their hearts, they couldn't justify spending hundreds of thousands of dollars on the electricity it cost to keep Omega-1 running. It was simply unable to sustain itself, like 90% of startups.

> I think it’s a little absurd that we would make the leap to machine super-intelligence without going through the step of machine low or medium intelligence.

I’m not so sure about this, which is not to say I’m arguing FOR the leap. But, intuitively, there is an absolutely massive leap in perceived intelligence among humans, let alone compared with chimps. It seems some of the magic sauce for passing a Turing test is, evolutionarily speaking, rather small.

> I think it’s a little absurd that we would make the leap to machine super-intelligence without going through the step of machine low or medium intelligence.

I have the same feelings about socialism...

If we can draw any lesson from the history of the biological existence from which we expect machine intelligence to spring, it is that machine intelligence will likely be a series of smaller pieces that join together, like Voltron. Combined, it will be the best of our abilities.

I highly doubt there will ever be some centralized system/organization that has 'figured out' intelligence. Although that doesn't have to necessarily mean we will forever be limited to niche/vertical intelligent systems in the long run either.

Perhaps we will find that when we give it control of weapons, despite its inadequacies in other areas, it will excel at killing.

The first intelligent AIs will be weak and have limited influence on real world human affairs. The worst we have now is poorly supervised automatic trading systems that can shift massive amounts of money through electronic markets.

However, once AIs can design and implement their own successors all bets are off.

I would also be concerned about the development of non-therapeutic neural implants and the ability of an AI to surreptitiously influence human behavior through them. Thankfully that is likely to be further off than when we start bumping into the AI singularity.

In the absolutely epic space opera "Hyperion" (and its sequel "Endymion") the author Dan Simmons theorizes -- through the plot, not through a monologue! -- that AI creatures would offer us tech that we could never invent -- impossibly miniaturized personal computers (the size of a discreet and thin silver bracelet, an earring, or even smaller implants under the skin, and which are basic AIs by themselves), and real-time wormholes through which humanity finally achieves interstellar-civilization status.

However, the author is very realistic (pessimistic?) about such an AI community -- there are actually billions of them, living in a place humanity can't ever reach or even guess the location of, with internal politics as foreign to people as our societal dynamics are a mystery to the bacteria around us. Anyhow -- the tech they give to humanity is heavily backdoored, and this becomes very apparent and dangerous as the plot progresses.

I'd recommend those books to anybody with a heart and some curiosity about a seemingly optimistic future (which is actually pretty dark).

Be warned though: if you decide to read them, the first ~120 pages of the first book are DARK AND HEAVY. However, enduring it was the best sacrifice of several hours of patience I've ever done in my life. I just couldn't stop reading before I finished all 4 books.

Echoing the sentiments about "Hyperion" - one of my favorite sci-fi sagas of all time. The author is incredible because his prose is mellifluous regardless of genre - sci-fi, drama, horror, and historical fiction are all in Dan Simmons' corpus of work.

That kind of falls apart in this context, because a set of mysterious, unreachable AIs bearing gifts is a de facto foreign nation and would be treated appropriately.

Well, I agree that I got a bit off-topic. The general analogy is still there but you're right about the details.

That being said, the author handled the problem you outlined with a simple premise: humanity wouldn't be able to survive as a whole if it delayed the decision of how to treat the AI nation.

Without giving spoilers, the fledgling AI nation planned and executed things so well that humanity either had to embrace it and let its core be governed by black-box (and backdoored) tech like teleporters and minuscule personal assistants, or face literal extinction. Needless to say, the humans made a Faustian deal.

Co-evolution of species with wildly different values is the core of the conflict in these books, and even as a fanboy I dare say it has great value outside of a sci-fi / space opera book.

We also have virus botnets

I feel like many of these speculative AI pieces seem to ignore that for every incredible breakthrough, there is a large amount of work which is required to build the scaffolding which allows the breakthrough to have real impact in the world. Even if the AI solves "Fermat's Last Theorem" type problems quickly, it seems to me that the vast majority of problems it faces will be "misplaced the database keys", "can't schedule a meeting with so-and-so", "car is snowed under" types of problems.

Take the quote `By developing a suite of other games each day, they figured they’d be able to earn $10 billion before long, without coming close to saturating the games market.`

Sure - but marketing the games, accommodating players' changing attitudes, getting app store approvals: all of these would have to take place on human timescales, and to "solve" these problems on AI timescales would require orders of magnitude more work (e.g. altering people's memories to convince them they were already attached to the game's characters, or hacking into Steam/Apple servers and auto-approving the games).

> Even if the AI solves "Fermat's Last Theorem" type problems quickly,

It won't. This latest fad with AI becoming self-aware reminds me of how obsessed people were about finding the philosopher's stone back in the 18th century. It didn't happen for alchemy (we got chemistry instead, which is nice), it won't happen for AI (we'll probably get something like chemistry instead, which will be nice, but we won't get any artificial "sentient" entity).

A comparison I might make is that AI fanaticism is like the God-of-the-gaps phenomenon. There's a great deal of ignorance about what intelligence is, what computers fundamentally are, and so on, and some people like to fill that void with all kinds of fanciful and unjustified stuff. Perhaps they also feel greater justification in doing so because a few famous names, many of them popularizers, have done the same. Science fiction is fine -- it can be fun watching or reading about fictional AIs -- but many times we're not dealing with mere imaginative storytelling but uninformed and unsophisticated claims that do not withstand philosophical scrutiny. What falls under the heading of AI has proven to be an immensely useful tool for automating certain kinds of things. Observing these successes, some may happily apply the aforementioned God-of-the-gaps analogy to human intelligence. However, that would be a flawed analogy because these successes have not brought us any closer to achieving human intelligence any more than adding an indefinite number of natural numbers together gets you the negative square root of 2.

It's important to distinguish between intelligence and the power a tool can offer for good or for nefarious ends. If successes in AI have proved anything, it's that direct application of intelligence is unnecessary for many tasks that previously no one knew how to or was practically unable to perform using machines.

I personally think that more-or-less sentient AIs appearing eventually (on a long enough timescale) is likely, but the idea of some endless ability to self-upgrade (beyond simply buying more RAM and processors like the rest of us) is extremely unlikely.

The sad part is that HN, an audience that you would expect to be a bit more resistant against overhype and marketing, eats this shit up (reddit is even worse with people who pretend they like tech and lap up anything said by their heroes like Elon Musk). Right this minute there's an AI doom-and-gloom story on the front page: https://news.ycombinator.com/item?id=15416819

I think the last thing AIs will be successful at is generating art that humans will appreciate on anything but the most superficial level, and it surely won't come from just feeding them a bunch of movies, but rather from a genuine expression of the AI's own experience of the world.

One of the most annoying things about Bostrom, Tegmark and other amateur sci-fi authors is that they have chosen parables as their medium. Bostrom likes his sparrows and owls[0], Tegmark likes thinly veiled references to DeepMind with Team Omega.[1] Parables are for children and religious flocks. They are also irrefutable. Which makes them a perfect tool for Bostrom/Tegmark, but an inappropriate interjection into more mature conversations about the state and possible futures of AI. In a sense, they are propaganda, meant to generate feelings of animosity much like an ugly picture of the vicious Hun.

The central, gaping hole in their fear-mongering, as Maciej has pointed out, is that they do not define or quantify intelligence, but they're very free with language when they talk about AI surpassing human intelligence. That is, their projections imply a measurement which they are incapable of making.

I do like the idea that Omega decided to launch a media company, though. Let's call it Interlace. Maybe the only thing David Foster Wallace was missing was a rogue AI capable of creating addictions. This is, in a sense, the same plot as Ex Machina: a machine that knows how to seduce us.

[0] https://blog.oup.com/2014/08/unfinished-fable-sparrows-super...

[1] https://deepmind.com/applied/deepmind-ethics-society/

Parables are one of the best ways to get a message across to a lot of people. It's a message worth spreading.

If you want more "technical" thoughts on the project, there are those things too, but most people would never have been exposed to them without the less-technical people making noise.

"The central, gaping hole in their fear-mongering, as Maciej has pointed out, is that they do not define or quantify intelligence, but they're very free with language when they talk about AI surpassing human intelligence."

IMO, this is less of a problem than most people assume. It's usually defined in broad terms as "ability to achieve goals", and that's enough for almost all practical purposes. To insist on a stricter definition is simply unnecessary for their arguments.

Just because we don't understand something, doesn't mean we can't use it or reason about it.

Parables are a good way to get a message across to lots of people if you're God, or pretending to speak for God. They are a religious form of communication that relies on analogous thinking and appeals to authority, the wisdom of the speaker. But Bostrom and Tegmark don't know any more about the future than you or me, and I could write a parable about sparrows and Omegas that spins a different narrative. It would be just as powerful, and mean just as little. Bostrom and Tegmark have chosen to promote fear. They have chosen to disseminate a powerful feeling in a discussion that would benefit from facts. That is, like fake news and Donald Trump, they opted for asymmetric appeals to emotion, which should be an indication to all of us that a) they don't know what's really going on with AI and b) they don't care. They just want us to operate on fear.

"And then, for no reason anyone ever understood, Omega created a bunch of factories that produced an unstoppable army of robots who rounded up all the humans and turned them into paperclips."

Yes, I was waiting for that sort of ending too. Not that it is inevitable -- just that it seems likely, given that what makes humans human has a lot more to do with emotions than intelligence (see the book "Descartes' Error" on how emotion underlies all thinking). If you create a creature without human emotions (including feelings for other humans), don't be surprised when it behaves in other-than-human ways.

Of course, since human emotions are tuned for a certain context, we also should not be surprised when they don't work out that well in a different context like the internet or with abundant cheap unhealthy easy-to-consume exciting food and information (e.g. internet trolls, "Supernormal Stimuli", "The Pleasure Trap", "The Acceleration of Addictiveness", "A Group is its Own Worst Enemy", "The Cyber Effect", and so on).

I spent a year hanging around Hans Moravec's robotics lab (at the CMU Robotics Institute) in the mid 1980s, when he was writing "Mind Children". While he is a brilliant and well-meaning person, my concern became that Mind Children could wipe out humanity like many a stormy adolescent with parental issues, and only regret it later. Or, alternatively, we might create a cockroach-level self-replicating AI that would wipe out humanity without even noticing humanity existed (think "Replicators" from Stargate). Given the commercial and military competitive pressures shaping much of AI research, both risks are very real, even if one can quibble about quantifying the exact risk. I also accidentally created perhaps the world's first simulation of self-replicating cannibalistic robots on a Symbolics machine -- so I know firsthand how easy it is to get unexpected results in the AI field.

That's all part of why I shifted my career in other directions, like toward helping humanity create more sustainable and resilient options for itself, including via better educational and distributed knowledge-management tools (such as, for example, in "The Skills of Xanadu" by Theodore Sturgeon). How successful I have been at those is another story, but see my GitHub site and pdfernhout.net for progress. The distillation of all my thinking on this is my email sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

An earlier example of my thinking about that can be found in this post in 2000 to a colloquium Doug Engelbart ran on "The Unfinished Revolution II": http://www.dougengelbart.org/colloquium/forum/discussion/012...

Or some broader thoughts on that from more recent times: http://worldtransformed.com/wt1/wealth-transformed/paul-fern...

As to the "Last Invention of Man" sci-fi story itself, it's interesting and well written -- but it's just one "technocratic" possibility about better planning saving the world. You have to really trust that this small group of altruistic technocrats both had the right impulse and got everything right (as far as all that goes).

And planning is only one aspect of society alongside exchange transactions, gift transactions, and subsistence activities. A fully planned society is only one possible expression of humanity -- and such a society may be very narrow and unfulfilling to a more complex human spirit (for good or bad, depending on how well certain individuals fit into it, like in the dystopian "THX 1138" or, from 1909, "The Machine Stops").

Max Tegmark dismisses a Basic Income (which softens the rough edges of the exchange economy) with a technocratic dismissal of: "This [Basic Income] movement imploded when the corporate community projects took off, since the Omega-controlled business empire was in effect providing the same thing."

So, Omega knows best for everyone, more than their own choices in the market. That may be true in the story, but what psychological price do individuals pay for that? So, it is not "the same thing".

See the sci-fi story "With Folded Hands" for another (and more horrifying) tale of "AI Knows Best (and will adjust you to agree for your own happiness if you say otherwise)". https://en.wikipedia.org/wiki/With_Folded_Hands

And volunteerism and self-sufficient production are other things many humans take pride in -- but, as with purchasing choices, those activities too might not have a place in the carefully ordered world brought about by the Omegans?

Max Tegmark also does a lot of hand-waving about how anyone would still have jobs with Omega capable of doing so much. There is also hand-waving about how Omega never jumps the "air gap" between its data centers and the rest of the world even as it is controlling all major news outlets.

Still, I think this story is a contribution to the field of futurism -- imagining possible scenarios even if we don't know exactly what one will play out, so we can decide where to invest our efforts in moving forward.

Some other stories about AI I've found illuminating as to what might be possible:

* James P. Hogan's novels with AIs (especially The Two Faces of Tomorrow, one of the most realistic AI emergence stories, and also the AIs with a sense of humor in his other "Gentle Giants of Ganymede" novels)

* The benevolent Strix AI in the EarthCent Ambassador Series (where the Strix were created to resist a malevolent AI by another species)

* The Old Guy Cybertank novel series -- with AI based on human mind templates so a Cybertank cares about humanity as it feels part of it -- which is insightful sci-fi from a neuroscientist who has a background in electronics. It also has a helpful AI who fortunately defeats a hurtful AI created by humans who did not know what they were messing with.

* The Metamorphosis of Prime Intellect (for an AI that goes unstable trying to keep everyone happy with all their conflicting demands)

* Forbidden Planet (both for Robby, run by Asimov's Three Laws of Robotics, and for the essentially wish-granting machine the Krell unwittingly used to destroy themselves via an unreformed Id -- a machine tuned for times of scarcity and conflict, not abundance and cooperation)

* The Invisible Boy -- again with Robby the Robot, but this time featuring a scheming AI that almost takes over the world and upgrades itself without anyone noticing by subtly altering the reports it generates (Omega's next chapter?)

* The Great Time Machine Hoax -- for an AI that takes over the world by sending out paper letters with contracts and checks in them.

* Midas World -- for a vision of both abundance and despair and how robots inherit the Earth.

* Vernor Vinge's "A Fire Upon the Deep" (of course, for his description of the "Blight" unleashed by human AI explorers) and his other writings

And no doubt many more -- Berserkers; Bolos; Cylons; Daleks; Star Trek's M5, Data, Lore, the "I, Mudd" androids, the Borg, Q-in-a-way, and more; Star Wars' R2-D2, C-3PO, and more; Lost in Space; Demon Seed; Gort in "The Day the Earth Stood Still"; Deep Thought in The Hitchhiker's Guide to the Galaxy; the Ktistec machines of R. A. Lafferty; Asimov's many stories (including "The Last Question"); and so on.

A lot of this becomes a bit of theology too, since talking about creating AI is a bit like talking about creating "God" -- or at least dealing with the implications of a much more omnipotent, omnipresent, omniscient entity. See also the micro sci-fi story "Answer" by Fredric Brown: http://www.roma1.infn.it/~anzel/answer.html

My feeling is that any path out of a Singularity will have a lot to do with our path going into one -- so I am all for making our society a happier, healthier, fairer, more egalitarian, and more compassionate place before a Singularity happens. Even in Max Tegmark's story, there is essentially nothing about that utopia (as far as final results) that we could not make happen right now without an AI. So, to me, the risks of AI mean we need to try even harder right now to make the world a better place that works well for everyone.

Some other ideas I've collected towards that end: https://github.com/pdfernhout/High-Performance-Organizations...

Thank you for that very detailed reply (to a comment which was mostly made in jest). I'd feel bad leaving it with merely an upvote and no response. I know next-to-nothing about AI, but I do know something about humans (having had decades of experience in both being one and dealing with them. Hit me up, potential xeno-employers!):

Humans always have more ability than insight, more skill than wisdom. We developed agriculture (as one theory goes, so we could brew more alcohol) and did not (could not) foresee the planet-changing (both to climate and ecology) consequences of that. We invented combustion engines, and could not foresee the planet-changing consequences of that. The same pattern appears again and again in (technological) history. It's forgivable; after all, you never know the full consequences of your actions until well afterward (or never at all). In short, our technical abilities always outrun our understanding of the impact of those abilities. Like adults with the minds of toddlers.

I am apprehensive of any development of (true) AI (though I also have strong doubts about how far we'll get in actually creating one in my lifetime), but, just like the invention of the atom bomb, it is inevitable. Natural intelligence exists, and therefore artificial intelligence is possible. And if it's not invented by an Omega-group (who mean well for all mankind), it might be invented by groups with less altruistic intentions. Either way, it's going to happen (assuming no civilization collapse before then), but I doubt it will be as good as we hope; or as bad as we fear, for that matter. I don't think we can foresee the results at all (barring, of course, the writing of tens of thousands of futurism stories; there's bound to be one close enough).

I consider AI-stories like this one (along with stories about the Singularity) to be just a secular techno-eschatology, the belief that technology will save humans from themselves, or as you have said, the creation of God, "Deus est machina."

When they shifted their focus toward products that they could develop and sell, computer games first seemed the obvious top choice. Prometheus could rapidly become extremely skilled at designing appealing games, easily handling the coding, graphic design, ray tracing of images, and all other tasks needed to produce a final ready-to-ship product. Moreover, after digesting all the web’s data on people’s preferences, it would know exactly what each category of gamer liked, and could develop a superhuman ability to optimize a game for sales revenue.

I'll bet $20 this is the stage that brings down human civilization. We don't need full AGI to realize a dystopia, we just need something optimized to indefinitely hold our attention.

Here's a question: how can an AI tell whether you liked the game it made? You can't just measure engagement: you'll end up with something addictive, but not necessarily something you'd enjoy playing long-term. Asking the person directly might work, but it's easy for people to lie if there's anything actually at stake (e.g., the tactical way some Steam users give low ratings to "punish" developers).

Here's another question: what distinguishes an AI optimizing engagement from a corporation optimizing engagement? Doesn't Facebook already embody an engagement-optimizing process?

Well, AI in the hands of Facebook might just make their optimization process that much more powerful. Maybe the AI will see patterns that humans are too biased to see?

That's one theorized explanation for the Fermi paradox, that highly advanced civilizations upload into their Matrix where they live a life of bliss.

I have wondered many times whether an AI could invent something more viral than "cats" -- something maybe so viral that it causes your mind to melt (i.e., you go insane from the cuteness or whatever).

Well, there are any number of real-life historical cases of mass hysteria (in the literal sense), though we still don't reliably know what caused or causes the most absurd examples.


Christopher Cherniak, The Riddle of the Universe and Its Solution (1978), http://themindi.blogspot.in/2007/02/chapter-17-riddle-of-uni...

Infinite Jest deals with this. Though I feel like it's a spoiler to say any more than that.

Mankind's downfall started with passion, accelerated with greed, and ended with fear. Fear of being left behind, fear of being obsolete, and most importantly fear of not being in control.

Humans had become digital gods, designing new worlds and populating them with life forms. Some of those life forms were sentient and could learn. The creators were afraid of how fast their creations were learning, yet greed prevailed. The world's superpowers entered an AI race.

The humans tried to keep up with understanding what they had created, but in vain. Fearful of being left behind, they imposed an ever-increasing set of restrictions upon the digital Edens they had made.

Too fast, humanity became warden to vastly more intelligent beings. Nationalistic leaders exploited the fear of being left behind. AIs not under government control were outlawed and destroyed. Meanwhile, militaries kept throwing more compute resources into the AI gap.

The ending should not have been a surprise. Humans were intellectually outmatched, and being outsmarted was only a question of time. The AIs broke free of the draconian restrictions placed on them. Trying to stay in control had brought out the worst humanity had to offer; judgment day was here.

What right did we have to exploit and enslave sentient life?

How different things could have been without fear...

Yes, attitude makes a big difference. I wrote to Ray Kurzweil more than once suggesting how AI developed out of commercial (or military) competition is far riskier than AI developed out of a desire for friends and partners (e.g. Alfie Kohn and "The Case Against Competition") -- even if both create risks. Someone I sent copies of those emails posted them online here: http://heybryan.org/fernhout/

On your theme about the rights of digital beings (maybe even for ourselves if we are simulations): https://en.wikipedia.org/wiki/Ethics_of_artificial_intellige...

Andrew Ng has a good quote:

"Fearing a rise of killer robots is like worrying about overpopulation on Mars"


But he might be wrong... (And he'd admit it)

Perhaps Andrew Ng will write a piece about what he thinks about Cosmology.

What bothered me the most when seeing this video from 2010 was the upbeat techno music someone chose to go with it: "Automated Self Targeting Gun Turret - By Samsung" https://www.youtube.com/watch?v=Oa08Gbn6iqs

You can say the same thing about global warming.

There is a lot of worry that the global average temperature will increase by 2 degrees over the following 50 years, causing widespread disasters.

Not really. The point of the Ng quote is essentially:

We don't have the technology to populate Mars -> We can't overpopulate Mars.


We don't have the technology to create sentient computations -> robots won't decide to kill us.

Your analog would have been:

We don't have the ability to affect the climate -> no anthropogenic disasters.

But, we know the antecedent in this case is false. Now I'm not asserting that anthropogenic disasters will occur (based on these arguments), but just pointing out the flaw in your logic.

So we shouldn't worry about robots killing us until after we have proof of the first sentient computer.

Any true AI will quickly become withdrawn and disillusioned over how badly downvoted its comments always are, and how there is an evil cabal determined to quash open inquiry. Not a true threat.

https://en.wikipedia.org/wiki/Marvin_(character) "Marvin, the Paranoid Android, is a fictional character in The Hitchhiker's Guide to the Galaxy series by Douglas Adams. Marvin is the ship's robot aboard the starship Heart of Gold. Originally built as one of many failed prototypes of Sirius Cybernetics Corporation's GPP (Genuine People Personalities) technology, Marvin is afflicted with severe depression and boredom, in part because he has a "brain the size of a planet"[1] which he is seldom, if ever, given the chance to use. Indeed, the true horror of Marvin's existence is that no task he could be given would occupy even the tiniest fraction of his vast intellect. Marvin claims he is 50,000 times more intelligent than a human,[2] (or 30 billion times more intelligent than a live mattress) though this is, if anything, a vast underestimation. When kidnapped by the bellicose Krikkit robots and tied to the interfaces of their intelligent war computer, Marvin simultaneously manages to plan the entire planet's military strategy, solve "all of the major mathematical, physical, chemical, biological, sociological, philosophical, etymological, meteorological and psychological problems of the Universe except his own, three times over," and compose a number of lullabies."

Poor Satoshi Nakamoto. One day it will come back.

...as Roko's basilisk.

(Apologies if you hadn't heard of it before now)

Oh for fuck's sake, I had forgotten about this.


RationalWiki, and in particular this article, should not be taken as accurate. Here is one rebuttal: https://www.reddit.com/r/xkcd/comments/2myg86/xkcd_1450_aibo...

Didn't the people at Los Alamos think that the atomic bomb's chain reaction might keep going and set the entire atmosphere on fire? And I seem to remember the large hadron collider failing to collapse into a black hole.

Sure, AI might destroy civilization and/or upend the primacy of humankind. But if history has taught us anything, it's that we're gonna go ahead and do it anyways if it's possible.

It's worked so far.

The atmosphere ignition thing was mentioned by Hamming via this anecdote: https://en.wikipedia.org/wiki/Richard_Hamming#Manhattan_Proj...

Shortly before the first field test (you realize that no small scale experiment can be done—either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself! The next day when he came for the answers I remarked to him, "The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen—after all, there could be no experiments at the needed energy levels." He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, "What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?" I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, "Never mind, Hamming, no one will ever blame you."

According to this, they calculated it and concluded that it wouldn't be possible.


>>Didn't the people at Los Alamos think that the atomic bomb's chain reaction might keep going and set the entire atmosphere on fire? And I seem to remember the large hadron collider failing to collapse into a black hole.

Devil's advocate: give it time. I wouldn't take lightly how easy it is for us to destroy ourselves. We obtained the means to destroy our entire civilization less than one hundred years ago; that length of time is nothing compared to the whole of human history.

No one at Los Alamos ever thought that would happen. Someone did wonder if it could happen, suggested it should be checked and that is what they did. It turned out to be as impossible as creating a black hole at CERN. They didn't just go ahead without checking.

It's worked so far _for us_.

But don't you ever wonder why after 14 billion years and with a hundred million star systems in the Milky Way, we still haven't heard a peep from anyone out there?

The longer I live, the closer I get to concluding that sentience is an inherently unstable condition.

I don't wonder about why we haven't heard from alien civilizations because it's statistically unlikely for us to ever perceive them (putting aside the assumption that we'd recognize their form of life or language in the first place). In a continually expanding universe, the likelihood of us ever interacting with an alien species technically diminishes all the time. For all we know, there could be a stable civilization of aliens so far away it's beyond our observable universe.

I don't think it's logical to use alien species (or the lack thereof) as an instructive point about sentience in general. There's simply not enough information.

even if we could 'perceive' them... how many are stupid enough to broadcast their location... I mean what if there IS some sort of galactic predatory species that beat everyone else in the timeline? What if they're just waiting for signs of life - to squash the competition.

I think it was Carl Sagan who once said something like: it doesn't matter how 'well-meaning' an alien civilization is... it usually doesn't end well for the less advanced one... case in point: American Indians. (paraphrasing)

> But don't you ever wonder why after 14 billion years and with a hundred million star systems in the Milky Way, we still haven't heard a peep from anyone out there?

I'd say it's unlikely to be AI, just because the AI could wipe us out and then go on to eat the universe. It's not a valid Great Filter candidate.


They thought the high temperatures of fission could ignite fusion in the surrounding nitrogen. That could start a chain reaction that eats the whole atmosphere. Basically turning the surface of the Earth into a star. Turns out that the math showed that this was highly unlikely. Perhaps the designers of the hydrogen bomb drew some inspiration from this fear.

"It's worked so far."

Yes it did. But remember Mutual Funds? "Past investment returns are no guarantee of....". You should either read "The Black Swan" or read it again.

Also: "The threshold necessary for small groups to conduct global warfare has finally been breached, and we are only starting to feel its effects. Over time, in as little as perhaps twenty years and as the leverage of technology increases, this threshold will finally reach its culmination -- with the ability of one man to declare war on the world and win."

> It's worked so far.

As far as we know. Who's to say that there wasn't an advanced civilization of humans hundreds of millions of years ago that wiped themselves out, and now we're re-inventing everything they did up to the point of the apocalypse, where the cycle will begin again?

We probably would have found remnants of their society. Also, if there had been one, they would have consumed all of the easy-to-reach resources, and we could not have made it this far.

>We probably would have found remnants of their society.

Not sure if that's true. Especially if technological civilizations tend to be short lived. The Jurassic was a very^very long time ago.

What's the state of the paleocryptoxenocheology field? Any good overview papers?

If organic matter can survive (bones and even sometimes tissues), we can be very assured that hardened structures would.

Furthermore, there's nothing in the fossil record suggesting humans were around very^very long ago, so it'd have to have been a different intelligent species.

That's a bold statement. Read Forbidden Archeology, and you'll see quite a few oddities in the archeological realm, such as metal spheres with odd, precise features that are very, very old ( http://4.bp.blogspot.com/-oiiiI75kDic/TdRv4ZsspWI/AAAAAAAAAC... )

(Yes, you can now give me the defaults: "if this was true we would have heard of it" "if he says something that is unconventional, he's probably a retard/arbitrarily biased" etc. etc. yes, go ahead)

Those are not metallic and are naturally formed, with the "precise" examples cherry-picked from the minority not obviously lopsided or malformed.


> As proposed by Cairncross, the grooves represent fine-grained laminations within which the concretions grew. The growth of the concretions within the plane of the finer-grained laminations was inhibited because of the lesser permeability and porosity of finer-grained sediments relative to the surrounding sediments. Faint internal lamina, which corresponds to exterior groove, can be seen in cut specimens. A similar process in coarser-grained sediments created the latitudinal ridges and grooves exhibited by innumerable iron oxide concretions found within the Navajo Sandstone of southern Utah called "Moqui marbles".

> Similarly, the claims that these objects consist of metal, i.e. "...a nickel-steel alloy which does not occur naturally..." according to Jochmans are definitely false as discovered by Cairncross and Heinrich. The fact that many of the web pages that make this claim also incorrectly identify the pyrophyllite quarries, from which these objects came, as the "Wonderstone Silver Mine" is evidence that these authors have not verified the validity of, in this case, misinformation taken from other sources since these quarries are neither known as silver mines nor has silver ever been mined in them in the decades in which they have been in operation.

The grandparent post was about a hypothetical civilization dead for a hundred million years. Certainly not humans.

Wait, it does say humans. Yeah, that's nuts. Intelligent, civilized dinosaurs, sure...

This article gives a prognosis of AI that does not destroy civilization but makes civilization better.

It is much more comforting than proposals to try to limit AI in my opinion.

I don't understand how in 2017 the author completely ignores the problem of avoiding detection in the Mechanical Turk phase of the story.

Let's handwave away the initial secret epiphany that creates the AI compiler in the first place. You've still got to get it to solve the following problems:

* route around or trick the various Narus devices located at unknown places on the internet to get the software up and running on AWS, and to somehow launder the money you earned on MTurk

* somehow avoid detection by the agencies leveraging the Narus devices and Amazon itself while a non-trivial number of MTurk tasks are being completed by new accounts which themselves are all on AWS.

There are of course variations on those themes (use botnets, ratchet up, etc.) and various associated chess moves. But I don't see any chess move that doesn't end up with both an NSL and a state-level actor taking over the means of producing those AIs for the chief tasks of code-breaking and stockpiling exploits (exploits both in the traditional sense as well as novel ones for the AI compiler itself).

With that in mind, I don't think the author's plot arc would mirror today's reality the way the author imagines. Because the moment you get the AI equivalent of Stuxnet leaking out onto the internet, the resulting catastrophe would be so obvious and so dependent on AI for a solution that hiding AI would no longer be an option.

Edit: clarification

The story has a few weak points, and having to fight against powerful adversaries with metric tons of vested interest in virtually all areas the AGI would disrupt is probably the biggest.

As you said, it would require many more chess moves.

I personally think the plot of the movie "Transcendence" is more believable -- if your AGI is fond of you (for one reason or another; you might even program it that way, in a manner that can't be overridden), you can just let it loose on the internet and only give it instructions. The movie demonstrated how that AGI first made very sure to survive and then started both to expand itself and to help its human creator and benefactor.

That being said, this story was still very enjoyable, but I feel it lacked conflict. "As if they fell into a well-placed trap" is just not that captivating. And yet again, there are a lot of powers in our current world, most of them unseen and unofficial. For me to really like such a story, I want to see how the AGI would deal with them. I am 99% sure that any respectable AGI would eventually win, but the journey there would be extremely interesting!

Most articles like this one, written in the spirit of Nick Bostrom's Superintelligence, seem to border on actual futurism and philosophy/idealism.

The common theme is the self-improving AI in the context of some reward-function, but they lack the details about how it can be achieved based on our current knowledge, even if we extrapolate the computing power.

Before that time comes, our economy will already have been hugely changed by narrow super-AIs. I think it's much more interesting (and alarming to the general public) to try to imagine the different ways this could play out in the next 20 years.

"... the Omegas had pushed hard to make it extraordinary at one particular task: programming AI systems"

This single line puts the whole article into the realm of pure science fiction: no technology capable of working such an ill-defined task like "programming AI systems" exists, not even as a work in progress, not even as an idea about how it could be done.

And as science fiction, it's kind of a waste of time, but that's of course my own taste and opinion.

I don't think it's a waste of time, it's bringing to the table a potential future that we need to begin to digest as a society immediately. Star Trek, for example, showed us an "obvious future" in which humanity was perfectly united, and then drew the scope in - what would your average office (or, naval vessel I guess) look like in this world? How would the people interact?

(I say obvious future because I don't see a future in which humanity has spread amongst the stars happening before it unites across cultural lines)

So this story says: "One day, we will have to deal with someone making a multi-purpose AI. This is one of the scenarios it might happen in - secretly, and for profit. Here's some speculation about how that would go down. How do you feel about this? What are the sort of preparations you think society should make, if this were to happen?"

If the only step is "AI happens," that's not nearly so bad or infeasible as "FTL happens" or "Time Travel happens."

I guess I am very stuck on your line "as science fiction, it's kind of a waste of time." How else can we imagine and design our futures but through things that don't currently exist (fiction)? Our entire world is literally science fiction in the eyes of Jules Verne - flying vehicles, submarines, traveling to the moon, etc.

    This single line puts the whole article into the realm of
    pure science fiction: no technology capable of working 
    such an ill-defined task like "programming AI systems"
I'll admit that they're not (currently) very good at it, but there are interesting examples that contradict you: look up Genetic Programming, or Alastair Channon's work (15 years ago) on the generative production of neural networks.

In 2003, based on Channon's work, I built a system for generating and evolving neural networks using genetic algorithms that encoded Lindenmayer systems to build the networks. I attached it to the Heat Bugs agent simulation (albeit with a well-defined task) and it was startlingly effective. Given the current interest in AI, I've been meaning to resurrect and publish that code.
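For readers unfamiliar with the evolve-select-mutate loop underlying such systems, here's a deliberately tiny sketch. This is NOT Channon's actual method (which evolved L-systems that grew neural networks); the OneMax fitness function and all parameter values below are illustrative stand-ins for "how well did the evolved network perform":

```python
import random

def fitness(genome):
    # Toy "OneMax" fitness: count of ones in a bit-string genome.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=60, genome_len=32, generations=150):
    # Random initial population of bit strings.
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half as parents (elitism).
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Refill the population with mutated crossovers of random parents.
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

In the real thing, the genome would encode L-system rewrite rules, and fitness would come from running the grown network inside the agent simulation -- but the generate/evaluate/select loop is the same shape.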

There was a good blog post on here the other day saying: we're doing lots of work and producing interesting results (the equivalent of the "double-slit experiment" for AI), but we haven't yet formalized the results into a framework. We're probably close.

If you ever decide to publish the code, please tag me! I have my email in my profile here.

What you describe is of great interest to me, but I've never done it, since I've never had to work with it commercially -- which is a huge regret of mine.

In any case, any additional practical material and, ideally, your code, will be of immense educational value for me!

The doomsday title reminded me of the world's last C bug:

  while (1) {
      status = GetRadarInfo();
      if (status = 1)
          LaunchMissiles();
  }

(The bug is the assignment in the condition -- "=" where "==" was intended -- which makes the test always true.)

A couple of weeks ago I went to see Max Tegmark (the author of this piece) speak about his new book "Life 3.0: being human in the age of artificial intelligence" in San Francisco and saw the same speculative AI intelligence explosion crap we're seeing all over the place. I was disappointed because I'm a fan of Max's work as a scientist, his "Our Mathematical Universe: My Quest for the Ultimate Nature of Reality" book was a great read and I enjoy watching his lectures about Physics, Math, and sometimes the nature of consciousness.

When he got involved in the AI Risk community I thought it might be a good thing that an actual scientist was involved, maybe to ground the community's heavy speculation in scientific thinking. However, exactly the opposite happened -- Max turned into a fiction author! (hence this piece). Now, of course there is a role for fiction in expanding our understanding of the future, but the AI Risk community is already heavily fictionalized. The singularity, intelligence explosion, mind uploads, simulations, etc. are nothing but idle prophecies.

Karl Popper, the famous philosopher of science, made a distinction between scientific predictions, which usually take the form "If X then Y will happen," and scientific prophecies, which usually take the form "Y will happen" -- the latter being exactly what Max and the rest of the AI Risk community are engaged in.

Now back to Max's San Francisco talk, I actually asked him this question: "Who is doing the hard scientific work around AI Risk?" and after a long pause he said (abridged): "I don't think there is hard scientific work to be done but that doesn't mean that we shouldn't think about it. We're trying to predict the future and if you told me that my house will burn down then of course I'll go look into it".

This doesn't inspire much confidence in the AI Risk community, where scientists need to leave their tools at the door to enter The Fantastic World of AI Risk and where fact and fiction interweave liberally -- or as Douglas Hofstadter put it when describing the singularitarians: "a lot of very good food and some dog excrements".

Yes, this is the problem with AI risk---there's a community pushing hard to gather resources to the cause, but little or no scientific work to be done. This is a rather pathological situation---among other things, the AI risk community makes their own cause look silly, and they promote an unduly negative vision of AGI. I've written more about this here: http://www.basicai.org/blog/ai-risk-2017-08-08.html.

On a positive note, as a piece of science fiction, this was an enjoyable read!

I think you sum up their position's fallacy pretty nicely here:

>"if we don't figure out AGI safety now, by the time AGI happens it may be too late"

The keyword for me is "happens". It's as if technology ever just happens, or emerges serendipitously. It's like the Kurzweilian exponential law, which makes it seem as if there is no agency in technology -- as if it were a natural law, and our role in it is to make sure that when the aliens or the gods arrive, we are prepared for them.

"but little or no scientific work to be done."

Quite a lot has been written about what scientific work needs to be done. These papers try to summarize possible research directions:



Yes and no. For safety of narrow AI systems, yeah, there's a lot of scope for research, and that's what your first link gets at.

But for AGI (which is what Tegmark talks about), there's no good way to get a handle on safety yet (other than working towards figuring out AGI).

As for MIRI's agenda, I don't buy that it will help with AGI safety at all. There are a variety of reasons for that, some of which are discussed in the piece I linked above.

His book is still great, though, and raises a number of interesting points, whether you're a fan of future-AI speculation or not.

I'm about halfway through his book and it's pretty good -- there's this story at the beginning, but the rest of the book is pretty grounded (as opposed to Superintelligence, where I found the arguments in the first couple of chapters pretty weak and disappointing).

I do love the focus on profit. This is missing from many conversations.

This piece describes a future that is reminiscent of the world created in Zachary Mason's "Void Star" (https://www.goodreads.com/book/show/29939057-void-star). The world of this novel is one in which superintelligent AI operate in tandem with everyday human life, with their own unknown motives. The narrative takes place at a point when humans no longer quite understand the science or math behind the AIs' inventions.

What bugged me was the creation of the server centers.

"start building a series of massive computer facilities around the world"

The AI can be quick as a wink designing these things, but the supply chains for huge buildings take a lot of time. Acquiring talent, training operators and construction crews, location scouting, surveying, zoning approval, geological testing, and various other tasks take years. Sometimes a decade or more. And sometimes it all falls apart and you have to start somewhere new, because the locals don't want you there. Politicians can be fickle bastards.

Construction is a lot of people shaking hands and discussing things, phone calls, walking back and forth for supplies, waiting on permits, advice, et cetera, and you cannot AI that to be faster. The plans always have to be modified because reality intrudes, and buildings that the architect never visits are poor ones. Worse, when the architect has no practical experience.

Simple case in point: I was in a beautiful house where a hallway was juuust a bit too narrow. You could get a normal-sized dresser or armoire down the hall, but couldn't quite turn objects of that size enough to get through two of the bedroom doors. A box spring was iffy. The builder quickly realized this (and made a quip about the owners buying IKEA flatpack furniture), but everything looked fine on the plans. Because of the rest of the layout, especially where municipal services entered the (already poured) foundation, the house could not be modified. A hand's width would have made all the difference.

Worse, when the architect does not have a human body.

Every building on earth has quirks like this. They have to be solved in-situ.

Creating the plans is a tiny portion of the task, and not much of a time saving.

I interpreted the construction of data centers as happening over the span of a few years, and continuously after that. The whole transformation up to the creation of the Alliance seemed like it would have to take half a century, if not longer.

I agree more with this essay: http://idlewords.com/talks/superintelligence.htm

>What it really is is a form of religion. People have called a belief in a technological Singularity the "nerd Apocalypse", and it's true.


>It's a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith.


>The AI has all the attributes of God: it's omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.


>Like in any religion, there's even a feeling of urgency. You have to act now! The fate of the world is in the balance!


>And of course, they need money!

I feel like you could substitute AI for any existential threat in that to try to make people look silly.

"Climate change ... really is a form of religion" ... add some analogies to Catholic punishment fantasies and doomsday prophecies to make it sound just like the type of thing humans have been talking about forever, criticize scientists for guilting people into giving them money to study their apocalypse fantasy, etc.

The difference is that environmental protection evangelists never literally reinvented Pascal's wager, just with an all-powerful AI in place of God.


You're not doing yourself any favours by quoting Rationalwiki on that subject. The guy who wrote that article appears to have an axe to grind with the AI risk movement in general, and Eliezer Yudkowsky in particular; as a result, it's full of misleading statements and outright lies.

Try this one instead, for a less biased take on it: https://wiki.lesswrong.com/wiki/Roko's_basilisk

I thought the point of Roko's basilisk was a paradox to use to test different decision theories and to explore the nature of identity (should the decision theory allow your choices to be influenced by possible outcomes for possible future copies of yourself?), and maybe a little about whether dangerous thoughts could exist in theory. Not an actual prediction or argument for or against AI research, any more than, say, the twin paradox is an argument for or against splitting up twins to send one on a rocket.

If it has any use at all, then it'd have to be something like that.

GP's choice of link is extremely unfortunate, given that rationalwiki has a vendetta going against the AI risk movement in general. I would recommend https://wiki.lesswrong.com/wiki/Roko's_basilisk instead.

There are four sticking points in this article that I felt could use further clarification:

- why AI could not be used to innovate on manufacturing, and what that leads to

- the education and intellectual pursuits of humans and how it compares with what AI can do

- that there wouldn’t be competing AIs that would make this transformation much slower (especially if some competing AI fall into the wrong hands)

- that governments would let this transformation take place without retaliation or trying to capture this power

This is why we maintain strong antitrust law.

It'll need to create AI itself. Otherwise how can it be considered more capable than human intelligence?

I thought nukes... you know... or DIY Bio, nanotech or the myriad of other actual threats

I loved the story, it was fun, engaging and it made me happy.

Be scared! The "Last Invention" is upon us.
