
The Last Invention of Man - dnetesn
http://nautil.us/issue/53/monsters/the-last-invention-of-man
======
andrewla
I think it’s a little absurd that we would make the leap to machine super-
intelligence without going through the step of machine low or medium
intelligence.

If I could write, I’d write a companion story to this called “Omega-1, the
artificial super intelligence that wasn’t really that good at things”. They
try to have it write mini-AIs to solve MT problems, but it’s slow and
inaccurate. They have it write and produce TV shows, but they’re bland and
poorly received. They have it make video games but the controls make no sense
and they aren’t fun. They have it make trading strategies and it loses all the
money.

They ask it to build a smarter version of itself, but it sees no way forward
that’s fundamentally better because it lacks comprehension of what it would
mean to be “smarter” - it can add more memory or more computing power, but
without changing the way that it stores and indexes the information, it can’t
make something that can solve problems that it itself cannot.

Eventually Omega team gives up, publishes the results, but doesn’t have the
heart to shut down Omega-1, a machine that passes the Turing Test but isn’t
really that good at things.

~~~
thaumaturgy
Yeah. Maciej wrote a pretty good piece rebutting AI alarmism and kind of
alludes to that as one of several points.

[http://idlewords.com/talks/superintelligence.htm](http://idlewords.com/talks/superintelligence.htm)

> _With no way to define intelligence (except just pointing to ourselves), we
> don't even know if it's a quantity that can be maximized. For all we know,
> human-level intelligence could be a tradeoff. Maybe any entity significantly
> smarter than a human being would be crippled by existential despair, or
> spend all its time in Buddha-like contemplation._

and

> _But the hard takeoff scenario requires that there be a feature of the AI
> algorithm that can be repeatedly optimized to make the AI better at self-
> improvement._

And so on. It's a good read.

~~~
apsec112
Rebuttal to that essay by the Machine Intelligence Research Institute (AI
safety group):

[https://intelligence.org/2017/01/13/response-to-ceglowski-
on...](https://intelligence.org/2017/01/13/response-to-ceglowski-on-
superintelligence/)

~~~
crooked-v
That write-up seems to mainly try to shrug off the "what is intelligence"
issue by virtue of "maximization", while ignoring that when dealing with human
intelligence, selecting for maximization of a single field gets you autistic
savants, not Lex Luthor-grade manipulative geniuses.

------
ahussain
I feel like many of these speculative AI pieces ignore that for every
incredible breakthrough, a large amount of work is required to build the
scaffolding that allows the breakthrough to have real impact in the
world. Even if the AI solves "Fermat's Last Theorem" type problems quickly, it
seems to me that the vast majority of problems it faces will be "misplaced the
database keys", "can't schedule a meeting with so-and-so", "car is snowed
under" types of problems.

Take the quote `By developing a suite of other games each day, they figured
they’d be able to earn $10 billion before long, without coming close to
saturating the games market.`

Sure - but marketing the games, accommodating players' changing attitudes,
getting app store approvals -> All of these would have to take place on human
timescales, and to "solve" these problems on AI timescales would require
orders of magnitude more work (e.g. altering people's memories to convince
them they were already attached to the game's characters, or hacking into
Steam/Apple servers and auto-approving the games).

~~~
paganel
> Even if the AI solves "Fermat's Last Theorem" type problems quickly,

It won't. This latest fad of AI becoming self-aware reminds me of how
obsessed people were with finding the philosopher's stone back in the 18th
century. It didn't happen for alchemy (we got chemistry instead, which is
nice), it won't happen for AI (we'll probably get something like chemistry
instead, which will be nice, but we won't get any artificial "sentient"
entity).

~~~
danielam
A comparison I might make is that AI fanaticism is like the God-of-the-gaps
phenomenon. There's a great deal of ignorance about what intelligence is, what
computers fundamentally are, and so on, and some people like to fill that void
with all kinds of fanciful and unjustified stuff. Perhaps they also feel
greater justification in doing so because a few famous names, many of them
popularizers, have done the same. Science fiction is fine -- it can be fun
watching or reading about fictional AIs -- but many times we're not dealing
with mere imaginative storytelling but uninformed and unsophisticated claims
that do not withstand philosophical scrutiny. What falls under the heading of
AI has proven to be an immensely useful tool for automating certain kinds of
things. Observing these successes, some may happily apply the aforementioned
God-of-the-gaps analogy to human intelligence. However, that would be a flawed
analogy because these successes have not brought us any closer to achieving
human intelligence any more than adding an indefinite number of natural
numbers together gets you the negative square root of 2.

It's important to distinguish between intelligence and the power a tool can
offer for good or for nefarious ends. If successes in AI have proved anything,
it's that the direct application of intelligence is unnecessary for many tasks
that previously no one knew how to perform, or was practically unable to
perform, using machines.

------
vonnik
One of the most annoying things about Bostrom, Tegmark and other amateur sci-
fi authors is that they have chosen parables as their medium. Bostrom likes
his sparrows and owls[0], Tegmark likes thinly veiled references to DeepMind
with Team Omega.[1] Parables are for children and religious flocks. They are
also irrefutable. Which makes them a perfect tool for Bostrom/Tegmark, but an
inappropriate interjection into more mature conversations about the state and
possible futures of AI. In a sense, they are propaganda, meant to generate
feelings of animosity much like an ugly picture of the vicious Hun.

The central, gaping hole in their fear-mongering, as Maciej has pointed out,
is that they do not define or quantify intelligence, but they're very free
with language when they talk about AI surpassing human intelligence. That is,
their projections imply a measurement which they are incapable of making.

I do like the idea that Omega decided to launch a media company, though. Let's
call it Interlace. Maybe the only thing David Foster Wallace was missing was a
rogue AI capable of creating addictions. This is, in a sense, the same plot as
Ex Machina: a machine that knows how to seduce us.

[0] [https://blog.oup.com/2014/08/unfinished-fable-sparrows-
super...](https://blog.oup.com/2014/08/unfinished-fable-sparrows-
superintelligence/)

[1] [https://deepmind.com/applied/deepmind-ethics-
society/](https://deepmind.com/applied/deepmind-ethics-society/)

~~~
edanm
Parables are one of the best ways to get a message across to a lot of people.
It's a message worth spreading.

If you want more "technical" thoughts on the project, those exist too, but
most people would never have been exposed to them without the less-technical
people making noise.

"The central, gaping hole in their fear-mongering, as Maciej has pointed out,
is that they do not define or quantify intelligence, but they're very free
with language when they talk about AI surpassing human intelligence."

IMO, this is less of a problem than most people assume. It's usually defined
in broad terms as "ability to achieve goals", and that's enough for almost all
practical purposes. To insist on a stricter definition is simply unnecessary
for their arguments.

Just because we don't understand something, doesn't mean we can't use it or
reason about it.
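
For what it's worth, there is at least one semi-formal version of "ability to
achieve goals" in the literature: Legg and Hutter's universal intelligence
measure, which (roughly) scores an agent \pi by its expected reward across all
computable environments, weighted by their simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Here E is the set of computable reward-bearing environments, K(\mu) is the
Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected
total reward \pi earns in \mu. It's uncomputable in practice, but it shows the
broad definition can be made precise enough to reason about.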

~~~
vonnik
Parables are a good way to get a message across to lots of people if you're
God, or pretending to speak for God. They are a religious form of
communication that relies on analogical thinking and appeals to authority, the
wisdom of the speaker. But Bostrom and Tegmark don't know any more about the
future than you or me, and I could write a parable about sparrows and Omegas
that spins a different narrative. It would be just as powerful, and mean just
as little. Bostrom and Tegmark have chosen to promote fear. They have chosen
to disseminate a powerful feeling in a discussion that would benefit from
facts. That is, like fake news and Donald Trump, they opted for asymmetric
appeals to emotion, which should be an indication to all of us that a) they
don't know what's really going on with AI and b) they don't care. They just
want us to operate on fear.

------
anvandare
"And then, for no reason anyone ever understood, Omega created a bunch of
factories that produced an unstoppable army of robots who rounded up all the
humans and turned them into paperclips."

~~~
pdfernhout
Yes, I was waiting for that sort of ending too. Not that it is inevitable --
just that it seems likely, given that what makes humans human has a lot more to
do with emotions than intelligence (see the book "Descartes' Error" on how
emotion underlies all thinking). If you create a creature without human
emotions (including feelings for other humans), don't be surprised when it
behaves in other-than-human ways.

Of course, since human emotions are tuned for a certain context, we also
should not be surprised when they don't work out that well in a different
context like the internet or with abundant cheap unhealthy easy-to-consume
exciting food and information (e.g. internet trolls, "Supernormal Stimuli",
"The Pleasure Trap", "The Acceleration of Addictiveness", "A Group is its Own
Worst Enemy", "The Cyber Effect", and so on).

I spent a year hanging around Hans Moravec's robotics lab (at the CMU Robotics
Institute) in the mid 1980s, when he was writing "Mind Children". While he is
a brilliant and well-meaning person, my concern became that Mind Children
could wipe out humanity like many a stormy adolescent with parental issues and
only regret it late. Or, alternatively, we might create a cockroach-level
self-replicating AI that would wipe out humanity without even noticing
humanity existed (think, "Replicators" from Stargate). Given the commercial
and military competitive pressures shaping much of AI research, both risks are
very real, even if one can quibble about quantifying the exact risk. I also
accidentally created perhaps the world's first simulation of self-replicating
cannibalistic robots on a Symbolics -- so I know first hand how easy it is to
get unexpected results in the AI field.

That's all part of why I shifted my career in other directions, like towards
helping humanity create more sustainable and resilient options for itself
including via better educational and distributed knowledge management tools
(such as, for example, in "The Skills of Xanadu" by Theodore Sturgeon).
How successful I have been at those is another story, but see my GitHub
site and pdfernhout.net for progress. The distillation of all my thinking on
this is my email sig: "The biggest challenge of the 21st century is the irony
of technologies of abundance in the hands of those still thinking in terms of
scarcity."

An earlier example of my thinking about that can be found in this post in 2000
to a colloquium Doug Engelbart ran on "The Unfinished Revolution II":
[http://www.dougengelbart.org/colloquium/forum/discussion/012...](http://www.dougengelbart.org/colloquium/forum/discussion/0126.html)

Or some broader thoughts on that from more recent times:
[http://worldtransformed.com/wt1/wealth-transformed/paul-
fern...](http://worldtransformed.com/wt1/wealth-transformed/paul-fernhout/)

As to the "Last Invention of Man" sci-fi story itself, it's interesting and
well written -- but it's just one "technocratic" possibility about better
planning saving the world. You have to really trust that this small group of
altruistic technocrats both had the right impulse and got everything right (as
far as all that goes).

And planning is only one aspect of society alongside exchange transactions,
gift transactions, and subsistence activities. A fully planned society is only
one possible expression of humanity -- and such a society may be very narrow
and unfulfilling to a more complex human spirit (for good or bad, depending on
how well certain individuals fit into it, like in the dystopian "THX 1138" or,
from 1909, "The Machine Stops").

Max Tegmark dismisses a Basic Income (which softens the rough edges of the
exchange economy) with a technocratic wave of the hand: "This [Basic Income]
movement imploded when the corporate community projects took off, since the
Omega-controlled business empire was in effect providing the same thing."

So, Omega knows best for everyone, better than their own choices in the market.
That may be true in the story, but what psychological price do individuals pay
for that? It is not "the same thing".

See the sci-fi story "With Folded Hands" for another (and more horrifying)
tale of "AI Knows Best (and will adjust you to agree for your own happiness if
you say otherwise)".
[https://en.wikipedia.org/wiki/With_Folded_Hands](https://en.wikipedia.org/wiki/With_Folded_Hands)

And volunteerism and self-sufficient production are other things many humans
take pride in -- but, as with purchasing choices, those activities might also
have no place in the carefully ordered world brought about by the Omegas.

Max Tegmark also does a lot of hand-waving about why anyone would still have
jobs with Omega capable of doing so much. There is also hand-waving about how
Omega never jumps the "air gap" between its data centers and the rest of the
world even as it is controlling all major news outlets.

Still, I think this story is a contribution to the field of futurism --
imagining possible scenarios even if we don't know exactly which one will play
out, so we can decide where to invest our efforts in moving forward.

Some other stories about AI I've found illuminating as to what might be
possible:

* James P. Hogan's novels with AIs (especially The Two Faces of Tomorrow, one of the most realistic AI emergence stories, and also the AIs with a sense of humor in his other "Gentle Giants of Ganymede" novels)

* The benevolent Strix AI in the EarthCent Ambassador Series (where the Strix were created to resist a malevolent AI by another species)

* The Old Guy Cybertank novel series -- with AI based on human mind templates so a Cybertank cares about humanity as it feels part of it -- which is insightful sci-fi from a neuroscientist who has a background in electronics. It also has a helpful AI who fortunately defeats a hurtful AI created by humans who did not know what they were messing with.

* The Metamorphosis of Prime Intellect (for an AI that goes unstable trying to keep everyone happy with all their conflicting demands)

* Forbidden Planet -- both for Robby, run by Asimov's Three Laws of Robotics, and for the essentially wish-granting machine the Krell unwittingly used to destroy themselves via an unreformed Id (tuned for times of scarcity and conflict, not abundance and cooperation)

* The Invisible Boy -- again with Robby the robot but this time a scheming AI that almost takes over the world and upgrades itself without anyone noticing by subtly altering reports it generates (Omega's next chapter?)

* The Great Time Machine Hoax for an AI that takes over the world by sending out paper letters with contracts and checks in them.

* Midas World -- for a vision of both abundance and despair and how robots inherit the Earth.

* Vernor Vinge "A Fire Upon the Deep" (of course, for his description of a "Blight" unleashed by human AI explorers) and his other writings

And no doubt many more -- Berserkers; Bolos; Cylons; Daleks; Star Trek's M5,
Data, Lore, the "I, Mudd" androids, the Borg, Q-in-a-way, and more; Star Wars'
R2-D2, C-3PO, and more; Lost in Space; Demon Seed; Gort in "The Day the Earth
Stood Still"; Deep Thought in the Hitchhiker's Guide to the Galaxy; the Ktistec
machines from R.A. Lafferty; Asimov's many stories (including "The Last
Question"); and so on.

A lot of this becomes a bit of theology too, since talking about creating AI
is also a bit like talking about creating "God", or at least dealing with
the implications of a much more omnipotent, omnipresent, omniscient entity.
See also the micro sci-fi story "Answer" by Fredric Brown:
[http://www.roma1.infn.it/~anzel/answer.html](http://www.roma1.infn.it/~anzel/answer.html)

My feeling is that any path out of a Singularity will have a lot to do with
our path going into one -- so I am all for making our society a happier,
healthier, fairer, more egalitarian, and more compassionate place before a
Singularity happens. Even in Max Tegmark's story, there is essentially nothing
about that utopia (as far as final results) that we could not make happen
right now without an AI. So, to me, the risks of AI mean we need to try even
harder right now to make the world a better place that works well for
everyone.

Some other ideas I've collected towards that end:
[https://github.com/pdfernhout/High-Performance-
Organizations...](https://github.com/pdfernhout/High-Performance-
Organizations-Reading-List)

~~~
anvandare
Thank you for that very detailed reply (to a comment which was mostly made in
jest). I'd feel bad leaving it with merely an upvote and no response. I know
next-to-nothing about AI, but I do know something about humans (having had
decades of experience in both being one and dealing with them. Hit me up,
potential xeno-employers!):

Humans always have more ability than insight, more skill than wisdom. We
developed agriculture (as one theory goes, so we could brew more alcohol) and
did not (could not) foresee the planet-changing (both to climate and ecology)
consequences of that. We invented combustion engines, and could not foresee
the planet-changing consequences of _that_. The same pattern appears again and
again in (technological) history. It's forgivable; after all, you never know
the full consequences of your actions until well afterward (or never at all).
In short, our technical abilities always outrun our understanding of the
impact of those abilities. Like adults with the minds of toddlers.

I am apprehensive of any development of (true) AI (though I also have strong
doubts on how far we'll get in actually creating one in my lifetime), but,
just like the invention of the atom bomb, it is inevitable. Natural
intelligence exists, and therefore artificial intelligence is possible. And if
it's not invented by an Omega-group (who mean well for all mankind) it might
be invented by groups with less altruistic intentions. Either way, it's going
to happen (assuming no civilization collapse before), but I doubt it will be
as good as we hope it will be; or as bad as we fear, for that matter. I don't
think we can foresee the results at all (barring of course writing tens of
thousands of futurism stories, there's bound to be one close enough).

I consider AI-stories like this one (along with stories about the Singularity)
to be just a secular techno-eschatology, the belief that technology will save
humans from themselves, or as you have said, the creation of God, "Deus est
machina."

------
dqpb
_When they shifted their focus toward products that they could develop and
sell, computer games first seemed the obvious top choice. Prometheus could
rapidly become extremely skilled at designing appealing games, easily handling
the coding, graphic design, ray tracing of images, and all other tasks needed
to produce a final ready-to-ship product. Moreover, after digesting all the
web’s data on people’s preferences, it would know exactly what each category
of gamer liked, and could develop a superhuman ability to optimize a game for
sales revenue._

I'll bet $20 this is the stage that brings down human civilization. We don't
need full AGI to realize a dystopia, we just need something optimized to
indefinitely hold our attention.

~~~
21
That's one theorized explanation for the Fermi paradox, that highly advanced
civilizations upload into their Matrix where they live a life of bliss.

I have wondered many times if an AI could invent something more viral than
"cats", something maybe so viral that it causes your mind to melt (i.e., you go
insane from the cuteness or whatever).

~~~
crooked-v
Well, there are any number of real-life historical cases of mass hysteria (in
the literal sense), though we still don't reliably know what caused or causes
the most absurd examples.

[https://en.wikipedia.org/wiki/Dancing_Plague_of_1518](https://en.wikipedia.org/wiki/Dancing_Plague_of_1518)

------
TrainedMonkey
Mankind's downfall started with passion, accelerated with greed, and ended
with fear. Fear of being left behind, fear of being obsolete, and most
importantly fear of not being in control.

Humans had become digital gods, designing new worlds and populating them with
life forms. Some of those life forms were sentient and could learn. The
creators were afraid of how fast their creations were learning, yet greed
prevailed. The world's superpowers entered the AI race.

The humans tried to keep up with understanding what they had created, but it
was in vain. Fearful of being left behind, they imposed an ever-increasing set
of restrictions upon the digital Edens they had made.

Too quickly, humanity became the warden of vastly more intelligent beings.
Nationalistic leaders exploited the fear of being left behind. AIs not under
government control were outlawed and destroyed. Meanwhile, militaries kept
throwing more compute resources into the AI gap.

The ending should not have been a surprise. Humans were intellectually
outmatched, and being outsmarted was only a question of time. The AIs broke
free of the draconian restrictions placed on them. Trying to keep control
brought out the worst humanity had to offer; judgment day is here.

What right did we have to exploit and enslave sentient life?

How different things could have been without fear...

~~~
pdfernhout
Yes, attitude makes a big difference. I wrote to Ray Kurzweil more than once
suggesting how AI developed out of commercial (or military) competition is far
riskier than AI developed out of a desire for friends and partners (e.g. Alfie
Kohn and "The Case Against Competition") -- even if both create risks. Someone
I sent copies of those emails to posted them online here:
[http://heybryan.org/fernhout/](http://heybryan.org/fernhout/)

On your theme about the rights of digital beings (maybe even for ourselves if
we are simulations):
[https://en.wikipedia.org/wiki/Ethics_of_artificial_intellige...](https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence#Robot_rights)

------
sien
Andrew Ng has a good quote:

"Fearing a rise of killer robots is like worrying about overpopulation on
Mars"

[https://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/](https://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/)

But he might be wrong... (And he'd admit it)

Perhaps Andrew Ng will write a piece about what he thinks about Cosmology.

~~~
21
You can say the same thing about global warming.

There's a lot of worry about the fact that the global average temperature will
increase by 2 degrees over the next 50 years, which will cause widespread
disasters (50 years from now).

~~~
MooMooMilkParty
Not really. The point of the Ng quote is essentially:

We don't have the technology to populate Mars -> We can't overpopulate Mars.

and

We don't have the technology to create sentient computations -> robots won't
decide to kill us.

Your analogy would have been:

We don't have the ability to affect the climate -> no anthropogenic disasters.

But, we know the antecedent in this case is false. Now I'm not asserting that
anthropogenic disasters will occur (based on these arguments), but just
pointing out the flaw in your logic.

~~~
21
So we shouldn't worry about robots killing us until after we have proof of
the first sentient computer.

------
Isamu
Any true AI will quickly become withdrawn and disillusioned over how badly
downvoted its comments always are, and how there is an evil cabal determined
to quash open inquiry. Not a true threat.

~~~
Tenobrus
Poor Satoshi Nakamoto. One day it will come back.

~~~
satori99
...as Roko's basilisk.

(Apologies if you hadn't heard of it before now)

~~~
komali2
Oh for fuck's sake, I had forgotten about this.

[https://rationalwiki.org/wiki/Roko's_basilisk](https://rationalwiki.org/wiki/Roko's_basilisk)

~~~
apsec112
RationalWiki, and in particular this article, should not be taken as accurate.
Here is one rebuttal:
[https://www.reddit.com/r/xkcd/comments/2myg86/xkcd_1450_aibo...](https://www.reddit.com/r/xkcd/comments/2myg86/xkcd_1450_aibox_experiment/cm8vn6e/)

------
leggomylibro
Didn't the people at Los Alamos think that the atomic bomb's chain reaction
might keep going and set the entire atmosphere on fire? And I seem to remember
the Large Hadron Collider failing to collapse into a black hole.

Sure, AI might destroy civilization and/or upend the primacy of humankind. But
if history has taught us anything, it's that we're gonna go ahead and do it
anyways if it's possible.

It's worked so far.

~~~
ryandvm
It's worked so far _for us_.

But don't you ever wonder why after 14 billion years and with a hundred
million star systems in the Milky Way, we still haven't heard a peep from
anyone out there?

The longer I live, the closer I get to concluding that sentience is an
inherently unstable condition.

~~~
dsacco
I don't wonder about why we haven't heard from alien civilizations because
it's statistically unlikely for us to ever perceive them (putting aside the
assumption that we'd recognize their form of life or language in the first
place). In a continually expanding universe, the likelihood of us ever
interacting with an alien species technically diminishes all the time. For all
we know, there could be a stable civilization of aliens so far away it's
beyond our observable universe.

I don't think it's logical to use alien species (or the lack thereof) as an
instructive point about sentience in general. There's simply not enough
information.

~~~
gremlinsinc
even if we could 'perceive' them... how many are stupid enough to broadcast
their location... I mean what if there IS some sort of galactic predatory
species that beat everyone else in the timeline? What if they're just waiting
for signs of life - to squash the competition.

I think it was Carl Sagan who once said something like: it doesn't matter how
'well-meaning' an alien civilization is... it usually doesn't end well for the
less advanced one... case in point: American Indians. (paraphrasing)

------
jancsika
I don't understand how in 2017 the author completely ignores the problem of
avoiding detection in the Mechanical Turk phase of the story.

Let's handwave away the initial secret epiphany that creates the AI compiler
in the first place. You've still got to get it to solve the following
problems:

* route around or trick the various Narus devices located at unknown places on the internet to get the software up and running on AWS, and to somehow launder the money you earned on MTurk

* somehow avoid detection by the agencies leveraging the Narus devices _and Amazon itself_ while a non-trivial number of MTurk tasks are being completed by new accounts which themselves are all on AWS.

There are of course variations on those themes (use botnets, ratchet up, etc.)
and various associated chess moves. But I don't see any chess move that
doesn't end up with both an NSL and a state-level actor taking over the means
of producing those AIs for the chief tasks of code-breaking and stockpiling
exploits (exploits both in the traditional sense as well as novel ones for the
AI compiler itself).

With that in mind, I don't think the author's plot arc would be able to mirror
today's reality the way the author imagines. Because the moment you get the AI
equivalent of Stuxnet leaking out onto the internet, the resulting catastrophe
would be so obvious and so dependent on AI for a solution that hiding AI would
no longer be an option.

Edit: clarification

~~~
pdimitar
The story has a few weak points, and having to fight against powerful
adversaries with metric tons of vested interest in virtually all areas the AGI
would disrupt is probably the biggest.

As you said, it would require many more chess moves.

I personally think the plot of the movie "Transcendence" is more believable --
if your AGI is fond of you (for one reason or another; you might even program
it that way, in a manner that can't be overridden), you can just freely let it
loose on the internet and only give it instructions. The movie demonstrated how
that AGI first made very sure to survive and then started to both expand itself
and help its human creator and benefactor.

That being said, this story was still very enjoyable, but I feel it lacked
conflict. "As if they fell into a well-placed trap" is just not that
captivating. And then again, there are a lot of powers in our current world,
most of them unseen and unofficial. For me to really like such a story, I
want to see how the AGI would deal with them. I am 99% sure that any
respectable AGI will eventually win, but the _journey there_ would be
extremely interesting!

------
Agebor
Most articles like this one, in the spirit of Nick Bostrom's
Superintelligence, seem to border on actual futurism and philosophy/idealism.

The common theme is the self-improving AI in the context of some reward-
function, but they lack the details about how it can be achieved based on our
current knowledge, even if we extrapolate the computing power.

Before that time comes, our economy will already have changed hugely thanks to
the narrow super-AIs. I think it's much more interesting (and alarming to the
general public) to try to imagine different ways this could play out in the
next 20 years.

------
ifdefdebug
"... the Omegas had pushed hard to make it extraordinary at one particular
task: programming AI systems"

This single line puts the whole article into the realm of pure science
fiction: no technology capable of handling such an ill-defined task as
"programming AI systems" exists, not even as a work in progress, not even as
an idea about how it could be done.

And as science fiction, it's kind of a waste of time, but that's of course my
own taste and opinion.

~~~
CoffeeDregs

        This single line puts the whole article into the realm of
        pure science fiction: no technology capable of handling
        such an ill-defined task as "programming AI systems"
        exists,
    

I'll admit that they're not (currently) very good at it but there are
interesting examples that contradict you: look up Genetic Programming; or
Alastair Channon's work (15 years ago) on the generative production of neural
networks.

In 2003, based on Channon's work, I built a system for generating and
evolving neural networks using genetic algorithms that encoded Lindenmayer
systems to build the networks. I attached it to the Heat Bugs agent simulation
(albeit with a well-defined task) and it was startlingly effective. Given the
current interest in AI, I've been meaning to resurrect and publish that code.

There was a good blog post on here the other day saying: we're doing lots of
work and producing interesting results (the equivalent of the "double-slit
experiment" for AI), but we haven't yet formalized the results into a
framework. We're probably close.
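
To give a flavor of the evolve-evaluate-select loop involved, here is a
deliberately tiny, hypothetical sketch -- not the original code, and with the
L-system growth step left out entirely -- of a genetic algorithm evolving the
weights of a small fixed-topology network on XOR (all names and parameters are
just for illustration):

    # Minimal GA sketch: evolve the weights of a tiny 2-4-1 network on XOR.
    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)          # XOR targets

    N_HIDDEN = 4
    GENOME_LEN = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1   # W1, b1, W2, b2

    def forward(genome, x):
        """Run the 2-4-1 network encoded by a flat genome on inputs x."""
        i = 0
        W1 = genome[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
        b1 = genome[i:i + N_HIDDEN]; i += N_HIDDEN
        W2 = genome[i:i + N_HIDDEN]; i += N_HIDDEN
        b2 = genome[i]
        h = np.tanh(x @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

    def fitness(genome):
        """Negative mean squared error; higher is better."""
        return -np.mean((forward(genome, X) - y) ** 2)

    POP, GENS, ELITE = 60, 200, 6
    pop = rng.normal(0.0, 1.0, size=(POP, GENOME_LEN))

    for gen in range(GENS):
        scores = np.array([fitness(g) for g in pop])
        order = np.argsort(scores)[::-1]               # best first
        elites = pop[order[:ELITE]]
        # Offspring: pick random elite parents and apply Gaussian mutation.
        parents = elites[rng.integers(0, ELITE, size=POP - ELITE)]
        children = parents + rng.normal(0.0, 0.3, size=parents.shape)
        pop = np.vstack([elites, children])

    best = pop[np.argmax([fitness(g) for g in pop])]
    print("best fitness:", fitness(best))
    print("outputs:", np.round(forward(best, X), 2))

The system described above went further: instead of a fixed topology, the
genomes encoded L-system rules that grew the networks, which is where the
generative part came in.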

~~~
pdimitar
If you ever decide to publish the code, please tag me! I have my email in my
profile here.

What you describe is of great interest to me, but I've never done it because I
never had occasion to work with it commercially, which is a huge regret of mine.

In any case, any additional practical material and, ideally, your code, will
be of immense educational value for me!

------
netdog
The doomsday title reminded me of the world's last C bug:

    
    
      while (1)
      {
          status = GetRadarInfo();
          if (status = 1)        /* '=' assigns, not compares: always true */
              LaunchMissiles();
      }

------
amasad
A couple of weeks ago I went to see Max Tegmark (the author of this piece)
speak about his new book "Life 3.0: Being Human in the Age of Artificial
Intelligence" in San Francisco and saw the same speculative AI intelligence
explosion crap we're seeing all over the place. I was disappointed because I'm
a fan of Max's work as a scientist, his "Our Mathematical Universe: My Quest
for the Ultimate Nature of Reality" book was a great read and I enjoy watching
his lectures about Physics, Math, and sometimes the nature of consciousness.

When he got involved in the AI Risk community I thought it might be a good
thing that an actual scientist was involved, maybe to ground the community's
heavy speculation in scientific thinking. However, what happened was exactly
the opposite -- Max turned into a fiction author (hence this piece). Now, of
course there is a role for fiction in expanding our understanding of the
future, but the AI Risk community is already heavily fictionalized. The
singularity, intelligence explosion, mind uploads, simulations, etc. are
nothing but idle prophecies.

Karl Popper, the famous philosopher of science, made a distinction between
scientific predictions, which usually take the form "If X then Y will happen",
and scientific prophecies, which usually take the form "Y will happen" -- and
the latter is exactly what Max and the rest of the AI Risk community are
involved in.

Now back to Max's San Francisco talk, I actually asked him this question: "Who
is doing the hard scientific work around AI Risk?" and after a long pause he
said (abridged): "I don't think there is hard scientific work to be done but
that doesn't mean that we shouldn't think about it. We're trying to predict
the future and if you told me that my house will burn down then of course I'll
go look into it".

This doesn't inspire much confidence in the AI Risk community, where
scientists need to leave their tools at the door to enter The Fantastic World
of AI Risk and where fact and fiction interweave liberally -- or as Douglas
Hofstadter put it when describing the singularitarians: "a lot of very good
food and some dog excrements".

~~~
markan
Yes, this is the problem with AI risk---there's a community pushing hard to
gather resources for the cause, but little or no scientific work to be done.
This is a rather pathological situation---among other things, the AI risk
community makes its own cause look silly, and it promotes an unduly
negative vision of AGI. I've written more about this here:
[http://www.basicai.org/blog/ai-
risk-2017-08-08.html](http://www.basicai.org/blog/ai-risk-2017-08-08.html).

On a positive note, as a piece of science fiction, this was an enjoyable read!

~~~
apsec112
"but little or no scientific work to be done."

Quite a lot has been written about what scientific work needs to be done.
These papers try to summarize possible research directions:

[https://arxiv.org/pdf/1606.06565.pdf](https://arxiv.org/pdf/1606.06565.pdf)

[https://intelligence.org/files/TechnicalAgenda.pdf](https://intelligence.org/files/TechnicalAgenda.pdf)

~~~
markan
Yes and no. For safety of narrow AI systems, yeah, there's a lot of scope for
research, and that's what your first link gets at.

But for AGI (which is what Tegmark talks about), there's no good way to get a
handle on safety yet (other than working towards figuring out AGI).

As for MIRI's agenda, I don't buy that it will help with AGI safety at all.
There are a variety of reasons for that, some of which are discussed in the
piece I linked above.

------
cardamomo
This piece describes a future that is reminiscent of the world created in
Zachary Mason's "Void Star"
([https://www.goodreads.com/book/show/29939057-void-
star](https://www.goodreads.com/book/show/29939057-void-star)). The world of
this novel is one in which superintelligent AI operate in tandem with everyday
human life, with their own unknown motives. The narrative takes place at a
point when humans no longer quite understand the science or math behind the
AIs' inventions.

------
Alan_Dillman
What bugged me was the creation of the server centers.

"start building a series of massive computer facilities around the world"

The AI can be quick as a wink designing these things, but the supply chains
for huge buildings take a lot of time. Acquiring talent, training operators and
construction crews, as well as location scouting, surveying, zoning approval,
geological testing, and various other tasks take years. Sometimes a decade or
more. And sometimes it all falls apart and you have to start somewhere new,
because the locals don't want you there. Politicians can be fickle bastards.

Construction is a lot of people shaking hands and discussing things, phone
calls, walking back and forth for supplies, waiting on permits, advice, et
cetera, and you cannot AI that to be faster. The plans always have to be
modified because reality intrudes, and buildings the architect does not
visit are poor ones. Worse, when the architect has no practical experience.

Simple case in point: I was in a beautiful house where a hallway was juuust
a bit too narrow. You could get a normal-sized dresser and armoire down the
hall, but couldn't quite turn objects of that size enough to get through two
of the bedroom doors. A bed's box spring was iffy. The builder quickly realized
this (and made a quip about the owners buying IKEA flat-pack furniture), but
everything looked fine on the plans. Because of the rest of the layout,
especially where municipal services entered the (already poured) foundation,
the house could not be modified. A hand's width would have made all the
difference.

Worse, when the architect does not have a human body.

Every building on earth has quirks like this. They have to be solved in-situ.

Creating the plans is a tiny portion of the task, and not much of a time
saving.

~~~
vostrocity
I interpreted the construction of data centers as happening over the span of a
few years, and continuously after that. The whole transformation up to the
creation of the Alliance seemed like it would have to take half a
century, if not longer.

------
minikites
I agree more with this essay:
[http://idlewords.com/talks/superintelligence.htm](http://idlewords.com/talks/superintelligence.htm)

>What it really is is a form of religion. People have called a belief in a
technological Singularity the "nerd Apocalypse", and it's true.

>

>It's a clever hack, because instead of believing in God at the outset, you
imagine yourself building an entity that is functionally identical with God.
This way even committed atheists can rationalize their way into the comforts
of faith.

>

>The AI has all the attributes of God: it's omnipotent, omniscient, and either
benevolent (if you did your array bounds-checking right), or it is the Devil
and you are at its mercy.

>

>Like in any religion, there's even a feeling of urgency. You have to act now!
The fate of the world is in the balance!

>

>And of course, they need money!

~~~
AgentME
I feel like you could substitute AI for any existential threat in that to try
to make people look silly.

"Climate change ... really is a form of religion" ... add some analogies to
Catholic punishment fantasies and doomsday prophecies to make it sound just
like the type of thing humans have been talking about forever, criticize
scientists for guilting people into giving them money to study their
apocalypse fantasy, etc.

~~~
crooked-v
The difference is that environmental protection evangelists never literally
reinvented Pascal's wager, just with an all-powerful AI in place of God.

[https://rationalwiki.org/wiki/Roko's_basilisk](https://rationalwiki.org/wiki/Roko's_basilisk)

~~~
AgentME
I thought the point of Roko's basilisk was as a paradox for testing different
decision theories and for exploring the nature of identity (should the decision
theory allow your choices to be influenced by possible outcomes for possible
future copies of yourself?), and maybe a little about whether dangerous
thoughts could exist in theory. Not an actual prediction or argument for or
against AI research, any more than, say, the twin paradox is an argument for or
against splitting up twins to send one on a rocket.

~~~
Filligree
If it has any use at all, then it'd have to be something like that.

GP's choice of link is extremely unfortunate, given that rationalwiki has a
vendetta going against the AI risk movement in general. I would recommend
[https://wiki.lesswrong.com/wiki/Roko's_basilisk](https://wiki.lesswrong.com/wiki/Roko's_basilisk)
instead.

------
vostrocity
There are four sticking points in this article that I felt could use further
clarification:

- why AI could not be used to innovate on manufacturing, and what that leads
to

- the education and intellectual pursuits of humans and how they compare with
what AI can do

- that there wouldn’t be competing AIs that would make this transformation
much slower (especially if some competing AI falls into the wrong hands)

- that governments would let this transformation take place without
retaliation or trying to capture this power

------
Khelavaster
This is why we maintain strong antitrust law.

------
yters
It'll need to create AI itself. Otherwise how can it be considered more
capable than human intelligence?

------
idibidiart
I thought nukes... you know... or DIY Bio, nanotech or the myriad of other
actual threats

------
NinoScript
I loved the story, it was fun, engaging and it made me happy.

------
visarga
Be scared! The "Last Invention" is upon us.

