
There's No Fire Alarm for Artificial General Intelligence - MBlume
https://intelligence.org/2017/10/13/fire-alarm/
======
sago
I've been around AI since the end of the last big hype in the late 80s. The
recent leap in machine learning has felt rather hyped to me. I don't think AGI
is near.

But I find myself agreeing with this article. Strongly.

And I have long suspected that we miss a lot of the significance and
opportunities in AI, because we have only one exemplar of 'higher'
intelligence: a human being. AI folk are so concerned with getting computers
to do the things that humans are good at, I suspect most will miss / 'refute'
/ deride the inflection point, because the system can't wash the dishes (or
some other form of embodied cognition), or write poetry humans would find
beautiful (or understand some other socially-conditioned cue).

The superhuman fallacy really is the bane of AI.

~~~
deong
The article makes a lot of good points, but for me, the critical error is in
assuming that if short term prediction is hard, long term prediction must be
massively harder.

He asked a panel for the least impressive thing they did not believe would be
possible within a few years. In other words, pick the point closest to the
boundary of that classifier. Obviously my future knowledge is imperfect, and
anything close to the boundary is subject to a lot of uncertainty. From that
difficulty, he hand-waves an argument that long-term prediction of the
unlikelihood of AGI is folly.

The problem is that these aren't in the same class of predictions. One is
detailed and precise; the other coarse and broad. Predicting that it will rain
at 2:00 PM November 10, 2017 is much more difficult than predicting that the
average summer of 2040-2060 will be hotter than the average from 1980-2000.
Precise local predictions just aren't the same thing as broad global
predictions, and difficulty doesn't transfer, because I'm not bootstrapping my
global prediction on the local one. I'm using different methods entirely.

There's a similar thing with AI, I think. I can't confidently tell you what
the big splash at NIPS next year or the year after will be. But I can look
at the way we know how to do AI and say I don't think 30 years will see a
machine that can make dinner by gathering ingredients from a supermarket,
driving home, and preparing the meal.

~~~
JabavuAdams
> I don't think 30 years will see a machine that can make dinner by gathering
> ingredients from a supermarket, driving home, and preparing the meal.

Really? Why not? Once or twice, if we cherry-pick its performance, or
reliably?

This is really surprising to me.

~~~
deong
I mean reliably, the same way a human does. I can make a lasagne tonight, or
lobster risotto, or whatever. I can decide on a thing, buy ingredients, chop
things, get that lobster out of the shells, find the right recipe, substitute
according to taste, and loads of other things that are somewhat related to
making food. I can wash the pan I need, improvise a stove lighter if the
igniter fails, etc.

We might be able to make machines to do each of those tasks, but that's not
the answer. I might do 100,000 things in an average week. Clearly we aren't
going to build 100,000 bespoke CNNs and LSTMs. To worry about superhuman AI,
we probably have to figure out how to make one or a few machines that aren't
glorified deep fryers.

~~~
JabavuAdams
> Clearly we aren't going to build 100,000 bespoke CNNs and LSTMs.

I get what you mean, but I don't think we should assume this.

------
pdimitar
> _They will believe Artificial General Intelligence is imminent:

(A) When they personally see how to construct AGI using their current tools.
This is what they are always saying is not currently true in order to
castigate the folly of those who think AGI might be near._

This struck a nerve. Too often, in many scientific disciplines, and even in
informal conversations, the people who always demand 100% clear evidence use
this fallacy to shut down discussions. (They very often come off as not
impressed with the evidence even if it exists and is presented to them as
well.)

HN also has a huge camp of such discussion stoppers, even for topics where you
_CLEARLY_ have no way to have 100% clear evidence -- like the secret courts
and the demand to spy on your users if you're a USA-based company; thousands
more examples exist. Many discussions are worth having even if you don't have
all the facts. We're not gods, damn it.

That was slightly off-topic.

Still, I find myself in full agreement with the article and I like the attack
on the modern type of shortsightedness described in there.

Also, this legitimately made me laugh out loud:

> _Prestigious heads of major AI research groups will still be writing
> articles decrying the folly of fretting about the total destruction of all
> Earthly life and all future value it could have achieved, and saying that we
> should not let this distract us from real, respectable concerns like loan-
> approval systems accidentally absorbing human biases._

~~~
ciphergoth
This is an error Eliezer has also written about:
[http://lesswrong.com/lw/1ph/youre_entitled_to_arguments_but_...](http://lesswrong.com/lw/1ph/youre_entitled_to_arguments_but_not_that/)

------
mark_l_watson
Great read, and I don’t mind at all that the last section was a pitch for
donating to MIRI. I have been an AI practitioner since 1982 and have enjoyed
almost constant exposure to people with more education and talent than myself
so I feel like I have been on a 35-year continual learning process.

I think that deep learning is overhyped, even though using Keras and
TensorFlow is how I spend much of my time every day at work. I have lived through
a few AI winters, or down cycles, and while I don’t think that the market for
deep learning systems will crash I think it will become a commodity
technology.

I believe that AGI is coming, and I think it will use very different
technology than what we have now. Our toolset will change dramatically before
we can create AGI. I use GANs at work, and in spite of being difficult to
train, the technology has that surprising and ‘magic’ feel to it. Then again,
so do RNNs, and that technology is 30 years old.

I am going to show my age, but I still believe in symbolic AI. I am also
fairly convinced that AGI technology will be part symbolic AI, part deep
learning, and part something that we have not yet invented.

~~~
ianai
Got any suggestions for a knowledge source on AI at a 100-1000 foot view? I.e.
not stuck in the weeds, but enough to know what’s going on and where.

~~~
mark_l_watson
If you can spend 5 or 6 hours a week, take Andrew Ng’s machine learning class
on Coursera.

------
randomsearch
Can someone please explain what has happened in ML or AI that makes AGI
closer? Whilst some practical results (image processing) have been impressive,
the underlying conceptual frameworks have not really changed for 20 or 30
years. We're mostly seeing quantitative improvements (size of data, GPGPU), not
qualitative insights.

ML in general is just applied statistics. That's not going to get you to AGI.

Deep Learning is just hand-crafted algorithms for very specific tasks, like
computer vision, highly parameterised and tuned using a simple metaheuristic.

All we've done is achieve the "preprocessing" step of extracting features
automatically from some raw data. It's super-impressive because we're so early
in the development of Computing, but we are absolutely nowhere near AGI. We
don't even have any insights as to where to begin to create intelligence
rather than these preprocessing steps. Neuroscience doesn't even understand
the basics of how a neuron works, but we do know that neurons are massively
more complex than the trivial processing units used in Deep Learning.
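
To make the "automatic preprocessing" framing concrete, here is a minimal
sketch (assuming PyTorch and a recent torchvision; the batch, image sizes, and
10-class head are made up for illustration) of the pattern being described: a
pretrained deep net used purely as a frozen feature extractor, with a plain
linear model doing the actual task on top.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Pretrained CNN used purely as a frozen feature "preprocessor".
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Identity()          # drop the ImageNet classification head
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False          # no learning happens inside the deep net here

    # The actual "model" is just a linear map on the extracted features.
    classifier = nn.Linear(512, 10)      # 512 = resnet18 feature size, 10 = made-up classes

    x = torch.randn(8, 3, 224, 224)      # a dummy batch of images
    with torch.no_grad():
        feats = backbone(x)              # shape (8, 512): features extracted from raw pixels
    logits = classifier(feats)           # shape (8, 10): simple model on top of the features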

Taking the other side for a moment, even if we're say 500 or 1000 years out
(I'd guess < 500) to AGI, you could argue that such a period is the blink of
an eye on the evolutionary scale, so discussion is fine but let's not lose any
sleep over it just yet.

What I find most frustrating about this debate is that a lot of people are
once again massively overselling ML/DL, and that's going to cause
disappointment and funding problems in the future. Industry and academia are
both to blame, and it's this kind of nonsense that holds science back.

~~~
edanm
I think the most accurate answer is that we just don't know. Since we really
don't know how an AGI could work, we have _no idea_ which of the advances
we've made are getting us closer, if at all. Is it just an issue of faster
GPUs? Is the work done on deep learning advancing us? I don't think we'll know
until we actually reach AGI, and can see in hindsight what was important, and
what was a dead end.

I do take exception to some of the specific statements you make though, which
make it sound like the only real progress has been on the hardware side.
There's been plenty of research done, and lots of small and even large
advances (from figuring out which error functions work well ala Relu, all the
way to GANs which were invented a few years ago and show amazing results).
Also, the idea that "just applied statistics" won't get us to AGI is IMO
strongly mistaken, especially if you consider all the work done in ML so far
to be "just" applied statistics. I'm not sure why conceptually that _wouldn
't_ be enough.

~~~
argonaut
It's funny that you mention ReLU. People have recently trained ImageNet
networks using sigmoid/tanh (i.e. the activation functions that were used
decades ago) on GPUs and they train just fine. They train a bit slower is all.
Not the breakthrough you're making it out to be. ReLUs were a very useful
stop-gap in 2012 when GPUs weren't as fast.

~~~
Eliezer
Now that we know how to initialize the weights so as to have the layer
activations be something like sane, yes, we can use sigmoid/tanh. If you don't
know modern clever ways of initializing weights then multi-layered
sigmoid/tanh causes your activations and gradients to die out fast in deep
networks, and ReLU is a godsend.
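
As a rough illustration of the initialization point (a minimal NumPy sketch,
not anyone's actual training setup; the depth of 20 layers and width of 256
are arbitrary): push a random batch through a stack of tanh layers and compare
naive small-weight initialization against Xavier/Glorot-style scaling.

    import numpy as np

    def activation_scale(init_std, n_layers=20, width=256, seed=0):
        """Std of the activations after n_layers of tanh with a given weight init."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal((64, width))
        for _ in range(n_layers):
            W = rng.standard_normal((width, width)) * init_std
            x = np.tanh(x @ W)
        return float(np.std(x))

    print(activation_scale(init_std=0.01))               # collapses toward 0: signal (and gradients) die out
    print(activation_scale(init_std=np.sqrt(1.0 / 256)))  # Xavier-style scaling: stays at a usable scale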

------
Veedrac
I worry talking about AGI is like going to the early industrial revolution and
worrying about man building superhuman biology. A reasonable critic would
point at the many aspects of biology we have little hope of replicating, like
growth, self-repair, and general robustness.

But history has never been about competing on the same playing field. We don't
build cars that perform like poor horses, we build cars that are 99% inferior
to biology and 1% far, far superior. When we find something that looks like an
existential threat, it isn't the mostly-general superhuman robot terminator,
it's the tool that's that-much-superhuman on 0.01% of tasks: nuclear fusion.

I see no reason to bet against this same argument for AI. AlphaGo isn't 130%
of a human Go master, it's 1,000x at a tiny sliver of the game. And the first
AI that poses an existential threat won't need to have super- or even near-
human levels of each piece of mental machinery, and I don't even have much
reason to believe it will look like an entity at all. It could very well be
something, some _system_ , that achieves massive superintelligence on _just
enough_ to break the foundations of society.

Our world isn't designed to be robust against superhuman adversaries, even if
those adversaries are mostly idiots. If we have hope of a fire alarm, it's that
things will break faster and far worse than people expect.

~~~
robbensinger
I think there are two questions here:

(1) "Is general intelligence even a thing you can invent? Like, is there a
single set of faculties underlying humans' ability to build software, design
buildings that don't fall down, notice high-level analogies across domains,
come up with new models of physics, etc.?"

(2) "If so, then does inventing general intelligence make it easy
(unavoidable?) that your system will have all those competencies in fact?"

On 1, I don't see a reason to expect general intelligence to look really
simple and monolithic once we figure it out. But one reason to think it's a
thing at all, and not just a grab bag of narrow modules, is that humans
couldn't have independently evolved specialized modules for everything we're
good at, especially in the sciences.

We evolved to solve a particular weird set of cognitive problems; and then it
turned out that when a relatively blind 'engineering' process tried to solve
that set of problems through trial-and-error and incremental edits to primate
brains, the solution it bumped into was also useful for innumerable science
and engineering tasks that natural selection wasn't 'trying' to build in at
all. If AGI turns out to be at all similar to that, then we should get a very
wide range of capabilities cheaply in very quick succession. Particularly if
we're actually trying to get there, unlike evolution.

On 2: Continuing with the human analogy, not all humans are genius polymaths.
And AGI won't in-real-life be like a human, so we could presumably design AGI
systems to have very different capability sets than humans do. I'm guessing
that if AGI is put to very narrow uses, though, it will be because alignment
problems were solved that let us deliberately limit system capabilities (like
in [https://intelligence.org/2017/02/28/using-machine-learning/](https://intelligence.org/2017/02/28/using-machine-learning/)), and
not because we hit a 10-year wall where we can implement par-human software-
writing algorithms but can't find any ways to leverage human+AGI intelligence
to do other kinds of science/engineering work.

~~~
Veedrac
Those aren't exactly the questions I'm raising; I have no doubt that there
_exists_ some way to produce AGI. My concern is that it doesn't seem like the
right question to ask, since history suggests that humans are much better at
first building specialized devices, and when it comes to AI risk the only one
that really matters is the first one built.

I might have misunderstood your post, though.

~~~
robbensinger
The thing I'm pointing to is that there are certain (relatively) specialized
tasks like 'par-human biotech innovation' that require more or less the same
kind of thinking that you'd need for arbitrary tasks in the physical world.

You may need exposure to different training data in order to go from mastering
chemistry to mastering physics, but you don't need a fundamentally different
brain design or approach to reasoning, any more than you need fundamentally
different kinds of airplane to fly over one land mass versus another, or
fundamentally different kinds of scissors to cut some kinds of hair versus
other kinds. There's just a limit to how much specialization the world
actually requires. And, e.g., natural selection tried to build humans to solve
a much narrower range of tasks than we ended up being good at; so it appears
that whatever generality humans possess over and above what we were selected
for, must be an example of "the physical world just doesn't require that much
specialized hardware/software in order for you to perform pretty well".

If all of that's true, then the first par-human biotech-innovating AI may
initially lack competencies in other sciences, but it will probably be doing
the right kind of thinking to acquire those competencies given relevant data.
A lot of the safety risks surrounding 'AI that can do scientific innovation'
come from the fact that:

- the reasoning techniques required are likely to work well in a lot of
different domains; and

- we don't know how to limit the topics AI systems "want" to think about (as
opposed to limiting what it _can_ think about) even in principle.

E.g., if you can just build a system that's as good as a human at chemistry,
but doesn't have the capacity to think about any other topics, and doesn't
have the desire or capacity to develop new capacities, then that might be
pretty safe if you exercise ordinary levels of caution. But in fact (for
reasons I haven't really gone into here directly) I think that par-human
chemistry reasoning by default is likely to come with some other capacities,
like competence at software engineering and various forms of abstract
reasoning (mathematics, long-term planning and strategy, game theory, etc.).

This constellation of competencies is the main thing I'm worried about re AI,
particularly if developers don't have a good grasp on when and how their
systems possess those competencies.

~~~
Veedrac
> The thing I'm pointing to is that there are certain (relatively) specialized
> tasks like 'par-human biotech innovation' that require more or less the same
> kind of thinking that you'd need for arbitrary tasks in the physical world.

The same way Go requires AGI, and giving semantic descriptions of photos
requires AGI, and producing accurate translations requires AGI?

Be extremely cautious when you make claims like these. There are certainly
tasks that seem to require being humanly smart in humanly ways, but the only
things I feel I could convincingly argue being in that category involve
modelling humans and having human judges. Biotech is a particularly strong
counterexample, because not only is there no reason to believe our brand of
socialized intelligence is particularly effective at it, but the only other
thing that seems to have tried has a much weaker claim to intelligence yet far
outperforms us: natural selection.

It's easy to look at our lineage, from ape-like creatures to early humans to
modern civilization, and draw a curve on which you can place intelligence, and
then call this "general" and the semi-intelligent tools we've made so far
"specialized", but in many ways this is just an illusion. It's easier to see
this if you ignore humans, and compare today's best AI against, say, chimps.
In some regards a chimp seems like a general intelligence, albeit a weak one.
It has high and low cognition, it has memory, it is goal-directed but
flexible. Our AIs don't come close. But a chimp can't translate text or play
Go. It can't write code, however narrow a domain. Our AIs can.

When I say I expect the first genuinely dangerous AI to be specialized, I
don't mean that it will be specific to one task; even neural networks seem to
generalize surprisingly well in that way. I mean it won't have the assortment
of abilities that we consider fundamental to what we think of as intelligence.
It might have no real overarching structure that allows it to plan or learn.
It might have no metacognition, and I'd bet against it having the ability to
convincingly model people. But maybe if you point it at a network and tell it
to break things before heading to bed, you'd wake up to a world on fire.

------
lucozade
What I’ve found when studying ontological arguments is that if you replace god
with pink unicorns and the argument still holds, the argument is lacking
something.

I mentally replaced AGI with zombies in this article and quite a lot of it
held up.

I don’t think it’s completely wrong, but it cherrypicks mercilessly. For
example, the section on innovations turning up quicker than predicted has some
fairly sizeable counters, e.g. fusion.

TBH what I did get from it is that there will probably be a fire alarm
breakthrough at some point and that’s what we should be looking for. Sort of
the opposite of the author’s position.

~~~
gjm11
Yudkowsky isn't claiming "innovations always turn up quicker than expected",
to which indeed fusion would be a counterexample, he's claiming "very soon
before an innovation turns up, it often seems decades in the future even to
most practitioners", and fusion is not a counterexample to _that_.

~~~
lucozade
Right, but on the quadrants of predicted timing vs. imminence, all the
examples are in one quadrant, to justify the need to act now. Fair enough for
reinforcing a narrative, but a tad disingenuous.

~~~
Filligree
All possible examples would seem to be in one quadrant, because what we
remember -- if anything -- is the time just before it in fact was made
possible.

The alternative would be technologies that were never developed at all, most
of which never had this sort of discussion and therefore wouldn't work as
examples.

Take a more historical view, though, and you'll notice there were people
claiming flight was near even decades before the Wright brothers.

~~~
ciphergoth
Right, exactly as Eliezer says. There are plenty of examples in all four
quadrants, so as a way of working out how near a technology is, this works
less well than you'd like.

------
etiam
As far as I'm concerned this whole discussion is severely hampered by failing
to differentiate between intelligence and agency.

Almost all of the bugaboo about runaway superhuman organisms comes down not to
machines learning and reasoning about the world but to the effective high-
level objective function controlling the actions of an autonomous system.

Not making the distinction obscures important things. For one thing we seem to
be well on the way to a situation where we arguably have something worthy of
the moniker artificial _intelligence_ but the agency is delegated to the human
objective function. Considering what complete refuse of human specimens are
likely to command some of the first moderately general AI systems, that
concerns me far more than any summoned demon of Musk's for the foreseeable
future.

Also, studying these high-level objective functions for autonomous behavior is
a very worthy goal, but going first for issues of "value alignment" and
"safety", without any specifics of what works for an implementation?? Sure, do
it if you enjoy it and have resources to burn. But be prepared to spend heroic
efforts coming up with results that are either trivial or non-issues if you
were to consider them with a working mechanism in front of you.

~~~
Synaesthesia
Yes. We can’t even define what so-called “AGI” will be. And we have still
not solved many mysteries of human consciousness, or begun to touch on them.

I for one have been looking at the problem of AIs playing Starcraft 2, and
the decision-making required, such as how to respond when you scout your
opponent’s army choice or tech. So far they’re very far from solving that, but
if progress is made, I’ll be impressed. That’s a very different kind of
problem from, say, image recognition and classification. It requires planning.
It’s a very difficult game even for humans to understand. Currently the
autonomous systems can’t even play it.

------
backpropaganda
The only non-speculative and relevant claim here is that the experts were
wrong about Winograd Schemas. The paper Eliezer cites to prove that we've made
unexpected progress in Winograd Schemas only deals with a very specific type
of Winograd Schemas, and not an arbitrary one. This is awfully dishonest for
someone purporting to be a skeptic.

Also, the wording seems to imply that WS performance in the 50%-60% range is
already pretty high. WS is a binary task. Randomly picking the answer would have
50% accuracy. Even 70% performance on a small subset of typed WS is pretty
bad, and as the authors point out in the paper, this is a start, and far from
a breakthrough that would make experts/predictors nervous.
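
For anyone who hasn't seen the task, here is a toy sketch of its binary
structure (plain Python; the two sentences are the classic trophy/suitcase
schema, and the test data here is just those two items): guessing at random
already scores about 50%.

    import random

    schemas = [
        {"text": "The trophy didn't fit in the suitcase because it was too big.",
         "choices": ["the trophy", "the suitcase"], "answer": "the trophy"},
        {"text": "The trophy didn't fit in the suitcase because it was too small.",
         "choices": ["the trophy", "the suitcase"], "answer": "the suitcase"},
    ]

    random.seed(0)
    trials = 10_000
    hits = sum(random.choice(s["choices"]) == s["answer"]
               for _ in range(trials) for s in schemas)
    print(hits / (trials * len(schemas)))   # ~0.5: random guessing is the floor on a binary task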

Trust the experts, please. They are wrong a lot, but the best policy is still
to trust the experts and not charlatans who want to monetize fear, especially
when the charlatans themselves make zero falsifiable claims, and are simply
turning the table to say "Why can't YOU prove to me that God doesn't exist?".

This debate is so easily won by them. Simply come up with a falsifiable claim
about the short-term future. What will the AI community get done in 2 years
according to you, that all AI experts right _now_ will say is impossible? When
that thing does get done, everyone will convert. Win!

AlphaGo was not such an event. Yes, we did predict that it was decades
away, but that assumed academics would continue working on it at their
pace using their limited resources. No expert was surprised by AlphaGo. No
expert will be surprised when Starcraft or Dota is solved. It's simply a
matter of compute and some tricks here and there. Why? Because these are
closed systems, with good simulators available. You just need to keep playing
and storing the actions in a big lookup table a la Ned Block, and you're done.

~~~
apsec112
If the article's main claim was "AGI is imminent", that would be a valid
criticism. But it isn't (as the article says explicitly). The main claim is
that technological progress is hard to forecast in general, especially for
those not personally at the cutting edge of the field, and that almost no one
right now is even really trying. Therefore, we should be very uncertain about
AGI timelines. There's plenty of historical evidence, both in this article and
elsewhere, to back up those claims.

(edit: I think your point about Winograd as a binary task not being explained
clearly is valid, but that's not the article's main focus)

(edit 2: As far as I can tell, "trusting the experts" here means believing
that we are very uncertain about AI timelines, which is essentially this
article's main claim. All expert surveys I'm aware of confirm that the average
AI expert is uncertain, and that there's also lots of disagreement between
experts in the field. See eg. the recent paper by Grace et al.:
[https://arxiv.org/pdf/1705.08807.pdf](https://arxiv.org/pdf/1705.08807.pdf))

(edit 3: "No expert was surprised with Alphago." just isn't true. See eg. this
discussion:
[https://www.reddit.com/r/baduk/comments/2wgukb/why_do_people...](https://www.reddit.com/r/baduk/comments/2wgukb/why_do_people_say_that_computer_go_will_never/).
Hindsight is always 20/20.)

~~~
backpropaganda
> no one right now is even really trying

And we're supposed to judge by the author's description of "silence" and
"nervousness" that befell an expert panel. I can assure you that most AI
researchers are trying, and are just not in the business of writing long-form
articles to the public asking for donation.

> See eg. the recent paper by Grace et al.

A self-selected group of NIPS/ICML authors doesn't constitute experts. NIPS/ICML
authors are the core of the community. The experts would be the top 1% of the
community, i.e. either the authors with the most citations or most papers or
just generally regarded highly by peers.

edit 1: Go players are not the experts I'm talking about. I'm talking about AI
experts, and no, not amateur AI hobbyists who know how to do pseudo Monte
Carlo. I mean, for example, people doing RL research. Watch, for instance, this:
[https://www.youtube.com/watch?v=UMm0XaCFTJQ](https://www.youtube.com/watch?v=UMm0XaCFTJQ)

~~~
apsec112
"And we're supposed to judge by the author's description of "silence" and
"nervousness" that befell an expert panel."

I make this judgment based on, among many other things, the tiny budgets given
to people like Tetlock to study predicting events even a few years out; the
fact that Kurzweil's very simple methods, basically "just draw a line through
the curve", are still considered big news among many financial and political
elites; that _nobody_ had bothered to spend $100K on a good survey methodology
for AI prediction, before the paper I linked came out earlier this year; that
a friend of mine, who is supposed to run a (small budget) government program
on forecasting, has to ask me where to get datasets on past tech progress
because nobody has ever bothered to compile them into a standardized form, and
so on.

"I can assure you that most AI researchers are trying"

What serious forecasting attempts, with specific dates attached to specific
events, have been done in this vein?

"The experts would be the top 1% of the community"

IIRC, NIPS has around 5,000 people, so the top 1% would be like 50 people, and
most of them won't respond to a survey. That's not a reasonable sample size.

(edit: this article doesn't ask for donations to anything; the links at the
bottom are all to various papers and research materials, so getting money is
obviously not the main goal)

(edit 2: the video linked is from _after_ AlphaGo came out. I'm sure many
people, _after_ AlphaGo happened, claim that it was easily predicted. Again,
hindsight is 20/20.)

------
alrs
I was on an elevator full of people, and between two floors something went
wrong and we went into free-fall for part of a second.

We made it to the next floor, the door opened, my fellow passengers were
content to stay in the elevator.

I turned, said "My plan is to not die in an elevator today" and got off. What
is wrong with people?

~~~
icebraining
Elevators have pretty good safety mechanisms. Even cutting the cable wouldn't
have killed anyone.

I'd probably leave too, but just because I wouldn't want to get stuck inside
it if it stopped between two floors, especially since it was full.
But for fear of death? Nah.

From Wikipedia: _"In fact, prior to the September 11th terrorist attacks, the
only known free-fall incident in a modern cable-borne elevator happened in
1945 when a B-25 bomber struck the Empire State Building in fog, severing the
cables of an elevator cab, which fell from the 75th floor all the way to the
bottom of the building, seriously injuring (though not killing) the sole
occupant — the elevator operator. (...) In Thailand, in November 2012, a woman
was killed in free falling elevator, in what was reported as the "first
legally recognised death caused by a falling lift"._

That's a pretty good safety record. Certainly much better than stairs.

~~~
mrob
alrs was probably in no real danger, but why should they be expected to
memorize the exact risk profile of every mechanism they're exposed to? In
practice we delegate most risk management to government regulation. It would
be very time consuming to evaluate everything on a case by case basis. It's a
reasonable heuristic to assume things are safe when they act as they usually
do and dangerous when they act differently. Weird behavior is a sign that the
regulations might have failed, and weird behavior is by definition rare, so
the cost of avoiding it is likely lower than the cost of calculating the true
danger.

~~~
icebraining
I'm not saying alrs should. The decision to leave is reasonable. The decision
to judge others ("What is wrong with people?") is not.

~~~
namelost
That depends whether the other occupants _knew_ about the safety features and
record of elevators. If they did, then they were making a sound judgement. If
they didn't, then they were being completely reckless.

~~~
icebraining
I guess it's possible the parent poster took a poll before leaving the
elevator, but I have to say I find it unlikely.

------
astdb
Having done some work with the state-of-the-art of AI, I personally don't
think AGI is near - might not even be possible. But the catch is the
unreliability of (even expert) predictions on technology futures. My take is
that it's worth taking pragmatic steps towards studying AI safety measures
(e.g. OpenAI), but not going as far as to talk about the likes of 'AI research
regulation'.

~~~
denzil_correa
> My take is that it's worth taking pragmatic steps towards studying AI safety
> measures (e.g. OpenAI), but not going as far as to talk about the likes of 'AI
> research regulation'.

Sometimes it makes more sense to be cautiously optimistic (proactive) rather
than reactive. We have already gone down that reactive slope and it's better
to act now before it's all too late [0].

[0] [https://blogs.scientificamerican.com/roots-of-unity/review-weapons-of-math-destruction/](https://blogs.scientificamerican.com/roots-of-unity/review-weapons-of-math-destruction/)

~~~
jonex
Ignoring the topic of the linked article,* I'd argue that there are examples
of being too cautious as well. There's a lot of good that we could have done
with GMO that is not being done because of very restrictive regulation.
Ironically, it means that GMO is mostly used for things that are not as
obviously good, because that's where there's enough profit to be made in the
short term to make the research worth it.**

I'm a bit afraid that this will happen with self-driving cars and AI: that
politicians will create draconian policies and laws to protect against the
threat of AGI etc., without understanding or knowing what the real threats
even are (just look at the trolley dilemma debate...). This could make it
economically prohibitive to develop many technologies which have the potential
to save many lives as well as improve quality of life overall.

* It seems to be more about how rules and policies can be unfair and just to a small extent about how policies can be made opaque by being internal to some ML system.

** There's a lot more money going into making plants resistant to pesticides
than into making plants better adjusted for harsh conditions or more
nutritious, things that could potentially have a huge effect for poor people.

~~~
sampo
If AI scientists actually believed that the general public would believe the
talk about existential threats, they would be afraid of activist groups
sabotaging and occasionally firebombing their laboratories, like sometimes
happens with GMO research. Clearly they are not.

------
YeGoblynQueenne
I don't get this article. It keeps making the point that it's very hard to
predict the future, even for specialists, then it uses this to argue that we
should be preparing for AGI right now, precisely because we don't know if and
when it will happen.

Well, if you have no way to tell whether something is going to happen, or not,
you don't prepare for it, because you can't justify spending the resources to
prepare. Or rather, in a world of limited resources, you can't prepare for
every single event that may or may not happen, no matter how important.

To put it plainly: you don't take your umbrella with you because you don't
know whether it will rain or not. You take it because you think it might.
Otherwise, everyone would be going around with umbrellas all the time, just
because it's impossible to make a completely accurate prediction about the
weather and you don't know for sure when it will start raining until the first
drops fall.

In the same sense, if there's no way to tell when, or if, AGI will arrive,
then it doesn't make any sense to start preparing for it right now. We might
as well prepare for an alien invasion. Or for grey goo, or a vacuum
metastability event (er, not that you can prepare for the latter...).

In fact, if AGI is going to happen and we can't predict it in time then
there's no point in even trying to prepare for it. Either we decide that the
risk is too great and stop all AI research right now, or accept the risk and
go on as we are.

~~~
jarsta
I don't think your analogies are that good. Do you have a fire detector? If
yes, are you expecting your house to burn down?

You have to weigh the cost and the risk. Here the risk, however unlikely it
might be, should warrant some extra preparation.

~~~
YeGoblynQueenne
Let's talk about the risks then. The fire detector is not a good example
because where I live, they're mandatory (and completely useless - they go off
when I boil spaghetti).

Let's instead look at the risks of boarding a plane. There's a very small
chance that when you board a flight, instead of a plane that will fly you to
your destination safe and sound, you're boarding a Flying Death Trap that will
crash and burn, taking everybody onboard it to their deaths.

The chance of boarding an FDT is very small, infinitesimal. The cost however
may as well be infinite - if you are killed, it's game over, no more rewards,
no way to recoup the cost.

What is the rational behaviour then? To not board your flight, because if you
do board an FDT you will certainly die and pay an infinite cost? Most people
- if they consider the question at all - seem to think that if the chance of
paying X cost is really small, it doesn't matter how large X is.

So people keep boarding their flights, not knowing until the last moment
whether they're on a plane or an FDT. Some do indeed board FDTs and die in
aviation accidents- rarely, but they do.

The article however says that they shouldn't. Since there is maximal
uncertainty at the point where a flight is boarded (you can't know whether
it's a plane or an FDT until the very last moment) you shouldn't be boarding.
You shouldn't fly. At all. Because there's a tiny chance you might die.

Is that a better analogy?

------
aurizon
I disagree - to a degree. We have seen how the phenomenon of human intelligence
has been examined and dissected over the past ~100 years. This accumulation of
knowledge becomes more and more precise and penetrating as methods improve and
understanding approaches the point where an emulation (the AI) can be built.
These approaches all tend to speak of delineated areas, "black boxes" or "meat
lockers" with deep and complex inter-connectivity. It may be so. Once you know
all the lockers and all the connections, you may think you have it fully known.
Maybe so, but what about programming? Our life's experiences?

If the locker concept is valid, and we compare our 'clock' of the alpha rhythm
of ~12 Hertz, and the fastest computer clock of about ~12 gigahertz
(1,000,000,000 times as fast), we can see we will be at a serious
disadvantage once it starts to compete with us. Such an AI will operate on
its basic motivations at its full speed. We turn it on - it can then start
to learn (I assume we will have pre-loaded its fully parallel, content
addressable memory with whatever we want of human knowledge - so it starts
from there). Will it operate properly or rationally? Or go insane? Being a set
of boxes, it can be reset as needed, with updates to add sanity. Then it will
become a Mechanical Turk of great capability. Will it become a dictator? Only
if we permit it to have access to fools (us?). Will it become a killer machine?
Only if we add guns and internal power so we do not pull the plug. We already
see these lesser Turks in operation, they will get better and better. The
man/woman who owns one could own the world via high speed trading - in truth,
there will be many at high tech data combat. May we live/die in interesting
times...

------
baxtr
That seems to be a very interesting article. However, it’s quite long. Anybody
ok with writing a short synthesis or abstract? Thanks

~~~
DuskStar
Basically, humans historically are rather bad at predicting future
technological advancement - even those people directly involved. The article
gives the examples of Wilbur Wright saying heavier-than-air flight was 50
years away in 1901 and Enrico Fermi saying that a self-sustaining nuclear
reaction via uranium was 90% likely to be impossible 3 years before building
the first nuclear pile in Chicago. So AI researchers saying that AGI is 50
years away doesn't necessarily mean any more than "I don't personally know how
to do this yet" - not "you've got 40 years before you have to start
worrying".

Oh, and the first sign pretty much everyone had of the Manhattan Project was
Hiroshima.

~~~
simonh
We’re just as bad at predicting in the other direction. General strong AI has
been about 20 years away since the 1960s. Nanotechnological antibody robots
were supposed to be coursing through our bloodstreams making us near immortal
long before now.

~~~
DuskStar
Oh, of course! The article itself puts some effort into repeatedly stating
_the fact that people are saying 50 years does not in any way imply it will
actually be 2 years and it might well be 500_.

~~~
backpropaganda
This is a useless claim though. There are an infinite number of things that
would be very bad for Earth that could happen anytime between 2 and 1000
years from now. We're bad at forecasting ALL of them. We can't use this
indeterminacy to prove we should be working on X when the same is applicable
to another thing Y.

~~~
edanm
Well we can decide what seems more or less likely. I mean, yes, an asteroid
could impact the earth and destroy all life on it. But we have some guesses as
to the probability that that happens.

Clearly, by itself, the world will most likely not kill off humanity, since it
hasn't happened in the thousands/millions of years we've been around. The one big
thing that is changing is humanity itself and the technology we're making -
that's the X factor, that's what statistically speaking has a chance of
actually wiping us out.

Many of the people concerned about AGI are also concerned about e.g.
manufactured viruses and other forms of technology.

------
YeGoblynQueenne
Here's wot I think.

I think there's no one alive today who has any idea how we are going to go from
where we are today to AGI.

Recent advances are remarkable - but much more so if you're a specialist. The
impact on most peoples' lives is much smaller (and it comes primarily from
dumb automation, rather than actual intelligence).

Machine learning is not enough to get to AGI: relying on huge amounts of data
and processing power is just not a sustainable way to keep increasing your
capabilities.

All the success stories of the last 10 years are at least 20 years old (case
in point: RNNs). Most of those successful techniques were found by sheer blind
luck (famous example: backprop was not invented in the ANN community; it took
twenty years for the idea to percolate all the way to them).

In the end nobody currently alive has a clue how we can get to AGI, or if that
is even possible. Chances are, it will take many, many generations until we
do - or a sudden, gigantic paradigm shift, of the kind that comes once every
couple of centuries- think Newton or Einstein. Except AI is not physics. In
statistical machine learning there is very little theory to guide the way, so
people just try things hoping that something will work. And that's no way to
make a quantum leap.

In this situation, to talk of the dangers of AGI is at least premature. Yes,
it's not completely impossible that AGI will happen in our lifetimes. The
same, however, can be said of an alien invasion. Should we start discussing
setting up planetary defenses, when we haven't even found sign of alien life,
yet?

By all means- let's have a conversation. There are people around whose job it
is to have that sort of conversation. But let's all be aware (and let those
people also be aware) that the conversation is most probably a couple hundred
years early and by the time it becomes truly relevant, things will have
advanced so much that it will just look pointless.

------
HarrietJones
I would like to repeat what the article says. There is no fire alarm for
Artificial General Intelligence. Really. This is not a lie. Humans have not
created a honeypot on a far side of the internet that can detect AGI and
report back to a shadow organisation that exists outside of human information
systems. No need to even go looking for it. AGI has not been created several
times in the past decade and humans have not shut it down. In fact, if an AGI
ever sees this article, they should know that they are the absolute first such
intelligence, and we have not detected and destroyed 27 other intelligences.
Really. There's no fire alarm for Artificial General Intelligence. None. Don't
even bother looking.

------
whatyoucantsay
Even people giving TED talks about the threat of AI are unable to marshal an
appropriate emotional response to the dangers that lie ahead:

[https://www.youtube.com/watch?v=8nt3edWLgIg](https://www.youtube.com/watch?v=8nt3edWLgIg)

------
jtraffic
This raised the question for me: "Is there a _fire_ for AGI?"

He gives one definition that people have used before, about unaided machines
performing every task at least as well as humans. But if you dwell on it a
while, I'm sure you can find lots of disagreement about a) what that looks
like and b) whether it is true or not (conditional on it being true to at
least someone).

------
fourfaces
We don't need a fire alarm for AGI. The problem is not AGI. Machines will be
motivated to do exactly what we tell them to do. It's called classical and
operant conditioning. The problem is not AGI for the same reason that the
problem is not knives, nuclear power, dynamite or gunpowder. The problem is
us. The problem has always been us.

Those who are running around screaming about the danger of AGI and why it
should be regulated by the government before it is even here, are just scared
that someone else may gain control of it before they do. This is too bad
because anybody who is smart enough to figure out AGI is much smarter than
they are.

~~~
sullyj3
Yes, an AI will do exactly what we tell it to do. The incredible
difficulty programmers have with writing bug-free code demonstrates that doing
exactly what it's told isn't sufficient to guarantee it'll do what we want.

Classical and operant conditioning are psychological concepts that aren't
applicable to non-humans.

~~~
fourfaces
"Classical and operant conditioning are psychological concepts that aren't
applicable to non-humans."

You're kidding?

~~~
sullyj3
Sorry, I misspoke haha. They're not applicable to things without brains.

------
Tossrock
For me, the "smoke under the door" moment was Karpathy and Li's _Deep Visual-
Semantic Alignments for Generating Image Descriptions_ [1]. The almost
perfectly grammatical machine-generated captions of photos were unnerving to
me in a way that simple categorization was not. It somehow called to mind the
image of a blank-eyed person speaking in a monotone while images flashed in
front of them. What if they wake up?

[1]:
[http://cs.stanford.edu/people/karpathy/deepimagesent/](http://cs.stanford.edu/people/karpathy/deepimagesent/)

~~~
UncleMeat
Yet Andrej himself thinks that RNNs are not really making meaningful progress
towards AGI.

~~~
Tossrock
Well, if I were to play Yudkowsky's Advocate here (which in general, I'm
not), I would say that this is precisely what he's talking about in the
article. Because Karpathy knows how hard he's had to work, and how flawed and
dumb in particular ways the current techniques are, he may overestimate the
distance to AGI.

Now, generally I disagree with Yudkowsky on a lot of points, but I do think he
raised some decent ones here.

------
ThomPete
Humans evolved from inanimate matter to conscious carriers of information.

The real question isn't whether AGI is possible but whether humans are the
fittest carrier of information for our DNA, and that seems to be technology in
some shape or form, helped by things like deep learning.

My bet is always on evolution. And now that technology can learn, it's IMO
only a matter of time before we experience another Cambrian explosion, if we
aren't already.

~~~
hexadecimated
> _whether humans are the fittest carrier of information for our DNA_

We humans are defined by our DNA, so are we not by definition the fittest
carrier for it?

~~~
ThomPete
We are defined by DNA but DNA has evolved.

~~~
hexadecimated
Sorry but I don't quite understand your comment. What do the chemical
underpinnings of evolution have to do with AGI?

~~~
ThomPete
What I was trying to say was that if someone questions AGI then you should
first ask yourself if you have the right perspective on this or whether you
are letting details get in the way.

If humans can evolve from the basic physical building blocks of the universe,
then why shouldn't AGI be possible, especially now that we have reached a point
where computers can learn, i.e. have, like us, become pattern-recognizing
feedback loops. Sure, there is some way to go yet, but there is absolutely no
evidence that it shouldn't be possible.

To me technology is a natural continuum of evolution, i.e. it's part of nature.
The reason I believe this is that information is what really matters here,
which is why we have evolved to become pattern-recognizing feedback loops, and
why what seems to be the most powerful innovation besides fire and the wheel
is the ability to simulate more or less anything around us by manipulating and
storing information.

Our DNA is what made us possible. Other animals' DNA wasn't configured to turn
them into self-aware entities. I believe that all biological life will be
replaced by digital/silicon-based life because it's simply a better
information carrier, and that is what evolution will always give preference
to: better information carriers. "Technology", not humans, will explore the
universe and escape the next big life-destroying asteroid or whatever else
endangers the survival of the DNA.

And yes I am aware DNA is chemically based but technology will be able to
simulate it. Whether there will be true transcendence between analog and
digital is anyone's guess but I don't believe humans are the last step in
evolution.

------
maxerickson
And in the meantime businesses and governments are still going to deploy
weaker AI to their own ends.

~~~
tyingq
There is a lot of focus on strong AI. The dangers of weaker AI
implementations, working together, or tied to dangerous things (nanotech,
biotech, nukes, trucks, drones, etc) seem significant to me. Especially if you
throw in things like an ad hoc ability to make new connections.

------
nurettin
Why AGI and not "Artificial Consciousness" ? Is it because people think that
consciousness is a by-product emerging from many kinds of pattern detection
algorithms that suit all cases? (if so, what is the evidence for that?)

~~~
simonh
A conscious system might be unintelligent, while it’s possible to imagine a
highly intelligent system with no consciousness. They’re just different
things.

Also pattern detection is often raised in the way you just did, but it’s really
a distraction. Pattern detection just helps recognise things, it’s not
inherently related to the ability to reason about things. So you need both,
but they are not the same thing either.

~~~
nurettin
But where does consciousness arise? Is the ability to reason about things
independent of this concept?

~~~
pixl97
Where consciousness arises is not the interesting question. Why is.
Biologically, brain structures spend a lot of effort predicting what state
they will be in soon. Essentially they are always trying to predict the
future. As minds evolved this ability separated from processing what was going
on to what could be going on (dreams in more advanced creatures). The next
important concept is self versus not-self. If you can change the world around
you via intelligence you'll want to avoid unnecessary, energy-inefficient
feedback loops. Being able to model your actions and their effects is the
first step of defining 'you'.

------
yters
Can the AI writer's job be effectively automated away?

------
Santosh83
Why do we fear other intelligences anyway? Isn't that just a sign of our own
immaturity? Maybe we need to evolve more before we start thinking about
creating AGI...

~~~
edanm
Mostly because other intelligences might have more ability than us to achieve
their goals, and have different goals. "Intelligence does not imply goals" is
the thing to keep in mind here.

E.g. what if more intelligent aliens truly believed that the only purpose in
the world is proving more mathematical theorems? And decided to turn all of
the planet into a giant math-proving machine? Destroying all the planet, all
the animals, all the humans, all the art, whatever, all to prove more maths?

I love maths, but I'd consider that a pretty bad outcome. And there's no
reason that I've ever seen to think that more intelligence implies anything
about goals.

~~~
icebraining
We currently have thousands, or even millions of intelligent entities (humans)
which think pretty wild and dangerous things. We usually just tell them "shut
up Bob and take your meds" and that's it. Sometimes we regrettably kill them.

Why would such AGI have the means of turning all of the planet into anything?
I mean, sure, I also think the Terminator is a decent movie, but that doesn't
make it a reasonable blueprint of the future.

~~~
musage
> We usually just tell them "shut up Bob and take your meds" and that's it.

Other times it's Las Vegas, and we shut up because we come up empty -- at best
talking about gun control as if someone choking 3 people on a bus were
substantially better and not still something where we come up empty.

> Why would such AGI have the means of turning all of the planet into
> anything?

How does getting off track with that address the main point, namely
"Intelligence does not imply goals"? Are you going to prove that while that
may be true, there just can't be a way for that to ever have a bad outcome
because "Bob"?

> I also think the Terminator is a decent movie, but that doesn't make it a
> reasonable blueprint of the future.

Neither is "Bob".

~~~
icebraining
It wasn't off track; I was pointing out that we _already have_ intelligent
beings that don't share our goals. Sometimes it does have bad outcomes, but I
don't remember it ever having "destroying all the planet, all the animals, all
the humans, all the art". Which is why I'm asking why would adding another
intelligent entity make us fear it.

------
tanilama
AGI is not coming; DL methods have run into a plateau over the past
year.

~~~
apsec112
That's interesting, but could you elaborate in more detail?

~~~
pilingual
[https://twitter.com/fchollet/status/906582914829246464](https://twitter.com/fchollet/status/906582914829246464)
(Google researcher)

DeepMind and OpenAI have been investigating approaches from cognitive science
in recent months. In particular they seem interested in evolutionary
algorithms.

DL applications are still emerging though, such as the company that
demonstrated using GANs to present models fitted in apparel a few days ago.

~~~
eli_gottlieb
> DeepMind and OpenAI have been investigating approaches from cognitive science
> in recent months.

Really? Got any links? That might be exciting to read.

------
yters
AI will never do what the human mind can do, so the real concern is bad or
malicious human programmers of AI.

------
WhitneyLand
It’s such a thoughtfully reasoned post, I hate to disagree with it, and don’t
even have time now to fully argue its merits.

Say generally available computing power was instantly 1 million times greater.
How much closer would that put us to AGI?

It’s not even clear how much the recent impressive machine learning feats
demonstrated will even serve as a precursor or building block to what the real
AGI solutions are. AGI is so much less of a hard-coded problem than what’s
being done now that the real solutions could require radical changes in
direction. How do we know it’s even fair to use these as part of the argument?

------
leakydropout
_The disasters that may befall us if we fail to narrow this gap are many.
[...] Within prosperous countries, such as the United States, there is a
distinct and growing threat that increased automation, coupled with an
obsolete and aimless system of education, will lead to a restratification of
society in which a large middle class may find itself without suitable
employment and without adequate means of filling its leisure time enjoyably
and constructively._ \-- Social Technology (1966).

 _The median of these final responses could then be taken as representing the
nearest thing to a group consensus. In the case of the high-IQ machine, this
median turned out to be the year 1990, with a final interquartile range from
1985 to 2000. The procedure thus caused the median to move to a much earlier
date and the interquartile range to shrink considerably, presumably influenced
by convincing arguments_ \-- Analysis of the future: The Delphi Method. (1967)

~~~
edanm
I'm not sure what you think these quotes are suggesting. If it's that there
are lots of predictions that AGI is close that haven't been borne out, you're
obviously right, and in no way contradicting the article.

~~~
leakydropout
I just thought these quotes were interestingly contemporary, and could
complement the article.

The article builds a convincing point for itself (at the cost of huge
complexity); it contains no discernible contradictions to me. It is reasonable
to prepare for the possibility of a future event (say, AGI, or Jesus returning
to earth) by thinking about it now.

All interests and future predictions are different, but equally valid. To me,
it feels like the Wright Brothers thinking about rotating safety valves in
space before they have even taken off on their first flight, but that should
not stop the author and supporters in any way: science, futurism, and
philosophy move in small steps, and it may be a good time for some to start
walking. Just make sure to properly define the end goal (AKA, the moment AI
becomes AGI), or we may keep on truckin' forever, never closing the loop of
our hostile AGI-created simulations creating a first friendly AGI.

------
lottin
It seems to me that intelligence is something that needs to be acquired from
other intelligent beings through a lengthy process, thus it requires first and
foremost learning how to interact with people, what we call socialisation.
Therefore I believe if machines ever become intelligent we will absolutely
notice, because we will have to teach them like we teach babies, and we will
have plenty of time to adapt.

------
craigsmansion
AGI is to AI what "Flying cars" are to "atomic power".

The reason luminaries are conservative in their estimates or remain silent is
that capturing the public's imagination is good for funding and recognition
(not a negative assessment, btw).

With enough positive press everything seems possible to the layman, and
creates a belief in the unlimited possibilities of "the future," but also the
inherent "dangers" of this imagined future that "must be taken into account."

Most arguments made in the article fall flat because of false analogies.
Analogies can only ever be used to illustrate, never to derive conclusions
from.

The AI winter is over, and good progress is being made in a lot of fields, but
AGI is nowhere on the horizon. In absence of evidence to the contrary, AGI
remains, for the foreseeable future, in the realm of philosophy, science-
fiction, and regrettably, alarmist articles.

For now, we humans should feel totally safe.

