
AGI Has Been Delayed - chubot
http://rodneybrooks.com/agi-has-been-delayed/
======
Tenoke
Let me honestly summarize this article:

Part 1 - Musk and others said we'll have self-driving cars all over the
place by 2020. The author tweeted that we won't, and discusses that for
roughly half the article.

This part finishes with a quote from Urmson, who expects it to take "up to
30-50 years" before self-driving cars are really common.

Part 2 - the quote above agrees with the author, so he then looks at an
actual forecast from a 2018 “Human Level AI” conference where some/most
attendees think AGI will occur soonish.

For some reason, he decides Urmson is the only relevant expert, and says
these "more large hats than cattle, but [...] people with paying corporate or
academic jobs" from the conference must be wrong because Urmson's prediction
doesn't match theirs.

He then does the same for Kurzweil and FHI - they don't match Urmson's
prediction either.

Thus AGI has been 'delayed'.

_______

The argument is, in my opinion, so badly made that I am wondering whether the
17 people who upvoted it actually read it, upvoted just because they are
anti-AI in general, or upvoted because they like the author for other
reasons.

Edit: This article is also from May, so its appearance on the frontpage,
given the quality, puzzles me further.

~~~
YeGoblynQueenne
I don't think it's very productive to divide the debate about AI and AGI
into "pro-AI" and "anti-AI". Brooks is certainly hard to see as an "anti". He
has a long career in AI research, he's the founder of at least three robotics
companies (that I know of), including the one that created the Roomba,
perhaps the only example of a robot that people actually have in their homes
today, and in general he has made significant contributions to AI. He's best
known for the "subsumption architecture", a robotics architecture that at the
very least shook things up in its time (I personally think it's a load of old
cobblers, but that's not the point).

The situation with the current debate about AI and AGI is that there is an
awful lot of hype and people saying things that make little sense, as the
Nick Cave song goes. That is causing all sorts of problems with public
understanding of the field, and people interested in the subject are
understandably disturbed. Some, like Brooks, try to redress the balance and
show up overblown fantasies of self-driving cars in 5 years or AGI in 10 for
what they are.

This is actually a stance that comes from, at the very least, a genuine
interest in the subject, if not a bit of passion for it. It's exactly
backwards to see it as an attitude hostile to AI. Brooks is clearly motivated
by a wish to reduce the amount of noise and clear the discussion of bullshit.
(And by a wish to brag about his past accomplishments too, indubitably.) That
(the bit outside the parentheses) can only benefit AI research.

~~~
Tenoke
>I don't think it's very productive to divide the debate about AI and AGI
into "pro-AI" and "anti-AI".

I agree, but his article very much plays with the different-camps trope. If
he had made an argument like yours, I would've upvoted it, but the way he
makes his argument comes down to 'this one expert's prediction disagrees with
a bunch of other experts' predictions, thus those other experts are wrong',
which is silly.

------
Beltiras
AGI hasn't been delayed. It's been overpromised. And the proponents will
continue to overpromise right up until it arrives. We need breakthroughs in
AI research on at least two fronts before we can even tackle the AGI
problem.

One is that we need to understand the mechanisms of neural networks at least
an order of magnitude better than we currently do. The best article I know of
that concisely describes NNs is The Neural Network Zoo [0]. I have no idea
what needs to happen for us to collectively understand the mechanisms better.
I think we need an Einstein moment, where some researcher has a happy thought
and the rest of us spend a decade (at minimum) catching up.

The other is that we need to understand the problem we are solving. At its
core it seems very easy to define intelligence, but you very soon realize
that it's an endless Matryoshka with no discernible root. It's very easy to
define intelligence in terms of solving elementary cogitation tasks, and this
is what we are doing writ large. This leads to a culture of solving Weak AI
tasks, hoping that the aggregation of the solutions will lead to AGI. Well,
that will only happen if we are engineering solutions that address the larger
problem, and if we don't understand that problem well enough, finding it will
be a really random occurrence.

[0] [https://www.asimovinstitute.org/neural-network-zoo/](https://www.asimovinstitute.org/neural-network-zoo/)

~~~
zamalek
> we need to understand the mechanisms of neural networks at least an order of
> magnitude better than we currently do

The problem seems more fundamental than this. We may understand intelligence,
but we do not understand human-like intelligence. Human-like intelligence
originates from more than a neural network; there are hormones and other
things involved. Possibly even quantum mechanics, in some of the more esoteric
explanations.

> The other is we need to understand the problem we are solving.

Indeed, we are asserting that we will soon have a solution (AGI) without first
knowing what the problem is (human-like intelligence).

> it's going to be a really random occurrence finding it.

I'm placing my bets on this. Human-like intelligence (and by unavoidable
association, experience) is, by definition, subjective. The tool that we use
to observe and explain the universe around us is science. We have no such tool
for the universe within us.

~~~
hnick
Even dog-like intelligence would be extremely useful.

It's interesting when training a dog. Sometimes you can see the gears turning
and know their thought process and conclusion before they do, since we are
just much smarter. And other times, they come up with something that makes no
sense at all based on their "dog logic". I guess their brain just works
differently.

Anyway, focusing only on humans may be short-sighted. I often understand
systems better by comparing them to similar systems and working out the
differences. If anything bears fruit, I think it'll be the researchers who
are starting small and trying to replicate a worm brain, then building from
there, faster than evolution, because we can.

------
phkahler
>> Now a self driving car does not need to have general human level
intelligence, but a self driving car is certainly a lower bound on human level
intelligence.

This is exactly the misconception that allows people to believe in fully
autonomous cars.

Following a lane is easy (in good weather). Stopping when the car ahead does
is easy. Route planning when you have detailed maps and GPS is reasonable.
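
To show how little the easy part takes, here is a minimal sketch: a
proportional controller on lane offset and heading error, driving a toy
kinematic bicycle model. The gains and the model are illustrative, not from
any real system:

```python
# Toy lane keeping: proportional control on lateral offset and heading
# error, applied to a simple kinematic bicycle model. Illustrative only.
import math

K_OFFSET, K_HEADING = 0.2, 2.0       # made-up controller gains
V, WHEELBASE, DT = 10.0, 2.7, 0.05   # speed (m/s), wheelbase (m), step (s)

def steer(offset, heading):
    """Steering angle (rad) from lane-centre offset (m) and heading error (rad)."""
    return -(K_OFFSET * offset + K_HEADING * heading)

offset, heading = 1.0, 0.0           # start 1 m off-centre, pointing straight
for _ in range(200):                 # simulate 10 seconds
    delta = steer(offset, heading)
    offset += V * math.sin(heading) * DT
    heading += (V / WHEELBASE) * math.tan(delta) * DT

print(f"offset after 10 s: {offset:.3f} m")  # settles near the lane centre
```

A dozen lines handle the nominal case; none of this helps with the flag man.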

Understanding what unusual things might be lying in the road and what to do
about them is hard. Navigating a construction zone with a flag man directing
you on the opposite shoulder requires a brain, not pattern matching. Opening
the window and following verbal directions...

Coping with the range of scenarios we encounter when driving requires AGI.

~~~
xyzzy_plugh
I used to disagree, then one day I was showering and _thought_ I heard
something I couldn't possibly have heard (a train crossing bell). A false
positive: my audio neural net heard some sound and, with all the distortion,
got it wrong. My other faculties quickly ruled it out and concluded both that
a train in my apartment was unlikely and that there is no train crossing
there anyway.

This problem (pattern matching, cross referencing experiences, deducing truth)
mirrors the SDV problem pretty well. Can they tell that the plastic bag on the
road is safe to drive over? Can they infer that if the truck in front of them
is overflowing with poorly-secured tools and equipment/soon-to-be debris, to
maybe keep more distance? If a human waves to indicate they should go around,
or the road ahead is closed, or they need help, will the car understand?

The way to solve self-driving cars in a few years is to start building roads
and infrastructure for such automation. Maybe even have a human driving
multiple cars, computer assisted, simultaneously from a remote office.

~~~
missosoup
OT:

I don't know if this effect has a name, but the reason you hallucinated a
train crossing bell is that a shower sounds a lot like white noise, and white
noise contains all sounds. Your brain 'found' a bell in it and selectively
filtered it out of the rest. It's somewhat similar to the Ganzfeld effect.
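
A crude way to see this mechanically: run a matched filter (the standard
"is this pattern present?" detector) over pure white noise, and some windows
score high enough to look like real detections. A minimal sketch, with
entirely illustrative numbers and a made-up "bell" template:

```python
# Matched filter over pure white noise: cross-correlate a "bell" template
# against noise and count how many windows look like detections.
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                                    # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
bell = np.sin(2 * np.pi * 880 * t) * np.exp(-20 * t)  # decaying 880 Hz tone
bell /= np.linalg.norm(bell)                 # unit-norm template

noise = rng.standard_normal(fs * 5)          # 5 s of white noise ("the shower")
score = np.correlate(noise, bell, mode="valid")

# White noise has energy at every frequency, so the template always
# correlates a little; the loudest peaks cross a naive threshold.
print("windows above 3 sigma:", int((score > 3 * score.std()).sum()))
```

The detector isn't broken; the strongest chance correlations simply cross
whatever threshold you set.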

If you ever experience a new auditory environment like a factory floor or
hospital, you will keep hearing sounds from that environment for a few days if
exposed to white noise.

~~~
tluyben2
That sounds interesting; when I'm in the shower I hear a lot of things over
time (all of them friendly, nothing aggressive, strangely enough, so far
anyway): music (quite complete songs, as if there is music on in the house),
dogs trying to get in at the door, people calling me, people on the phone.
When I turn the shower off, there is silence (and the dogs are already inside
as well). I have tinnitus; might that be making it worse?

------
sigsergv
Self-driving cars are not realistic in the form the general audience
understands. Most people think an autonomous car is a regular car with some
AI added, but that's not the case, and it's not a solution to the general
problem. Regular cars are not an advanced form of horse-drawn carriage; they
are far more different and complex than people imagine, because a “car” is
part of an extremely complex infrastructure that completely replaced the old
one ~70 years ago: gas stations, light poles, traffic rules, changes in laws,
education, etc.

Autonomous cars will have to introduce a new global infrastructure that will
inevitably be incompatible with the current one. I think human-driven
vehicles won't be allowed on AI-roads. An AI-driven car doesn't require
optical sensors to detect road signs, traffic lights, etc.; those things will
be replaced by multiple invisible RF modules. And so on. AGI is just not
needed here; look at ants/bees: they are “dumb” but build and maintain
complex things.

~~~
antupis
Also, we don't have the same kind of tolerance for deadly accidents as
before. Look at airplanes: flying was rather dangerous (5.2 deaths per
100,000 flight hours in the 1950s), but there was still an industry around
flying.

------
himinlomax
The fundamental problem with autonomous vehicles is that I haven't seen (and
can't think of) a gradual rollout plan. As it stands, the idea seems to be
that as soon as the AI is good enough, autonomous cars will flood the
streets. But in the meantime, zero.

Well, that's not going to work, and not just for technical reasons.

That reminds me of IPv6. There is no technical challenge to implementing it,
yet we're still running out of IPv4 addresses, simply because the designers
had no plan for gradual adoption. (Dual stack was not such a plan; in fact,
dual stack was an effective way of delaying adoption compared to any other
design choice.)

People (users, insurers, legislators, regulators, lenders, investors ...)
won't be confident in autonomous cars if they have no practical experience to
look at and build trust on. And even if they are, at the first significant /
publicized accidents, the cars will be demonized, banned or penalized.

~~~
cjg
The first few autonomous vehicles have clear utility. The first few machines
on IPv6 did not.

There's no network effect holding back autonomous vehicles.

------
fdsa_111111111
Maybe people working in ML separate AGI from consciousness / self-awareness
/ sentience, but I don't think anyone else does. (If so, what is _general_
non-human intelligence?)

Saying AGI is around the corner is like saying "cloud backups for
consciousness" is just around the corner. We don't yet:

1. Understand the location of consciousness (if that even makes sense). We
think it's in the brain area as that seems to be our CPU. Our knowledge stops
there.

2. Understand the physics of consciousness. Is it some sort of emergent-y
brainwave pattern thing? Again, pure speculation.

So, we're assuming:

1. Machine learning is in any way analogous to the processes our
consciousness arises from. Which is just horse shit, repeated because someone
decided to call college-level math a "neural network".

2. If we do enough ML simultaneously we'll cross some threshold into
sentient-computer land.

3. Or, there exist "magical algorithms" that will do the same.

A self-driving car is as self-aware as a calculator. It's not going to start
learning, in earnest, or solving problems at the level humans can. Ever.

Our level of problem solving requires conscious awareness (theory of mind,
imagination, et al.). We won't be able to recreate that until we understand
our own. There's a distinct possibility that it is, by definition,
unknowable.

But, hey, let's keep bullshitting greedy investors and scaring technophobes.
The computers are taking over!!

~~~
olalonde
The idea that the human brain must be reverse-engineered in order to achieve
AGI is speculation. It might be true, but we don't know that. To quote
"Artificial Intelligence - A Modern Approach":

> The quest for “artificial flight” succeeded when the Wright brothers and
> others stopped imitating birds and started using wind tunnels and learning
> about aerodynamics.

------
sauwan
Based on all the negativity I've seen about self-driving cars, I would have
guessed we were further along the hype cycle than where Gartner puts us now:

[https://blogs.gartner.com/smarterwithgartner/files/2019/08/C...](https://blogs.gartner.com/smarterwithgartner/files/2019/08/CTMKT_741609_CTMKT_for_Emerging_Tech_Hype_Cycle_LargerText-1.png)

------
anon234345566
Given the current state of the art in IT, anything close to an AGI system
would most probably need capacity close to a couple of datacenters (maybe
with a nuclear reactor next to them to provide power).

Before stating "AGI was delayed" I would double-check what's running in any
massive DC complex built in the last 5 years and, more interestingly, how
much power they are using compared to standard datacenters.

If AGI is actually "discovered" at some point, it will be spotted via leaked
information about power consumption (any money dump into that kind of
technology would probably be seriously disguised).

------
jdkdnfndnfjd
> so what does this say about predictions that AGI is right around the corner?

Literally nothing

------
eps
Apparently, AGI stands for "Artificial General Intelligence".

~~~
ramblerman
I think you are being 'unfairly' downvoted because AGI is very commonly used
on here, or at least it comes up a lot.

But yes, AGI is the holy grail of AI research. Currently all AI successes
have come from training computers to be really good at one thing: chess, Go,
driving, etc. And in many of these domains AI has outperformed humans by a
large margin.

But take those AI systems and ask them to do something else outside of their
specialty and their knowledge doesn't transfer. In essence they are really
just optimized functions for a very specific set of inputs.
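
A toy way to see "optimized functions for a very specific set of inputs":
fit a model on one input range and then ask it about another. A minimal
sketch, with a polynomial standing in for any narrow model:

```python
# A narrow "model": a polynomial fit to sin(x) on [0, 2*pi] only.
import numpy as np

x_train = np.linspace(0, 2 * np.pi, 200)
model = np.polynomial.Polynomial.fit(x_train, np.sin(x_train), deg=9)

# Excellent inside its specialty, nonsense one step outside it.
print(model(np.pi / 2), "vs", np.sin(np.pi / 2))  # close to 1.0
print(model(4 * np.pi), "vs", np.sin(4 * np.pi))  # wildly wrong vs 0.0
```

Nothing in the fitted coefficients "knows" it was approximating a sine, so
there is nothing to transfer.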

AGI would be an AI that increases in intelligence across multiple domains
and can respond to novel problems (like a human).

note: you could argue the real definition of AI is actually AGI but that is a
different discussion :)

~~~
tialaramex
Right, this is why Turing's "imitation game" test matters. The humans won't
accept that a machine is intelligent; whatever the machine does, it's just
another clever trick that isn't "real" intelligence. So you have to humiliate
the humans. You have to rub their faces in the dirt or they'll disregard all
sense rather than admit that they weren't all that. It isn't enough to beat
them at chess, at Go, at a million other problems; you have to pretend to be
a human so well that the other humans don't know you're a machine, and only
then are they forced to confront the reality that they're just a half-arsed
compute substrate made of warm biological soup and not, after all, anything
special.

The animal intelligence researchers face a similar problem.

~~~
TheOtherHobbes
The idea that human intelligence is defined by skill in board games or
terminal chat is a hilariously inept failure of human intelligence.

Turing's test is useless for real AGI, because human-level intelligence is
embodied. That means a human-equivalent AGI has to be able to improvise
solutions with physical objects, parse body language, _generate_ body
language, understand social and cultural expectations in different situations,
and do all of this while recognising and generating everyday speech in at
least one human language.

These are all defining basic skills for humans. Literally every human of
school age and over can do them at a basic level. And gifted or exceptional
humans can take these "simple" challenges to very advanced levels.

A chatbot doesn't come close to approaching them. Nor does a chess-bot or a
go-bot. Nor does an equation solver.

AGI will fail until AI research stops trying to build a better AI research
nerd, and starts trying to understand what applied human intelligence looks
like in the wild.

~~~
tialaramex
> Literally every school age and over human can do them at a basic level

This is the other side of the same coin. No. These "defining basic skills"
aren't defining anything, unless you're prepared to be a fascist who
eradicates the people who defy your beliefs about embodiment.

Understanding social expectations is tricky and lots of humans can't do this
at all. They require lifetime care as a result, but they are still definitely
human and there's no reason to believe they lack general intelligence although
it may be stunted if their problems make it hard to undertake activities that
let intelligence thrive.

Although human infants do try to bootstrap language, the bootstrap process
will not spontaneously produce a working human language from zero in the
absence of exposure to existing human language. Humans brought up without
language (typically through extreme circumstances because doing this
experimentally would be unethical) create a proto-language but not a full-
blown human language. We _think_ that a few generations of humans otherwise
unexposed to external culture would turn this into a full-blown human language
through some mechanism akin to creolisation but we can't check because - as
mentioned - it would be grossly unethical. So, there definitely are humans
(though not many) who don't recognise and generate "everyday speech" even
taking that very broadly to include sign and the procedure used to talk to
deaf-blind people.

Elderly people also often cease to be able to generate speech, or give no
sign that they continue to understand it, while apparently remaining
intelligent.

