
My AI Timelines Have Sped Up - T-A
https://www.alexirpan.com/2020/08/18/ai-timelines.html
======
umvi
Dang, 90% chance we'll have AGI* in 50 years? I would bet money against that
prediction if I could. Any bookies out there? There's just no way AGI can
replace blue-collar workers like plumbers and electricians that quickly.
Programmers, also unlikely. Sure, you can have AIs generate more code for you,
but then you'll just have the programmers working one abstraction layer up
from that. Also, AI-generated entertainment always has sucked and always will. I
will eat my hat if an AI in 50 years can generate a full length, original
hollywood-esque movie that I enjoy. Heck, I'll even settle for a book that can
rival human authors.

Sure, I could see a lot of medical professions and other "knowledge bank" type
jobs being replaced. I've always thought optometrists could largely be
replaced with "measure my prescription" booths controlled by a computer. But
anything requiring any creative juice whatsoever will likely not be replaced.

*Yes, it's not true AGI but AI that replaces 95% of all jobs

~~~
melling
In 1900 the earth’s population was 1.6 billion. In 1903 the Wright Brothers
built the first airplane. In 1947, humans flew at supersonic speed for the
first time. That was in the span of 50 years. 22 years later, we were on the
moon.

A lot can happen in 50-70 years.

We now have 7 billion people, with more of the world coming online to do R&D.
China, specifically, has made it a goal to lead the world in AI by 2025.

India’s economy should grow over the next two decades, and it will also
become a world leader.

With so many resources, the world should easily advance more in the next 50
years than it did in the last 100.

~~~
nitwit005
In the '50s and '60s, people in aviation and space often thought that things
would keep improving exponentially, as they'd lived through such incredible
advancements. Instead, things slowed enormously, despite vast efforts put in.

We've lived through enormous computing advances, but it's been fairly obvious
for some time that hardware improvements are slowing.

I'm sure there will be amazing advancements in the next 50 years, but I expect
a lot of the progress to be in fields that are either currently unknown, or
seem unimportant today. Those new fields will see better return on investment.

~~~
api
> despite vast efforts put in.

I disagree with that part. Things slowed dramatically because we stopped
putting extreme amounts of effort in, largely because there wasn't enough
market demand for aviation or space flight beyond 1970s technology... for a
while.

~~~
nradov
The worldwide market for aviation and space launches is far larger now than in
the 1970s.

------
causality0
I think there are fundamental questions still unanswered that could put the
brakes on the whole thing.

For example, I would put forth that for a given problem, there is a lower
limit to how simple you can make the pieces you divide it into. You can't
compute the trajectory of a thrown baseball using only a single transistor.
Granted, most problems can be divided into _incredibly_ simple steps. The
question we face is: is AGI reducible enough for human beings to create it?
Is the minimum work-unit for planning and constructing an AGI small enough to
be understood by a human?
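
As a toy illustration of that decomposition claim, here's a minimal sketch
(the initial velocity, step size, and drag-free physics are all invented for
illustration) that computes a thrown baseball's trajectory out of nothing but
single multiplies and adds:

    # Toy illustration: a baseball's trajectory reduced to additions and
    # multiplications, the kind of "incredibly simple steps" described above.
    # The initial velocity and drag-free physics are illustrative choices.
    def trajectory(vx, vy, dt=0.01, g=9.81):
        """Euler integration of projectile motion; returns (x, y) points."""
        x, y, points = 0.0, 0.0, []
        while y >= 0.0:
            points.append((x, y))
            x = x + vx * dt   # one multiply, one add
            y = y + vy * dt   # one multiply, one add
            vy = vy - g * dt  # one multiply, one subtract
        return points

    path = trajectory(vx=30.0, vy=15.0)
    print(len(path), "steps, range ~", round(path[-1][0], 1), "m")

Each update step is trivially simple; the open question here is whether AGI
admits any decomposition whose minimum work-unit stays that legible.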

That's of course putting aside the problem of scale. The neocortex alone has
160 trillion synapses, each of which exhibits behavior far more complex than a
single transistor. You could argue that for many commercially-viable tasks
we've found much better ways than nature, and that's true, but AGI is a
different game entirely. Our current AI methodologies may be as unrelated to
AGI as a spear is to a nuclear missile despite them both performing the same
basic function.

~~~
lostmsu
I think it is incorrect to say that synapses exhibit more complex behavior in
this context. How much of their behavior is actually part of cognition, as
opposed to the needs of their own survival?

~~~
jugg1es
The synapse isn't the complex part of the brain. The synapse is just the part
of the neuron membrane that is responsible for input and output. The complex
part is how the synapse affects the potential inside the neuron itself. The
difference between a transistor and a neuron is that a transistor has
discrete, digital states; a neuron is much less deterministic than that. If
you want to find a part of the brain that might correspond to a transistor,
it would be a network of specific neurons that fire (or, just as importantly,
don't fire) within an even larger superstructure. Ultimately, you just can't
draw a clean line between a transistor and the brain at all.

------
ouid
"10% chance by 2045, 50% chance by 2050"

I'm really pessimistic about the next 25 years, but really optimistic about
the 5 after that.

~~~
computerphage
From the post: "I also noticed that my 2015 prediction placed 10% to 50% in a
5 year range, and 50% to 90% in a 20 year range. AGI is a long-tailed event,
and there’s a real possibility it’s never viable, but a 5-20 split is absurdly
skewed. I’m adjusting accordingly."

------
bra-ket
Wishful thinking: make large and dumb models even larger, throw in more data
and it will somehow magically lead to artificial common sense. The only thing
missing is the philosopher's stone and the right mix of hyperparameters.

~~~
R0b0t1
> make large and dumb models even larger, throw in more data and it will
somehow magically lead to artificial common sense.

What exactly do you think a human brain is?

~~~
heavyset_go
Not an organ that performs backpropagation or gradient descent.

~~~
AQXt
What if I tell you it is? Will that information back-propagate into your
brain?

~~~
heavyset_go
I'd ask you to show me how and where backpropagation takes place in the brain.
Then I'd ask how thin you're willing to stretch the "neural net" metaphor to
re-apply it to biology.
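
For readers following along, here's a minimal sketch of the procedure under
debate: fitting a single weight by gradient descent on a squared error. The
data and learning rate are invented, and it only illustrates what the term
refers to, not any claim about the brain:

    # Minimal gradient descent on one weight: the procedure the thread is
    # debating whether the brain performs. Data and learning rate are made up.
    data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # target relationship: y = 3x

    w, lr = 0.0, 0.02  # initial weight guess, learning rate
    for step in range(100):
        # d/dw of the squared error (w*x - y)^2 is 2*(w*x - y)*x, summed over
        # the data; backpropagation is this same chain-rule gradient
        # computation applied through many layers.
        grad = sum(2 * (w * x - y) * x for x, y in data)
        w -= lr * grad  # step against the gradient

    print("learned w =", round(w, 3))  # converges to 3.0

Real networks chain this gradient computation through many layers; the
disagreement above is about whether anything in biology does the equivalent.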

------
lostmsu
This basically matches the timeline I arrived at after seeing GPT-2 in
action, except 10% by 2045 is too low (coincidentally, he realized that it
also mismatches 50% by 2050).

I believe most people underestimate the chances of AGI arriving because they
overestimate humans. The famous post "Humans who are not concentrating are
not general intelligences" captures most of the point.

~~~
Cyphase
[https://www.skynettoday.com/editorials/humans-not-concentrating](https://www.skynettoday.com/editorials/humans-not-concentrating)

------
jdkee
It is a race between AI and the massive disruption of civilization due to
climate change and the mass extinction of ecosystems. The two outcomes are
not mutually exclusive.

~~~
dane-pgp
I agree, and I think that the Venn diagram of possible futures to consider
includes not just AI, climate change and mass extinction of ecosystems, but
also nuclear war.

~~~
monadic2
Or even just nuclear accidents.

------
zitterbewegung
When you make timelines like this, everyone remembers when you're right and
no one remembers when you're wrong.

~~~
paulcole
Honestly nobody’s going to remember this prediction either way.

~~~
EchoAce
Really? He’s a researcher at Google Brain; it’s not like his words have no
weight.

~~~
paulcole
Big echo chamber here. 99+% of people (myself included) have never even heard
of Google Brain.

~~~
crittendenlane
I mean, I have no idea which groups are the expert climate science groups,
but I still trust that they have a better idea of what's going on than I do.
In this case, people here just happen to know that this is one of the expert
AI groups.

------
amanaplanacanal
I don’t credit any predictions about AGI. We don’t know enough about how
_human_ intelligence works. On the software side, we don’t know what pieces we
are missing or what discoveries need to be made to make progress.

There is no way to predict when scientific discoveries will happen before they
happen. This is a fool's errand.

------
AQXt
> "For this post, I’m going to take artificial general intelligence (AGI) to
> mean an AI system that matches or exceeds humans at almost all (95%+)
> economically valuable work."

One doesn't have to be "economically valuable" to be considered intelligent.
Think of philosophy majors, for instance.

Now, imagine an AI that could replicate the intelligence of a 6-year-old; it
wouldn't be "economically valuable" at first, but it would keep learning,
year after year, until it exceeded humans.

Wouldn't that be a prime example of an AGI? Or would it only be accepted as
such when it matched or surpassed humans "at almost all (95%+) economically
valuable work"? What if it decided to pursue a degree in philosophy?

When it happens, we will be _way_ past the first AGI, and entering the
singularity.

------
red2awn
Narrow AI has made a lot of progress in recent years, but in my opinion we are
completely lost regarding AGI.

~~~
Findeton
Not completely, see this paper on a new neurobiological theory of
consciousness
[https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613\(20\)30175-3](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613\(20\)30175-3)
and this Twitter thread discussing it
[https://twitter.com/jaaanaru/status/1298164256777658370](https://twitter.com/jaaanaru/status/1298164256777658370)

------
stuhlmueller
Alex Irpan's post inspired the AGI timelines discussion at
[https://www.lesswrong.com/posts/hQysqfSEzciRazx8k](https://www.lesswrong.com/posts/hQysqfSEzciRazx8k)
which shows 12 people's timelines as probability distributions over future
years and their reasoning behind the distributions.

(I work on Elicit, the tool used in the thread.)

~~~
1wheel
Have you tried plotting the CDFs? Might be easier to read than the overlaid
areas.

~~~
stuhlmueller
Good idea. We'll integrate that into Elicit in a few weeks. In the meantime,
here's a Colab that shows the CDFs:
[https://colab.research.google.com/drive/1pl3fIaeIKIS77IDM_rnaFUyPpyRepFmT?usp=sharing](https://colab.research.google.com/drive/1pl3fIaeIKIS77IDM_rnaFUyPpyRepFmT?usp=sharing)
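
For anyone who wants to try this without the Colab, here's a rough sketch of
the CDF idea (this is not the Colab's actual code, and the sample forecasts
are invented): sort each forecaster's sampled "AGI year" values and plot the
running fraction:

    # Sketch of plotting forecasts as CDFs instead of overlaid densities.
    # Not the linked Colab's code; the sample distributions are invented.
    import numpy as np
    import matplotlib.pyplot as plt

    forecasts = {
        "forecaster_a": np.random.normal(2050, 10, size=1000),
        "forecaster_b": np.random.normal(2070, 20, size=1000),
    }

    for name, samples in forecasts.items():
        years = np.sort(samples)
        cdf = np.arange(1, len(years) + 1) / len(years)  # empirical CDF
        plt.plot(years, cdf, label=name)

    plt.xlabel("year")
    plt.ylabel("P(AGI by year)")
    plt.legend()
    plt.show()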

------
Pandabob
Here's a random collection of thoughts that I have about the progress of
Artificial Intelligence (AI):

Machine learning (ML) and Deep Learning (DL) in particular benefit from fast
computer chips. The most impressive gains in AI in the 2010s (Computer Vision
& Natural Language Processing) were made thanks to DL. There's a famous post
by AI researcher Rich Sutton, which summarises this fairly neatly [1].

Now, this connection between DL and fast chips would tie the progress of AI
pretty tightly to the progress of Moore's Law [2]. There's some compelling
evidence that Moore's Law is at least slowing down [3]. On the other hand,
there are industry experts like Jim Keller who pretty strongly disagree with
these assessments [4] and even TSMC seems to be bullish on being able to keep
up with Moore's Law [5].

Some estimates of GPT-3's training cost put it in the range of ~5-10 million
dollars [6]. It's hard to say how big an impact GPT-3 will have on the
economy. It's probably safe to assume, though, that OpenAI is already working
on GPT-4. The jump in parameter count from GPT-2 to GPT-3 was roughly 100x
(1.5 billion vs. 175 billion), and I'm assuming the price of training the
model increased in roughly the same proportion (I might be wrong here, and if
anyone can point me to evidence on this, it would be much appreciated). With
these assumptions, and provided that GPT-4 isn't hit by diminishing returns
from adding more parameters (big if), the price for it would be somewhere
between 500 million and a billion dollars. That's still not an insane amount
of money to put into R&D, but you'd probably want it to at least be somewhat
economically viable to justify putting a hundred billion dollars into GPT-5.
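
To make that back-of-the-envelope explicit, here's the same arithmetic as a
sketch; both assumptions (cost scaling linearly with parameter count, and the
next jump repeating the GPT-2 to GPT-3 ratio) are the speculative ones stated
above:

    # Back-of-the-envelope from the paragraph above. Assumes training cost
    # scales linearly with parameter count and that the GPT-3 -> GPT-4 jump
    # repeats the GPT-2 -> GPT-3 ratio; both assumptions are speculative.
    gpt2_params = 1.5e9
    gpt3_params = 175e9
    gpt3_cost_low, gpt3_cost_high = 5e6, 10e6  # cited estimate, USD [6]

    ratio = gpt3_params / gpt2_params  # ~117x
    low, high = gpt3_cost_low * ratio, gpt3_cost_high * ratio

    print("param ratio: %.0fx" % ratio)
    print("extrapolated GPT-4 cost: $%.0fM to $%.1fB" % (low / 1e6, high / 1e9))

At ~117x, the cited cost range lands roughly in the 500-million-to-a-billion
ballpark mentioned above.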

All this is to say that I find making predictions about the progress of AI
really hard, due to the large amount of uncertainty around the field and its
underlying technologies (mainly the hardware).

[1]:
[http://www.incompleteideas.net/IncIdeas/BitterLesson.html](http://www.incompleteideas.net/IncIdeas/BitterLesson.html)

[2]:
[https://arxiv.org/pdf/2007.05558.pdf](https://arxiv.org/pdf/2007.05558.pdf)

[3]:
[https://p4.org/assets/P4WS_2019/Speaker_Slides/9_2.05pm_John_Hennessey.pdf](https://p4.org/assets/P4WS_2019/Speaker_Slides/9_2.05pm_John_Hennessey.pdf)

[4]:
[https://www.youtube.com/watch?v=oIG9ztQw2Gc](https://www.youtube.com/watch?v=oIG9ztQw2Gc)

[5]:
[https://www.nextplatform.com/2019/09/13/tsmc-thinks-it-can-uphold-moores-law-for-decades/](https://www.nextplatform.com/2019/09/13/tsmc-thinks-it-can-uphold-moores-law-for-decades/)

[6]:
[https://venturebeat.com/2020/06/11/openai-launches-an-api-to-commercialize-its-research/](https://venturebeat.com/2020/06/11/openai-launches-an-api-to-commercialize-its-research/)

------
alpineidyll3
The power cost of transistorized AGI will never compete with meat neurons,
ever. It's still off by like 9 orders of magnitude, and worsening.

~~~
computerphage
So, you think the median estimate (the 50% estimate) is 25 billion years out?

------
ddmma
COVID-19 sped up quite a few digital transformations, much as wars have
created infrastructure and advanced technologies. This might be the stone age
of AI, but one day people will build the equivalent of a ‘particle collider’:
a planetary brain.

