
Artificial intelligence risk research - BenjaminTodd
https://80000hours.org/career-guide/top-careers/profiles/artificial-intelligence-risk-research/
======
rhaps0dy
> Be sure to de-lurk if that's the case!

I guess. A lot of people already did.

[http://lukemuehlhauser.com/if-youre-an-ai-safety-lurker-now-would-be-a-good-time-to-de-lurk](http://lukemuehlhauser.com/if-youre-an-ai-safety-lurker-now-would-be-a-good-time-to-de-lurk)

------
andreyf
_The possibility of human-level artificial intelligence poses significant
risks to society [...]_

No. It simply does not, outside the realm of science fiction. Understanding
HGI will help us understand enough about ourselves, our motivations, and our
ethics that it won't be an issue.

Humans augmented by machine learning systems, on the other hand, are here now,
and they are a wholly different question. Institutions empowered by AI research
have a power-to-person ratio unlike anything in the past.

~~~
BenjaminTodd
The purpose of the profile isn't to argue a risk exists. We largely defer to
the people we take to be experts on the issue, especially Nick Bostrom. We
think he presents compelling arguments in _Superintelligence_, and although
it's hard to say anything decisive in this area, if you think there's even
modest uncertainty about whether AGI will be good or bad, it's worth doing
more research into the risks.

If you haven't read Bostrom's book yet, I'd really recommend it.
[http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742)

~~~
manish_gill
I read this book. I got about four chapters in before I had to give up at the
sheer ridiculousness of the whole thing. The problem with this entire line of
reasoning is that, at this point, it is nothing more than a thought experiment.
Many of the key underlying assumptions required for Artificial General
Intelligence simply have not been realised, and while Weak AI is progressing
at a strong rate, we are hardly anywhere close to a point where we should
start worrying about all this stuff.

It's about as likely as a meteor hitting the planet and wiping out all human
life. Possible? Sure. Should I be panicking about it right now? Nah.

The book is nonsense. I'll start paying attention when someone with real
experience in the field of AI research (and I'm not talking about charlatans
like Yudkowsky here, but someone like, say, Norvig) comes out and says it's a
reasonable concern today.

~~~
BenjaminTodd
The expert consensus puts roughly a 10% chance on human-level AI within 10 years:
[http://www.givewell.org/labs/causes/ai-risk/ai-timelines](http://www.givewell.org/labs/causes/ai-risk/ai-timelines)

Many computer science professors have publicly said they think AI poses
significant risks. There's a list here:
[http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/](http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-risk/)

Also see this open letter, signed by hundreds of experts:
[http://futureoflife.org/ai-open-letter/](http://futureoflife.org/ai-open-letter/)

~~~
manish_gill
Your last link, the open letter, says nothing about human-level or
greater-than-human-level AI. Just "robust and beneficial" usage of AI. In all
likelihood that means current AI technology, and the letter is aimed (I'm
assuming) at people trying to use these techniques in things such as modern
weapons systems. While that is of course a concern, it's not the same as the
concern about a Skynet-like scenario.

Some experts in the second link you gave are concerned, sure. But I can
probably find an equal number who dismiss it as well. There isn't a clear
consensus over AGI. I still remain skeptical. Same with your first link, which
tries to "forecast" AGI. People can't forecast next month's weather correctly,
so forgive me for not believing in a 10% chance in 10 years.

Actually, I take back my appeal-to-authority argument in its entirety, because
I just remembered that the first thing I saw in my AI class was a video of
experts claiming exactly the same thing. The video was from the 50s.

EDIT: Found it:
[https://www.youtube.com/watch?v=rlBjhD1oGQg](https://www.youtube.com/watch?v=rlBjhD1oGQg)

~~~
BenjaminTodd
The letter isn't (just) about modern weapon systems. It was put together by
this group: [http://futureoflife.org/ai-news/](http://futureoflife.org/ai-news/)

Also, no one is worried about a Skynet scenario. The worrying scenario is
simply any powerful system that optimises for something different from what
humans want.
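
To make that concrete, here's a toy sketch of Goodhart's law (my own
illustration, not from the profile): an optimiser climbs a proxy objective
that only approximates what we actually want, and past a point the proxy
keeps rising while the true value falls.

    # Toy illustration of a misaligned objective (Goodhart's law).
    # Both functions are made up purely for illustration.

    def true_value(x):
        # What humans actually want: peaks at x = 1, falls off after that.
        return x - 0.5 * x ** 2

    def proxy_reward(x):
        # What the system is told to maximise: grows without bound.
        return x

    x = 0.0
    for step in range(10):
        x += 0.5  # hill-climb on the proxy, blind to the true value
        print(f"x={x:.1f}  proxy={proxy_reward(x):.1f}  true={true_value(x):+.2f}")

True value peaks at +0.50 around x = 1.0, but the optimiser keeps going: by
x = 5.0 the proxy reads 5.0 while the true value has collapsed to -7.50.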

Second, the point is that even uncertainty is enough for action. For AI not to
be a problem, you'd need to be _very confident_ that it will occur a long way
in the future, and that there's nothing we can do until it's closer. As you've
said, we don't have confidence in the timeline. We have large uncertainty. And
that's _more_ reason for action, especially research.
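
A back-of-the-envelope version of that argument, with every number
hypothetical, just to show the shape of the reasoning:

    # Expected-loss sketch: even a small probability of catastrophe can
    # justify relatively cheap research. All numbers are hypothetical.
    p_catastrophe = 0.05        # subjective probability of a very bad outcome
    loss = 1e9                  # badness of that outcome, arbitrary units
    risk_reduction = 0.01       # fraction of the risk research might remove
    research_cost = 1e4         # cost of the research, same units

    expected_benefit = p_catastrophe * loss * risk_reduction
    print(expected_benefit)                  # 500000.0
    print(expected_benefit > research_cost)  # True

The probability estimate could drop by an order of magnitude and the
conclusion wouldn't flip, which is why uncertainty alone doesn't excuse
inaction.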

Consider analogously:

"We've got no idea what the chance of run-away climate change is, so we
shouldn't do anything about it."

Seems like a bad argument to me.

~~~
manish_gill
Except, in the case of climate research, we have a plethora of evidence of the
possible harmful effects, and we can see them happening today. Everything
about the potential harmful effects of AI is pure conjecture, because no
human-level general intelligence exists today. It's, once again, something
philosophers like Bostrom will make a career out of.

I'm 100% with Torvalds on this when he laughs at the preposterous notion that
AI will become a doomsday scenario. I think it'll become more and more
specialised, branch out into other fields, and become reasonably good. But
there's a huge leap from there to HGI.

> any powerful system that optimises for something that's different from what
> humans want.

Except this notion rests on the premise that humans will not be in full
control, which leads to the exponential-growth argument, which leads back to
the Skynet-like scenario.

> the point is that even uncertainty is enough for action

And that action is... what, exactly? People won't stop building intelligent
systems. There is no real path from where we are to HGI, so it's not as if
researchers have a concrete roadmap. What exactly does this research look
like?

> "We've got no idea what the chance of run-away climate change is, so we
> shouldn't do anything about it."

Extremely poor analogy. We have decades' worth of concrete data telling us the
nature and reality of climate change. We _demonstrably_ know it's a threat.
Can you say the same about AI?

Also, I'll take the time to reiterate how deeply skeptical I remain of groups
like MIRI, which are spearheaded by people who don't believe in the scientific
method, believe in stuff like cryonics, have a history of trying to profit off
someone else's copyrighted material, and have somehow managed to convince a
whole lot of people that donating to them is the best way to fight off the AI
doomsday scenario. People should do their research before linking to stuff
like that. :(

~~~
BenjaminTodd
I'm not saying we don't know whether climate change poses a tail risk (it
obviously does). I'm just saying that uncertainty isn't a good reason to avoid
action.

In general, if there's a poorly understood but potentially very bad risk, then
(a) more research to understand the risk is a really high priority, and (b) if
that research doesn't rule out the really bad scenario, we should try to do
something to prevent it.

With AI, unfortunately, we can't wait until the evidence of harm is well
established, because by then it could be too late.

What AI risk research could involve is laid out in detail in the link.

~~~
argonaut
"poorly understood but potentially very bad risk" is something you could say
about the risk of an alien invasion.

