
Superintelligence: The Idea That Smart People Refuse to Think About - Moshe_Silnorin
https://medium.com/@LyleCantor/superintelligence-the-idea-that-smart-people-refuse-to-think-about-be9dae3b8d62#.in3snbp5v
======
Analemma_
I could be misrepresenting Maciej here (idlewords, please correct me if so),
but my understanding of that talk was that "superintelligence is not worth
worrying about" was only part of it. A lot of it was more about the
deleterious knock-on effects that seem endemic to AI worriers, developers, SV
bigwigs, and the rationalist community. A lot of the shitty behavior of these
groups, that Maciej has been hammering on for some time now in previous talks,
is closely correlated with belief in AI X-risk. He seemed to be claiming that
the relationship is not just correlation but causation. That may or may not be
true, but even if it isn't, we should work on fixing the behavior all the same. To wit:

1. Stop treating every problem in the world as an engineering problem to be
solved with clever software, and realize that people and social problems are
more complex than that. This is one of the motivating factors behind wanting
AI (so we can turn social problems into programming problems) and leads to
megalomania on our part and deep resentment on everyone else's part.

2. Similarly, stop thinking us developers are the noble heroes who are going
to save the world from the ignorance of the masses. This is both dead wrong
(we are in a rapacious capitalist industry that mostly wants to eliminate
anything that stands in its way, whether or not the impediment has good
reason) and again is building resentment toward us.

3. Start worrying more about the extremely serious problems that are being
caused by data retention and machine learning _right now_, like the permanent
loss of privacy, increasing wealth consolidation in the hands of a few
capitalists, and the "laundering" of human bias through supposedly-but-not-
actually-neutral algorithms.

4. Realize that whatever neurons in our brains are lit up by religious belief
don't necessarily require a "religion" in the traditional sense of the word,
and that a lot of this eschatological transhumanist and AI X-risk stuff seems
to slot into the same pathways, especially for very smart engineers. That
doesn't make them _wrong_ necessarily, but it does mean their claims should be
analyzed with the same bias toward extreme skepticism that most of us would
automatically use with religious claims.

I suspect Maciej would be much less distressed by AI worry if the AI worriers
also concerned themselves more with 1 through 4. At that point it would be at
worst a harmless distraction, and maybe even beneficial if AI X-risk is in
fact a legitimate concern. That, I think, would be best for everyone.

------
andr
I think the argument about AI overtaking humanity is wrongly framed. Sci-fi
has taught us to be on the lookout for an all-powerful machine, robot, or
algorithm turning against humanity -- a single, united swarm of machines that
only needs humans for energy. This is unlikely to happen soon.

But if we borrow an idea from Richard Dawkins's The Selfish Gene, things look
different. He argues that evolution is not about the success of individuals or
even species, but about the proliferation of their DNA. The organisms are
just vessels for the DNA's replication.

Let's look at the corpus of AI technology as an evil machine's DNA. It already
claims a lot of human energy and capital for its evolution and proliferation.
Our top researchers invest their time into making it better. We burn fossil
fuels to provide the power for its computational cycles. What's the primary
fuel for its evolution? Money. Which comes from stock markets and VCs, which
use AI models. I'm sure a lot of fundamentals-based financial models, based on
very different lines of reasoning, end up valuing AI-based companies higher.
It is in no way a deliberate conspiracy, just a result of the AI doing its
job well.

Of course, self-driving cars, painting funny moustaches on your face, or
playing Go are all benign, non-threatening applications of AI. But it is easy
to see how, in the wrong hands, they might lead to not-so-great results. Our
stereotypical AI researcher is surely very careful about AI's potential
dangers. But do you think Uber, a notoriously profit-driven company, would
invest this much care if it didn't help the bottom line?

The cat-in-a-cage argument is surely convincing, but AI is already making a
lot of decisions in place of humans -- be it mortgages, crime detection and
prevention, or what information reaches us through our news feeds. And it is
easy to see how a single imperfectly configured algorithm can lead to real-
world effects.

For example, imagine a threat-assessment AI, as already used in some shape or
form by police forces around the world, which is configured not to minimize
deaths and injuries but to minimize an individual's damage to the economy -- a
benign-sounding setup. Now imagine such a system has to evaluate the threat
level of a well-known researcher who specializes in AI safety and thereby
reduces the profits of a bunch of the world's biggest companies. Is he a
threat to the economy? Should he go to jail?
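The failure mode here is pure objective misspecification: the same scoring pipeline produces opposite rankings depending on which quantity it is told to minimize. A minimal, entirely hypothetical sketch -- the feature names, weights, and people are invented for illustration, and no real system is described:

```python
# Hypothetical illustration of objective misspecification in a threat scorer.
# All features, weights, and profiles are invented for this example.

people = {
    "safety_researcher": {"violent_incidents": 0, "profit_impact": 3.0},
    "violent_offender": {"violent_incidents": 2, "profit_impact": 0.0},
}

def threat_score(p, objective):
    """Score how 'threatening' a profile is under a given objective."""
    if objective == "minimize_harm":
        # Physical-harm proxy: history of violence dominates.
        return 5.0 * p["violent_incidents"]
    if objective == "minimize_economic_damage":
        # Economic proxy: anything that reduces projected profit counts as
        # 'damage' -- including legitimate safety research.
        return 0.1 * p["violent_incidents"] + 2.0 * p["profit_impact"]
    raise ValueError(f"unknown objective: {objective}")

for obj in ("minimize_harm", "minimize_economic_damage"):
    top = max(people, key=lambda name: threat_score(people[name], obj))
    print(f"{obj} -> highest threat: {top}")
```

Under "minimize_harm" the violent offender ranks highest; swap in the economic objective and the safety researcher does, with no change to the data or the code around it. The point is not that any deployed system looks like this, but that the objective is a single configuration choice with outsized real-world consequences.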

------
somestag
I think the fundamental difference of opinion--between those who worry about
the existential risk of gAI and those who don't--is not whether or not gAI
_can_ pose a risk to humankind, but rather whether it _will_ , and if so,
whether there's anything meaningful we can do about it _now_ to prepare for
it.

This article is written as a rebuttal to another article
([http://idlewords.com/talks/superintelligence.htm](http://idlewords.com/talks/superintelligence.htm))
adapted from a talk. That article reconstructs the gAI-is-risky argument and
then provides a series of counter-arguments that show that the gAI future is
not guaranteed--that there are many ways in which it might not come to pass.
The point it's trying to show is that the gAI-dystopia is not a given, and
therefore we should question its plausibility. In particular, it raises the
notion that there may be something about very high-level thinking ("being
smart") that causes very knowledgeable people to overestimate the
probability of gAI-dystopia. It then suggests that the indirect consequences
of promoting the gAI-dystopia theory might outweigh any possible benefits we
get from worrying about it right now.

Then this article responds to each of those arguments (about how it might
_not_ come to pass) with an argument about how it _might_ come to pass. It
closes with a remark that any engineering discipline should ask what the
consequences are if we succeed.

Obviously, the two camps could go back and forth all day, both being
completely correct, neither making any progress, because these arguments are
based in rhetoric, not analysis. There's nothing wrong with rhetorical
arguments, but any two sufficiently skilled rhetoricians can push their point
infinitely far; the real question is which premise you accept:

After hearing the arguments put forth by the gAI-dystopia group, do you think
the dystopian scenario is likely and that we can meaningfully prepare for it
at this time, or do you think it's unlikely or that we can't meaningfully
prepare for it?

The gAI-dystopia crowd is not necessarily wrong, but it's a minority of the
community because it can't attach any meaningful risk analysis to its claims
without making shaky assertions for the prior probabilities. There are
infinite ways we could end up in a gAI dystopia, and there are infinite ways
we could not. Existential threats are hard to accept, so until we have more
evidence that such a future is at all likely, it will be hard to convince any
significant number of people. In this sense, it _is_ just like a religion (but
I don't say that as an insult).

~~~
DiThi
The problem is that, unlike with atomic bombs, we can't calculate and make
predictions about what will happen, because doing so would require precisely
the same machines that are needed to build higher-than-human intelligence. We
don't know and we can't know, so we should at least try to understand the
risks with our very limited understanding, to be somewhat prepared by the time
we can predict the possible disaster shortly before it happens.

------
Moshe_Silnorin
This isn't a dupe. It's a response to the other article with a similar title.
I've reposted under a less-confusing title.

------
tbrownaw
How exactly does "refuse to think about" fit with the existence of OpenAI[0]
and the Singularity Institute[1]?

[0]
[https://en.wikipedia.org/wiki/OpenAI](https://en.wikipedia.org/wiki/OpenAI)

[1]
[https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute](https://en.wikipedia.org/wiki/Machine_Intelligence_Research_Institute)

