
Book Review: Reframing Superintelligence - nikbackm
https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/
======
glenstein
The claim is apparently that superhuman, savant-level intelligence could be
confined to a domain of knowledge and not risk becoming generalized
intelligence. I'm skeptical.

If you're "only" superintelligent at language translation, or writing movies,
or chess, I suspect that as we ascend the tiers of increasingly super
superintelligences, there's a depth of informational, structural understanding
that avails itself of abstract meta principles, and meta-meta principles, and
meta-meta-meta principles, and on to infinity. And that at a sufficiently high
level of abstraction, something about being brilliant at translating the subtle
irony of a Shakespearean sonnet into a dead tribal language is also at play in
weighing strategic options in an incredibly complicated game of chess, and is
also at play in reading culture and finding out what kind of movie will be most
successful at the box office.

I think any domain-specific intelligence, as it approximates "perfect", would
independently discover and solve similar high-level questions and be
transferable to other domains, the way there are general principles of
manufacturing that apply to multiple products. And from a sufficiently
advanced perspective, "solving" chess and "solving" Shakespearean sonnet
translation would look as similar to each other as painting a car red vs.
painting a car blue.

~~~
vermilingua
If ImageGAN approximated perfect classification skills, would it spontaneously
develop the capability to detect and evade predators?

No it wouldn’t, and I think you may have missed the point. It’s about
constrained domains in the mathematical sense, that there is a clearly defined
scope of inputs and outputs. It would simply be impossible for an AI service
to develop outside that scope.
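
The "constrained domain" point can be sketched in code. This is only an
illustration, not anything from the report: the label set and the stand-in
scoring model are hypothetical. The idea is that however sophisticated the
model behind the interface becomes, the service's output space is closed by
construction:

```python
from typing import Callable, Sequence

LABELS = ("cat", "dog", "bird")  # the service's entire output space

def make_classifier(score: Callable[[bytes], Sequence[float]]):
    """Wrap an arbitrarily sophisticated scoring model in a fixed interface.

    Whatever `score` does internally, the wrapper can only ever return
    one of LABELS -- the domain is constrained by the interface itself.
    """
    def classify(image: bytes) -> str:
        scores = score(image)
        assert len(scores) == len(LABELS)
        # Pick the label with the highest score; no other output is possible.
        return LABELS[max(range(len(LABELS)), key=lambda i: scores[i])]
    return classify

# A trivial stand-in model; the wrapper's guarantee doesn't depend on it.
dummy = make_classifier(lambda img: [len(img) % 3 == i for i in range(3)])
print(dummy(b"some image bytes"))  # always one of "cat", "dog", "bird"
```

In this framing, "developing outside the scope" would mean emitting something
that isn't in LABELS, which the interface makes impossible regardless of what
the scoring function computes.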

~~~
glenstein
One of the things I was really at pains to emphasize in my previous comment
was that I think there are elements of information processing that are
outrageously abstract, that would be familiar and accessible to AI but non-
obvious to us. The transferability and generality that emerges would be
nothing remotely so crude as the ImageGAN -> predator detection example that's
being used to dismiss my point.

An AI service that's only intended to solve one well defined problem may have
pieces in its anatomy that are functionally equivalent to those in a different
AI service solving a different kind of problem. You don't need to assume it's
escaped the confines of its well-defined domain, or that domains aren't well
defined, for this to be true.

------
andreyk
See also this summary of the report:
[https://singularityhub.com/2019/06/02/less-like-us-an-altern...](https://singularityhub.com/2019/06/02/less-like-us-an-alternate-theory-of-artificial-general-intelligence/)

The argument makes a lot of sense to me as an AI researcher; the idea that we
will somehow get to self-improving agents that can do any and all things they
want to maximize the output of paper clips (as in the famous Bostrom thought
experiment) is miles away from how AI is done today in practice AND how
software functions. I suspect most software engineers, especially those with
knowledge of AI, would find the 'AI service' idea, which actually reflects how
AI is done today, much more plausible and worth worrying about than the Bostrom
science fiction-y fears of AGI...

~~~
bgilroy26
Taking the example of games like chess and Go, where AlphaZero dominates,
wouldn't we have to fear people misusing AI by setting up specific arenas and
applying AI there long before AI misuses itself?

~~~
wmf
Yes, the post mentions this:

 _Drexler is more concerned about potential misuse by human actors – either
illegal use by criminals and enemy militaries, or antisocial use to create
things like an infinitely-addictive super-Facebook. ... Paul Christiano ...
worries that AI services will be naturally better at satisfying objective
criteria than at “making the world better” in some vague sense. Tasks like
“maximize clicks to this site” or “maximize profits from this corporation” are
objective criteria; tasks like “provide real value to users of this site
instead of just clickbait” or “have this corporation act in a socially
responsible way” are vague. That means AI may asymmetrically empower some of
the worst tendencies in our society without giving a corresponding power
increase to normal people just trying to live enjoyable lives._

The arenas of finance, business, journalism, and politics already exist, so we
should watch for AIs trained to "win" those games.

------
mnemonicsloth
I don't understand why people are so worried about superintelligence. Moore's
law is dying. Computers are not going to get much faster, or much cheaper.
Parallelization is going to be the only way to get more powerful computers and
there are real, possibly insurmountable, limits on how well you can do that.
The computing sector is going to look like aerospace: the Boeing 737 of today
is better in some ways, but it still looks almost exactly like the Boeing 737
of 50 years ago.

~~~
elcomet
> Moore's law is dying. Computers are not going to get much faster, or much
> cheaper.

Moore's law doesn't really matter. We're just at the beginning of computers;
they will get much faster and more powerful in the years and decades to come.

I just read this for example: [https://www.sciencenews.org/article/chip-carbon-nanotubes-no...](https://www.sciencenews.org/article/chip-carbon-nanotubes-not-silicon-marks-computing-milestone)

~~~
goatlover
So computers are unlike any other technology?

~~~
elcomet
What do you mean? I think they are like any other technology, we keep making
improvements to them.

~~~
goatlover
But we don't make exponential improvements once a technology matures. The
parent was either claiming that computers are different, or that computing
hardware won't mature for decades to come.

------
codeisawesome
> But in the end, it would just be a translation app. It wouldn’t want to take
> over the world. It wouldn’t even “want” to become better at translating than
> it was already. It would just translate stuff really well.

> It could have media services that can write books or generate movies to fit
> your personal tastes.

I'm not an expert in this topic, but wouldn't you say that the ability to
create compelling narratives (even more so than 'mechanical' translation
between languages) pretty much _relies_ on the ability to empathise at some
level with people who want to take over the world, or with the people who want
to stop them?

How would an AI come up with the plot for one of the most profitable
franchises in recent history, the "Avengers: Infinity War" movies? It would
have to be programmed with an understanding of Thanos' perspective, and of the
fundamental will of almost everyone around him to be terrified of that and to
disagree - and even then it's not a good movie franchise without the romance
and familial dynamics between so many characters.

If an AI can already understand all that (and _know_ that it has to understand
all that), well, you've created a pretty smart human already - and the OP's
argument about differentiating these powers doesn't seem to hold...

------
Upvoter33
I've never been very interested in the Bostrom view of AI, as the viewpoint
there seems to come from someone fairly removed from how computer programming
works. It's always felt more like an intellectual exercise rather than very
grounded in practicality (but that's just my two cents).

Where I got more interested in Bostrom was some notions I heard from him (I
think) on the general nature of science. I'd always assumed that learning more
about how nature actually works was a strict positive, but am now convinced
(more than ever) that we are just in a race to discover a technology that will
harm us all (we've found some already, but who knows what worse discoveries
are out there?)

------
etxm
Bostrom also wrote a paper on The Simulation Argument that was pretty
awesome[1]. It was referenced recently in The End of the World podcast. [2]

1 - [https://www.simulation-argument.com/simulation.pdf](https://www.simulation-argument.com/simulation.pdf)
2 - [https://www.theendwithjosh.com/](https://www.theendwithjosh.com/)

------
prvc
I see no a priori reason why "superintelligent services" as defined in the
review would be inherently safer, given the complexity of the systems they
affect and the possibility of emergent effects that might be in some sense
equivalent to the hypothetical effects of an "agent type" AI. I also think the
concept needs more elaboration for that very reason. Furthermore, it could be
that there is a "recipe" for forming general AI by integrating a small number
of "service type" AIs, assuming that term refers to a real concept in the
first place.

------
scottlocklin
A review of a book by a serial fabulist (Drexler) compared to that of a bozo
moonlighting as a science fiction writer (Bostrom) done by a psychologist on a
subject none of them have the slightest whit of a clue about.

> All of this seems kind of common sense to me now. This is worrying, because
> I didn’t think of any of it when I read Superintelligence in 2014

Dunning-Kruger is something that should come to mind here, doctor. People who
know a decision tree from an echo state network kind of saw that as being
incredibly dumb when it came out.

What has happened in the last 5 years isn't that the field has matured; it's
as gaseous and filled with prevaricating marketers, science fiction hawking
twits and overt mountebanks as ever. The difference is, 5 years later, rather
than the swingularity-like super explosion of exponential increase in human
knowledge, we're actually just as dumb as we were 5 years ago when we figured
out how to classify German traffic signs, and we have slightly better
libraries than we used to. No great benefit to the human race has come of "AI"
-and nothing resembling "AI" or any kind of "I" has even hinted at its
existence. In another 5 years I'd venture a guess machine learning will remain
about as useful as it is now, which is to say, with no profitable companies
based on "AI," let alone replacing human intelligences anywhere. And we'll
sadly probably still have yoyos like Hanson, Drexler and Yudkowsky lecturing
us on how to deal with this nonexistent threat.

Meanwhile, the actual danger to our society is surveillance capitalism and
government agencies using dumb ass analytics related to singular value
decomposition. Nobody wants to talk about this, presumably because it's real
and we'd have to make difficult choices as a society to deal with it. Easier
and more profitable to wank about Asimovian positronic brain science fiction.

~~~
nradov
You shouldn't be downvoted. The article is clickbait nonsense written by
someone who's just making things up. There's no actual scientific basis to
believe that the article or referenced books have any more insight or
predictive power than astrology does. The reality is that no one has any clue
what's going to happen and everyone is just guessing.

~~~
balfirevic
> The article is clickbait

The article is a review of the book "Reframing Superintelligence" titled "Book
review: Reframing Superintelligence".

~~~
solveit
Yes but words don't mean things any more and haven't in a while.

~~~
luc4sdreyer
Conversations about AGI always get really messy because the topic is so
polarized. Everyone who's written a line of code, or watched The Terminator,
or heard about the singularity has an opinion on it. It's easy to fall into
the Dunning-Kruger trap if you consider the above to be approximately the
limit of human knowledge on the subject. We don't have an AGI yet, so how much
could we know about AGIs? Quite a lot, as it turns out.

------
falcor84
I was really surprised that there was no mention of APIs there. Obviously as
"services" many (most?) of these AI services would be available via APIs.
There are already machine-readable directories of APIs and AI services relying
on external APIs, so we can extrapolate that we'll see more and more AI/ML
systems experimenting with various external APIs as part of their learning.

From this perspective, it's very clear to me that there's a big difference
between a translation service and a service that would "steer Fortune 500
companies". The latter will be much more open-ended and most likely
dynamically rely on many other services. Indeed, I would also expect it to
rely on artificial-artificial-intelligence services such as Mechanical Turk
and Upwork, giving it a lot more flexibility.

What is to prevent that complex service from evolving beyond its creator's
expectations?

