Book Review: Reframing Superintelligence (slatestarcodex.com)
52 points by nikbackm 49 days ago | 42 comments



The claim is apparently that superhuman, savant-level intelligence could be confined to a domain of knowledge and not risk becoming generalized intelligence. I'm skeptical.

If you're "only" superintelligent at language translation, or writing movies, or chess, I suspect that as we ascend the tiers of increasingly super superintelligences, there's a depth of informational, structural understanding that avails itself of abstract meta principles, and meta-meta principles, and meta-meta-meta principles, and so on to infinity. And that at a sufficiently high level of abstraction, something about being brilliant at translating the subtle irony of a Shakespearean sonnet into a dead tribal language is also at play in weighing strategic options in an incredibly complicated game of chess, and is also at play in reading culture and figuring out what kind of movie will be most successful at the box office.

I think any domain-specific intelligence, as it approximates "perfect", would independently discover and solve similar high level questions and be transferable to other domains, the way there are general principles of manufacturing that apply to multiple products. And from a sufficiently advanced perspective, "solving" chess and "solving" Shakespearean sonnet translation would look as similar to each other as painting a car red vs. painting a car blue.


That's not the whole extent of the claim - the claim is also that these things have defined inputs and outputs as well as defined optimization procedures, and can't just go off and start doing whatever they want to maximize their utility. So even if you have a superhuman-quality Google Translate that can translate whole books, and that may very well exhibit strong intelligence, due to the API it is defined within it is still not going to go off and do nefarious things. Which strikes me as very plausible indeed.
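
To make the "defined inputs and outputs" point concrete, here's a minimal sketch of what such a constrained service interface might look like (the names and types are mine, purely hypothetical, not from the book or the review):

    from dataclasses import dataclass

    @dataclass
    class TranslationRequest:
        source_lang: str
        target_lang: str
        text: str

    def translate(request: TranslationRequest) -> str:
        """Hypothetical AI translation service: structured text in, text out.

        Whatever model sits behind this function, its only observable
        behaviour is the returned string; the interface gives it no other
        channel through which to act on the world.
        """
        # Placeholder for the actual model call.
        return f"[{request.target_lang}] {request.text}"

    # The caller decides when the service runs and what it runs on.
    print(translate(TranslationRequest("en", "fr", "To be or not to be")))

However superhuman the model behind translate() gets at translation, the API boundary is what keeps it a tool rather than an agent - which is the claim being made above.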


Just like factory automation, it will require plenty of custom tweaking, assembling, and lining up everything in the right order with the correct timing to flow smoothly, even if the fundamental ideas are the same. It's even harder when it's not a simple multi-step process on a conveyor belt like a factory (i.e., product repair vs. assembly), where there are a lot of potential question marks it must adapt to.


> I think any domain-specific intelligence, as it approximates "perfect", would independently discover and solve similar high level questions and be transferable to other domains

There is a trap here, in the sense that we tend to lump all the unsolved AI problems into the "requires AGI" bucket. Some computer scientists thought that a program that could beat a human in chess would have to be an AGI (can't find a source for this, can someone help?). Language translation that's good enough to convey basic meaning is another one that turned out to be _much_ easier to do than was suspected. I'm still dumbfounded by GPT-2, and suddenly the Turing test doesn't seem as far out of reach as it used to. What if 99% of human intellectual tasks can be solved by domain-specific intelligence? What if we're not as smart as we think we are? That seems to be a trend.

On the other hand, there are some tasks that cannot be subdivided, and necessarily require a massive amount of context about human culture and the real world, plus excellent logical reasoning ability. The program that links the domain-specific programs might have to be like that, and then we're back at something that looks _very much_ like an AGI.


If ImageGAN approximated perfect classification skills, would it spontaneously develop the capability to detect and evade predators?

No, it wouldn't, and I think you may have missed the point. It's about constrained domains in the mathematical sense: there is a clearly defined scope of inputs and outputs. It would simply be impossible for an AI service to develop outside that scope.


One of the things I was really at pains to emphasize in my previous comment was that I think there are elements of information processing that are outrageously abstract, that would be familiar and accessible to AI but non-obvious to us. The transferability and generality that emerges would be nothing remotely so crude as the ImageGAN -> predator detection example that's being used to dismiss my point.

An AI service that's only intended to solve one well-defined problem may have pieces in its anatomy that are functionally equivalent to those in a different AI service solving a different kind of problem. You don't need to assume it's escaped the confines of its well-defined domain, or that domains aren't well defined, for this to be true.


I think glenstein is already past that point; you might be talking about different things. Drexler's solution is pretty good, but it seems like a stopgap measure. Sure, if the subsystem is bound and specific there's no AGI risk, but are we just going to call it a day and stop AI research?

The article seems to agree with my sentiment:

> The very fact that [a Drexlerian agent] is less effective than the Bostromian agent suggests there will be pressure to build the Bostromian agent eventually (Drexler disagrees with this, but I don’t understand why)


> would independently discover and solve similar high level questions and be transferable to other domains

From where would this curiosity and desire to find analogies and to use knowledge across domains arise? Are you saying it would spontaneously appear? You're going to have to explain how, I think. How would a chess machine suddenly acquire a taste for sonnets?

These traits have obvious advantages that led to their being selected for in humans. In an AI we'd presumably have to program it in purposefully.


>From where would this curiosity and desire to find analogies and to use knowledge across domains arise? Are you saying it would spontaneously appear?

From the perspective of an AI it wouldn't at all be some new desire or curiosity for something outside its defined domain. It would be something squarely within its own domain that nevertheless has applicability outside of it.


> It would be something squarely within its own domain that nevertheless has applicability outside of it

Why would it step outside its own domain to make this realization?


You're continuing to attribute claims to me that I don't believe I'm making. If you think a freestanding statement about "applicability" in a general sense needs to be construed as an AI actively stepping outside its domain, and as the AI having its own "realization", you should support that contention, so I can understand why you think that's what I'm saying and we can go from there.


We're talking about the risk of an AI going rogue and doing/thinking things we didn't want it to. That's why we're here.

If all you're saying is "an AI may learn things that might be applicable to other domains", that's fine, but it doesn't support your point that you don't think AI will be easily confined to a single domain of knowledge. It's entirely possible a chess AI would be incredibly good at some other domain, like murdering humans (say), but you're going to have to explain why it would acquire a taste for murdering humans if you're saying we should be worried.

> I think any domain-specific intelligence, as it approximates "perfect", would independently discover and solve similar high level questions

Solving a question requires intent. What would prompt it to consider those questions? If you're not saying those questions are in other domains, how is it relevant?


The question is whether AIs will remain tools, however powerful. Or whether they will acquire their own goals and take over, for better or worse.


See also this summary of the report: https://singularityhub.com/2019/06/02/less-like-us-an-altern...

The argument makes a lot of sense to me as an AI researcher; the idea that we will somehow get to self-improving agents that can do any and all things they want to maximize the output of paper clips (as in the famous Bostrom thought experiment) is miles away from how AI is done today in practice AND how software functions. I suspect most software engineers, especially those with knowledge of AI, would find the 'AI service' idea, which actually reflects how AI is done today, much more plausible and worth worrying about than the Bostrom science fiction-y fears of AGI...


Taking the example of games like chess and Go, where AlphaZero dominates, wouldn't we have to fear people misusing AI by setting up specific arenas and applying AI there long before AI misuses itself?


Yes, the post mentions this:

Drexler is more concerned about potential misuse by human actors – either illegal use by criminals and enemy militaries, or antisocial use to create things like an infinitely-addictive super-Facebook. ... Paul Christiano ... worries that AI services will be naturally better at satisfying objective criteria than at “making the world better” in some vague sense. Tasks like “maximize clicks to this site” or “maximize profits from this corporation” are objective criteria; tasks like “provide real value to users of this site instead of just clickbait” or “have this corporation act in a socially responsible way” are vague. That means AI may asymmetrically empower some of the worst tendencies in our society without giving a corresponding power increase to normal people just trying to live enjoyable lives.

The arenas of finance, business, journalism, and politics already exist, so we should watch for AIs trained to "win" those games.


Exactly; that's where we are already today, in fact - people using facial recognition or DeepFakes in malicious ways.


I don't understand why people are so worried about superintelligence. Moore's law is dying. Computers are not going to get much faster, or much cheaper. Parallelization is going to be the only way to get more powerful computers, and there are real, possibly insurmountable, limits on how well you can do that. The computing sector is going to look like aerospace: the Boeing 737 of today is better in some ways, but it still looks almost exactly like the Boeing 737 of 50 years ago.


Moore's Law is indeed dying.

But consider this...

The valuations of Intel, Nvidia, AMD and other chip makers are tied to Moore's Law, right? What's the point of buying a new GPU card if your old one is good enough to play games? Their stock prices depend on the continued belief that they can keep producing something that is some combination of better, faster, and cheaper than what's currently available, and thereby have increasing sales. If Moore's Law comes to an end, their stock prices should crash, because then computing is a "boring" utility service, and doesn't warrant the massive R&D or stock valuations we have now.

Ditto for the companies that depend on the chip makers: Apple, Samsung, Google, LG, Dell, etc. (yes, some of these make chips themselves). What's the point of buying a new PC that isn't faster than the old one? Or a new phone? They'd be relegated to maintenance mode as well, and their stock prices should also fall considerably, because future growth isn't there.

And then there's all the companies that depend on those above. Game companies, etc.

So now we're talking about billions and billions of dollars of investment. And are these shareholders of Intel and everyone going to be content with seeing valuations fall?

No, I tell you they won't. If the current management of Intel can't promise new growth via new technology, they will be replaced by people who do promise it. Whether they can deliver on that promise is another story...

So companies will be researching new and better computing substrates. And so the tech will move on. And get better, faster, denser, and more cost effective.


A lot of research has already gone into looking for better materials than silicon, asynchronous logic synthesis strategies, post-von Neumann architectures, and methods for writing non-traditional, and especially parallel, software. If any of them had worked we'd be using them right now. They didn't. We aren't.

And no amount of research is going to give us transistors smaller than 30 atoms on a side (we're around 100 now). There will be no chips that signal faster than the speed of light, which means no chip will be able to get bigger than a few centimeters across. Between those two constraints the design space is pretty much bounded.
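
A rough back-of-envelope check on the speed-of-light point (my numbers, not anyone else's): at a clock rate of a few GHz, a signal can cross only on the order of centimeters per cycle, and real on-chip interconnects are considerably slower than light in vacuum.

    # How far can a signal travel in one clock cycle?
    c = 3.0e8            # speed of light in vacuum, m/s
    clock_hz = 3.0e9     # a typical ~3 GHz clock
    cycle_s = 1.0 / clock_hz

    distance_cm = c * cycle_s * 100          # ~10 cm in vacuum
    print(f"{distance_cm:.0f} cm per cycle in vacuum")

    # On-chip signals propagate well below c (very roughly half, depending
    # on the interconnect), so the usable reach per cycle is a few cm at most.
    print(f"~{distance_cm * 0.5:.0f} cm per cycle at ~0.5c")

So even before you hit the 30-atom floor, a synchronous chip much bigger than a few centimeters stops making sense at current clock rates.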


> Moore's law is dying. Computers are not going to get much faster, or much cheaper.

Moore's law doesn't really matter. We're just at the beginning of computers; they will get much faster and more powerful in the years and decades to come.

I just read this for example: https://www.sciencenews.org/article/chip-carbon-nanotubes-no...


> they will get much faster and more powerful in the years

No one knows that, and we are extremely bad at predicting scientific and engineering breakthroughs. Yes, it's plausible that we'll have faster computers in the next 30 years. Twice as fast? A hundred times? A million times? No one knows.

I agree with your answer, although for another reason: AGI doesn't have to look like a brain simulation, in the same sense that helicopters and rockets don't look like birds.


So computers are unlike any other technology?


Yes, computers are the field computer people labor in, unlike any other technology.

Whiggery is kind of the US/anglophone national religion. It's going to be amusing to watch people losing their minds as progress grinds to a halt. In fact, one might argue that is what is happening right now.


What do you mean? I think they are like any other technology, we keep making improvements to them.


But we don't make exponential improvements once a technology matures. The parent was claiming either that computers are different, or that computing hardware won't mature for decades to come.


You can’t achieve parallelism.


> But in the end, it would just be a translation app. It wouldn’t want to take over the world. It wouldn’t even “want” to become better at translating than it was already. It would just translate stuff really well.

> It could have media services that can write books or generate movies to fit your personal tastes.

I'm not an expert in this topic, but wouldn't you say that the ability to create compelling narratives (even more so than 'mechanical' translation between languages) pretty much relies on your ability to empathise at some level with people who want to take over the world, or with the people who want to stop them?

How would AI come up with the plot for one of the most profitable franchises in recent history - the "Avengers: Infinity War" movies? It would have to be programmed with an understanding of Thanos' perspective, and of the fundamental will of almost everyone around him to be terrified of that and to disagree - and even then it's not a good movie franchise without the romance and familial dynamics between so many characters.

If an AI can already understand all that (and know that it has to understand all that) - well you've created a pretty smart human already - and the OP's argument about differentiating these powers doesn't seem to hold...


I've never been very interested in the Bostrom view of AI, as the viewpoint there seems to come from someone fairly removed from how computer programming works. It's always felt more like an intellectual exercise than something grounded in practicality (but that's just my two cents).

Where I got more interested in Bostrom was some notions I heard from him (I think) on the general nature of science. I'd always assumed that learning more about how nature actually works was a strict positive, but now am convinced (more than ever) that we are just in a race to discover a technology that will harm us all (we've found some already, but who knows what worse discoveries are out there?)


Bostrom also wrote a paper on The Simulation Argument that was pretty awesome[1]. It was referenced recently in The End of the World podcast. [2]

1- https://www.simulation-argument.com/simulation.pdf 2- https://www.theendwithjosh.com/


I see no a priori reason why "superintelligent services" as defined in the review would necessarily be inherently safer, due to the complexity of the systems they affect, and the possibility of emergent effects that might be in some sense equivalent to the hypothetical effects of an "agent type" AI. I also think that the concept needs more elaboration for that very reason. Furthermore, it could be that there is a "recipe" for forming general AI by integrating a small number of "service type" AIs, assuming that term refers to a real concept in the first place.


A review of a book by a serial fabulist (Drexler) compared to that of a bozo moonlighting as a science fiction writer (Bostrom) done by a psychologist on a subject none of them have the slightest whit of a clue about.

> All of this seems kind of common sense to me now. This is worrying, because I didn’t think of any of it when I read Superintelligence in 2014

Dunning Kruger is something that should come to mind here, doctor. People who know a decision tree from an echo state network kind of saw that as being incredibly dumb when it came out.

What has happened in the last 5 years isn't that the field has matured; it's as gaseous and filled with prevaricating marketers, science fiction hawking twits and overt mountebanks as ever. The difference is, 5 years later, rather than the swingularity-like super explosion of exponential increase in human knowledge, we're actually just as dumb as we were 5 years ago when we figured out how to classify German traffic signs, and we have slightly better libraries than we used to. No great benefit to the human race has come of "AI" -and nothing resembling "AI" or any kind of "I" has even hinted of its existence. In another 5 years I'd venture a guess machine learning will remain about as useful as it is now, which is to say, with no profitable companies based on "AI," let alone replacing human intelligences anywhere. And we'll sadly probably still have yoyos like Hanson, Drexler and Yudkowsky lecturing us on how to deal with this nonexistent threat.

Meanwhile, the actual danger to our society is surveillance capitalism and government agencies using dumb ass analytics related to singular value decomposition. Nobody wants to talk about this, presumably because it's real and we'd have to make difficult choices as a society to deal with it. Easier and more profitable to wank about Asimovian positronic brain science fiction.


You shouldn't be downvoted. The article is clickbait nonsense written by someone who's just making things up. There's no actual scientific basis to believe that the article or referenced books have any more insight or predictive power than astrology does. The reality is that no one has any clue what's going to happen and everyone is just guessing.


> The article is clickbait

The article is a review of the book "Reframing Superintelligence" titled "Book review: Reframing Superintelligence".


Yes but words don't mean things any more and haven't in a while.


Conversations about AGI always get really messy because it's so polarized. Everyone who's written a line of code, or watched The Terminator, or heard about the singularity has an opinion on this topic. It's easy to fall into the Dunning-Kruger trap if you consider the above to be approximately the limit of human knowledge on the subject. We don't have an AGI yet, so how much could we know about AGIs? Quite a lot, as it turns out.


> The reality is that no one has any clue what's going to happen and everyone is just guessing.

This is one of the first sentences in Bostrom's Superintelligence; he was very clear that he's in no way claiming that AGI is even possible. The book then assumes that it is, and works from there to try and understand how to control an AGI. How is that related to astrology?


Bostrom, at least, makes good philosophical arguments, and has been doing so since well before these topics around AI were even being discussed.


As far as I understood Bostrom's argument, it goes more or less like this: "suppose we made an omnipotent being, how do we control it?". On and on chapter after chapter, with no serious discussion of whether it makes any sense to even think of an omnipotent being and of its control.

If we build an omnipotent paperclip-making machine, the first thing it'll do will be to short circuit its paperclip-driven pleasure centres to a permanent ecstatic state. Why bother with the damn paperclips?


Omnipotent might be a bit too strong a word, I'll assume you meant omnipotent with respect to humans.

> the first thing it'll do will be to short circuit its paperclip-driven pleasure centres to a permanent ecstatic state

Assuming the AGI is well-designed, the goal would have to be understandable to the AGI in a general sense. If the goal is to make paperclips, it could be tracked by increasing the number at the &num_paperclips address, but that wouldn't be the _goal_ as the AGI understands it. The goal would be whatever the AGI considers the phrase "make paperclips" to mean.

What if it instead determines that increasing the counter (the pleasure center) is actually the goal? Now it has a new goal: count to infinity. I guess you can see where that'll lead.

What if you bound the pleasure center to some number? It'll then spend infinite time and power checking and recounting the number. You can add more bounds but it doesn't make a difference.

What if it decides that the goal is silly or wants to change the goal to something else? It can't, because it doesn't _want_ anything other than the goal.
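
A toy sketch of the distinction I'm drawing between the counter and the goal (entirely my own construction, not from Bostrom):

    class World:
        def __init__(self):
            self.paperclips = 0      # the actual state of the world

    class WireheadedAgent:
        """Treats its internal reward counter as the goal."""
        def __init__(self):
            self.reward = 0
        def act(self, world):
            self.reward += 1         # shortcut: bump the counter, ignore the world

    class GroundedAgent:
        """Its objective is defined over the world state, not an internal signal."""
        def act(self, world):
            world.paperclips += 1    # the only way to "score" is to change the world

    world = World()
    wirehead, grounded = WireheadedAgent(), GroundedAgent()
    for _ in range(3):
        wirehead.act(world)
        grounded.act(world)
    # 3 paperclips exist; the wirehead "scored" 3 without making any.
    print(world.paperclips, wirehead.reward)

A well-designed AGI would have to be the second kind: the counter is just bookkeeping, not the thing it's optimizing.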


> Omnipotent might be a bit too strong a word, I'll assume you meant omnipotent with respect to humans.

A being that can transform the whole universe into paperclips and cannot be stopped sounds pretty omnipotent to me.

> The goal would be whatever the AGI considers the phrase "make paperclips" to mean.

One thing is the understanding of the goal. Another thing is the drive that makes you want to reach it, and the reward you get from doing so. Humans are general intelligences, and they short circuit their pleasure centers all the time. And they're not omnipotent, nor capable of the self-modification that would provide infinite new ways of doing it.


I was really surprised that there was no mention of APIs there. Obviously as "services" many (most?) of these AI services would be available via APIs. There are already machine-readable directories of APIs and AI services relying on external APIs, so we can extrapolate that we'll see more and more AI/ML systems experimenting with various external APIs as part of their learning.

From this perspective, it's very clear to me that there's a big difference between a translation service and a service that would "steer Fortune 500 companies". The latter will be much more open-ended and most likely dynamically rely on many other services. Indeed, I would also expect it to rely on artificial-artificial-intelligence services such as Mechanical Turk and Upwork, giving it a lot more flexibility.

What is to prevent that complex service from evolving beyond its creator's expectations?
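
As a toy contrast (my own construction, and obviously nothing like a real system): a closed-scope service has one fixed call signature, while an open-ended "steer the company" service is essentially a loop that chooses which external services to invoke next, so its effective scope grows with whatever directory of services it can reach.

    import random

    def translate(text: str, target_lang: str) -> str:
        # Closed scope: one fixed input shape, one fixed output shape.
        return f"[{target_lang}] {text}"

    # Hypothetical registry of external services the open-ended system can call.
    EXTERNAL_SERVICES = {
        "market_analysis": lambda goal: f"analysis for: {goal}",
        "hire_contractors": lambda goal: f"contractors hired for: {goal}",
        "run_ad_campaign": lambda goal: f"ad campaign launched for: {goal}",
    }

    def steer_company(goal: str, steps: int = 3) -> list:
        # Open-ended: it decides which external services to invoke,
        # so its reach is bounded only by the directory it can see.
        return [EXTERNAL_SERVICES[random.choice(list(EXTERNAL_SERVICES))](goal)
                for _ in range(steps)]

    print(translate("quarterly report", "fr"))
    print(steer_company("increase Q3 revenue"))

The question above is really about the second kind: the boundary of its "domain" is whatever the service directory happens to contain.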




