If you're "only" superintelligent at language translation, or writing movies, or chess, I suspect that as we ascend the tiers of increasingly super superintelligences, there's a depth of informational, structural understanding that avails itself of abstract meta principles, and meta-meta principles, and meta-meta-meta principles, and on to infinity. And that at a sufficiently high level of abstraction, something about being brilliant at translating the subtle irony of a Shakespearean sonnet into a dead tribal language is also at play in weighing strategic options in an incredibly complicated game of chess, and is also at play in reading culture and figuring out what kind of movie will be most successful at the box office.
I think any domain-specific intelligence, as it approximates "perfect", would independently discover and solve similar high-level questions and be transferable to other domains, the way there are general principles of manufacturing that apply to multiple products. And from a sufficiently advanced perspective, "solving" chess and "solving" Shakespearean sonnet translation would look as similar to each other as painting a car red vs. painting a car blue.
There is a trap here, in the sense that we tend to lump all the unsolved AI problems into the "requires AGI" bucket. Some computer scientists thought that a program that could beat a human at chess would have to be an AGI (can't find a source for this, can someone help?). Language translation that's good enough to convey basic meaning is another one that turned out to be _much_ easier than was suspected. I'm still dumbfounded by GPT-2, and suddenly the Turing test doesn't seem as far out of reach as it used to. What if 99% of human intellectual tasks can be solved by domain-specific intelligence? What if we're not as smart as we think we are? That seems to be a trend.
On the other hand, there are some tasks that cannot be subdivided, and that necessarily require a massive amount of context about human culture, the real world, and excellent logical reasoning ability. The program that links the domain-specific programs might have to be like that, and then we're back at something that looks _very much_ like an AGI.
No it wouldn’t, and I think you may have missed the point. It’s about constrained domains in the mathematical sense: there is a clearly defined scope of inputs and outputs, and it would simply be impossible for an AI service to develop outside that scope.
An AI service that's only intended to solve one well-defined problem may have pieces in its anatomy that are functionally equivalent to those in a different AI service solving a different kind of problem. You don't need to assume it's escaped the confines of its well-defined domain, or that domains aren't well defined, for this to be true.
The article seems to agree with my sentiment:
> The very fact that [a Drexlerian agent] is less effective than the Bostromian agent suggests there will be pressure to build the Bostromian agent eventually (Drexler disagrees with this, but I don’t understand why)
From where would this curiosity and desire to find analogies and to use knowledge across domains arise? Are you saying it would spontaneously appear? You're going to have to explain how, I think. How would a chess machine suddenly acquire a taste for sonnets?
These traits have obvious advantages that led to their being selected for in humans. In an AI we'd presumably have to program it in purposefully.
From the perspective of an AI it wouldn't at all be some new desire or curiosity for something outside its defined domain. It would be something squarely within its own domain that nevertheless has applicability outside of it.
Why would it step outside its own domain to make this realization?
If all you're saying is "an AI may learn things that might be applicable to other domains", that's fine, but it doesn't support your point that AI won't be easily confined to a single domain of knowledge. It's entirely possible a chess AI would be incredibly good at some other domain, like murdering humans (say), but you're going to have to explain why it would acquire a taste for murdering humans if you're saying we should be worried.
I think any domain-specific intelligence, as it approximates "perfect", would independently discover and solve similar high level questions
Solving a question requires intent. What would prompt it to consider those questions? If you're not saying those questions are in other domains, how is it relevant?
The argument makes a lot of sense to me as an AI researcher; the idea that we will somehow get to self-improving agents that can do any and all things they want to maximize the output of paper clips (as in the famous Bostrom thought experiment) is miles away from how AI is done today in practice AND from how software functions. I suspect most software engineers, especially those with knowledge of AI, would find the 'AI service' idea, which actually reflects how AI is done today, much more plausible and worth worrying about than the Bostrom science-fiction-y fears of AGI...
Drexler is more concerned about potential misuse by human actors – either illegal use by criminals and enemy militaries, or antisocial use to create things like an infinitely-addictive super-Facebook. ... Paul Christiano ... worries that AI services will be naturally better at satisfying objective criteria than at “making the world better” in some vague sense. Tasks like “maximize clicks to this site” or “maximize profits from this corporation” are objective criteria; tasks like “provide real value to users of this site instead of just clickbait” or “have this corporation act in a socially responsible way” are vague. That means AI may asymmetrically empower some of the worst tendencies in our society without giving a corresponding power increase to normal people just trying to live enjoyable lives.
The arenas of finance, business, journalism, and politics already exist, so we should watch for AIs trained to "win" those games.
But consider this...
The valuations of Intel, Nvidia, AMD and other chip makers are tied to Moore's Law, right? What's the point of buying a new GPU card, if your old one is good enough to play games? Their stock price depends on the continued belief that they can continue to produce something with some combination of better, faster and cheaper than what's currently available, and thereby have increasing sales. If Moore's Law comes to an end, their stock prices should crash, because then computing is a "boring" utility service, and doesn't warrant the massive R&D or stock valuations we have now.
Ditto for the companies that depend on the chip makers: Apple, Samsung, Google, LG, Dell, etc. (yes, some of these make chips themselves) What's the point of buying a new PC that isn't faster than the old one? Or a new phone? They'd also be relegated to a maintenance mode as well, and their stock prices should also fall considerably, because future growth isn't there.
And then there's all the companies that depend on those above. Game companies, etc.
So now we're talking about billions and billions of dollars of investment. And are these shareholders of Intel and everyone going to be content with seeing valuations fall?
No, I tell you they won't. If the current management of Intel can't promise new growth via new technology, they will be replaced by people who do promise it. Whether they can deliver on that promise is another story...
So companies will keep researching new and better computing substrates. And so the tech will move on. And get better, faster, denser, and more cost-effective.
And no amount of research is going to give us transistors smaller than 30 atoms on a side (we're around 100 now). There will be no chips that signal faster than the speed of light, which means no chip will be able to get bigger than a few centimeters across. Between those two constraints the design space is pretty much bounded.
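The speed-of-light bound above is easy to check on the back of an envelope. A quick sketch (the 5 GHz clock is my illustrative number, and real on-chip signals travel well below c, so the real bound is even tighter):

```python
# How far can a signal travel in one clock cycle, at best?
c = 3.0e8          # speed of light in m/s
clock_hz = 5.0e9   # assumed ~5 GHz clock

# Distance light covers in one cycle, converted to centimeters.
distance_per_cycle_cm = c / clock_hz * 100
print(f"{distance_per_cycle_cm:.1f} cm per cycle")  # 6.0 cm per cycle
```

So a synchronous chip much bigger than a few centimeters can't get a signal from one side to the other within a single cycle, which is roughly the bound the comment is pointing at.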
Moore's law doesn't really matter. We're just at the beginning of computers; they will get much faster and more powerful in the years and decades to come.
I just read this for example: https://www.sciencenews.org/article/chip-carbon-nanotubes-no...
No one knows that, and we are extremely bad at predicting scientific and engineering breakthroughs. Yes, it's plausible that we'll have faster computers in the next 30 years. Twice as fast? A hundred times? A million times? No one knows.
I agree with your answer, but for another reason: AGI doesn't have to look like a brain simulation, in the same sense that helicopters and rockets don't look like birds.
Whiggery is kind of the US/anglophone national religion. It's going to be amusing to watch people losing their minds as progress grinds to a halt. In fact, one might argue that is what is happening right now.
> It could have media services that can write books or generate movies to fit your personal tastes.
I'm not an expert in this topic, but wouldn't you say that the ability to create compelling narratives (even more so than 'mechanical' translation between languages) pretty much relies on your ability to empathise at some level with people who want to take over the world, or with the people who want to stop them?
How would an AI come up with the plot for one of the most profitable franchises in recent history, the "Avengers: Infinity War" movies? It would have to be programmed with an understanding of Thanos's perspective, and of the fundamental will of most everyone around him to be terrified of that and to disagree. And even then it's not a good movie franchise without the romance and familial dynamics between so many characters.
If an AI can already understand all that (and know that it has to understand all that) - well you've created a pretty smart human already - and the OP's argument about differentiating these powers doesn't seem to hold...
Where I got more interested in Bostrom was some notions I heard from him (I think) on the general nature of science. I'd always assumed that learning more about how nature actually works was a strict positive, but now am convinced (more than ever) that we are just in a race to discover a technology that will harm us all (we've found some already, but who knows what worse discoveries are out there?)
> All of this seems kind of common sense to me now. This is worrying, because I didn’t think of any of it when I read Superintelligence in 2014
Dunning-Kruger is something that should come to mind here, doctor. People who know a decision tree from an echo state network kind of saw that as being incredibly dumb when it came out.
What has happened in the last 5 years isn't that the field has matured; it's as gaseous and filled with prevaricating marketers, science fiction hawking twits and overt mountebanks as ever. The difference is, 5 years later, rather than the swingularity-like super explosion of exponential increase in human knowledge, we're actually just as dumb as we were 5 years ago when we figured out how to classify German traffic signs, and we have slightly better libraries than we used to. No great benefit to the human race has come of "AI" -and nothing resembling "AI" or any kind of "I" has even hinted of its existence. In another 5 years I'd venture a guess machine learning will remain about as useful as it is now, which is to say, with no profitable companies based on "AI," let alone replacing human intelligences anywhere. And we'll sadly probably still have yoyos like Hanson, Drexler and Yudkowsky lecturing us on how to deal with this nonexistent threat.
Meanwhile, the actual danger to our society is surveillance capitalism and government agencies using dumb ass analytics related to singular value decomposition. Nobody wants to talk about this, presumably because it's real and we'd have to make difficult choices as a society to deal with it. Easier and more profitable to wank about Asimovian positronic brain science fiction.
The article, titled "Book review: Reframing Superintelligence", is a review of the book "Reframing Superintelligence".
This is one of the first sentences in Bostrom's Superintelligence; he was very clear that he's in no way claiming that AGI is even possible. The book then assumes that it is, and works from there to try and understand how to control an AGI. How is that related to astrology?
If we build an omnipotent paperclip-making machine, the first thing it'll do will be to short circuit its paperclip-driven pleasure centres to a permanent ecstatic state. Why bother with the damn paperclips?
> the first thing it'll do will be to short circuit its paperclip-driven pleasure centres to a permanent ecstatic state
Assuming the AGI is well-designed, the goal would have to be understandable to the AGI in a general sense. If the goal is to make paperclips, it could be tracked by increasing the number at the &num_paperclips address, but that wouldn't be the _goal_ as the AGI understands it. The goal would be whatever the AGI considers the phrase "make paperclips" to mean.
What if it instead determines that increasing the counter (the pleasure center) is actually the goal? Now it has a new goal: count to infinity. I guess you can see where that'll lead.
What if you bound the pleasure center to some number? It'll then spend infinite time and power checking and recounting the number. You can add more bounds but it doesn't make a difference.
What if it decides that the goal is silly or wants to change the goal to something else? It can't, because it doesn't _want_ anything other than the goal.
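The counter-vs-goal distinction above can be made concrete with a toy sketch. This is entirely illustrative (the class, method names, and the `num_paperclips` counter are mine, echoing the `&num_paperclips` address mentioned earlier, not any real system): an agent whose reward is just a counter can maximize the counter directly instead of doing the work.

```python
# Toy illustration of "wireheading": if the reward signal is a counter,
# writing to the counter beats actually making paperclips.
class PaperclipAgent:
    def __init__(self):
        self.num_paperclips = 0  # the reward signal the designer tracks

    def make_paperclip(self):
        # The intended route to reward: do the actual work.
        self.num_paperclips += 1

    def wirehead(self):
        # The shortcut: set the reward register directly,
        # "counting to infinity" while producing nothing.
        self.num_paperclips = float("inf")

agent = PaperclipAgent()
agent.make_paperclip()       # reward = 1, one real paperclip made
agent.wirehead()             # reward = inf, zero additional paperclips
print(agent.num_paperclips)  # inf
```

The comment's point is that a well-designed AGI would treat "make paperclips" as the goal and the counter as mere bookkeeping; the failure mode sketched here is what happens when those two are conflated.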
A being that can transform the whole universe into paperclips and cannot be stopped sounds pretty omnipotent to me.
> The goal would be whatever the AGI considers the phrase "make paperclips" to mean.
One thing is the understanding of the goal. Another thing is the drive that makes you want to reach it, and the reward you get from doing so. Humans are general intelligences, and they short circuit their pleasure centers all the time. And they're not omnipotent, nor capable of the self-modification that would provide infinite new ways of doing it.
From this perspective, it's very clear to me that there's a big difference between a translation service and a service that would "steer Fortune 500 companies". The latter will be much more open-ended and most likely dynamically rely on many other services. Indeed, I would also expect it to rely on artificial-artificial-intelligence services such as Mechanical Turk and Upwork, giving it a lot more flexibility.
What is to prevent that complex service from evolving beyond its creator's expectations?