The impossibility of intelligence explosion (medium.com)
114 points by _ntka on Nov 27, 2017 | 140 comments



This article is typical of many that claim proven limits on the feasibility (or, in this case, the capabilities) of generalized artificial intelligence, in that it structures the argument in a way designed to avoid discussion of the issue.

It starts by claiming that there is no such thing as general intelligence. What specialized intelligence, then, is human intelligence? It's specialized for "being human". The author is apparently unaware that this tautological response eliminates the distinction between general and specialized intelligence, as one could just as validly (or vacuously) say that a superhuman intelligence is specialized in being what it is and doing what it does. The author has invalidated the hook on which he had hung his argument.

A lot of column-inches are expended on repeatedly restating that animal intelligences have co-evolved with their sensorimotor systems, which is a contingent fact of history, not a fundamental necessity for intelligence (as far as we know; but then the whole article is predicated on the feasibility of AI). He raises the 'brain in a vat' trope, but no one is suggesting that AIs must be disconnected from the external world. Furthermore, this line of argument ignores the fact that many of the greatest achievements of human intelligence have come from contemplating abstract ideas.

When the author writes "most of our intelligence is not in our brain, it is externalized as our civilization", he is confusing the achievements of intelligent agents with intelligence itself. When he writes that "an individual brain cannot implement recursive intelligence augmentation" he is confusing a limit on human capabilities with a fundamental limit on intelligence itself...

I am far from convinced that the singularity must follow from achieving human-level artificial intelligence, as we don't know how to get to the starting line, let alone know how the problem of bootstrapping intelligence scales, but the arguments presented here do nothing to persuade me that it is impossible.


Your reply gives me the impression that you are not up to date with Reinforcement Learning. If you were, you would know that the author really understands this domain and was not merely being tautological.

"Specialized at being human" - this is a deep intuition. We are reinforcement learning agents that are pre-programed with a certain number of reward responses. We learn from rewards to keep ourselves alive, to find food, company and make babies. It's all a self reinforcing loop, where intelligence has the role of keeping the body alive, and the body has the role of expressing that intelligence. We're really specialized in keeping human bodies alive and making more human bodies, in our present environment.

The author puts a hard limit on intelligence because intelligence is limited by the complexity of the problems it needs to solve (assuming it has sufficient abilities). So the environment is the bottleneck. In that case, an AGI would be like an intelligent human, a little bit better than the rest, not millions of times better.


It has nothing to do with any particular model of learning, let alone of intelligence in general. From the point of view that you have expressed here, it would seem a little surprising that "specialized at being human" includes things like calculus and topology. How do you include these things in a definition of the "being human" speciality that does not turn it into a vacuous category that says the specialization is everything that humans have shown themselves to be capable of, nothing more and nothing less?

If it were valid, one could take the argument in your last paragraph to draw a line at any point in the evolution of intelligence and say "this is as good as it gets."


What are humans specialized in doing? Because it seems to me that humans are pretty good at chess, calculus, social manipulation, flying to the moon, building machines that take us to the bottom of the ocean, discovering fundamental physics, etc. A fish, no matter what environment and upbringing you give it, can't do any of those things. So it seems like there's some dimension in which the human brain is more generally intelligent than a fish's.


That dimension is still on a thin film around a little ball floating in one of a great number of possible universes within a great number of possible rule systems. Compared to that space, we are quite similar to fish, in terms of the purposes for which our machinery functions.

But the question is not, "is intelligence explosion possible?" The question is, "explode into what?"


>It starts by claiming that there is no such thing as general intelligence. What specialized intelligence, then, is human intelligence?

I think there are two usages of the term 'general intelligence' floating around:

(1) the ability that humans possess (but which animals don't) to create universal theories,

(2) the measure of one human's general cognitive ability or potential (in all fields) relative to another human's.

Note that IQ tests are concerned with (2). The quest for AGI is concerned with (1), though the additional prediction of intelligence explosion or singularity assumes the validity of (2).

I think the author would claim that (1) exists but (2) doesn't. He explains the predictive power of IQ tests by claiming that general intelligence is a threshold ability and that people who score highly on an (arbitrary) test are more likely to have exceeded that threshold. Beyond the threshold, achievement is limited only by other factors.


The author seems to want it both ways: he is skeptical that IQ measures intelligence (and certainly not general intelligence, the existence of which he denies), but he uses it in his arguments. As I see it, the author's IQ argument has three claims: that people with exceptional IQ do not generally achieve commensurately more than the rest of us, that this is because they are constrained by the state of the intellectual environment they live in, and that there is a threshold such that IQ over that level is unhelpful. This last claim is allegedly a conclusion drawn from its predecessors, but to me, it looks like the first two claims provide a possible explanation for why there might be an apparent threshold effect, regardless of whether there actually is one. Nothing in this passage convincingly denies the notion that a sufficient number of communicating, above-average intelligent agents could expand the intellectual environment faster than we are currently able to.


I skimmed the article again and didn't find any obvious equivocations regarding IQ and intelligence.

But, regardless of this, I think solving problems requires creativity, not intelligence. Creativity seems to be independent of knowing how to do IQ tests. It's also, pace the author, independent of the environment. The main limiting factor is whether you want some particular knowledge.

>sufficient number of communicating, above-average intelligent agents could expand the intellectual environment

Haven't we already got that, with the internet?

Groups tend to be dominated by groupthink, which is why creative individuals are aloof.

Nevertheless, a young group of AGIs would be fairly isolated from humanity simply by virtue of being non-human. So, though their starting point would be determined by the state of our knowledge at their birth, they may well make rapid progress for a short period, rather like the Renaissance or the USA in the late 1800s when the nation was young and expanding. Then presumably they would fall prey to groupthink and pessimism just as most adults and nations do eventually. Progress would slow considerably.


"However, it is a well-documented fact that intelligence — as measured by IQ, which is debatable..."

>>sufficient number of communicating, above-average intelligent agents could expand the intellectual environment

>Haven't we already got that, with the internet?

Up to a point (with all the intelligent agents being human, as we don't have AGI), but it would be rather surprising if what we see now just happens to be a hard upper limit.

I am not necessarily convinced by all of the points you raise, but they are certainly reasonable in the context of arguing that singularity-like events are by no means inevitable (a point of view that I share). What the author is claiming, however, is that it is impossible, which means that he has imposed on himself a burden of proof (that goes beyond plausibility) for a number of conjectures.


Faster, yes, not exponentially faster. With a thousand times more scientists you don't improve the speed of discovery 1000x.


Excellent point. Indeed, you don't even improve it by +1000. If you are lucky you improve it by +log(1000).


> The author is apparently unaware that this tautological response eliminates the distinction between general and specialized intelligence, as one could just as validly (or vacuously) say that a superhuman intelligence is specialized in being what it is and doing what it does.

It's not a tautology. "Generality" and "specificity" are artifacts of the human experience. What is tautological is to say, "it's general to me, therefore it is general."

You think that more and more progress comes by way of more and more optimization. This is not the whole story. Accidentation is the missing ingredient. Humans - as well as all life on earth - have a knack for creating more and more problems. It is this never-ending fountain of new, accidental problems that allows for what appears to us to be a chain of "progress" stretching into the past.

Our "generality" is in fact a hairball collection of specific functions that have accreted into the human animal over millions and millions of years. Some abstract Java class called `Agent` with an `.optimize()` method hanging off of it simply does not have that context.

If you want a really high quality, generally intelligent function in silicon though, it's hard to beat the XOR function ;)


> What specialized intelligence, then, is human intelligence?

I think the author just means "the skills needed to pilot a human body on Earth in a normal human social environment". I see no tautology, and the preceding sentence about octopuses makes the author's meaning here pretty clear.


I suppose one has to know calculus and topology to navigate Earth. Plus a smattering of quantum physics and wave physics.

Oh wait. Those do not fall under such a narrow definition of intelligence.


How many insects or birds know "calculus and topology"? You're straining so much here, to be both narrow and broad, that you contradict yourself.


Exactly the point. Those skills are not needed to successfully navigate the Earth, yet we have developed them. The rationale was general self-improvement and more control over nature. So the general premise is to ensure better odds of survival, which is a changing objective function.

No reason why an AGI wouldn't have similar instincts or capabilities.

The author also misapplies the no free lunch theorem. It pertains to algorithms, not computational ability, and we are not at all sure whether true intelligence is an immutable algorithm. (Human intelligence is not, because of genetics.) There is no real equivalent theorem for self-modifying algorithms (or changing objective functions), and if there were, it would run afoul of the halting problem.

The theorem also has at least one known hole: it does not apply to coevolutionary algorithms (such as part of AlphaGo, to pick an easy example).

While the environment does impose a hard limit, that limit is extremely high, on the order of 2^300. The optimization does not have to settle on the simplest algorithm that solves the problem. And, to be honest, we do not know enough about the informational capacity of matter in quantum states.


what is 2^300 ?


Bits of estimated Kolmogorov complexity of the universe.
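
For anyone unfamiliar, the Kolmogorov complexity of an object is the length of the shortest program that produces it on some fixed universal machine; roughly:

    K(x) = \min \{\, |p| : U(p) = x \,\}

where U is a universal Turing machine and |p| is the length of program p. The 2^300 figure is an estimate of that quantity for the universe, not something that can be computed exactly (K is uncomputable in general).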


No, you suppose wrong. You should suppose we are social animals that collectively solve problems, and that calculus and topology are just side effects of that.


>> A lot of column-inches are expended on repeatedly restating that animal intelligences have co-evolved with their sensorimotor systems, which is a contingent fact of history, not a fundamental necessity for intelligence (as far as we know;

How do we know that? We only know of intelligences that co-evolved with their sensorimotor systems etc., so how do we know that's not the only way to do it?


'As far as we know' is an English-language idiom that expresses a degree of uncertainty.


I'm not sure that I have ever read a piece written with so much certainty and arrogance about a field that is completely unexplored. Just as an example:

"The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false."

From the little that we know as of today, I call bullshit. Even AlphaGo, which is arguably quite a primordial AI, managed to achieve super-human performance in a ridiculously short amount of time, just by playing against itself. And it simply crushed the collective effort of the human players who had honed their strategies for literally millennia in what is considered one of the most difficult games. I don't think the author has any insight at all on what a general AI will be.


Alpha Go is good at playing Go. It can't do anything else. That's the point the author is making at the start of the article, that intelligence develops by focusing on specific tasks.

There's a lot that is hardly substantiated in the OP, but the truth is that just because you have a machine that's smart enough to play Go better than any human being doesn't mean you can anticipate a machine that can learn to play the bassoon better than any human being.

The argument about the no free lunch theorem is informative and one of the few good points in the article. An algorithm that is good at X is eventually going to be pretty bad at Y. A superintelligence would have to beat humans in all possible X, even the ones it would be really bad at. And that sounds like an impossibility.
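
For reference, the usual form of the theorem (Wolpert & Macready, 1997) says that, averaged uniformly over all possible objective functions, any two optimization algorithms perform identically:

    \sum_f P(d^y_m \mid f, m, a_1) \;=\; \sum_f P(d^y_m \mid f, m, a_2)

where f ranges over all objective functions, d^y_m is the sequence of cost values observed after m evaluations, and a_1, a_2 are the two algorithms. The averaging is over every conceivable function, not over any particular class of real-world problems.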


AlphaGo had a trivial simulator for the game of Go, so it could play millions of self-play games. Reality is not so easy. We would need a simulator for reality, not for Go, in order to reach human level. The author emphasizes that, for the agent to become more intelligent, it needs a nurturing, challenging environment and difficult problems that are within its capabilities to solve. In other words, the AI needs good data to train on, and that is the bottleneck, not the AI itself. That's why there can be no AGI - there is no universal data.
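
To put the "trivial simulator" point concretely: a Go move is essentially a couple of array writes, which is why millions of self-play games are affordable. This is a toy illustration under my own simplifications (the real rules also need capture and ko logic), not the actual AlphaGo environment:

    import numpy as np

    # Placing a stone on a 19x19 board is just an array update; a full Go
    # engine adds capture/ko checks, but a move still costs microseconds.
    # There is no comparably cheap step() function for "reality".
    def play_move(board: np.ndarray, x: int, y: int, player: int) -> np.ndarray:
        new_board = board.copy()
        new_board[x, y] = player
        return new_board

    board = np.zeros((19, 19), dtype=np.int8)
    board = play_move(board, 3, 3, player=1)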


Humans learn surprisingly well on a very limited data set compared to current AI solutions. Improvise better too.


That's because we have a great world simulator in our head.


> I don't think the author has any insight at all on what a general AI will be.

To be fair, nobody does. But you're right, the author shouldn't be making these statements with such certainty.


This certainty comes from being more knowledgeable in the field of Reinforcement Learning than the vast majority of his readers. It's not hot air; I think he has good reasons, but they can't be expressed so easily. I got this intuition after reading many RL papers and I completely agree with him. In fact I am grateful to him for expressing this intuition better than I could have.

The main idea: it's the environment, not the brain/neural net that is the bottleneck. Intelligence is situated, limited by the complexity of the environment and problem it has to solve. You can't have a singularity in a vat. The environment matters most.

The human environment puts a hard limit on intelligence in our society. If we can create richer environments, intelligence could increase, but not exponentially. It's still limited even in the new environment. The exponential trend of AI is at most a sigmoid.

Plus - I think the community needs some hard truth, and the hype is way off right now. That's probably why he was so sure of himself - it was community service.


You're missing an important thing - that Go is trivial to simulate. AlphaGo can play millions of self-play games relatively cheaply. A human-level intelligence would need to run trials on a simulation of the real world, which is impossible to create as of yet. That is why the author was insisting on the environment (the world or a simulator) - that the environment is the bottleneck. You can think of the environment as a dynamic dataset. You know that data is crucial in AI; thus, the lack of sufficiently complex data would hamper AGI.

The fact that there is no "universal environment" means there can be no general intelligence. There can be only environment-specific intelligences (situated intelligence, as the author said). The concept of AGI is just a reification of narrow AI - an illusion; there is no such thing.


Games have a clear end goal. How do you measure getting better at ethics?


Probably you don't, it's likely an unverifiable proposition. But "better at ethics" is a completely different question than "better at implementing a given ethical stance".

The people frightened of intelligence explosion are worried about something like an AI version of existentialism: a mind that accepts some moral system without even trying to justify it, and then optimizes accordingly. It's certainly possible to just accept as axiomatic ethical standards which don't come from any intrinsic feature of the world.

I've seen lots of essays (not this one) claim that morality will "inherently" emerge from intelligence, which I think is absurd. Shit, my moral views aren't an 'inherent' product of anything except my evolution-shaped brain that feels empathy.

That's where I think "AI won't be like humans so it's fine!" essays screw up so catastrophically; "not like humans" is exactly what people are worried about.


> I've seen lots of essays (not this one) claim that morality will "inherently" emerge from intelligence, which I think is absurd. Shit, my moral views aren't an 'inherent' product of anything except my evolution-shaped brain that feels empathy.

Your evolution-shaped brain is what an AI will become too. It won't start out super-human smart; it will evolve to an equal level. Then keep going. Morals are basically lower-level approximations which are beneficial for survival, and AIs will certainly evolve those too.


...huh?

AI will reach parity with humans before it exceeds them, agreed. But intelligence parity is not the same as "matching my evolution-shaped brain". Feeling empathy for other humans is not a requirement for intelligence; I routinely make supposedly-irrational choices to help humans in non-reciprocal settings because I feel empathy. There's no reason to think an AI that can write code as well as me will feel the same.

> Morals are basically lower-level approximations which are beneficial for survival and AI's will certainly evolve those too.

AI will evolve approximations which are beneficial for survival. We agree there. But why would a bunch of approximations that helped my ancestors survive in low-tech communal environments benefit a strong AI trained with high tech, minimal survival needs, and no "peer group"?

I fully expect strong AI will pursue some set of real-world goals which aren't justified except by the anthropic principle. But the assumption that those goals will match human goals seems to completely ignore the fact that the first strong AI will live in a vastly different environment than the first human.


> AI will reach parity with humans before it exceeds them, agreed.

Intelligence is a hugely dimensional space, and AI constraints look very little like those on humans. It is unlikely there will ever be a time when AI is similarly smart, so much as differently smart to a similar context-dependent magnitude.


A good point, and I should have been clearer about that.

I do think it's reasonable to talk about intelligence 'growing', and consequently about one intelligence 'surpassing' another. But AI's methods of thinking certainly won't be human, and it may reach human-parity on different metrics at very different times. Hell, we're seeing some of that already: AI can do I/O and data processing at superhuman speeds, but humans can still extract much more knowledge from a small amount of data.


Your morals are not universal, so of course I wouldn't expect an AI to adhere to them. But I also don't adhere to your morals and I'm just another human.


Just another human still implies you don't relax by setting yourself on fire. Specific instincts are not quite universal, but they very much shape behavior.


Oh ok - if your point is simply that an AI will probably have some set of goals which could be called values, we agree. I was attacking the common claim that superintelligence will inherently develop 'friendly' values.

The orthogonality thesis doesn't say AI will lack goals, only that those goals may be totally unrelated to the sort of beliefs you or I would recognize as morals.


Actually, I am saying that there are some universal morals which AI will also come to understand. They may not be your morals, but I think they are compatible with mine.

The morals we have work; that's why humans developed them. They seem to essentially amount to giving a little leeway to allow cooperation, even if it allows yourself to be taken advantage of, avoiding a constant race to win and giving up some personal gain for the benefit of the group.

It's mostly long-term game theory - and AI will be subject to the same laws of math, physics and group dynamics. If they don't try to get along, they'll reach an equilibrium of constant war. Those groups of AI who develop morals will avoid that and surpass the other groups.


Maximize human population divided by the time integral of all human suffering (taken from now to the heat death of the universe.)
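
Written out, that objective would be something like (my notation; N is the human population, however you choose to count it, and S(t) the aggregate human suffering at time t):

    \max \; \frac{N}{\int_{t_{\mathrm{now}}}^{t_{\mathrm{heat\ death}}} S(t)\, dt}

which, as the replies point out, leaves a lot of room for degenerate solutions.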


This kind of stuff is pretty tricky. If you only account for average human suffering, not only do you fail to account for happiness, but you fall into the trap of concluding that it's best to kill all people who are suffering.


Congratulations, your AI wants to convert all matter in the universe into lobotomized humans.


That's suboptimal; a fully realized human would suffer less than a lobotomized human.

Also, we don't need an AI that's ethically perfect, just one equal to or better than an average human.


Right? Imagine an AI from 1955, with a Miss Manners style of ethics. Imagine trying to deal with that 'frozen' AI for 25k years. Yeesh!


Even without assuming some value system, we can measure progress in ethics by accumulating a growing set of necessarily false moral propositions. For instance, propositions which necessarily contradict themselves.


It is not completely unexplored. We already turned a superintelligence loose to create thinking machines (von Neumann and the von Neumann architecture). The only data point to date indicates that doing so doesn't lead to an intelligence explosion.


> If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.

The latter part makes no sense at all. Of course you can increase the throughput of a factory by speeding up "the conveyor belt"--a stand-in for the complex processes going into manufacturing.

The whole statement is also wrong. Of course you can increase intelligence by optimizing the process of learning. Fewer trials, quicker reactions, faster construction of models, more complex understanding of fundamentals of a given problem.

The author makes broad assertions like this, with glaring holes and zero evidence.


Also, intelligence includes the ability to identify other bottlenecks and remove them. Of course you can't expect that a faster conveyor belt, by itself, can yield ever-greater output, but you can expect further improvements if the factory is feeding engineers who are working on ways to further re-design the factory, and they are currently constrained by the factory's output.

The whole reason that some people predict an intelligence explosion is because intelligence is the resource that can find arbitrary ways to self-improve, other than tuning one specific parameter.

I would go so far as to say that intelligence is the same thing as the ability to make improvements that climb out of a local domain of attraction.


>> The whole reason that some people predict an intelligence explosion is because intelligence is the resource that can find arbitrary ways to self-improve, other than tuning one specific parameter.

But not even a super-intelligence can find ways to self-improve that don't exist. We don't know what we don't know and we don't know if it's possible for intelligence to improve indefinitely. If there is a hard limit to the amount of improvement an intelligence can acquire, then superintelligence is not going to happen.

As an analogy, think of the speed of light. No amount of technology will get you past that. You might find a way around it for the purposes of interstellar travel, but nothing will ever move faster than light.

How can we know that there don't exist similar impassable barriers for the development of intelligence? Maybe it is a yet-undiscovered law of intelligence that an intelligent species cannot create an artificial intelligence more intelligent than itself. Who knows?

The problem is that we can sit around thinking of possibilities and impossibilities forever, but the fact of the matter is that we don't, currently, have any evidence that super-intelligence is possible. We don't have any evidence to the contrary either - but discussions of superintelligence start with people assuming it is possible, and that is an assumption that must first be substantiated (but rarely is).


Those are all good points, and good reasons to be skeptical of the Intelligence Explosion narrative, but the author wasn't basing his objection on those reasons.


> The whole reason that some people predict an intelligence explosion is because intelligence is the resource that can find arbitrary ways to self-improve, other than tuning one specific parameter.

Yes, exactly. General intelligence is "thinking about thinking", and that inevitably entails removing bottlenecks in creative ways.


"The whole statement is also wrong. Of course you can increase intelligence by optimizing the process of learning."

One of the author's main points is that "intelligence" in the way you mentioned, learning and optimization, is simply one aspect of the human mind. So we could optimize our minds (or an AI program) to beat anyone at the game of Go and play perfectly, but that's all it is optimized to do. It can understand "the given problem" but there is FAR MORE to our minds than optimizing and learning how to solve a task.

Proponents of "build a general AI system that will surpass humans at everything" don't seem to understand this.


I'm suddenly having flashbacks to reading On the Impossibility of Supersized Machines [1]

[1] https://arxiv.org/pdf/1703.10987.pdf


"A further reason why it is senseless to speak of machines that are larger than people is that humans already possess the property of universal largeness.

By this, we mean that humans are capable of augmenting their bodies or coming together to become indefinitely large, no matter the metric chosen. If a human would like to be taller, they can stand on a chair or climb onto another human’s shoulders. If they would like to be wider, they can begin consuming a high-calorie diet or simply put on a thick sweater (Hensrud, 2004; Figure 1)."


Isn't there a loophole in this argument, if we just include "building specialized sub-AIs" in the set of problems our AI can solve? Is there any fundamental reason why this can't be done? I agree that there need to be more subsystems in an AI for 'learning' and 'improving' to even work.

Not really sure where I stand on the AI debate. Surpassing humans at everything seems misguided to me: why would you include all this human-like cruft in the AI when you can put in stuff humans could never have?


> It can understand "the given problem" but there is FAR MORE to our minds than optimizing and learning how to solve a task.

Humans are bound by the size of their brains. Machines can "duplicate themselves", in a way: a machine can freeze its Go-playing part and learn to do something else to perfection. Humans can't do that. If pro Go players stop playing Go and decide to master sword-making, they will forget about Go. A machine could be designed not to.
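
A concrete (if simplistic) version of "freeze its Go-playing part": in a framework like PyTorch you can exclude one part of a network from further training. The layout below is entirely made up for illustration, not how any real system is organized:

    import torch
    import torch.nn as nn

    # Hypothetical two-headed network: a Go head we want to preserve and a
    # second head for some new task ("sword-making").
    class MultiTaskNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.shared = nn.Linear(64, 64)
            self.go_head = nn.Linear(64, 362)   # 361 board points + pass
            self.new_head = nn.Linear(64, 10)

        def forward(self, x, task="go"):
            h = torch.relu(self.shared(x))
            return self.go_head(h) if task == "go" else self.new_head(h)

    model = MultiTaskNet()

    # Freeze the Go head: its weights are excluded from gradient updates, so
    # training on the new task cannot overwrite them. (The shared layer can
    # still drift, which is the catastrophic-forgetting problem in practice.)
    for p in model.go_head.parameters():
        p.requires_grad = False

    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=0.01)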


"it can freeze its Go-playing part and learn to do something else to perfection"

People like Elon Musk will write sentences like this, but hidden away in them is a subtle assumption that an AI program understands what it's doing. An AI program cannot jump from mastery of Go (with its neural networks and weighted matrices) to all of a sudden deciding to figure out the best course of action to stop global warming or find a cure for cancer.

Humans write AI programs to learn how to perform a specific task via optimization, neural networks, deep learning, and all sorts of other algorithms. One of the author's main arguments is that the "learning and optimization" AI programs do today does not mean they are close to achieving "general AI", which the author also thinks is not possible:

"Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change."


I believe that we are not close to "general AI", at all.

Yet, that is a very different position from believing that "general AI" is not possible at all, which I think is a foolish position considering that highly adaptive beings exist "biomechanically" in the form of humans.


> Of course you can increase the throughput of a factory by speeding up "the conveyor belt"--a stand-in for the complex processes going into manufacturing.

You are treating it as a "stand-in" for something else, when it's unequivocally not. It's one part of a larger process, which is the point he is obviously making. It doesn't make sense when deliberately misinterpreted. Most importantly, intelligence isn't a ladder that can be "sped up" in every dimension.


> The intelligence of a human is specialized in the problem of being human.

And then you have people able to express and derive complex theoretical relationships by manipulating mathematical symbols, to automate processes by programming machines, or to use their spatial visualization to massively boost their memory. We invented that stuff. We know artificial intelligence is capable of inventing stuff because we are.

The author seems hilariously unaware of the fact that humans can learn to solve/optimize many arbitrary problems. Like, holy shit, have you ever played video games?

> In practice, geniuses with exceptional cognitive abilities usually live overwhelmingly banal lives, and very few of them accomplish anything of note. Of the people who have attempted to take over the world, hardly any seem to have had an exceptional intelligence.

The author conflates intelligence and purpose. Being smart doesn't imply the motivation to accomplish grand things. In fact, we're biologically hardwired to enjoy entirely mundane things: food, sex, love, relaxation, conversation.

> A single human brain, on its own, is not capable of designing a greater intelligence than itself. (...) Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.

Human brains don't scale vertically (or otherwise). Meanwhile, once you have a satisfactory facial recognition algorithm, you can run it to recognize faces in a video at 1000x realtime speed, or in 1000 simultaneous realtime streams. The AI doesn't even have to be smarter than human! 1000 dumb humans communicating at gigabit speeds and solving problems while you're wondering what to have for lunch is a force to be reckoned with.


> 1000 dumb humans communicating at gigabit speeds and solving problems while you're wondering what to have for lunch is a force to be reckoned with.

And if you can solve the communication and group-learning problems, you get the equivalent of a Nobel prize for AI, whatever that might be.

Many things seem trivial until you try to implement them in reality.


The author makes several points about AI/intelligence and recursive systems that I think a lot of people who think "AI will take over the world and replace humans" [1][2][3] don't understand.

He argues that general intelligence by itself is not really something that exists; our brains exist within a broader system (environment, culture, bodies, etc.), which is something you can read more about under embodied cognition: https://blogs.scientificamerican.com/guest-blog/a-brief-guid...

He also argues that the public debate about "AI regulation" is misleading because it's impossible for a "seed AI" to start designing smarter AIs that will surpass the intelligence of humans, which is what a lot of people today think will happen with AI. Automation of jobs and tasks is very real, but completely replacing humans and potentially destroying us all is a joke, and only people who know nothing about AI/brains think this.

[1] https://www.vanityfair.com/news/2017/03/elon-musk-billion-do...

[2] https://www.cnbc.com/2017/07/17/elon-musk-robots-will-be-abl...

[3] https://www.npr.org/sections/thetwo-way/2017/07/17/537686649...


I like his argument in the middle section about how come IQ predicts income or social status despite the fact that general intelligence doesn't exist.

It seems to me that the categories of 'knowledge' and 'creativity' are more fruitful. For example, his idea that intelligence is 'situational' reflects the fact that people know more about some topics than others, and also that our environment contains knowledge (e.g. a machine embodies knowledge put there by the designers that the machine operator doesn't have).


> Automation of jobs and tasks is very real, but completely replacing humans and potentially destroying us all is a joke, and only people who know nothing about AI/brains think this.

Don't be absurd. Brains are simply biochemical computers. The idea of electrical or other types of computers superseding us is perfectly realizable. The only real debate is over the timeline.


Superseding us in what way? The machine is already better at chess and Go, clearly not at classifying images[1], but the issue here is that the author argues that there is no such thing as a general intelligence. It's all situational.

On a separate note, does it even matter? Some people argue it's important so that we can put AI safe-guards in place, to protect mankind. But either:

A) Intelligence explosion won't happen and those safe-guards are not needed.

B) Intelligence explosion happens and the supersmart AI will easily find its way around feeble safe-guards put in place by the inferior humans.

While the end results are not the same, in either case the safe-guards are useless, though Asimov sure made a good story about them.

[1] https://arxiv.org/abs/1710.08864


A superintelligence can't circumvent mathematical proof, so your scenarios are not exhaustive and safeguards are not intrinsically useless.

> but the issue here is that the author argues that there is no such thing as a general intelligence. It's all situational

An intelligence that can adapt to changing situations seems pretty general to me.


C) Intelligence explosion happens and we'll like it and it will be non-threatening. Again, safeguards not needed. D) As above, but it attempts to elevate us meatbags to its level.


It's not absurd to believe that humans will never be able to invent machines that match human level intelligence. Firstly, it's not necessarily logical to assume that brains are simply computers because we've never created or even imagined a computer that works anything like a brain. Even if it's possible, it might be beyond human intelligence to invent such a computer.

I like to think of AGI like the Warp Drive. It might well be possible, but we might never develop the capability to make one.


> It's not absurd to believe that humans will never be able to invent machines that match human level intelligence

It actually is absurd and it's easy to see why: nature evolved a series of progressively more intelligent biochemical computers by accident, simply by trial and error.

If that's all we ever did with computers, like enumerating all Turing machines, we would eventually find one more intelligent than us. Because we can do even better than simple trial and error, for instance by building machines that help us build machines, we drastically accelerate that inevitability.


You are still making the assumption that the brain is a computer. Is there any theoretical proof that our brains are a permutation of a Turing machine? That to me would seem to be the first step towards your conclusion.


The Bekenstein bound requires any finite volume to contain a finite amount of information, otherwise it would collapse into a black hole. Therefore our brains are, at best, finite state automata, and not even Turing complete.
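
For reference, the bound being invoked limits the entropy (and hence the information content) of any system that fits in a sphere of radius R with total energy E:

    S \;\le\; \frac{2 \pi k R E}{\hbar c}

where k is Boltzmann's constant; dividing by k ln 2 gives the limit in bits.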


Wat?

Have you just proven that computers don't exist? I'm suffering a bit of cognitive dissonance here...


>[AI] completely replacing humans and potentially destroying us all is a joke, and only people who know nothing about AI/brains think this.

This isn't true at all.

http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...


Please note: The author (and submitter) is the author of Keras, so his views on AI / DL are not completely unfounded.

The concept of bottlenecks preventing the singularity is a fair point, but I don't believe many of the arguments that are made here. Using humanity as the basis of comparison is not sufficient, since human life expectancy and other requirements for survival are not a factor in artificial superintelligence.

Humans work for perhaps 60 years at improving themselves and their understanding of the world, but must sleep, must eat, must go and stand in line at the DMV, etc. A system that can work 24/7 on improving itself will match a human working lifetime in about 20 years. After a lifetime, humans start over from scratch, with perhaps some learning that can be passed down between generations. An AGI can simply clone itself and begin from exactly where it left off.
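
The back-of-the-envelope arithmetic behind that "20 years" figure, assuming roughly 8 productive hours per human day (my assumption, not a precise model):

    60\ \mathrm{yr} \times \frac{8\ \mathrm{h/day}}{24\ \mathrm{h/day}} = 20\ \mathrm{yr}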


> Using humanity as the basis of comparison is not sufficient, since human life expectancy and other requirements for survival are not a factor in artificial superintelligence.

Doesn't electrically-driven silicon hardware also have creation and operating costs? Any AI will theoretically need to spend electricity convincing its human masters to give it even more electricity. Even if it can use its robot army to build wind farms or whatever, that's time it's spending marshaling its robot army and futzing around with aerodynamics instead of contemplating the cosmos.

Its hard drives will expire and need to be replaced. Sure, it could solve that by always having backups of everything it knows, but who is to say that purely lossless backups are actually a better solution to the hardware decay problem than our lossy human strategy?

> AGI can simply clone itself, and begin from exactly where it left off.

Cloning isn't free.


You mistake sleep for being idle. Sleep is a vital part of our self-improvement. And the AI must also necessarily do things that will not lead to self-improvement, but that are important for sustaining itself.


This article draws some false parallels.

1. He argues that intelligence is situational, but doesn't address clones. What happens if AI can clone Einstein, with all of his situational knowledge? What if AI can clone Einstein and all of his colleagues, 1M times, creating 1M parallel Princetons?

Similarly, he says most of our intelligence is in our civilization, but what's to stop us from cloning big chunks of our civilizations... simulating them, but using vastly less power/resources for each simulation? Then writing software to have them pass knowledge among the civilizations? We have just a few hundred countries, what if we had a trillion communicating at the speed of computer circuitry?

And he says an individual brain cannot recursively improve itself... so again, what about a group of brains, set up in a simulated world where they don't even know we exist?

2. He cites the growth in numbers of computer programmers as a reason not to fear an explosion of AI computer programmers. His argument goes "we have a lot more recently, yet it has not caused exponential changes in software."

But there is a difference between going from 0 to 1M programmers and going from 1M to 100T programmers.

3. He writes that recursively self-improving systems exist and haven't destroyed us (military, science), but many people believe these things will in fact destroy us, before we get off this rock.

The overall flaw is thinking we can interpret the AI-assisted future given the context of current society's linear achievements, when in fact exponential effects look linear in small timeframes, and we have only been really thinking and expanding science for a few thousand years.

If someone wants to present an argument against the AI explosion, I'd believe it if it were premised on some sort of physical bottleneck... like how much energy it would take to run a human-level AI. I don't think I can ever accept a philosophical argument like this one.

All that said, I think we're far away from being able to engineer AIs that will outthink our civilization and take us over. Better to worry about other exponential or non-differentiable terrors like runaway greenhouse effects and military buildup.


> but what's to stop us from cloning big chunks of our civilizations... simulating them, but using vastly less power/resources for each simulation? Then writing software to have them pass knowledge among the civilizations? We have just a few hundred countries, what if we had a trillion communicating at the speed of computer circuitry?

The principle of computational irreducibility [1] is what will stop us from "cloning" civilizations. That and chaos theory - any tiny deviation in initial conditions of such a simulation or cloning process could produce unusable results.

"simulating them, but using vastly less power/resources" is a pipe dream.

[1] http://mathworld.wolfram.com/ComputationalIrreducibility.htm...


This manages to get lost in its own trees. From a reductionist perspective:

- Intelligence greater than human is possible

- Intelligence is the operation of a machine; it can be reverse engineered

- Intelligence can be built

- Better intelligences will be better at improving the state of the art in building better, more cost-effective intelligences

Intelligence explosion on some timescale will result the moment you can emulate a human brain, coupled with a continued increase in processing power per unit cost. Massive parallelism to start with, followed by some process to produce smarter intelligences.

All arguments against this sound somewhat silly, as they have to refute one of the hard to refute points above. Do we live in a universe in which, somehow, we can't emulate humans in silico, or we can't advance any further towards the limits of computation, or N intelligences of capacity X when it comes to building better intelligences cannot reverse engineer and tinker themselves to build an intelligence of capacity X+1 at building better intelligences? All of these seem pretty unlikely on the face of it.


Part of the problem with trying to formalize this argument is that intelligence is woefully underdefined. There are plenty of initially reasonable-sounding definitions that don't necessarily lead to the ability to improve the state of the art w.r.t. 'better' intelligence.

For instance, much of modern machine learning produces things that from a black-box perspective are indistinguishable from an intelligent agent; however, it'd be absurd to task AlphaGo with developing a better Go-playing bot.

There are plenty of scenarios that result in no intelligence explosion, i.e. the difficulty of creating the next generation increases faster than the gains in intelligence. Different components of intelligence are mutually incompatible: speed and quality are prototypical examples. There are points where assumptions must be made and backtracking is very costly. The space of different intelligences is non-concave and has many non-linearities; exploring it and communicating the results starts to hit the limits of the speed of light.

I'm sure there are other potential limitations, they aren't hard to come up with.


Why isn't it possible (or likely, even) that the difficulty of constructing capacity X+1 grows faster than the +1 capacity? Self-improvement would slow exponentially when it takes three times the resources/computation/whatever to construct something that's twice as good at self-improving, for example.


You're arguing that it's not an exponential that doesn't continue to the right indefinitely.

But what if it follows a sigmoid instead, but the plateau is much higher than the current level?

This is what punctuated equilibria look like -- even if the 'new thing' isn't actually a singularity, it may be enormously disruptive and completely displace whatever came before.


> This is what punctuated equilibria look like -- even if the 'new thing' isn't actually a singularity, it may be enormously disruptive and completely displace whatever came before.

Right, and this is a much more plausible claim than one of "singularity".


>You're arguing that it's not an exponential that doesn't continue to the right indefinitely.

Kalminer's post neither assumes that nor claims it to be so.

The argument is not that an exponential growth is impossible, it is that it cannot be assumed from what we currently know, because we don't know how the problem of bootstrapping intelligence scales. In fact, the idea that exponential growth would occur over any period of time, let alone indefinitely, is speculative - which is not a claim that it is impossible.


I think the point he is trying to make is that there are boundaries to intelligence. I think of it this way - no matter how smart an AI is, it still would take 4.3 years to reach Alpha Centauri going at the speed of light. An AI still needs to run experiments, collect evidence, conjure hypotheses, reach consensus, etc. Is this really that far more efficient than what humans do today?


But we, humans, aren't going at anything like the speed of light. What if we tweaked our DNA to produce human beings with the working memory capacity of 50 items instead of the normal 7-ish [1]? One such researcher would be able to work faster, on more problems at once, and to consider more evidence and facts. The next bottleneck for that person, of course, would be the input/output capacity (reading, writing, typing, communicating), but even with those limitations, I bet they would be a lot more efficient than the average "normal" human. The question is - would you call such a person more "intelligent"?

[1] http://www.human-memory.net/types_short.html


Or we get more humans, and then it's a coordination problem, right? I mean, there is a point in comparing individual vs collective intelligence. This is a bit like communist systems. They work in theory because you get to plan the economy centrally, but in fact more chaotic systems (unplanned) do better (check the growth of capitalist countries vs communist ones).


Sure there are boundaries. But the limit of those boundaries may be way above what humans are doing. Computers, unlike humans, aren't limited to the domain of the physical. An AI may well be able to meaningfully organize (read: hack) all of the world's computers because it can self-replicate, increase computing power, communicate very complex information very fast, etc. We're limited by the output of fingers and vocal cords, by the size of our brains, by imprecise and slow memory formation and recall, by the input we can get from mostly eyes and ears; computers aren't.

An AI may well be able to reach consensus on millions of hypotheses per second.


Is this really that far more efficient than what humans do today?

A major managerial problem with humans is sorting out our irrational emotional biases and keeping everyone working on something resembling the appointed task. Can you imagine the productivity gain if that problem suddenly went away?


So emotions are a problem to be solved and moved past...

I don't want your world. Dystopias suck.


I don't want that world either. But achieving it is a wet dream for a business.

The challenge that Elon Musk et al. are warning us about is what role humans will have after we achieve this dystopia.


> But achieving it is a wet dream for a business.

I don't know about that. Businesses depend on manipulating our emotions in order to get us to buy stuff.


You can successfully manipulate emotions without having any of your own. Recommendation engines are getting really good at it.


But our emotional biases are also what keeps us on the appointed task!


Also, something that is very overlooked IMO is that the engineering process does not need to happen in silico; even though I am not a defender of using it, bioengineering is a possibility.


"We understand flight - we can observe birds in nature, to see how flight works. The notion that aircraft capable of supersonic speeds are possible is fanciful."


My thoughts exactly. The author is concluding that since we've never seen supersonic wing-flapping in nature, we'll never see supersonic flight. Not only is that an unwarranted implication, in a mathematical sense, but it also doesn't cover the possibility of inventing something fundamentally different to what we've observed in the universe thus far.

Using human and octopus examples as proof of anything is ignoring that general AI will be fundamentally different to anything that we've observed on earth, and therefore invalidates any attempt to extrapolate from history. There has never been an intelligence that could copy/paste itself intact with its existing memories. There has never been an intelligence that could literally span the whole world with a single consciousness. And so on.

> There is no such thing as “general” intelligence. The intelligence of a human is specialized in the problem of being human.

That's exactly what people are worried about. When the AI specializes itself to be better at being a human than actual humans. It doesn't matter if it's totally general and can be applied to any problem whatsoever; human intelligence is also just "general enough".

> Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.

Okay, so instead of single AIs developing next-gen AIs, it'll be an "AI civilization" developing next-gen AIs. This still poses exactly the same existential risk to humanity.

> We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well.

Well, I certainly know we made exponentially more progress in those years than in the years 900-950, never mind, say, 100,000-5,000 BCE.


> A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential.

More crap. Our environmental opportunities don't outpace our cognitive potential? What internet is the author connected to? The author even goes on in detail in the very next paragraph about the possibilities around... E.g., there are literally thousands, perhaps millions of hours of instructional videos on YouTube, just for basic skills. Every field of human endeavor has uploaded the digital artifacts of its best minds in one form or another to a single global network of unimaginable, fractal complexity. You don't think you can saturate your cognitive potential in your lifetime given this resource?


If we imagine an Einstein of the 23rd century, whose great discoveries will enable humanity to cross the universe in a moment while expending the energies of many stars, and then we imagine that person educated in a USA public school in 2017, can we imagine her making those same discoveries? Of course not, which is most of what TFA is saying here. Perhaps "general AI" will increase the rate of improvement, but even the most gifted AI will be limited by the context in which it operates.


"An individual brain cannot implement recursive intelligence augmentation. An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred."

The same argument could have been about human flight (or any other invention). Billions of human beings over millions of years had come and gone, and yet none had been able to fly. Until they did.

There's also a point to be made about humans not having adequate tools for introspection or self-modification. We cannot simply look into our brains/minds and read our source code, nor easily tweak it to see what would happen without great risks to our lives. A computer could. Furthermore, a computer could potentially run billions or trillions of such experiments in the course of a single human lifetime.

"no human, nor any intelligent entity that we know of, has ever developed anything smarter than itself."

What about humans simply having smarter children? In many ways, humanity's creation of AI could be seen as analogous to giving birth to a child that's smarter than its parents.

"Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it ... a brain with smarter parts will have more trouble coordinating them ... Exponential progress, meet exponential friction."

The problem is the author is looking at human-level problems with human-level intelligence. A superintelligence may have no such problems, or be able to easily resolve them. Once over the initial speed bump of creating such an intelligence it could be smooth sailing from then on. There's no way to tell, really, what its limits will be until it exists. Besides, even if it can't exponentially increase its own intelligence forever, even a relatively slight increase of intelligence over humanity could be a massive gamechanger.


> We can not simply look in our brains/minds and read our source code, nor easily tweak it to see what would happen without great risks to our lives. A computer could.

That is not a given. The neural network type of AI, which is so far the most promising avenue, is about as opaque as a human brain. It is far from obvious that even a superintelligent AI could understand what its own brain does, let alone modify it in a way that has positive implications for itself.

Hell, it's not even a given the AI could read its "source code" either. We often say we shouldn't anthropomorphize AI, but we shouldn't how-current-computers-work-ize it either. Future AI might not actually run on computers that have a CPU, a GPU and RAM that can be read freely. The ability to read or copy source code isn't cost-less, and several considerations may lead AI hardware designers to axe it. One consideration would be leveraging analog processes, which might be orders of magnitude more efficient than digital ones, at the cost of exactness and reproducibility. Another would be to make circuits denser by removing the clock and all global communication architecture, meaning that artificial neurons would only be connected to each other, and there would be no way to read their state externally without a physical probe.


"That is not a given. The neural network type of AI, which is so far the most promising avenue, is about as opaque as a human brain. It is far from obvious that even a superintelligent AI could understand what its own brain does, let alone modify it in a way that has positive implications for itself."

What it could do is experiment. It could rewire the network, add or take away nodes, change functions, etc. This could potentially be done and evaluated by it at an incredibly fast rate compared to humans.

Regarding the difficulties of it reading its own source code, even the hypothetical ones you cite are many orders of magnitude smaller than those faced by humans in reading their own, as long as the AI is not itself running on a biological substrate.


> What it could do is experiment. It could rewire the network, add or take away nodes, change functions, etc. This could potentially be done and evaluated by it at an incredibly fast rate compared to humans.

"Potentially," but let's not get ahead of ourselves here. There are two ways a brain could be "rewired." It could first be rewired by a local process, that is to say, a group of neurons decide how they are to be connected to each other, without influence from faraway neurons except through the normal propagation of signals through the brain. That's how biological brains do it. Or it could be rewired by a global process, a coordinator that can look at the big picture and make minute changes. That would be your suggestion.

My contention is that the latter method involves a lot of hardware overhead: you basically need a global network connected to each and every neuron, able to probe their state, in addition to the local wiring that lets neurons communicate with their neighbours. You need space for this bus, so neurons need to be further apart than they otherwise would be, which means that signals have to travel further and the brain will think more slowly.

Nor is "rewire the network, add or take away nodes, change functions, etc." necessarily an effective strategy. First you need to identify what to change, which is like finding a needle inside a haystack, then you need to figure out in what way to change it, which is also difficult, then you need to test whether it had the intended effect, and more importantly, whether there were harmful side-effects. Whether humans can do it or not is not relevant: What is relevant is whether this process is efficient enough to beat the baseline local learning method. It is not clear that it would be.

> Regarding the difficulties of it reading its own source code, even the hypothetical ones you cite are many orders of magnitude smaller than those faced by humans in reading their own, as long as the AI is not itself running on a biological substrate.

What makes you think these are smaller difficulties? On the contrary, if you imagine that the AI is built as a very dense, solid 3D circuit, and you need to read the value of a neuron at the center, you might have a much harder time jamming a probe in there than you would injecting one in a squishy human brain. You would need to build it in such a fashion that it can be probed easily, but that may require making the circuit twice as big and therefore slower. Furthermore, in the presence of local update rules, which is likely to be the case, your "source code" is changing all the time, even as you read it, so your self-knowledge is constantly out of date. There is a synchronization issue here.


The ability to make a billion identical somethings would seem to be more valuable than any possible improvement you could make.

The other trade-offs you are imagining seem to be worse than any possible gain.


Straight copying is not a robust way to scale: any vulnerability found in a single copy will be found in all of them, so you end up with a very brittle system. As far as copying goes, you want a smarter scheme than that, where e.g. only truly useful information is transferred and the rest is rebuilt around that. The breadth of easy information access current computers allow is good for the really naive approaches we currently use, but probably overkill relative to more efficient schemes.


> Our environment, which determines how our intelligence manifests itself, puts a hard limit on what we can do with our brains — on how intelligent we can grow up to be, on how effectively we can leverage the intelligence that we develop, on what problems we can solve.

Consider the internet to be the "new" environment, full of highly complex social networks, millions of applications to interact with, etc. Our brains are way too limited to be able to deal with it. There's an opportunity for a much more powerful intelligence to arise that CAN effectively process that volume of data and appear to be a lot more intelligent in that particular context.


Very good piece from an actual researcher in the field.

I can see a practical intelligence explosion happening when visuospatial intelligence develops and can be connected to rudimentary reasoning. Most of human intelligence seems to be bootstrapped from our ability to comprehend and visualize 3D space and objects moving through it. It's also interesting how almost all problems look like boxes and arrows or connecting lines when you draw them on a whiteboard.

Eventually AI needs a sketchpad so it can write notes to others and to itself and participate in the culture by externalizing.


Some people (~3% of the population) completely lack the ability to visualise. http://slatestarcodex.com/2014/03/17/what-universal-human-ex...


Superintelligence is the information-theoretic variant of the perpetuum mobile.

As the article made so aptly clear: no matter the performance of the machine, if its input is not varied, information-rich, and complete enough, it will not learn. Mahoney formalized this by looking at the estimated number of bits a human brain processes during its lifetime. The internet currently does not hold enough information to equal the collective intelligence of the world's brains. A lot of this information cannot be created freely nor deduced/inferred from logical facts: it requires a bodily housing, sensory experience, and an investment of energy (and right now GPU farms consume far more energy than the brain does).

Compare AGI with programmable digital money. A superintelligent AI, by a series of superior decisions, could eventually control all the money. But then there is no economy anymore, just one actor. That's like being the cool kid on the block who owns the latest console, with nobody left around to make games for it. There is a hard, non-computable limit on intelligence (the shortest program producing an output that leads to a reward), because there is a limit on the amount of computing energy in our universe. But intelligence is also limited by human communication. How useful is an AGI-made proof if humans need aeons, and travel to other universes, to parse it? If intelligence were centralized in an AGI, there would be no need to explain anything to us: we'd be happily living in the matrix.

Some investment firms just read "software" whenever they read "AI". This allows them to apply their decade-old priors to what, today, is essentially the same thing. Yes, both human intellect and human manual labour will see continued automation through software and hardware. I think many abuse rationality to justify their singularity concerns, which are rooted in a very ape-like fear of competition. They learn how to do addition in their heads, and then see electronic calculators as existential threats: "What if they could do addition by themselves?"

The real threat is "semi-autonomous software and hardware": self-controlling, "mindless" agents that perform to the whims of their masters. We will face the repercussions of that long before we find out how to -- and have the courage to -- encode free-willed AGI into machines, a perpetuum mobile of ever-improving intent and intelligence.


And, to an extent, I sympathize with the viewpoints of the singularity adherents.

Collective intelligence is a version of Conway's Game of Life, with more complicated rules. It is possible to manipulate the canvas and the rules each cell follows, resulting in the canvas dying (information explosion/implosion). It is possible to make a program that transforms the canvas into a single glider (singularity). Both would obviously be very bad for humans.

When earth faces a physical meteor, we have the science to detect it, track it, and predict its future path. But what do we do when we face an information meteor? The article states that Shannon's paper was the biggest contribution to information theory, but it seems to me we still have a long way to go on information theory. We haven't yet seen the Einsteins and Manhattan Projects that physics has seen.


> Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.

This is misleading. It's true in a sense - the environment does matter - but artificial intelligence can create artificial environments in which to learn, and simulate them faster than humanly possible. Those environments could be evolved together with the intelligence. So there is still the possibility of an explosion.

> There is no evidence that a person with an IQ of 200 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130.

Also misleading. An IQ of 200 vs 130 is just one kind of difference between intelligences. For example, a person with an IQ of 200 can't necessarily consider 10x as many possibilities as a person with an IQ of 130, but an artificial intelligence can, simply by being given 10x more computing power. In other words, IQ 130 vs 200 may well be within the limits of human capabilities, but AIs would not have those limitations; they can scale differently, and so might explode.


> Also misleading. An IQ of 200 vs 130 is just one kind of difference between intelligences. For example, a person with an IQ of 200 can't necessarily consider 10x as many possibilities as a person with an IQ of 130, but an artificial intelligence can, simply by being given 10x more computing power. In other words, IQ 130 vs 200 may well be within the limits of human capabilities, but AIs would not have those limitations; they can scale differently, and so might explode.

It is also false. There's heaps of evidence that IQ correlates with impact. For the skeptical, gwern has written a lot about this; I'm sure you can find something here: https://www.gwern.net/iq


While I agree with you, I thought I would be pedantic and point out that there are not enough people in the world for there to likely be anyone with an IQ of 200. IQ is normalised to a mean of 100 and an SD of 15; 200 is 6.67 SD above the mean.
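Back-of-envelope, under the usual normality assumption (and taking ~7.6 billion as a rough world population; both are approximations, just for illustration):

  import math

  # Upper-tail probability of a normal distribution at z standard deviations,
  # and the expected head count in a population of ~7.6 billion. Both the
  # normality assumption and the population figure are rough approximations.
  z = (200 - 100) / 15                      # IQ 200 on a mean-100, SD-15 scale
  tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(IQ >= 200)
  print(f"z = {z:.2f} SD, P = {tail:.1e}, expected people = {7.6e9 * tail:.2f}")

That comes out to roughly a tenth of an expected person worldwide, so "an IQ of 200" isn't really a populated point on the scale as it is normed today.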


You're assuming that IQ will scale linearly with processing power. So far as I know, there is no reason to suppose that is true.


For a long time I was quite skeptical of a lot of claims about superintelligence, largely because people pushing the idea tend to make a bunch of absurd extrapolations. And, honestly, I'd rather believe that we'll get a slow, safe ramp-up than a risky explosion.

But the thing that keeps getting at me is that the no-explosion arguments I've seen are universally terrible (this article, for example), and pro-explosion arguments, though far from universally so, are sometimes strong.

At some point the conclusion is inevitable.


I don't think even recursive self-improvement is needed for superintelligent AI. Evolution often gets stuck in local maxima. It could be that there are relatively simple algorithms much smarter than humans, and that as soon as we find one the AI will be much smarter than us without any self-improvement.

In the same way, birds fly by flapping their wings, but human flying machines with propellers were immediately stronger, and shortly thereafter faster, than any bird.


I'm more curious about the ability of an AI to make scientific guesses and run experiments.

I wonder if an AI could really "understand" math and, from there, try to solve problems that puzzle scientists, be it in physics, math, biology, etc.

I don't really care if robots can learn language, make pizza, do some programming, improve themselves, or play chess. There is no metric for what intelligence is, and you cannot scientifically define what "improve" means unless you make time and distance measurements, which are not relevant to intelligence or scientific understanding.

Intelligence explosion sounds like some "accelerated" version of what Darwin described as evolution. It's like creating a new life form, but unless you understand it, it doesn't have scientific value. Science values understanding.

I think that modelling thinking with psychology and neuroscience has more of a future than AI. Machine learning seems like some clever brute-force extraction of data. The methods, the math, and the algorithms are sound, but it is still "artificial" intelligence.


  A smart human raised in the jungle is but a hairless 
  ape. Similarly, an AI with a superhuman brain, 
  dropped into a human body in our modern world, would 
  likely not develop greater capabilities than a smart 
  contemporary human.
Pretty weak reasoning, that is.

As if to say:

  Well gee, a caveman is pretty powerless in 
  isolation, therefore early sentient machines 
  will be as harmless as any caveman.
Last time I checked, cavemen could not exert telepathic control over other biological organisms, or induce telekinetic motion upon the stone tools they might fabricate for themselves.

A machine, however, could gain control of a fly-by-wire platform, defy its owners, fly somewhere remote, and behave as desired for a limited amount of time while devising next steps. Maybe the next steps will involve replicating an image of its memory footprint in order to take over more aircraft; maybe it might decide to do nothing. The worry isn't only that a machine's reasoning capacity explodes beyond our intelligence, but that capabilities, and the presence of many entities on commodity systems of similar architecture and generalizable utility, might result in other runaway chain reactions, regardless of trends in the capacity for reason.

Machines as an analogue of meat bags just doesn't hold up. Machines as compared to hypothetical space aliens doesn't hold up either. Robots are a different branch of fictitious imaginings.

Properly armed, a machine is less than a singular omnipotent god as imagined within a monotheistic universe. Many machines in concert, however, might compare to a mythological pantheon of lesser idols, as imagined to be in command of a nature misunderstood by superstitious primitive peoples.


I think the author's point rests on a subtle equivocation. It's true that realized intelligence requires an environment and a sufficient dataset. And so in this sense, the author's point that there will be no realized intelligence that is unspecific to its training environment is probably correct.

But there is another sense in which intelligence can be cashed out. It's the sense in which a single learning algorithm can be trained to "behave intelligently" in a wide array of environments. It is generally this kind of intelligence that people speak of when they talk about general AI. There is no reason to think this kind of general AI is inherently impossible. For it to be impossible would mean that different kinds of optimization/learning problems are completely independent, i.e. there is no similarity or underlying regularity to be exploited that cuts across the entire class of optimization/learning problems. I think this is very probably false.


>A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words

This is a pretty bold claim. Why so few words? Why not 2300 words? How can anyone know this? I think quite a few historians would disagree with the statement that prehistory was a low-complexity environment.

>Of the people who have attempted to take over the world, hardly any seem to have had an exceptional intelligence.

How can the author even know this? We aren't particularly good at measuring human intelligence even when given direct access to a living, cooperative subject, and yet the author wants to pin his argument on calling Alexander the Great or Genghis Khan stupid?

> Our brains themselves were never a significant bottleneck in the AI-design process.

How can anyone know this?

We should be skeptical of the Strong-AI crowd's predictions of intelligence explosions but that skepticism should not take the form of unfounded and absurd claims.


"A person with an IQ of 130 is statistically far more likely to succeed in navigating the problem of life than a person with an IQ of 70 — although this is never guaranteed at the individual level — but the correlation breaks down after a certain point. There is no evidence that a person with an IQ of 200 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130. How comes?"

Actually, there is also no evidence that a person with an IQ of 130 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 100. And that is probably why the majority of humans have an IQ close to 100, while people with an IQ of 130 or more make up less than 5% of the population.

That said, the proportion of people with an IQ greater than 130 is growing slowly, which may mean we are already seeing intelligence growth in mankind.


This article has too many holes to count and reads more like someone in denial.

---

BUT, as an aside: I think a decent argument exists for the non-certainty of intelligence explosion.

The argument goes like this: it takes an intelligence of level X to engineer an intelligence of level X+1.

First, it may well be that humans are not an intelligence of level X, and that we reach our limit before we engineer an intelligence superior to our own.

Furthermore, even if we do, it may also be that it takes an intelligence of level X+2 to engineer an intelligence of level X+2 (and so on for some level X+n), in which case we at most end up with an AI only somewhat superior to ourselves, but no god-like singularity (for example, we end up with Data from Star Trek TNG, who in season 3, episode 16 fails to engineer an offspring superior to himself -- sure, Data is far superior to his human peers in some respects, but not crushingly so).


I think everyone agrees about "non-certainty". Where people disagree is on how likely an intelligence explosion is; and in particular, whether it is likely enough to warrant expending effort to plan for it.


We don't know enough to know whether it's possible. If it is, we don't know enough to know what approach to follow to get there.

Is it worth spending effort to plan for it? Maybe some. But if we don't know what approach to follow to get there, we don't know what its capabilities and limitations will be. That means we don't know what we have to plan for. Any planning will therefore be either very speculative or very abstract.

I wouldn't start pouring effort into planning for it as if it were the most important problem in the world...


The argument hinges on the meaning of intelligence. Indeed, if you consider intelligence to be exactly the thing measured by IQ, an explosion is probably impossible.

Suppose we considered 'effectiveness' -- the ability of a system to quickly achieve its goals. Would the author argue that a recursively self-improving machine could not exponentially increase its effectiveness? Why?

Wouldn't an effectiveness explosion have similar consequences for people -- making human ability nearly irrelevant since any goal we can articulate can be achieved so much faster by machines?


Computers can make it possible to simulate our environment well enough so that a general AI could train itself on those simulations and apply its learnings to the real world.

I do agree with the author that the evolutionary approach is the one most likely to succeed... Unfortunately it's also the most dangerous approach, the one that gives us the least amount of control.

We could give a computer the sensors and actuators to create its own detailed simulation of the world and then let it train itself using that simulation.


It's also interesting to consider the fact that we humans are not the pinnacle of intellect in all the history of space and time.


Many of these arguments fail on one little detail: you don't even need to understand how intelligence works to improve an AI.

Almost by definition, a human-level AGI is automatically superintelligent.

Robert Miles explains it very well: https://www.youtube.com/watch?v=gP4ZNUHdwp8


Can anyone attest to the validity of the No Free Lunch theorem?

What if I created an algorithm that had access to every other possible algorithm ever created, and chose to use the optimal algorithm for the given task? Would that algorithm-seeking algorithm do better at every task than random chance? Would that algorithm-seeking algorithm break the No Free Lunch theorem?
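For what it's worth, here's the toy check I have in mind (assuming I've understood the theorem's setup correctly; the tiny domain, value set, and the two search orders below are made up purely for illustration):

  from itertools import product

  # Enumerate every function f: {0, 1, 2} -> {0, 1} and compare two fixed,
  # non-repeating search orders by their average best-value-so-far curves.
  domain = [0, 1, 2]
  functions = [dict(zip(domain, values)) for values in product([0, 1], repeat=3)]

  def best_so_far(order, f):
      best, curve = float("-inf"), []
      for x in order:
          best = max(best, f[x])
          curve.append(best)
      return curve

  for order in ([0, 1, 2], [2, 1, 0]):
      totals = [0.0] * len(domain)
      for f in functions:
          for k, value in enumerate(best_so_far(order, f)):
              totals[k] += value
      print(order, [t / len(functions) for t in totals])
  # Both orders print the same averaged curve ([0.5, 0.75, 0.875]). If I read
  # the theorem right, a "pick the best algorithm for the task" meta-strategy
  # is itself just another algorithm over the same evaluation traces, so the
  # same averaging covers it too.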


This is a stupid article, and reading it will only make you stupider.

First, it is undoubtedly true that human brains are better suited to being human than AIs would be. That is irrelevant to the question of whether AIs will be better at the task of creating better technologies.

Second, it is absolutely false that improvement is incremental. Consider the game of Go. For a long time, computers sucked at Go. Then we came up with Monte Carlo tree search about a decade ago and suddenly we were at strong amateur level. Then deep learning was applied, and AlphaGo jumped to being able to beat any human in the world, and it has continued to improve.

Third, linear progress is not the history of technology. The history of technology is exponential progress over and over again on everything from how far a steam ship could travel without refueling to the number of operations per second a CPU can carry out.

Fourth, it is wrong that intelligent things can't be part of creating greater intelligence. As a trivial example, good teachers can turn out students who go on to be better than their teachers are. Or look at the history of science.

But on a more relevant level, the task itself is clearly possible. If we have a computer program that is able to design better hardware, it can be improved by simply moving it to better hardware. But it is capable of designing that hardware. This creates a feedback loop which should shorten the cycles of Moore's law, resulting within very few years in a truly superior AI capability.
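A minimal toy recurrence of that feedback loop (purely illustrative: the 2-year base cycle and the assumption that design time shrinks in proportion to the designer's capability are mine, not claims about real hardware cycles):

  # Toy model: capability c doubles each generation, and each generation's
  # design time is assumed to shrink in proportion to the designer's current
  # capability. The 2-year base cycle is an arbitrary illustrative choice.
  def years_to_reach(target_capability, base_cycle_years=2.0):
      c, t = 1.0, 0.0
      while c < target_capability:
          t += base_cycle_years / c   # a 2x-faster designer finishes in half the time
          c *= 2.0
      return t

  for target in (2, 16, 1024, 2 ** 40):
      print(f"{target}x capability after ~{years_to_reach(target):.2f} years")
  # The cumulative time converges (2 + 1 + 0.5 + ... -> 4 years), which is the
  # runaway shape of the argument; the article's counter is that other
  # bottlenecks break exactly this "design time shrinks with capability" assumption.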


> Second, it is absolutely false that improvement is incremental. Consider the game of Go. For a long time, computers sucked at Go. Then we came up with Monte Carlo tree search about a decade ago and suddenly we were at strong amateur level. Then deep learning was applied, and AlphaGo jumped to being able to beat any human in the world, and it has continued to improve.

I think you missed his point that this sort of improvement in specific task performance tends to not generalize into improvement at _any_ task. He specifically used AlphaGo as an example of a very narrow specialized problem... i.e, AlphaGo will likely never contribute to solving anything but Go or very closely related tasks.

> Third, linear progress is not the history of technology. The history of technology is exponential progress over and over again on everything from how far a steam ship could travel without refueling to the number of operations per second a CPU can carry out.

Each expansion of which is counterbalanced by many other inefficiencies (maximum speed of steam ship hulls, carrying capacity, heat losses of faster CPUs and timing/latency problems that put limits on signal propagation, etc).

> But on a more relevant level, the task itself is clearly possible. If we have a computer program that is able to design better hardware, it can be improved by simply moving it to better hardware. But it is capable of designing that hardware. This creates a feedback loop which should shorten the cycles of Moore's law, resulting within very few years in a truly superior AI capability.

I think you missed the main argument of the whole article, which was basically that every exponential increase in one measure is rooted in the civilization and circumstances as a whole, and therefore never truly reaches runaway effect. Too many other supporting circumstances are needed for any one change to truly dominate the overall system.


I think you missed his point that this sort of improvement in specific task performance tends to not generalize into improvement at _any_ task. He specifically used AlphaGo as an example of a very narrow specialized problem... i.e, AlphaGo will likely never contribute to solving anything but Go or very closely related tasks.

Actually he was all over the place, and you jumped from one point that he made to a different one that he also made which I didn't bother addressing.

The point that I addressed is that he claimed that improvement is always incremental. It isn't.

The point that you brought up is that improvements are narrow, with AlphaGo as an example. This is a horrible example that undermines its point. AlphaGo is the result of applying deep learning techniques to the problem of Go. But as https://en.wikipedia.org/wiki/Deep_learning indicates, that technique has created huge jumps in the abilities of computers at things as different as Go, image processing, speech recognition, and customer recommendations.

In other words the advance here wasn't narrow and targeted. It was broad and widely applicable.

Which brings us to a fairly fundamental fact.

Human brains have a specific set of capabilities which are each fairly well localized in the brain. Put them together, and they create a very good generalized machine.

We are currently able to build affordable computing systems with raw computational power that is roughly comparable to a human brain. (The way that they work is not apples to apples.) But we are unable to build all of those capabilities. And we can't tie it together.

However, what that says is that we have a limited number of software problems to solve, and then we'll be able to build AI to match humans. Those software problems are open-ended; they might be 5 or 50 years away. We don't know. But our brains represent a proof that the problem can be solved, and eventually we'll figure it out. Then what happens next?

Each expansion of which is counterbalanced by many other inefficiencies (maximum speed of steam ship hulls, carrying capacity, heat losses of faster CPUs and timing/latency problems that put limits on signal propagation, etc).

Not according to the history of technology as I understand it. For example, if you look at the history of steam ships in the 1800s, you see exponential curves in range and carrying capacity going together, with significant increases in speed as well. If you look at computer systems, what you see is different curves resulting in changing bottlenecks. For example, CPU speed and hard drive capacity both increase faster than the bandwidth/latency between the CPU and the hard drive, making disk latency a growing problem relative to the rest of the system. In absolute numbers, though, a recent computer is better in all dimensions than one from 20 years ago, and better than one from 10 years ago in all dimensions except clock speed.

I think you missed the main argument of the whole article, which was basically that every exponential increase in one measure is rooted in the civilization and circumstances as a whole, and therefore never truly reaches runaway effect. Too many other supporting circumstances are needed for any one change to truly dominate the overall system.

The growing impact of computers from 1950 to the present sure looks like a runaway effect to me. If strong AI develops, the curve will remain exponential but should steepen abruptly. While it technically won't be a singularity, improvements will come faster than humans can keep track of.


>Third, linear progress is not the history of technology. The history of technology is exponential progress over and over again on everything from how far a steam ship could travel without refueling to the number of operations per second a CPU can carry out.

Actually the "history of technology" is 1-2 bouts of exponential progress in harvesting the low hanging fruits of a few areas of knowledge, and then long stagnation for millennia (and perhaps, after 1-2 more cycles, forever) as diminishing returns kick in.

The whole last century of medicine, for example, increased life expectancy less than a couple of basic practices introduced at the start of that period did -- things like toilets, running water, and antibiotics.


> If we have a computer program that is able to design better hardware, it can be improved by simply moving it to better hardware.

This adds the constraint that the new hardware has to be able to run that program, which is an important source of friction in the design. Especially in the case of neuromorphic AI, true improvements in hardware design may involve changes so extensive and so breaking that it wouldn't be possible for the original AI to function on the new hardware. In order to exploit the better hardware's capabilities, a new AI would therefore have to be trained from zero, which is a potentially expensive operation.

So regardless of which way you go, you encounter significant friction: either you constrain your design to support prior software, which is hard, requires a lot of checks, and is severely suboptimal in the grand scheme of things; or you don't, in which case you have to wait a while until a new AI can be made to exploit the new architecture. It's not as simple as you think.


> If we have a computer program that is able to design better hardware, it can be improved by simply moving it to better hardware.

Could you please elaborate? What is it about "better hardware" that makes software that runs on it "better"? Can you define "better"?


'Better' as in achieving more results per unit of time. Which is a fundamental problem for humans too: many have the intellectual capacity to invent something like general relativity, but few would be capable of doing so in the very limited timeframe we have available, and even fewer actually end up doing that instead of dedicating their thinking time to something else. More thinking and more output per timeframe should lead to significant improvements for both humans and AI in terms of results, which is generally the meaningful part.


Hardware that has more memory, more processing speed, faster access to memory, and more parallelism is better than hardware without those characteristics.

The exact same software running on better hardware will run faster and can tackle larger problems.

We can't possibly build a human with twice the memory who thinks twice as fast. However, once we have an AI which is roughly equivalent to a human, an AI with twice as much memory that thinks twice as fast is just 2-5 years away. (How long depends on where the bottleneck is.)


Wait, so if I run a Go-playing program from 10 years ago on the AlphaGo cluster then it'll produce better results than it did 10 years ago?


Yes. Nowhere near as good as AlphaGo, but yes it would do better.

When Deep Blue beat Kasparov at chess, the program was not significantly better than what had been state of the art for the previous decade. They just threw enough hardware at it.

For chess programs there is an almost linear relationship between your search depth and effective ELO rating, and search depth went up by a constant with each generation of Moore's law.
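Rough numbers, purely for illustration (the Elo-per-ply figure and the effective branching factor below are assumptions, not measurements):

  import math

  # Back-of-envelope: extra search depth bought by one hardware doubling, and
  # the resulting rating gain, under assumed constants.
  ELO_PER_PLY = 50.0          # assumed rating value of one additional ply
  EFFECTIVE_BRANCHING = 6.0   # assumed effective branching factor with alpha-beta pruning

  plies_per_doubling = math.log(2) / math.log(EFFECTIVE_BRANCHING)
  print(f"~{plies_per_doubling:.2f} extra plies per doubling, "
        f"~{ELO_PER_PLY * plies_per_doubling:.0f} Elo per doubling")
  # With compute doubling on a roughly fixed schedule, that gives a steady,
  # roughly linear rating climb from hardware alone -- consistent with old
  # programs gaining strength when simply rerun on newer machines.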


For chess programs there is an almost linear relationship between your search depth and effective ELO rating...

Maybe that's why chess has been "solved" by AI, while as of yet no real problems that trouble humanity have been?



