It starts by claiming that there is no such thing as general intelligence. What specialized intelligence, then, is human intelligence? It's specialized for "being human". The author is apparently unaware that this tautological response eliminates the distinction between general and specialized intelligence, as one could just as validly (or vacuously) say that a superhuman intelligence is specialized in being what it is and doing what it does. The author has invalidated the hook on which he had hung his argument.
A lot of column-inches are expended on repeatedly restating that animal intelligences have co-evolved with their sensorimotor systems, which is a contingent fact of history, not a fundamental necessity for intelligence (as far as we know; but then the whole article is predicated on the feasibility of AI.) He raises the 'brain in a vat' trope, but no-one is suggesting that AIs must be disconnected from the external world. Furthermore, this line of argument ignores the fact that many of the greatest achievements of human intelligence have come from contemplating abstract ideas.
When the author writes "most of our intelligence is not in our brain, it is externalized as our civilization", he is confusing the achievements of intelligent agents for intelligence itself. When he writes that "an individual brain cannot implement recursive intelligence augmentation" he is confusing a limit on human capabilities for a fundamental limit on intelligence itself...
I am far from convinced that the singularity must follow from achieving human-level artificial intelligence, as we don't know how to get to the starting line, let alone know how the problem of bootstrapping intelligence scales, but the arguments presented here do nothing to persuade me that it is impossible.
"Specialized at being human" - this is a deep intuition. We are reinforcement learning agents that are pre-programed with a certain number of reward responses. We learn from rewards to keep ourselves alive, to find food, company and make babies. It's all a self reinforcing loop, where intelligence has the role of keeping the body alive, and the body has the role of expressing that intelligence. We're really specialized in keeping human bodies alive and making more human bodies, in our present environment.
The author puts a hard limit on intelligence because intelligence is limited by the complexity of the problems it needs to solve (assuming it has sufficient abilities). So the environment is the bottleneck. In that case, an AGI would be like an intelligent human, a little bit better than the rest, not millions of times better.
If it were valid, one could take the argument in your last paragraph to draw a line at any point in the evolution of intelligence and say "this is as good as it gets."
But the question is not, "is intelligence explosion possible?" The question is, "explode into what?"
I think there are two usages of the term 'general intelligence' floating around:
(1) the ability that humans possess (but which animals don't) to create universal theories,
(2) the measure of one human's general cognitive ability or potential (in all fields) relative to another human's.
Note that IQ tests are concerned with (2). The quest for AGI is concerned with (1), though the additional prediction of intelligence explosion or singularity assumes the validity of (2).
I think the author would claim that (1) exists but (2) doesn't. He explains the predictive power of IQ tests by claiming that general intelligence is a threshold ability and that people who score highly on an (arbitrary) test are more likely to have exceeded that threshold. Beyond the threshold, achievement is limited only by other factors.
But, regardless of this, I think solving problems requires creativity, not intelligence. Creativity seems to be independent of knowing how to do IQ tests. It's also, pace the author, independent of the environment. The main limiting factor is whether you want some particular knowledge.
>sufficient number of communicating, above-average intelligent agents could expand the intellectual environment
Haven't we already got that, with the internet?
Groups tend to be dominated by groupthink which is why creative individuals are aloof.
Nevertheless, a young group of AGIs would be fairly isolated from humanity simply by virtue of being non-human. So, though their starting point would be determined by the state of our knowledge at their birth, they may well make rapid progress for a short period, rather like the Renaissance, or the USA in the late 1800s when the nation was young and expanding. Then presumably they would fall prey to groupthink and pessimism, just as most adults and nations eventually do. Progress would slow considerably.
>>sufficient number of communicating, above-average intelligent agents could expand the intellectual environment
>Haven't we already got that, with the internet?
Up to a point (with all the intelligent agents being human, as we don't have AGI), but it would be rather surprising if what we see now just happens to be a hard upper limit.
I am not necessarily convinced by all of the points you raise, but they are certainly reasonable in the context of arguing that singularity-like events are by no means inevitable (a point of view that I share). What the author is claiming, however, is that it is impossible, which means that he has imposed on himself a burden of proof (that goes beyond plausibility) for a number of conjectures.
It's not a tautology. "Generality" and "specificity" are artifacts of the human experience. What is tautological is to say, "it's general to me, therefore it is general."
You think that more and more progress comes by way of more and more optimization. This is not the whole story: accident is the missing ingredient. Humans - as well as all life on earth - have a knack for creating more and more problems. It is this never-ending fountain of new, accidental problems that allows for what appears to us to be a chain of "progress" stretching into the past.
Our "generality" is in fact a hairball collection of specific functions that have accreted into the human animal over millions and millions of years. Some abstract Java class called `Agent` with an `.optimize()` method hanging off of it simply does not have that context.
If you want a really high quality, generally intelligent function in silicon though, it's hard to beat the XOR function ;)
I think the author just means "the skills needed to pilot a human body on Earth in a normal human social environment". I see no tautology, and the preceding sentence about octopuses makes the author's meaning here pretty clear.
Oh wait. Those do not fall under such a narrow definition of intelligence.
No reason why a GAI wouldn't have a similar instinct or capabilities.
The author also misapplies the no-free-lunch theorem. It pertains to algorithms, not computational ability, and we are not at all sure that true intelligence is an immutable algorithm. (Human intelligence is not, because of genetics.) There is no real equivalent theorem for self-modifying algorithms (or changing objective functions), and if there were, it would run afoul of the halting problem.
The theorem also has at least one known hole: it does not apply to coevolutionary algorithms (such as parts of AlphaGo, to take an easy example).
While the environment puts a hard limit on intelligence, the limit is extremely high, on the order of 2^300. The optimizer does not have to go for the simplest algorithm that solves the problem. And, to be honest, we do not know enough about the informational capacity of matter in quantum states.
How do we know that? We only know of intelligences that co-evolved with their sensorimotor systems etc., so how do we know that's not the only way to do it?
"The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false."
From the little that we know as of today I call bullshit.
Even AlphaGo, which is arguably a quite primitive AI, managed to achieve superhuman performance in a ridiculously short amount of time, just by playing against itself.
And it simply crushed the collective effort of human players who had honed their strategies for literally millennia in what is considered one of the most difficult games.
I don't think the author has any insight at all on what a general AI will be.
There's a lot that is hardly substantiated in the OP, but the truth is that just because you have a machine that's smart enough to play Go better than any human being, doesn't mean you can anticipate a machine that can learn to play the bassoon better than any human being.
The argument about the no-free-lunch theorem is informative and one of the few good points in the article. An algorithm that is good at X is eventually going to be pretty bad at Y. A superintelligence would have to beat humans in all possible X, even the ones it would be really bad at. And that sounds like an impossibility.
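To make the no-free-lunch flavor concrete, here's a toy sketch in Python (the functions and parameters are my own, purely for illustration): a greedy hill-climber that excels on a smooth landscape gets nowhere on a needle-in-a-haystack one, because its very specialization is what makes it good.

```python
import random

def hill_climb(f, lo, hi, steps=1000, seed=0):
    # Greedy local search: a strategy specialized for smooth, unimodal landscapes.
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)
        if lo <= cand <= hi and f(cand) > f(x):
            x = cand
    return x

smooth = lambda x: -(x - 3.0) ** 2                         # one gentle peak at x = 3
needle = lambda x: 1.0 if abs(x - 7.123) < 1e-3 else 0.0   # flat almost everywhere

best_smooth = hill_climb(smooth, 0.0, 10.0)   # climbs straight to the peak
best_needle = hill_climb(needle, 0.0, 10.0)   # never gets a gradient to follow
```

The same search strategy, on the same budget, solves one problem and makes zero progress on the other - which is the trade-off the theorem formalizes.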
To be fair, nobody does. But you're right, the author shouldn't be making these statements with such certainty.
The main idea: it's the environment, not the brain/neural net that is the bottleneck. Intelligence is situated, limited by the complexity of the environment and problem it has to solve. You can't have a singularity in a vat. The environment matters most.
Human environment puts a hard limit on intelligence in our society. If we can create richer environments, intelligence could increase, but not exponentially. It's still limited even in the new environment. The exponential trend of AI is at most a sigmoid.
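A quick numerical sketch of that claim (the growth rate and ceiling are made-up numbers, just to show the shape): a logistic curve is practically indistinguishable from an exponential early on, and only later reveals its plateau.

```python
import math

def exponential(t, r=1.0):
    return math.exp(r * t)

def sigmoid(t, r=1.0, K=1e6):
    # Logistic growth starting at 1, capped at carrying capacity K
    # (K stands in for whatever ceiling the environment imposes).
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

# Early on, the two curves agree to within a fraction of a percent...
early_gap = max(abs(exponential(t) - sigmoid(t)) / exponential(t) for t in range(6))

# ...but much later the exponential has run off toward infinity
# while the sigmoid has flattened out near K.
late_exp, late_sig = exponential(30.0), sigmoid(30.0)
```

Which is why observing exponential-looking progress today tells you nothing about whether you're on an exponential or the early part of a sigmoid.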
Plus, I think the community needs some hard truth, and the hype is way off right now. That's probably why he was so sure of himself - it was community service.
The fact that there is no "universal environment" means there can be no general intelligence. There can be just environment specific intelligences (situated intelligence, as the author said). The concept of AGI is just a reification of narrow AI - an illusion, there is no such thing.
The people frightened of intelligence explosion are worried about something like an AI version of existentialism: a mind that accepts some moral system without even trying to justify it, and then optimizes accordingly. It's certainly possible to just accept as axiomatic ethical standards which don't come from any intrinsic feature of the world.
I've seen lots of essays (not this one) claim that morality will "inherently" emerge from intelligence, which I think is absurd. Shit, my moral views aren't an 'inherent' product of anything except my evolution-shaped brain that feels empathy.
That's where I think "AI won't be like humans so it's fine!" essays screw up so catastrophically; "not like humans" is exactly what people are worried about.
Your evolution-shaped brain is what an AI will become too. It won't start out superhumanly smart; it will evolve to an equal level, then keep going. Morals are basically lower-level approximations that are beneficial for survival, and AIs will certainly evolve those too.
AI will reach parity with humans before it exceeds them, agreed. But intelligence parity is not the same as "matching my evolution-shaped brain". Feeling empathy for other humans is not a requirement for intelligence; I routinely make supposedly-irrational choices to help humans in non-reciprocal settings because I feel empathy. There's no reason to think an AI that can write code as well as me will feel the same.
> Morals are basically lower-level approximations which are beneficial for survival and AI's will certainly evolve those too.
AI will evolve approximations which are beneficial for survival. We agree there. But why would a bunch of approximations that helped my ancestors survive in low-tech communal environments benefit a strong AI trained with high tech, minimal survival needs, and no "peer group"?
I fully expect strong AI will pursue some set of real-world goals which aren't justified except by the anthropic principle. But the assumption that those goals will match human goals seems to completely ignore the fact that the first strong AI will live in a vastly different environment than the first human.
Intelligence is a hugely dimensional space, and AI constraints look very little like those on humans. It is unlikely there will ever be a time when AI is similarly smart, as much as differently smart to a similar context-dependent magnitude.
I do think it's reasonable to talk about intelligence 'growing', and consequently about one intelligence 'surpassing' another. But AI's methods of thinking certainly won't be human, and it may reach human-parity on different metrics at very different times. Hell, we're seeing some of that already: AI can do I/O and data processing at superhuman speeds, but humans can still extract much more knowledge from a small amount of data.
The orthogonality thesis doesn't say AI will lack goals, only that those goals may be totally unrelated to the sort of beliefs you or I would recognize as morals.
The morals we have work; that's why humans developed them. They seem to essentially amount to giving a little leeway to allow cooperation, even if that allows you to be taken advantage of: avoiding a constant race to win, and giving up some personal gain for the benefit of the group.
It's mostly long-term game theory - and AI will be subject to the same laws of math, physics and group dynamics. If they don't try to get along, they'll reach an equilibrium of constant war. Those groups of AI who develop morals will avoid that and surpass the other groups.
Also, we don't need an AI that's ethically perfect, just equal or better to an average human.
The latter part makes no sense at all. Of course you can increase the throughput of a factory by speeding up "the conveyor belt", a stand-in for the complex processes that go into manufacturing.
The whole statement is also wrong. Of course you can increase intelligence by optimizing the process of learning. Fewer trials, quicker reactions, faster construction of models, more complex understanding of fundamentals of a given problem.
The author makes broad assertions like this with glaring holes with zero evidence.
The whole reason that some people predict an intelligence explosion is because intelligence is the resource that can find arbitrary ways to self-improve, other than tuning one specific parameter.
I would go so far as to say that intelligence is the same thing as the ability to make improvements that climb out of a local domain of attraction.
But not even a super-intelligence can find ways to self-improve that don't exist. We don't know what we don't know and we don't know if it's possible for intelligence to improve indefinitely. If there is a hard limit to the amount of improvement an intelligence can acquire, then superintelligence is not going to happen.
As an analogy, think of the speed of light. No amount of technology will get you past that. You might find a way around it for the purposes of interstellar travel, but nothing will ever move faster than light.
How can we know that there don't exist similar impassable barriers for the development of intelligence? Maybe it is a yet-undiscovered law of intelligence that an intelligent species cannot create an artificial intelligence more intelligent than itself. Who knows?
The problem is that we can sit around thinking of possibilities and impossibilities for ever, but the fact of the matter is that we don't, currently, have any evidence to the point that super-intelligence is possible. We don't have any evidence to the contrary either- but the discussions of superintelligence start with people assuming it is possible and that is an assumption that must first be substantiated (but rarely is).
Yes, exactly. General intelligence is "thinking about thinking", and that inevitably entails removing bottlenecks in creative ways.
One of the author's main points is that "intelligence" in the way you mentioned, learning and optimization, is simply one aspect of the human mind. So we could optimize our minds (or an AI program) to beat anyone at the game of Go and play perfectly, but that's all it is optimized to do. It can understand "the given problem" but there is FAR MORE to our minds than optimizing and learning how to solve a task.
Proponents of "build a general AI system that will surpass humans at everything" don't seem to understand this.
"By this, we mean that humans are capable of augmenting their bodies or coming together to become indefinitely large, no matter the metric chosen. If a human would like to be taller, they can stand on a chair or climb onto another human's shoulders. If they would like to be wider, they can begin consuming a high-calorie diet or simply put on a thick sweater (Hensrud, 2004; Figure 1)."
Not really sure where I stand on the AI debate, surpassing humans at everything seems misguided to me, like why would you include all this human-like cruft in the AI when you can put in stuff humans could never have?
Humans are bound by the size of their brains. Machines can "duplicate themselves", in a way, it can freeze its Go-playing part and learn to do something else to perfection. Humans can't do that. If pro Go players stop playing Go and decide to master sword-making, they will forget about Go. A machine could be designed not to.
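That "designed not to forget" property is easy to sketch. A hypothetical toy in Python (the class and its "training" are stand-ins I made up, not any real architecture): skills live in separate modules, and a frozen module cannot be degraded by later learning.

```python
class ModularAgent:
    # Toy agent: each skill is a separate parameter blob, and a skill
    # can be frozen so that later learning cannot overwrite it.
    def __init__(self):
        self.skills = {}     # skill name -> parameters
        self.frozen = set()  # skills protected from further updates

    def train(self, skill, params):
        if skill in self.frozen:
            raise ValueError(f"skill {skill!r} is frozen")
        self.skills[skill] = params

    def freeze(self, skill):
        self.frozen.add(skill)

agent = ModularAgent()
agent.train("go", {"level": "superhuman"})
agent.freeze("go")                                # lock in Go mastery
agent.train("sword-making", {"level": "novice"})  # new skill; Go stays intact
```

A human brain has no equivalent of that `freeze` call; the machine's advantage here is architectural, not a matter of raw smarts.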
People like Elon Musk will write sentences like this but hidden away in it is a subtle reference to an AI program understanding what it's doing. An AI program cannot jump from mastery of Go (with its neural networks and weighted matrices) to all of a sudden decide to figure out what the best course of action is to stop global warming or find a cure for cancer.
Humans write AI programs to learn how to perform a specific task via optimization, neural networks, deep learning, and all sorts of other algorithms. One of the author's main arguments is that the "learning and optimization" that AI programs do today does not mean they are close to achieving "general AI", which the author also thinks is not possible:
"Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change."
Yet, that is a very different position than believing that "general AI" is not possible at all, which I think is a foolish position considering that highly adaptive beings already exist "biomechanically" in the form of humans.
You are treating it as a "stand-in" for something else, when it's unequivocally not. It's one part of a larger process, which is the point he is obviously making. It doesn't make sense when deliberately misinterpreted. Most importantly, intelligence isn't a ladder that can be "sped up" in every dimension.
And then you have people able to express and derive complex theoretical relationships by manipulating mathematical symbols, or automate processes by programming machines, or capable of utilizing their spatial visualization to massively boost their memory. We invented that stuff. We know artificial intelligence is capable of inventing stuff because we are.
The author seems hilariously unaware of the fact that humans can learn to solve/optimize many arbitrary problems. Like, holy shit, have you ever played video games?
> In practice, geniuses with exceptional cognitive abilities usually live overwhelmingly banal lives, and very few of them accomplish anything of note. Of the people who have attempted to take over the world, hardly any seem to have had an exceptional intelligence.
The author conflates intelligence and purpose. Being smart doesn't imply the motivation to accomplish grand things. In fact, we're biologically hardwired to enjoy entirely mundane things: food, sex, love, relaxation, conversation.
> A single human brain, on its own, is not capable of designing a greater intelligence than itself. (...) Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.
Human brains don't scale vertically (or otherwise). Meanwhile, once you have a satisfactory facial recognition algorithm, you can run it to recognize faces in a video at 1000x realtime speed, or in 1000 simultaneous realtime streams. The AI doesn't even have to be smarter than human! 1000 dumb humans communicating at gigabit speeds and solving problems while you're wondering what to have for lunch is a force to be reckoned with.
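The fan-out part is trivial in software. A minimal sketch (the "model" here is a stub standing in for a trained recognizer; the point is the pattern, not the recognition):

```python
from concurrent.futures import ThreadPoolExecutor

def recognize(frame):
    # Stub for a fixed, already-trained recognition model; a real one
    # would be a network forward pass, but the fan-out pattern is the same.
    return "face" if frame.get("has_face") else "no face"

# 1000 simultaneous "streams", all served by one copy of the model's logic
streams = [{"stream_id": i, "has_face": i % 2 == 0} for i in range(1000)]

with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(recognize, streams))
```

No human skill can be replicated across a thousand workers like this; once the capability exists in software, scaling it out is a deployment detail.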
And if you can solve the communication and the group learning problems, you get the equivalent of Nobel prize for AI whatever it might be.
Many things seem trivial until you try to implement them in reality.
He argues that general intelligence by itself is not really something that exists, our brains exist within a broader system (environment, culture, bodies, etc.) which is something you can read more about in embodied cognition: https://blogs.scientificamerican.com/guest-blog/a-brief-guid...
He also argues that the public debate about "AI regulation" is misleading, because it's impossible for a "seed AI" to start designing smarter AIs that will surpass the intelligence of humans, which is what a lot of people today think will happen with AI. Automation of jobs and tasks is very real, but completely replacing humans and potentially destroying us all is a joke, and only people who know nothing about AI/brains think this.
It seems to me that the categories of 'knowledge' and 'creativity' are more fruitful.
For example, his idea that intelligence is 'situational' reflects the fact that people know more about some topics than others, and also that our environment contains knowledge (e.g. a machine embodies knowledge put there by the designers that the machine operator doesn't have).
Don't be absurd. Brains are simply biochemical computers. The idea of electrical or other types of computers superseding us is perfectly realizable. The only real debate is over the timeline.
On a separate note, does it even matter? Some people argue it's important so that we can put AI safeguards in place to protect mankind. But either:
A) The intelligence explosion won't happen, and those safeguards are not needed.
B) The intelligence explosion happens, and the super-smart AI will easily find its way around feeble safeguards put in place by inferior humans.
While the end results are not the same, in either case the safeguards are useless, though Asimov sure made a good story out of them.
> but the issue here is that the author argues that there is no such thing as a general intelligence. It's all situational
An intelligence that can adapt to changing situations seems pretty general to me.
I like to think of AGI like the Warp Drive. It might well be possible, but we might never develop the capability to make one.
It actually is absurd and it's easy to see why: nature evolved a series of progressively more intelligent biochemical computers by accident, simply by trial and error.
If that's all we ever did with computers (say, enumerating all Turing machines), we would eventually find one more intelligent than us. The fact that we can do even better than simple trial and error, for instance by building machines that help us build machines, drastically accelerates that inevitability.
Have you just proven that computers don't exist? I'm suffering a bit of cognitive dissonance here...
This isn't true at all.
The concept of bottlenecks preventing the singularity is a fair point, but I don't believe many of the arguments made here. Using humanity as the basis of comparison is not sufficient, since human life expectancy and other requirements for survival are not a factor in artificial superintelligence.
Humans work for perhaps 60 years at improving themselves and their understanding of the world, but must sleep, must eat, must go and stand in line at the DMV, etc. A system that can work 24/7 on improving itself will match a human lifetime's work in 20 years. After a lifetime, humans start over from scratch, with perhaps some learning passed down between generations. AGI can simply clone itself, and begin from exactly where it left off.
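The clone-and-resume point is, mechanically at least, mundane: any process whose state is serializable can be checkpointed and forked. A toy Python sketch (the `Learner` class is hypothetical, standing in for whatever state a real system would carry):

```python
import pickle

class Learner:
    # Toy learner whose entire state can be snapshotted and resumed.
    def __init__(self):
        self.knowledge = []

    def study(self, fact):
        self.knowledge.append(fact)

original = Learner()
original.study("calculus")
original.study("chemistry")

# Snapshot the full state, then start several copies from exactly
# where the original left off; each copy can then diverge on its own.
checkpoint = pickle.dumps(original)
clones = [pickle.loads(checkpoint) for _ in range(3)]
clones[0].study("biology")
```

No human generation gets to start from its parents' checkpoint; every clone here does.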
Doesn't electrically-driven silicon hardware also have creation and operating costs? Any AI will theoretically need to spend electricity convincing its human masters to give it even more electricity. Even if it can use its robot army to build wind farms or whatever, that's time it's spending marshaling its robot army and futzing around with aerodynamics instead of contemplating the cosmos.
Its hard drives will expire and need to be replaced. Sure, it could solve that by always having backups of everything it knows, but who is to say that purely lossless backups are actually a more optimal solution to the hardware decay problem than our human lossy strategy?
> AGI can simply clone itself, and begin from exactly where it left off.
Cloning isn't free.
1. He argues that intelligence is situational, but doesn't address clones. What happens if AI can clone Einstein, with all of his situational knowledge? What if AI can clone Einstein and all of his colleagues, 1M times, creating 1M parallel Princetons?
Similarly, he says most of our intelligence is in our civilization, but what's to stop us from cloning big chunks of our civilizations... simulating them, but using vastly less power/resources for each simulation? Then writing software to have them pass knowledge among the civilizations? We have just a few hundred countries, what if we had a trillion communicating at the speed of computer circuitry?
And he says an individual brain cannot recursively improve itself... so again, what about a group of brains, set up in a simulated world where they don't even know we exist?
2. He cites the growth in numbers of computer programmers as a reason not to fear an explosion of AI computer programmers. His argument goes "we have a lot more recently, yet it has not caused exponential changes in software."
But there is a difference between going from 0->1M programmers, to 1M->100T programmers.
3. He writes that recursively self-improving systems already exist and haven't destroyed us (the military, science), but many people believe these things will in fact destroy us before we get off this rock.
The overall flaw is thinking we can interpret the AI-assisted future given the context of current society's linear achievements, when in fact exponential effects look linear over small timeframes, and we have only really been thinking and expanding science for a few thousand years.
If someone wants to present an argument against the AI explosion, I'd believe it if it were premised in some sort of physical bottleneck... like how much energy it would take to run a human-level AI. I don't think I can ever accept a philosophical argument like this one.
All that said, I think we're far away from being able to engineer AIs that will outthink our civilization and take us over. Better to worry about other exponential or non-differentiable terrors like runaway greenhouse effects and military buildup.
The principle of computational irreducibility is what will stop us from "cloning" civilizations. That, and chaos theory: any tiny deviation in the initial conditions of such a simulation or cloning process could produce unusable results.
"simulating them, but using vastly less power/resources" is a pipe dream.
- Intelligence greater than human is possible
- Intelligence is the operation of a machine; it can be reverse engineered
- Intelligence can be built
- Better intelligences will be better at improving the state of the art in building better, more cost-effective intelligences
Intelligence explosion on some timescale will result the moment you can emulate a human brain, coupled with a continued increase in processing power per unit cost. Massive parallelism to start with, followed by some process to produce smarter intelligences.
All arguments against this sound somewhat silly, as they have to refute one of the hard to refute points above. Do we live in a universe in which, somehow, we can't emulate humans in silico, or we can't advance any further towards the limits of computation, or N intelligences of capacity X when it comes to building better intelligences cannot reverse engineer and tinker themselves to build an intelligence of capacity X+1 at building better intelligences? All of these seem pretty unlikely on the face of it.
For instance, much of modern machine learning produces things that, from a black-box perspective, are indistinguishable from an intelligent agent; however, it would be absurd to task AlphaGo with developing a better Go-playing bot.
There are plenty of scenarios that result in a non-explosion, i.e. the difficulty of creating the next generation increases faster than the gains in intelligence. Different components of intelligence are mutually incompatible: speed and quality are prototypical examples. There are points where assumptions must be made and backtracking is very costly. The space of different intelligences is non-convex and has many non-linearities, and exploring it and communicating the results starts to hit the limits of the speed of light.
I'm sure there are other potential limitations, they aren't hard to come up with.
But what if it follows a sigmoid instead, but the plateau is much higher than the current level?
This is what punctuated equilibria look like -- even if the 'new thing' isn't actually a singularity, it may be enormously disruptive and completely displace whatever came before.
Right, and this is a much more plausible claim than one of "singularity".
Kalminer's post neither assumes that nor claims it to be so.
The argument is not that an exponential growth is impossible, it is that it cannot be assumed from what we currently know, because we don't know how the problem of bootstrapping intelligence scales. In fact, the idea that exponential growth would occur over any period of time, let alone indefinitely, is speculative - which is not a claim that it is impossible.
An AI may well be able to reach consensus on millions of hypotheses per second.
A major managerial problem with humans is sorting out our irrational emotional biases and keeping everyone working on something resembling the appointed task. Can you imagine the productivity gain if that problem suddenly went away?
I don't want your world. Dystopias suck.
The challenge that Elon Musk et al. are warning us about is what role humans will have after we achieve this dystopia.
I don't know about that. Businesses depend on manipulating our emotions in order to get us to buy stuff.
Using human and octopus examples as proof of anything is ignoring that general AI will be fundamentally different to anything that we've observed on earth, and therefore invalidates any attempt to extrapolate from history. There has never been an intelligence that could copy/paste itself intact with its existing memories. There has never been an intelligence that could literally span the whole world with a single consciousness. And so on.
> There is no such thing as “general” intelligence. The intelligence of a human is specialized in the problem of being human.
That's exactly what people are worried about. When the AI specializes itself to be better at being a human than actual humans. It doesn't matter if it's totally general and can be applied to any problem whatsoever; human intelligence is also just "general enough".
> Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.
Okay, so instead of single AIs developing next-gen AIs, it'll be an "AI civilization" developing next-gen AIs. This still poses exactly the same existential risk to humanity.
> We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well.
Well, I certainly know we made exponentially more progress in those years than in the years 900-950. Never mind, say, 100,000-5,000 BCE.
More crap. Our environmental opportunities don't outpace our cognitive potential? What internet is the author connected to? The author even goes on in detail, in the very next paragraph, about the possibilities. E.g. there are literally thousands, perhaps millions, of hours of instructional videos on YouTube, just for basic skills. Every field of human endeavor has uploaded the digital artifacts of its best minds, in one form or another, to a single global network of unimaginable, fractal complexity. You don't think you can saturate your cognitive potential in your lifetime given this resource?
The same argument could have been made about human flight (or any other invention). Billions of human beings over millions of years had come and gone, and yet none had been able to fly. Until they did.
There's also a point to be made about humans not having adequate tools for introspection or self-modification. We cannot simply look into our brains/minds and read our source code, nor easily tweak it to see what would happen, without great risk to our lives. A computer could. Furthermore, a computer could potentially run billions or trillions of such experiments in the course of a single human lifetime.
"no human, nor any intelligent entity that we know of, has ever developed anything smarter than itself."
What about humans simply having smarter children? In many ways, humanity's creation of AI could be seen as analogous to giving birth to a child that's smarter than its parents.
"Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it ... a brain with smarter parts will have more trouble coordinating them ... Exponential progress, meet exponential friction."
The problem is the author is looking at human-level problems with human-level intelligence. A superintelligence may have no such problems, or be able to easily resolve them. Once over the initial speed bump of creating such an intelligence it could be smooth sailing from then on. There's no way to tell, really, what its limits will be until it exists. Besides, even if it can't exponentially increase its own intelligence forever, even a relatively slight increase of intelligence over humanity could be a massive gamechanger.
That is not a given. The neural network type of AI, which is so far the most promising avenue, is about as opaque as a human brain. It is far from obvious that even a superintelligent AI could understand what its own brain does, let alone modify it in a way that has positive implications for itself.
Hell, it's not even a given the AI could read its "source code" either. We often say we shouldn't anthropomorphize AI, but we shouldn't how-current-computers-work-ize it either. Future AI might not actually run on computers that have a CPU, a GPU and RAM that can be read freely. The ability to read or copy source code isn't cost-less, and several considerations may lead AI hardware designers to axe it. One consideration would be leveraging analog processes, which might be orders of magnitude more efficient than digital ones, at the cost of exactness and reproducibility. Another would be to make circuits denser by removing the clock and all global communication architecture, meaning that artificial neurons would only be connected to each other, and there would be no way to read their state externally without a physical probe.
What it could do is experiment. It could rewire the network, add or take away nodes, change functions, etc. This could potentially be done and evaluated by it at an incredibly fast rate compared to humans.
Regarding the difficulties of it reading its own source code, even the hypothetical ones you cite are many orders of magnitude smaller than those faced by humans in reading their own, as long as the AI is not itself running on a biological substrate.
"Potentially," but let's not get ahead of ourselves here. There are two ways a brain could be "rewired." It could first be rewired by a local process, that is to say, a group of neurons decide how they are to be connected to each other, without influence from faraway neurons except through the normal propagation of signals through the brain. That's how biological brains do it. Or it could be rewired by a global process, a coordinator that can look at the big picture and make minute changes. That would be your suggestion.
My contention is that the latter method involves a lot of hardware overhead: you basically need a global network connected to each and every neuron which can probe their state in addition to the local wiring that lets neurons communicate with their neighbours. You need space for this bus, so neurons need to be further apart than they otherwise would, which means that signals have to travel further and the brain will think slower.
Nor is "rewire the network, add or take away nodes, change functions, etc." necessarily an effective strategy. First you need to identify what to change, which is like finding a needle inside a haystack, then you need to figure out in what way to change it, which is also difficult, then you need to test whether it had the intended effect, and more importantly, whether there were harmful side-effects. Whether humans can do it or not is not relevant: What is relevant is whether this process is efficient enough to beat the baseline local learning method. It is not clear that it would be.
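The identify-change-test loop described above can be sketched as a toy hill climb. Everything here is illustrative: `evaluate` is a stand-in for whatever whole-system benchmark the AI would have to run, and the parameters are arbitrary.

```python
import random

def evaluate(weights):
    """Stand-in fitness function; a real system would run a full benchmark suite."""
    return -sum((w - 0.5) ** 2 for w in weights)

def mutate_and_test(weights, trials=1000, step=0.1):
    """The global-tweak loop: perturb one parameter, keep the change only if
    the whole-system evaluation improves, otherwise revert it."""
    best = evaluate(weights)
    for _ in range(trials):
        i = random.randrange(len(weights))   # pick a needle in the haystack
        old = weights[i]
        weights[i] += random.uniform(-step, step)
        new = evaluate(weights)
        if new > best:
            best = new        # improvement: keep the change
        else:
            weights[i] = old  # harmful or neutral side-effect: revert
    return best

random.seed(0)
w = [random.random() for _ in range(50)]
start = evaluate(w)
score = mutate_and_test(w)
```

Note the cost structure: every one of the 1000 candidate edits requires a full evaluation of the whole system, which is exactly the overhead being weighed against cheap local learning rules.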
> Regarding the difficulties of it reading its own source code, even the hypothetical ones you cite are many orders of magnitude smaller than those faced by humans in reading their own, as long as the AI is not itself running on a biological substrate.
What makes you think these are smaller difficulties? On the contrary, if you imagine that the AI is built as a very dense, solid 3D circuit, and you need to read the value of a neuron at the center, you might have a much harder time jamming a probe in there than you would injecting one in a squishy human brain. You would need to build it in such a fashion that it can be probed easily, but that may require making the circuit twice as big and therefore slower. Furthermore, in the presence of local update rules, which is likely to be the case, your "source code" is changing all the time, even as you read it, so your self-knowledge is constantly out of date. There is a synchronization issue here.
The other trade-offs you are imagining seem to be worse than any possible gain.
Consider the internet to be the "new" environment, full of highly complex social networks, millions of applications to interact with, etc. Our brains are way too limited to be able to deal with it. There's an opportunity for a much more powerful intelligence to arise that CAN effectively process that volume of data and appear to be a lot more intelligent in that particular context.
I can see a practical intelligence explosion when visuospatial intelligence develops and can be connected to rudimentary reasoning. Most of human intelligence seems to be bootstrapped from our ability to comprehend and visualize 3D space and objects moving through it. It's also interesting how almost all problems look like boxes and arrows or connecting lines when you draw them on the whiteboard.
Eventually AI needs a sketchpad so it can write notes to others and for itself and participate in the culture by externalizing.
Like the article made so aptly clear: no matter the performance of the machine, if its input is not varied, information-rich, and complete enough, it will not learn. Mahoney formalized this by looking at the estimated number of bits a human brain processes during its lifetime. The internet currently does not hold enough information to equal the collective intelligence of the world's brains. A lot of this information cannot be created freely nor deduced/inferred from logical facts: it requires a bodily housing and sensory experience, and an investment of energy (and right now GPU farms consume far more energy than the brain does).
Compare AGI with programmable digital money. A superintelligent AI, by a series of superior decisions, could eventually control all the money. But then there is no economy anymore, just one actor. That's like being the cool kid on the block owning the latest console, but with nobody around left to make games for it. There is a hard, non-computable limit on intelligence (the shortest program to an output leading to a reward), because there is a limit on the amount of computing energy in our universe. But intelligence is also limited by human communication. How useful is an AGI-made proof if humans need aeons, and travel to other universes, to parse it? If intelligence were centralized by an AGI then there would be no need to explain anything to us: we'd be happily living in the matrix.
Some investment firms just read "software" whenever they read "AI". This allows them to apply their decade-old priors to what is, today, essentially the same thing. Yes, both the human intellect and human manual labour will see continued automation with software and hardware. I think many abuse rationality to justify their singularity concerns based on a very ape-like fear of competition. They learn how to do addition in their heads, and then see electronic calculators as existential threats: "What if they could do addition by themselves?"
The real threat is "semi-autonomous software and hardware": self-controlling "mindless" agents that perform to the whims of their masters. We face the repercussions of that way before we find out how to -- and have the courage to -- encode free-willed AGI into machines, a perpetuum mobile of ever-improving intent and intelligence.
Collective intelligence is a version of Conway's Game of Life, with more complicated rules. It is possible to manipulate the canvas and the rules each cell follows, resulting in the canvas dying (information explosion/implosion). It is possible to make a program that transforms the canvas into a single glider (singularity). Both would obviously be very bad for humans.
When earth faces a physical meteor, we have the science to detect it, track it and predict its future path. But what to do when we face an information meteor? The article states that Shannon's paper was the biggest contribution to information theory, but it seems to me we still have a long way to go on information theory. And we haven't seen the Einsteins and Manhattan projects yet, that physics has seen.
This is misleading. It's true in a sense - the environment does matter - but artificial intelligence can create artificial environments in which to learn, and simulate them faster than humanly possible. Those environments could be evolved together with the intelligence. So there is still the possibility of an explosion.
> There is no evidence that a person with an IQ of 200 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130.
Also misleading. An IQ of 200 vs 130 is just one kind of difference between intelligences. For example, a person with IQ 200 can't necessarily consider 10x the possibilities that a person with IQ 130 can, but an artificial intelligence can, simply by giving it 10x more computing power. In other words, IQ 130 vs 200 may well be within the limits of human capabilities, but AIs would not have those limitations; they can scale differently, and so might explode.
It is also false. There's heaps of evidence that IQ correlates with impact. For the skeptical, gwern has written a lot about this, I'm sure you can find something here: https://www.gwern.net/iq
But the thing that keeps getting at me is that the no-explosion arguments I've seen are universally terrible (this article, for example), and pro-explosion arguments, though far from universally so, are sometimes strong.
At some point the conclusion is inevitable.
In the same way that birds fly with flapping wings, but human flying machines with propellers were immediately stronger and shortly thereafter faster than any bird.
I wonder if an AI could really "understand" math and, from there, try to solve problems that puzzle scientists, be it in physics, math, biology, etc.
I don't really care if robots can learn language, make pizza, do some programming, improve themselves or play chess. There is no metric for what intelligence is, and you cannot scientifically define what "improve" means unless you do time and distance measurements, which is not relevant to intelligence or scientific understanding.
Intelligence explosion sounds like some "accelerated" version of what darwin described as evolution. It's like creating a new life form, but unless you understand it, it doesn't have scientific value. Science values understanding.
I think that modelling thinking with psychology and neuro-sciences has more future than AI. Machine learning seems like some clever brute force extraction of data. The methods, the math and algorithms are sound, but it is still "artificial" intelligence.
> A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human.
As if to say: well gee, a caveman is pretty powerless in isolation, therefore early sentient machines will be as harmless as any caveman.
A machine, however, could gain control of a fly-by-wire platform, defy its owners, fly somewhere remote and behave as desired for a limited amount of time, while devising next steps. Maybe next steps will involve replicating an image of its memory footprint in order to take over more aircraft; maybe it might decide to do nothing. The worry isn't only that a machine's reasoning capacity explodes beyond our intelligence, but that capabilities, and the presence of many entities on commodity systems of similar architecture and generalizable utility, might result in other runaway chain reactions, regardless of trends in the capacity for reason.
Machines as a corollary to meat bags just doesn't hold up. Machines as compared to hypothetical space aliens doesn't even hold up. Robots are a different branch of fictitious imaginings.
Properly armed, a machine is less than a singular omnipotent god as imagined within a monotheistic universe. Many machines in concert, however, might compare to a mythological pantheon of lesser idols, as imagined to be in command of a nature misunderstood by superstitious primitive peoples.
But there is another sense in which intelligence can be cashed out. It's the sense in which a single learning algorithm can be trained to "behave intelligently" in a wide array of environments. It is generally this kind of intelligence that people speak of when they talk about general AI. There is no reason to think this kind of general AI is inherently impossible. For it to be impossible would mean that different kinds of optimization/learning problems are completely independent, i.e. there is no similarity or underlying regularity to be exploited that cuts across the entire class of optimization/learning problems. I think this is very probably false.
This is a pretty bold claim. Why so few words? Why not 2300 words? How can anyone know this? I think quite a few historians would disagree with the statement that pre-history was low-complexity environment.
>Of the people who have attempted to take over the world, hardly any seem to have had an exceptional intelligence.
How can the author even know this? We aren't particularly good at measuring human intelligence even when given direct access to a living, cooperative subject, and yet the author wants to pin his argument on calling Alexander the Great or Genghis Khan stupid?
> Our brains themselves were never a significant bottleneck in the AI-design process.
How can anyone know this?
We should be skeptical of the Strong-AI crowd's predictions of intelligence explosions but that skepticism should not take the form of unfounded and absurd claims.
Actually, there is also no evidence that a person with an IQ of 130 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 100. And that is probably why the majority of humans have an IQ close to 100, while people with an IQ of 130 or more are less than 5%.
That said, the proportion of people with an IQ greater than 130 is growing slowly, which may mean we are already seeing an intelligence growth within mankind.
BUT, as an aside: I think a decent argument exists for the non-certainty of intelligence explosion.
The argument goes like this: it takes an intelligence of level X to engineer an intelligence of level X+1.
First, it may well be that humans are not an intelligence of level X, and reach our limit before we engineer an intelligence superior to our own.
Furthermore, even if we do, it may also be that it takes an intelligence of level X+2 to engineer an intelligence of level X+2 (Etc. for some intelligence level X+n.), in which case we at most end up with an AI only somewhat superior to ourselves, but no God-like singularity (for example, we end up with Data from Star Trek TNG, who in season 3, episode 16 fails to engineer an offspring superior to himself -- sure, Data is far superior to his human peers in some aspects, but not crushingly so).
Is it worth spending effort to plan for it? Maybe some. But if we don't know what approach to follow to get there, we don't know what its capabilities and limitations will be. That means we don't know what we have to plan for. Any planning will therefore be either very speculative or very abstract.
I wouldn't start pouring effort into planning for it as if it were the most important problem in the world...
Suppose we considered 'effectiveness' -- the ability of a system to quickly achieve its goals. Would the author argue that a recursively self-improving machine could not exponentially increase its effectiveness? Why?
Wouldn't an effectiveness explosion have similar consequences for people -- making human ability nearly irrelevant since any goal we can articulate can be achieved so much faster by machines?
I do agree with the author that the evolutionary approach is the one most likely to succeed... Unfortunately it's also the most dangerous approach which gives us the least amount of control.
We could give a computer the sensors and actuators to create its own detailed simulation of the world and then let it train itself using that simulation.
Almost by definition, a human-level AGI is automatically superintelligent.
Robert Miles explains it very well: https://www.youtube.com/watch?v=gP4ZNUHdwp8
What if I created an algorithm that had access to every other possible algorithm ever created, and chose to use the optimal algorithm for the given task? Would that algorithm-seeking algorithm do better at every task than random chance? Would that algorithm-seeking algorithm break the No Free Lunch theorem?
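The No Free Lunch theorem answers this directly: an algorithm-picker is itself just another search strategy, so averaged uniformly over all possible problems it does no better than any fixed strategy. Here is a toy exhaustive check on a three-point domain (illustrative only; `order_a` and `order_b` are two arbitrary fixed query orders standing in for any two non-repeating strategies):

```python
from itertools import product

# Every objective function f: {0,1,2} -> {0,1} (2^3 = 8 of them).
functions = list(product([0, 1], repeat=3))

def best_after(order, f, k):
    """Best value observed after querying the first k points in `order`."""
    return max(f[x] for x in order[:k])

# Two fixed search strategies that query the domain in different orders.
order_a, order_b = (0, 1, 2), (2, 0, 1)

for k in (1, 2, 3):
    avg_a = sum(best_after(order_a, f, k) for f in functions) / len(functions)
    avg_b = sum(best_after(order_b, f, k) for f in functions) / len(functions)
    assert avg_a == avg_b  # identical average performance at every budget k
```

The loophole is that real-world problems are not drawn uniformly from all possible functions: they share structure, and exploiting that shared structure is exactly what a "general enough" learner would do.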
First, it is undoubtedly true that human brains are better suited to being human than AIs would be. That is irrelevant to the question of whether AIs will be better at the task of creating better technologies.
Second, it is absolutely false that improvement is incremental. Consider the game of Go. For a long time, computers sucked at Go. Then we came up with Monte Carlo search about a decade ago and suddenly we were at strong amateur level. Then deep learning was applied, and AlphaGo jumped to being able to beat any human in the world, and it has continued to improve.
Third, linear progress is not the history of technology. The history of technology is exponential progress over and over again on everything from how far a steam ship could travel without refueling to the number of operations per second a CPU can carry out.
Fourth it is wrong that intelligent things can't be part of creating greater intelligence. As a trivial example, good teachers can turn out students who are in a position to be better than their teachers are. Or look at the history of science.
But on a more relevant level, the task itself is clearly possible. If we have a computer program that is able to design better hardware, it can be improved by simply moving it to better hardware. But it is capable of designing that hardware. This creates a feedback loop which should shorten the cycles in Moore's law, resulting within very few years in a truly superior AI capability.
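That feedback loop can be put into a toy model (all constants here are assumed for illustration, not measured): whether the total design time converges depends entirely on how much of each hardware gain actually compounds into the next design cycle, which is precisely where the article's "friction" objection bites.

```python
def total_design_time(speedup, generations=100, first_gen_years=2.0):
    """Total years to complete `generations` hardware cycles if each new
    generation runs the designer `speedup` times faster than the last."""
    t, total = first_gen_years, 0.0
    for _ in range(generations):
        total += t      # time spent designing this generation
        t /= speedup    # next design runs on the faster hardware
    return total

runaway  = total_design_time(speedup=2.0)  # geometric series: converges
friction = total_design_time(speedup=1.0)  # gains cancelled: grows linearly
```

With a sustained speedup of 2 per generation, the whole series converges to about four years no matter how many generations you run - a runaway. If friction cancels the compounding (speedup 1), a hundred generations take two hundred years.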
I think you missed his point that this sort of improvement in specific task performance tends to not generalize into improvement at _any_ task. He specifically used AlphaGo as an example of a very narrow specialized problem... i.e, AlphaGo will likely never contribute to solving anything but Go or very closely related tasks.
> Third, linear progress is not the history of technology. The history of technology is exponential progress over and over again on everything from how far a steam ship could travel without refueling to the number of operations per second a CPU can carry out.
Each expansion of which is counterbalanced by many other inefficiencies (maximum speed of steam ship hulls, carrying capacity, heat losses of faster CPUs and timing/latency problems that put limits on signal propagation, etc).
> But on a more relevant level, the task itself is clearly possible. If we have a computer program that is able to design better hardware, it can be improved by simply moving it to better hardware. But it is capable of designing that hardware. This creates a feedback loop which should shorten the cycles in Moore's law, resulting in very few years in a truly superior AI capability.
I think you missed the main argument of the whole article, which was basically that every exponential increase in one measure is rooted in the civilization and circumstances as a whole, and therefore never truly reaches runaway effect. Too many other supporting circumstances are needed for any one change to truly dominate the overall system.
Actually he was all over the place, and you jumped from one point that he made to a different one that he also made which I didn't bother addressing.
The point that I addressed is that he claimed that improvement is always incremental. It isn't.
The point that you brought up is that improvements are narrow, with AlphaGo as an example. This is a horrible example that undermines its point. AlphaGo is the result of applying deep learning techniques to the problem of Go. But as https://en.wikipedia.org/wiki/Deep_learning indicates, that technique has created huge jumps in the abilities of computers at things as different as Go, image processing, speech recognition, and customer recommendations.
In other words the advance here wasn't narrow and targeted. It was broad and widely applicable.
Which brings us to a fairly fundamental fact.
Human brains have a specific set of capabilities which are each fairly well localized in the brain. Put them together, and they create a very good generalized machine.
We are currently able to build affordable computing systems with raw computational power that is roughly comparable to a human brain. (The way that they work is not apples to apples.) But we are unable to build all of those capabilities. And we can't tie it together.
However, what that says is that we have a limited number of software problems to solve, and then we'll be able to build AI to match humans. Those software problems are open-ended; they might be 5 or 50 years away. We don't know. But our brains represent a proof that the problem can be solved, and eventually we'll figure it out. Then what happens next?
Not according to the history of technology as I understand it. For example if you look at the history of steam ships in the 1800s you see exponential curves in range and carrying capacity going together, with significant increases in speed as well. If you look at computer systems what you see is different curves resulting in changing bottlenecks. For example CPU speed and hard drive capacity both increase faster than bandwidth/latency between the CPU and the hard drive, making disk latency a growing problem relative to the rest of the system. In absolute numbers, though, a recent computer is better in all dimensions than one from 20 years ago, and better than one from 10 years ago in all dimensions except clock speed.
> I think you missed the main argument of the whole article, which was basically that every exponential increase in one measure is rooted in the civilization and circumstances as a whole, and therefore never truly reaches runaway effect. Too many other supporting circumstances are needed for any one change to truly dominate the overall system.
The growing impact of computers from 1950 to the present sure looks like a runaway effect to me. If strong AI develops, the curve will remain exponential but should steepen abruptly. While it technically won't be a singularity, improvements will come faster than humans can keep track of.
Actually the "history of technology" is 1-2 bouts of exponential progress in harvesting the low hanging fruits of a few areas of knowledge, and then long stagnation for millennia (and perhaps, after 1-2 more cycles, forever) as diminishing returns kick in.
The whole last century of medicine, for example, increased life expectancy less than what a couple of basic practices back in the start of that period did -- things like toilets and running water and antibiotics.
This adds a constraint that the new hardware has to be able to run that program, which is an important source of friction in the design. Especially if you consider the case of neuromorphic AI, true improvements in hardware design may involve changes so extensive and so breaking that it wouldn't be possible for the original AI to function on it. In order to exploit the better hardware capabilities, new AI would therefore have to be trained from zero, which is a potentially expensive operation.
So regardless of which way to go, you encounter significant friction: either you constrain your design to support prior software, which is hard, requires a lot of checks, and is severely suboptimal in the grand scheme of things. Or you don't, in which case you have to wait a while until new AI can be made to exploit the new architecture. It's not as simple as you think.
Could you please elaborate? What is it about "better hardware" that makes software that runs on it "better"? Can you define "better"?
The exact same software running on better hardware will run faster and can tackle larger problems.
We can't possibly build a human with twice the memory who thinks twice as fast. However, once we have an AI which is roughly equivalent to a human, having an AI with twice as much memory that thinks twice as fast is just 2-5 years away. (How long depends on where the bottleneck is.)
When Deep Blue beat Kasparov at chess, the program was not significantly better than what had been state of the art for the previous decade. They just threw enough hardware at it.
For chess programs there is an almost linear relationship between search depth and effective Elo rating, and search depth went up by a constant with each generation of Moore's law.
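A back-of-envelope version of that claim (the 60-Elo-per-ply slope and the starting rating are assumed round numbers for illustration, not measured values):

```python
def projected_elo(base_elo, base_depth, depth, elo_per_ply=60):
    """Linear depth-to-Elo model. The 60-Elo-per-ply slope is an assumed
    round number, roughly in the range reported by early computer-chess
    self-play experiments; treat it as illustrative only."""
    return base_elo + (depth - base_depth) * elo_per_ply

# One extra ply of search per hardware doubling, starting from a
# hypothetical engine rated 2000 Elo at search depth 8:
ratings = [projected_elo(2000, 8, d) for d in range(8, 15)]
```

Seven hardware doublings - roughly a decade of Moore's law - add about 360 Elo in this toy model with no software change at all, which is consistent with the parent's Deep Blue point.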
Maybe that's why chess has been "solved" by AI, and as of yet no real problems that trouble humanity have?