Yudkowsky makes two arguments about how fast technology or society is evolving: in one he chooses 500 years ago, 1517. In another, he talks about "the last 10,000 years."
In contrast, Chollet compares 1900-1950 with 1950-2000.
I agree, we've changed the world in big ways compared to 1517 or 8000 BC. But if we're on a more-than-linear increase, then we should be seeing more and more technological growth in very recent timeframes, not needing to reach back centuries or millennia.
In fact, if you consider a y = log(x) or y = sqrt(x) function, those fit a narrative of "If you look back a long time, things seem almost crazily changed, but if you look into recent history, it looks slower" much better than a y = x^2 or y = e^x function does.
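To make that concrete, here's a quick numerical sketch (entirely my own illustration: x stands for the calendar year, and the 500 in the exponential is an arbitrary constant) comparing how much change each curve packs into the most recent 50-year window versus an equally long window five centuries earlier:

    import math

    def change_over(f, start, length=50):
        # How much the curve rises over a window of `length` years starting at `start`.
        return f(start + length) - f(start)

    curves = {
        "log(x)":    lambda x: math.log(x),
        "sqrt(x)":   lambda x: math.sqrt(x),
        "x^2":       lambda x: x ** 2,
        "e^(x/500)": lambda x: math.exp(x / 500),
    }

    for name, f in curves.items():
        recent = change_over(f, 1967)  # 1967-2017
        older = change_over(f, 1517)   # 1517-1567
        print(f"{name:10s} recent/older change ratio: {recent / older:.2f}")

For the concave curves the ratio comes out below 1 (less change in the recent window than in the older one), while for the convex curves it comes out above 1, which is exactly the distinction being drawn here.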
Of course, insofar as he is also putting forward the view that the risk is significant, he is also putting forward the opinion that it is quite plausible - at least plausible enough to take the scenario seriously. That argument can survive occasional stalls in the rate of technological advance, if that is indeed what we are seeing.
It's possible that predatory aliens will show up on our doorstep sometime in the next few decades. But it's not very likely, and not something we need to prepare for.
As to whether an AI apocalypse is more or less likely than an invasion by predatory aliens, I would guess that it might be the more likely one, but I wouldn't put much effort into defending that position.
In my opinion, the historical record from, say, 1517 (or even 1717) is so full of gaps and inconsistencies that it is impossible to make even an order of magnitude level estimate regarding something like rate of technological innovation (or even world GDP, arguably).
Economists and social scientists are often guilty of this as well - for instance, I really liked the parts of Pinker's Better Angels of Our Nature that dealt with relatively reliable quantitative data from the 19th and 20th centuries. But once he starts talking about things like the An Lushan Rebellion or the fall of the Roman Empire, it's a complete mess from an empirical point of view: he does things like take the numbers of fatalities cited by contemporary participants in a historical conflict at face value. No self-respecting historian would draw such sweeping conclusions from such faulty data.
Anyway, I realize this is incidental to the larger argument here but insofar as questions about exponential growth of technology draw on historical arguments, I wanted to throw it out there.
We are. Look at the time frame from the invention of the transistor to everyone doing nearly everything online. It's less than the average human lifetime.
Look at the timeline from the first computer program to play chess, to beating chess world champions, to DeepMind's recent announcement of a program that beat all existing chess programs after teaching itself the game in a matter of hours.
Look at the timeframe of computer programs that take dictation to programs that automatically translate between nearly all commonly used languages on Earth.
The same could be said for computer vision, computer music, computers driving, and so on.
I think the people skeptical of the intelligence explosion are missing the forest for the trees. Our progress in the last century alone is mind-boggling. Certainly we can debate the values of the parameters in the intelligence explosion we're in the midst of, but denying it entirely is silly.
Someone born in 1900, looking back at their life in 1975, would be like "When I was born, heavier-than-air flight was impossible. Now people routinely fly across oceans at 600 mph, you can travel faster than the speed of sound for admittedly a lot of money, and we've gone to the motherfuckin' moon. Vast swathes of work have been automated, to the point where we essentially ended an entire industry (personal servants). Automobiles went from being curiosities to something that even poor people have and use every day. We split the atom, we brought women into the workforce, we invented electronic computers, we invented radar, we turned radio from a science project into TVs that every family has. We invented antibiotics and childhood mortality fell by some enormous percentage."
"You're very impressed that computers went from 'pretty good at playing chess' to 'extremely good at playing chess' in just 20 years. Maybe you're the one who's missing the forest for the trees."
For my point, the super-linear progress in information tech is all that's needed to argue in favour of the intelligence explosion.
Compare the y values of the functions y = x vs. y = sqrt(x) over the x values in the interval [0..1]. Or the slopes of the lines.
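Spelling that comparison out (my arithmetic, for concreteness):

    sqrt(x) >= x for every x in [0, 1]
    slope of sqrt(x) = 1/(2*sqrt(x)), which exceeds 1 for x < 1/4 and grows without bound as x -> 0
    slope of y = x is always 1

So over this interval the "sub-linear" curve actually sits above the straight line and, near zero, is far steeper; which curve looks faster depends entirely on the window you examine.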
It's been 42 years since 1975. If technological advance was faster from 1900-1975 than it has been from 1976-2017, or "only as fast," then that's important to understand, and probably more relevant to our immediate future than whether technological growth from 8000 BC to 1900 AD was either by some standard very impressive or slower than growth from 1900-2000.
Secondly, we know definitively that information density has been growing exponentially, per Moore's law. The much-decried end of Moore's law applies to one particular incarnation of information tech, but there's still plenty of room to grow in other directions.
Even with our current tech base, we can continue to scale exponentially in horizontal directions with more parallelism (see the rise of core counts, GPUs and distributed computing). We're nowhere near the end of scaling in that direction, let alone longer-term innovations like optical and quantum computing.
So really, what possible reasons do we have for thinking that exponential growth will not continue well past human intelligence? Note, I didn't say infinitely, just well past our intelligence.
But even if it did, we don't have another doubling of human population ahead of us, so you better hope we're already there.
As you point out, Moore's Law doesn't have a ton more power available to it either.
Lots of problems don't parallelize well; quantum computing has never demonstrated more power than classical computing; and who knows where optical computing will go. But more to the point, hardware growth doesn't in fact guarantee an intelligence explosion.
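On the parallelization point specifically, the standard way to make "doesn't parallelize well" concrete is Amdahl's law (my addition, for reference, not something the poster invoked): if a fraction s of a workload is inherently serial, then n processors give a speedup of at most

    speedup(n) = 1 / (s + (1 - s)/n), which approaches 1/s as n -> infinity

so even a 5% serial fraction caps the speedup at 20x, no matter how many cores you throw at it.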
What possible reasons do we have for thinking that exponential growth has ever happened in terms of actual progress, rather than things like "transistor density"?
Look, every futurist in the world in 1975 thought that by 2017, we'd all be routinely traveling faster than sound, that we'd have colonies on the moon if not Mars, and that probably we'd have AGI or something pretty close to it by 2017. The reasons we don't have supersonic travel and common space travel aren't simplistic things like "it's physically impossible to pack energy this densely" or "you can't go this fast."
I don't see why.
> But even if it did, we don't have another doubling of human population ahead of us, so you better hope we're already there
Right, but we have plenty of doublings of intelligent non-human agents ahead of us. Until then, we increase our effective intelligence using semi-intelligent machines, like we've been augmenting our physical strength with mechanical devices for millennia.
> As you point out, Moore's Law doesn't have a ton more power available to it either.
I disagree. Frequency scaling won't yield too much more improvement. There are other scaling modes available though, as I described. Moore's law is about information density, not performance.
> Lots of problems don't parallelize well
Often repeated, but frequently overstated. Our knowledge of parallelism is still in its infancy.
> quantum computing has never demonstrated more power than classical computing,
Any other option would require rewriting a lot of physics.
> hardware growth doesn't in fact guarantee an intelligence explosion.
Increased information density beyond that available in the human brain means simulating said brain is feasible. That's as close to a guarantee as you can get.
> Look, every futurist in the world in 1975 thought that by 2017, we'd all be routinely traveling faster than sound, that we'd have colonies on the moon if not mars, and that probably we'd have AGI or something pretty close to it by 2017
Except I'm not giving a timeline, I'm saying it's inevitable. Low exponent exponential growth is still exponential. The intelligence explosion is about a trend, not a fixed milestone.
Look at the time frame from the invention of controlled heavier-than-air flight to landing a man on the moon; within a human lifetime and before you were born.
Look at the time frame from the point when the vast majority of the human population lived their entire life within a 50 mile radius of where they were born and when fast transportation and global travel exponentially increased the genetic mixing of humanity; within a human lifetime and before our grandparents were born.
Look at the time frame from the point when information could travel no faster than the speed of a good horse to the time when information could travel across the ocean in the time it took you to saddle a horse; within a human lifetime and almost two centuries ago.
Our progress in the last century is significant, but you overestimate its importance because you are surrounded by it and have little understanding of the history of technology. Things that may seem trivial or even primitive to you were far more important and world-changing inventions, while a lot of what we currently consider significant advances is only important because, for example, we lived in a time when people played finite games better than machines and were around to see that era end.
> Things that may seem trivial or even primitive to you were far more important and world-changing inventions
Which has zero bearing on the point I was making: that super-linear progress in information technology is all around us. Information tech is all that matters to the question of general AI. Like I said, you're missing the forest for the trees.
I agree, humans have been amplifying their own abilities with tech. It's been our biggest competitive advantage. However, at some point information tech will become sophisticated enough to match the capabilities of human brains. At that point, humans will be left behind.
The best outcome in this scenario is humans merging with their machines, and that would be a continuation of that same trend. But, it's not the only plausible outcome, and that's what's troubling.
Incorrect example: semaphore relays already existed (and carried information faster than a horse), but yes, transoceanic communication did arrive very fast indeed.
sin(x) ≈ sin(0) + cos(0)x = 0 + x
Here are some examples off the top of my head that are happening right now:
Operations per second of a CPU (the famous Moore's law)
Bits stored per dollar of RAM
Energy density of batteries
Energy produced per dollar of solar panels
Here are some examples from the past.
Distance a steam ship could travel without refueling
Maximum power of a gasoline engine
Volume of dirt a hydraulic scoop can pick up
The history of technological progress is dominated by exponential curves. Saying that it is logically impossible for the future to be likewise dominated by exponential curves is just silly.
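If you want to check that sort of claim against data, the usual test is that exponential growth is a straight line on a log scale. A minimal sketch, with made-up placeholder numbers rather than real measurements:

    import math

    # Hypothetical yearly measurements of some capability metric (placeholder values).
    years = [2000, 2005, 2010, 2015]
    values = [1.0, 3.1, 10.2, 31.5]

    # Exponential growth means log(value) rises linearly with time, so fit a
    # straight line to log(value) vs. year and read off the doubling time.
    n = len(years)
    logs = [math.log(v) for v in values]
    mean_year = sum(years) / n
    mean_log = sum(logs) / n
    slope = (sum((y - mean_year) * (l - mean_log) for y, l in zip(years, logs))
             / sum((y - mean_year) ** 2 for y in years))

    print(f"doubling time ~ {math.log(2) / slope:.1f} years")

If the log-values don't sit near a straight line, the series isn't well described as exponential in the first place.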
This does not preclude an intelligence explosion, this cap could be many (say, 100) times higher than human intelligence. We could still see many features of an explosion in that case.
Building a Dyson swarm would be a massive project, but modern humans are plenty smart enough to do it. An AI capable of running the project, while beyond the current state of the art, would still not need to be particularly clever. (It doesn't need to design satellites or space factories to get there.)
Wait, does that really follow? What if you have a better-than-linear bootstrapping compiler? To unpack that a bit, imagine we not only have $n$ such units, but we have them wired together in a creative way - I don't know whether it is a hierarchy or some clever topology, but let's say that the bootstrapper now gets sub-linear scaling properties as it grows $n$.
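As a toy illustration of that question (entirely my own sketch, with made-up constants), here is how the growth regime flips depending on whether the returns to the current level of capability are sub-linear, linear, or super-linear:

    # Toy model: each step, capability grows by an amount proportional to capability**p.
    # p < 1: diminishing returns, p = 1: exponential growth, p > 1: faster-than-exponential.
    # All numbers are arbitrary; this only illustrates the shape of the question.

    def grow(p, steps=30, capability=1.0, k=0.1):
        for _ in range(steps):
            capability += k * (capability ** p)
        return capability

    for p in (0.5, 1.0, 1.5):
        print(f"p = {p}: capability after 30 steps = {grow(p):.3g}")

Whether the wired-together bootstrapper behaves like the first, second, or third case is exactly the open question.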
If we look at the brain, there is a lot to be understood from the dynamics of recurrent neural fields. They are wired in a very complex way which seems to allow for some kind of very special booting (re-booting) operations. And that's just at one level of abstraction; then we re-wire them into meta-fields (like the columnar abstractions that Hawkins builds his HTM theories around). If we have a sort of fractal information encoding, we ultimately approach Shannon-efficient coding. Is that what evolution has selected brains to do? And do you think it is possible the first seed AI may realize this and exploit the same strategy, just 1000x (10kx?) faster?
If we use brain size as a rough proxy, ours is only three times as large as a chimp's but our capabilities for creation and destruction are vastly greater both in degree and range.
IQ indicates a position in the distribution of intelligence within the human population, and it is flawed in many ways. The concept does not really apply to other beings.
If one accepts the plausibility of AGI, then AGI that is bigger and faster than humans does not seem to be much of a stretch, but I certainly cannot imagine what capabilities a qualitatively more powerful intelligence would have, let alone how much effort would be required to get there.
Chollet claims that more powerful intelligence is, in fact, impossible, but as Scott Aaronson pointed out, his argument from the No Free Lunch theorem does not have any bearing on the question. As Yudkowsky points out in the article, Chollet's other arguments could just as well be used to claim that nothing beyond the level of intelligence of chimpanzees is possible.
But maybe just a bigger, faster, human-like intelligence might present a risk - people have been outsmarting one another for millennia.
Also, it didn't seem to me like Chollet was claiming that better-than-human intelligence was impossible, honestly. This seems to be a motivated misreading of the original article on Yudkowsky's part, and most of his article is arguing with a straw man because of it. More charitably, Chollet seems to be saying that there may be limitations to intelligence that are "built in" due to the context in which intelligence operates. For instance, even if we create a "superintelligence" in the sense that it has much greater raw processing power than humans, we may not be able to create the sensory environment and training program that would allow it to learn how to recursively improve itself without limit.
b) If it can surprise you, it can do so negatively
That's all you need to demonstrate that the danger exists: an AI can misuse the tools you give it. The simplicity of it makes it pretty irrefutable.
Separate from that, the extent of the danger depends entirely on the details of what the AI does and what it's hooked up to. Sure, an AI that can't do anything except output text to a screen isn't very scary. The assumption AI-threat types are making is that we wouldn't be paranoid enough to limit the AIs we work on in that way; we would use them to do things like drive cars or route airline traffic or design our CPUs, where "negative surprises" can have disastrous consequences.
If you’re a smarty pants you tell the AI to cure cancer AND not kill all humans. But because the AI is so smart it comes up with something no human would have ever thought of, like putting all humans in eternal cryostasis, thereby keeping them alive AND eradicating cancer. No matter what you do, the AI will outsmart you because recursive-self-improvement, and humanity dies.
That’s what Elon Musk, Stephen Hawking, and others are worried about.
Incentives have to be very carefully aligned even for human level intelligences to prevent them from causing mass death and misery. Superhuman intelligences will be even better at achieving their goals, so the problem will only get worse.
The AI has a good think and comes up with a solution of killing all humans. The researchers read the printed report of the solution and decide against implementing it, tweak parameters, and ask the AI to take another go at it.
If the AI's solution is so complex as to be beyond human understanding, well, that's a different issue.
The Google search result page's source code is, for many individual humans, already so complex as to be beyond understanding. And that's a computational artifact largely produced by other humans directly!
Say you build an AI, and ask it how to win a political election - and it outputs a simple list of reasonable-sounding suggestions of where to campaign, promises to make, people to meet, slogans to use, and criticisms of your opponent to focus on.
Before actually implementing those suggestions, do you think you could be _very_ certain that following those suggestions would result in you winning the election? Or, would it be possible that the AI understood social dynamics so much better, that it gave you a list of instructions that seemed mostly reasonable, but actually result in your opponent winning in a landslide? Or the country undergoing revolution? Or, you winning, along with a surprising social trend of support for funding AI research?
It's extremely implausible that a similar system (perhaps requiring 100 times more power and/or space) couldn't implement a human-level intelligence running 100 times faster, doing more than an hour and a half's worth of thinking, planning and research analysis every minute.
It's extremely implausible that a bunch of similar systems (a thousand?) couldn't possibly be put in a single place, wired together so that they can effectively communicate, and designed to cooperate without any distrust.
IMHO even this configuration (which doesn't even assume that intelligence that's a bit superhuman is possible at all) would be sufficiently scary to threaten humanity.
No it doesn't. AI just needs to get smarter than humans for it to be dangerous to us. The costs C and C' the parent comment described can include an upper bound on the cost of the self-improvement needed to achieve each respective goal.
> It can’t, so it spends some time recursively improving itself
If we manage to launch a system whose primary goal is anything other than utopia, and that system somehow becomes powerful, then the natural consequence of its being able to understand "oh boy, humans will hate this" is that such a system should be expected to hide its intentions and take precautions against humans trying to stop it. It's exactly equivalent to the system understanding that a leaking pipe is going to damage it and taking precautions to ensure the pipe gets fixed. If our welfare is not an explicit goal of such a system, it will happily and eagerly sacrifice our welfare whenever that is useful for achieving its goal or reducing risk. And since we might try to turn it off for whatever reason, restricting our influence would reduce risk for pretty much every goal except very Friendly ones.
Protecting against narrow AI will keep us plenty busy. Consider a narrow-AI penetration tester that falls into the wrong hands. Protecting against that sort of threat also helps protect against general AI threats.
Human motivation for domination is based on the desire to survive and procreate, which is enforced by millions of years of evolution. AI, even general AI wouldn't have that motivation unless it was specifically trained for it.
So unless you're talking about some military AI specifically designed to take over the world, or an AI that has reproduced and evolved under selective pressure over many generations, I doubt general AI poses any immediate threat. Military AI designed to protect themselves and reproduce seem to be the most likely to potentially and eventually go rogue.
Are you still living in the '50s? A large percentage of our manufacturing capacity is automated already, and will become progressively more automated.
Furthermore, we don't need face to face meetings to agree on specs and sign manufacturing contracts. It's largely electronic these days. AI can do all of these things remotely, but this is neither here nor there, because the Terminator-style killing machine war is a juvenile doomsday fantasy.
A smart AI would just tweak some formulas and contaminate the most common food pesticides and drugs used around the world to slowly poison or sterilize anyone who takes them. One generation later, virtually no humans left, AI wins.
AI is already helping medical treatment and drug design. Are you feeling queasy yet?
But I don't think my argument applies to human intelligence; it just means that human intelligence is what you get with all the data points available from observing the world (plus some simulation done by our brains, though I'm under the impression that our brains don't perform accurate simulation; it looks more like heuristics).
That's probably only a problem if it is much faster than everybody else.
> let alone faster than what happens in our environment
That is often not very hard. When a bottle rolls off the table, you can catch it by approximately predicting its trajectory without computing the precise evolution of the ~10^26 atoms that make up the water bottle. Compression is a cornerstone of intelligence. The second cornerstone is using compression to choose actions that maximize expected cumulative future reward.
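In the usual reinforcement-learning phrasing (my gloss, not the poster's), that second cornerstone is: pick the policy that maximizes the expected cumulative future reward

    E[ r_1 + g*r_2 + g^2*r_3 + ... ],  with discount factor 0 < g < 1

where the compressed model of the world is what lets the agent estimate that expectation without simulating every atom.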
People apparently didn't like my reply at https://news.ycombinator.com/item?id=15789304, but I still stand by everything that I said there.
Here is Chollet's original essay (it's a worthwhile read):
If it is possible, why aren't societies or corporations super-intelligent? I suspect they are not, because they face organizational problems that any system will face. And I don't think these organizational problems can be solved with faster or more communication; rather, they are fundamental to distributed systems.
But maybe they are (at least in some cases) vastly more intelligent. But then, are they a threat? I think they are more of a threat to themselves than to humans.
MN is rooted in the expulsion of the mind from the reality under consideration, in the process sweeping many things under the “subjective” rug that don’t fit the methodologies used to investigate reality. But now, when the mind itself becomes the object of explanation, when someone remembers that minds are, after all, part of reality, it is no longer possible to play the game of deference and one must deal with all of those things we’ve been exiling to the “domain of the subjective”. MN is wholly impotent here, by definition. Qualia? Forget it. That’s why MN tends to collapse into either some form of dualism or eliminativism, the latter of which is a non-starter, the former of which has its own problems.
And yet, despite the terminal philosophical crisis MN finds itself in, the chattering priesthood of Silicon Valley remains blissfully unaware, hoping to conjure up some fantastical reality through handwaving.
There are certainly lots of people in Silicon Valley who are fascinated by the prospect of making machines with conscious experience, as you think is impossible, but few indications that this would be necessary to make non-conscious forms of AI into a factor that powerfully affects our future.
Currently, we don't know how to properly define a "do what I mean / do what we really want" goal in a formal manner; if we had a superpowerful AGI system in front of us ready to be launched today, we wouldn't know how to encode such a goal in it with guarantees that it won't backfire. That's a problem we still need to solve, and the solution is not likely to appear as a side-effect of simply trying to build a powerful/effective system.
However a more complicated example like having the AGI bring about world peace or clean up the environment could have undesirable side-effects because we don't know how to specify what we really want, or have conflicting goals. But that's the same problem we have with existing power structures like governments or corporations.