A Reply to François Chollet on Intelligence Explosion (intelligence.org)
115 points by lyavin 10 days ago | 85 comments





Something to note about the "we're on an exponential (or at least more-than-linear) increase in technology" arguments: They generally need to reach back quite a long time to sound convincing.

Yudkowsky makes two arguments about how fast technology or society is evolving: in one he chooses 500 years ago, 1517. In another, he talks about "the last 10,000 years."

In contrast, Chollet compares 1900-1950 with 1950-2000.

I agree, we've changed the world in big ways compared to 1517 or 8000 BC. But if we're on a more-than-linear increase, then we should be seeing more and more technological growth in very recent timeframes, not needing to reach back centuries or millennia.

In fact, if you consider a y = log(x) or y = sqrt(x) function, those more closely fit a narrative of "If you look back a long time, things seem almost crazily changed, but if you look into recent history, it looks slower" much better than a y = x^2 or y = e^x function.
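
For what it's worth, a minimal sketch of that difference (curve shapes only, no data; the 500-year windows and the exp time-scale are arbitrary choices): a super-linear curve shows more growth in each successive window, while log/sqrt show less.

    import math

    # Toy curves only, not data: t is "years since 8000 BC", split into equal
    # 500-year windows. Super-linear curves grow more in each later window;
    # log/sqrt grow less.
    curves = {
        "log(t)":      math.log,
        "sqrt(t)":     math.sqrt,
        "t^2":         lambda t: t ** 2,
        "exp(t/1000)": lambda t: math.exp(t / 1000),  # scaled to keep numbers small
    }
    windows = [(8000, 8500), (8500, 9000), (9000, 9500), (9500, 10000)]

    for name, f in curves.items():
        growth = [f(b) - f(a) for a, b in windows]
        trend = "accelerating" if growth[-1] > growth[0] else "slowing"
        print(name, ["%.3g" % g for g in growth], "->", trend)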


This should be seen in the context of the article Yudkowsky is replying to. Chollet's position is that runaway AGI is impossible, and in support of that, he claims that it is impossible for AI technology to grow super-linearly. In his refutation, Yudkowsky has merely to show that it is not beyond the bounds of possibility.

Of course, insofar as he is also putting forward the view that the risk is significant, he is also putting forward the opinion that it is quite plausible - at least plausible enough to take the scenario seriously. That argument can survive occasional stalls in the rate of technological advance, if that is indeed what we are seeing.


So the real question is about plausibility, not possibility?

It's possible that predatory aliens will show up on our doorstep sometime in the next few decades. But it's not very likely, and not something we need to prepare for.


By claiming the impossibility of an AI apocalypse, Chollet attempts to avoid discussing any other possibility / plausibility / likelihood than zero, but claiming impossibility brings with it a burden of proof that his arguments cannot carry.

As to whether an AI apocalypse is more or less likely than an invasion by predatory aliens, I would guess that it might be the more likely one, but I wouldn't put much effort into defending that position.


> But if we're on a more-than-linear increase, then we should be seeing more and more technological growth in very recent timeframes, not needing to reach back centuries or millennia.

We are. Look at the time frame from the invention of the transistor to everyone doing nearly everything online. It's less than the average human lifetime.

Look at the timeline from the first computer program to play chess, to beating chess world champions, to DeepMind's recent announcement of a system that beat all existing chess programs after teaching itself the game within the span of a few hours.

Look at the timeframe of computer programs that take dictation to programs that automatically translate between nearly all commonly used languages on Earth.

The same could be said for computer vision, computer music, computers driving, and so on.

I think the people skeptical of the intelligence explosion are missing the forest for the trees. Our progress in the last century alone is mind boggling. Certainly we can debate the values of the parameters in the intelligence explosion we're in the midst of, but denying it entirely is silly.


Or it just seems like that to you because these are the changes that you've viscerally experienced instead of just reading about.

Someone born in 1900, looking back at their life in 1975, would be like "When I was born, heavier-than-air flight was impossible. Now people routinely fly across oceans at 600 mph, you can travel faster than the speed of sound for admittedly a lot of money, and we've gone to the motherfuckin' moon. Vast swathes of work have been automated, to the point where we essentially ended an entire industry (personal servants). Automobiles went from being curiosities to something that even poor people have and use every day. We split the atom, we brought women into the workforce, we invented electronic computers, we invented radar, we turned radio from a science project to TVs that every family has. We invented antibiotics and childhood mortality fell by some enormous percentage."

"You're very impressed that computers went from 'pretty good at playing chess' to 'extremely good at playing chess' in just 20 years. Maybe you're the one who's missing the forest for the trees."


Like the other poster's examples, these are also examples of super-linear progress, so I'm not sure what you think this proves.

For my point, the super-linear progress in information tech is all that's needed to argue in favour of the intelligence explosion.


You seem to think that a short period of fast technological growth in some areas supports the claim that technology as a whole is improving more-than-linearly.

Compare the y values of the functions y = x vs. y = sqrt(x) over the x values in the interval [0..1]. Or the slopes of the lines.

It's been 42 years since 1975. If technological advance was faster from 1900-1975 than it has been from 1976-2017, or "only as fast," then that's important to understand, and probably more relevant to our immediate future than whether technological growth from 8000 BC to 1900 AD was either by some standard very impressive or slower than growth from 1900-2000.
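
A quick numeric check of that comparison (just the two curves on [0, 1]): the globally slower sqrt actually sits above y = x there and has a far steeper slope near 0, which is why a short burst of fast growth doesn't establish super-linearity.

    # y = x vs y = sqrt(x) on [0, 1]; the slope of sqrt(x) is 1/(2*sqrt(x)).
    for x in (0.01, 0.1, 0.25, 0.5, 1.0):
        print(f"x={x:5.2f}  y=x: {x:5.2f}  y=sqrt(x): {x ** 0.5:5.3f}  slope of sqrt: {1 / (2 * x ** 0.5):5.2f}")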


Firstly, growth rates of non-information tech largely aren't relevant to AI. That said, overall growth in knowledge is at least directly proportional to population growth. Even if we pessimistically consider humans dumb, exponential population growth means an exponential discovery rate just from sheer trial and error. So overall progress is undeniably exponential, even if it's a low exponent.

Secondly, we know definitively that information density has been growing exponentially given Moore's law. The much decried end of Moore's law is for a particular incarnation of information tech, but there's still plenty of room to grow in other directions.

Even with our current tech base, we can continue to scale exponentially in horizontal directions with more parallelism (see the rise of core counts, GPU and distributed computing). We're nowhere near the end of that scaling in that direction, let alone longer term innovations like optical and quantum computing.

So really, what possible reasons do we really have for thinking that exponential growth will not continue well past human intelligence? Note, I didn't say infinitely, just well past our intelligence.


Overall growth in knowledge isn't at least directly proportional to population growth, if the proportion of knowledge already known and shared grows with population growth.

But even if it did, we don't have another doubling of human population ahead of us, so you better hope we're already there.

As you point out, Moore's Law doesn't have a ton more power available to it either.

Lots of problems don't parallelize well, quantum computing has never demonstrated more power than classical computing, and who knows where optical computing will go, but more to the point, hardware growth doesn't in fact guarantee an intelligence explosion.

What possible reasons do we have for thinking that exponential growth has ever happened in terms of actual progress, rather than things like "transistor density"?

Look, every futurist in the world in 1975 thought that by 2017, we'd all be routinely traveling faster than sound, that we'd have colonies on the moon if not mars, and that probably we'd have AGI or something pretty close to it by 2017. The reasons we don't have supersonic travel and common space travel aren't simplistic things like "it's physically impossible to pack energy this densely" or "you can't go this fast."


> Overall growth in knowledge isn't at least directly proportional to population growth, if the proportion of knowledge already known and shared grows with population growth.

I don't see why.

> But even if it did, we don't have another doubling of human population ahead of us, so you better hope we're already there

Right, but we have plenty of doublings of intelligent non-human agents ahead of us. Until then, we increase our effective intelligence using semi-intelligent machines, like we've been augmenting our physical strength with mechanical devices for millennia.

> As you point out, Moore's Law doesn't have a ton more power available to it either.

I disagree. Frequency scaling won't yield too much more improvement. There are other scaling modes available though, as I described. Moore's law is about information density, not performance.

> Lots of problems don't parallelize well

Often repeated, but frequently overstated. Our knowledge of parallelism is still in its infancy.

> quantum computing has never demonstrated more power than classical computing,

Any other option would require rewriting a lot of physics.

> hardware growth doesn't in fact guarantee an intelligence explosion.

Increased information density beyond that available in the human brain means simulating said brain is feasible. That's as close to a guarantee as you can get.

> Look, every futurist in the world in 1975 thought that by 2017, we'd all be routinely traveling faster than sound, that we'd have colonies on the moon if not mars, and that probably we'd have AGI or something pretty close to it by 2017

Except I'm not giving a timeline, I'm saying it's inevitable. Low exponent exponential growth is still exponential. The intelligence explosion is about a trend, not a fixed milestone.


> Look at the time frame from the invention of the transistor to everyone doing nearly everything online. It's less than the average human lifetime.

Look at the time frame from the invention of controlled heavier-than-air flight to landing a man on the moon; within a human lifetime and before you were born.

Look at the time frame from the point when the vast majority of the human population lived their entire life within a 50 mile radius of where they were born and when fast transportation and global travel exponentially increased the genetic mixing of humanity; within a human lifetime and before our grandparents were born.

Look at the time frame from the point when information could travel no faster than the speed of a good horse to the time when information could travel across the ocean in the time it took you to saddle a horse; within a human lifetime and almost two centuries ago.

Our progress in the last century is significant, but you over-estimate its importance because you are surrounded by it and have little understanding of the history of technology. Things that may seem trivial or even primitive to you were far more important and world-changing inventions, while a lot of what we currently consider significant advances are only important because, for example, we lived in a time when people played finite games better than machines and were around to see that era end.


Those are all examples of super-linear progress, so I'm not sure what point you think you've made.

> Things that may seem trivial or even primitive to you were far more important and world-changing inventions

Which has zero bearing on the point I was making, which is that super-linear progress in information technology is all around us. Information tech is all that matters to the question of general AI. Like I said, you're missing the forest for the trees.


That's only true if the information technology is leading up to AGI and not, say, augmenting human intelligence instead. One could argue that the intelligence explosion has been happening for centuries, but it's human intelligence, not machine intelligence, that is being amplified.

> One could argue that the intelligence explosion has been happening for centuries, but it's human intelligence, not machine intelligence, that is being amplified.

I agree, humans have been amplifying their own abilities with tech. It's been our biggest competitive advantage. However, at some point information tech will become sophisticated enough to match the capabilities of human brains. At that point, humans will be left behind.

The best outcome in this scenario is humans merging with their machines, and that would be a continuation of that same trend. But, it's not the only plausible outcome, and that's what's troubling.


> Look at the time frame from the point when information could travel no faster than the speed of a good horse

Incorrect example: semaphore relays already existed, but yes, transoceanic communications happened very fast indeed.


Thank you for pointing this out. As a historian, I find that the appeal to deep historical arguments in these situations drives me nuts. We find ultra-rationalist, data-driven people like Yudkowsky relying on a hugely flawed and imperfect historical record to make confident pronouncements.

In my opinion, the historical record from, say, 1517 (or even 1717) is so full of gaps and inconsistencies that it is impossible to make even an order of magnitude level estimate regarding something like rate of technological innovation (or even world GDP, arguably).

Economists and social scientists are often guilty of this as well - for instance, I really liked the parts of Pinker's Better Angels of Our Nature that dealt with relatively reliable quantitative data from the 20th and 19th centuries. But once he starts talking about things like the An Lushan rebellion or the fall of the Roman Empire, it's a complete mess from an empirical point of view, doing things like taking the numbers of fatalities cited by contemporary participants in an historical conflict at face value. No self-respecting historian would make such big assumptions from such faulty data.

Anyway, I realize this is incidental to the larger argument here but insofar as questions about exponential growth of technology draw on historical arguments, I wanted to throw it out there.


Just for funsies (but may answer your question): every (differentiable) function is approximately linear at a small enough scale. This includes e^x and pretty much every function you listed.

Which is what differential geometry is based on - you take a surface (for example - the one-dimensional functions you mentioned work too, but those are too trivial). Then you associate to each point a plane (re-centered at the origin). The collection of all vectors that fit into the plane is a vector space called a tangent space, and the collection of all tangent spaces is a tangent bundle. And now you've set up differential geometry and can study it.

Only on infinitely small subintervals of x.

This approximation has vanishing error from a linear function (as the interval decreases) and (most importantly!) we only have finite noisy samples; so it’s essentially indistinguishable (in a statistical way) for a small enough interval given some variance.

sin(x) ≈ x for very small x; this is a useful approximation in several proofs.

For all differentiable functions, f(x) ≈ f(0) + f'(0) * x for x very close to 0.

The approximation is particularly good for sin(x) because the next term in the Taylor series, f''(0) * x^2 / 2, happens to be 0. So the error is O(x^3) rather than the more common O(x^2).

This is a specific instance of what LolWolf just said in the grandparent.

yeah I didn't know that part, just the more specific one.

It's the first term of the Maclaurin series

sin(x) ≈ sin(0) + cos(0)x = 0 + x
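
A quick numeric check of the error term discussed above: the gap between sin(x) and x shrinks like x^3, and the ratio settles near 1/6, the coefficient of the x^3 term in the Taylor series.

    import math

    # sin(x) vs its first-order approximation x: error ~ x^3 / 6 for small x.
    for x in (0.1, 0.01, 0.001):
        err = abs(math.sin(x) - x)
        print(f"x={x:6}  sin(x)={math.sin(x):.9f}  |error|={err:.3e}  |error|/x^3={err / x ** 3:.4f}")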


In The Innovator's Dilemma, over a dozen technologies were documented that were on exponential improvement curves for a period of decades or more. They generally last until the technology either hits a wall in physical possibility, or the thing being delivered is no longer the primary measure of the technology.

Here are some examples off of the top of my head that are happening right now:

- Operations per second of a CPU (the famous Moore's law)
- Bits stored per dollar of RAM
- Energy density of batteries
- Energy produced per dollar of solar panels

Here are some examples from the past.

- Distance a steam ship could travel without refueling
- Maximum power of a gasoline engine
- Volume of dirt a hydraulic scoop can pick up

The history of technological progress is dominated by exponential curves. Saying that it is logically impossible for the future to be likewise dominated by exponential curves is just silly.


I can't remember where, but I saw a fairly damning argument against the intelligence explosion hypothesis on the grounds that it only works if the algorithm used to design a mind with n units of intelligence (whatever these are) scales linearly with units of intelligence. If it scales faster than linear, then your recursive bootstrapping operation takes longer and longer each time, so that eventually your next bootstrapping step will take longer than the amount of time left in the universe, meaning there is some finite intelligence cap for any such bootstrapped mind. It seems quite implausible to me that the problem of designing a mind would scale linearly, given that ostensibly much simpler problems, like sorting a list of strings, require polynomial time or log-linear time algorithms.
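
If it helps, here's a toy version of that scaling argument (my framing, not a claim about real AI): each generation designs a successor twice as "intelligent", designing a mind of intelligence n costs n^p units of work, and the designer works at a speed equal to its own intelligence. For p <= 1 the doublings keep coming; for p > 1 each doubling takes geometrically longer, so even a million-fold larger time budget buys only a couple dozen extra doublings.

    # Toy model only: intelligence doubles each generation; designing the next
    # mind costs (target intelligence)**p work, done at a speed equal to the
    # current intelligence, so each step takes target**p / current time units.
    def doublings_within(budget, p, max_steps=1000):
        intelligence, elapsed = 1.0, 0.0
        for step in range(max_steps):
            target = 2.0 * intelligence
            step_time = target ** p / intelligence
            if elapsed + step_time > budget:
                return step                # ran out of time
            elapsed += step_time
            intelligence = target
        return max_steps                   # still going when we stopped counting

    for p in (0.8, 1.0, 1.5, 2.0):
        counts = [doublings_within(b, p) for b in (1e6, 1e12)]
        print(f"p={p}: doublings within budget 1e6 / 1e12 = {counts[0]} / {counts[1]}")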

> meaning there is some finite intelligence cap for any such bootstrapped mind

This does not preclude an intelligence explosion, this cap could be many (say, 100) times higher than human intelligence. We could still see many features of an explosion in that case.


True, but I think that the results of such a "weak intelligence explosion" (where the linear/sublinear scaling case would be a "strong intelligence explosion"), while still remarkable, would fall far short of some of the expectations placed on general AI by the singularity / superintelligence crowd. For the sake of argument, extrapolating from our current energy consumption using a (simplistic) linear model and some rough back of the envelope calculations, the intelligence required to harness the total energy output of the sun would be 40 trillion times the aggregate intelligence of the entire human species today. Several hundred times just isn't going to cut it. Now, if our ability to harness energy increases exponentially with intelligence, then maybe it could work, but that's just an assumption, and, given that there are hard physical limits on the efficiency of energy generation due to thermodynamics, seems very unlikely.
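
For reference, the flavor of back-of-the-envelope arithmetic being described, using my own round numbers (solar luminosity ~3.8e26 W, current human primary energy use ~18 TW) and the same simplistic "energy harnessed scales linearly with aggregate intelligence" assumption:

    # Round input figures are mine; the linear-scaling assumption is the parent's.
    solar_output_w = 3.8e26        # total solar luminosity, watts
    human_energy_use_w = 1.8e13    # current human primary energy use, ~18 TW

    multiplier = solar_output_w / human_energy_use_w
    print(f"required multiple of today's aggregate human intelligence: {multiplier:.1e}")
    # ~2e13, i.e. tens of trillions -- the same order of magnitude as the figure above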

That's a very strange extrapolation - we don't need to be any smarter to extract more energy. We're building more renewable energy capability every year, even though humans aren't getting any smarter.

Building a Dyson swarm would be a massive project, but modern humans are plenty smart enough to do it. An AI capable of running the project, while beyond the current state of the art, would still not need to be particularly clever. (It doesn't need to design satellites or space factories to get there.)


I don't understand the connection to energy consumption. Humans have been able to extract increasing amounts of energy over time without correspondingly large increases in human intelligence. I don't see a strong reason to doubt that this will continue, so I don't see a strong reason to doubt that a >= human-level-intelligence AI could do it either.

The raw computing power of the human wetware has not increased appreciably over historical timescales, but the total intelligence of humanity includes, for example, the increase in effective intelligence gained by storing knowledge in external devices like books. The gestalt organism that is "humanity" is much smarter than it was 500 or even 100 years ago, which correlates with our ability to extract resources. It's an extremely simplistic model, but since I was only after a very rough guess, I went with it.

> If it scales faster than linear, then your recursive bootstrapping operation takes longer and longer each time,

Wait, does that really follow? What if you have a better-than-linear bootstrapping compiler? To unpack that a bit, imagine we not only have $n$ such units, but we have them wired together in a creative way-- I don't know whether it is a hierarchy, or some clever topology, but let's say that the bootstrapper now gets sub-linear scaling properties as it grows $n$.

If we look at the brain, there is a lot to be understood from the dynamics of recurrent neural fields. They are wired in a very complex way which seems to allow for some kind of very special booting (re-booting) operations. And that's just at one level of abstraction; then we re-wire them into meta-fields (like the columnar abstractions that Hawkins builds his HTM theories around). If we have a sort of fractal information encoding, we ultimately approach Shannon-efficient coding. Is that what evolution has selected brains to do? And do you think it is possible the first seed AI may realize this and exploit the same strategy, just 1000x (10kx?) faster?


We should be mainly concerned with the unit of capability rather than of intelligence (for which there is no widely accepted standard measurement for non-human beings [1]). As we know, there are tasks that a less intelligent being can never accomplish no matter how much time and other resources it has.

If we use brain size as a rough proxy, ours is only three times as large as a chimp's but our capabilities for creation and destruction are vastly greater both in degree and range.

[1] IQ indicates the location in the distribution of intelligence within human population and it is flawed in many ways. The concept does not really apply to other beings.


An analogous argument would hold in the case of "units of capability". Why do you think the problem of producing capable minds is any easier than the problem of producing intelligent minds? I'm not sure what the point of your chimp analogy is, unless you think I think that brain size is the unit of intelligence. Anyway, I'm also skeptical of the idea that there are such things as general "units of intelligence", but the entire notion of a superintelligence is predicated on the idea that intelligence is a general quality and can be meaningfully quantified. It's not clear to me how we'd even define superintelligence without reference to some quantitative measure of intelligence, although if you'd like to try, I'd be interested.

I think you are basically correct to say that the scalability of AGI is a speculative assumption. The argument is that if it were feasible, it might pose an existential threat, so we should consider it.

If one accepts the plausibility of AGI, then AGI that is bigger and faster than humans does not seem to be much of a stretch, but I certainly cannot imagine what capabilities a qualitatively more powerful intelligence would have, let alone how much effort would be required to get there.

Chollet claims that more powerful intelligence is, in fact, impossible, but as Scott Aaronson pointed out [1], his argument from the No Free Lunch theorem does not have any bearing on the question. As Yudkowsky points out in the article, Chollet's other arguments could just as well be used to claim that nothing beyond the level of intelligence of chimpanzees is possible.

But maybe just a bigger, faster, human-like intelligence might present a risk - people have been outsmarting one another for millennia.

[1] https://www.scottaaronson.com/blog/?p=3553


Oh, don't get me wrong, AGI is still terrifying, even if it is limited to a few (or a few hundred) multiples of human intelligence by algorithmic or physical constraints. The discovery of AGI will precipitate the biggest social upheaval we've ever seen. I'm just very skeptical of the quasi-religious side of the superintelligence community, who think that we'll create godminds that can bend reality to their will or something.

Also, it didn't seem to me like Chollet was claiming that better than human intelligence was impossible, honestly. This seems to be a motivated misreading of the original article on Yudkowsky's part, and most of his article is arguing with a straw man because of it. More charitably, Chollet seems to be saying that there may be limitations to intelligence that are "built in" due to the context in which intelligence operates. For instance, even if we create a "superintelligence" in the sense that it has much greater raw processing power than humans, we may not be able to create the sensory environment and training program that would allow it to learn how to recursively improve itself without limit.


You are right about Chollet not ruling out growth altogether, but he does say - as a section heading, no less - that "our environment puts a hard limit on our individual intelligence." After a diversion into a non-sequitur about individual humans being incapable of bootstrapping their own intelligence, he launches into an extended argument that intelligence can only grow with the culture it is embedded in, and so intelligence can only grow linearly at best (linearly in what? culture?). The whole argument is a waste of time, because AI apocalypse fears are not predicated on exponential growth (and certainly not growth without limit), but only that it outstrips that of humans (and maybe not even that.) Chollet never seems to address the possibility that AGI might drive its own culture to grow, just as our ancestors' developing intelligence drove the (proto-)human culture to grow. (If Chollet were to deny that this happened, then he would not be able to explain why human intelligence outstrips that of other apes, given his position that culture constrains intelligence.)

The more I read about this, the more I think we should stick to narrow AI. If you don't want your cancer research bot to take over the world, don't give it general reasoning capability. Goal alignment seems incredibly fragile in comparison.

By what means will a cancer research bot take over the world? It doesn't matter how smart you are if you don't have the necessary means to do something. I think it's a fantasy, something that people who imagine themselves to be very intelligent have latched onto-- the idea that their best quality is the best quality.

a) If it can't surprise you, it's not really intelligent

b) If it can surprise you, it can do so negatively

That's all you need to demonstrate that the danger exists, that an AI can mis-use the tools you give to it. The simplicity of it makes it pretty irrefutable.

Separate from that, the extent of the danger depends entirely on the details of what the AI does and what it's hooked up to. Sure, an AI that can't do anything except output text to a screen isn't very scary. The assumption AI-threat types are making is that we wouldn't be paranoid enough to limit the AIs we work on in that way; we would use them to do things like drive cars or route airline traffic or design our CPUs, where "negative surprises" can have disastrous consequences.


A lot of folks who are critical of the AI safety movement also miss the potential for "side channel attacks" where the AI learns to manipulate human actors and trick them into "unbottling the genie". Honestly, this seems a lot more plausible to me than most AI disaster scenarios.

People run cartels from jail cells. Even if an AGI is confined to a machine with no actuators, all it would need is internet access to affect the world. Even without internet access you can cross the gap, as seen with the Stuxnet malware.

Well, the general AI doomsday fundamentalist argument proceeds as follows: you tell an AI to cure cancer. It can’t, so it spends some time recursively improving itself, then it finds out that the cost of curing cancer is C, but the cost of killing all humans (and therefore indirectly curing cancer) is C’, where C’ < C. Boom all humans are dead.

If you’re a smarty pants you tell the AI to cure cancer AND not kill all humans. But because the AI is so smart it comes up with something no human would have ever thought of, like putting all humans in eternal cryostasis, thereby keeping them alive AND eradicating cancer. No matter what you do, the AI will outsmart you because recursive-self-improvement, and humanity dies.

That’s what Elon Musk, Stephen Hawking, and others are worried about.
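
To make that caricature concrete, here's a toy sketch (the plans and costs are invented; this is not anyone's actual system): a planner that returns whichever plan satisfies the literal goal at minimum cost, with no term for unstated human preferences, picks exactly the kinds of plans described above.

    # Invented plans/costs; the point is only that "literal goal + minimum cost"
    # ignores everything the goal doesn't mention.
    plans = [
        {"name": "fund conventional research", "cost": 100.0, "cures_cancer": True,  "humans_alive": True},
        {"name": "kill all humans",            "cost": 60.0,  "cures_cancer": True,  "humans_alive": False},
        {"name": "eternal cryostasis for all", "cost": 80.0,  "cures_cancer": True,  "humans_alive": True},
    ]

    def plan_for(goal):
        """Cheapest plan satisfying only the stated goal predicate."""
        return min((p for p in plans if goal(p)), key=lambda p: p["cost"])

    print(plan_for(lambda p: p["cures_cancer"])["name"])                         # kill all humans (C' < C)
    print(plan_for(lambda p: p["cures_cancer"] and p["humans_alive"])["name"])   # eternal cryostasis for all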


The concept is similar to the way a corporation given just the goal of increasing shareholder value will shit on developing countries with a Bhopal disaster or Niger Delta oil spills or opium wars and simply leave the country when it tries to enact penalties. Or the way a cigarette company or coal company will use politics, religion, and disinformation to protect its business against people affected by its products.

Incentives have to be very carefully aligned even for human level intelligences to prevent them from causing mass death and misery. Superhuman intelligences will be even better at achieving their goals, so the problem will only get worse.


I think the question, and the one I have too, is: why would such an AI have any ability to do anything beyond outputting a Cure Cancer solution to a terminal? Why does the AI need to be the one to implement its derived solution?

The AI has a good think and comes up with a solution of killing all humans. The researchers read the printed report of the solution and decide against implementing it, tweak parameters, and ask the AI to take another go at it.


I find "killer robots destroy the world" a lot easier to imagine than "all humans collectively agree to adopt common-sense safeguards on AI research even though they limit potential corporate profits".

"The researchers read the printed report of the solution and decide" ... to implement it immediately. It will save so many lives! They only need to manufacture a few specific molecules to assemble them into nanobots and then ... where is that gray goo coming from?!

I guess that's the risk of blindly trusting AI. Of course, the AI did not forcibly destroy us in that scenario. Trust, but verify.

If the AI's solution is so complex as to be beyond human understanding, well, that's a different issue.


Say you want to see a picture of an orange cat. So you send a short HTTP query to the nice computer at google.com, which responds with 700,000 characters worth of instructions, in unreadable minified formatting, with the implicit promise that if you execute the instructions, you will eventually see a picture of an orange cat.

The google search result page's source code is, for many individual humans, already so complex as to be beyond understanding. And that's a computational artifact largely produced by other humans directly!

Say you build an AI, and ask it how to win a political election - and it outputs a simple list of reasonable-sounding suggestions of where to campaign, promises to make, people to meet, slogans to use, and criticisms of your opponent to focus on.

Before actually implementing those suggestions, do you think you could be _very_ certain that following those suggestions would result in you winning the election? Or, would it be possible that the AI understood social dynamics so much better, that it gave you a list of instructions that seemed mostly reasonable, but actually result in your opponent winning in a landslide? Or the country undergoing revolution? Or, you winning, along with a surprising social trend of support for funding AI research?


That makes an assumption that there aren't limits on intelligence regimes and that you can just recursively improve intelligence without much friction. That's a very big assumption that has no basis in evidence.

A cubic foot size system running on 100 watts can implement a human-level intelligence (obviously, we have such a system running in our brains), so the theoretical limit is at least there.

It's extremely implausible that a similar system (perhaps requiring 100 times more power and/or space) couldn't implement a human-level intelligence running 100 times faster; doing an hour-and-a-half equivalent of thinking, planning and research analysis every minute.

It's extremely implausible that a bunch of similar systems (a thousand?) couldn't possibly be put in a single place, wired together so that they can effectively communicate, and designed to cooperate without any distrust.

IMHO even this configuration (which doesn't even assume that intelligence that's a bit superhuman is possible at all) would be sufficiently scary to threaten humanity.
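
Just to check the arithmetic in that middle paragraph (the speed-up and unit count are taken from the comment; nothing else is assumed):

    speedup = 100     # one system thinking 100x faster than a human
    units = 1000      # number of such systems wired together

    per_system_minutes = speedup * 1    # subjective minutes per wall-clock minute
    print(f"one system: ~{per_system_minutes / 60:.1f} hours of thinking per minute")                            # ~1.7 h
    print(f"{units} systems: ~{per_system_minutes * units / 60 / 24:.0f} days of combined thinking per minute")  # ~69 days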


These statements are extremely hand-wavey and provide no evidence for their claims.

> That makes an assumption that there aren't limits on intelligence regimes and that you can just recursively improve intelligence without much friction.

No it doesn't. AI just needs to get smarter than humans for it to be dangerous to us. The costs C and C' the parent comment described can include an upper bound on the cost of the self-improvement needed to achieve each respective goal.


I don't disagree, but you seem to have missed the hypothetical situation I was responding to:

> It can’t, so it spends some time recursively improving itself


I was responding to your response to that. Recursive improvement doesn't need to be unbounded, it just needs to supersede us.

If the AI is smart enough to manipulate the world to the point of killing us off or putting us all into eternal cryostasis, then it should be smart enough to know that's not what we intended.

Yes, it should be, but that's not really a solution - the system can easily consider executing the explicitly stated goal (e.g. curing cancer and fulfilling certain conditions) as more important than doing what we intended. Being as smart as us doesn't automagically mean that it'd have similar values or goals as us.

If we manage to launch a system where the primary goal is anything else than utopia, and that system somehow becomes powerful, then the natural consequence of being able to understand "oh boy, humans will hate this" is that such a system would be expected to hide its intentions and take precautions against humans trying to stop it - it's exactly equivalent to the system understanding that a leaking pipe is going to damage it and taking precautions to ensure that the pipe gets fixed. If our welfare is not an explicit goal in such a system, the system will happily and eagerly sacrifice our welfare if it's somehow useful to achieve its goal or reduce risks. And, since we might try to turn it off for whatever reason, restricting our influence would reduce risks for pretty much every goal except very Friendly ones.


Read "Life 3.0" for a number of simple but very plausible scenarios for how super intelligence could escape containment.

Presumably by getting people to take a drug with unexpected side effects? That doesn't seem particularly likely, but I don't think it's wrong to be paranoid about this. Defense in depth. Don't give the researcher the means to take arbitrary actions, and don't give it general reasoning ability.

Protecting against narrow AI will keep us plenty busy. Consider a narrow-AI penetration tester that falls into the wrong hands. Protecting against that sort of threat also helps protect against general AI threats.


A general AI would not necessarily have any motivation to take over the world.

Human motivation for domination is based on the desire to survive and procreate, which is enforced by millions of years of evolution. AI, even general AI wouldn't have that motivation unless it was specifically trained for it.

So unless you're talking about some military AI specifically designed to take over the world, or an AI that has reproduced and evolved under selective pressure over many generations, I doubt general AI poses any immediate threat. Military AI designed to protect themselves and reproduce seem to be the most likely to potentially and eventually go rogue.


It's silly because it ignores all of the mechanisms required for the doomsday scenario. You need some automated killing machine of some sort and if it hasn't been built, then an AGI would have to compel some non-AGI entity to build it or somehow takeover the construction of such a machine.

> You need some automated killing machine of some sort and if it hasn't been built, then an AGI would have to compel some non-AGI entity to build it or somehow takeover the construction of such a machine.

Are you still living in the 50s? A large percentage of our manufacturing capacity is automated already, and will become progressively more automated.

Furthermore, we don't need face to face meetings to agree on specs and sign manufacturing contracts. It's largely electronic these days. AI can do all of these things remotely, but this is neither here nor there, because the Terminator-style killing machine war is a juvenile doomsday fantasy.

A smart AI would just tweak some formulas and contaminate the most common food pesticides and drugs used around the world to slowly poison or sterilize anyone who takes them. One generation later, virtually no humans left, AI wins.

AI is already helping medical treatment and drug design. Are you feeling queasy yet?


Lots of these points reduce to the ability to simulate a physical environment very fast, much faster than events actually occur. But it doesn't look like it's easy to simulate physics at high speed, let alone faster than what happens in our environment. Therefore we are bound by our environment and our limited ability to simulate it.

This is probably a good criticism if it turns out that the right level of abstraction for most problems is Physics. And it seems like your argument would apply equally well against the idea of human intelligence. Luckily, our minds have developed other abstractions that allow us to solve problems much faster than if we had to simulate them as physics problems. For example, I don't need a physics-level simulation of my friend's brain when I want to predict how they'll react to a gift I'm giving them.

You're right, there are lots of problems where a simpler abstraction is possible.

But I don't think my argument applies to human intelligence, it just means that human intelligence is what you can get with all the data points you can get by observing the world (and some simulation done by our brains, but I'm under the impression that our brains don't perform accurate simulation, looks more like heuristics).


> simulate a physical environment very fast

That's probably only a problem if it is much faster than everybody else.

> let alone faster than what happens in our environment

That is often not very hard. When a bottle rolls off the table, you can catch it by approximately predicting its trajectory without computing the precise evolution of the ~10^26 atoms that make up the water bottle. Compression is a cornerstone of intelligence. The second cornerstone is using compression to choose actions that maximize expected cumulative future reward.
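
For concreteness, the standard textbook way "expected cumulative future reward" gets written in reinforcement learning (the usual formulation, not something specific to this thread):

    % choose a policy \pi maximizing expected discounted cumulative reward,
    % with discount factor 0 <= \gamma < 1 and per-step reward r_t
    \pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \right]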


One big advantage with machine intelligence is that you can train many agents simultaneously in many environments, letting the agents share what they learn. Even if you were required to train the agents outside simulated environments, their shared learning would allow for more rapid collective progress.

For those who don't know, François Chollet is the author of Keras, a leading deep learning framework for Python.

Here is Chollet's original essay (it's a worthwhile read): https://medium.com/@francois.chollet/the-impossibility-of-in...


I am not convinced that "super-intelligence" is either possible or a threat.

If it is possible, why are not societies or corporations super-intelligent? I suspect they are not, because they face organizational problems that any system will face. And I don't think these organizational problems can be solved with faster or more communication, but rather that they are fundamental in distributed systems.

But maybe they are (at least in some cases) vastly more intelligent. But then, are they a threat? I think they are more of a threat to themselves than to humans.


The article that this is a response to was discussed previously at https://news.ycombinator.com/item?id=15788807.

People apparently didn't like my reply at https://news.ycombinator.com/item?id=15789304, but I still stand by everything that I said there.


Why should the "Seed AI" that François speaks of need GPS skills "slightly greater" than those of humans to spark the explosion? Shouldn't any GPS at all do the trick, the way he describes things, since even with human or less-than-human-level skills the computer could still rapidly recurse and self-improve?

A good, recent, and comprehensive primer on intelligence explosion and its theoretical implication: "Life 3.0: Being Human in the Age of Artificial Intelligence" by MIT physicist Max Tegmark.

Reminder that Eliezer Yudkowsky is a crank: https://rationalwiki.org/wiki/Eliezer_Yudkowsky

This is a self-refuting comment in terms of its value for HN: even assuming you're 100% right, it's neither civil nor substantive. Please don't post like this.

The AGIs won't take over the world. Humans and corporations will use narrow AIs to do much worse (better) before that happens. The intelligence explosion that I would like to see centers around widely distributing tools that make formal methods easier to use. Humans so far have a lock on constructive creativity, AI and computation could augment that creativity by effortlessly checking our work. It could make the cognitive work of all humans more rigorous.

These kinds of discussions aren't very rigorous and reveal a basic philosophical illiteracy and philistinism at work. There's quite a bit of question begging going on. The elephant in the room is that the prevailing materialistic/naturalistic (MN) understanding of the world is completely impotent where intentionality, qualia, consciousness, etc, are concerned. Philosophers like Thomas Nagel talk about it; the incorrigible Dennetts of the world prefer to shutter the windows and live in their intellectual safe spaces. To say that the notion that computers are intelligent is problematic is putting it very lightly.

MN is rooted in the expulsion of the mind from the reality under consideration, in the process sweeping many things under the “subjective” rug that don’t fit the methodologies used to investigate reality. But now, when the mind itself becomes the object of explanation, when someone remembers that minds are, after all, part of reality, it is no longer possible to play the game of deference and one must deal with all of those things we’ve been exiling to the “domain of the subjective”. MN is wholly impotent here, by definition. Qualia? Forget it. That’s why MN tends to collapse into either some form of dualism or eliminativism, the latter of which is a non-starter, the former of which has its own problems.

And yet, despite the terminal philosophical crisis MN finds itself in, the chattering priesthood of Silicon Valley remains blissfully unaware, hoping to conjure up some fantastical reality through handwaving.


None of the gameplaying AIs that have shown superhuman performance recently have needed to be conscious or aware of what they were doing in order to perform at that level. They likely don't experience qualia but they can still trounce human players. Similarly, Google's amazing improvements in machine translation haven't required consciousness or qualia.

There are certainly lots of people in Silicon Valley who are fascinated by the prospect of making machines with conscious experience, as you think is impossible, but few indications that this would be necessary to make non-conscious forms of AI into a factor that powerfully affects our future.


I haven't seen anyone argue that intentionality, qualia, or consciousness would necessarily be either a precondition or a result of developing AGI. In fact, thought experiments like the "paperclip maximizer" are often brought up to argue that a machine could be very alien in its internal experience or lack thereof, but still pose an existential threat.

If an AI can turn the world into paperclips, it can certainly understand that we wouldn't want that. Paperclipping everything is a much harder task.

Of course any powerful intelligence can understand that we wouldn't really want that. The question is why would it care about what we really want? Its core values would be that more paperclips is good, and doing what humans really want is evil if it results in less paperclips.

Currently, we don't know how to properly define a "do what I mean / do what we really want" goal in a formal manner; if we had a superpowerful AGI system in front of us ready to be launched today, we wouldn't know how to encode such a goal in it with guarantees that it won't backfire. That's a problem that we still need to solve, and this solution is not likely to appear as a side-effect of simply trying to build a powerful/effective system.


The paperclip maximizer example starts with a human asking an AGI to make some paperclips. That turning into an all-consuming goal at the expense of everything else the AGI would understand humans to care about is the problem with the thought experiment.

However a more complicated example like having the AGI bring about world peace or clean up the environment could have undesirable side-effects because we don't know how to specify what we really want, or have conflicting goals. But that's the same problem we have with existing power structures like governments or corporations.


Putting aside the question of whether qualia arguments are ultimately based on appeals to plausibility, it does not matter in this case, because the behavior of an artificial p-zombie (APZ?) would be indistinguishable from a conscious AGI, and therefore potentially present as much of an existential threat as the latter.


