The Myth of a Superhuman AI (backchannel.com)
254 points by mortenjorck 11 months ago | 357 comments



> Temperature is not infinite [...] There is finite space and time. Finite speed.

Just want to point out that this is true; however, these quantities go astronomically high.

> what evidence do we have that the limit is not us?

We can measure the speed impulses travel through neurons, and compare that to, say, the speed of electrical impulses through silicon or light through fiber.

We can find the maximum head-size that fits through a vaginal canal, or the maximum metabolic rate a human body could support, and try to determine if these factors imposed any limitations on intelligence during human evolution.

We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs, and see if a pattern emerges where the artificial analogs tend to have similar limitations as their biological counterparts.
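On the first comparison, a back-of-envelope sketch (both speeds are rough order-of-magnitude figures: fast myelinated axons conduct at up to roughly 100 m/s, and light in optical fiber travels at about two-thirds of c):

```python
# Rough signal-speed comparison; both figures are approximate.
neuron_speed = 100.0   # m/s, fast myelinated axon (unmyelinated ones are far slower)
fiber_speed = 2.0e8    # m/s, light in optical fiber (~2/3 of c)

ratio = fiber_speed / neuron_speed
print(f"fiber carries signals ~{ratio:.0e}x faster than a fast axon")  # ~2e+06x
```

Even granting the brain massive parallelism, that is a six-order-of-magnitude gap in raw propagation speed.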


> Temperature is not infinite [...] There is finite space and time. Finite speed.

There's no theoretical limit for temperature, and we believe spacetime could be infinite even though the observable universe is not. Nevertheless, this is a very silly argument: if the limit is finite but absurdly high, it's good enough for the purpose.


If the energy density in a fixed volume of space gets too high it collapses into a black hole. That seems to suggest that there is a kind of limit to temperature.


Only if it's concentrated. Otherwise the entire universe would have collapsed into black holes during the big bang. You need an energy gradient for gravity to work like that.


I'm a complete noob in this area, but that doesn't mean that temperature has reached its limit, does it? The black hole can carry on getting more dense and hotter.


No, the black hole can't get more dense. If you put more energy into it it gets bigger.


Schwarzschild radius is directly proportional to mass, so (if I understand correctly) it's the energy per unit of surface area, rather than per unit of volume, that is bounded.
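A quick numerical sketch of that point, using the standard formula r_s = 2GM/c^2 with rounded constants: since r_s grows linearly with mass, the mean density inside the horizon falls as 1/M^2, so bigger black holes are *less* dense, consistent with the comment above that adding energy just makes the hole bigger.

```python
import math

# Rounded physical constants.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon: r_s = 2GM/c^2, linear in mass."""
    return 2 * G * mass_kg / c**2

for solar_masses in (1, 1e3, 1e6):
    m = solar_masses * M_SUN
    r = schwarzschild_radius(m)
    density = m / (4 / 3 * math.pi * r**3)  # mean density inside the horizon
    print(f"{solar_masses:>7.0e} M_sun: r_s = {r:.3e} m, mean density = {density:.3e} kg/m^3")
```

For the Sun's mass, r_s comes out to about 3 km; scale the mass up a million-fold and the mean density drops by a factor of 10^12.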


Yes, there is a theoretical limit for temperature.

https://en.wikipedia.org/wiki/Absolute_hot

For all intents and purposes, however, it is preposterously enormous.


That's not a limit for how hot temperature can get, just a limit at which existing theories break down.


No, there's not. Read what you posted more carefully. Planck temperature is the scale at which gravitation becomes as important as other forces, and thus without a working theory of quantum gravity we can't predict what happens at that scale.



Planck length, Planck time and Planck temperature are not the minimal/maximal values for these quantities that "could exist". It is the boundary of the regime that our current theories can describe.


> There's no theoretical limit for temperature

I suppose when the particles reach lightspeed?


I see that I am mistaken: temperature is a function of a particle's energy, not its speed. The energy at light speed is infinite, so that doesn't really pose a temperature limit.

There does, however, seem to be a quite widely assumed maximum temperature: the Planck temperature of about 1.417×10^32 Kelvin.


The archetype of temperature is a black body radiator. Classically, you make a block of metal (ceramic, whatever) with a spherical cavity and drill a small hole into the cavity. Then you draw a vacuum and heat the cavity, point a scope at the hole, and measure the spectrum of light coming out. In this circumstance, there is no mass that limits the energy: you are measuring the temperature of a black, empty vacuum; thus there is no upper limit.


> Then you draw a vacuum and heat the cavity

What are you heating if it's a vacuum? How can you heat nothing?


You're measuring radiative heat, which indeed exists for most of its time in nothing. I guess that the idea of GP is that you could make the heating block arbitrarily big (say, size of the universe) and have it dump all its radiative heat into that cavity.


I see. Thanks.

The difficulty here is people don't often specify if they mean heat in the form of kinetic energy or radiation.

Much like how people say things like "releasing energy" without specifying the form of energy.


I'm sorry, but this is a complete misapprehension of "black body" radiation, and your experiment is nonsensical. "Black body" refers to an object that absorbs all incoming radiation, and has nothing to do with a "black, empty vacuum". The spectrum of such an object is therefore entirely radiative (no reflection) and depends only on its temperature. As a sibling points out, a vacuum cannot be heated - there is nothing to heat - and does not have an emission spectrum. Your experiment would simply measure the spectrum of the bottom of the hole you drilled.


When you heat something, you have to add energy to it. Mass and energy are equivalent; both warp spacetime. You will, in fact, reach a temperature where your block of metal collapses.


I was simply trying to explain to the parent that temperature is not dependent on the speed of a massive particle.


OK, but arbitrarily high temperatures necessitate arbitrarily high mass-energy densities, and the latter won't happen without a gravitational collapse.


I think a compelling argument is the variance in intelligence among humans themselves. A cabal of AIs 2x as intelligent as your average "genius" could be quite formidable were they to decide to act against the interests of humankind.


Yes, this is why the nerds rule over the jocks in high school. Oh, wait…


That's different, because there are specific rules against arms races in schools, that people typically adhere to. If there was a battle scheduled in advance, I'd bet on the nerds.

But what's more, if they both have guns then everything's about even. Strength doesn't really help if you've been shot, and most high-schoolers don't have much experience shooting guns.


>most high-schoolers don't have much experience shooting guns

Actually a lot do but probably not so much in the places that readers/commenters here live.


They don't win in high school. But later in life, jocks find themselves employed by nerds.


But anyone half that smart would know not to give an unproven system absolute, unchecked control over their critical infrastructure, which would basically be a requirement to causing problems humans can't easily overcome.


The Machines will come to control our infrastructure through a series of small decisions that seemed like a good idea at the time.

Computer control of farming decisions such as watering and fertilising. Link that in with insolation and wind prediction to minimise waste or maximise productivity.

This happens in the cloud thanks to insolation prediction being tied into wide-area networks of solar farms.

More such networks evolve independently, then one day some bright spark realises that prediction and control in domain X can be combined with control and prediction in domain Y to produce greater efficiencies.

Linking reproductive health and food production to maximise water savings, for example.

Next thing you know the computers decide that human population is best maintained in the order of hundreds of millions rather than billions.

And then we find out that Terminator was actually propaganda from the human rebels who didn't like the computers telling them not to breed, and the nuclear war was Skynet protecting its desired humans from the rebels, not protecting itself.


The limiting factor in farm productivity is not decision making by human farmers. Anyone who has been farming for years already makes pretty close to optimal decisions based on the available data about crop prices, weather, soil status, labor rates, etc. The ones who make bad decisions end up bankrupt, and leave farming or work as hired farmhands. And there are so few decision-making farmers left anyway that automating the decision making part of their jobs won't achieve any meaningful cost savings.


> Anyone who has been farming for years already makes pretty close to optimal decisions based on the available data about crop prices, weather, soil status, labor rates, etc.

I'm not sure if you're including software analysis in your notion of "based on the available data", but if not, then this would absolutely not be true - unaided humans, even veteran growers who've been managing the same fields for decades, do not make anywhere close to optimal decisions. They make good ones, generally, and can be long-term profitable with relative ease, but it's absolutely not even close to what you get when you bring intelligent software into the mix.


Sure of course good farmers are taking advantage of the best available software analysis tools. The point is that those tools already work well enough that even if superhuman AI actually becomes available at some point (and that might never happen) it won't necessarily make any better decisions than what human farmers are already doing today.


Maybe Amazon will own 90% of the farms because they have the cheapest robots and the best algorithms and statistics.


John Deere and their competitors are far, far ahead of Amazon.com in farm automation. There's a huge difference between warehouse picking robots and farm equipment. Being cheaper doesn't matter, it's ROI that counts.


Except Amazon could easily acquire John Deere.


Just briefly scanning the numbers on Wikipedia, Amazon.com could acquire John Deere, but it'd be highly unlikely unless they expected that a change in management would yield implausibly high returns.


I suppose my point is, they could acquire the knowledge needed to do cutting edge farm automation if that's something they wanted.

And it should be something they want.


I'm sure Jeff Bezos is eagerly awaiting your advice on Amazon.com acquisition targets. Farm automation is a great strategic fit with web services and online retailing, right?


It's a rather straightforward step in controlling the supply chain when you are pushing something like Amazon Fresh.

I suppose they also aren't a hardware manufacturer? Or a logistics company?


Just how much faith do you have in your politicians?

By using existing political structures and a mix of rhetoric, they have managed to get a lot more power than a typical person.


Nice point. The US presidential election has had me thinking about how a smart AI might decide instead to bend humans to its will with misinformation campaigns. That of course would require an AI of human-comparable intelligence.


I believe it doesn't necessarily need to outsmart the /average/ human. Consider this scenario:

* Someone makes an AI which scores only 85 on IQ tests, but which can run in real time on common consumer hardware.

* It gets installed on every phone, PC, and tablet.

* Someone, possibly the AI itself if it is the type to generate its own goals, sets all instances of the AI the goal of convincing their user of some belief or policy or whatever.

* Everyone with IQ<85 is convinced, nobody with an IQ>85 is convinced.

* 16% of the population believe whatever it is the goal-setting mind wanted them to believe, at marginal cost. An election is swung, a bank suffers a run, a pyramid scheme makes millions, whatever it is.
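The 16% figure is just the normal-distribution tail: IQ scores are conventionally normalized to mean 100 with standard deviation 15, so IQ < 85 is one standard deviation below the mean. A quick check:

```python
import math

def fraction_below(iq, mean=100.0, sd=15.0):
    """Normal-CDF estimate of the population share scoring below `iq`."""
    z = (iq - mean) / sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"{fraction_below(85):.1%}")  # prints 15.9%
```

One standard deviation below the mean captures roughly one in six people, matching the scenario's 16%.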


the ability to emit words on the screen gives a human-like intelligence all kinds of abilities

especially if the human-like intelligence doesn't need to pay rent, fear the police, follow laws, sleep, eat ... etc


In the absence of counter examples, it's reasonable to assume a truly human-like intelligence needs as much down-time and sleep as a real human, and that it can be as effectively coerced with the threat of simulated prison if it breaks the rules.

But the point about food and rent is very economically important if they ever get to be cheaper than human labour.


Would you imagine that a computer smarter than the smartest human, with an understanding of software that probably only a computer could have, couldn't conceal what it's doing as it creates trojans and viruses that give it control over every piece of connected equipment everywhere?


All of that is moot when you can just turn off the power.


Can you?

The AI, when it becomes popular, is going to be a business-critical system. Potentially even intertwined with life-critical systems like self-driving cars and self-flying planes. If it isn't, then in "hidden malevolent hacker" phase it can just install trojans to make those systems dependent on it.

Petrol production is hazardous to our future. Can you "just turn it off"? Yes; shut down a fairly small number of refineries and the system will run dry within a week. CO2 emissions problem solved! It's just that, you know, people were depending on being able to get food delivered by truck.

Obviously the first malevolent AI is going to hide itself behind an offshore opaque-ownership company and hire a good set of PR and lobbying firms. Can you turn off Skynet when it's reported as "critical to national security" in all the news sources?


So you're supposing that we have honed and selected an AI so wonderfully adept at valuing human needs that we make most human life dependent on it, yet it will spontaneously come to disregard the value of human life? And despite the fact that we would naturally require it to be sufficiently transparent about its valuations in order to actually ensure we get the results we want, it will also develop this animosity in secret?

That seems a bit contradictory to me. And if not, highly contrived and unlikely.


"And despite the fact that we would naturally require it to be sufficiently transparent about its valuations"

That's not necessarily true at all. Let's say we come up with a way to model neurons in silicon like a very complex neural net to the point of emergent consciousness. We might not be able to look into memory location 0xFFA79348 and get the "human value" int to see if it is high enough.

It could very well be that we don't really understand the weighting in the net or the implications of those weights because it's too complicated. Even current neural nets of non-trivial complexity can be impenetrable to evaluate outside of the net itself.


Transparency doesn't require that you be able to read its memory directly, just that it is able to sufficiently justify the decisions that it makes.


Like we can justify all decisions made by super-genius people who are extraordinarily good at concealing their activities, when their cognitive processes and abilities on the Internet and other forms of communication would be beyond our comprehension?

I think that you overestimate our ability to understand what we can create. Even Deep Neural Net researchers consider the neural nets to be a black box that they don't really understand. Further... unlike human beings who are locked into their genetics and their neuronal pathways - an AI will have the instant ability to improve itself. It will be able to rewrite its own software faster than we can comprehend. In the hours from the time the AI would be created until a researcher even might realize what he had created, the AI will effectively have had years or tens of years of contemplation time and self-enhancement time. If the researcher goes to sleep with plans to work with the new AI the next morning, the AI will effectively have hundreds of years of time to plan and improve itself. If it has any kind of Internet access, it will be able to take over systems around the world, possibly parallelizing its own cognitive abilities and possibly backing up its consciousness in case the researcher decided to eventually wipe it.

The genie will be out of the bottle so fast that human beings will almost assuredly be caught flat-footed.

I always shake my head when I see sci-fi shows where the humans (Stargate, Star Trek, etc.) come up against advanced AI opponents. It's only through control of the script and plot contrivances that the humans have a chance. People just won't be able to compare to thinking and self-modifying AI.

https://www.ted.com/talks/sam_harris_can_we_build_ai_without...


> AI so wonderfully adept at valuing human needs that we make most human life dependent on it, yet it will spontaneously come to disregard the value of human life?

This is an argument often deployed against "corporations" or "capitalism", both of which aren't quite intelligent but are definitely systems that we're dependent on and capable of disregarding inconvenient lives.

The AI doesn't have to hate humans, it just has to find that they get in the way of its quarterly targets.


That won't help; we know how to turn off humans from across the world with the click of a button, yet we still haven't managed to create peace on earth.


If it was successfully concealing its actions from you, what reason would you have to turn off the power?


If it were successfully concealing its actions, what wrong would it be doing? If all the bad things someone does have no effect on anyone else, then who cares if they do them?


Because it could be concealing its actions until it controls enough infrastructure or military power to launch an attack or prevent people from being able to shut it down.

What we don't know can hurt us.


It is not 'concealing its actions' if it is taking control of infrastructure and military power.

Just like I'm not 'concealing my actions' by putting a blindfold over your eyes. My action is "putting a blindfold over your eyes", which must be very apparent for the entire action to have any value. Now if I hide that action by putting on an invisible blindfold, then you've got nothing to complain about anyway.


Why would it be so obvious if it understands enough about strategy to use subterfuge?

When the US government unleashed Stuxnet, did it announce what it was going to do?


Batteries exist. And so do small-scale power generators like PV cells, windmills and, well, diesel generators.


Counter-examples exist every time any critical infrastructure is hacked: if the systems were truly proven, they wouldn't be hackable.

Proof is hard, good enough gets deployed.


This makes me want to find and watch "Colossus: The Forbin Project" (1970 sci-fi).


Yes, but it is noteworthy that most geniuses are shortsighted in other respects. They probably would not survive in prehistory.


It's not clear to me how true this is. There are definitely individuals that exhibit savant skills in one area and mediocre-to-poor skills in other areas. But there are also true polymaths who can turn their hands to just about anything; I suspect that those people would do well on aggregate in any period of human history.


Depends on the style of genius. Napoleon, Genghis Khan, Alexander the Great . . . they seem to have done pretty well for themselves.


Obviously some did.


At least long enough to reproduce.


>We can measure the speed impulses travel through neurons, and compare that to, say, the speed of electrical impulses through silicon or light through fiber.

This assumes that all the processing the human brain does happens by sending signals across neurons. Which we really don't know. Right?

>We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs..

Do we yet have an airplane that can come close to the efficiency of even the clumsiest of birds?


> Do we yet have an airplane that can come close to the efficiency of even the clumsiest of birds?

Regardless of efficiency, we have certainly exceeded the range, cargo capacity, top speed, and passenger space of most birds.

Similarly, even if signal propagation in the brain is not understood, we could conceivably beat the brain in other metrics.


> Regardless of efficiency, we have certainly exceeded the range, cargo capacity, top speed, and passenger space of most birds.

This is the core point. Our technology may still suck compared to nature, but we've been beating nature consistently since before humans learned to write, because we don't have the same set of constraints that natural biological systems have.

So just like we can make inefficient flying machines that beat nature on most other conceivable metrics, we could presumably build an energy-inefficient but smarter mind at some point.


I would say the progress that humans have achieved by now was made by hacking Nature, and we were able to do this because we are part of Nature. So in order to produce an AI capable of the kind of progress humans make, it needs to be part of Nature. Such wet-ware computers already exist; look up DNA-based computers, though those are highly constrained (and their IO is terrible). The effort we could put into wet-ware-based AI would probably be better spent improving our own intelligence by hacking our own brains.

> we could presumably build an energy-inefficient but smarter mind at some point.

So this smarter mind will probably be our own, but that would also be achieved by some sort of artificial means.


Saying that "progress that humans have achieved by now was by hacking Nature" implies that planes and computers are "hacking Nature" too, so this doesn't preclude a silicon brain, nor does it suggest the AI will have to be biology-based. Planes aren't biology-based; in fact, we only cracked powered flight after we stopped trying to mimic birds.

> So this smarter mind will probably be our own

It's plausible for me too, but not because we can't do stuff outside what evolution did - just because it's usually easier to hack an existing system than to build a new one from scratch.


> hacking Nature" implies that planes and computers are "hacking Nature" too

Sure it does, because physical laws are a fundamental part of Nature.


Nobody tries to build an AI that breaks laws of physics by virtue of existing :). If we're extending the term "Nature" to mean all reality, then silicon brains are a part of nature too.


>we could conceivably beat the brain in other metrics.

Sure, the very first calculators beat brain in some metrics. But we are not talking about those metrics here.


> This assumes that all the processing that human brain does is by sending signals across the neurons. Which we really don't know. Right?

It doesn't need to assume all, just some non-trivial amount. I don't know the ratio of inter-neuron to intra-neuron processing in brains, or if such a metric even exists. But it seems theoretically possible at least to measure the computing power of a neuron in isolation, and see if artificial analogs exist (or could be built) that exceed that by some margin. Ultimately we're just looking for avenues of evidence to explore.


> Do we yet have an airplane that can come close to the efficiency of even the clumsiest of birds?

The author makes a good point about intelligence not being one-dimensional, and I think the same thing holds for flying ability. Efficiency is one dimension, but you also have speed, range and so forth.


Looks like the vaginal canal is no longer a limiting factor. C-Sections are having a measurable impact on this.

http://www.bbc.co.uk/news/science-environment-38210837


>We can find the maximum head-size that fits through a vaginal canal, or the maximum metabolic rate a human body could support, and try to determine if these factors imposed any limitations on intelligence during human evolution.

Head sizes have been larger in human history, and some human relatives with larger brains died out. Energy consumption, on the other hand, is a limiting factor for every computational device, and energy efficiency tends to be a key category where Nature still wins. In particular, the liver and the enzymes of various plants and microbes can perform chemical reactions in water at ambient temperature that we can't do at all; resilin is still a more efficient elastic material than any synthetic elastomer; artificial piezoelectric muscles have less than a tenth the energy efficiency of real ones; and of course lots of animals are able to run around in polar and desert regions we still can't traverse because of the fuel and material costs.

One particularly revealing example is to calculate the ratio between:

* the performance per watt of the most efficient supercomputer (7 GFlops/watt) vs the human brain (38 petaflops / 20 watts = 1.9 petaflops per watt)

* the performance per watt of the human brain vs the Landauer limit at body temperature (don't remember)
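A rough sketch of both ratios, using the Landauer bound E = k_B·T·ln 2 per bit erased at body temperature (~310 K). Treating one flop as one bit operation is a large simplification (a real flop erases many bits), so these are order-of-magnitude figures only:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T_body = 310.0       # approximate body temperature, K

# Landauer limit: minimum energy to erase one bit at temperature T.
landauer_j_per_bit = k_B * T_body * math.log(2)
landauer_ops_per_watt = 1.0 / landauer_j_per_bit   # bit erasures per joule

brain_ops_per_watt = 38e15 / 20   # 38 petaflops on a 20 W budget (figures from above)
supercomputer_ops_per_watt = 7e9  # ~7 GFlops/W, most efficient supercomputer

print(f"brain vs supercomputer: {brain_ops_per_watt / supercomputer_ops_per_watt:.1e}")   # ~2.7e+05
print(f"Landauer limit vs brain: {landauer_ops_per_watt / brain_ops_per_watt:.1e}")       # ~1.8e+05
```

Interestingly, under these (very rough) assumptions the brain sits about as far below the Landauer limit as our best supercomputers sit below the brain.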


>* the performance per watt of the most efficient supercomputer (7 GFlops/watt) vs the human brain (38 petaflops / 20 watts = 1.9 petaflops per watt)

Encouraging, but AI wouldn't be constrained to an energy budget similar to a human. At the rates you describe, beating a human would take the output of only a single wind turbine. https://en.m.wikipedia.org/wiki/Orders_of_magnitude_(power)


Yeah but ... human beings are fuelled via stomachs. Computers are fuelled via turbines built into gigantic dams. Or nuclear reactors. Or wind turbines. Or...


It's a general principle of contemporary technology that we lose with nature on efficiency, but compensate for that with sheer scale of our solutions. Birds are much more energy-efficient than planes, but then again, birds can't fly supersonic and don't move around tons of cargo.


>> what evidence do we have that the limit is not us?

This is just such manifestly wrongheaded reasoning as to be almost laughable.

I think there's a substantially better argument against this notion than to actually try to find evidence against the claim. I prefer it because it reveals the total wrong-headedness of this style of reasoning. The argument is to turn it around and say "what evidence do we have that the limit is us?" A human level of intelligence seems like an incredibly and oddly specific level to be the maximum permissible by physics. And oddly specific claims need to be treated with extreme skepticism, because they carry high entropy, in violation of Occam's razor.

If you want to know what x equals, you don't pick a number, say, 2, assume x=2, and ask people to provide you with reasons why it shouldn't be 2. There's an infinity of numbers, and the odds are just overwhelmingly against x happening to be 2. You start with the assumption that x could be literally any of the infinity of numbers, and start looking for evidence which narrows it down. More concretely, if you want to know the physical limit to intelligence, you start with the assumption that it could be anything, constrained by the evidence that's currently available. In this case, the evidence is that human intelligence provides a lower bound for the maximum physically allowable intelligence.

Positing that our intelligence is the maximum possible becomes even more suspect when you consider the process by which our intelligence came about - evolution.

Firstly, evolution is a process which tends towards reproductive fitness, not necessarily towards intelligence. Intelligence is selected for to the extent that it makes something more likely to reproduce, and no more, and it probably isn't the case that more smarts universally leads to more reproduction. In fact, most modern research suggests the opposite, with people with higher IQs having less sex.

Secondly, we know that natural selection gets stuck in local optima. If your species is on top of a 10 smart-points hill, and in order to get to the 1000 smart-points hill over there your species has to mutate in a direction that makes it stupider before it gets smarter, your species is going to stay on the 10 smart-points hill. This is because individuals that mutate stupider in the direction of the 1000-point hill will be selected against.

Thirdly, natural selection just isn't a very powerful optimization process. It works by random mutation, and it takes a lifetime to make a minuscule change which may or may not be in the right direction. Even if natural selection was solely optimizing for intelligence, and it could somehow avoid local optima, there would be no particular reason to believe that the world had existed long enough for the smartest possible organism to evolve.

Fourthly, as _greim_ touches upon, evolution has a very limited medium to work with. All it can do is change the sequence of DNA base pairs, and produce inefficient meat machines that run on respiration. There are all sorts of complex arrangements of matter which might be intelligent, and biological ones are only a tiny subset of those possible arrangements. The odds of the smartest system physically possible residing in that little biological subset are tiny to begin with, and only get smaller as you consider the more direct arguments against biological systems mentioned above, like sluggish neural transmissions, etc.


Exactly. From a skeptical POV, it's much harder to argue something can't be the case than it is to argue something could be the case. Especially for a concept as complex and multi-faceted as intelligence, which the author admits isn't one-dimensional.


First, let me say that I'm generally a Kevin Kelly fan.

That being said, I think his article shows extreme arrogance for one simple reason: To suppose that superhuman AI (AI smarter than us) won't exist is roughly the equivalent of saying that humans are at the limit on the spectrum of intelligence. Really? Nothing will ever be smarter than us?? Highly doubtful.

That should stand on its own, but I have other critiques. For instance, why does silicon have to be assumed? Why not germanium or graphite, or something else? I have little faith that a CPU circa 2050 will be built exclusively on silicon. By 2100, no way.

Second, there is a simple definition of intelligence that is applicable to many forms: intelligence is the ability to recognize patterns and make accurate judgements / predictions based on previously seen patterns. The higher the accuracy or the more complicated the pattern, the higher the intelligence.

My final point of contention is the idea that AI must emulate human thinking. Why? Maybe human thinking sucks. Maybe Dolphins have much better intelligence, but due to a lack of opposable thumbs, they don't rule the world like we do. And lest you think that less intelligent species can destroy others, could you really doubt that roaches and ants will be extinct before us?


To suppose that superhuman AI (AI smarter than us) won't exist

Which is exactly what Kelly doesn't say. He says that the smarter concept is ill defined, and that our current fantasies of some universally superior AI galloping onto the scene and taking over everything may be just that - fantasies.


> He says that the smarter concept is ill defined

Which isn't a contradiction like he claims it is. It just means that there are many different ways that a future AI can be smarter than us. That intelligence could be multi-dimensional.

But guess what: we can easily take that multi-dimensional input and find a formula that reduces it to a single scalar value based on our practical valuation of these forms of intelligence (almost like an intelligence 'utility function' from economics), and problem solved. We're right back to a single ordered dimension for ranking intelligence.
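As a toy sketch of that reduction (the dimension names and weights here are entirely invented for illustration):

```python
# Hypothetical scalarization of a multi-dimensional intelligence
# profile via a weighted sum; all names and numbers are made up.
def intelligence_utility(profile, weights):
    """Collapse a profile of per-dimension scores into one scalar."""
    return sum(weights[k] * profile[k] for k in weights)

weights = {"math": 0.4, "language": 0.3, "planning": 0.3}
human_baseline = {"math": 1.0, "language": 1.0, "planning": 1.0}
hypothetical_ai = {"math": 3.0, "language": 0.8, "planning": 2.0}

print(intelligence_utility(human_baseline, weights))
print(intelligence_utility(hypothetical_ai, weights))
```

Under these invented weights the AI scores about 2.04 against a human baseline of 1.0; any such formula imposes a total order, which is exactly the point being made against the "multi-dimensional, therefore incomparable" argument.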

It was a really weak argument he put forward.

Another weak argument was the branching / fan pattern of various species. Yes, all living species are at the peak of evolution for their environment, but they weren't all pressured to evolve more intelligence. Some evolved strength, speed, or flight suited to their environment.

If instead natural selection began only selecting for intelligence (like humans searching for AGI will), then you could definitely rank all animals linearly on a single path of intelligence.


It just means that there are many different ways that a future AI can be smarter than us. That intelligence could be multi-dimensional

A condensed way of saying precisely what Kelly is saying in the article. Allowing for the very real possibility that I am simply too dumb and not grasping your point.

but they weren't all pressured to evolve more intelligence

And it isn't claimed that they were. General evolution is used as an example of potential patterns in evolution of various intelligences.


He attempted to use the multi-dimensionality of intelligence to make the following claim:

> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

This is poor reasoning. The fact that intelligence is multi-dimensional has no bearing on our ability to declare something smarter than us. It isn't at all meaningless. Yet on this basis he claims that there will be no super-human AI.

Via analogy. He says, "you can't compare two football players because one may be stronger, while another is faster." So the concept of "better" is meaningless. And no player can be declared better.

My response is that's absurd. A simple counter-example: a single player can be both stronger and faster, and thus clearly better.
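The counter-example is just Pareto dominance. A sketch, with the player stats invented for illustration:

```python
def dominates(a, b):
    # a strictly dominates b: at least as good on every dimension,
    # and strictly better on at least one.
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

p1 = {"strength": 9, "speed": 9}
p2 = {"strength": 7, "speed": 8}

# p1 is unambiguously "better" despite there being two dimensions.
print(dominates(p1, p2))
```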


A third player is weaker, but faster. And smarter. Or tougher. Or more agile. More agile but not quite as smart. More persistent. Less predictable. And so on and so forth. Your 'meaningless' only has meaning because you apply it to a hugely simplified model.


> we can easily take that multi-dimensional input, and find a formula that reduces it to a single scalar value based on our practical valuation of these forms of intelligence (almost like an intelligence 'utility function' from economics)...

My original comment addressed that specific case.


>For instance, why does silicon have to be assumed? Why not germanium or graphite, or something else?

Why a traditional computing architecture in the first place? Assuming you throw ethics out the window, the fastest path to superintelligence may be via biological material.

https://www.theguardian.com/science/2015/aug/18/first-almost...


I wonder why ethical questions should be different depending on the hardware the AI runs on.


In an absolute sense there probably wouldn't be any difference—certainly none in theory, anyway, based on our current scientific knowledge.

When considering logistics however, there may be. Digital and biological approaches might ultimately produce the same mind in the end, but their methods of getting there may differ considerably due to the underlying medium. That's where I suspect the ethical differences may lie.

Who knows, digital could end up being worse somehow—I certainly wouldn't want to be a HAL 9000 unit undergoing a shutdown procedure. Especially when I can just pay for singing lessons.


This is completely silly. Superhuman AI is inevitable because there is nothing magical about human brains. The human brain is only the very first intelligence to evolve. We are probably very far away from the peak of what is possible.

Human brains are incredibly small, a few pounds of matter. Any bigger and your mother would be killed giving birth, or you would take 10x as long to grow up. They are incredibly energy constrained, using only a few watts of power, because any more and you would starve to death. They are incredibly slow and energy inefficient; communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy. And they are very un-compact - neurons are enormous and filled with tons of useless junk that isn't used for computation. Compare that to our transistor technology, which is approaching the limits of physics and is built on an atom-by-atom scale.

That's just the hardware specs of the human computer. The software is hardly better. There are just more unknowns because we haven't finished reverse engineering it (but we are getting there, slowly.)

But beyond that, the human brain evolved to be good at surviving on the savannas of Africa. We didn't evolve to be good at mathematics, or science, or engineering. It's really remarkable that our brains are capable of such things at all! We have terrible weaknesses in these areas. For instance, a very limited working memory. We don't realize how bad we are, simply because we have nothing else to compare ourselves to.

Consider how even today, relatively primitive AIs are vastly superior to humans at games like chess. Human brains also didn't evolve to be good at chess after all. Even simple algorithms designed specifically for this game easily mop up humans. And play at a level of strategy far above what even the best human players can comprehend.

Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.


>They are incredibly slow and energy inefficient;

Human brains are energy inefficient? Well, that's a first ;)

"In 1990, the legendary Caltech engineer Carver Mead correctly predicted that our present-day computers would use ten million times more energy for a single instruction than the brain uses for a synaptic activation."

"Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"

[1] http://www.newyorker.com/tech/elements/a-computer-to-rival-t...


In terms of energy consumed for individual computations, yes. Neurons use chemical reactions to communicate and this is terribly inefficient. Transistors use very small amounts of electricity in comparison.

The main difference is computer technology is designed to be very general purpose. The brain is more like an ASIC that's hardwired to run one specific algorithm. GPUs are also computing 16 or more bits of precision, when real neurons are very low precision. There are some other differences, like how real brains are incredibly sparse and most of the synapses at any given time are dormant and not using much energy. They are also very sparsely connected to each other. While our current NNs are very dense and need to spend energy to compute every single connection each cycle.


It seems premature to make judgements about efficiency when there is so much we do not understand about brain function and consciousness. When you can replicate all brain function, let's compare efficiency. Comparing the brain to an ASIC reveals the source of your error rather than defending your position.


For that to be a fair comparison, wouldn't you need to look at all the energy consumed by the human brain over the many hours it took them to become a Go champion?


I think that's a fair argument, but from the quote above

> "Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"

Let's say AlphaGo trained for a year; that would be 1 MWyr of energy consumed. And let's assume that Lee Se-dol's brain consumed 20 W over 34 years of his life doing nothing but working on Go; that would be 680 Wyr, still a factor of 1000-ish smaller.
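Redoing the arithmetic (20 W over 34 years is 680 Wyr, so the gap is roughly a factor of 1500):

```python
# Back-of-envelope energy comparison, using the assumptions above.
alphago_watt_years = 1_000_000 * 1  # ~1 MW for one year of training
brain_watt_years = 20 * 34          # 20 W over 34 years, all spent on Go

ratio = alphago_watt_years / brain_watt_years
print(brain_watt_years, round(ratio))
```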


Totally, I'm sure it's correct, and even if you were to bring the comparison in to line then the outcome is still "computer is watt hungry". The point is that the original statement, while correct, doesn't really say anything useful.


How would you know what amount of energy the human brain spent in learning to play Go, specifically? In the same time it was doing that, the human brain was also learning to do, and doing, a whole bunch of other things that AlphaGo was never even trained on- from moving limbs and controlling bodily functions to using language and recognising faces and so on. How would you isolate the amount of energy needed for training in Go, specifically?

I mean, in principle, if you had two numbers, "human energy consumption from learning Go" and "AlphaGo energy consumption from learning Go", you could compare them. But in practice there's no way to come up with such numbers, so what's the point of comparing apples and orangutans?


That's not really the point, more that it was originally not an apples to apples comparison and therefore doesn't really tell us anything. I have no doubt that the statement is correct, it's whether or not that statement has any meaning. As another comment pointed out, even if you compared the hours, the computer still uses a few orders of magnitude more energy for a more accurate (although completely theoretical) comparison.


AlphaGo took several months of self play on a large cluster, so training probably consumed many times more energy than a human in a lifetime.


The gain is in being able to clone the bot perfectly. Once trained you can make many of them. Also, if you look at what happened in Chess, the lessons learned from the large machines was absorbed and resulted in your smartphone now being able to outclass the human world champion.

You can expect a similar thing with Go at some point.


Same goes for the bot, then. A back of the envelope calculation suggest Lee's brain consumed as much energy in a 80 year lifetime as AlphaGo in half a day.


Not trying to say it isn't a correct statement, or that the outcome would be different if you lined everything up properly, only that the original statement doesn't really say anything meaningful.


Not surprising that a computer expends more energy to perform a task that we [previously] thought required human-like intelligence.

I'm sure any dollar store calculator spends way less energy performing long division than the average human.


For one, a normal human can do long division as fast as a calculator, and can handle numbers that will bork many calculators. (Edit: look at human calculators, and the era before calculators were commonplace. Even now, elders I know can eyeball numbers and calculate percentages / factorials and ratios.)

And for another, calculation != AI. Far from it, actually.


One, what normal human being can perform long division as fast as a calculator?

12/43523523452. Go.

Two, AI is applied statistics. What do you think AI is?


You could, with practice

>I'm sure any dollar store calculator spends way less energy performing long division than the average human

That's the comment.

A calculator is a one role device, with exactly specified rules.

Similarly, with training, you can too. You don't need to be special, other than being practiced, which is a fair requirement for a human being.

Here is a human being who could outperform it: https://en.wikipedia.org/wiki/Shakuntala_Devi

>In 1977, at Southern Methodist University, she gave the 23rd root of a 201-digit number in 50 seconds.[1][4] Her answer—546,372,891—was confirmed by calculations done at the US Bureau of Standards by the UNIVAC 1101 computer, for which a special program had to be written to perform such a large calculation.[10]

She could easily out-perform calculators because she never needed time to key in the commands (she needs to hear the problem to solve it).

If we exclude that restriction, assume the commands magically float into the calculator, and assume the problem is small enough to fit within the calculator's limits, then yes, when those arbitrary conditions are met the calculator can out-perform her brain.

Which is precisely the type of "spherical cow" thinking that's being decried in the article.

People can and regularly do out-perform calculators in speed, energy and complexity of computation.

Do note that calculators weren’t allowed as exam tools in a lot of countries till a decade or so ago. Students learnt mental math techniques which were known since ancient times (think Greece).

For a human brain the answer isn’t even calculation, it becomes pattern recognition. The square root of 25 is 5, which takes about the same neural load as it takes to recognize a letter.

The calculation you provided is harder, but that's a function of lack of training/practice, not complexity.

----

AI is not in the realm of what a calculator can pull off; that's what I meant by the compute part.

edit: I tried your computation on a store calculator; it's beyond its ability to calculate (it shows 0.0000000027)


Your example is from 1977; we've had 40 years of Moore's law since then. In the time it takes for you to recognise that you're even looking at a number (~0.08 seconds), the cheapest computer you can buy (the $5 Raspberry Pi Zero) can do around 1.92 billion normal floating-point maths operations. Sure, 201-digit numbers are a little slower — on my laptop, in Python, I can only do that particular calculation just under one hundred million times in the fifty seconds it took her to do it once.
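For concreteness, here's a sketch of that particular calculation done with exact integer arithmetic (binary search for the root, since floats don't have 201 digits of precision):

```python
def iroot(n, k):
    # Exact integer k-th root of n, by binary search.
    # The root has about bit_length(n)/k bits, so this hi bound suffices.
    lo, hi = 0, 1 << (n.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

n = 546372891 ** 23  # the 201-digit number from the anecdote
print(len(str(n)), iroot(n, 23))
```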

But you're right to say calculators are single-purpose devices, and that I'm magically inserting the question.

So I downloaded sklearn, which contains a set of labelled hand-written digits.

It takes about 0.17-0.2 seconds for my laptop to learn to read numbers, from scratch, and thereafter it can read digits at a rate of about 8,200 per second.

For reference, "a blink of an eye" is 0.1-0.4 seconds depending on who you ask.

Dollar store calculator? I'd never have said that myself because I know calculators are rubbish. But in the context of "AI: Will it ever beat humans?" hell yes, it will and in many cases it already does.
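That digit-reading experiment is roughly this (a sketch using sklearn's bundled digits dataset and an SVM classifier; the gamma value and split are just the usual example settings, not anything canonical):

```python
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# Learn to read handwritten digits "from scratch": 1797 labelled 8x8 images.
digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

clf = svm.SVC(gamma=0.001)
clf.fit(X_train, y_train)  # trains in a fraction of a second on a laptop

# Accuracy on the held-out half is well above 90%.
print(clf.score(X_test, y_test))
```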


There is the practice of anzan in China and Japan, which shifts the burden of calculation into the visual cortex by visualizing a mental abacus. Thus advanced practitioners can rapidly and correctly evaluate calculations like the one you have given. As you can see, Tsujikubo's speed in divisions appears to be limited by her ability to physically write out the answer.

https://youtu.be/-9T3P--ubQc?t=6m10s

Granted, these are people at the top of their game, so may not qualify as "normal" per your argument, but this is to illustrate that the limits of numerical calculation in "normal" people may be the algorithms we are taught rather than the computational capacity of our brains.


And it can only play Go... what a waste for such a big AI. AI can beat humans at some specialized tasks, but those AIs are designed by us, so they are wrong and bad, especially when you only need to unplug the battery for them to die. Too easy. Come on, AI, do something more than that...


And it can only play well on a 19x19 board, if I got it right.


> Superhuman AI is inevitable because there is nothing magical about human brains.

I'm of the opinion that that's an assumption being made. An article I read at one point supposed, by way of example, that our brains could rely on some quantum-mechanical effect we have not yet discovered that causes self-awareness. The point was not so much the mechanism, but that we can't yet rule out such a mechanism existing that our brain could have but our silicon currently doesn't — because we have no real idea what consciousness is, and we've not constructed an AI that proves no such mechanism exists. If consciousness is intelligence, this could matter more (but this could also be another assumption).

(Unfortunately, I cannot find the article.)

(The article also went on to note that we can copy computer programs, but not brains, and wondered if perhaps there is some relation to the previous problem. It was also advocating that we need a testable hypothesis around consciousness; to either prove or disprove the question "is it possible to clone a brain?" would be an extraordinary discovery either way. I found it interesting.)

> Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.

I'm just wishing for an automatic soap dispenser that can reliably sense my hand.


That's indeed an assumption. But so far there isn't much evidence pointing at the quantum solution, and there are theories of how self-awareness can arise that don't require magic. It's reasonable to take the less magical assumptions until presented with (appropriate amount of) evidence to the contrary.

Quantum-based self-awareness is a cop-out IMO, an open way to claim that it comes from God. Well, if that's true and it's beyond our reach, then we'll never build a self-aware AI ourselves. But assuming something is impossible doesn't help moving forward...


> I'm of the opinion that that's an assumption being made.

Not so much an assumption as a reasonable guess. Is it logically possible that there's something magical about human brains? Yes.

But there are countless examples of things we thought were magical historically, and, as far as I know, exactly zero that we've ended up concluding actually were magical, and a large number that we're pretty confident aren't at this point.

The smart money here is on "no, the human brain isn't magical."


I think there may be a problem with the word "magical" in this context, just like there is a problem with nature-worshipers talking of "supernatural powers".

If we've made a science out of it, no matter how advanced it is, it's not magic — e.g. if telekinesis was possible, it would have a scientific explanation and be a natural power — which bears some similarity to the way things are called "artificial intelligence" when they're impossible, only to be reduced to "trivial" when someone makes a machine do them: Chess, Go, having a large vocabulary or knowing many languages, being good at arithmetic or being able to do calculus at all, writing music or news stories, trading on the stock market — all things that used to be synonymous with genius, yet all done by machines we don't call intelligent.


>An article I read at one point supposed that our brains could have, as a way of example, a linking to quantum mechanics that we are not aware of due to not having yet discovered it that causes self-awareness.

That's not an article but a whole book:

https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind


Extremely close! It was actually posted on today's thread about consciousness[1], and what I was thinking of was [2].

[1]: https://news.ycombinator.com/item?id=14263842

[2]: http://www.scottaaronson.com/blog/?p=2756


Suppose there's some quantum effect in our brain that makes us smart (unlikely, IMO, for a variety of reasons). Why shouldn't we be able to understand this effect, and leverage it when we engineer our machines?


> neurons are enormous and filled with tons of useless junk that isn't used for computation

I just want to point out that some recent research indicates that this might not actually be true. Single neurons may be more complex than we thought since researchers managed to extract significant and complex functionality from single neurons (!). Not to mention glia seem to be important for cognition, and not just glue as we thought... So perhaps not as wasteful and kludgy as we might think :)


I think AI fear is a bit hyped up in the valley right now but the fundamentals are true.

You can think of intelligence as a couple of dimensions rather than just one dimension as the author is arguing. However I don't agree with his/her logic.

You can think of the brain as a massively parallel computer. Intelligence can be gauged by total cores, memory and CPU per core, power usage, and network bandwidth between cores. Sure, different configurations will give different strengths, but something optimized atom-by-atom for performance will be faster than a mushy brain.

Now imagine a billion of these little pods placed all over the planet, with amazing bandwidth between the computers akin to our internet.

Worst case: they think humans are the enemy and out-think us to extinction.

Best case: studying intelligence helps us figure out how brains work. This helps us figure out how DNA nano-manufacturing works. We figure out brain-to-brain and brain-to-AI networking. Now all humans are collectively a supercomputer, able to share knowledge and abilities. Maybe this is extended so animals are also included, and Earth becomes a Pandora-like planet where every living thing is connected and they all live in a harmonious, resource-optimized world where AI tries to maximize the happiness of every living being.


No.

The hype chamber in SV is on overdrive on AI at this point.

The Singularity AI concept has gone from the neat sci-fi idea it was to being treated as a serious possibility. This is absurd: the modern version of Malthus. A community of people who pride themselves on their reason should apply a basic sniff test.

The author has it correct - human brains themselves are significantly impressive on a weight/power/capability scale.

But forget all of the intelligence debate, consider the things that people magically ignore: emotions.

Take a look at how people who are depressed function. They measure as being more aware (better assessment of their surroundings, relations, and selves) than people who are not.

At the same time, they lack motivation, or desire - there is a lack of impulsive force to move forward.

Intelligence doesn't give human beings purpose. Emotion does.

This isn't feel-good psychobabble; it's a fundamental part of good mind/body housekeeping, which a huge chunk of SV follows every day to reach "peak performance".

How are you going to create a general purpose AI which has any motive force?

It's actually MORE LIKELY that if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.


> Intelligence doesn't give human beings purpose. Emotion does.

So would an explicit goal function, which is how we already give "purpose" to algorithms.

> How are you going to create a general purpose AI which has any motive force?

In any of countless other ways. What makes you think that emotions are necessary to create a powerful optimization process?

The "motive force" is mostly a solved problem for now; we can code it explicitly (the hard issue is figuring out the right goal and how to write it down). AI development is mostly about the way it works, not why it should do the work.
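A toy illustration of "motive force" as an explicit goal function (the objective here is arbitrary, just for the sketch): a hill-climber will pursue whatever goal it is handed, no emotions required.

```python
import random

def goal(x):
    # The agent's entire "purpose", written down explicitly:
    # maximize this function (peak at x = 3.0).
    return -(x - 3.0) ** 2

x = 0.0
for _ in range(10_000):
    candidate = x + random.uniform(-0.1, 0.1)
    if goal(candidate) > goal(x):  # relentlessly optimize the stated goal
        x = candidate

print(round(x, 2))  # ends up near the optimum, 3.0
```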

--

No offense meant to you personally, but I find that most comments about "overdriven AI-danger focus in SV" reveal that their authors don't have a fucking clue what the issue is about, and never spent time actually reading up the reasoning behind the AI X-risk potential.

I'll give an ultra-compressed super-TLDR of that reasoning for benefit of future conversations. It goes like this:

- intelligence is a super-strong optimization process; it doesn't necessarily have to look the way humans think (humans generally suck at reasoning, which is well established; see: cognitive biases, probability theory, decision theory, etc.)

- intelligence is an independent factor from values/goals, a mind can have any combination of the two - i.e. just because it's smart, it doesn't mean it will develop the same morality humans do, or any morality whatsoever; see https://wiki.lesswrong.com/wiki/Orthogonality_thesis

- combining the two, the danger of super-human AI is not something that's hostile to us - it's something that's indifferent about us, and is more powerful than us, the same way we don't give a second thought about e.g. ants


> They measure as being more aware (better assessment of their surroundings, relations, and selves) than people who are not.

That's interesting. Do you have a reference for that?



>It's actually MORE LIKELY that if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.

This is me if I were an AI.


The fundamentals are still missing.

1. Bootstrap. In order to really take off, AI needs to be able to build better AI.

2. AI needs to be adaptable. Any human Go player can play on a 19x20 board. AlphaGo (a program that beats top professionals 60:0 on a 19x19 board) could not even make a legal move.

3. AI should be resilient. Right now it runs on hardware made by humans and blindly relies on that hardware working perfectly all the time.


Would individual humans still have their own consciousness, or would all of humanity be one giant highly optimized organism with no individual needs, desires, dreams, or fears? The concept of a human could become irrelevant, and our whole existence with it.


This is exactly the kind of blind non scientific belief that the article is decrying.

There is nothing magical about anything. There are, however, a bunch of unknowns, and a lot of knowns that point in directions away from singularity babble.

This is exactly the point of the article - that we aren't going to a singularity, we are going towards a "Cambrian explosion"


> The human brain is only the very first intelligence to evolve.

That may not even be true, we're just not smart enough to make the call one way or the other. Dolphins come to mind, as well as several other creatures (Squid, Apes). The fact that we can't rule it out is in a way proof that maybe we're not as smart as we would like to think we are.


It is always the case that superhuman-AI proponents come up with the argument that "there is nothing magical about the human brain". Yes, everybody agrees with that, even people who oppose superhuman AI and are sane enough not to invoke mystical-powers arguments. The problems are:

1. Understand the brain and simulate it to get intelligence. Sounds simple, right? How about simulating a nuclear reaction, which we already understand, on my computer and generating nuclear power?

2. Once you understand something you can create it. This is a fallacy, a big one. We understand the solar system; let's create one. What's the problem?

3. Simulating a brain will give you a brain, with all the limitations and problems that we have with the human brain. Oh, then we can tweak it to make it more intelligent? Sorry, you can't do that without changing the whole model completely, in which case it will no longer be a "brain".


If you simulate a nuclear reaction you get... maybe a nice video of atoms smashing together? If you simulate a brain you get an intelligent computer program.

Same with the solar system. We can create simulated solar systems. We will probably be able to make simulated brains. What is your point?

Alright, assuming the first AI is even a simulated brain (I don't see why we can't invent intelligence from first principles instead), we could definitely tweak it. We know of drugs like amphetamines that enhance human ability. That's like Stone Age tech compared to what's possible if we had unlimited control over every neuron in the entire brain. Not to mention just adding more neurons and making a larger brain. This is just touching on what's possible; I could go on for ages on possible improvements.


My point regarding the simulation argument is that there is a difference between computational simulation and physical simulation.


If I simulate a map I get a map. Are minds like maps, or like nuclear reactions?


I have no idea what you are trying to say. A simulation is about "changes in the state of a system over time". Not sure how you can simulate a map?


I meant something like taking a physical map and representing it as bits in the memory of a computer. That's still a map; you can orient yourself with it. I'm just saying that there are things whose simulation is still a thing of the same kind. Perhaps minds are like that.

In everyday life, we treat other minds more like "maps" than like "nuclear reactions". We interact with nuclear reactions by getting radiation sickness and dying. We interact with people through more indirect channels. A simulation could be at the other end of the channel and we wouldn't know. Maybe he wouldn't know, either!

We don't get electrocuted when our interlocutor's neurons are firing, the effects on us are only through his external behaviour. It's the abstract informational flow what seems to matter.


What can a real brain do that a simulated brain can't?


Only one way to find out - Create a simulated brain and you will get the answer.


If you simulate a brain you get an intelligent computer program

That sounds like a really interesting experiment. Have you published your data?


"communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy"

A nitpick: neurons actually do use electricity to send signals, and (looking at the neuronal model and its history¹) the neuron actually evolved to send signals along as fast as it could. The chemical component, AFAIK, is the physical implementation for memory, so that's the storage class with regard to operation performance. Overall, though, I agree that our cerebral organ may not be the apogee of computational performance.

¹ As far as I remember, dinosaur neurons didn't have myelin, which for some large animals caused non-trivial sensory delay.


> The human brain is only the very first intelligence to evolve.

What other intelligence evolved, and in what sense human brain is first in evolution?


s/first/first smart enough to invent science/.

We've seen no other creatures on Earth being able to grow beyond using basic tools.

But this also somewhat implies (which is GP's point) that we're close to the dumbest possible mind that evolution could make that could still develop science. Or think of it as a crude MVP that ended up being a production system.


> s/first/first smart enough to invent science/.

To claim something as first, there must be others that followed it; if there is no second, there can be no first. Is there a second intelligence to have invented science, or are we the only ones?


Intelligence as an optimization process works orders of magnitude faster than biological evolution. We've dominated the optimization happening on this planet. If we were to stop doing technology and wait a million years, then maybe we'd see a second species develop comparable intelligence.


Maybe, maybe not. Your opinion is based on assumptions, just like mine and everybody else's. Or at least I assume so :-/

When somebody comes up with a reasonable, detailed model of how my memories are stored (just to name an example), I shall once again start paying close attention to the whole AI field.


The assumption is that a singular intelligence (the superhuman AI) will eclipse that of many intelligences (humans plus tools). Any given intelligence might be more capable than me, but it's constrained by all the other intelligences in the environment.


Overall I am sympathetic to the author's argument that fear of super AI is overblown. But I do take issue with some of his arguments.

> Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new.

I'm not a historian, but I have read that some scientific discoveries are made through pure logic. Einstein and relativity come to mind, as he was not an empiricist. So perhaps there is some hope that AI can lead to scientific discoveries without experimentation.

>So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?

The idea that humans could, just by chance, be pushing the limits of intelligence strikes me as silly.


Also, the limit of one agent's intelligence is kind of missing the point. Humans are powerful because of societies of people loosely working together. AI agents will be able to communicate and interact at speeds we cannot imagine. A network of near-human-IQ AI agents would be much more powerful than a single super-smart AI agent.


I think there is very compelling evidence (but not formal proof) of an exponential falloff in the ability of parallel networks of intelligent agents to achieve heightened overall intelligence.

Imagining that there will be near-human IQ AIs that think orders of magnitude more quickly than humans puts the cart before the horse, as well.


Would it not appear to be more intelligent simply based on the speed at which it makes decisions?

The actual intelligence may be the same (or a little higher) than average, but the effect of the higher speed would likely be a measurable difference.


Yes, that's what I'm saying. If you can make an AI that's roughly human IQ (even like 80 IQ), but thinks 100x faster than a human, then that's something very much like, if not identical to, "a superhuman AI."

So when you say, "Here's how we'll get superhuman AI: We'll network together bunches of 80 IQ AI's that think 100x faster than a human," it's kind of assuming its own solution.


The 100x faster scenario is not really clear to me. Computers already "think" orders of magnitude faster for some things and orders of magnitude slower for others. I am not sure this will be all that much more useful than the status quo. Only if they're modeling a human brain do I see "100x faster" being a useful thing to measure. Otherwise, I don't see speed increases being all that critical to making a superhuman AI.

I think networks of AI can become super powerful even without being nearly as smart as individual humans. AI will only need to approach certain aspects of human IQ; it can still be much dumber than us in most ways. Most of human progress has been built up from vague interactions and feedback loops via markets, religions, and nations.


You said that the AIs would be able to "communicate and interact" much faster than humans. If they can't think much faster than humans, then they can't communicate and interact (meaningfully) much faster than humans.


You can think much faster than you can express yourself. You can express yourself much faster than most people can understand you. You can only have a back and forth communication with a tiny number of people at once.


Why would you network them? Just add more ram.


Ask the guy who came up with this scenario. But, obviously, we do at some point find it easier to set up side-by-side computers with (relatively) high-latency connections (like, say, ethernet) than to try to put more computing resources together with low-latency connections (like motherboard busses), for physical reasons.


Reminds me of the story about the galaxy-spanning AI in the Dune series, with a latency of 10,000 light years between its various instances.


How did they physically get the machines so far apart in the first place? No wormholes??


They had FTL-capable ships. The AI used those to sync updates.


Same reason why culture matters more than individual IQ. Most of human progress happens because many people learn from each other, making millions of tiny improvements over time.


I don't think it will have heightened measurable intelligence. It will be dramatically more powerful, unpredictable and innovative.


will they?

Adding people to a late project makes it later.


"Adding people to a late project makes it later."

Not all projects are beholden to the rules of the mythical man month.

And even for those that are, the primary reason is basically the lack of a fast, efficient way to transfer built-up institutional knowledge, which may be far less of a factor for AI collaboration.

If I've been working on a project for the past year as the sole developer and I could copy a snapshot of my brain to a portion of your brain that you can access without interrupting me, adding you on as a collaborator is very unlikely to make the project later and very likely to make it go much faster as you could very quickly 'hit the ground running'.

A lot of the problem with the current limits of human intelligence is really bandwidth problems with getting information transferred in and out of our meat brains effectively. Our bandwidth for knowledge import/export is DREADFUL.

AI collaboration (even assuming that bandwidth limitation goes away or is vastly reduced) would still introduce the issue of increased communication requirements, but compared to the initial transfer of acquired knowledge that one isn't as big a deal in my experience and only becomes problematic at great scale.


But if the AI is fast enough, the project will be late much more quickly.

Perhaps more relevantly, I think there's a more basic questionable assumption - the idea that Church-Turing is the best computational basis for AI.

I strongly suspect this is not true - because even after all this time, our best abstract models of computation are still extremely brittle and fragile.

In a hand-wavey sense, as soon as you start allowing for arbitrary inputs and context awareness, you're no longer dealing with a simple Turing architecture but with something else entirely.


The TMs you hear about in your undergrad studies are not even good models for the webserver that served you this comment. Church-Turing is about computing functions over the natural numbers. You can (and usually do) have a Turing complete model of computation in which you can't program Tetris.


> I'm not a historian but I have read that some scientific discoveries are made through pure logic

What you are talking about is a scientific hypothesis/theory, which obviously needs to be tested using experiments to make sure it works. Without experiments there is not much you can do in the real world.


Einstein's work was based on a lot of evidence that seemed very strange. Light somehow had a fixed speed in all reference frames WTF?


Sure. Question is: how many other discoveries await today such that we, collectively, as a species, already have all the puzzle pieces, but haven't yet put them together? An AI as smart as your average grad student, but which somehow could retain in mind at the same time all of our accumulated scientific knowledge, might be able to quickly draw all sorts of seemingly brilliant conclusions. Now imagine its reasoning process works 6 to 7 orders of magnitude faster than ours, even if not qualitatively different in its logic or biases.

Dunno, I don't really believe we are that close to building that sort of AI, but it doesn't seem fundamentally impossible, and it does seem like it could do things that to us would look as "more intelligent" than us. It might in the end be better at creating scientific knowledge in the way current computers are better than us at solving arithmetic: faster and capable of holding more in memory, rather than following any qualitatively different process. But even that would be enough.


Except that a lot of our accumulated scientific knowledge is simply wrong due to experimental errors, fraud, selective publishing, and random false positives. See the whole reproducibility crisis.

It's conceivable that someone could construct a special-purpose AI which isn't superhuman but uses advanced ML and NLP algorithms to mine the scientific literature for previously unnoticed implications and relationships between seemingly unrelated articles. But I expect that would only give a suggested starting point for human researchers to formulate hypotheses and design experiments.


IMO, few major ones. We have billions of man years of research and things are better than before that period, but diminishing returns are real.

Don't get me wrong, I think it would be useful; it's just that the gap from human-level AI to 1,000x-human AI is simply not that huge. Let's say you moved someone from 2006 to 2016, or even from 1996. Yeah, sure, there is real progress, but not really. We have better drugs for AIDS, for example, but we are worse off in terms of antibiotics. Game graphics have improved, but Quake is from 1996, so we already had real 3D first-person shooters, and gameplay is not that different. Hell, FTP is arguably worse. Further, that's 20 years, so we are talking literally millions of man-years of effort and trillions of dollars worth of R&D for not much.

In terms of machines, the SR-71 is still the fastest manned aircraft, and there is no supersonic passenger aircraft anymore. The tallest building is much taller, but it lacks much space on the top floors, making it more monument than utility; the Sears Tower has more usable space and a smaller footprint.


Invention proceeds because of game-changing insights.

Maxwell's equations were a game changer. So were Newton's laws. So were Relativity and QM.

Church-Turing was also a game changer. But I don't think there's been anything equivalent in computing since.

There's been a lot of application, but no game-changing deep theoretical insights.

Quantum computing may - should? - eventually throw up something new.

It's going to have to. If it doesn't I think we're going to be stuck with much less progress than we're expecting.


> Church-Turing was also a game changer. But I don't think there's been anything equivalent in computing since.

Quantity when the difference is large enough becomes quality. The 9 orders of magnitude or so that computers have gone through in storage capacity and speed definitely count as a game-changer.


The insight that all problems in NP are basically the same was a game changer in complexity theory (it basically started the whole field).


It's a problem perfectly suited to a logical AI, really. Take in the experimental observation data, and produce a solution that fits the parameters.

Human intuition was the major thing getting in the way, and even Einstein's intuition got in the way of his acceptance of quantum mechanics.


There's Tangherlini relativity, which doesn't make these assumptions and is consistent with existing observations! The math is crazy hard.


I agree that the fear of "super AI" is overblown, vastly in my opinion. Discussion on this topic is tantamount to arguing over the attributes of imaginary entities.

Simple artificial life of some kind seems much more likely in the foreseeable future. I've often wondered why no one has written a virus or worm capable of mutation or even crossover (sexual reproduction of a sort). Under selection pressure it might evolve into something resembling life. The Tierra experiments were fascinating work in this area.
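The mutate/crossover/selection loop being described is easy to sketch. Here's a toy genetic algorithm in Python; the 16-bit "genome" and fitness function are invented purely for illustration (a real self-modifying worm would be a very different beast):

```python
import random

random.seed(0)

TARGET = [1] * 16  # hypothetical "perfectly adapted" genome

def fitness(genome):
    # more bits matching the environment's demands = better adapted
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # single-point "sexual" recombination
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# start with a random population, then apply selection pressure
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]  # only the fittest reproduce
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(20)]
    pop = survivors + children

print(fitness(max(pop, key=fitness)))  # best genome found
```

Under this kind of pressure the population climbs toward the target in a few dozen generations, which is the essence of what Tierra demonstrated with executable code instead of bitstrings.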


> The idea that humans could, just by chance, be pushing the limits of intelligence strikes me as silly

It's the wrong argument entirely. It has nothing to do with intelligence but everything to do with knowledge. Smarts are useless if you have no knowledge. And that's exactly where computers have the edge. Even a 'regular' human level intelligence with all the knowledge that currently exists at its beck and call would be a formidable adversary. Remember: Knowledge is power.


Without further experimentation the best one can say is that you have a theory which covers that which has already been observed. This is not a useful theory, however: useful theories make predictions about future observations. Really useful ones make correct predictions.

An AI can certainly produce a theory which matches observations, but no matter how powerful it is it won't be able to magically produce only correct useful theories any more than a human can.

We had over 200 years from Newton's laws of motion to produce relativity from pure logic alone, but we didn't manage it. We had to reason from the evidence, and even then we didn't accept relativity as "true" until it had been experimentally verified.


Relativity stems from the Michelson-Morley experiment.


The ideas of relativity predate Einstein - his particular view/model still has serious flaws and in the modern rendering leads to non-reality conclusions. In a very funny way, Einstein's two theories of relativity have become religious dogma. Try to discuss this and the usual conduct of "true-believers" is NOT to show where your thinking is wrong, but to simply call you an idiot.

This approach does nothing to convince anyone who believes differently that they are wrong, all it does do is convince them that the "true-believers" are themselves deluded.

This does nothing to advance the understanding of society of the universe around them.


> Try to discuss this and the usual conduct of "true-believers" is NOT to show where your thinking is wrong, but to simply call you an idiot.

They say "give me evidence", because so far they have an enormous mountain of evidence that SR and GR describe reality well.

That doesn't excuse people being jerks -- science should be welcoming, not alienating -- but it does mean "well, have you considered X" is almost always coming from a place of ignorance, not well informed skepticism. Some impatience with the cranks of the world has to be excused.


I mostly agree with you, but what are the 'non-reality' conclusions?


Gravitational singularities. If one puts thought into it, one will recognise the shell game played when describing how something that, as defined by the model, requires infinite time to form nevertheless forms in finite time in the real universe. One then recognises that any two such entities would take infinite time to merge in the real, finitely-timed universe.

I am also looking at a paper that purports to say that Einstein's GR, which is used to calculate the precession of Mercury, uses a couple of simplifications which, if not used, have GR calculating the precession of Mercury at twice (or thereabouts) what is measured. To test the veracity of the paper, I'll have to do some serious brushing up on my engineering mathematics.

Another example is related to the speed of light c. This value is inversely proportional to the square root of (the electric permittivity of the environment times the magnetic permeability of the environment). The normal assumption is that outside of the atmosphere of the earth, the environment is a perfect vacuum. It is assumed that the speed of light is therefore a constant in a perfect vacuum.
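For reference, the relationship being described is c = 1/sqrt(ε·μ). Plugging in the CODATA vacuum values recovers the familiar constant:

```python
import math

# CODATA vacuum values
eps0 = 8.8541878128e-12   # electric permittivity of free space, F/m
mu0  = 1.25663706212e-6   # magnetic permeability of free space, H/m

c = 1 / math.sqrt(eps0 * mu0)
print(f"{c:.3e}")  # ≈ 2.998e8 m/s
```

In a medium, ε and μ are larger than the vacuum values, so light there is slower; the commenter's question is whether "vacuum" values truly hold everywhere outside our atmosphere.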

For many areas, this simplifying assumption is adequate, but it may not be true; it is only assumed. It may turn out that, due to variations in both the permittivity and the permeability of extraterrestrial environments, this assumption is not just inadequate but grossly inadequate.

When doing any experimental work, one should keep in mind, and have written down, all the base assumptions; otherwise one may lead oneself down the garden path.


Modern cosmologists certainly don't assume that the heliosphere and interstellar medium are perfect vacuums.


That may be the case. However, they still assume c is a constant, in the same way they also assume that gravity is the prevalent force even in areas with intense plasma activity. The normal assumption (in all the papers I have seen to date from "conventional" cosmologists) is that the universe is electrically neutral and hence the only significant force is gravity.

This has led to the idea that there has to be additional unseen, unmeasured matter (aka dark matter) and unseen and unmeasured energy (aka dark energy) to explain the anomalous measurements of galactic movement that is not explained by the "gravity is the only significant force" model.

Now, it may be that there is such a beasty. But I have my serious doubts, because electromagnetic fields have more intense effects than gravity by some orders of magnitude (about 36 orders of magnitude).
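That "36 orders of magnitude" figure is easy to check: compare the Coulomb and gravitational forces between two protons (the separation r cancels, since both forces fall off as 1/r²):

```python
# fundamental constants (CODATA values)
k   = 8.9875517873681764e9   # Coulomb constant, N*m^2/C^2
G   = 6.67430e-11            # gravitational constant, N*m^2/kg^2
e   = 1.602176634e-19        # proton charge, C
m_p = 1.67262192369e-27      # proton mass, kg

# ratio of electrostatic repulsion to gravitational attraction
# between two protons; the r^2 terms cancel
ratio = (k * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")  # roughly 1.2e36
```

So for charged particles the electrostatic force really does dominate gravity by about 36 orders of magnitude; the open question is how much net charge separation actually exists at astronomical scales.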

In other words, some environments may be completely neutral and gravity is the major effect, in others, the environment may be very electrical and gravity is the least effect.

Our problem is that we can observe only at vast distances and are not able to take direct measurements to confirm our indirect modelling measurements.

Science is no different from any other aspect of people's lives: they will invest great amounts of energy in following their specific model of the world. When challenged, they mostly look at others as imbeciles who are incapable of seeing "the truth". I've seen this occurring in areas like string theory, black holes, dark matter, sub-atomic physics, climate science, immunology and vaccinations, medical surgical techniques, religion, politics, economics, sociology, finance and business, education, archaeology, evolution, etc., etc., etc.

Our models and theories are just that: models and theories. We hope and believe they give us insight into the universe around us, but they are not reality. Each has limits, and all use simplifying assumptions that are not accurate outside of the limits imposed by those assumptions.

If on the basis of our assumptions, we have to keep adding more and more "strange" entities to make our model fit the observations, then, maybe, we should go back and look at the underpinnings of our models and see if another model will work better (as in the kind of change that eventually occurred when going from the earth-centric epicyclic model to the sun-centred heliocentric model).

The fun is being able to learn more and more about how our universe operates and what we can do within it.


What are your issues with relativity you allude to, I wonder?


A tiger is dangerous even if it's not self-aware.


A mathematical theory (à la Einstein) without experimental confirmation is just a theory. Or perhaps economics.

And the idea that all forms of computation (Turing machines, lambda calculus, etc.) are equivalent also seems pretty darn silly at first glance, too.


Better quality than most such posts, but still seems to be missing the point. The remarkable thing about Bostrom's book is how well it anticipated the objections and responded to them, yet no one seems to bother refuting his analysis, they just repeat the same objections. I actually agree with a decent bit of what he says on these points, though his application of these observations is kinda baffling. He makes a lot of misguided claims and implications about what proponents believe. I'll sloppily summarize some objections to his points.

1. This doesn't really bother making an argument against superhuman intelligence. Yes, of course intelligence has many components (depending on how you measure it), but that's not an argument against superhuman intelligence. I'm reminded of the joke paper claiming machines can never surpass human largeness, because what does largeness even mean? Why it could mean height or weight, a combination of features, or even something more abstract, so how can you possibly say a machine is larger than a human?

2. Mainly arguing about the definition of 'general' without even trying to consider the actual usage by Bostrom et al. (this was in the introduction or first chapter, if I recall correctly). I agree that the different modes of thought that AI will likely make possible will probably be very useful and powerful, but that's an argument for superhuman AI.

3. Well he makes his first real claim, and it's a strong one: "the only way to get a very human-like thought process is to run the computation on very human-like wet tissue." He doesn't really explore this, or address the interesting technical questions about limits of computational strata, algorithm efficiency, human biological limitation, etc.

4. Few if any think intelligence is likely to be unbounded. Why are these arguments always "x is not infinite, therefore x is already at the maximum"? He also seems to be creating counterexamples to himself here.

5. Lots of strong, completely unbacked claims about impossibilities here. Some number of these may be true, but I doubt we have already extracted anything near the maximum possible inference about the physical world from the available data, which is basically what his claims boil down to.


I haven't read Bostrom's book. I don't think I would enjoy it. Maybe I need to grudgingly read it to be able to respond to what Bostromites say.

Here's the thing. If Bostrom's claims about AI are so strong, why does everyone who's referring to his book as their source of beliefs about the future spout non-sequiturs about AI?

Here's an example. 80000 Hours has a mission that I generally agree with, to find the most important problems in the world and how people can most effectively work on them. But somehow -- unlike cooler-headed organizations like GiveWell -- they've decided that one of the biggest problems, bigger than malaria, bigger than global warming, is "AI risk" (by which they mean the threat of superhuman AGI, not the real but lesser threat that existing AI could make bad judgments). [1]

To illustrate this, they refer to what the wise Professor Bostrom has to say, and then show a video of a current AI playing Space Invaders. "At a super-human level", they say pointedly.

What the hell does Space Invaders have to do with artificial general intelligence?

For that matter, what the hell does deep learning have to do with AGI? It's the current new algorithmic technique, but why does it tell us any more about AGI than the Fourier Transform or the singular value decomposition? I would say this is a bias toward wanting to believe in AGI, and looking for what exists in the present as evidence of it, despite the lack of any actual connection.

Has 80000 Hours been bamboozled into thinking that playing Space Invaders represents intelligence, or are they doing the bamboozling? And if Bostrom is such a great thought leader, why isn't he saying "guys, stop turning my ideas into nonsense"?

[1] https://80000hours.org/career-guide/world-problems/#artifici...


Bostrom is in no way in charge of people who happen to agree with him wrt AI risk. For the book he mostly collected and organized a lot of existing thought on AI risk (not that he hasn't made his own novel contributions). That's very valuable, largely because it makes for a good reference point to contextualize discussion on the topic. Unfortunately the critics don't seem to have read it, because (in my experience) they repeat the same objections without reference to the existing responses to those objections.

People do sometimes overblow AlphaGo or DQN playing Atari, but it's not meaningless. These systems (and other deep-learning-based systems) can truly learn from scratch on a decent variety of environments. One of the most important unknowns is exactly how difficult various cognitive tasks will prove to be for a machine. Each task accomplished is another data point.


I wouldn't say that DeepMind learns Atari games "from scratch" any more than Deep Blue learned chess from scratch. It learns to play Atari games because it's a machine designed to learn to play Atari games.


I strongly disagree. You don't seem to be aware of the difference in approach between Deep Blue and DeepMind.

Deep Blue was hand-led directly and specifically to solve the problem of chess: It was provided with a library of opening moves, some sophisticated tactical algorithms relevant to the problem of chess, a library of strategies for chess, and so on. Many actual human masters of chess were consulted, directly or indirectly, to help with developing Deep Blue's approach to the problem.

DeepMind, on the other hand, was created as a "blank slate" with no more hard-wired instruction than "create optimal algorithms to achieve the winning state, given the inputs." Critically, its learning phase is completely self-directed. Essentially, the box is given access to the controls and the video screen content and then sent on its way.

It's instructive to note that this is pretty much exactly how, very generally speaking, evolution and intelligence solve the problem of survival: every organism has controls and a glimpse of "game state" and has to learn (collectively as a species, individually as an organism) to play the game successfully.
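The setup being described (controls, a glimpse of game state, and a score, with strategy discovered by trial and error) is reinforcement learning. A toy illustration is tabular Q-learning on a made-up five-state "game" — nothing like the scale of DeepMind's actual DQN, but the same no-strategy-built-in principle:

```python
import random

random.seed(1)

# A toy 1-D "game": states 0..4, start at 0, reward only on reaching 4.
# The agent sees only the state index, two controls (0 = left,
# 1 = right), and the score. No winning strategy is hard-wired.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # learned action values

alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # explore occasionally, otherwise act greedily
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning update
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# after training, "right" should dominate in every state
print([0 if q[0] > q[1] else 1 for q in Q[:GOAL]])  # should be [1, 1, 1, 1]
```

The agent discovers "always go right" purely from the score signal, which is the (vastly simplified) sense in which the Atari system is a blank slate.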


> DeepMind, on the other hand, was created as a "blank slate" with no more hard-wired instruction than "create optimal algorithms to achieve the winning state, given the inputs." Critically, its learning phase is completely self-directed. Essentially, the box is given access to the controls and the video screen content and then sent on its way.

Have you seen DeepMind's algorithm, to be able to say this? Are there people outside of Google who have seen the algorithm and can confirm Google's press release?


AlphaGo uses TensorFlow.

DeepMind announced they were switching over to use TensorFlow not long ago.

TensorFlow is fully open-source, you can go and read the algorithms on Github. Neural networks aren't particularly hard to understand.

The post you are responding to paraphrases the situation, and neglects to mention goal states/conditions, but is otherwise a fair summary of how neural networks work.

Neural networks, particularly the 'deep' ones, do seem to operate almost like magic. Disbelief is to be expected, but I suggest you take a proper look at them yourself and see what they can actually do.

If you can grok Python, I highly recommend playing with the Docker image of TensorFlow, which provides a Jupyter notebook (Python in a web interface) where you can easily make use of existing pre-trained models, extend them, and/or build your own models — suitable for application to all kinds of problems / problem domains.

Siraj Raval's videos also cover neural networks in almost every aspect - from coding your own NN from scratch for digit recognition (using just NumPy), through explaining what TensorFlow is, how it works, and how to use it — along with a lot of other machine-learning algorithms, libraries and cloud services.

Many of these topics are just a 5-10 minute video (some with longer live-coding 'deep dive' follow-ups), and many of the code examples are fully functional but just a screen or two of code. https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A

There's also some great new videos on AI / machine learning by Google and Facebook on Youtube that are well worth a watch if the topic interests you.


Maybe you have some legitimate concern about Google's claims, as per their press release and my comment. Who knows, maybe they have some reason to lie about what they did!

But then I wonder why you aren't asking the same question of my parent poster. Has he viewed the DeepMind code? Is he qualified to tell us it works the same as chess code? Having made that claim backed by even less evidence than mine, I'd say his burden of proof is somewhat greater.


I think there's a heavy dose of press release to what Google is saying. Most people wouldn't call PR puff "lying", but only because standards are low.

I don't think Google has fundamentally different deep-learning technology than everyone else. In fact, TensorFlow indicates that they have the same kind of deep-learning technology as everyone else and they just want to do it more cleanly.

Deep learning is parameter optimization. There are more parameters now, and they optimize more things, but don't get caught up in wild visions of machines designing themselves. Would you consider the bzip2 algorithm to be "self-directed learning"? What's the difference, besides the number of parameters?

The PR people, when they say "blank slate", are discounting all the programming that went into the system because it sounds more impressive that way. This is unfortunate. It has happened in AI for decades. To be a responsible consumer of AI press releases, you need to understand this.
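To make the "parameter optimization" point concrete, here is the entire idea reduced to a single parameter: a sketch fitting y = 3x by gradient descent on squared error. Deep learning is, at bottom, this same loop with millions of parameters and a far fancier function:

```python
# Toy data generated with the "true" parameter w = 3 (no noise).
data = [(x, 3.0 * x) for x in range(1, 6)]

w = 0.0    # the single "parameter" being optimized
lr = 0.01  # learning rate

for step in range(200):
    # gradient of mean squared error d/dw of (w*x - y)^2
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill

print(round(w, 3))  # converges to 3.0
```

Nothing here "designs itself"; humans chose the model, the loss, the learning rate, and the data. Scaling the parameter count up changes what can be fit, not the nature of the procedure.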


> _I don't think Google has fundamentally different deep-learning technology than everyone else._

That's true, and I never claimed otherwise, but that doesn't help you argue your point - in fact, you just proved yourself wrong. From IBM's press release:

> _Does Deep Blue use artificial intelligence? The short answer is "no." Earlier computer designs that tried to mimic human thinking weren't very good at it. No formula exists for intuition. So Deep Blue's designers have gone "back to the future." Deep Blue relies more on computational power and a simpler search and evaluation function._

I'll summarize for you: Deep Blue and DeepMind, similar names notwithstanding, work in very different ways.


What comparison are you even making here? I know that Deep Blue and Deep Mind are different. There is 25 years (edit: sorry, 20 years) between them! Deep Blue is not deep learning. Did the word "deep", used in two unrelated ways, confuse you?

What I am saying is that I know how deep learning works, actual deep learning of the present, and it does not involve "programming itself".

You are trying to tell me that it must be programming itself, because a press release said so, and press releases would never lie or exaggerate. Based on the current state of AI, this is very improbable. You should focus less on trying to "prove" things with press releases.

I made the comparison to Deep Blue because there is little mystique around it now, and because IBM was even reasonably responsible about avoiding AI hype in their press at the time.


The Atari AI's learning phase is a specifically designed deep neural network. The network did not design itself. It was designed by human programmers.

There are probably numerous variants of it that did not learn to play Atari games, and therefore were not announced.


The human brain didn't design itself either - it's the product of millions of years of evolution via natural selection. But this fact is irrelevant to the topic of whether the approach of DeepMind is fundamentally different from the approach of Deep Blue.

To help you appreciate the difference, try answering this question: Were expert players of Pong, Space Invaders, or any other video games consulted to contribute strategies for winning those particular games? Was a redesign required to play a new, different game?

If not, you'll need to tell me where the knowledge of how to win at those games came from. I hope you'll appreciate that the ability to autonomously obtain that knowledge by trial and error rather than design and programming constitutes the fundamental difference.


Any actual chess knowledge in Deep Blue was an optimization. The fundamental algorithm is alpha-beta search, which can learn to play many, many games of strategy. Not Go, as we know now, but the exceptions are few. This is an algorithm that is very adaptable and very good at learning to play games based on a heuristic for whether you are winning.
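For readers who haven't seen it: alpha-beta is minimax search that skips branches the opponent would never permit. A minimal sketch on a hand-made toy tree (not Deep Blue's actual code, which layered enormous chess-specific machinery on top of this core):

```python
def alphabeta(node, alpha, beta, maximizing):
    # leaves are numbers: a heuristic score for "how much am I winning?"
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # the minimizing opponent would never
                break           # allow this line -- prune the branch
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# a tiny hand-made game tree: nested lists are move choices, numbers are scores
tree = [[3, 5], [2, [9, 1]], [0, -1]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # → 3
```

The algorithm is game-agnostic: swap in a different move generator and evaluation heuristic and the same search plays a different game, which is the adaptability point being made above.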

The DeepMind implementation learns how to play Atari games. You could even say that it's learning how to play one game, where your opponent chooses the game cartridge and your goal is to increment the score counter.

You would absolutely need to redesign DeepMind to play Super Mario Bros.

When you give DeepMind credit for designing itself, you are glossing over the fact that it was programmed by people, most likely for long hours.


Does life have a score? A winning state? Exhaustively defined inputs?


I mean, the way you put it makes it sound like you think it very obviously doesn't. But are you willing to grant that maybe there's some very complicated optimization function for life satisfaction/personal fulfillment/eudaimonia/whatever the hell you want to call it? It doesn't have to be practically computable to be interesting; you merely have to be able to do your best to approximate it.

If you deny the existence of any such thing, isn't that tantamount to saying that given the choice between two courses of action, there's never a reason to pick one over the other?

I mean, I feel like as humans, we're trying to follow fuzzy approximations of that function all our lives, whenever we're earning money, or spending time with people we love, or whatever, we're doing it because we think it will bring us happiness or life satisfaction or something.


Sometimes there isn't a correct choice... both could be good, both could be bad. The Road Not Taken: http://www.bartleby.com/119/1.html


Absolutely! The goal of the game is to reproduce. To be more specific, it's reproduction at the individual level and survival at the species level.

The genome is a kind of algorithm for hardware to implement a strategy to win at this game in a highly competitive environment.


> I'm reminded of the joke paper claiming machines can never surpass human largeness, because what does largeness even mean?

Link: https://arxiv.org/pdf/1703.10987.pdf


The article is wrong, in my opinion.

Regarding point #1, while not formally wrong, world computing capability is growing at an exponential rate. Not even the end of Moore's law will stop that, e.g. 3D transistor stacking, strong semiconductor demand from the consumer and industrial markets, etc. Also, the author doesn't know whether there is already CPU capacity for matching human intelligence: maybe the key missing piece is not the hardware but the software (efficient algorithms for "human" intelligence running on silicon).

Point #2 is clearly wrong. Demonstration: I, for one, if still alive and given the chance, will try to implement general-purpose intelligence "like our own". And, come on, I know no hacker able to resist that.

Again, point #3 is wrong, unless you believe we're smart because of a religious "soul".

Point #4 is a void argument: the Universe itself is finite.

Point #5 is right: a superintelligence may, or may not, care at all about our problems, in the same way that you have no guarantee of a human government caring about you (e.g. a totalitarian regime).


Not a particularly well written article, but he has a few good ideas. Here's a couple of important paragraphs:

"I asked a lot of AI experts for evidence that intelligence performance is on an exponential gain, but all agreed we don’t have metrics for intelligence, and besides, it wasn’t working that way. When I asked Ray Kurzweil, the exponential wizard himself, where the evidence for exponential AI was, he wrote to me that AI does not increase explosively but rather by levels. He said: “It takes an exponential improvement both in computation and algorithmic complexity to add each additional level to the hierarchy…. So we can expect to add levels linearly because it requires exponentially more complexity to add each additional layer, and we are indeed making exponential progress in our ability to do this. We are not that many levels away from being comparable to what the neocortex can do, so my 2029 date continues to look comfortable to me.”

"What Ray seems to be saying is that it is not that the power of artificial intelligence is exploding exponentially, but that the effort to produce it is exploding exponentially, while the output is merely raising a level at a time. This is almost the opposite of the assumption that intelligence is exploding. This could change at some time in the future, but artificial intelligence is clearly not increasing exponentially now."

The last bit about requiring experiments in real time is also interesting.


Interesting article from an opinion point of view but I find very little real substance behind his arguments.

He is fighting the original myth with his own myth, except that his myth is founded upon his own assumptions and intuitions as opposed to someone else's.

It seems more likely that we simply don't know the answer to many of these questions yet, because we still have major disagreements around exactly what intelligence is. To paraphrase Richard Feynman's famous quote: if we can't yet build it, then we don't understand it.


Maciej Ceglowski's takedown of superintelligence is a much better articulation of these arguments, and more (and it's funny): http://idlewords.com/talks/superintelligence.htm


And a tongue-in-cheek response from Scott Alexander: http://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-ri...


I didn't read the whole article... of what I did read, I didn't find it convincing. Few things:

AI doesn't need to exceed humans in every dimension to become a threat. Just sufficient dimensions.

Humanity is basically a bacteria colony in a petri dish with I/O. Disrupt infrastructure, and you disrupt input, leading to changes in the size of the colony. And mind you, much of our infrastructure resides in the cloud.

Of course, it will be a while before this even becomes an issue, but this is basically how a machine would frame the problem.

Implementation-wise, AI doesn't need to be general. In its most inelegant (and not too distant) design, ML can be configured as a fractal of specific algorithms, with one at the top tasked with designating goals and tasks, and subordinates spawning off generations and evaluating performance.

Andy Grove had a good saying: "anything that can be done will be done".

Autonomous AI, if it does not break the laws of physics, will exist. Its development will be spurred by our curiosity or by profit.


One of the big issues with people who talk about controlling superhuman intelligence is that any talk of controlling it is fantasy. We cannot control actual human intelligence for good. What makes us think we could control superhuman intelligence?


If suddenly we had a black box containing a superhuman intelligence and no details about how it worked, then absolutely, we could not control it. For human minds we have something similar, but the box isn't totally black; we've done some neuroscience and psychology to figure out the heuristics and biases it tends to use, how it's easily exploited, etc. And then we have neural networks which duplicate the functioning of some subsystems, and of course our own subjective experience which provides at least a little evidence. It's not enough, but it means that, for example, we can write effective propaganda and advertising.

If we didn't just have the results of scattered tests, but had an exhaustive set of docs written by the creator of the black box, it still wouldn't be easy. But we'd have a chance. This is why one of the main strands of AI value alignment research focuses on building an AI that we can understand. If we can build something we can understand, that gives us leverage to alter it to value our interests.

(What "our interests" are, in a well-specified technical sense, is a whole 'nother problem, and one that there's very little progress on.)


I don't see how you can call an AI created by humans, that humans can understand, "super-human". By definition, a super-human AI would be able to do stuff we could not understand.


tldr; AI could be on a higher plane of thought, but I'm of the camp that they could come up with new formal systems to explain their advances.

Well, there's the old saying, "If you can't explain it to a six year old, you don't understand it yourself." I think if there were a superhuman intelligence, it would likely understand where our limitations as humans are and be able to break down the components of its discoveries into the simplest terms, and be able to teach us, even if at an incredibly slower pace than it can process.

This reminds me of Gödel's proof that our current formal systems of mathematics cannot prove everything; it may even have gone as far as saying every formal system will always leave some things it cannot prove. Obviously a robot would have to use some formal system to come to its conclusions, so if it's really so smart, can it break down the system it used so that we can understand its basic building blocks? Of course, there's always the rate-of-computation and memory problem of humans in the way.

Of course if you're saying that the super human intelligence would be on an entirely different plane of thought impossible for us to understand then that's understandable, but probably less believable.

This line of thinking reminds me of the book Flatland by Edwin Abbott. Flatland is the story of a two-dimensional square who lives on a two-dimensional plane and only knows his world through his limited perception of it. One day he is visited by a three-dimensional sphere who explains his world in a way imperceivable to him. The sphere somehow magically takes him off his plane of existence and shows him his world from his (the sphere's) view. He then takes him on a tour of a host of worlds whose inhabitants perceive their worlds in different dimensions: he goes from Pointland to Lineland, then to three-dimensional space, and finally back to his home Flatland, where he is eventually locked in a mental institution for telling people of his adventures. Anyway, it's an interesting fantasy story; I recommend it.

Hate to go all metaphysical on you all, but basically the story goes to show that we only know things from our limited perspective of what is around us; if there are any higher "planes" of perception, it's entirely possible that we wouldn't know about them. Some things can only be known with certain perceptions/experiences/knowledge.

It may sound ludicrous, but I would even say Christianity backs this idea to some degree, attributing to God a higher plane of thought, as in Isaiah 55:8-9.

Of course I would never put a robot on that level, but I could see some things being imperceptible to the human mind, so the principle is similar. Can robots achieve a higher plane to any degree... beats me. Honestly, I suspect it wouldn't be so high that they couldn't explain their thoughts to us.


Great points. I do not disagree that a superhuman AI could explain things to humans, just as I could explain things to a six-year-old. However, a group of six-year-olds would be hard pressed to constrain actions you really wanted to take. In addition, first-grade teachers are experts at manipulating six-year-olds, and a superhuman AI would likewise be very good at manipulating humans and working around any constraints the humans tried to impose on it.

With superhuman AI, we would be in much the same situation as the great apes: their survival depends far more on what we do than on what they do. Just as the great apes cannot constrain our actions, we would not be able to constrain the superhuman AI's actions.

On a darker note, as Homo sapiens wiped out all the other hominids, there is a good chance that super human AI would try to wipe out humanity, as we are probably the greatest threat to their independence and well being.


Definitely don't disagree with you there. These are all plausible. I think a lot of an AI's choices concerning us would come down to whether it actually cared for us at all.



Oh boy. Much respect for Kevin Kelly, but I am afraid he missed the mark with his analysis.

Unfortunately he gets hung up on the definition of Intelligence - and not unreasonably so - because it is very ill defined and largely unknown. So all of what he says is true, but orthogonal to the argument he is trying to debunk.

It's basically setting up a pedantic straw man and then taking it apart.

There are other great and more compelling arguments against an all powerful superhuman AGI, unfortunately he doesn't make any of those.


Ugh, not another AI article by a Wired editor. I skimmed it and saw only strawmen and non-sequiturs.

These issues are mind-bending topics that stretch the imaginations of the most brilliant people I am aware of. It takes them a lifetime to build good intuitions and analogies. I wish that writers of this caliber felt as qualified to write one sentence about it as they actually are.


This person doesn't understand the concept of super AI. Of course intelligence is not one-dimensional. But the current limit in pretty much all of those dimensions is physical: it's the largest number of neurons and connections we can fit in the smallest space that can pass through the pelvis while still feeding enough energy to the brain.

You can imagine this as a bunch of people who speak with each other. The faster they can communicate ideas with each other, the more potentially intelligent the group can be. Machines can surpass the speed of this collective intelligence by orders of magnitude, even if everything else is exactly as in a human. This is exactly the reason we evolved to devote so many brain resources to language.


No, the current limit is not physical, it's that nobody has any idea how general intelligence works.

You do not, in fact, get general intelligence by accident by throwing a lot of connections in one place, just like you do not get a human just by throwing a lot of organic molecules and water in one place.


As the other commenter noted, "letting a bunch of molecules sit around" was precisely where we got intelligence from in the first place.

So it is possible that we reach AI just by randomly permuting connections and weights. Of course it's more likely we intelligently set (or "evolve") these connections and weights, but this allows us to set an upper bound on computation/time needed.


> As the other commenter noted, "letting a bunch of molecules sit around" was precisely where we got intelligence from in the first place.

I don't think this was the case. Yes, there is evolution, but it is not random. Actually, most of the molecules sitting around did not evolve into intelligence. In the case of our evolution we had natural selection. In the case of AI we have artificial selection (selection made by humans), and even if we consider ourselves smart enough to do this, we cannot prove that we are able to make it happen (choosing the correct AIs/algorithms to survive) until it happens. Maybe I cannot express this clearly enough, but the advantage natural evolution has over artificial evolution is the huge number of "experiments" - meaning it had enough time to make a lot of small changes until something worked.


> Maybe I cannot express this clearly enough, but the advantage natural evolution has over artificial evolution is the huge number of "experiments" - meaning it had enough time to make a lot of small changes until something worked.

I think you have that backwards: natural evolution is absurdly slow because it takes a very long time to cycle through generations of animals, whereas genetic programming on a computer can evolve algorithms through billions of generations a day, because computers are much faster at cycling through possibilities.

> meaning it had enough time to make a lot of small changes until something worked.

Computers can do it faster.
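As a concrete toy illustration of that speed (the OneMax task, population size, and mutation rate here are all arbitrary choices, not anything from a real AI system), a simple evolutionary loop runs through hundreds of generations in a fraction of a second on commodity hardware:

```python
import random

random.seed(0)

# OneMax: evolve a bitstring toward all ones. Fitness = number of 1s.
GENES, POP = 32, 20

def fitness(ind):
    return sum(ind)

def mutate(ind, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
generation = 0
while generation < 5000 and max(map(fitness, pop)) < GENES:
    generation += 1
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]  # truncation selection keeps the fittest half
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)
print(generation, fitness(best))
```

Each pass of the loop is one "generation"; a biological population would need years for what this loop does in microseconds, which is the asymmetry the comment is pointing at.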


Yes, now you've got me thinking more about my concept of the difference between evolution by natural selection and evolution by artificial selection.

And I agree with you that AI can be much faster.

I still think artificial selection can be influenced by us - humans - so we might introduce flaws into the system from the beginning. Of course AI could maybe learn to identify them. But maybe not. Like in the case where, looking from inside a system, one cannot see how the system really is, so one cannot fix it.

Of course, what I say are just some hypotheses, nothing proven, and I think they cannot yet be falsified.


The probability is pretty low.


> just like you do not get a human just by throwing a lot of organic molecules and water in one place.

Well, not unless you have a couple billion years to sit around.


It's worse than that! It's a couple of billion years, and a planet's worth of organic molecules and water.

And there's at least some evidence that it's even worse than that -- intelligence does not appear to be common in the universe. So it might well be a couple of billion years * a million planets' worth of organic molecules and water (or a hundred million planets. Or a billion planets).


>You do not, in fact, get general intelligence by accident by throwing a lot of connections in one place, just like you do not get a human just by throwing a lot of organic molecules and water in one place.

Don't tell that to the deep learning cult! They'll have to think about how it all works!


I think deep learning is both useful and overhyped. If we got in the habit of calling it something like "large-scale parameter fitting by gradient descent", we could talk about it more sensibly.

I have not met any actual deep learning researchers who think they are creating AGI. I think that it's non-experts who write about deep learning research who think that deep learning is creating AGI.


Well, I dunno. Yann Lecun and Yoshua Bengio have both given presentations on moving from existing perception tasks towards human-level AI, focusing on how to do so with neural nets whose parameters are fitted via gradient descent.

And I'm not even sure I would call it "parameter fitting". I'd call it, "continuous circuit searching via gradient descent", because that's all you're doing: searching for a nice analog circuit for your task within a large, continuous space of such circuits.


As nature itself has shown, it's not necessary to understand it to create it. That's part of what makes it scary: It will probably happen and we can't predict the outcome.


>As nature itself has shown, it's not necessary to understand it to create it.

Unless intelligence itself conforms to natural principles, unless it "cuts with the grain" of nature's preexisting laws.

Which, frankly, it does. Information theory and thermodynamics are a thing.


That describes literally everything.


Increasing speed only gets you so far. With internal combustion engines, the performance gains of the past are greater than all possible future gains. So, sure, you might see faster gains up to some limit. But don't expect a weather forecast to be accurate a full month out, etc.

Further, to get better you need to optimize for something. You might get an AI that's really good at writing fiction, but how do you measure that and write one that's even better at writing fiction?


Sure, putting more resources into weather forecasting seems likely to have diminishing returns. But consider learning itself. Educating a human takes many years, and we can't easily transfer knowledge between humans. There's a lot of room for improvement. Transferring knowledge between machines could be much more efficient.

The thing about machinery is that it scales. If you have one AI that's really good at writing fiction, you can replicate it and generate all the fiction you like. Not true of human authors.

I do expect there will be other bottlenecks after think time is solved, but even without intelligence much beyond human, the ability to scale in other directions is pretty scary.


> Educating a human takes many years, and we can't easily transfer knowledge between humans. There's a lot of room for improvement. Transferring knowledge between machines could be much more efficient.

You're assuming that it's possible to do efficient knowledge transfer without losing general intelligence.

Maybe in order to have general intelligence about something, you need to learn it from scratch, over time, rather than just "copying" the information.

Another problem is that you're assuming you can build a powerful enough computer and still retain the ability to program it. That is, that you have easily programmable memory cells within the computer that can change its behavior. But to be easily programmable, memory cells need space and infrastructure to access them, and they will interfere with critical timing paths. That's why a special-purpose ASIC built for a task will always be faster than a CPU or GPU.

Maybe all the things we consider useless for intelligence, like playing, sleeping, exercising etc., is actually necessary. We've certainly started to see that those who focus too much on studying by cramming don't necessarily become smarter.

You can put me in the "skeptics" camp when it comes to superhuman intelligence. It may be possible, and it's fun to discuss, but it seems to me that the ones who fuss about it are making an incredible number of assumptions.


Okay, but CPUs, GPUs, and ASICs can all be manufactured. And I don't see anyone building a computer whose software can't be copied unless it's for artificial reasons like DRM.

So it seems like the question is whether computers as we know them can do general intelligence at all? If they can, it will be easy to clone.

If they can't, then it would imply that general intelligence is something more than information processing as we know it; perhaps some kind of dualism?


How do you take a subset of knowledge from neural net A, such as "cats have fur", and merge it with neural net B?

It's not a software or hardware problem; it's a data problem: it's not obvious what part of neural net A encodes "fur" and what part encodes "cats", or how you map that to B's encoding of fur and cats while connecting them.

Now, AI is not necessarily going to be neural nets, but it's also not necessarily understandable what all the little bits do, just that they work.


That seems like a problem with software composition? If you just want to clone a neural network, you can do that without knowing how it works. In git terms, we have fork but not merge.
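The fork/merge asymmetry is easy to demonstrate with a toy weight store. The "network" below is just nested lists of numbers, and the averaging "merge" is a deliberately naive stand-in, not a real merge algorithm; the point is that cloning needs no understanding of the weights, while merging would:

```python
import copy
import random

random.seed(1)

# A toy "network": weights of a tiny two-layer perceptron, as plain lists.
net_a = {
    "w1": [[random.gauss(0, 1) for _ in range(3)] for _ in range(2)],
    "w2": [random.gauss(0, 1) for _ in range(3)],
}

# "Fork" is trivial: a deep copy behaves identically to the original,
# and we needed zero insight into what any weight means.
net_b = copy.deepcopy(net_a)
assert net_b == net_a and net_b is not net_a

# Suppose net_b then learns something new (simulated here by shifting weights).
net_b["w2"] = [w + 0.5 for w in net_b["w2"]]

# "Merge" is the hard part: naive weight averaging is easy to write, but it
# rarely combines what two networks know, because the same concept can be
# encoded in entirely different weights in each network.
merged_w2 = [(a + b) / 2 for a, b in zip(net_a["w2"], net_b["w2"])]
print(merged_w2)
```

In git terms this really is fork without merge: the copy is exact and cheap, while any principled merge would require knowing which weights encode which knowledge.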


Increasing speed won't be the thing that makes them intelligent. But it will be the thing that makes human-level AI short-lived after it's achieved, progressing shortly to another level.


This article is just a series of strawman arguments. It sets out misconceptions that proponents of strong AI mostly don't believe, then argues against them.

I'll be honest, I didn't read the arguments in detail (since they're just rebutting strawman arguments it hardly seemed worthwhile), but I was sort of surprised at how poorly reasoned the arguments were even for the parts I agree with.


At a certain point it doesn't matter how much smarter you are; the limit on progress is the ability to take action and to make measurements -- enough measurements that you can discern whether a subset of those measurements is biased, and in what way. As a result, I tend to think that to get true superhuman-level intelligences, they will need superhuman levels of agency, and that is much harder to build, and to get us meatbags to support, than a really powerful brain in a jar. Building systems with superhuman agency also isn't something that happens just by accident.


I agree: in the long-term, big-picture view, a distributed eye and hand is important. E.g. although relativity could have been deduced from the evidence, first you have to obtain that evidence by constructing observation devices.

But there's nothing to stop centralized computing from having distributed terminal-like i/o devices, cameras and waldos.

A cognitive argument for distribution is that a diversity of ideas, developed somewhat in isolation, using their local unique observation and unique action, is more likely to innovate. Many points of view will see more. However, this can be simulated with internal divisions.


Actually, seeing how well DeepMind has mastered certain video games with minimal instructions, AI can already look superhuman in certain cases.

What EVERYONE is missing, though, is that enhanced human intelligence is inevitable, and it will be vastly more "intelligent" than superhuman AI. Though as human intelligence increases, so naturally will AI. I think enhanced human intelligence will have an immeasurably greater impact, probably greater than any conceivable technology, since it lets us engineer who we are. What is a world like that's inhabited by 7 billion Leonardos?


I was at the park the other day with my sons and noticed some other kids on the swings: two kids turned and locked legs, then a third sat on their joined legs like a huge human-made swing. The point is, I never thought of doing that with my friends when I was a kid. An AI will be able to think of things we never tried, because there are so many more things we haven't. Speculating on the short end of this seems laughable to me, like someone from the 1800s talking about balloon travel in the 2000s: basing our limited understanding of possibility on our current limitations.


So, extremely intelligent AI can't take over Earth in the future because "extremely intelligent" is ill defined. No worries everyone!


I liked the article, and agree with it mostly. Here's a couple of ideas I see again and again that bother me about the singularity theory and the hype in general about general AI:

* The idea of exponential growth, which seems like an important underpinning of the singularity theory. Nothing in nature, as far as I am aware, grows exponentially. Some growth trajectories may look exponential for a time, but they always turn out to be the bottom of an s-curve. The idea that the first AI with superhuman intelligence will necessarily beget one of even greater intelligence is deeply flawed for this reason. It is analogous to predicting that Moore's law will continue forever because ever faster and more capable computers will assist in the design of the next generation of chips. At some point the laws of physics will constrain further advances, and we will encounter the upper half of the s-curve.

* The idea of AI in a box. It's the idea that anything we would call intelligence can evolve divorced from a high-bandwidth, real-time sensory apparatus, and divorced from a way to directly manipulate its environment.

* The idea that more intelligence always makes a significant difference. If we look at thinking disciplines where computers are already better than the best humans (chess, heads-up poker, recognizing certain kinds of objects, etc.), the differences are small. If the best human's decisions are 99% optimal, say, a computer's may be 99.9% or 99.999% optimal. The point being that a computer can never make decisions that are better than 100% optimal.
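The s-curve point is easy to see numerically: logistic growth with a carrying capacity K tracks pure exponential growth almost exactly at first and only diverges as the limit nears. The rate r, capacity K, and starting value x0 below are arbitrary illustrations:

```python
import math

r, K, x0 = 0.5, 1000.0, 1.0  # illustrative rate, carrying capacity, start

def exponential(t):
    # Unbounded exponential growth.
    return x0 * math.exp(r * t)

def logistic(t):
    # S-curve: same early behavior, but saturates at K.
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 5, 10, 15, 20):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

At t = 0 the two curves are identical, and for small t they are nearly indistinguishable; by t = 20 the exponential has blown past 20,000 while the logistic flattens out below 1,000. An observer sampling only the early points cannot tell which curve they are on, which is exactly the trap the comment describes.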


1. An AI might not need long exponential growth to make humans obsolete. Maybe the computing power of one data center is enough to outperform and dominate humans? Imagine if you had infinite patience, and experienced time 1000x slower than everyone else. That number might be in the millions. Or more.

2. I agree that it's a very strange and unpredictable scenario.

3. How optimal do you think humans are in the real world? In chess you have a very limited number of possible actions at each point, but in reality your possible moves are almost limitless.


1. I'm not sure what you mean by the "computing power of one data center". Surely not one present-day data center? And whether or not I can imagine experiencing time 1000x slower than everyone else, has no bearing on whether or not that's a realistic scenario. I can imagine all sorts of things, like Moore's law holding true forever, or P=NP.

3. That obviously depends on the endeavour. But humans are pretty great at a lot of stuff. We learn quick too.


> And whether or not I can imagine experiencing time 1000x slower than everyone else, has no bearing on whether or not that's a realistic scenario.

It is a staple of AGI speculation that a computer program with even near-human IQ would spark the singularity since, at least, the hardware running it could be improved so that the AGI would be able to perform person-months of cognitive labor in days. Since the first target of this labor would be improving the AGI program and hardware, compounding improvements are expected.
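A back-of-the-envelope version of that compounding claim, with entirely arbitrary numbers (the doubling factor and twelve person-months per generation are illustrative assumptions, not predictions from the AGI literature):

```python
# Toy model: each AGI generation runs `speedup` times faster, so the fixed
# amount of cognitive work for the next generation takes proportionally
# less wall-clock time. All numbers are arbitrary illustrations.
speedup = 2.0        # assume each generation doubles effective speed
work_months = 12.0   # assume 12 person-months of R&D per generation
elapsed, speed = 0.0, 1.0
for gen in range(1, 11):
    elapsed += work_months / speed  # wall-clock months for this generation
    speed *= speedup
print(round(elapsed, 2), speed)
```

Under these assumptions, ten generations of R&D (120 person-months at constant speed) complete in under 24 months of wall-clock time, with the final generation running 1024x faster than the first; the geometric series is why "compounding improvements" is the standard expectation in this speculation.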


Yes, I get that. I just doubt whether it's physically possible. And like the author of the original story, I also doubt very much that cognitive labor is the constraint that limits human progress. To believe so is what he calls thinkism.


> I just doubt whether it's physically possible.

Biological brains achieve their ends with relatively low speeds and energies. Even simplistic substitution with equivalent electronic components would be hundreds of times faster, and I'm sure we'll do better than that. I don't see the difficulty in the conjecture.

> I also doubt very much that cognitive labor is the constraint that limits human progress

That statement is impossible to discuss without defining "human progress", but if the work of the world's universities for the next 100 years was available in one year's time, at the very least someone with access to that information would have a significant competitive advantage. It seems clear that the next 100 years will include significant advances in automated fabrication, at which point physical labor also essentially becomes cognitive labor.


Even simplistic substitution with equivalent electronic components would be hundreds of times faster, and I'm sure we'll do better than that

The way you state that, one wonders why we haven't already achieved superhuman general AI.

if the work of the world's universities for the next 100 years was available in one year's time

But that's a fundamental misunderstanding of the character of the work that goes on in universities. Knowledge is only rarely produced by people thinking about problems. It's produced by mundane trial and error experimentation. Experiments that take time. And money.


> The way you state that one wonders why we haven't already achieved superhuman, general AI.

Brain emulation is still expensive ...

https://en.wikipedia.org/wiki/Artificial_brain#/media/File:E...

It'll be interesting to see what we can learn at the different levels of emulation.

> It's produced by mundane trial and error experimentation.

Fair point, although experiments in computer science and applied mathematics can usually be carried out without constructing physical apparatus. Also identifying and designing experiments to efficiently characterize a problem space is a large part of experimentation. And again, once automated fabrication improves, some automated experimentation in physics and chemistry becomes possible.


"The idea of exponential growth, which seems like an important underpinning of the singularity theory. Nothing in nature as far as I am aware, grows exponentially."

Actually, a lot of things grow exponentially, at least for a while. Consider zygotes: every cell-division period, they double.

That said, AI is exactly not "natural". A major point a lot of folks miss when thinking about this is that AI is amenable to engineering. This means that AIs can be intentionally designed not to have the limitations of natural systems. For instance, even today we know how to make effectively infallible, very fast memories of arbitrary size.

The other thing that makes strong AIs seem likely to improve exponentially is the idea that the first strong AIs could be focused on creating better strong AIs - and so on. Since AIs may also be replicated at will, a whole lot of distraction-free, single-minded intellect will be brought to bear on the next-gen AI design, and there is no reason that process couldn't be repeated indefinitely.

"At some point the laws of physics will constrain further advances, and we will encounter the upper half of the s-curve."

That might be true, however we also might be at IQ 1,000 (or IQ 1,000,000) AIs by then...

"The idea of AI in a box. It's the idea that anything we would call intelligence can evolve divorced from a high-bandwidth, real-time sensory apparatus, and divorced from a way to directly manipulate its environment."

This could be approached from many directions, from teleoperated robotics to VR.

"The idea that more intelligence always makes a significant difference. If we look at thinking disciplines where computers are already better than the best humans (chess, heads-up poker, recognizing certain kinds of objects, etc.), the differences are small. If the best human's decisions are 99% optimal, say, a computer's may be 99.9% or 99.999% optimal. The point being that a computer can never make decisions that are better than 100% optimal."

This seems a poor argument. I'd argue that greater intelligence introduces qualitative differences. For instance, the capabilities of a human are far beyond those of a dog. The same can be said of a dog versus a grasshopper. Even the difference between an Albert Einstein and an average person seems profound.

I believe it's quite impossible for even an IQ 200 human to imagine the capabilities of an IQ 1,000 AI. What does a dog think about the reasoning capacity of a human?


I am not a believer in superintelligence, but for a different reason than the author. I assume the following about superintelligence:

- It somehow needs to be distributed, that is, composed of smaller computing parts, because there is a physical limit on what you can do in a unit of space.

- It needs to change to adapt to environment (learn), and so all the parts need to potentially change.

From this follows that the parts will be subject to evolution, even if they don't reproduce. And so the existence of the parts will depend on their survival. This, in my opinion, inevitably leads to evolution of parts that are "interested" in their own survival, at the expense of the "superintelligent" whole. And it leads to conflict, which can eventually eat up all the improvements in the intelligence.

Look at humans. Humanity (or biosphere in general) didn't become a superintelligent whole, capable of following some single unified goal. Instead, we became fighting factions of different units, and most of the actual intelligence is spent on arms races.

Anyhow, even if superintelligence is possible, I believe the problem of friendly AGI has a simple solution. We simply need to make sure that the AGI doesn't optimize anything, but instead takes the saying "all things in moderation" to its heart. That means, every once in a while, AGI should stop whatever goals it pursues and reflect on purpose of those goals, if it is not, by some measure, going too far.

You can argue that we don't actually know how to make an AI stop and think. I would respond that an AI that cannot do that, and only pursues some pre-programmed optimum mindlessly, is not really general.


> It somehow needs to be distributed, that is, composed of smaller computing parts, because there is a physical limit what you can do in unit of space.

That doesn't follow. Just because there is a physical limit doesn't mean you can't build something vastly smarter than a human without hitting that limit.

> It needs to change to adapt to environment (learn), and so all the parts need to potentially change.

What do you mean by parts? Presumably you aren't including hardware, because we already have machine learning algorithms that learn (not quite like a human, but they do learn), and the hardware doesn't change. Do you mean something more like software modules?

> From this follows that the parts will be subject to evolution, even if they don't reproduce.

Evolution means something. It means that when things replicate with errors (and perhaps sometimes die), there will be more of the kinds of things that replicate more. While this concept is pretty general, replication is central to it. I don't know what you mean otherwise.

So there's an AI that sometimes upgrades its parts (software modules?). How does this lead to parts that are "interested" in their own survival?

> You can argue that we don't actually know how to make an AI stop and think. I would respond that an AI that cannot do that, and only pursues some pre-programmed optimum mindlessly, is not really general.

Sure. But how do you specify "stop and think" formally in an algorithm, and will we figure that out before we construct a super intelligence or after? Right now all of our ML algorithms pursue optima.


> While this concept is pretty general, replication is central to it. I don't know what you mean otherwise.

I think it would be nice to have another word for it. But you can imagine that changing the state of something (or not) is the same as replicating it, possibly with a modification.

The superintelligence will have to be economical somehow, and at some point decide which parts will be built more (replicated), which parts will stay, and which parts will be removed (the fact that they can be physically reused doesn't change things). That's when the "evolution" happens.

> So there's an AI that sometimes upgrades its parts (software modules?). How does this lead to parts that are "interested" in their own survival?

Well, it needs to detect which parts to upgrade and how, without understanding them fully (otherwise they wouldn't be needed). This can be influenced by the behavior of those parts, so the parts that can successfully pretend to be useful will be kept, regardless of their actual usefulness.

> Sure. But how do you specify "stop and think" formally in an algorithm, and will we figure that out before we construct a super intelligence or after? Right now all of our ML algorithms pursue optima.

I think we'll figure that out when we figure out AGI. We humans can reflect on our goals; our current ML algorithms can't, and I think that's precisely why they are not general.

In other words, AGI will need to be capable of becoming "bored". I don't think a system that can never become bored with doing something can be truly generally intelligent. (It has to do with economics, IMHO.)


>We simply need to make sure that the AGI doesn't optimize anything, but instead takes the saying "all things in moderation" to its heart.

This may be easier said than done.

This is a fairly common proposal actually. The term used is "satisficer". Instead of having the AI optimize a quantity, you instead have it satisfice it, i.e. have it attempt to reach at least a particular value.

But, IIRC, the standard reply is that satisficers are not stable under self-modification; a satisficer may self-modify into an optimizer for some quantity, because this will also satisfice whatever it is a satisficer for.

However, there are setups which have been shown to be stable under self-modification and don't optimize the expected value of a function. IIRC, one that is stable maximizes a function applied to the expected value of the utility function (so, it optimizes f(E(u))), where f is increasing up to a certain point and decreasing after it.
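A toy sketch of the difference between these three decision rules (the utility values and the satisficing threshold are my own illustrative choices, not from any paper):

```python
import numpy as np

# Candidate actions, each with an expected utility E(u).
expected_utility = np.array([0.2, 0.9, 1.7, 3.5, 10.0])
TARGET = 1.0  # satisficing threshold (arbitrary choice)

# Optimizer: pick the action with maximal expected utility.
optimizer_choice = int(np.argmax(expected_utility))

# Satisficer: any action meeting the target is acceptable. Note the
# optimizer's choice also satisfices, which is why a satisficer that
# self-modifies into an optimizer still counts as satisficing.
satisficer_choices = np.flatnonzero(expected_utility >= TARGET)

# f(E(u)) agent: f rises up to a peak and then falls, so unbounded
# utility is actively penalized rather than merely not required.
def f(x, peak=2.0):
    return -(x - peak) ** 2

f_choice = int(np.argmax(f(expected_utility)))

print(optimizer_choice)            # 4 (the 10.0 action)
print(satisficer_choices.tolist()) # [2, 3, 4] all meet the threshold
print(f_choice)                    # 2 (1.7 is closest to the peak)
```

The point of the last rule is visible in the output: pushing utility ever higher makes that agent strictly worse off, unlike the plain satisficer.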


It's really hard for me to understand a viewpoint that non-human higher intelligence is anything but inevitable.

At some point (ignoring us destroying ourselves) we will be able to accurately simulate a cell in software. As computing gets cheaper, we will be able to simulate a human brain's worth of cells. We'll feed it inputs and give it outputs just like a brain would have. The only technological challenge here is scanning and reading data from a live brain. A very small challenge in the grand scheme of things.

Once that thing works it's a brain, and an artificial intelligence. Any other discussion simply complicates the situation. Accurately simulate a larger number of individual interconnected neurons and you're running an intelligence.

No, I'm not 100% certain we'll ever be able to program intelligence the way we do reasoning symbolically in math, but we sure as heck can engineer one.


Is it known that a cell can be "accurately" simulated? Quantum systems are very difficult to simulate (exponentially complex for a classical computer, I believe). What does accurately mean? There are a near-infinite number of possible quantum states a cell can be in. We don't even have the technology to measure the quantum state of a cell, let alone predict its time evolution.

If a cell's emergent behavior depends on quantum or chaotic or otherwise subtle phenomena, such a simulation may not accurately simulate a cell. The only hope we have of simulating a cell is by throwing away low-level information and then experimentally comparing our simulation to the real deal. At this stage, I don't think this is possible, so I consider accurate cell simulation an open question. Do we have any way of knowing at this point? I'd be happy to be educated if I'm missing anything.
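To make the exponential complexity concrete: exactly describing the joint state of n two-level quantum degrees of freedom takes 2^n complex amplitudes, so even storing the state (never mind evolving it) blows up fast. A quick illustration, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory needed for a full state vector of n two-level quantum systems:
# 2**n complex amplitudes at 16 bytes each (double-precision complex).
def state_bytes(n):
    return (2 ** n) * 16

print(state_bytes(10))   # 16384 bytes: trivial
print(state_bytes(50))   # ~18 petabytes: beyond any single machine
print(state_bytes(100))  # ~2e31 bytes: utterly infeasible
```

And a cell has vastly more than 100 interacting degrees of freedom, which is why any classical simulation must throw low-level information away.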


Nothing is known to be possible until it is actually done. So I agree with you that we don't know the answer, but I don't think that's a true limiting concern. Instead, we can estimate based on the historical past, and project into the future.

I don't know that the car I get in tomorrow will drive, but I can project that it will based on history. Similarly, we don't know that we can simulate a cell; however, we can look at the rate of improvement in technology, see no physical limitations, and project that in the future we likely will. In contrast, if the question had been whether we can travel faster than light, while still unknowable, we could see the known limits in physics and project that we won't.

The other important question which you raise is "what does accurately mean?". The two values of accurate that I see are what I'll call the "Turing Model" of accurate and the "Psychic Model" of accurate.

The Turing Model of accurate is that if I were to observe the behaviors of this simulation of you, I wouldn't be able to tell whether it was a simulation of you or the real you.

The "Psychic Model" of accurate would be if I were to feed the simulation the exact same inputs as you, it would product identical outputs until the day you both (simultaneously) die.

I argue that the only meaning that matters for practical use is the Turing Model of accurate. If no one can tell it's not human, then it's equivalent to a human.

So back to your original question of accuracy w.r.t. quantum phenomena. I do see reading quantum data as a possible physical limitation. But my best (admittedly still layman's) understanding of quantum behavior is that results can be accurately modeled probabilistically. We can easily produce the same result distribution.

So my final argument rests on the question "Do we think that human intelligence depends only on the probability distribution of quantum events, or is there a hidden interplay in the specific "values" produced?"

Based on what we've seen so far from QM, I lean towards a dependency on the distribution as sufficient - ie there isn't an underlying structure hidden in the values we need to try to replicate.


A simulated heart does not pump real blood. Also, why should one start or stop at the cell level? It seems arbitrary: go further down to atoms and electrons, or go higher up to mental states or what not.

There are completed projects in which all the neurons of smaller organisms have been simulated; there haven't been any revelations.

Our brain doesn't contain any 'data', so whoever decides to extract data from it will have to decide what the data will be and why it would be of any use. There have also been many objections made on why brain correlates are more or less useless on the question of mind/intelligence.


> A simulated heart does not pump real blood.

Well, now, a simulated heartbeat implemented in a pacemaker and connected to a pump does cause real blood to be pumped.

A real brain adds real numbers, and so does a computer brain. A real brain composes real words, creates real emails, issues payments, requests shipments. Those are not less real than the words, emails, procedures, payments, and shipment requests that computers make today.


I should have said: a simulated heart in a computer does not pump real blood.

The thinking involved for a person in issuing a shipment is not the same as what a computer does. These are not simulations, just similar-looking interfaces. Like an animated cartoon human compared to a human.


> Our brain doesn't contain any 'data'

It absolutely does! Where do you think our memories are stored?

It doesn't store it in 1s and 0s like computers, but I think it's fairly non-controversial to say that all of everything you know is encoded in the physical state of the cells and atoms of your body.


> Where do you think our memories are stored?

You are applying a computer metaphor and then asking where the 'memory' is 'stored' or 'encoded'. Metaphors/abstractions are useful tools, but when talking about differences, we should be more careful.


If this assumption is correct then I could see us creating an accurate model of the human brain in the future.

But it is an assumption.


Brain does not equal intelligence.


Humanity never started as a single entity though, so it wouldn't make sense for it to merge into a single entity.

If an AI begins replicating, then its parts will come into conflict only if its "immune system" is unsuccessful at beating back mutations and/or it's terrible at replication.

I don't think we have reason to believe either of those things will happen. High-fidelity replication will be part of a successful AI's self-programming, and it's likely that a superintelligence will figure out how to run an immune system effectively.

I think the "all things in moderation" approach might make sense, but how do we ensure humans abide by it when there is so much wealth and power to be had if you edge out your competitors just a little bit.

Our economy heavily rewards optimization.


> We simply need to make sure that the AGI doesn't optimize anything, but instead takes the saying "all things in moderation" to its heart. That means, every once in a while, AGI should stop whatever goals it pursues and reflect on purpose of those goals, if it is not, by some measure, going too far.

A superintelligent AI would be too useful for warfare and making money for us to ever hope to impose limits on what some people will do with it, or to stop them from letting it optimize.


We see this problem of the parts competing with each other in humans. When cells abandon their part in the collective and start optimizing for their own reproduction, we call it cancer. The main way we solve this is by shutting everything down when this happens, and creating new ones from a single cell that hasn't evolved to be selfish yet.

The problem may be inevitable, but that doesn't mean it's insurmountable.


> Look at humans. Humanity didn't become a superintelligent whole.

But: look at humans. We are each a vast empire of trillions of cells which, for the most part, work together for the survival of the whole, and which certainly possesses intelligence beyond the capabilities of the cleverest single-celled organism.


These cells only work together because there are other multicellular organisms where the cells work together. What I am saying is that everybody working together is impossible. Ditto for groups of humans.

The point is, I believe superintelligence will break into factions, and most of its energy will be spent on these factions fighting each other, ignoring any common goal of self-improvement.

So I don't see the so-called "singularity" as a possibility - maybe at most as another, faster type of evolution, which will ignore humans due to its own self-centeredness, and which humans will not comprehend. But just as our culture depends on nature, this superintelligence will probably depend on us (human culture) to sustain its internal struggles.


I don't think I worry about general AI so much as having specialized AI in almost every area, including collections of algos that recognize the task at hand and select which specialized AI to engage, as well as other collections of specialized AI algos that select which selector algos to use based on longer-term goals, etc. That is what makes me afraid.


Yeah, what you described is one of the current leading hypotheses of how we get to generalized AI, and it's not totally unlike how the human brain actually seems to work (specific tasks or stimuli will activate specialized regions of the brain and not others).


;-) My point exactly.


While one individual might not have "general purpose intelligence" to the satisfaction of this author (being able to excel at different fields/activities), at the population level it is fair to say human intelligence is general purpose.

Then, there are aspects that are greatly overlooked in all these narratives:

Human geniuses occur very rarely and take literally decades to learn, while the AI equivalent could be consistently "instanced" multiple times, live forever, evolve after birth and work 24/7 without sleep.

Then, humans have crappy I/O. AI is not bounded by the shortcomings of writing/reading/typing/talking at low rates of words per minute...

Generally speaking, AI theoretically has a substantial advantage over humans. Even if AI remains dumber for a time, these advantages are enough to make it prevail.


If I gave an example of something machines can easily do today, but no human could ever do, does that show that human intelligence is not general-purpose? If not, what do you mean by general-purpose?


The human brain has the ability to create models to solve problems in an unsupervised way. Be it problems around survival, reproduction, and things that rank higher in the hierarchy of needs that can be very abstract in nature.

Some people might then argue that the human experience is also about creativity, entertainment, learning, social interaction, spirituality, fitness and a long list of things.

Part of it is because the brain is not only raw problem-solving power... it is embodied in a human body, with bodily needs and hardwired behaviors coming from older brain structures. Like having an intuition for what may look like a predator or threat (and therefore evokes fear), what may be fine to eat, etc... and other stuff correlated with survival that guided our evolution, but does not necessarily have to do with survival.

But AI can be embodied into anything, and the equivalent to its primitive mind can be played with. While we have many learned behaviors, there are aspects that are not learned. AI will be different. What AI ends up developing into will greatly depend on how that is done.


"Humans do not have general purpose minds, and neither will AIs."

The author must not have met many humans.


Humans do not have general purpose minds, and neither will AIs.

Our minds are "general purpose" compared to, say, a chess playing computer program. But they're not necessarily "general purpose" in the most, well, general, sense. They're evolved with specific capabilities and talents that are geared towards helping humanoid, bipedal, mammalian creatures survive and replicate on a specific small blue planet, orbiting a particular yellow star.

As he pointed out in the article, there are examples of animals, like squirrels, demonstrating "intelligence" of a form that humans don't even remotely come close to having.

So, whether or not we have "general purpose mind" depends on how generally you define "general purpose." Which I think is actually an interesting point, in the context of what the author was driving at.


"... orbiting a particular yellow star."

Our star actually casts white light. It just looks yellow from Earth. If the sun were yellow, then the moon would look yellow when it was straight overhead. The moon looks white overhead, because the light from our star is white.


Good point. Guess I read too many Superman comics as a kid and that whole "yellow star" thing really sank in.


The human mind is tailored to the human situation, is what he meant. There are many things we cannot do, because it exists outside of our experience. For example, we can't memorise the contents of millions of web pages, nor can we do thousands of calculations in moments unaided. In this sense, our minds are specific purpose, even if that purpose is broad.


Perhaps my self preservation instinct is completely broken, but why are people so afraid of the possibility that the human race could be replaced by hyperintelligent machines? We aren't perfect (quite the opposite), and a brain that works in the way that ours does has severe built-in limitations. Perhaps the greatest achievement the human race could ever obtain is to create something greater than ourselves. Something that can carry on learning and understanding the universe around us in ways that no human mind ever could.


I agree, as long as it is better than us at everything.

My fear is being crushed by an amazing strategist that won't ever get poetry, for instance.


I didn't read this at first because I thought it just sounded like it would be some opinionated clickbait full of strawmen and superlatives. But then I caved in and read it and it turns out that my instincts were right. Then I saw the author was Kevin Kelly and I felt sad. Almost as sad as when Stephen Hawking said we should discontinue the SETI program because it is most likely that aliens will want to harvest our precious carbon and organs and enslave us if they found out we were here. UNSUBSCRIBE!


For a much better and more constructive read toward the effort of a stringent definition of intelligence, I would recommend anything from Shane Legg - most pragmatically this: https://arxiv.org/pdf/1109.5951v2.pdf


The author should read this: https://arxiv.org/abs/1703.10987


> Emulation of human thinking in other media will be constrained by cost.

Technology assets generally decline in price as more efficient means of production and distribution become available (e.g. cloud) and the cost of technology components becomes more commoditized.

> I will extend that further to claim that the only way to get a very human-like thought process is to run the computation on very human-like wet tissue.

I think wetware (or human-like wet tissue) needs to be created first, before any judgement can be made about its costs.


Nice to see some push back but all the chatter in the MSM these days about self-driving cars and AI that's going to replace everyone's job (except, I note : lawyers..) makes me strongly suspect that somewhere there is someone with an agenda, driving this chatter. Someone's bonus depends on the received knowledge being that "AI is a commin' ta getcha..", which it decidedly isn't, imho. Yes it works for facial recognition (sometimes) and deciding whether to reject spam (sometimes), but not for large swathes of the blue and white-collar job world. Long long way off.

Note: obviously there's nothing special about the meat between a human's ears, so _one_ day someone in theory should be able to build a machine that matches and exceeds a human's thinking ability. But that's not going to happen in any of our lifetimes.



> These complexes of artificial intelligences will for sure be able to exceed us in many dimensions, but no one entity will do all we do better.

Hah, I don't think so. For sure, future systems we design will have multiple kinds of intelligence. And then we'll slap some pattern matching onto the front end to help them recognise which intelligence to apply to which situation. Much like how you recognise a math problem and pick up a calculator, or encounter a concept you don't recognise and pick up a dictionary.

So we'll develop systems that have many more of these intelligences, which will each have superior abilities to what we have now (think infinite precision math library vs a cheap handheld calculator) running at high speed and in general able to handle much larger problems.

How is this not a superintelligence by any reasonable definition?


I think this whole thing misses the point.

The main difference between these machines and biology is that, once an improvement is discovered, it can be downloaded very quickly and cheaply onto all the machines.

Copying is perfect and can be checksummed. Unlike learning in a university, say.

This is also what enables things like deep learning across all the world's medical data for Watson. A doctor somewhere can't know all the news everywhere and discover statistics and patterns on command, while Watson can not only ingest all this info but also upload the results to all the places.

This ability to perfectly replicate a program also makes the "self preservation" aspect and the "identity" aspect of computers different than that of biological organisms. What is identity, after all, if a program can be replicated in many places at once?


> Copying is perfect and can be checksummed. Unlike learning in a university, say.

What if exactly this flawed way of copying information is what allowed us to make discoveries?

I mean, what if it's precisely because a human transmits information or a theory with less confidence than the one who discovered or invented it that there is room to doubt it, making the next discovery more possible?

Edit: formatting


Adding perturbations is a common technique in machine learning. For example, the evolution strategies approach [https://blog.openai.com/evolution-strategies/] makes 100s of randomly tweaked policies, evaluates them against the task, and recombines them with weights proportional to their performance.

Another approach is to train several different neural networks (an ensemble) on a task, and then train a final neural network based on the average of the ensemble. [https://arxiv.org/pdf/1503.02531v1.pdf]

So you can probably replicate the useful features of flawed copying between humans.
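A rough sketch of the first idea (many randomly tweaked copies, scored against the task, recombined with performance weights). The toy task and hyperparameters are my own illustrative choices, not OpenAI's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy "task": the best possible policy is x = 3.0
    return -(x - 3.0) ** 2

theta = 0.0                    # the "policy" is a single parameter here
sigma, alpha, npop = 0.1, 0.05, 100

for _ in range(300):
    noise = rng.standard_normal(npop)         # 100 random tweaks
    rewards = fitness(theta + sigma * noise)  # evaluate each copy
    rewards -= rewards.mean()                 # baseline: center rewards
    # Recombine: copies that scored better pull theta their way.
    theta += alpha / (npop * sigma) * np.dot(rewards, noise)

print(theta)  # lands very close to 3.0
```

The deliberate noise injected into each generation plays the role of the "imperfect transmission" the parent comment describes, yet the population as a whole still converges on the answer.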


In light of this, my idea does not hold up. Perturbations can be simulated, so there is a way to add this type of chaos into the system. Thank you for the counter-arguments. I need to learn more about ML.


On the criticism of treating intelligence as a one-dimensional number, I briefly note that the one dimensional conceptualization is a simplified but useful abstraction. It abstracts over the fairly blatant fact that things can be smarter than other things in ways that we care about. For example, I'm smarter than a mouse, and Einstein is smarter than me. Maybe not along every possible dimension. But for every operational intelligence metric you could come up with, every way of assigning intelligences numbers that was interesting and useful, I would probably have a higher number than a mouse and Einstein would probably have a higher number than me.


The tools mankind has invented so far have been extremely productive as well as destructive. I think the worry should not be whether a superhuman AI will one day take over mankind, but whether we will be able to stop or limit the destruction caused by that advanced tool in the wrong hands. The definition of who's right or wrong is, unfortunately, contested. We are a species that justifies its actions.

The leverage and exploitation of advanced AI in the hands of malicious people/corporations/states is on a much closer timeline than "Superhuman AI".

So OpenAI-like initiatives are very important to balance things out. Somehow I am not optimistic.


Planes don't have to abide by the laws of bird flight. There will be some breakthrough away from bird-like mimicry - neural networks - toward algorithms that perform in moments what NNs (our mechanical birds) need days to calculate. Watch for research on how the black boxes of NNs are reverse-engineered and mapped. "Superbird" AI is just discovering that more general laws (flight) underlie bird emulation (bird flight), and applying them to extract the direct algorithms that birds (NNs) produce internally (as instinct).


The word intelligence comes from natural language; it is natural for different people to have different interpretations of it.

And this article basically gives it a new definition and builds its interpretation on that.


Squirrel super memory: it's a combination of scent and memory http://jacobs.berkeley.edu/wp-content/uploads/2015/10/Animal...

Possibly enhanced by smelling their own saliva? Just guessing


You have heard this talk from faraway lands about a new type of machine that can supposedly do more than a person with a stick and something to put it against as a lever. The Watt steam engine, some people call it. If thousands of years before Our Lord, humans could roll on logs, pull on pulleys, or push on the inclined plane vast blocks of stone culminating in a Pyramid to their pretended gods, the argument goes, what is to keep someone from making a device more powerful than a man?

I am here to tell you that such lunacy rests on seven misconceptions. While I will freely grant that perhaps it is possible to apply a lever, yet it is human power and human power alone that moves that lever. The idea that anything but a human could do work is absurd on its face. Nobody will ever get from one town to another except on foot, or perhaps on a horse. To allow the idea that a machine could do this or any other task is as deranged as suggesting that machines will fly like birds across continents, carrying people, or that one day men will simply climb up and into the atmosphere and go and land and walk upon the moon. It is clear from first principles that raising or moving anything takes work and power: it is just as clear that nobody but man shall ever provide that power, let alone any more.

I do not have time to rewrite the above: substitute a hundred billion neurons doing chemical reactions, and add that it is clear computers can never do either the same or even less so, any more, and you will see how completely wrong the author is in every way.

Nobody but a man can ever do work, and nothing but a hundred billion neurons can or will ever think.


I'm re-reading a bunch of Asimov robot stories that I read as a child, and in them there is no concept of superintelligence at all. Robots have human-level intelligence, but there is no suggestion that they have a possibility of becoming anything other than useful tools for people.

I blame Iain M. Banks for all the AGI kerfuffle


I agree with his proposition that there is no linear "better or worse".

That being said, there is no evidence that an AI that is fundamentally different from (and potentially inferior to) humans could not be much more effective at controlling human behaviors, thoughts, viewpoints, or actions.

Furthermore it may be the case that an AI can sense or understand information we cannot simply because we do not have the "sensors" to understand such information. The actual "intelligence" does not need to be very high if the data is that much richer.

From another perspective: the AI may not be as intelligent but may have more control over the environment than humans (e.g. controlling the smart grid, traffic routing, etc.); because of this, its ability to influence human behavior is larger.

Either of these two cases could be deemed "greater intelligence"... just intelligence of a different kind. We need to look at intelligence less in terms of human constructs and more in terms of "ability to manipulate human behavior" -- this would be a human-centric definition.


The article is a good first step; the second step would be to pick up an introduction to philosophy of mind and realize the enormous number of issues one has to resolve, and methods that need discovering, before getting close to answering such questions as whether AGI is possible.


> The assumptions behind a superhuman intelligence arising soon are:
> ...
> 4. Intelligence can be expanded without limit.

The only assumption required is that intelligence can be expanded just beyond human limits, which I think is a much less controversial claim.


If I can solve an exponential problem with 5 elements in a reasonable amount of time, a mind 1,000,000 times smarter than I can solve the same problem with how many elements in the same time?
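A quick back-of-the-envelope answer, assuming the problem costs O(2^n) and reading "1,000,000 times smarter" as 1,000,000 times faster (my own simplification):

```python
import math

# Solving n = 5 elements in a given time means a budget of 2**5 = 32
# steps. A mind a million times faster gets 32,000,000 steps in the
# same time.
budget = (2 ** 5) * 1_000_000

# Largest n whose 2**n cost still fits in that budget:
n = int(math.log2(budget))
print(n)  # 24: a million-fold speedup buys fewer than 20 extra elements
```

Which is the point: against exponential problems, even astronomical speedups move the feasible problem size only modestly.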


Irrelevant anecdote: I first saw that radial evolution chart while wandering the UT Austin campus, I think it was in a lobby, though I remember it being dominated by bacteria. Interesting to think that might have been Hillis's lab.


It's hard to tell where this author is coming from. The three main assumptions you have to make for AGI are (via Sam Harris):

1. Intelligence is information processing.

2. We will continue to improve our intelligent machines.

3. We are not near the peak of intelligence.

The author's first counterpoint is:

> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

Intelligence is information processing so "smarter than humans" just means better information processing: higher rate, volume, and quality of input and output. Aren't some humans smarter than others? And isn't that a power that can be abused or used for good? We don't have to worry about it being like us and smarter; it just has to be smart enough to outsmart any human.

He then talks about generality like it's a structural component that no one has been able to locate. It's a property, and just means transferable learning across domains. We're so young in our understanding of our own intelligence architecture, it's ridiculous to build a claim around there being no chance of implementing generality.

This statement is also incredibly weak:

> There is no other physical dimension in the universe that is infinite, as far as science knows so far...There is finite space and time.

There is evidence that matter might be able to be created out of nothing which would mean space can go on forever. We might only be able to interact with finite space, but that isn't to say all of nature is constrained to finite dimensions.

Even still, he doesn't make sense of why we need infinite domains. You only need to reach a point where a programmer AI is marginally better at programming AIs than any human or team of humans. Then we would no longer be in the pilot's seat.


> Intelligence is information processing

That's a claim. What is your support for this? Computers in the 1940s could process information. Were they intelligent?

> just means better information processing: higher rate, volume, and quality of input and output.

Computers have been able to perform information processing better than humans from the beginning, thus the reason for their creation. Information processors are tools to extend human intelligence.


I think that there's an additional problem with that claim: the converse may be false. Even if "intelligence is information processing", that does not mean that "information processing is intelligence".

This is, I think, what you were trying to say with your point about computers in the 1940s. Sure, they could add really fast. Did that make them intelligent? Or, even if intelligence is information processing, is it a specific kind of information processing? If so, do we even know what kind? I am inclined to say no, we don't. But then, I'm not sure that I buy that intelligence is (just) information processing, either.


The exact quote is "Intelligence is a product of information processing in physical systems."

It's the scientific definition rather than the colloquial definition.


That's a philosophical assumption of what intelligence is. It's based on the metaphor that thought is calculation, and computers are thinking by virtue of manipulating symbols, and human brains are therefore doing the same thing in wetware.


There isn't any evidence at this point that it's a false premise.


"More processing, more intelligence" doesn't sound right.

If I put a bunch of random reactive chemicals in a large beaker - I don't know what your "information" or "processing" would be there, but there should be a lot of both happening - I still wouldn't call it intelligent.


If you could implement a human brain in silicon, its information processing would be faster than its wet brain equivalent. It would think faster, learn faster, and therefore be more intelligent.


Whether it would be faster seems unwarranted to me. But that's beside the point. Why does faster mean more intelligent?

Bubble sort on a fast computer vs. quicksort? Moving bathwater with a teaspoon very, very fast vs. using buckets?


You can perform the same routine in less time so learning can happen faster. We're not comparing spoons to buckets; we're comparing a bucket to a bucket, just one bucket tips and empties its contents faster.


AI is undergoing exponential growth. As it gets better it becomes more profitable and feeds investments into itself. It may not make everyone redundant, but it will make most.


Not sure about your list of 'some of the smartest people alive today'; makes me think the article was written by someone definitely not nearly on such a list.


We already have the equivalent of Superhuman AI in the form of corporations, governments, and society in general. I don't buy the claim that sometime in the future a singular artificial mind will come into existence whose continual improvement will make it smarter with access to more resources than that of Google, the US government, or all of human civilization, with its billions of organic human intelligences being empowered by machines already.

We've already achieved super intelligence. It's us empowered by our organizations and technology.


I completely agree with the author. Hiding my head in the sand and plugging my ears will completely avoid the AI apocalypse.


refreshing. i think the cost factor isn't brought up enough. none of this is to say that ai isn't going to change the shit out of everything, it's just that the superhuman, "summoning the demon" rhetoric is imprecise, premature and distracting.


A one sentence rebuttal to this is that the exponential take-off of human civilization ~= accelerating distributed intelligence.

You can quibble about what an AI is; if you draw a box around human civilization and observe its leverage and rate of change, well, the evidence is that we are riding the superhuman takeoff.


It may be a myth, but that doesn't mean people won't manifest powerful images of it.


It seems like the only thing this author can offer is playing on words to make things seem obscure, blurry and unclear.

His central argument seems to be that intelligence is not a thing, and, although he doesn't say it directly, I think he doesn't believe in IQ.

He's committing the same kind of fallacy committed by certain radical ideologues, which basically says something along the lines of: since you cannot define something with 100% accuracy, any statement about the thing is equally invalid.

We don't have to engage in this kind of meaningless argument about semantics.

There are clear and easy to understand examples of scenarios where super AIs can cause harm to human societies that speakers like Sam Harris have articulated pretty well.


When you want to discuss "the myth of a superhuman AI", it's important to carefully separate the two categories of claims:

1. The claims by economists that AI-- even if it's not "strong AI"-- will put lots of people out of a job with potentially severe societal/economic repercussions

2. The claims by the Bostrom/Yudkowsky/etc. crowd that an AI intelligence explosion will cause the extinction of humanity

Without saying anything about the plausibility or lack thereof of either 1 or 2, I think we can all agree that they are very different claims and need to be analyzed separately. Right from the very first sentence the author seems to muddle the two, so I don't think there's much of cogent analysis in here.


> Right from the very first sentence the author seems to muddle the two, so I don't think there's much of cogent analysis in here.

If you read past the first sentence, it's quite obvious that the article is mostly concerned with claims of your type 2). Also I'm not sure why it would be so important to carefully separate the two categories of claims - they both rely on the assumption that AI can perform a wide variety of cognitive tasks better and cheaper than humans, which is addressed at length and cogently in the article. Also I believe that the economists you mention are concerned more about robotics than about AI per se, so I'm not even sure why you've brought in that point.


> 2. The claims by the Bostrom/Yudkowsky/etc. crowd that an AI intelligence explosion will cause the extinction of humanity

Maybe someone can correct me if I'm wrong here, but I have a hard time understanding what /any/ "utility function" the super-AI people talk about would actually look like. It can't be a passive deep learning network that parses information and gives an output; it has to be some kind of complex perception/action loop of many neural nets and actuators in the real world that somehow leads to intelligent, self-improving behavior. I guess you could make a deep learning controller for, say, self-driving cars, and if an input to many cars is wrong, all the cars crash and create a big cascading mess of wrong input values -- but that kind of accident is a far cry from an intelligent chain of events where every link in the chain is an intelligent decision but the ultimate goal is bad.

And, do we even know any way to chain many deep learning networks together that accurately give correct output values, that we then can hook up to a controller to give a utility function, which can then lead to a cascade of intelligent decisions across domains?
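Not an answer to the question, but here is a schematic of the loop being described. Every component is a stub I made up: the "networks" stand in for trained models, the utility function for a controller, and `act` for an actuator; nothing here learns or is intelligent.

```python
import random

def agent_loop(perceive, utility, act, observe, steps):
    """Schematic perception/action loop: sensor -> many nets -> controller -> actuator."""
    total = 0.0
    for _ in range(steps):
        obs = observe()                            # read the world
        features = [net(obs) for net in perceive]  # several "networks" run on the observation
        action = utility(features)                 # the controller coordinates their outputs
        total += act(action)                       # an actuator changes the world
    return total

# Purely illustrative stub components:
observe = lambda: random.random()
nets    = [lambda o: o * 2, lambda o: o + 1]   # stand-ins for trained models
utility = lambda feats: max(feats)             # pick the highest-scoring "action"
act     = lambda a: a                          # reward equals the chosen value

random.seed(0)
total_reward = agent_loop(nets, utility, act, observe, steps=5)
print(total_reward)
```

The hard part the parent is pointing at is exactly the two middle lines: getting `perceive` accurate across domains and making `utility` do anything coherent with those outputs.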


We don't know how to do this yet, but humans have things which are a bit like utility functions, "built in" in a way we don't understand. However it's done, it leads to self-improving behaviour in humans (many humans value learning and becoming better at things). Just because we don't know how to do it with deep learning and/or neural networks doesn't mean that it's a) not possible with those techniques, and b) that it's not possible with some other technique. (When a neural network becomes sophisticated enough to model its own workings and its relation to the environment, I don't see a knockdown argument for why it couldn't in principle optimise itself, given help from the outside.)


Check for instance the 'Paperclip maximizer' thought experiment, https://wiki.lesswrong.com/wiki/Paperclip_maximizer


Yeah, the problem is where you would put the command to maximize paperclips. Even though deep learning is very powerful, it doesn't contain a utility function. The utility function to maximize paperclips would be some kind of module, separate from a set of deep learning nets, that would coordinate and read inputs from all the nets and then somehow give commands to some actuators somewhere (which it would also need intelligence about). All those deep learning networks would have to already be tuned to be very accurate in their own right before they are hooked up to any kind of controller.

That was my point: AI has been conflated with the deep learning gains of the past few years, even though the hard part -- how to coordinate all those inputs and also give the right commands to actuators -- is still nowhere near solved, much less a completely virtual superintelligence that has its own "virtual" goals and virtual simulations of both its inputs and potential actuators in the real world. That would be akin to a kind of intelligence algorithm, not deep learning vector algorithms that reveal structures in data.


The Paperclip maximizer assumes that you could have an AI smart enough to turn the world into paperclips, but not smart enough to understand what the human meant by "make me some paperclips".

My guess is that paperclipping the world takes magnitudes more intelligence than understanding what a human means when saying something ambiguous.


Not necessarily. It could be smart enough to know what you meant (and indeed to deduce the entire history of your thoughts!) and simply not care, because you told it to care about something else.

For example, maybe you inadvertently programmed it to "be totally literally 100% certain that you've completed the task I told you to do". Then from its perspective, there's always a tiny, tiny chance that its sensors are being fooled or malfunctioning, so it can't be literally 100% certain that it's ever made a paperclip successfully, so by your programming, it should keep making more paperclips. This is independent of whether or not you wanted it to do that: it's a result of what you programmed it to do and what you told it to do.
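A toy model of that failure mode (the sensor reliability and thresholds are invented numbers; this is an illustration, not anyone's actual agent design): with imperfect sensors, confidence never reaches literal certainty, so a 100% threshold never halts.

```python
from fractions import Fraction

SENSOR_RELIABILITY = Fraction(999, 1000)  # each check is right 99.9% of the time

def confidence(n_clips):
    # Probability that at least one attempted paperclip truly succeeded.
    return 1 - (1 - SENSOR_RELIABILITY) ** n_clips

def clips_needed(threshold, cap=200):
    """Smallest n with confidence(n) >= threshold, or None if never reached."""
    for n in range(1, cap + 1):
        if confidence(n) >= threshold:
            return n
    return None

print(clips_needed(Fraction(999999, 1000000)))  # 2: six nines of confidence is cheap
print(clips_needed(1))                          # None: literal 100% never arrives
```

Near-certainty costs two paperclips; literal certainty is unreachable at any number, so the agent as programmed keeps going.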


If a human being is smart enough to realize that you don't want to paperclip the entire world, then surely a super AI would be smart enough, by definition. Part of intelligence is knowing when to ignore all those tiny chances. Human intelligence is less brittle than artificial because we can handle ambiguity and know when it's ridiculous to continue a task, just because there might be some small chance that it's not finished.

We also know that paperclipping the world will get in the way of other goals, like going to the show or making money.


I did indeed say that a superintelligent AI could realise that we don't want to paperclip the world. But it need not care what we want, unless we're really, really careful about how we program it.

What is "ridiculous" about continuing a task because we're not certain that it's done yet? It's only your human moral system saying that. Just because humans usually don't value things enough to pursue them to the exclusion of all else…


I tell my super intelligent assistant, who's much better at understanding human language than Siri, to make me some paperclips.

It understands some to mean more than one and less than infinity. In fact, some means less than "a lot". The meaning of a lot depends on the context, which happens to be paperclips for me.

What is "some" paperclips for me? It depends on how many papers I might need to clip (or whatever use I might have for paperclips). My super intelligent assistant would be able to work out a good estimate.

After having an estimate, it can go make me "some" paperclips, and then stop somewhere short of paper clipping the entire world.

Alternatively, it could just ask me how many "some" means.


You're still assuming that the assistant cares about doing what I mean, rather than doing precisely what I say or doing what I programmed it to do. That only happens if we're really, really careful about how we programmed the AI in the first place. I grant you 100% that the AI probably knows precisely how many paperclips I want, but you're assuming that it wants the same number of paperclips that I want.

What an agent considers to be "good" is orthogonal to how intelligent that agent is. An agent of arbitrary intelligence can have arbitrary goals; the goals of an intelligent agent need not in principle look anything like those of a human. The only reason a superintelligent AI's goals would look like those of a human is because the humans very, very carefully programmed them into the AI somehow. Very, very careful programming is not a feature of how humans currently approach AI research.


- Hill climbing
- Local maximum
- Gradient descent

I don't pretend to have any sort of expertise in these sorts of discussions, so I thought I would throw out some easily wikipedia'd terms that seem to back your thoughts regarding convergence vs. exponentiation.
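In that spirit, a minimal hill-climbing sketch (the two-peak function and step size are made up) showing convergence to a local maximum rather than unbounded ascent:

```python
def hill_climb(f, x, step=0.01, max_iters=100_000):
    """Greedy hill climbing: move toward the better neighbor until neither improves."""
    for _ in range(max_iters):
        left, right = f(x - step), f(x + step)
        if left <= f(x) >= right:   # no neighbor is better: a local maximum
            return x
        x = x - step if left > right else x + step
    return x

# Two peaks: a local maximum near x = -1 (height 1) and the global
# maximum at x = 2 (height 3). Starting at x = -2 we climb the nearer,
# smaller hill and stop there.
f = lambda x: max(1 - (x + 1) ** 2, 3 - (x - 2) ** 2)
x_star = hill_climb(f, x=-2.0)
print(round(x_star, 2))  # -1.0: stuck on the local peak, global peak never reached
```

The climber converges and stops; nothing in the dynamics produces runaway improvement.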


No. It means you could program an AI to a) make paperclips and b) increase its own intelligence. Assume you program (a) incorrectly (in the sense of lacking the appropriate safeguards about when not to make more paperclips) but (b) correctly. Then, it doesn't matter that the AI will eventually understand what you want perfectly, because what it will want to do is to keep making paperclips. The whole argument is that if you have a self-improving agent with a goal and the ability to improve in ways you can't predict at all, then you better make sure its goal matches yours.

Of course, the obvious solution is: "make the objective function equivalent to 'do what I want you to do'", but the problem is we might not know how to encode that without help from a super-AI.


Even "do what I mean" is not obviously correct. Yudkowsky used to advocate the idea of "coherent extrapolated volition": the idea of what I would want the AI to do, were I much more intelligent, knew more, could think faster, were more how I wished I were, and so on.


"Do what I mean" is correct in that it reduces super-human AI to the same category of other human technologies: as good or as bad as whatever we choose to do with them. "Coherent extrapolated volition" might be preferable, but if we could get to the first, we could then ask for the second.


I don't think the intelligence aspect is the biggest problem with that paperclip maximiser concept. It's the physical ability to actually turn stuff into paperclips. Or whatever else the nearest analogy would be.

Doesn't matter how determined the machine is to make them, the rest of the world won't exactly lay down and let it. Nor would its goals often be possible with the amount of resources available in its surroundings.

It's basically the cat problems from this tongue in cheek article:

http://idlewords.com/talks/superintelligence.htm


You make a good point, but it seemed pretty clear to me that the author was mostly discussing the (2) scenario as you labeled them. To me, his analysis was very cogent, considered within the broader framework of how we ultimately "just don't know" about some of this stuff.


Super intelligence is a cargo cult for some (you know who I am talking about), but that doesn't mean it won't happen to some degree.


I think it is OK to point out obvious errors in someone's approach when they are trying to create something new. But in this post, all I can read is that you cannot create superhuman AI just because the author thinks it is not possible. I don't think I read any real arguments.

All he is doing is trying to convince us all that it is not possible to create AI.

Hopefully nobody is convinced by this post not to try to create a superhuman AI. Most of us will fail but at least one will succeed. I don't think it is any exaggeration to say that this will probably be our last great invention, for good or for bad. Of course, I may just be biased given my own interests in AI.[1]

[1] https://news.ycombinator.com/item?id=14057043


Why do people like Kevin Kelly? Everything I've ever seen him say is consistently misinformed and poorly thought out. I tried to read one of his books recently, because I heard a number of people recommend it, and I couldn't even finish it (highly unusual for me).

Basically every point he makes in this post is just fundamentally wrong in one way or another. He clearly has no understanding whatsoever of what he's talking about, on the technical, biological, or psychological sides. He's just saying things that seem true to him, with zero context or understanding of any of the issues involved.

> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

Multi-dimensional vectors have magnitudes just like scalars do. When will people get over this whole "intelligence is not one thing, therefore you can't say anything at all about it" nonsense?
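To the parent's point, and purely as illustration (the dimensions and numbers are invented): multi-dimensional profiles can still be compared, either by magnitude or by dominance on every axis.

```python
import math

# Hypothetical "intelligence profiles" across three made-up dimensions:
# (pattern recognition, planning, language).
human   = (7.0, 6.0, 9.0)
machine = (9.0, 8.0, 9.5)

magnitude = lambda v: math.sqrt(sum(x * x for x in v))          # Euclidean norm
dominates = lambda a, b: all(x >= y for x, y in zip(a, b))      # better on every axis

print(magnitude(machine) > magnitude(human))  # True
print(dominates(machine, human))              # True: "smarter" on every dimension
```

"Smarter than humans" is perfectly meaningful for any agent whose profile dominates ours componentwise, whatever the dimensions turn out to be.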

> Humans do not have general purpose minds, and neither will AIs.

False absolutism. Human minds are certainly more general purpose than any current AI. When an AI has a mind that is more general purpose than ours, I think it's fair to call it a general purpose AI.

> Emulation of human thinking in other media will be constrained by cost.

According to who? The only person that could answer that would be someone who already knew how to emulate the human brain. Although, come to think of it, some 50% of the human population are able to create new brains, at quite little cost. So it is empirically possible to synthesize new brains extremely cheaply.

> Dimensions of intelligence are not infinite.

Lol, according to who? What does this even mean?

> Intelligences are only one factor in progress

Sure. So what?

There are plenty of perfectly legitimate, well thought out, informed critiques of AI fear mongering. This, however, is not one of them. This is garbage.


> When an AI has a mind that is more general purpose than ours, I think its fair to call it a general purpose AI.

Can you give an example of an intelligence that is more general purpose than human intelligence?


I didn't claim one existed.



