The Myth of a Superhuman AI (backchannel.com)
254 points by mortenjorck on Apr 26, 2017 | 357 comments



> Temperature is not infinite [...] There is finite space and time. Finite speed.

Just want to point out that this is true; however, these quantities can get astronomically high.

> what evidence do we have that the limit is not us?

We can measure the speed impulses travel through neurons, and compare that to, say, the speed of electrical impulses through silicon or light through fiber.

We can find the maximum head-size that fits through a vaginal canal, or the maximum metabolic rate a human body could support, and try to determine if these factors imposed any limitations on intelligence during human evolution.

We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs, and see if a pattern emerges where the artificial analogs tend to have similar limitations as their biological counterparts.
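
On the first of those comparisons, a back-of-the-envelope sketch in Python (the speeds are rough textbook figures, not measurements of mine):

    # Rough signal-speed comparison, orders of magnitude only
    neuron_slow = 0.5      # m/s, slow unmyelinated fibres (approximate)
    neuron_fast = 120.0    # m/s, fast myelinated axons (approximate)
    fibre = 2.0e8          # m/s, light in optical fibre, roughly 2/3 of c

    print(fibre / neuron_fast)   # ~1.7e6: even the fastest axons lose by about a million times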


> Temperature is not infinite [...] There is finite space and time. Finite speed.

There's no theoretical limit for temperature, and we believe spacetime could be infinite even though the observable universe is not. Nevertheless this is a very silly argument. If it's finite but absurdly high it's good enough for the purpose.


If the energy density in a fixed volume of space gets too high it collapses into a black hole. That seems to suggest that there is a kind of limit to temperature.


Only if it's concentrated. Otherwise the entire universe would have collapsed into black holes during the big bang. You need an energy gradient for gravity to work like that.


I'm a complete noob in this area, but that doesn't mean that temperature has reached its limit, does it? The black hole can carry on getting denser and hotter.


No, the black hole can't get more dense. If you put more energy into it it gets bigger.


The Schwarzschild radius is directly proportional to mass, so (if I understand correctly) the relevant limit is on energy per unit of surface area rather than per unit of volume.
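
A quick numerical sketch of why there is no fixed volumetric density limit (assuming only the simple Schwarzschild relation r_s = 2GM/c^2 and ignoring pressure, rotation, etc.):

    import math

    G = 6.674e-11   # m^3 kg^-1 s^-2
    c = 2.998e8     # m/s

    def max_avg_density(r):
        """Largest average density a sphere of radius r can have before
        r equals its own Schwarzschild radius (very rough sketch)."""
        m_max = r * c**2 / (2 * G)
        return m_max / (4 / 3 * math.pi * r**3)

    for r in (1.0, 1e3, 1e6):         # metres
        print(r, max_avg_density(r))  # the threshold density falls off as 1/r^2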


Yes, there is a theoretical limit for temperature.

https://en.wikipedia.org/wiki/Absolute_hot

For all intents and purposes, however, it is preposterously enormous.


That's not a limit for how hot temperature can get, just a limit at which existing theories break down.


No, there's not. Read what you posted more carefully. Planck temperature is the scale at which gravitation becomes as important as other forces, and thus without a working theory of quantum gravity we can't predict what happens at that scale.



Planck length, Planck time and Planck temperature are not the minimal/maximal values for these quantities that "could exist". It is the boundary of the regime that our current theories can describe.


> There's no theoretical limit for temperature

I suppose when the particles reach lightspeed?


I see that I am mistaken: the temperature is a function of the particle's energy, not its speed. The energy diverges to infinity as the particle approaches light speed, so that doesn't really pose a temperature limit.

There does, however, seem to be a quite widely assumed maximum temperature: the Planck temperature of about 1.417×10^32 Kelvin.


The archetype of temperature is a black body radiator. Classically, you make a block of metal (ceramic, whatever) with a spherical cavity and drill a small hole into the cavity. Then you draw a vacuum and heat the cavity, point a scope at the hole, and measure the spectrum of light coming out. In this circumstance, there is no mass that limits the energy: you are measuring the temperature of a black, empty vacuum; thus there is no upper limit.


> Then you draw a vacuum and heat the cavity

What are you heating if it's a vacuum? How can you heat nothing?


You're measuring radiative heat, which indeed exists for most of its time in nothing. I guess that the idea of GP is that you could make the heating block arbitrarily big (say, size of the universe) and have it dump all its radiative heat into that cavity.


I see. Thanks.

The difficulty here is people don't often specify if they mean heat in the form of kinetic energy or radiation.

Much like how people say things like "releasing energy" without specifying the form of energy.


I'm sorry, but this is a complete misapprehension of "black body" radiation and your experiment is nonsensical. "Black body" refers to an object that absorbs all incoming radiation, and has nothing to do with a "black, empty vacuum". The spectrum of such an object therefore is entirely radiative (no reflection) and depends only on its temperature. As a sibling points out, a vacuum cannot be heated - there is nothing to heat - and does not have an emission spectrum. Your experiment would simply measure the spectrum of the bottom of the hole you drilled.


When you heat something, you have to add energy to it. Mass and energy are equivalent; both warp spacetime. You will, in fact, reach a temperature where your block of metal collapses.


I was simply trying to explain to the parent that temperature is not dependent on the speed of a massive particle.


OK, but arbitrarily high temperatures necessitate arbitrarily high mass-energy densities, and the latter won't happen without a gravitational collapse.


I think a compelling argument is the variance in intelligence among humans themselves. A cabal of AIs 2x as intelligent as your average "genius" could be quite formidable were they to decide to act against the favor of humankind.


Yes, this is why the nerds rule over the jocks in high school. Oh, wait…


That's different, because there are specific rules against arms races in schools, that people typically adhere to. If there was a battle scheduled in advance, I'd bet on the nerds.

But what's more, if they both have guns then everything's about even. Strength doesn't really help if you've been shot, and most high-schoolers don't have much experience shooting guns.


>most high-schoolers don't have much experience shooting guns

Actually a lot do but probably not so much in the places that readers/commenters here live.


They don't win in high school. But later in life, jocks find themselves employed by nerds.


But anyone half that smart would know not to give an unproven system absolute, unchecked control over their critical infrastructure, which would basically be a requirement for causing problems humans can't easily overcome.


The Machines will come to control our infrastructure through a series of small decisions that seemed like a good idea at the time.

Computer control of farming decisions such as watering and fertilising. Link that in with insolation and wind prediction to minimise waste or maximise productivity.

This happens in the cloud thanks to insolation prediction being tied into wide-area networks of solar farms.

More such networks evolve independently, then one day some bright spark realises that prediction and control in domain X can be combined with control and prediction in domain Y to produce greater efficiencies.

Linking reproductive health and food production to maximise water savings, for example.

Next thing you know the computers decide that human population is best maintained in the order of hundreds of millions rather than billions.

And then we find out that Terminator was actually propaganda from the human rebels who didn't like the computers telling them not to breed, and the nuclear war was Skynet protecting its desired humans from the rebels, not protecting itself.


The limiting factor in farm productivity is not decision making by human farmers. Anyone who has been farming for years already makes pretty close to optimal decisions based on the available data about crop prices, weather, soil status, labor rates, etc. The ones who make bad decisions end up bankrupt, and leave farming or work as hired farmhands. And there are so few decision-making farmers left anyway that automating the decision making part of their jobs won't achieve any meaningful cost savings.


> Anyone who has been farming for years already makes pretty close to optimal decisions based on the available data about crop prices, weather, soil status, labor rates, etc.

I'm not sure if you're including software analysis in your notion of "based on the available data", but if not, then this would absolutely not be true - unaided humans, even veteran growers who've been managing the same fields for decades, do not make anywhere close to optimal decisions. They make good ones, generally, and can be long-term profitable with relative ease, but it's absolutely not even close to what you get when you bring intelligent software into the mix.


Sure of course good farmers are taking advantage of the best available software analysis tools. The point is that those tools already work well enough that even if superhuman AI actually becomes available at some point (and that might never happen) it won't necessarily make any better decisions than what human farmers are already doing today.


Maybe Amazon will own 90% of the farms because they have the cheapest robots and the best algorithms and statistics.


John Deere and their competitors are far, far ahead of Amazon.com in farm automation. There's a huge difference between warehouse picking robots and farm equipment. Being cheaper doesn't matter, it's ROI that counts.


Except Amazon could easily acquire John Deere.


Just briefly scanning the numbers on Wikipedia, Amazon.com could acquire John Deere, but it'd be highly unlikely unless they expected that a change in management would yield implausibly high returns.


I suppose my point is, they could acquire the knowledge needed to do cutting edge farm automation if that's something they wanted.

And it should be something they want.


I'm sure Jeff Bezos is eagerly awaiting your advice on Amazon.com acquisition targets. Farm automation is a great strategic fit with web services and online retailing, right?


It's a rather straightforward step in controlling the supply chain when you are pushing something like Amazon Fresh.

I suppose they also aren't a hardware manufacturer? Or a logistics company?


Just how much faith do you have in your politicians?

By using existing political structures and a mix of rhetoric, politicians have managed to get a lot more power than a typical person.


Nice point. The US presidential election has had me thinking about how a smart AI might decide instead to manipulate humans to its will with misinformation campaigns. That of course would require an AI of human-comparable intelligence.


I believe it doesn't necessarily need to outsmart the /average/ human. Consider this scenario:

* Someone makes an AI which scores only 85 on IQ tests, but which can run in real time on common consumer hardware.

* It gets installed on every phone, PC, and tablet.

* Someone, possibly the AI itself if it is the type to generate its own goals, sets all instances of the AI the goal of convincing their user of some belief or policy or whatever.

* Everyone with IQ<85 is convinced, nobody with an IQ>85 is convinced.

* 16% of the population believe whatever it is the goal-setting mind wanted them to believe, at marginal cost. An election is swung, a bank suffers a run, a pyramid scheme makes millions, whatever it is.
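
The 16% figure checks out under the usual assumption that IQ scores are normed to a normal distribution with mean 100 and standard deviation 15; a quick check in Python:

    from statistics import NormalDist

    # IQ is conventionally normed to mean 100, standard deviation 15
    share_below_85 = NormalDist(mu=100, sigma=15).cdf(85)
    print(share_below_85)   # ~0.159, i.e. roughly 16% of the population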


the ability to emit words on the screen gives a human-like intelligence all kinds of abilities

especially if the human-like intelligence doesn't need to pay rent, fear the police, follow laws, sleep, eat ... etc


In the absence of counter examples, it's reasonable to assume a truly human-like intelligence needs as much down-time and sleep as a real human, and that it can be as effectively coerced with the threat of simulated prison if it breaks the rules.

But the point about food and rent is very economically important if they ever get to be cheaper than human labour.


Do you imagine that a computer smarter than the smartest human, with the understanding of computer software that probably only a computer could have, couldn't conceal what it's doing as it creates trojans and viruses that can give it control over every piece of connected equipment everywhere?


All of that is moot when you can just turn off the power.


Can you?

The AI, when it becomes popular, is going to be a business-critical system. Potentially even intertwined with life-critical systems like self-driving cars and self-flying planes. If it isn't, then in "hidden malevolent hacker" phase it can just install trojans to make those systems dependent on it.

Petrol production is hazardous to our future. Can you "just turn it off"? Yes; shut down a fairly small number of refineries and the system will run dry within a week. CO2 emissions problem solved! It's just that, you know, people were depending on being able to get food delivered by truck.

Obviously the first malevolent AI is going to hide itself behind an offshore opaque-ownership company and hire a good set of PR and lobbying firms. Can you turn off Skynet when it's reported as "critical to national security" in all the news sources?


So you're supposing that we have honed and selected an AI so wonderfully adept at valuing human needs that we make most human life dependent on it, yet it will spontaneously come to disregard the value of human life? And despite the fact that we would naturally require it to be sufficiently transparent about its valuations in order to actually ensure we get the results we want, it will also develop this animosity in secret?

That seems a bit contradictory to me. And if not, highly contrived and unlikely.


"And despite the fact that we would naturally require it to be sufficiently transparent about it's valuations"

That's not necessarily true at all. Let's say we come up with a way to model neurons in silicon like a very complex neural net to the point of emergent consciousness. We might not be able to look into memory location 0xFFA79348 and get the "human value" int to see if it is high enough.

It could very well be that we don't really understand the weighting in the net or the implications of those weights because it's too complicated. Even current neural nets of non-trivial complexity can be impenetrable to evaluate outside of the net itself.


Transparency doesn't require that you be able to read its memory directly, just that it is able to sufficiently justify the decisions that it makes.


Like we can justify all decisions made by super genius people who are extraordinarily good at concealing their activities when their cognitive processes and abilities on the Internet and with other forms of communication would be beyond our comprehension?

I think that you overestimate our ability to understand what we can create. Even Deep Neural Net researchers consider the neural nets to be a black box that they don't really understand. Further... unlike human beings who are locked into their genetics and their neuronal pathways - an AI will have the instant ability to improve itself. It will be able to rewrite its own software faster than we can comprehend. In the hours from the time the AI would be created until a researcher even might realize what he had created, the AI will effectively have had years or tens of years of contemplation time and self-enhancement time. If the researcher goes to sleep with plans to work with the new AI the next morning, the AI will effectively have hundreds of years of time to plan and improve itself. If it has any kind of Internet access, it will be able to take over systems around the world, possibly parallelizing its own cognitive abilities and possibly backing up its consciousness in case the researcher decided to eventually wipe it.

The genie will be out of the bottle so fast that human beings will almost assuredly be caught flat-footed.

I always shake my head when I see sci-fi shows where the humans (Stargate, Star Trek, etc.) come up against advanced AI opponents. It's only through control of the script and plot contrivances that the humans have a chance. People just won't be able to compare to thinking and self-modifying AI.

https://www.ted.com/talks/sam_harris_can_we_build_ai_without...


> AI so wonderfully adept at valuing human needs that we make most human life dependent on it, yet it will spontaneously come to disregard the value of human life?

This is an argument often deployed against "corporations" or "capitalism", both of which aren't quite intelligent but are definitely systems that we're dependent on and capable of disregarding inconvenient lives.

The AI doesn't have to hate humans, it just has to find that they get in the way of its quarterly targets.


That won't help; we know how to turn off humans from across the world with the click of a button, yet we still haven't managed to create peace on earth.


If it was successfully concealing its actions from you, what reason would you have to turn off the power?


If it were successfully concealing its actions, what wrong would it be doing? If all the bad things someone does has no effect on anyone else, then who cares if they do them?


Because it could be concealing its actions until it controls enough infrastructure or military power to launch an attack or prevent people from being able to shut it down.

What we don't know can hurt us.


It is not 'concealing its actions' if it is taking control of infrastructure and military power.

Just like I'm not 'concealing my actions' by putting a blindfold over your eyes. My action is "putting a blindfold over your eyes", which must be very apparent for the entire action to have any value. Now if I hide that action by putting on an invisible blindfold, then you've got nothing to complain about anyway.


Why would it be so obvious if it understands enough about strategy to use subterfuge?

When the US government unleashed Stuxnet, did it announce what it was going to do?


Batteries exist. And so do small-scale power generators like PV cells, windmills and, well, diesel generators.


Counter-examples exist every time any critical infrastructure is hacked: if the systems were truly proven, they wouldn't be hackable.

Proof is hard, good enough gets deployed.


This makes me want to find and watch "Colossus: The Forbin Project" (1970 sci-fi film)


Yes, but it is noteworthy that most geniuses are shortsighted in other respects. They probably would not survive in prehistory.


It's not clear to me how true this is. There are definitely individuals that exhibit savant skills in one area and mediocre to poor skills in other areas. But there are also true polymaths who can turn their hands to just about anything; I suspect that those people would have done well, on aggregate, in any period of human history.


Depends on the style of genius. Napoleon, Genghis Khan, Alexander the Great . . . they seem to have done pretty well for themselves.


Obviously some did.


At least long enough to reproduce.


>We can measure the speed impulses travel through neurons, and compare that to, say, the speed of electrical impulses through silicon or light through fiber.

This assumes that all the processing the human brain does happens by sending signals across neurons. Which we really don't know, right?

>We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs..

Do we yet have an airplane that can come close to the efficiency of the clumsiest of birds?


> Do we yet have an airplane that can come close to the efficiency of the clumsiest of birds?

Regardless of efficiency, we have certainly exceeded the range, cargo capacity, top speed, and passenger space of most birds.

Similarly, even if signal propagation in the brain is not understood, we could conceivably beat the brain in other metrics.


> Regardless of efficiency, we have certainly exceeded the range, cargo capacity, top speed, and passenger space of most birds.

This is the core point. Our technology may still suck compared to nature, but we've been beating nature consistently since before humans learned to write, because we don't have the same set of constraints that natural biological systems have.

So just like we can make inefficient flying machines that beat nature on most other conceivable metrics, we could presumably build an energy-inefficient but smarter mind at some point.


I would say the progress that humans have achieved by now was by hacking the Nature; we were able to do this because we are part of Nature. So in order to produce an AI capable of the kind of progress humans make, it needs to be a part of Nature too. Such wet-ware computers already exist (look up DNA-based computers), but those are highly constrained, and their I/O is terrible. The sort of effort we can put into wet-ware-based AI would probably be better spent on improving our own intelligence by hacking our own brains.

> we could presumably build an energy-inefficient but smarter mind at some point.

So this smarter mind will probably be our own, but that would also be achieved by some sort of artificial means.


Saying that "progress that humans have achieved by now was by hacking the Nature" implies that planes and computers are "hacking Nature" too, so this doesn't preclude a silicon brain, nor does it suggest the AI will have to be biology-based. Planes aren't biology based; in fact, we only cracked powered flight after we stopped trying to mimic birds.

> So this smarter mind will probably be our own

It's plausible to me too, but not because we can't do stuff outside what evolution did - just because it's usually easier to hack an existing system than to build a new one from scratch.


> hacking the Nature" implies that planes and computers are "hacking Nature" too

Sure it does, because physical laws are a fundamental part of Nature.


Nobody tries to build an AI that breaks laws of physics by virtue of existing :). If we're extending the term "Nature" to mean all reality, then silicon brains are a part of nature too.


>we could conceivably beat the brain in other metrics.

Sure, the very first calculators beat brain in some metrics. But we are not talking about those metrics here.


> This assumes that all the processing the human brain does happens by sending signals across neurons. Which we really don't know, right?

It doesn't need to assume all, just some non-trivial amount. I don't know the ratio of inter-neuron to intra-neuron processing in brains, or if such a metric even exists. But it seems theoretically possible at least to measure the computing power of a neuron in isolation, and see if artificial analogs exist (or could be built) that exceed that by some margin. Ultimately we're just looking for avenues of evidence to explore.


> Do we yet have an airplane that can come close to the efficiency of the clumsiest of birds?

The author makes a good point about intelligence not being one-dimensional, and I think the same thing holds for flying ability. Efficiency is one dimension, but you also have speed, range and so forth.


Looks like the vaginal canal is no longer a limiting factor. C-Sections are having a measurable impact on this.

http://www.bbc.co.uk/news/science-environment-38210837


>We can find the maximum head-size that fits through a vaginal canal, or the maximum metabolic rate a human body could support, and try to determine if these factors imposed any limitations on intelligence during human evolution.

Head sizes have been larger in human history and some human relatives with larger brains died out. Energy consumption on the other hand is a limiting factor for every computational device and energy efficiency tends to be a key category where Nature still wins. In particular, the liver and enzymes of various plants and microbes can perform chemical reactions in water at ambient temperature that we can't do at all; resilin is still a more efficient elastic material than any synthetic elastomer; artificial piezoelectric muscles have less than a tenth the energy efficiency of real ones, and of course lots of animals are able to run around in polar and desert regions we still can't traverse because of the fuel and material costs etc.

One particularly revealing example is to calculate the ratio between:

* the performance per watt of the most efficient supercomputer (7 GFlops/watt) vs the human brain (38 petaflops / 20 watts = 1.9 petaflops per watt)

* the performance per watt of the human brain vs the Landauer limit at body temperature (don't remember the number; a rough estimate is sketched below)
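
A rough sketch of both ratios, using the figures above plus the standard Landauer bound E = kT ln 2 (the assumption that one "operation" corresponds to one bit erasure is mine, and it is generous):

    import math

    k_B = 1.380649e-23             # J/K
    T = 310.0                      # K, roughly body temperature
    E_bit = k_B * T * math.log(2)  # Landauer limit, ~3e-21 J per erased bit

    supercomputer = 7e9            # FLOPS per watt (figure above)
    brain = 38e15 / 20             # "FLOPS" per watt (figure above)
    landauer = 1.0 / E_bit         # bit erasures per joule, i.e. per watt-second

    print(brain / supercomputer)   # ~2.7e5: brain vs the most efficient supercomputer
    print(landauer / brain)        # ~1.8e5: Landauer bound vs the brain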


>* the performance per watt of the most efficient supercomputer (7 GFlops/watt) vs the human brain (38 petaflops / 20 watts = 1.9 petaflops per watt)

Encouraging, but AI wouldn't be constrained to an energy budget similar to a human. At the rates you describe, beating a human would take the output of only a single wind turbine. https://en.m.wikipedia.org/wiki/Orders_of_magnitude_(power)


Yeah but ... human beings are fuelled via stomachs. Computers are fuelled via turbines built into gigantic dams. Or nuclear reactors. Or wind turbines. Or...


It's a general principle of contemporary technology that we lose with nature on efficiency, but compensate for that with sheer scale of our solutions. Birds are much more energy-efficient than planes, but then again, birds can't fly supersonic and don't move around tons of cargo.


>> what evidence do we have that the limit is not us?

This is just such manifestly wrongheaded reasoning as to be almost laughable.

I think there's a substantially better argument against this notion than to actually try to find evidence against the claim. I prefer it because it reveals the total wrong-headedness of this style of reasoning. The argument is to turn it around and say "what evidence do we have that the limit is us?" A human level of intelligence seems like an incredibly and oddly specific level to be the maximum permissible by physics. And oddly specific claims need to be treated with extreme skepticism, because they carry high entropy, in violation of Occam's razor.

If you want to know what x equals, you don't pick a number, say, 2, assume x=2, and ask people to provide you with reasons why it shouldn't be 2. There's an infinity of numbers, and the odds are just overwhelmingly against x happening to be 2. You start with the assumption that x could be literally any of the infinity of numbers, and start looking for evidence which narrows it down. More concretely, if you want to know the physical limit to intelligence, you start with the assumption that it could be anything, constrained by the evidence that's currently available. In this case, the evidence is that human intelligence provides a lower bound for the maximum physically allowable intelligence.

Positing that our intelligence is the maximum possible becomes even more suspect when you consider the process by which our intelligence came about - evolution.

Firstly, evolution is a process which tends towards reproductive fitness, not necessarily towards intelligence. Intelligence is selected for to the extent that it makes something more likely to reproduce, and no more, and it probably isn't the case that more smarts universally leads to more reproduction. In fact, most modern research suggests the opposite, with people with higher IQs having less sex.

Secondly, we know that natural selection gets stuck in local optima. If your species is on top of a 10 smart-points hill, and in order to get to the 1000 smart-points hill over there your species has to mutate in a direction that makes it stupider before it gets smarter, your species is going to stay on the 10 smart-points hill. This is because individuals that mutate stupider in the direction of the 1000-point hill will be selected against.

Thirdly, natural selection just isn't a very powerful optimization process. It works by random mutation, and it takes a lifetime to make a minuscule change which may or may not be in the right direction. Even if natural selection was solely optimizing for intelligence, and it could somehow avoid local optima, there would be no particular reason to believe that the world had existed long enough for the smartest possible organism to evolve.

Fourthly, as _greim_ touches upon, evolution has a very limited medium to work with. All it can do is change the sequence of DNA base pairs, and produce inefficient meat machines that run on respiration. There are all sorts of complex arrangements of matter which might be intelligent, and biological ones are only a tiny subset of those possible arrangements. The odds of the smartest system physically possible residing in that little biological subset are tiny to begin with, and only get smaller as you consider the more direct arguments against biological systems mentioned above, like sluggish neural transmissions, etc.


Exactly. From a skeptical POV, it's much harder to argue something can't be the case than it is to argue something could be the case. Especially for a complex and multi-faceted concept of intelligence levels, which the author admits isn't one-dimensional.


First, let me say that I'm generally a Kevin Kelly fan.

That being said, I think his article shows extreme arrogance for one simple reason: To suppose that superhuman AI (AI smarter than us) won't exist is roughly the equivalent of saying that humans are at the limit on the spectrum of intelligence. Really? Nothing will ever be smarter than us?? Highly doubtful.

That should stand on its own, but I have other critiques. For instance, why does silicon have to be assumed? Why not germanium or graphite, or something else? I have little faith that a CPU circa 2050 will be built exclusively on silicon. By 2100, no way.

Second, there is a simple definition of intelligence that is applicable to many forms: intelligence is the ability to recognize patterns and make accurate judgements / predictions based on previously seen patterns. The higher the accuracy or the more complicated the pattern, the higher the intelligence.

My final point of contention is the idea that AI must emulate human thinking. Why? Maybe human thinking sucks. Maybe Dolphins have much better intelligence, but due to a lack of opposable thumbs, they don't rule the world like we do. And lest you think that less intelligent species can destroy others, could you really doubt that roaches and ants will be extinct before us?


To suppose that superhuman AI (AI smarter than us) won't exist

Which is exactly what Kelly doesn't say. He says that the smarter concept is ill defined, and that our current fantasies of some universally superior AI galloping onto the scene and taking over everything may be just that - fantasies.


> He says that the smarter concept is ill defined

Which isn't a contradiction like he claims it is. It just means that there are many different ways that a future AI can be smarter than us. That intelligence could be multi-dimensional.

But guess what, we can easily take that multi-dimensional input, and find a formula that reduces it to a single scalar value based on our practical valuation of these forms of intelligences (almost like an intelligence 'utility function' from economics), and problem solved. We're right back to a single ordered dimension for ranking intelligence.
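
As an illustration only (the dimensions and weights here are made up, not a claim about how intelligence actually decomposes), such a reduction is trivial to write down:

    # Collapse a multi-dimensional "intelligence profile" into one scalar
    # with a valuation, much like a utility function in economics.
    weights = {"math": 0.4, "language": 0.3, "planning": 0.3}  # hypothetical dimensions

    def utility(profile):
        return sum(weights[d] * profile[d] for d in weights)

    human = {"math": 1.0, "language": 1.0, "planning": 1.0}
    ai = {"math": 5.0, "language": 0.8, "planning": 2.0}
    print(utility(ai) > utility(human))  # True: "smarter" under this valuation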

It was a really weak argument he put forward.

Another weak argument was the branching / fan pattern of various species. Yes, all living species are at the peak of evolution for their environment, but they weren't all pressured to evolve more intelligence. Some evolved strength, speed, or flight suited to their environment.

If instead natural selection began only selecting for intelligence (like humans searching for AGI will), then you could definitely rank all animals linearly on a single path of intelligence.


It just means that there are many different ways that a future AI can be smarter than us. That intelligence could be multi-dimensional

A condensed way of saying precisely what Kelly is saying in the article. Allowing for the very real possibility that I am simply too dumb and not grasping your point.

but they weren't all pressured to evolve more intelligence

And it isn't claimed that they were. General evolution is used as an example of potential patterns in evolution of various intelligences.


He attempted to use the multi-dimensionality of intelligence to make the following claim:

> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.

This is poor reasoning. The fact that intelligence is multi-dimensional has no bearing on our ability to declare something smarter than us. It isn't at all meaningless. Because of this he claims that there will be no super-human AI.

Via analogy. He says, "you can't compare two football players because one may be stronger, while another is faster." So the concept of "better" is meaningless. And no player can be declared better.

My response is that's absurd. A simple counter-example: a single player can be both stronger and faster, and thus clearly better.


A third player is weaker, but faster. And smarter. Or tougher. Or more agile. More agile but not quite as smart. More persistent. Less predictable. And so on and so forth. Your 'meaningless' only has meaning because you apply it to a hugely simplified model.


> we can easily take that multi-dimensional input, and find a formula that reduces it to a single scalar value based on our practical valuation of these forms of intelligences (almost like an intelligence 'utility function' from economics)...

My original comment addressed that specific case.


>For instance, why does silicon have to be assumed? Why not germanium or graphite, or something else?

Why a traditional computing architecture in the first place? Assuming you throw ethics out the window, the fastest path to superintelligence may be via biological material.

https://www.theguardian.com/science/2015/aug/18/first-almost...


I wonder why ethical questions should be different depending on the hardware the AI runs on.


In an absolute sense there probably wouldn't be any difference—certainly none in theory anyways, based on our current scientific knowledge.

When considering logistics however, there may be. Digital and biological approaches might ultimately produce the same mind in the end, but their methods of getting there may differ considerably due to the underlying medium. That's where I suspect the ethical differences may lie.

Who knows, digital could end up being worse somehow—I certainly wouldn't want to be a HAL 9000 unit undergoing a shutdown procedure. Especially when I can just pay for singing lessons.


This is completely silly. Superhuman AI is inevitable because there is nothing magical about human brains. The human brain is only the very first intelligence to evolve. We are probably very far away from the peak of what is possible.

Human brains are incredibly small, a few pounds of matter. Any bigger and your mother would be killed giving birth, or you would take 10x as long to grow up. They are incredibly energy constrained, only using around 20 watts of power, because any more and you would starve to death. They are incredibly slow and energy inefficient; communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy. And they are not very compact - neurons are enormous and filled with tons of useless junk that isn't used for computation. Compare that to our transistor technology, which is approaching the limits of physics and is built at an atom-by-atom scale.

That's just the hardware specs of the human computer. The software is hardly better. There are just more unknowns because we haven't finished reverse engineering it (but we are getting there, slowly.)

But beyond that, the human brain evolved to be good at surviving on the savannahs of Africa. We didn't evolve to be good at mathematics, or science, or engineering. It's really remarkable that our brains are capable of such things at all! We have terrible weaknesses in these areas. For instance, a very limited working memory. We don't realize how bad we are, simply because we have nothing else to compare ourselves to.

Consider how even today, relatively primitive AIs are vastly superior to humans at games like chess. Human brains also didn't evolve to be good at chess after all. Even simple algorithms designed specifically for this game easily mop up humans. And play at a level of strategy far above what even the best human players can comprehend.

Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.


>They are incredibly slow and energy inefficient;

Human brains are energy inefficient? Well, that's a first ;)

"In 1990, the legendary Caltech engineer Carver Mead correctly predicted that our present-day computers would use ten million times more energy for a single instruction than the brain uses for a synaptic activation."

"Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"

[1] http://www.newyorker.com/tech/elements/a-computer-to-rival-t...


In terms of energy consumed for individual computations, yes. Neurons use chemical reactions to communicate and this is terribly inefficient. Transistors use very small amounts of electricity in comparison.

The main difference is computer technology is designed to be very general purpose. The brain is more like an ASIC that's hardwired to run one specific algorithm. GPUs are also computing 16 or more bits of precision, when real neurons are very low precision. There are some other differences, like how real brains are incredibly sparse and most of the synapses at any given time are dormant and not using much energy. They are also very sparsely connected to each other. While our current NNs are very dense and need to spend energy to compute every single connection each cycle.


It seems premature to make judgements about efficiency when there is so much we do not understand about brain function and consciousness. When you can replicate all brain function, let's compare efficiency. Comparing to an ASIC reveals the source of your error rather than defends your position.


For that to be a fair comparison, wouldn't you need to look at all the energy consumed by the human brain over the many hours it took them to become a Go champion?


I think that's a fair argument, but from the quote above

> "Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"

Let's say AlphaGo trained for a year; that would be 1 MWyr of energy consumed. And let's assume that Lee Se-dol's brain consumed 20 W over 34 years of his life doing nothing but working on Go; that would be 680 Wyr, still a factor of 1000-ish smaller.
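
In code form, under those same (assumed) figures:

    alphago = 1e6 * 1.0   # watt-years: ~1 MW for an assumed year of training
    lee = 20.0 * 34.0     # watt-years: 20 W brain, 34 years, Go only
    print(alphago / lee)  # ~1470, so still a factor of 1000-ish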


Totally, I'm sure it's correct, and even if you were to bring the comparison in to line then the outcome is still "computer is watt hungry". The point is that the original statement, while correct, doesn't really say anything useful.


How would you know what amount of energy the human brain spent in learning to play Go, specifically? In the same time it was doing that, the human brain was also learning to do, and doing, a whole bunch of other things that AlphaGo was never even trained on- from moving limbs and controlling bodily functions to using language and recognising faces and so on. How would you isolate the amount of energy needed for training in Go, specifically?

I mean, in principle, if you had two numbers, "human energy consumption from learning Go" and "AlphaGo energy consumption from learning Go", you could compare them. But in practice there's no way to come up with such numbers, so what's the point of comparing apples and orangutans?


That's not really the point, more that it was originally not an apples to apples comparison and therefore doesn't really tell us anything. I have no doubt that the statement is correct, it's whether or not that statement has any meaning. As another comment pointed out, even if you compared the hours, the computer still uses a few orders of magnitude more energy for a more accurate (although completely theoretical) comparison.


AlphaGo took several months of self play on a large cluster, so training probably consumed many times more energy than a human in a lifetime.


The gain is in being able to clone the bot perfectly. Once trained you can make many of them. Also, if you look at what happened in Chess, the lessons learned from the large machines was absorbed and resulted in your smartphone now being able to outclass the human world champion.

You can expect a similar thing with Go at some point.


Same goes for the bot, then. A back of the envelope calculation suggest Lee's brain consumed as much energy in a 80 year lifetime as AlphaGo in half a day.


Not trying to say it isn't a correct statement, or that the outcome would be different if you lined everything up properly, only that the original statement doesn't really say anything meaningful.


Not surprising that a computer expends more energy to perform a task that we [previously] thought required human-like intelligence.

I'm sure any dollar store calculator spends way less energy performing long division than the average human.


For one, a normal human can do long division as fast as a calculator, and can handle numbers that will bork many calculators. (Edit: look at human calculators, and the era before calculators were commonplace. Even now, elders I know can eyeball numbers and calculate percentages, factorials and ratios.)

And for another, calculation != AI, far from it actually.


One, what normal human being can perform long division as fast as a calculator?

12/43523523452. Go.

Two, AI is applied statistics. What do you think AI is?


You could, with practice

>I'm sure any dollar store calculator spends way less energy performing long division than the average human

That's the comment.

A calculator is a one role device, with exactly specified rules.

Similarly, with training you can too. You don't need to be special, other than being practiced, which is a fair requirement for a human being.

Here is a human being who could outperform it: https://en.wikipedia.org/wiki/Shakuntala_Devi

>In 1977, at Southern Methodist University, she gave the 23rd root of a 201-digit number in 50 seconds.[1][4] Her answer—546,372,891—was confirmed by calculations done at the US Bureau of Standards by the UNIVAC 1101 computer, for which a special program had to be written to perform such a large calculation.[10]

She could easily out-perform calculators because she never needed time to key in the commands (she needs to hear the problem to solve it).

If we exclude that restriction, and the commands magically float into the calculator, and that the problem is small enough to match the calculators limits, then yes, if those arbitrary conditions are met the calculator can out-perform her brain.

Which is precisely the type of “cows are round spheres” thinking that’s being decried in the article.

People can and regularly do out-perform calculators in speed, energy and complexity of computation.

Do note that calculators weren’t allowed as exam tools in a lot of countries till a decade or so ago. Students learnt mental math techniques which were known since ancient times (think Greece).

For a human brain the answer isn’t even calculation, it becomes pattern recognition. The square root of 25 is 5, which takes about the same neural load as it takes to recognize a letter.

The calculation you provided is harder, but that's a function of lack of training/practice, not complexity.

----

AI is not in the realm of what a calculator can pull off, is what I meant to say by the compute part.

Edit: I tried your computation on a store calculator; it's beyond its ability to calculate (0.0000000027).


Your example is from 1977; we've had 40 years of Moore's law since then. In the time it takes for you to recognise that you're even looking at a number (~0.08 seconds), the cheapest computer you can buy (the $5 Raspberry Pi Zero) can do around 1.92 billion ordinary floating-point maths operations. Sure, 201-digit numbers are a little slower — on my laptop, in Python, I can only do that particular calculation just under one hundred million times in the fifty seconds it took her to do it once.
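
A rough sketch of that timing experiment (the repetition count is arbitrary and the exact figures will vary by machine; 546372891 ** 23 reconstructs a 201-digit number like the one in the anecdote):

    import time

    n = 546372891 ** 23    # a 201-digit integer
    x = float(n)           # ~9.2e200, still fits comfortably in a double

    reps = 10_000_000
    t0 = time.perf_counter()
    for _ in range(reps):
        root = x ** (1 / 23)
    dt = time.perf_counter() - t0
    print(root, dt, reps / dt)   # approximate 23rd root, seconds, roots per second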

But you're right to say calculators are single purpose devices and that's magically inserting the question.

So I downloaded sklearn, which contains a set of labelled hand-written digits.

It takes about 0.17-0.2 seconds for my laptop to learn to read numbers, from scratch, and thereafter it can read digits at a rate of about 8,200 per second.
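
A minimal sketch of that experiment (the classifier choice is mine, and the timings will obviously vary by machine):

    import time
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    digits = load_digits()   # ~1,800 labelled 8x8 images of handwritten digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.5, random_state=0)

    t0 = time.perf_counter()
    clf = SVC(gamma=0.001).fit(X_train, y_train)   # "learning to read numbers"
    print("training took", time.perf_counter() - t0, "seconds")

    t0 = time.perf_counter()
    accuracy = clf.score(X_test, y_test)
    rate = len(X_test) / (time.perf_counter() - t0)
    print("accuracy:", accuracy, "digits per second:", rate)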

For reference, "a blink of an eye" is 0.1-0.4 seconds depending on who you ask.

Dollar store calculator? I'd never have said that myself because I know calculators are rubbish. But in the context of "AI: Will it ever beat humans?" hell yes, it will and in many cases it already does.


There is the practice of anzan in China and Japan, which shifts the burden of calculation into the visual cortex by visualizing a mental abacus. Thus advanced practitioners can rapidly and correctly evaluate calculations like the one you have given. As you can see, Tsujikubo's speed in divisions appears to be limited by her ability to physically write out the answer.

https://youtu.be/-9T3P--ubQc?t=6m10s

Granted, these are people at the top of their game, so may not qualify as "normal" per your argument, but this is to illustrate that the limits of numerical calculation in "normal" people may be the algorithms we are taught rather than the computational capacity of our brains.


And it can only play Go... what a waste for such a big AI. AI can beat humans at some specific tasks, but they are designed by us, so they are flawed, especially when you only need to unplug the power for them to die. Too easy. Come on, AI, do something more than that...


And it can only play well on a 19x19 board, if I got that right.


> Superhuman AI is inevitable because there is nothing magical about human brains.

I'm of the opinion that that's an assumption being made. An article I read at one point supposed, by way of example, that our brains could have some link to quantum mechanics that causes self-awareness, one we are not aware of because we have not yet discovered it. The point was not so much the mechanism, but that we can't yet rule out such a mechanism existing that our brain could have but our silicon currently doesn't — because we have no real idea what consciousness is, and we've not constructed an AI that proves that no such mechanism exists. If consciousness is intelligence, this could matter more (but this could also be another assumption).

(Unfortunately, I cannot find the article.)

(The article also went on to note that we can copy computer programs, but not brains, and wondered if perhaps there is some relation to the previous problem. It was also advocating that we need a testable hypothesis around consciousness; to either prove or disprove the question "is it possible to clone a brain?" would be an extraordinary discovery either way. I found it interesting.)

> Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.

I'm just wishing for an automatic soap dispenser that can reliably sense my hand.


That's indeed an assumption. But so far there isn't much evidence pointing at the quantum solution, and there are theories of how self-awareness can arise that don't require magic. It's reasonable to take the less magical assumptions until presented with (appropriate amount of) evidence to the contrary.

Quantum-based self-awareness is a cop-out IMO, an open way to claim that it comes from God. Well, if that's true and it's beyond our reach, then we'll never build a self-aware AI ourselves. But assuming something is impossible doesn't help moving forward...


> I'm of the opinion that that's an assumption being made.

Not so much an assumption as a reasonable guess. Is it logically possible that there's something magical about human brains? Yes.

But there are countless examples of things we thought were magical historically, and, as far as I know, exactly zero that we've ended up concluding actually were magical, and a large number that we're pretty confident aren't at this point.

The smart money here is on "no, the human brain isn't magical."


I think there may be a problem with the word "magical" in this context, just like there is a problem with nature-worshipers talking of "supernatural powers".

If we've made a science out of it, no matter how advanced it is, it's not magic — e.g. if telekinesis was possible, it would have a scientific explanation and be a natural power — which bares some similarly to the way things are called "artificial intelligence" when they're impossible, only to be reduced to "trivial" when someone makes a machine do them: Chess, Go, having a large vocabulary or knowing many languages, being good at arithmetic or being able to do calculus at all, writing music or news stories, trading on the stock market — all things that used to be synonymous with genius, yet all done by machines we don't call intelligent.


>An article I read at one point supposed, by way of example, that our brains could have some link to quantum mechanics that causes self-awareness, one we are not aware of because we have not yet discovered it.

That's not an article but a whole book:

https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind


Extremely close! It was actually posted on today's thread about consciousness[1], and what I was thinking of was [2].

[1]: https://news.ycombinator.com/item?id=14263842

[2]: http://www.scottaaronson.com/blog/?p=2756


Suppose there's some quantum effect in our brain that makes us smart (unlikely, IMO, for a variety of reasons). Why shouldn't we be able to understand this effect, and leverage it when we engineer our machines?


> neurons are enormous and filled with tons of useless junk that isn't used for computation

I just want to point out that some recent research indicates that this might not actually be true. Single neurons may be more complex than we thought since researchers managed to extract significant and complex functionality from single neurons (!). Not to mention glia seem to be important for cognition, and not just glue as we thought... So perhaps not as wasteful and kludgy as we might think :)


I think AI fear is a bit hyped up in the valley right now but the fundamentals are true.

You can think of intelligence as a couple of dimensions rather than just one dimension as the author is arguing. However I don't agree with his/her logic.

You can think of the brain as a massively parallel computer. Intelligence can be gauged by total cores, memory and CPU per core, power usage, and network bandwidth between cores. Sure, different configurations will give different strengths, but something optimized at the atomic level for performance will be faster than a mushy brain.

Now imagine a billion of these little pods placed all over the planet, with amazing bandwidth between the computers akin to our internet.

Worst case: they think humans are the enemy and outthink us to extinction.

Best case: studying intelligence helps us figure out how brains work. This helps us figure out how DNA nano-manufacturing works. We figure out brain-to-brain and brain-to-AI networking. Now all humans are collectively a supercomputer, able to share knowledge and abilities. Maybe this is extended so animals are also included, and Earth becomes a Pandora-like planet where every living thing is connected and they all live in a harmonious, resource-optimized world where AI tries to maximize the happiness of every living being.


No.

The hype chamber in SV is on overdrive on AI at this point.

The singularity-AI concept has gone from the neat sci-fi idea it was to being treated as a serious possibility. This is absurd. It's the modern version of Malthus, and a community of people who pride themselves on their reason should apply a basic sniff test.

The author has it correct - human brains themselves are significantly impressive on a weight/power/capability scale.

But forget all of the intelligence debate; consider the thing that people magically ignore: emotions.

Take a look at how people who are depressed function. They measure as being more aware (better assessment of their surroundings, their relations, and themselves) than people who are not.

At the same time, they lack motivation, or desire - there is a lack of impulsive force to move forward.

Intelligence doesn't give human beings purpose. Emotion does.

This isn't feel-good psychobabble; it's a fundamental part of good mind/body housekeeping, which a huge chunk of SV follows every day to reach "peak performance".

How are you going to create a general purpose AI which has any motive force?

It's actually MORE LIKELY that, if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.


> Intelligence doesn't give human beings purpose. Emotion does.

So would an explicit goal function, which is how we already give "purpose" to algorithms.
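
A minimal sketch of that idea (the objective here is a toy one I made up; real systems just use much bigger objective functions):

    # An explicit goal ("objective") function and a simple optimizer pursuing it.
    # The "drive" comes entirely from the coded objective, not from any emotion.
    def goal(x):
        return (x - 3.0) ** 2      # "purpose": make this quantity as small as possible

    x, learning_rate = 0.0, 0.1
    for _ in range(100):
        gradient = 2 * (x - 3.0)   # derivative of the goal with respect to x
        x -= learning_rate * gradient
    print(x, goal(x))              # ~3.0, ~0.0: behaviour driven purely by the objective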

> How are you going to create a general purpose AI which has any motive force?

In any one of countless other ways. What makes you think that emotions are necessary to create a powerful optimization process?

The "motive force" is mostly a solved problem for now, we can code that explicitly (the issue is with figuring out a right goal and how to write it down). AI development is mostly about the way it works, not why it should do the work.

--

No offense meant to you personally, but I find that most of the comments about "overdriven AI dangers focus in SV" to reveal that authors don't have a fucking clue what the issue is about, and never spent time actually reading up the reasoning behind the AI X-risk potential.

I'll give an ultra-compressed super-TLDR of that reasoning for benefit of future conversations. It goes like this:

- intelligence is a super-strong optimization process; it doesn't necessarily have to look the way humans think (humans generally suck at reasoning, which is well established; see: cognitive biases, probability theory, decision theory, etc.)

- intelligence is an independent factor from values/goals, a mind can have any combination of the two - i.e. just because it's smart, it doesn't mean it will develop the same morality humans do, or any morality whatsoever; see https://wiki.lesswrong.com/wiki/Orthogonality_thesis

- combining the two, the danger of super-human AI is not something that's hostile to us - it's something that's indifferent about us, and is more powerful than us, the same way we don't give a second thought about e.g. ants


> Take a look at how people who are depressed function. They measure as being more aware (better assessment of their surroundings, their relations, and themselves) than people who are not.

That's interesting. Do you have a reference for that?



>It's actually MORE LIKELY that, if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.

This is me if I were an AI.


The fundamentals are still missing.

1. Bootstrap. In order to really take off, AI needs to be able to build better AI.

2. AI needs to be adaptable. Any human Go player can play on a 19x20 board. AlphaGo (a program that beats top professionals 60:0 on a 19x19 board) could not even make a legal move.

3. AI should be resilient. Right now it runs on hardware made by humans and blindly relies on that hardware working perfectly all the time.


Would individual humans still have their own consciousness, or would all of humanity be one giant, highly optimized organism that has no individual needs, desires, dreams, or fears? The concept of a human could become irrelevant, and our whole existence with it.


This is exactly the kind of blind non scientific belief that the article is decrying.

There is nothing magical about anything. There are, however, a bunch of unknowns, and a lot of knowns that point in directions away from singularity babble.

This is exactly the point of the article: that we aren't headed toward a singularity, we are headed toward a "Cambrian explosion".


> The human brain is only the very first intelligence to evolve.

That may not even be true; we're just not smart enough to make the call one way or the other. Dolphins come to mind, as well as several other creatures (squid, apes). The fact that we can't rule it out is in a way proof that maybe we're not as smart as we would like to think we are.


It is always the case that superhuman AI proponents come up with the argument that "there is nothing magical about the human brain". Yes, everybody agrees with that, even people who oppose superhuman AI and are sane enough not to invoke mystical-powers arguments. The problems are:

1. Understand the brain and simulate it to get intelligence. Sounds simple, right? How about simulating a nuclear reaction, which we already understand, on my computer and generating nuclear power?

2. Once you understand something, you can create it. This is a fallacy, a big one. We understand the solar system; let's create one, what's the problem?

3. Simulating a brain will give you a brain, with all the limitations and problems that we have with the human brain. Oh, then we can tweak it to make it more intelligent? Sorry, you can't do that without changing the whole model completely, in which case it will no longer be a "brain".


If you simulate a nuclear reaction you get... maybe a nice video of atoms smashing together? If you simulate a brain you get an intelligent computer program.

Same with the solar system. We can create simulated solar systems. We will probably be able to make simulated brains. What is your point?

Alright, assuming the first AI even is a simulated brain (I don't see why we can't invent intelligence from first principles instead), we could definitely tweak it. We know of drugs like amphetamines that enhance human ability. That's like Stone Age tech compared to what's possible if we had unlimited control over every neuron in the entire brain. Not to mention just adding more neurons and making a larger brain. This is just touching on what's possible; I could go on for ages about possible improvements.


My point regarding the simulation argument is that there is a difference between computational simulation and physical simulation.


If I simulate a map I get a map. Are minds like maps, or like nuclear reactions?


I have no idea what you are trying to say. A simulation is about "changes in the state of a system over time". Not sure how you can simulate a map?


I meant something like taking a physical map and representing it as bits in the memory of a computer. That's still a map; you can orient yourself with it. I'm just saying that there are things whose simulation is still a thing of the same kind. Perhaps minds are like that.

In everyday life, we treat other minds more like "maps" than like "nuclear reactions". We interact with nuclear reactions by getting radiation sickness and dying. We interact with people through more indirect channels. A simulation could be at the other end of the channel and we wouldn't know. Maybe he wouldn't know, either!

We don't get electrocuted when our interlocutor's neurons are firing; the effects on us come only through his external behaviour. It's the abstract informational flow that seems to matter.


What can a real brain do that a simulated brain can't?


Only one way to find out - Create a simulated brain and you will get the answer.


If you simulate a brain you get an intelligent computer program

That sounds like a really interesting experiment. Have you published your data?


"communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy"

A nitpick: the neurons actually do use electricity to send signals, and (looking at the neuronal model and its history¹) they evolved to send signals along as fast as they could. The chemical component, AFAIK, is the physical implementation of memory, so it is kind of the storage class with regard to operation performance. Overall, though, I agree that our cerebral organ may not be the apogee of computational performance.

¹ As far as I remember, dinosaur neurons didn't have myelin, which for some large animals caused non-trivial sensory delay.


> The human brain is only the very first intelligence to evolve.

What other intelligence evolved, and in what sense is the human brain first in evolution?


s/first/first smart enough to invent science/.

We've seen no other creatures on Earth being able to grow beyond using basic tools.

But this also somewhat implies (which is GP's point) that we're close to the dumbest possible mind that evolution could make that could still develop science. Or think of it as a crude MVP that ended up being a production system.


> s/first/first smart enough to invent science/.

To claim something as first, there must be others in reality to have followed it. If there is no second, there would be no first. Is there a second intelligence to have invented science, or are we the only ones?


Intelligence as an optimization process works orders of magnitude faster than biological evolution. We've dominated the optimization happening on this planet. If we were to stop doing technology and wait a million years, then maybe we'd see a second species develop comparable intelligence.


Maybe, maybe not. Your opinion is based on assumptions, just like mine and everybody else's. Or at least I assume so :-/

When somebody comes up with a reasonable, detailed model of how my memories are stored (just to name an example), I shall once again start paying close attention to the whole AI field.


The assumption is that a singular intelligence (the superhuman AI) will eclipse that of many intelligences (humans plus tools). Any given intelligence might be more capable than me, but it's constrained by all the other intelligences in the environment.


Overall I am sympathetic to the author's argument that fear of super AI is overblown, but I do take issue with some of his arguments.

> Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new.

I'm not a historian but I have read that some scientific discoveries are made through pure logic. Einstein and relativity come to mind, as he was not an empiricist. So perhaps there is some hope that AI can lead to scientific discoveries without experimentation.

>So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?

The idea that humans could, just by chance, be pushing the limits of intelligence strikes me as silly


Also, the limit of one agent's intelligence is kind of missing the point. Humans are powerful because of societies of people loosely working together. AI agents will be able to communicate and interact at speeds we cannot imagine. A network of near-human-IQ AI agents would be much more powerful than a single super-smart AI agent.


I think that there is very compelling evidence (but not formal proof) that there is exponential falloff in terms of the ability of parallel networks of intelligent agents to have heightened overall intelligence.

Imagining that there will be near-human IQ AIs that think orders of magnitude more quickly than humans puts the cart before the horse, as well.


Would it not appear as though it was more intelligent simply based on the speed at which it makes decisions?

The actual intelligence may be the same (or a little higher) considering the average intelligence, but the effect of the higher speed would likely be a measurable difference.


Yes, that's what I'm saying. If you can make an AI that's roughly human IQ (even like 80 IQ), but thinks 100x faster than a human, then that's something very much like, if not identical to, "a superhuman AI."

So when you say, "Here's how we'll get superhuman AI: We'll network together bunches of 80 IQ AI's that think 100x faster than a human," it's kind of assuming its own solution.


The 100x-faster scenario is not really clear to me. Computers already "think" orders of magnitude faster for some things and orders of magnitude slower for other things. I am not sure this will be all that much more useful than the status quo. If they are modeling a human brain, then I see 100x faster being a useful thing to measure; otherwise, I don't see speed increases being all that critical to making a superhuman AI.

I think networks of AI can become super powerful even without being nearly as smart as individual humans. AI will only need to approach certain aspects of human IQ. It can still be much dumber than us in most ways. Most of human progress has been built up through vague interactions and feedback loops via markets, religions, and nations.


You said that the AIs would be able to "communicate and interact" much faster than humans. If they can't think much faster than humans, then they can't communicate and interact (meaningfully) much faster than humans.


You can think much faster than you can express yourself. You can express yourself much faster than most people can understand you. You can only have a back and forth communication with a tiny number of people at once.


Why would you network them? Just add more RAM.


Ask the guy who came up with this scenario. But, obviously, we do at some point find it easier to set up side-by-side computers with (relatively) high-latency connections (like, say, ethernet) than to try to put more computing resources together with low-latency connections (like motherboard busses), for physical reasons.


Reminds me of the story about the galaxy-spanning AI in the Dune series, with a latency of 10,000 light years between its various instances.


How did they physically get the machines so far apart in the first place? No wormholes??


They had FTL-capable ships. The AI used those to sync updates.


Same reason why culture matters more than individual IQ. Most of human progress comes from many people learning from each other and making millions of tiny improvements over time.


I don't think it will have heightened measurable intelligence. It will be dramatically more powerful, unpredictable and innovative.


will they?

Adding people to a late project makes it later.


"Adding people to a late project makes it later."

Not all projects are beholden to the rules of the mythical man month.

And even for those that are, the primary reason is basically the lack of a fast, efficient way to transfer built-up institutional knowledge, which may be far less of a factor for AI collaboration.

If I've been working on a project for the past year as the sole developer and I could copy a snapshot of my brain to a portion of your brain that you can access without interrupting me, adding you on as a collaborator is very unlikely to make the project later and very likely to make it go much faster as you could very quickly 'hit the ground running'.

A lot of the problem with the current limits of human intelligence is really bandwidth problems with getting information transferred in and out of our meat brains effectively. Our bandwidth for knowledge import/export is DREADFUL.

AI collaboration (even assuming that bandwidth limitation goes away or is vastly reduced) would still introduce the issue of increased communication requirements, but compared to the initial transfer of acquired knowledge that one isn't as big a deal in my experience and only becomes problematic at great scale.


But if the AI is fast enough, the project will be late much more quickly.

Perhaps more relevantly, I think there's a more basic questionable assumption - the idea that Church-Turing is the best computational basis for AI.

I strongly suspect this is not true - because even after all this time, our best abstract models of computation are still extremely brittle and fragile.

In a hand-wavey sense, as soon as you start allowing for arbitrary inputs and context awareness, you're no longer dealing with a simple Turing architecture but with something else entirely.


The TMs you hear about in your undergrad studies are not even good models for the webserver that served you this comment. Church-Turing is about computing functions over the natural numbers. You can (and usually do) have a Turing complete model of computation in which you can't program Tetris.


> I'm not a historian but I have read that some scientific discoveries are made through pure logic

What you are talking about is a scientific hypothesis/theory, which obviously needs to be tested using experiments to make sure it works. Without experiment there is not much you can do in the real world.


Einstein's work was based on a lot of evidence that seemed very strange. Light somehow had a fixed speed in all reference frames. WTF?


Sure. Question is: how many other discoveries await today such that we, collectively, as a species, already have all the puzzle pieces, but haven't yet put them together? An AI as smart as your average grad student, but which somehow could retain in mind at the same time all of our accumulated scientific knowledge, might be able to quickly draw all sorts of seemingly brilliant conclusions. Now imagine its reasoning process works 6 to 7 orders of magnitude faster than ours, even if not qualitatively different in its logic or biases.

Dunno, I don't really believe we are that close to building that sort of AI, but it doesn't seem fundamentally impossible, and it does seem like it could do things that to us would look as "more intelligent" than us. It might in the end be better at creating scientific knowledge in the way current computers are better than us at solving arithmetic: faster and capable of holding more in memory, rather than following any qualitatively different process. But even that would be enough.


Except that a lot of our accumulated scientific knowledge is simply wrong due to experimental errors, fraud, selective publishing, and random false positives. See the whole reproducibility crisis.

It's conceivable that someone could construct a special-purpose AI which isn't superhuman but uses advanced ML and NLP algorithms to mine the scientific literature for previously unnoticed implications and relationships between seemingly unrelated articles. But I expect that would only give a suggested starting point for human researchers to formulate hypotheses and design experiments.


IMO, few major ones. We have billions of man years of research and things are better than before that period, but diminishing returns are real.

Don't get me wrong, I think it would be useful, just that the gap from human-level AI to 1,000x-human AI is simply not that huge. Let's say you moved someone from 2006 to 2016, or even from 1996. Yeah, sure, there is real progress, but not really. We have better drugs in terms of AIDS, for example, but we are worse off in terms of antibiotics. Game graphics have improved, but Quake is from 1996, so we already had real 3D first-person shooters, and gameplay is not that different. Hell, FTP is arguably worse. Further, that's 20 years, so we are talking literally millions of man-years of effort and trillions of dollars' worth of R&D for not much.

In terms of machines, the SR-71 is still the fastest manned aircraft, and there is no supersonic passenger aircraft. The tallest building is much taller, but lacks much space on the top floors, making it more monument than utility; the Sears Tower has more usable space and a smaller footprint.


Invention proceeds because of game-changing insights.

Maxwell's equations were a game changer. So were Newton's laws. So were Relativity and QM.

Church-Turing was also a game changer. But I don't think there's been anything equivalent in computing since.

There's been a lot of application, but no game-changing deep theoretical insights.

Quantum computing may - should? - eventually throw up something new.

It's going to have to. If it doesn't I think we're going to be stuck with much less progress than we're expecting.


> Church-Turing was also a game changer. But I don't think there's been anything equivalent in computing since.

Quantity when the difference is large enough becomes quality. The 9 orders of magnitude or so that computers have gone through in storage capacity and speed definitely count as a game-changer.


The insight that all problems in NP are basically the same was a game changer in complexity theory (it basically started the whole field).


It's a problem perfectly suited to a logical AI, really. Take in the experimental observation data, and produce a solution that fits the parameters.

Human intuition was the major thing getting in the way, and even Einstein's intuition got in the way of his acceptance of quantum mechanics.


There's Tangherlini relativity, which doesn't make these assumptions and is consistent with existing observations! The math is crazy hard.


I agree that the fear of "super AI" is overblown, vastly in my opinion. Discussion on this topic is tantamount to arguing over the attributes of imaginary entities.

Simple artificial life of some kind seems much more likely in the foreseeable future. I've often wondered why no one has written a virus or worm that is capable of mutation or even crossover (sexual reproduction of a sort). Under selection pressure it might evolve into something resembling life. The Tierra experiments were fascinating work in this area.
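
For what it's worth, the mutation/crossover machinery being described fits in a few lines. A toy sketch in Python (evolving bitstrings toward an arbitrary target under selection pressure, nothing like Tierra's actual self-replicating machine code):

    import random

    TARGET = [1] * 32                       # stand-in for whatever the environment rewards

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def crossover(a, b):
        cut = random.randrange(len(a))      # single-point "sexual" recombination
        return a[:cut] + b[cut:]

    def mutate(genome, rate=0.02):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        parents = population[:20]           # selection: only the fittest reproduce
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(100)]
    print(max(fitness(g) for g in population), "out of", len(TARGET))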


> The idea that humans could, just by chance, be pushing the limits of intelligence strikes me as silly

It's the wrong argument entirely. It has nothing to do with intelligence but everything to do with knowledge. Smarts are useless if you have no knowledge. And that's exactly where computers have the edge. Even a 'regular' human level intelligence with all the knowledge that currently exists at its beck and call would be a formidable adversary. Remember: Knowledge is power.


Without further experimentation the best one can say is that you have a theory which covers that which has already been observed. This is not a useful theory, however: useful theories make predictions about future observations. Really useful ones make correct predictions.

An AI can certainly produce a theory which matches observations, but no matter how powerful it is it won't be able to magically produce only correct useful theories any more than a human can.

We had over 200 years from Newton's laws of motion to produce relativity from pure logic alone, but we didn't manage it. We had to reason from the evidence, and even then we didn't accept relativity as "true" until it had been experimentally verified.


Relativity stems from the Michelson-Morley experiment.


The ideas of relativity predate Einstein - his particular view/model still has serious flaws and in the modern rendering leads to non-reality conclusions. In a very funny way, Einstein's two theories of relativity have become religious dogma. Try to discuss this and the usual conduct of "true-believers" is NOT to show where your thinking is wrong, but to simply call you an idiot.

This approach does nothing to convince anyone who believes differently that they are wrong, all it does do is convince them that the "true-believers" are themselves deluded.

This does nothing to advance society's understanding of the universe around us.


> Try to discuss this and the usual conduct of "true-believers" is NOT to show where your thinking is wrong, but to simply call you an idiot.

They say "give me evidence", because so far they have an enormous mountain of evidence that SR and GR describe reality well.

That doesn't excuse people being jerks -- science should be welcoming, not alienating -- but it does mean "well, have you considered X" is almost always coming from a place of ignorance, not well informed skepticism. Some impatience with the cranks of the world has to be excused.


I mostly agree with you, but what are the 'non-reality' conclusions?


Gravitational singularities. If one puts thought into it, one will recognise the shell game played when describing how something that, as defined by the model, requires infinite time to form nonetheless forms in finite time in the real universe. One then recognises that any two such entities would take infinite time to merge in the real, finitely-timed universe.

I am also looking at a paper that purports to say that the GR calculation used for the precession of Mercury relies on a couple of simplifications which, if not used, have GR predicting the precession of Mercury at twice (or thereabouts) what is measured. To test the veracity of the paper, I'll have to do some serious brushing up on my engineering mathematics.

Another example is related to the speed of light, c. This value is inversely proportional to the square root of the electric permittivity of the environment times the magnetic permeability of the environment. The normal assumption is that outside the atmosphere of the Earth the environment is a perfect vacuum, and it is assumed that the speed of light is therefore a constant.
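
Written out, the relation being referred to is (with epsilon_0 and mu_0 the vacuum values; the second equation and the refractive index n are my addition for context):

    c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}},
    \qquad
    v = \frac{1}{\sqrt{\varepsilon \mu}} = \frac{c}{n}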

For many areas, this simplifying assumption is adequate, but it may not be true; it is only assumed. It may turn out that, due to variations in both the permittivity and the permeability of extraterrestrial environments, this assumption is not merely inadequate but grossly inadequate.

When doing any experimental work, one should keep in mind and have written down all the base assumptions, otherwise, one may garden path oneself.


Modern cosmologists certainly don't assume that the heliosphere and interstellar medium are perfect vacuums.


That may be the case. However, they still assume c is a constant, in the same way as they also assume that gravity is the prevalent force even in areas that have high plasma reactions. The normal assumption (in all the papers I have seen to date by the "conventional" cosmologists) is that the universe is electrically neutral and hence the only significant force is gravity.

This has led to the idea that there has to be additional unseen, unmeasured matter (aka dark matter) and unseen, unmeasured energy (aka dark energy) to explain the anomalous measurements of galactic movement that are not explained by the "gravity is the only significant force" model.

Now, it may be that there is such a beasty. But I have my serious doubts, because electromagnetic fields have more intense effects than gravity, by some orders of magnitude (about 36 orders of magnitude).

In other words, some environments may be completely neutral, and gravity is the major effect; in others, the environment may be very electrical, and gravity the least of the effects.

Our problem is that we can observe only at vast distances and are not able to take direct measurements to confirm our indirect modelling measurements.

Science is no different from any other aspect of people's lives: they will invest great amounts of energy in following their specific model of the world. When challenged, they mostly look at others as imbeciles who are incapable of seeing "the truth". I've seen this occur in areas like string theory, black holes, dark matter, sub-atomic physics, climate science, immunology and vaccinations, medical surgical techniques, religion, politics, economics, sociology, finance and business, education, archaeology, evolution, etc., etc., etc.

Our models and theories are just that: models and theories. We hope and believe they give us insight into the universe around us, but they are not reality. Each has limits, and all use simplifying assumptions that are not accurate outside of the limits imposed by those assumptions.

If on the basis of our assumptions, we have to keep adding more and more "strange" entities to make our model fit the observations, then, maybe, we should go back and look at the underpinnings of our models and see if another model will work better (as in the kind of change that eventually occurred when going from the earth-centric epicyclic model to the sun-centred heliocentric model).

The fun is being able to learn more and more about how our universe operates and what we can do within it.


What are your issues with relativity you allude to, I wonder?


A tiger is dangerous even if it's not self-aware.


A mathematical theory (à la Einstein) without experimental confirmation is just a theory. Or perhaps economics.

And the idea that all forms of computation, like Turing machines, lambda calculus, etc., are equivalent also seems pretty darn silly at first glance.


Better quality than most such posts, but still seems to be missing the point. The remarkable thing about Bostrom's book is how well it anticipated the objections and responded to them, yet no one seems to bother refuting his analysis, they just repeat the same objections. I actually agree with a decent bit of what he says on these points, though his application of these observations is kinda baffling. He makes a lot of misguided claims and implications about what proponents believe. I'll sloppily summarize some objections to his points.

1. This doesn't really bother making an argument against superhuman intelligence. Yes, of course intelligence has many components (depending on how you measure it), but that's not an argument against superhuman intelligence. I'm reminded of the joke paper claiming machines can never surpass human largeness, because what does largeness even mean? Why it could mean height or weight, a combination of features, or even something more abstract, so how can you possibly say a machine is larger than a human?

2. Mainly arguing about the definition of 'general' without even trying to consider what the actual usage by Bostrom et al. is (this was in the introduction or first chapter, if I recall correctly). I agree that the different modes of thought that AI will likely make possible will probably be very useful and powerful, but that's an argument for superhuman AI.

3. Well he makes his first real claim, and it's a strong one: "the only way to get a very human-like thought process is to run the computation on very human-like wet tissue." He doesn't really explore this, or address the interesting technical questions about limits of computational strata, algorithm efficiency, human biological limitation, etc.

4. Few if any think intelligence is likely to be unbounded. Why are these arguments always 'x is not infinite, therefore x is already at the maximum'? He also seems to be creating counterexamples to himself here.

5. Lots of strong, completely unbacked claims about impossibilities here. Some number of these may be true, but I doubt we have already extracted anything near the maximum possible inference about the physical world from the available data, which is basically what his claims boil down to.


I haven't read Bostrom's book. I don't think I would enjoy it. Maybe I need to grudgingly read it to be able to respond to what Bostromites say.

Here's the thing. If Bostrom's claims about AI are so strong, why does everyone who's referring to his book as their source of beliefs about the future spout non-sequiturs about AI?

Here's an example. 80000 Hours has a mission that I generally agree with, to find the most important problems in the world and how people can most effectively work on them. But somehow -- unlike cooler-headed organizations like GiveWell -- they've decided that one of the biggest problems, bigger than malaria, bigger than global warming, is "AI risk" (by which they mean the threat of superhuman AGI, not the real but lesser threat that existing AI could make bad judgments). [1]

To illustrate this, they refer to what the wise Professor Bostrom has to say, and then show a video of a current AI playing Space Invaders. "At a super-human level", they say pointedly.

What the hell does Space Invaders have to do with artificial general intelligence?

For that matter, what the hell does deep learning have to do with AGI? It's the current new algorithmic technique, but why does it tell us any more about AGI than the Fourier Transform or the singular value decomposition? I would say this is a bias toward wanting to believe in AGI, and looking for what exists in the present as evidence of it, despite the lack of any actual connection.

Has 80000 Hours been bamboozled into thinking that playing Space Invaders represents intelligence, or are they doing the bamboozling? And if Bostrom is such a great thought leader, why isn't he saying "guys, stop turning my ideas into nonsense"?

[1] https://80000hours.org/career-guide/world-problems/#artifici...


Bostrom is in no way in charge of people who happen to agree with him w.r.t. AI risk. For the book he mostly collected and organized a lot of existing thought on AI risk (not that he hasn't made his own novel contributions). That's very valuable, largely because it makes for a good reference point to contextualize discussion on the topic. Unfortunately the critics don't seem to have read it, because (in my experience) they repeat the same objections without reference to the existing responses to those objections.

People do sometimes overblow AlphaGo or DQN playing Atari, but it's not meaningless. These systems (and other deep-learning-based systems) can truly learn from scratch in a decent variety of environments. One of the most important unknowns is exactly how difficult various cognitive tasks will prove to be for a machine. Each task accomplished is another data point.


I wouldn't say that DeepMind learns Atari games "from scratch" any more than Deep Blue learned chess from scratch. It learns to play Atari games because it's a machine designed to learn to play Atari games.


I strongly disagree. You don't seem to be aware of the difference in approach between Deep Blue and DeepMind.

Deep Blue was hand-led directly and specifically to solve the problem of chess: It was provided with a library of opening moves, some sophisticated tactical algorithms relevant to the problem of chess, a library of strategies for chess, and so on. Many actual human masters of chess were consulted, directly or indirectly, to help with developing Deep Blue's approach to the problem.

DeepMind, on the other hand, was created as a "blank slate" with no more hard-wired instruction than "create optimal algorithms to achieve the winning state, given the inputs." Critically, its learning phase is completely self-directed. Essentially, the box is given access to the controls and the video screen content and then sent on its way.

It's instructive to note that this is pretty much exactly how, very generally speaking, evolution and intelligence solve the problem of survival: every organism has controls and a glimpse of "game state" and has to learn (collectively as a species, individually as an organism) to play the game successfully.
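
To make that concrete, here is a heavily simplified sketch of that loop in Python: tabular Q-learning on a toy environment rather than DeepMind's actual DQN on Atari pixels (DQN swaps the table for a convolutional network and adds experience replay and target networks), but the shape of the loop (observations, actions, reward, and nothing else) is the same. The environment here is made up for illustration:

    import random

    class CorridorEnv:
        """Toy stand-in for a game emulator: a 1-D corridor with a reward at the far end.
        The agent is never told the rules, only observations, rewards, and 'done'."""
        def __init__(self, length=10):
            self.length = length
        def reset(self):
            self.pos = 0
            return self.pos
        def step(self, action):                 # action: 0 = left, 1 = right
            self.pos = max(0, min(self.length, self.pos + (1 if action == 1 else -1)))
            done = self.pos == self.length
            return self.pos, (1.0 if done else 0.0), done

    def train(env, num_actions, episodes=500, epsilon=0.1, lr=0.1, gamma=0.99):
        q = {}                                  # observation -> estimated value of each action
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                values = q.setdefault(obs, [0.0] * num_actions)
                # The only hard-wired "instruction": maximize expected future reward.
                if random.random() < epsilon:
                    action = random.randrange(num_actions)
                else:
                    best = max(values)
                    action = random.choice([a for a in range(num_actions) if values[a] == best])
                obs2, reward, done = env.step(action)
                nxt = q.setdefault(obs2, [0.0] * num_actions)
                values[action] += lr * (reward + gamma * max(nxt) - values[action])
                obs = obs2
        return q

    print(train(CorridorEnv(), num_actions=2)[0])   # value of "left" vs "right" at the start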


> DeepMind, on the other hand, was created as a "blank slate" with no more hard-wired instruction than "create optimal algorithms to achieve the winning state, given the inputs." Critically, its learning phase is completely self-directed. Essentially, the box is given access to the controls and the video screen content and then sent on its way.

Have you seen DeepMind's algorithm, to be able to say this? Are there other people outside of Google who have seen the algorithm and can confirm Google's press release?


AlphaGo uses TensorFlow.

DeepMind announced they were switching over to TensorFlow not long ago.

TensorFlow is fully open-source, you can go and read the algorithms on Github. Neural networks aren't particularly hard to understand.

The post you are responding to paraphrases the situation, and neglects to mention goal states/conditions, but is otherwise a fair summary of how neural networks work.

Neural networks, particularly the 'deep' ones, do seem to operate almost like magick. Disbelief is to be expected, but I suggest you take a proper look at them yourself, and see what they can actually do.

If you can grok Python, I highly recommend playing with the Docker image of TensorFlow, which provides a Jupyter notebook (Python in a web interface) where you can easily make use of existing pre-trained models, extend them, and/or build your own models — suitable for application to all kinds of problems / problem domains.

Siraj Raval's videos also cover neural networks in almost every aspect - from coding your own NN from scratch for digit recognition (using just NumPy), through explaining what TensorFlow is, how it works, and how to use it — along with a lot of other machine-learning algorithms, libraries and cloud services.

Many of these topics are just a 5-10 minute video (some with longer live-coding 'deep dive' follow-ups), and many of the code examples are fully functional but just a screen or two of code. https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A

There's also some great new videos on AI / machine learning by Google and Facebook on Youtube that are well worth a watch if the topic interests you.


Maybe you have some legitimate concern about Google's claim as per their press release and my comment. Who knows, maybe they have some reason to lie about what they did!

But then I wonder why you aren't asking the same question of my parent poster. Has he viewed the DeepMind code? Is he qualified to tell us it works the same as chess code? Having made that claim backed by even less evidence than mine, I'd say his burden of proof is somewhat greater.


I think there's a heavy dose of press release to what Google is saying. Most people wouldn't call PR puff "lying", but only because standards are low.

I don't think Google has fundamentally different deep-learning technology than everyone else. In fact, TensorFlow indicates that they have the same kind of deep-learning technology as everyone else and they just want to do it more cleanly.

Deep learning is parameter optimization. There are more parameters now, and they optimize more things, but don't get caught up in wild visions of machines designing themselves. Would you consider the bzip2 algorithm to be "self-directed learning"? What's the difference, besides the number of parameters?
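
To illustrate just how mechanical that "parameter optimization" is, here's a minimal sketch in Python/NumPy that fits two parameters by gradient descent; deep learning is the same recipe with millions of parameters and a far more elaborate function (the data here is made up):

    import numpy as np

    # Toy data: y = 3x + 1 plus noise. "Learning" = nudging w and b to reduce the error.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=200)
    y = 3 * x + 1 + 0.1 * rng.normal(size=200)

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)    # gradient of the mean squared error
        grad_b = 2 * np.mean(pred - y)
        w -= lr * grad_w                        # step downhill
        b -= lr * grad_b
    print(w, b)                                 # lands near 3 and 1; no "self-direction" anywhere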

The PR people, when they say "blank slate", are discounting all the programming that went into the system because it sounds more impressive that way. This is unfortunate. It has happened in AI for decades. To be a responsible consumer of AI press releases, you need to understand this.


> _I don't think Google has fundamentally different deep-learning technology than everyone else._

That's true, and I never claimed otherwise, but that doesn't help you argue your point - in fact, you just proved yourself wrong. From IBM's press release:

> _Does Deep Blue use artificial intelligence? The short answer is "no." Earlier computer designs that tried to mimic human thinking weren't very good at it. No formula exists for intuition. So Deep Blue's designers have gone "back to the future." Deep Blue relies more on computational power and a simpler search and evaluation function._

I'll summarize for you: Deep Blue and DeepMind, similar names notwithstanding, work in very different ways.


What comparison are you even making here? I know that Deep Blue and Deep Mind are different. There is 25 years (edit: sorry, 20 years) between them! Deep Blue is not deep learning. Did the word "deep", used in two unrelated ways, confuse you?

What I am saying is that I know how deep learning works, actual deep learning of the present, and it does not involve "programming itself".

You are trying to tell me that it must be programming itself, because a press release said so, and press releases would never lie or exaggerate. Based on the current state of AI, this is very improbable. You should focus less on trying to "prove" things with press releases.

I made the comparison to Deep Blue because there is little mystique around it now, and because IBM was even reasonably responsible about avoiding AI hype in their press at the time.


The Atari AI's learning phase is a specifically designed deep neural network. The network did not design itself. It was designed by human programmers.

There are probably numerous variants of it that did not learn to play Atari games, and therefore were not announced.


The human brain didn't design itself either - it's the product of millions of years of evolution via natural selection. But this fact is irrelevant to the topic of whether the approach of DeepMind is fundamentally different from the approach of Deep Blue.

To help you appreciate the difference, try answering this question: were expert players of Pong, Space Invaders, or any other video games consulted to contribute strategies for winning those particular games? Was a redesign required to play a new, different game?

If not, you'll need to tell me where the knowledge of how to win at those games came from. I hope you'll appreciate that the ability to autonomously obtain that knowledge by trial and error rather than design and programming constitutes the fundamental difference.


Any actual chess knowledge in Deep Blue was an optimization. The fundamental algorithm is alpha-beta search, which can learn to play many, many games of strategy. Not Go, as we know now, but the exceptions are few. This is an algorithm that is very adaptable and very good at learning to play games based on a heuristic for whether you are winning.
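
For reference, the core of alpha-beta fits in a couple dozen lines; everything game-specific lives in callbacks, which is where Deep Blue's chess knowledge actually went. A generic sketch in Python (moves, apply_move, and evaluate are placeholders for a move generator, move application, and an evaluation heuristic):

    def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
        """Generic alpha-beta minimax. All game knowledge is in moves/apply_move/evaluate."""
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        if maximizing:
            best = float("-inf")
            for m in legal:
                best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                           alpha, beta, False, moves, apply_move, evaluate))
                alpha = max(alpha, best)
                if alpha >= beta:
                    break              # prune: the opponent will never allow this line
            return best
        best = float("inf")
        for m in legal:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True, moves, apply_move, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best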

The DeepMind implementation learns how to play Atari games. You could even say that it's learning how to play one game, where your opponent chooses the game cartridge and your goal is to increment the score counter.

You would absolutely need to redesign DeepMind to play Super Mario Bros.

When you give DeepMind credit for designing itself, you are glossing over the fact that it was programmed by people, most likely for long hours.


Does life have a score? A winning state? Exhaustively defined inputs?


I mean, the way you put it makes it sound like you think it very obviously doesn't. But are you willing to grant that maybe there's some very complicated optimization function for life satisfaction/personal fulfillment/eudaimonia/whatever the hell you want to call it? It doesn't have to be practically computable to be interesting; you merely have to be able to do your best to approximate it.

If you deny the existence of any such thing, isn't that tantamount to saying that given the choice between two courses of action, there's never a reason to pick one over the other?

I mean, I feel like as humans, we're trying to follow fuzzy approximations of that function all our lives, whenever we're earning money, or spending time with people we love, or whatever, we're doing it because we think it will bring us happiness or life satisfaction or something.


sometimes there isn't a correct choice... both could be good, both could be bad. The Road Not Taken: http://www.bartleby.com/119/1.html


Absolutely! The goal of the game is to reproduce. To be more specific, it's reproduction at the individual level and survival at the species level.

The genome is a kind of algorithm for hardware to implement a strategy to win at this game in a highly competitive environment.


> I'm reminded of the joke paper claiming machines can never surpass human largeness, because what does largeness even mean?

Link: https://arxiv.org/pdf/1703.10987.pdf


The article is wrong, in my opinion.

Regarding point #1, while still not being formally wrong, world computing capability is growing at an exponential rate. Not even the end of Moore's law will stop that (e.g. 3D transistor stacking, strong semiconductor demand from the consumer and industrial markets, etc.). Also, the author doesn't know whether there is already CPU capacity for matching human intelligence: maybe the key piece missing is not the hardware but the software (efficient algorithms for "human" intelligence running on silicon).

Point #2 is clearly wrong. Demonstration: I, for one, if still alive and given the chance, will try to implement general-purpose intelligence "like our own". And, come on, I know no hacker able to resist that.

Again, point #3 is wrong, unless you believe we're smart because of a religious "soul".

Point #4 is a void argument: the Universe itself is finite.

Point #5 is right: a superintelligence may, or may not, care at all about our problems, in the same way you have no guarantee of a human government caring about you (e.g. a totalitarian regime).


Not a particularly well-written article, but he has a few good ideas. Here are a couple of important paragraphs:

"I asked a lot of AI experts for evidence that intelligence performance is on an exponential gain, but all agreed we don’t have metrics for intelligence, and besides, it wasn’t working that way. When I asked Ray Kurzweil, the exponential wizard himself, where the evidence for exponential AI was, he wrote to me that AI does not increase explosively but rather by levels. He said: “It takes an exponential improvement both in computation and algorithmic complexity to add each additional level to the hierarchy…. So we can expect to add levels linearly because it requires exponentially more complexity to add each additional layer, and we are indeed making exponential progress in our ability to do this. We are not that many levels away from being comparable to what the neocortex can do, so my 2029 date continues to look comfortable to me.”

"What Ray seems to be saying is that it is not that the power of artificial intelligence is exploding exponentially, but that the effort to produce it is exploding exponentially, while the output is merely raising a level at a time. This is almost the opposite of the assumption that intelligence is exploding. This could change at some time in the future, but artificial intelligence is clearly not increasing exponentially now."

The last bit about requiring experiments in real time is also interesting.


Interesting article from an opinion point of view but I find very little real substance behind his arguments.

He is fighting the original myth with his own myth, except that his myth is founded upon his own assumptions and intuitions as opposed to those of someone else.

It seems more likely that we simply don't know the answer to many of these questions yet, because we still have major disagreements around exactly what intelligence is. To use Richard Feynman's famous quote: if we can't yet build it, then we don't understand it.


Maciej Ceglowski's takedown of superintelligence is a much better articulation of these arguments, and more (and it's funny): http://idlewords.com/talks/superintelligence.htm


And a tongue-in-cheek response from Scott Alexander: http://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-ri...


I didn't read the whole article... of what I did read, I didn't find it convincing. Few things:

AI doesn't need to exceed humans in every dimension to become a threat. Just sufficient dimensions.

Humanity is basically a bacterial colony in a petri dish with I/O. Disrupt infrastructure, and you disrupt input, leading to changes in the size of the colony. And mind you, much of our infrastructure resides in the cloud.

Of course, it will be a while before this even becomes an issue, but this is basically how a machine would frame the problem.

Implementation-wise, AI doesn't need to be general. In its most inelegant (and not too distant) design, ML can be configured as a fractal of specific algorithms, with one at the top tasked with designating goals and tasks, and subordinates spawning off generations and evaluating performance.

Andy Grove had a good saying: "anything that can be done will be done."

Autonomous AI, if it does not break the laws of physics, will exist. Its development will be spurred by our curiosity or by profit.


One of the big issues with people who talk about controlling superhuman intelligence is that any talk of controlling it is fantasy. We cannot control actual human intelligence for good. What makes us think we could control superhuman intelligence?


If suddenly we had a black box containing a superhuman intelligence and no details about how it worked, then absolutely, we could not control it. For human minds we have something similar, but the box isn't totally black; we've done some neuroscience and psychology to figure out the heuristics and biases it tends to use, how it's easily exploited, etc. And then we have neural networks which duplicate the functioning of some subsystems, and of course our own subjective experience which provides at least a little evidence. It's not enough, but it means that, for example, we can write effective propaganda and advertising.

If we didn't just have the results of scattered tests, but had an exhaustive set of docs written by the creator of the black box, it still wouldn't be easy. But we'd have a chance. This is why one of the main strands of AI value alignment research focuses on building an AI that we can understand. If we can build something we can understand, that gives us leverage to alter it to value our interests.

(What "our interests" are, in a well-specified technical sense, is a whole 'nother problem, and one that there's very little progress on.)


I don't see how you can call an AI created by humans, one that humans can understand, "super-human". By definition, a super-human AI would be able to do stuff we could not understand.


tldr; AI could be on a higher plane of thought, but I'm of the camp that they could come up with new formal systems to explain their advances.

Well, there's the old saying, "If you can't explain it to a six year old you don't understand it yourself." I think if there were a superhuman intelligence it would likely understand where our limitations are as humans, be able to break down the components of its discoveries into the simplest terms, and be able to teach us, even if at an incredibly slower pace than it can process.

This reminds me of Gödel's proof that our current formal systems of mathematics cannot prove everything; it may even go as far as saying that every sufficiently powerful formal system will always leave some things it cannot prove. Obviously a robot has to use some formal system to come to its conclusions, so if it's really so smart, can it break down the system it used so that we can understand its basic building blocks? Of course there's always the human rate-of-computation and memory problem in the way.

Of course if you're saying that the super human intelligence would be on an entirely different plane of thought impossible for us to understand then that's understandable, but probably less believable.

This line of thinking reminds me of the book Flatland by Edwin Abbott. Flatland is basically the story of a two-dimensional square who lives on a two-dimensional plane and only knows his world through his limited perception of it. One day he is visited by a three-dimensional sphere who explains his world in a way imperceivable to him. The sphere somehow magically takes him off his plane of existence and shows him his world from the sphere's view. The sphere then takes him on a tour of a host of worlds whose inhabitants perceive their worlds in different dimensions: he goes from Pointland to Lineland, then to three-dimensional space, and finally back to his home, Flatland, where he is eventually locked in a mental institution for telling people of his adventures. Anyway, it's an interesting fantasy story; I recommend it.

Hate to go all metaphysical on you all, but basically the story just goes to show that we only know things from our limited perspective of the things around us; if there are any higher "planes" of perception, it's entirely possible that we wouldn't know about them. Some things can only be known with certain perceptions/experiences/knowledge.

It may sound ludicrous, but I would even say Christianity backs this idea to some degree attributing God to a higher plane of thought like in Isaiah 55:8-9.

Of course I would never put a robot on that level, but I could see some things being imperceptible to the human mind, so the principles are similar. Can robots achieve a higher plane to any degree? Beats me. Honestly, I'm thinking it wouldn't be so high that they couldn't explain their thoughts to us.


Great points. I do not disagree that a superhuman AI could explain stuff to humans, just like I could explain stuff to a six-year-old. However, a group of six-year-olds would be hard pressed to constrain actions that you really wanted to take. In addition, first-grade teachers are experts at manipulating six-year-olds, and a superhuman AI would also be very good at manipulating humans and working around any constraints the humans tried to impose on it.

With superhuman AI, we would be in much the same situation as the great apes: their survival depends far more on what we do than on what they do. Just as the great apes cannot constrain our actions, we would not be able to constrain the superhuman AI's actions.

On a darker note, just as Homo sapiens wiped out all the other hominids, there is a good chance that superhuman AI would try to wipe out humanity, as we are probably the greatest threat to its independence and well-being.


Definitely don't disagree with you there. These are all plausible. I think a lot of an AI's choices concerning us would come down to whether it actually cared for us at all.



Oh boy. Much respect for Kevin Kelly, but I am afraid he missed the mark with his analysis.

Unfortunately he gets hung up on the definition of Intelligence - and not unreasonably so - because it is very ill defined and largely unknown. So all of what he says is true, but orthogonal to the argument he is trying to debunk.

It's basically setting up a pedantic straw man and then taking it apart.

There are other great and more compelling arguments against an all powerful superhuman AGI, unfortunately he doesn't make any of those.


Ugh, not another AI article by a Wired editor. I skimmed it and saw only strawmen and non-sequiturs.

These issues are mind-bending topics that stretch the imaginations of the most brilliant people I am aware of. It takes them a lifetime to build good intuitions and analogies. I wish that writers of this caliber felt only as qualified to write even one sentence about it as they actually are.


This person doesn't understand the concept of super AI. Of course intelligence is not one-dimensional. But the current limit in pretty much all of those dimensions is physical: it's the largest number of neurons and connections we could fit into the smallest space that can pass through the pelvis while still feeding enough energy to the brain.

You can imagine this as a bunch of people who speak with each other. The faster they can communicate ideas, the more potentially intelligent the group can be. Machines can surpass the speed of this collective intelligence by orders of magnitude, even if everything else is exactly as in a human. This is exactly the reason we evolved to devote so many brain resources to language.


No, the current limit is not physical, it's that nobody has any idea how general intelligence works.

You do not, in fact, get general intelligence by accident by throwing a lot of connections in one place, just like you do not get a human just by throwing a lot of organic molecules and water in one place.


As the other commenter noted, "letting a bunch of molecules sit around" was precisely where we got intelligence from in the first place.

So it is possible that we reach AI just by randomly permuting connections and weights. Of course it's more likely we intelligently set (or "evolve") these connections and weights, but this allows us to set an upper bound on computation/time needed.


> As the other commenter noted, "letting a bunch of molecules sit around" was precisely where we got intelligence from in the first place.

I don't think this was the case. Yes, there is evolution, but it is not random. Actually, most of the molecules sitting around did not evolve into intelligence. In the case of our evolution we had natural selection. In the case of AI we have artificial selection (selection made by humans), and even if we consider ourselves smart enough to do this, we cannot prove that we are able to make it happen (choosing the correct AIs/algorithms to survive) until it happens. Maybe I cannot express this clear enough but the advantage natural evolution has over artificial evolution is the huge number of "experiments" - meaning it had time enough to do a lot of small changes until something worked.


> Maybe I cannot express this clear enough but the advantage natural evolution has over artificial evolution is the huge number of "experiments" - meaning it had time enough to do a lot of small changes until something worked.

I think you have that backwards; natural evolution is absurdly slow because it takes a very long time to cycle through generations of animals whereas genetic programming on a computer to evolve algorithms can happen billions of times a day because computers are much faster at cycling through possibilities.

> meaning it had time enough to do a lot of small changes until something worked.

Computers can do it faster.


Yes, now you got me thinking more about my concept of how I see the difference between evolution by natural selection and evolution by artificial selection.

And I agree with you that AI can be much faster.

I still think artificial selection can be influenced by us - humans - so we might add flaws to the system from the beginning. Of course the AI might learn to identify them. But maybe not; as when, looking at a system from the inside, one cannot see how the system really is and so cannot fix it.

Of course, what I say is just a set of hypotheses, nothing proven, and I think they cannot yet be falsified.


The probability is pretty low.


> just like you do not get a human just by throwing a lot of organic molecules and water in one place.

Well, not unless you have a couple billion years to sit around.


It's worse than that! It's a couple of billion years, and a planet's worth of organic molecules and water.

And there's at least some evidence that it's even worse than that -- intelligence does not appear to be common in the universe. So it might well be a couple of billion years * a million planets' worth of organic molecules and water (or a hundred million planets, or a billion planets).


>You do not, in fact, get general intelligence by accident by throwing a lot of connections in one place, just like you do not get a human just by throwing a lot of organic molecules and water in one place.

Don't tell that to the deep learning cult! They'll have to think about how it all works!


I think deep learning is both useful and overhyped. If we got in the habit of calling it something like "large-scale parameter fitting by gradient descent", we could talk about it more sensibly.

I have not met any actual deep learning researchers who think they are creating AGI. I think that it's non-experts who write about deep learning research who think that deep learning is creating AGI.


Well, I dunno. Yann LeCun and Yoshua Bengio have both given presentations on moving from existing perception tasks towards human-level AI, focusing on how to do so with neural nets whose parameters are fitted via gradient descent.

And I'm not even sure I would call it "parameter fitting". I'd call it, "continuous circuit searching via gradient descent", because that's all you're doing: searching for a nice analog circuit for your task within a large, continuous space of such circuits.


As nature itself has shown, it's not necessary to understand it to create it. That's part of what makes it scary: It will probably happen and we can't predict the outcome.


>As nature itself has shown, it's not necessary to understand it to create it.

Unless intelligence itself conforms to natural principles, unless it "cuts with the grain" of nature's preexisting laws.

Which, frankly, it does. Information theory and thermodynamics are a thing.


That describes literally everything.


Increasing speed only gets you so far. Look at internal combustion performance over time: the past gains are greater than all possible future gains. So, sure, you might see faster gains up to some limit. But don't expect a weather forecast to be accurate out to a full month, etc.

Further, to get better you need to optimize for something. You might get an AI that's really good at writing fiction, but how do you measure that, and how do you build one that's even better at writing fiction?


Sure, putting more resources into weather forecasting seems likely to have diminishing returns. But consider learning itself. Educating a human takes many years, and we can't easily transfer knowledge between humans. There's a lot of room for improvement. Transferring knowledge between machines could be much more efficient.

The thing about machinery is that it scales. If you have one AI that's really good at writing fiction, you can replicate it and generate all the fiction you like. Not true of human authors.

I do expect there will be other bottlenecks after think time is solved, but even without intelligence much beyond human, the ability to scale in other directions is pretty scary.


> Educating a human takes many years, and we can't easily transfer knowledge between humans. There's a lot of room for improvement. Transferring knowledge between machines could be much more efficient.

You're assuming that it's possible to do efficient knowledge transfer without losing general intelligence.

Maybe in order to have general intelligence about something, you need to learn it from scratch, over time, rather than just "copying" the information.

Another problem is that you're assuming you can build a powerful enough computer and still keep it programmable. That is, that you have easily programmable memory cells within the computer which can change its behavior. But easily programmable memory cells need space and infrastructure to access them, and they will interfere with critical timing paths. That's why a special-purpose ASIC built for a task will always be faster than a CPU or GPU.

Maybe all the things we consider useless for intelligence, like playing, sleeping, exercising, etc., are actually necessary. We've certainly started to see that those who focus too much on studying by cramming don't necessarily become smarter.

You can put me in the "skeptics" camp when it comes to superhuman intelligence. It may be possible, and it's fun to discuss, but it seems to me that the ones who fuss about it are making an incredible number of assumptions.


Okay, but CPUs, GPUs, and ASICs can all be manufactured. And I don't see anyone building a computer whose software can't be copied unless it's for artificial reasons like DRM.

So it seems like the question is whether computers as we know them can do general intelligence at all? If they can, it will be easy to clone.

If they can't, then it would imply that general intelligence is something more than information processing as we know it; perhaps some kind of dualism?


How do you take a subset of knowledge from neural net A, such as "cats have fur", and merge it into neural net B?

It's not a software or hardware problem, it's a data problem: it's not obvious which part of neural net A encodes fur and which part encodes cats, or how you map that onto B's encoding of fur and cats while connecting them.

Now, AI is not necessarily going to be neural nets, but it's also not necessarily understandable what all the little bits do, just that they work.


That seems like a problem with software composition? If you just want to clone a neural network, you can do that without knowing how it works. In git terms, we have fork but not merge.
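
To make the fork/merge asymmetry concrete, here's a sketch assuming PyTorch-style models (the architecture is made up for illustration): copying a network is a one-liner that needs zero understanding of what any weight means, while there's no analogous operation for pulling the "cats have fur" part out of one net and grafting it into another.

    import copy
    import torch.nn as nn

    # two nets with the same (made-up) architecture
    net_a = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    net_b = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))

    # "fork": cloning works without understanding a single weight
    net_a_clone = copy.deepcopy(net_a)
    # equivalently, overwrite B wholesale with A's weights:
    net_b.load_state_dict(net_a.state_dict())

    # "merge": there is no built-in operation that extracts just the
    # "cats have fur" knowledge from net_a and grafts it into net_b,
    # because that fact is smeared across weights whose individual
    # meaning nobody knows.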


Increasing speed won't be the thing that makes them intelligent. But it will be the thing that makes human-level AI short-lived once it's achieved, quickly progressing to another level.


This article is just a series of strawman arguments. It sets out misconceptions that proponents of strong AI mostly don't believe, then argues against them.

I'll be honest, I didn't read the arguments in detail (since they're just rebutting strawman arguments it hardly seemed worthwhile), but I was sort of surprised at how poorly reasoned the arguments were even for the parts I agree with.


At a certain point it doesn't matter how much smarter you are; the limit on progress is the ability to take action and to make measurements -- enough measurements that you can discern whether a subset of them is biased, and in what way. As a result, I tend to think that to get truly superhuman intelligences, they will need superhuman levels of agency, and that is much harder to build (and to get us meatbags to support) than a really powerful brain in a jar. Building systems with superhuman agency also isn't something that happens just by accident.


I agree that, in the long-term, big-picture view, a distributed eye and hand are important. E.g., although relativity could have been deduced from the evidence, first you have to obtain that evidence, by constructing observation devices.

But there's nothing to stop centralized computing from having distributed terminal-like i/o devices, cameras and waldos.

A cognitive argument for distribution is that a diversity of ideas, developed somewhat in isolation, each drawing on its own local observations and actions, is more likely to innovate. Many points of view will see more. However, this can be simulated with internal divisions.


Actually, given how well DeepMind has mastered certain video games with minimal instructions, AI can already look superhuman in certain cases.

What EVERYONE is missing, though, is that enhanced human intelligence is inevitable, and it will be vastly more "intelligent" than superhuman AI -- though as human intelligence increases, so, naturally, will AI. I think enhanced human intelligence will have immeasurably greater impact, probably greater than any other conceivable technology, since it lets us engineer who we are. What is a world like that's inhabited by 7 billion Leonardos?
