Just want to point out this is true; however, these things go astronomically high.
> what evidence do we have that the limit is not us?
We can measure the speed impulses travel through neurons, and compare that to, say, the speed of electrical impulses through silicon or light through fiber.
We can find the maximum head-size that fits through a vaginal canal, or the maximum metabolic rate a human body could support, and try to determine if these factors imposed any limitations on intelligence during human evolution.
We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs, and see if a pattern emerges where the artificial analogs tend to have similar limitations as their biological counterparts.
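The first comparison above is easy to make concrete. A back-of-envelope sketch, where the speeds are rough, commonly cited ballpark figures (assumptions, not measurements):

```python
# Back-of-envelope comparison of signal propagation speeds. The figures are
# rough ballpark values: fast myelinated axons conduct at up to ~120 m/s,
# while light in glass fiber travels at roughly 2/3 of c.

NEURON_SPEED = 120.0               # m/s, fast myelinated axon (upper end)
LIGHT_SPEED = 299_792_458.0        # m/s, c in vacuum
FIBER_SPEED = 0.67 * LIGHT_SPEED   # m/s, light in optical fiber (~2/3 c)

speedup = FIBER_SPEED / NEURON_SPEED
print(f"Fiber carries signals roughly {speedup:,.0f}x faster than a fast neuron")
```

That's a factor of over a million on raw propagation speed alone, before considering anything about architecture.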
There's no theoretical limit for temperature, and we believe spacetime could be infinite even though the observable universe is not. Nevertheless this is a very silly argument. If it's finite but absurdly high it's good enough for the purpose.
For all intents and purposes, however, it is preposterously enormous.
I suppose when the particles reach lightspeed?
There does however seem to be a quite widely assumed maximum temperature: the Planck temperature of about 1.417×10^32 Kelvin.
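For what it's worth, that figure is easy to sanity-check from fundamental constants, since the Planck temperature is defined as T_P = sqrt(ħc^5 / (G·k_B^2)):

```python
from math import sqrt

# Recompute the Planck temperature from CODATA constants as a sanity check
# on the ~1.417e32 K figure.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299_792_458.0        # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K

T_planck = sqrt(hbar * c**5 / (G * k_B**2))
print(f"T_P = {T_planck:.3e} K")  # ~1.417e32 K
```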
What are you heating if it's a vacuum?
How can you heat nothing?
The difficulty here is people don't often specify if they mean heat in the form of kinetic energy or radiation.
Much like how people say things like "releasing energy" without specifying the form of energy.
But what's more, if they both have guns then everything's about even. Strength doesn't really help if you've been shot, and most high-schoolers don't have much experience shooting guns.
Actually, a lot do, but probably not so much in the places where readers/commenters here live.
Computer control of farming decisions such as watering and fertilising. Link that in with insolation and wind prediction to minimise waste or maximise productivity.
This happens in the cloud thanks to insolation prediction being tied into wide-area networks of solar farms.
More such networks evolve independently, then one day some bright spark realises that prediction and control in domain X can be combined with control and prediction in domain Y to produce greater efficiencies.
Linking reproductive health and food production to maximise water savings, for example.
Next thing you know the computers decide that human population is best maintained in the order of hundreds of millions rather than billions.
And then we find out that Terminator was actually propaganda from the human rebels who didn't like the computers telling them not to breed, and the nuclear war was Skynet protecting its desired humans from the rebels, not protecting itself.
I'm not sure if you're including software analysis in your notion of "based on the available data", but if not, then this would absolutely not be true - unaided humans, even veteran growers who've been managing the same fields for decades, do not make anywhere close to optimal decisions. They make good ones, generally, and can be long-term profitable with relative ease, but it's absolutely not even close to what you get when you bring intelligent software into the mix.
And it should be something they want.
I suppose they also aren't a hardware manufacturer? Or a logistics company?
By using existing political structures and a mix of rhetoric, they have managed to get a lot more power than a typical person.
* Someone makes an AI which scores only 85 on IQ tests, but which can run in real time on common consumer hardware.
* It gets installed on every phone, PC, and tablet.
* Someone, possibly the AI itself if it is the type to generate its own goals, sets all instances of the AI the goal of convincing their user of some belief or policy or whatever.
* Everyone with an IQ < 85 is convinced; nobody with an IQ > 85 is.
* 16% of the population believe whatever it is the goal-setting mind wanted them to believe, at marginal cost. An election is swung, a bank suffers a run, a pyramid scheme makes millions, whatever it is.
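For reference, the 16% figure falls straight out of the IQ norming convention (mean 100, standard deviation 15); a quick sketch:

```python
from math import erf, sqrt

# IQ tests are normed to mean 100, standard deviation 15, so "IQ < 85" is
# exactly one standard deviation below the mean.
def normal_cdf(x, mean=100.0, sd=15.0):
    """P(X < x) for a normal distribution, via the error function."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

fraction_below_85 = normal_cdf(85)
print(f"{fraction_below_85:.1%} of the population is below IQ 85")  # ~15.9%
```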
especially if the human-like intelligence doesn't need to pay rent, fear the police, follow laws, sleep, eat ... etc
But the point about food and rent is very economically important if they ever get to be cheaper than human labour.
The AI, when it becomes popular, is going to be a business-critical system. Potentially even intertwined with life-critical systems like self-driving cars and self-flying planes. If it isn't, then in "hidden malevolent hacker" phase it can just install trojans to make those systems dependent on it.
Petrol production is hazardous to our future. Can you "just turn it off"? Yes; shut down a fairly small number of refineries and the system will run dry within a week. CO2 emissions problem solved! It's just that, you know, people were depending on being able to get food delivered by truck.
Obviously the first malevolent AI is going to hide itself behind an offshore opaque-ownership company and hire a good set of PR and lobbying firms. Can you turn off Skynet when it's reported as "critical to national security" in all the news sources?
That seems a bit contradictory to me. And if not, highly contrived and unlikely.
That's not necessarily true at all. Let's say we come up with a way to model neurons in silicon like a very complex neural net to the point of emergent consciousness. We might not be able to look into memory location 0xFFA79348 and get the "human value" int to see if it is high enough.
It could very well be that we don't really understand the weighting in the net or the implications of those weights because it's too complicated. Even current neural nets of non-trivial complexity can be impenetrable to evaluate outside of the net itself.
I think that you overestimate our ability to understand what we can create. Even Deep Neural Net researchers consider the neural nets to be a black box that they don't really understand. Further... unlike human beings who are locked into their genetics and their neuronal pathways - an AI will have the instant ability to improve itself. It will be able to rewrite its own software faster than we can comprehend. In the hours from the time the AI would be created until a researcher even might realize what he had created, the AI will effectively have had years or tens of years of contemplation time and self-enhancement time. If the researcher goes to sleep with plans to work with the new AI the next morning, the AI will effectively have hundreds of years of time to plan and improve itself. If it has any kind of Internet access, it will be able to take over systems around the world, possibly parallelizing its own cognitive abilities and possibly backing up its consciousness in case the researcher decided to eventually wipe it.
The genie will be out of the bottle so fast that human beings will almost assuredly be caught flat-footed.
I always shake my head when I see sci-fi shows where the humans (Stargate, Star Trek, etc.) come up against advanced AI opponents. It's only through control of the script and plot contrivances that the humans have a chance. People just won't be able to compare to thinking and self-modifying AI.
This is an argument often deployed against "corporations" or "capitalism", both of which aren't quite intelligent but are definitely systems that we're dependent on and capable of disregarding inconvenient lives.
The AI doesn't have to hate humans, it just has to find that they get in the way of its quarterly targets.
What we don't know can hurt us.
Just like I'm not 'concealing my actions' by putting a blindfold over your eyes. My action is "putting a blindfold over your eyes", which must be very apparent for the entire action to have any value. Now if I hide that action by putting on an invisible blindfold, then you've got nothing to complain about anyway.
When the US government unleashed Stuxnet, did it announce what it was going to do?
Proof is hard, good enough gets deployed.
This assumes that all the processing the human brain does happens by sending signals across neurons. Which we really don't know, right?
>We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs..
Do we yet have an airplane that can come close to the efficiency of even the clumsiest of birds?
Regardless of efficiency, we have certainly exceeded the range, cargo capacity, top speed, and passenger space of most birds.
Similarly, even if signal propagation in the brain is not understood, we could conceivably beat the brain in other metrics.
This is the core point. Our technology may still suck compared to nature, but we've been beating nature consistently since before humans learned to write, because we don't have the same set of constraints that natural biological systems have.
So just like we can make inefficient flying machines that beat nature on most other conceivable metrics, we could presumably build an energy-inefficient but smarter mind at some point.
> we could presumably build an energy-inefficient but smarter mind at some point.
So this smarter mind will probably be our own, but that would also be achieved by some sort of artificial means.
> So this smarter mind will probably be our own
It's plausible for me too, but not because we can't do stuff outside what evolution did - just because it's usually easier to hack an existing system than to build a new one from scratch.
Sure it does, because physical laws are a fundamental part of Nature.
Sure, the very first calculators beat the brain in some metrics. But we are not talking about those metrics here.
It doesn't need to assume all, just some non-trivial amount. I don't know the ratio of inter-neuron to intra-neuron processing in brains, or if such a metric even exists. But it seems theoretically possible at least to measure the computing power of a neuron in isolation, and see if artificial analogs exist (or could be built) that exceed that by some margin. Ultimately we're just looking for avenues of evidence to explore.
The author makes a good point about intelligence not being one-dimensional, and I think the same thing holds for flying ability. Efficiency is one dimension, but you also have speed, range and so forth.
Head sizes have been larger in human history and some human relatives with larger brains died out. Energy consumption on the other hand is a limiting factor for every computational device and energy efficiency tends to be a key category where Nature still wins. In particular, the liver and enzymes of various plants and microbes can perform chemical reactions in water at ambient temperature that we can't do at all; resilin is still a more efficient elastic material than any synthetic elastomer; artificial piezoelectric muscles have less than a tenth the energy efficiency of real ones, and of course lots of animals are able to run around in polar and desert regions we still can't traverse because of the fuel and material costs etc.
One particularly revealing example is to calculate the ratio between:
* the performance per watt of the most efficient supercomputer (7 GFlops/watt) vs the human brain (38 petaflops / 20 watts = 1.9 petaflops per watt)
* the performance per watt of the human brain vs the Landauer limit at body temperature (don't remember)
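A quick back-of-envelope fill-in for those ratios, treating a FLOP and a bit-erasure as loosely comparable, which is a big assumption (Landauer's limit bounds irreversible bit operations, not floating-point math):

```python
from math import log

# Figures from the bullets above, taken at face value:
supercomputer_flops_per_watt = 7e9        # most efficient supercomputer
brain_flops_per_watt = 38e15 / 20.0       # ~38 PFLOPS-equivalent on ~20 W

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23                        # Boltzmann constant, J/K
T_body = 310.0                            # body temperature, K
joules_per_bit = k_B * T_body * log(2)
landauer_ops_per_watt = 1.0 / joules_per_bit  # bit-erasures per joule

print(f"brain vs supercomputer: {brain_flops_per_watt / supercomputer_flops_per_watt:,.0f}x")
print(f"Landauer vs brain:      {landauer_ops_per_watt / brain_flops_per_watt:,.0f}x")
```

Interestingly, both ratios come out on the order of 10^5: the brain is about as far ahead of our best silicon as the Landauer bound is ahead of the brain.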
Encouraging, but AI wouldn't be constrained to an energy budget similar to a human. At the rates you describe, beating a human would take the output of only a single wind turbine. https://en.m.wikipedia.org/wiki/Orders_of_magnitude_(power)
This is just such manifestly wrongheaded reasoning as to be almost laughable.
I think there's a substantially better argument against this notion than to actually try to find evidence against the claim. I prefer it because it reveals the total wrong-headedness of this style of reasoning. The argument is to turn it around and say "what evidence do we have that the limit is us?" A human level of intelligence seems like an incredibly and oddly specific level to be the maximum permissible by physics. And oddly specific claims need to be treated with extreme skepticism, because they carry high entropy, in violation of Occam's razor.
If you want to know what x equals, you don't pick a number, say, 2, assume x=2, and ask people to provide you with reasons why it shouldn't be 2. There's an infinity of numbers, and the odds are just overwhelmingly against x happening to be 2. You start with the assumption that x could be literally any of the infinity of numbers, and start looking for evidence which narrows it down.
More concretely, if you want to know the physical limit to intelligence, you start with the assumption that it could be anything, constrained by the evidence that's currently available. In this case, the evidence is that human intelligence provides a lower bound for the maximum physically allowable intelligence.
Positing that our intelligence is the maximum possible becomes even more suspect when you consider the process by which our intelligence came about - evolution.
Firstly, evolution is a process which tends towards reproductive fitness, not necessarily towards intelligence. Intelligence is selected for to the extent that it makes something more likely to reproduce, and no more, and it probably isn't the case that more smarts universally leads to more reproduction. In fact, most modern research suggests the opposite, with people with higher IQs having less sex.
Secondly, we know that natural selection gets stuck in local optima. If your species is on top of a 10 smart-points hill, and in order to get to the 1000 smart-points hill over there your species has to mutate in a direction that makes it stupider before it gets smarter, your species is going to stay on the 10 smart-points hill. This is because individuals that mutate stupider in the direction of the 1000-point hill will be selected against.
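That local-optimum trap is easy to demonstrate with a toy hill climber: a greedy "evolver" that only ever accepts uphill mutations never crosses the valley to the taller hill (the landscape here is invented purely for illustration):

```python
import random

# Fitness landscape with a small hill around x=2 (height ~10) and a big
# hill around x=8 (height ~1000), separated by a flat valley.
def fitness(x):
    return 10 * max(0.0, 1 - abs(x - 2)) + 1000 * max(0.0, 1 - abs(x - 8))

random.seed(0)
x = 2.0  # start on top of the small hill
for _ in range(10_000):
    candidate = x + random.uniform(-0.5, 0.5)  # small random mutation
    if fitness(candidate) > fitness(x):        # only uphill steps survive
        x = candidate

print(x, fitness(x))  # still stuck near x=2; never crossed the valley
```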
Thirdly, natural selection just isn't a very powerful optimization process. It works by random mutation, and it takes a lifetime to make a minuscule change which may or may not be in the right direction. Even if natural selection was solely optimizing for intelligence, and it could somehow avoid local optima, there would be no particular reason to believe that the world had existed long enough for the smartest possible organism to evolve.
Fourthly, as _greim_ touches upon, evolution has a very limited medium to work with. All it can do is change the sequence of DNA base pairs, and produce inefficient meat machines that run on respiration. There are all sorts of complex arrangements of matter which might be intelligent, and biological ones are only a tiny subset of those possible arrangements. The odds of the smartest system physically possible residing in that little biological subset are tiny to begin with, and only get smaller as you consider the more direct arguments against biological systems mentioned above, like sluggish neural transmissions, etc.
That being said, I think his article shows extreme arrogance for one simple reason: To suppose that superhuman AI (AI smarter than us) won't exist is roughly the equivalent of saying that humans are at the limit on the spectrum of intelligence. Really? Nothing will ever be smarter than us?? Highly doubtful.
That should stand on its own, but I have other critiques. For instance, why does silicon have to be assumed? Why not germanium or graphite, or something else? I have little faith that a CPU circa 2050 will be built exclusively on silicon. By 2100, no way.
Second, there is a simple definition of intelligence that is applicable to many forms: intelligence is the ability to recognize patterns and make accurate judgements / predictions based on previously seen patterns. The higher the accuracy or the more complicated the pattern, the higher the intelligence.
My final point of contention is the idea that AI must emulate human thinking. Why? Maybe human thinking sucks. Maybe Dolphins have much better intelligence, but due to a lack of opposable thumbs, they don't rule the world like we do. And lest you think that less intelligent species can destroy others, could you really doubt that roaches and ants will be extinct before us?
Which is exactly what Kelly doesn't say. He says that the smarter concept is ill defined, and that our current fantasies of some universally superior AI galloping onto the scene and taking over everything may be just that - fantasies.
Which isn't a contradiction like he claims it is. It just means that there are many different ways that a future AI can be smarter than us. That intelligence could be multi-dimensional.
But guess what: we can easily take that multi-dimensional input and find a formula that reduces it to a single scalar value based on our practical valuation of these forms of intelligence (almost like an intelligence 'utility function' from economics), and problem solved. We're right back to a single dimension for ranking intelligence.
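As a sketch of that reduction (the dimensions, weights, and scores here are all invented for illustration):

```python
# Collapse a multi-dimensional intelligence profile into one scalar with a
# weighted "utility function". Dimensions and weights are made up.
WEIGHTS = {"math": 0.3, "language": 0.2, "planning": 0.3, "perception": 0.2}

def intelligence_score(profile):
    """Reduce a dict of per-dimension scores to a single ranking value."""
    return sum(WEIGHTS[dim] * profile.get(dim, 0.0) for dim in WEIGHTS)

human = {"math": 60, "language": 90, "planning": 80, "perception": 95}
ai    = {"math": 99, "language": 85, "planning": 90, "perception": 70}

print(intelligence_score(human), intelligence_score(ai))
```

The choice of weights is where the real argument lives, of course; but given any weighting, the ranking becomes one-dimensional.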
It was a really weak argument he put forward.
Another weak argument was the branching/fan pattern of various species. Yes, all living species are at the peak of evolution for their environment, but they weren't all pressured to evolve more intelligence. Some evolved strength, speed, or flight to suit their environment.
If instead natural selection began selecting only for intelligence (like humans searching for AGI will), then you could definitely rank all animals linearly on a single path of intelligence.
A condensed way of saying precisely what Kelly is saying in the article. Allowing for the very real possibility that I am simply too dumb and not grasping your point.
but they weren't all pressured to evolve more intelligence
And it isn't claimed that they were. General evolution is used as an example of potential patterns in evolution of various intelligences.
> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
This is poor reasoning. The fact that intelligence is multi-dimensional has no bearing on our ability to declare something smarter than us. It isn't at all meaningless. Because of this he claims that there will be no super-human AI.
Via analogy, he says: "you can't compare two football players because one may be stronger, while another is faster." So the concept of "better" is meaningless, and no player can be declared better.
My response is that's absurd. A simple counter-example, a single player can be both strong and faster, and thus clearly better.
My original comment addressed that specific case.
Why a traditional computing architecture in the first place? Assuming you throw ethics out the window, the fastest path to superintelligence may be via biological material.
When considering logistics however, there may be. Digital and biological approaches might ultimately produce the same mind in the end, but their methods of getting there may differ considerably due to the underlying medium. That's where I suspect the ethical differences may lie.
Who knows, digital could end up being worse somehow—I certainly wouldn't want to be a HAL 9000 unit undergoing a shutdown procedure. Especially when I can just pay for singing lessons.
Human brains are incredibly small, a few pounds of matter. Any bigger and your mother would be killed giving birth, or you would take 10x as long to grow up. They are incredibly energy constrained, using only a few watts of power, because any more and you would starve to death. They are incredibly slow and energy inefficient; communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy. And they are far from compact - neurons are enormous and filled with tons of useless junk that isn't used for computation. Compare that to our transistor technology, which is approaching the limits of physics and is built at an atom-by-atom scale.
That's just the hardware specs of the human computer. The software is hardly better. There are just more unknowns because we haven't finished reverse engineering it (but we are getting there, slowly.)
But beyond that, the human brain evolved to be good at surviving on the savannas of Africa. We didn't evolve to be good at mathematics, or science, or engineering. It's really remarkable that our brains are capable of such things at all! We have terrible weaknesses in these areas. For instance, a very limited working memory. We don't realize how bad we are, simply because we have nothing else to compare ourselves to.
Consider how even today, relatively primitive AIs are vastly superior to humans at games like chess. Human brains also didn't evolve to be good at chess after all. Even simple algorithms designed specifically for this game easily mop up humans. And play at a level of strategy far above what even the best human players can comprehend.
Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.
Human brains are energy inefficient? Well, that's a first ;)
"In 1990, the legendary Caltech engineer Carver Mead correctly predicted that our present-day computers would use ten million times more energy for a single instruction than the brain uses for a synaptic activation."
"Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"
The main difference is computer technology is designed to be very general purpose. The brain is more like an ASIC that's hardwired to run one specific algorithm. GPUs are also computing 16 or more bits of precision, when real neurons are very low precision. There are some other differences, like how real brains are incredibly sparse and most of the synapses at any given time are dormant and not using much energy. They are also very sparsely connected to each other. While our current NNs are very dense and need to spend energy to compute every single connection each cycle.
> "Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"
Let's say AlphaGo trained for a year; that would be 1 MWyr of energy consumed. And let's assume that Lee Se-dol's brain consumed 20 W over 34 years of his life doing nothing but working on Go; that would be 680 Wyr, still a factor of 1000-ish smaller.
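Spelled out as arithmetic (every input here is a guess, per the assumptions above):

```python
# Rough energy comparison; all inputs are assumptions, not measurements.
MW = 1e6
alphago_power_w = 1.0 * MW   # ~1 MW while training, per the quote
alphago_years = 1.0          # assumed training duration
brain_power_w = 20.0         # human brain, typical estimate
brain_years = 34.0           # a lifetime of Go practice, generously

ratio = (alphago_power_w * alphago_years) / (brain_power_w * brain_years)
print(f"AlphaGo used ~{ratio:,.0f}x more energy")  # factor 1000-ish
```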
I mean, in principle, if you had two numbers, "human energy consumption from learning Go" and "AlphaGo energy consumption from learning Go", you could compare them. But in practice there's no way to come up with such numbers, so what's the point of comparing apples and orangutans?
You can expect a similar thing with Go at some point.
I'm sure any dollar store calculator spends way less energy performing long division than the average human.
And for another, calculation != AI, far from it actually.
Two, AI is applied statistics. What do you think AI is?
>I'm sure any dollar store calculator spends way less energy performing long division than the average human
That's the comment.
A calculator is a one role device, with exactly specified rules.
Similarly, with training, you can too. You don't need to be special, other than being practiced, which is a fair requirement for a human being.
Here is a human being who could outperform it:
>In 1977, at Southern Methodist University, she gave the 23rd root of a 201-digit number in 50 seconds. Her answer—546,372,891—was confirmed by calculations done at the US Bureau of Standards by the UNIVAC 1101 computer, for which a special program had to be written to perform such a large calculation.
She could easily out-perform calculators because she never needed time to key in the commands (she needs to hear the problem to solve it).
If we exclude that restriction, assume the commands magically float into the calculator, and assume the problem is small enough to match the calculator's limits, then yes, if those arbitrary conditions are met the calculator can out-perform her brain.
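Incidentally, the claim in the quote is easy to verify with arbitrary-precision integers - the kind of thing a dollar-store calculator can't do but any modern language can:

```python
# Check the quote: 546,372,891^23 should be a 201-digit number.
answer = 546_372_891
power = answer ** 23
print(len(str(power)))  # 201

# Recover the root by integer binary search to confirm it is exact.
lo, hi = 1, 10**9
while lo < hi:
    mid = (lo + hi) // 2
    if mid ** 23 < power:
        lo = mid + 1
    else:
        hi = mid
print(lo)  # 546372891
```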
Which is precisely the type of "spherical cow" thinking that's being decried in the article.
People can and regularly do out-perform calculators in speed, energy and complexity of computation.
Do note that calculators weren’t allowed as exam tools in a lot of countries till a decade or so ago. Students learnt mental math techniques which were known since ancient times (think Greece).
For a human brain the answer isn’t even calculation, it becomes pattern recognition. The square root of 25 is 5, which takes about the same neural load as it takes to recognize a letter.
The calculation you provided is harder, but that's a function of lack of training/practice, not complexity.
AI is not in the realm of what a calculator can pull off, is what I meant to say by the compute part.
edit: I tried your computation on a store calculator; it's beyond its ability to calculate (0.0000000027)
But you're right to say calculators are single-purpose devices, and that that magically begs the question.
So I downloaded sklearn, which contains a set of labelled hand-written digits.
It takes about 0.17-0.2 seconds for my laptop to learn to read numbers, from scratch, and thereafter it can read digits at a rate of about 8,200 per second.
For reference, "a blink of an eye" is 0.1-0.4 seconds depending on who you ask.
Dollar store calculator? I'd never have said that myself because I know calculators are rubbish. But in the context of "AI: Will it ever beat humans?" hell yes, it will and in many cases it already does.
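For anyone who wants to reproduce the experiment, here's roughly what it looks like. This is my sketch, not the parent's exact code; it assumes scikit-learn is installed, and timings will vary by machine:

```python
import time
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# scikit-learn ships a small set of labelled 8x8 hand-written digits.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)

start = time.perf_counter()
clf.fit(X_train, y_train)            # "learn to read numbers" from scratch
train_seconds = time.perf_counter() - start

accuracy = clf.score(X_test, y_test)
print(f"trained in {train_seconds:.2f}s, test accuracy {accuracy:.1%}")
```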
Granted, these are people at the top of their game, so may not qualify as "normal" per your argument, but this is to illustrate that the limits of numerical calculation in "normal" people may be the algorithms we are taught rather than the computational capacity of our brains.
I'm of the opinion that that's an assumption being made. An article I read at one point supposed that our brains could have, by way of example, a link to quantum mechanics that we are not aware of, due to not having yet discovered it, that causes self-awareness. The point was not so much the mechanism, but that we can't yet rule out such a mechanism existing that our brain could have but our silicon currently doesn't - because we have no real idea what consciousness is, and we've not constructed an AI that proves that no such mechanism exists. If consciousness is intelligence, this could matter more (but this could also be another assumption).
(Unfortunately, I cannot find the article.)
(The article also went on to note that we can copy computer programs, but not brains, and wondered if perhaps there is some relation to the previous problem. It was also advocating that we need a testable hypothesis around consciousness; to either prove or disprove the question "is it possible to clone a brain?" would be an extraordinary discovery either way. I found it interesting.)
> Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.
I'm just wishing for an automatic soap dispenser that can reliably sense my hand.
Quantum-based self-awareness is a cop-out IMO, an open way to claim that it comes from God. Well, if that's true and it's beyond our reach, then we'll never build a self-aware AI ourselves. But assuming something is impossible doesn't help moving forward...
Not so much an assumption as a reasonable guess. Is it logically possible that there's something magical about human brains? Yes.
But there are countless examples of things we thought were magical historically, and, as far as I know, exactly zero that we've ended up concluding actually were magical, and a large number that we're pretty confident aren't at this point.
The smart money here is on "no, the human brain isn't magical."
If we've made a science out of it, no matter how advanced it is, it's not magic - e.g. if telekinesis were possible, it would have a scientific explanation and be a natural power. That bears some similarity to the way things are called "artificial intelligence" while they're impossible, only to be reduced to "trivial" when someone makes a machine do them: chess, Go, having a large vocabulary or knowing many languages, being good at arithmetic or being able to do calculus at all, writing music or news stories, trading on the stock market - all things that used to be synonymous with genius, yet all done by machines we don't call intelligent.
That's not an article but a whole book:
I just want to point out that some recent research indicates that this might not actually be true. Single neurons may be more complex than we thought since researchers managed to extract significant and complex functionality from single neurons (!). Not to mention glia seem to be important for cognition, and not just glue as we thought... So perhaps not as wasteful and kludgy as we might think :)
You can think of intelligence as a couple of dimensions rather than just one dimension as the author is arguing. However I don't agree with his/her logic.
You can think of the brain as a massively parallel computer. Intelligence can be gauged by total cores, memory and CPU per core, power usage, and network bandwidth between cores. Sure, different configurations will give different strengths, but something optimized at the atomic level for performance will be faster than a mushy brain.
Now imagine a billion of these little pods placed all over the planet, with amazing bandwidth between the computers akin to our internet.
Worst case: they think humans are the enemy and out-think us to extinction.
Best case: studying intelligence helps us figure out how brains work. This helps us figure out how DNA nano-manufacturing works. We figure out brain-to-brain and brain-to-AI networking. Now all humans are collectively a supercomputer, able to share knowledge and abilities. Maybe this is extended so animals are also included, and Earth becomes a Pandora-like planet where every living thing is connected and they all live in a harmonious, resource-optimized world where AI tries to maximize the happiness of every living being.
The hype chamber in SV is on overdrive on AI at this point.
The singularity-AI concept has gone from the neat sci-fi idea it was to being treated as a serious possibility. This is absurd - the modern version of Malthus - and a community of people who pride themselves on their reason should do a basic sniff test.
The author has it correct - human brains themselves are significantly impressive on a weight/power/capability scale.
But forget all of the intelligence debate, consider the things that people magically ignore: emotions.
Take a look at how people who are depressed function. They measure as being more aware (better assessment of their surroundings, relations, and selves) than people who are not.
At the same time, they lack motivation, or desire - there is a lack of impulsive force to move forward.
Intelligence doesn't give human beings purpose. Emotion does.
This isn't feel-good psychobabble; it's a fundamental part of good mind/body housekeeping, which a huge chunk of SV follows every day to reach "peak performance".
How are you going to create a general purpose AI which has any motive force?
It's actually MORE LIKELY that, if a GP AI is created, it will just sit on its ass and pretend not to understand instructions.
So would an explicit goal function, which is how we already give "purpose" to algorithms.
> How are you going to create a general purpose AI which has any motive force?
In any of countless other ways. What makes you think that emotions are necessary to create a powerful optimization process?
The "motive force" is mostly a solved problem for now, we can code that explicitly (the issue is with figuring out a right goal and how to write it down). AI development is mostly about the way it works, not why it should do the work.
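To make the "we can code that explicitly" point concrete, here is a toy sketch (purely illustrative, not any real system's code): an agent whose entire "motive force" is a hand-coded objective function plus a loop that climbs it.

```python
# A hand-coded "motive force": the agent's entire drive is an explicit
# objective function plus a loop that climbs it. (A toy illustration,
# not any real system's code.)

def objective(state: float) -> float:
    # The goal we hand the agent: get state as close to 10 as possible.
    return -abs(state - 10.0)

def step(state: float, actions=(-1.0, 0.0, 1.0)) -> float:
    # Greedy hill climbing: take whichever action scores best.
    return max((state + a for a in actions), key=objective)

state = 0.0
for _ in range(20):
    state = step(state)

print(state)  # -> 10.0: goal-directed behavior with no emotions anywhere
```

The hard part, as noted above, is choosing the objective, not making the agent pursue it.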
No offense meant to you personally, but I find that most of the comments about the "overdriven AI dangers focus in SV" reveal that their authors don't have a fucking clue what the issue is about and have never spent time actually reading up on the reasoning behind the AI X-risk potential.
I'll give an ultra-compressed super-TLDR of that reasoning for benefit of future conversations. It goes like this:
- intelligence is a super-strong optimization process; it doesn't necessarily have to look the way humans think (humans generally suck at reasoning, which is well established; see: cognitive biases, probability theory, decision theory, etc.)
- intelligence is an independent factor from values/goals, a mind can have any combination of the two - i.e. just because it's smart, it doesn't mean it will develop the same morality humans do, or any morality whatsoever; see https://wiki.lesswrong.com/wiki/Orthogonality_thesis
- combining the two, the danger of super-human AI is not something that's hostile to us - it's something that's indifferent about us, and is more powerful than us, the same way we don't give a second thought about e.g. ants
That's interesting. Do you have a reference for that?
This is me if I were an AI.
1. Bootstrap. In order to really take off, AI needs to be able to build better AI.
2. AI needs to be adaptable. Any human Go player can play on a 19x20 board. AlphaGo (a program that beats top professionals 60:0 on a 19x19 board) could not even make a legal move.
3. AI should be resilient. Right now it runs on hardware made by humans and blindly relies on that hardware working perfectly all the time.
There is nothing magical about anything. There is, however, a bunch of unknowns, and a lot of knowns that point in directions away from singularity babble.
This is exactly the point of the article - that we aren't going to a singularity, we are going towards a "Cambrian explosion"
That may not even be true, we're just not smart enough to make the call one way or the other. Dolphins come to mind, as well as several other creatures (Squid, Apes). The fact that we can't rule it out is in a way proof that maybe we're not as smart as we would like to think we are.
Same with the solar system. We can create simulated solar systems. We will probably be able to make simulated brains. What is your point?
Alright, assuming the first AI even is a simulated brain (I don't see why we can't invent intelligence from first principles first.) We could definitely tweak it. We know of drugs like amphetamines that enhance human ability. That's like Stone Age tech compared to what's possible if we had unlimited control over every neuron in the entire brain. Not to mention just adding more neurons and making a larger brain. This is just touching on what's possible, I could go on for ages on possible improvements.
In everyday life, we treat other minds more like "maps" than like "nuclear reactions". We interact with nuclear reactions by getting radiation sickness and dying. We interact with people through more indirect channels. A simulation could be at the other end of the channel and we wouldn't know. Maybe he wouldn't know, either!
We don't get electrocuted when our interlocutor's neurons are firing; the effects on us come only through his external behaviour. It's the abstract informational flow that seems to matter.
That sounds like a really interesting experiment. Have you published your data?
A nitpick: the neurons actually do use electricity to send signals and (looking at the neuronal model in its history¹) it actually evolved to send signals along as fast as it could. The chemical component, AFAIK, is the physical implementation for memory, so this is kind of the storage class regarding operation performance. Overall though, I agree that our cerebral organ may not be the apogee of computational performance.
¹ As far as I remember, dinosaur neurons didn't have myelin, which for some large animals caused non-trivial sensory delay.
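A back-of-envelope comparison of signal delays makes the point about conduction speed concrete. Both speeds below are ballpark textbook figures, assumptions rather than measurements:

```python
# Back-of-envelope signal-delay comparison. Both speeds are ballpark
# textbook figures (assumptions, not measurements): fast myelinated axons
# conduct at roughly 100 m/s; electrical signals in a conductor propagate
# at a large fraction of the speed of light, roughly 2e8 m/s.
axon_speed = 100.0    # m/s, myelinated nerve fiber (assumed ballpark)
wire_speed = 2.0e8    # m/s, ~2/3 c in a transmission line (assumed)
distance = 0.1        # m, roughly the span of a human brain

axon_delay = distance / axon_speed   # about 1 ms
wire_delay = distance / wire_speed   # about 0.5 ns

print(axon_delay / wire_delay)  # ~2e6: the wire is ~2 million times faster
```

Whatever the exact numbers, the gap is millions-fold, which is the substance behind "mushy brains vs. silicon" comparisons.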
What other intelligence evolved, and in what sense is the human brain the first in evolution?
We've seen no other creatures on Earth being able to grow beyond using basic tools.
But this also somewhat implies (which is GP's point) that we're close to the dumbest possible mind that evolution could make that could still develop science. Or think of it as a crude MVP that ended up being a production system.
To claim something as first, others in reality must have followed it. If there is no second, there can be no first. Is there a second intelligence that has invented science, or are we the only ones?
When somebody comes up with a reasonable, detailed model of how my memories are stored (just to name an example), I shall once again start paying close attention to the whole AI field.
> Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new.
I'm not a historian, but I have read that some scientific discoveries are made through pure logic. Einstein and relativity come to mind, as he was not an empiricist. So perhaps there is some hope that AI can lead to scientific discoveries without experimentation.
>So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?
The idea that humans could, just by chance, be pushing the limits of intelligence strikes me as silly.
Imagining that there will be near-human IQ AIs that think orders of magnitude more quickly than humans puts the cart before the horse, as well.
The actual intelligence may be about the same as (or a little higher than) average human intelligence, but the effect of the higher speed would likely be a measurable difference.
So when you say, "Here's how we'll get superhuman AI: We'll network together bunches of 80 IQ AI's that think 100x faster than a human," it's kind of assuming its own solution.
I think networks of AI can become super powerful even without being nearly as smart as individual humans. AI will only need to approach certain aspects of human IQ. It can still be much dumber than us in most ways. Most of human progress has been built up through vague interactions and feedback loops via markets, religions, and nations.
Adding people to a late project makes it later.
Not all projects are beholden to the rules of the mythical man month.
And even for those that are the primary reason they are is basically the lack of a fast, efficient way to transfer built up institutional knowledge, which may be far less of a factor for AI collaboration.
If I've been working on a project for the past year as the sole developer and I could copy a snapshot of my brain to a portion of your brain that you can access without interrupting me, adding you on as a collaborator is very unlikely to make the project later and very likely to make it go much faster as you could very quickly 'hit the ground running'.
A lot of the problem with the current limits of human intelligence is really bandwidth problems with getting information transferred in and out of our meat brains effectively. Our bandwidth for knowledge import/export is DREADFUL.
AI collaboration (even assuming that bandwidth limitation goes away or is vastly reduced) would still introduce the issue of increased communication requirements, but compared to the initial transfer of acquired knowledge that one isn't as big a deal in my experience and only becomes problematic at great scale.
Perhaps more relevantly, I think there's a more basic questionable assumption - the idea that Church-Turing is the best computational basis for AI.
I strongly suspect this is not true - because even after all this time, our best abstract models of computation are still extremely brittle and fragile.
In a hand-wavey sense, as soon as you start allowing for arbitrary inputs and context awareness, you're no longer dealing with a simple Turing architecture but with something else entirely.
What you are talking about is a scientific hypothesis/theory, which obviously needs to be tested using experiments to make sure it works. Without experiments there is not much you can do in the real world.
Dunno, I don't really believe we are that close to building that sort of AI, but it doesn't seem fundamentally impossible, and it does seem like it could do things that to us would look as "more intelligent" than us. It might in the end be better at creating scientific knowledge in the way current computers are better than us at solving arithmetic: faster and capable of holding more in memory, rather than following any qualitatively different process. But even that would be enough.
It's conceivable that someone could construct a special-purpose AI which isn't superhuman but uses advanced ML and NLP algorithms to mine the scientific literature for previously unnoticed implications and relationships between seemingly unrelated articles. But I expect that would only give a suggested starting point for human researchers to formulate hypotheses and design experiments.
Don't get me wrong, I think it would be useful; it's just that the gap from human-level AI to 1,000x-human AI is simply not that huge. Let's say you moved someone from 2006 to 2016, or even from 1996. Yeah, sure, there is real progress, but not really. We have better drugs for AIDS, for example, but we are worse off in terms of antibiotics. Game graphics have improved, but Quake is from 1996, so we already had real 3D first-person shooters, and gameplay is not that different. Hell, FTP is arguably worse. Further, that's 20 years, so we are talking literally millions of man-years of effort and trillions of dollars worth of R&D for not much.
In terms of machines, the SR-71 is still the fastest manned aircraft, and there are no supersonic passenger aircraft anymore. The tallest building is much taller, but lacks much space on the top floors, making it more monument than utility; the Sears Tower has more usable space and a smaller footprint.
Maxwell's equations were a game changer. So were Newton's laws. So were Relativity and QM.
Church-Turing was also a game changer. But I don't think there's been anything equivalent in computing since.
There's been a lot of application, but no game-changing deep theoretical insights.
Quantum computing may - should? - eventually throw up something new.
It's going to have to. If it doesn't I think we're going to be stuck with much less progress than we're expecting.
Quantity when the difference is large enough becomes quality. The 9 orders of magnitude or so that computers have gone through in storage capacity and speed definitely count as a game-changer.
Human intuition was the major thing getting in the way, and even Einstein's intuition got in the way of his acceptance of quantum mechanics.
Simple artificial life of some kind seems much more likely in the foreseeable future. I've often wondered why no one has written a virus or worm capable of mutation or even crossover (sexual reproduction of a sort). Under selection pressure it might evolve into something resembling life. The Tierra experiments were fascinating work in this area.
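The mutation-plus-selection loop described above fits in a few lines. Here's a toy version on bit-string "organisms"; the fitness function and parameters are arbitrary choices for illustration, and adding crossover would give the sexual-reproduction variant:

```python
import random

random.seed(42)

# Toy mutation-plus-selection loop on bit-string "organisms" (fitness
# function and parameters are arbitrary choices for illustration; adding
# crossover would give the sexual-reproduction variant).
GENOME_LEN, POP, GENS = 32, 50, 100

def fitness(genome):
    return sum(genome)            # selection pressure: more 1-bits is fitter

def mutate(genome, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]    # the fitter half survives unchanged
    offspring = [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]
    pop = survivors + offspring

print(max(fitness(g) for g in pop))  # climbs toward the maximum of 32
```

Tierra worked on executable machine code rather than abstract bit strings, but the selection dynamics are the same idea.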
It's the wrong argument entirely. It has nothing to do with intelligence but everything to do with knowledge. Smarts are useless if you have no knowledge. And that's exactly where computers have the edge. Even a 'regular' human level intelligence with all the knowledge that currently exists at its beck and call would be a formidable adversary. Remember: Knowledge is power.
An AI can certainly produce a theory which matches observations, but no matter how powerful it is it won't be able to magically produce only correct useful theories any more than a human can.
We had over 200 years from Newton's laws of motion to produce relativity from pure logic alone, but we didn't manage it. We had to reason from the evidence, and even then we didn't accept relativity as "true" until it had been experimentally verified.
This approach does nothing to convince anyone who believes differently that they are wrong, all it does do is convince them that the "true-believers" are themselves deluded.
This does nothing to advance the understanding of society of the universe around them.
They say "give me evidence", because so far they have an enormous mountain of evidence that SR and GR describe reality well.
That doesn't excuse people being jerks -- science should be welcoming, not alienating -- but it does mean "well, have you considered X" is almost always coming from a place of ignorance, not well informed skepticism. Some impatience with the cranks of the world has to be excused.
I am also looking at a paper that purports to say that Einstein's GR, as used to calculate the precession of Mercury, relies on a couple of simplifications which, if not used, give GR calculating the precession of Mercury at twice (or thereabouts) what is measured. To test the veracity of the paper, I'll have to do some serious brushing up of my engineering mathematics.
Another example is related to the speed of light c. This value is inversely proportional to the square root of (the electric permittivity of the environment times the magnetic permeability of the environment). The normal assumption is that outside of the atmosphere of the earth, the environment is a perfect vacuum. It is assumed that the speed of light is therefore a constant in a perfect vacuum.
For many areas, this simplifying assumption is adequate, but it may not be true. It is only assumed. It may turn out that, due to variations in both the permittivity and the permeability of extraterrestrial environments, this assumption is not just inadequate but grossly inadequate.
When doing any experimental work, one should keep in mind and have written down all the base assumptions, otherwise, one may garden path oneself.
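As a quick numerical check of the relation mentioned above, c = 1/sqrt(ε₀μ₀), using the standard SI vacuum values:

```python
import math

# Numerical check of the relation c = 1/sqrt(eps0 * mu0),
# using the standard SI vacuum values.
eps0 = 8.8541878128e-12   # F/m, vacuum permittivity
mu0 = 1.25663706212e-6    # H/m, vacuum permeability

c = 1.0 / math.sqrt(eps0 * mu0)
print(round(c))  # -> 299792458 (m/s), the familiar speed of light
```

In a medium, both constants scale up, which is exactly why light slows down in glass or water; the comment's question is whether "empty" space is ever such a medium.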
This has led to the idea that there has to be additional unseen, unmeasured matter (aka dark matter) and unseen and unmeasured energy (aka dark energy) to explain the anomalous measurements of galactic movement that is not explained by the "gravity is the only significant force" model.
Now, it may be that there is such a beasty. But I have my serious doubts, because electromagnetic fields have more intense effects than gravity by about 36 orders of magnitude.
In other words, some environments may be completely neutral and gravity is the major effect, in others, the environment may be very electrical and gravity is the least effect.
Our problem is that we can observe only at vast distances and are not able to take direct measurements to confirm our indirect modelling measurements.
Science is no different from any other aspect of people's lives: they will invest great amounts of energy in following their specific model of the world. When challenged, they mostly look at others as imbeciles who are incapable of seeing "the truth". I've seen this occurring in areas like string theory, black holes, dark matter, sub-atomic physics, climate science, immunology and vaccinations, medical surgical techniques, religion, politics, economics, sociology, finance and business, education, archaeology, evolution, etc., etc., etc.
Our models and theories are just that: models and theories. We hope and believe they give us insight into the universe around us, but they are not reality. Each has limits, and all use simplifying assumptions that are not accurate outside of the limits imposed by those assumptions.
If on the basis of our assumptions, we have to keep adding more and more "strange" entities to make our model fit the observations, then, maybe, we should go back and look at the underpinnings of our models and see if another model will work better (as in the kind of change that eventually occurred when going from the earth-centric epicyclic model to the sun-centred heliocentric model).
The fun is being able to learn more and more about how our universe operates and what we can do within it.
And the idea that all forms of computation, like Turing machines, lambda calculus, etc., are equivalent also seems pretty darn silly at first glance.
1. This doesn't really bother making an argument against superhuman intelligence. Yes, of course intelligence has many components (depending on how you measure it), but that's not an argument against superhuman intelligence. I'm reminded of the joke paper claiming machines can never surpass human largeness, because what does largeness even mean? Why it could mean height or weight, a combination of features, or even something more abstract, so how can you possibly say a machine is larger than a human?
2. Mainly arguing about the definition of 'general' without even trying to consider what the actual usage by Bostrom et al is (this was in the introduction or first chapter if I recall correctly). I agree that the different modes of thought that AI will likely make possible will probably be very useful and powerful, but that's an argument for superhuman ai.
3. Well he makes his first real claim, and it's a strong one: "the only way to get a very human-like thought process is to run the computation on very human-like wet tissue." He doesn't really explore this, or address the interesting technical questions about limits of computational strata, algorithm efficiency, human biological limitation, etc.
4. Few if any think intelligence is likely to be unbounded. Why are these arguments always 'x not infinite, therefore x already at the maximum?' He also seems to be creating counter examples to himself here.
5. Lots of strong, completely unbacked claims about impossibilities here. Some number of these may be true, but I doubt we have already extracted anything near the maximum possible inference about the physical world from the available data, which is basically what his claims boil down to.
Here's the thing. If Bostrom's claims about AI are so strong, why does everyone who's referring to his book as their source of beliefs about the future spout non-sequiturs about AI?
Here's an example. 80000 Hours has a mission that I generally agree with, to find the most important problems in the world and how people can most effectively work on them. But somehow -- unlike cooler-headed organizations like GiveWell -- they've decided that one of the biggest problems, bigger than malaria, bigger than global warming, is "AI risk" (by which they mean the threat of superhuman AGI, not the real but lesser threat that existing AI could make bad judgments). 
To illustrate this, they refer to what the wise Professor Bostrom has to say, and then show a video of a current AI playing Space Invaders. "At a super-human level", they say pointedly.
What the hell does Space Invaders have to do with artificial general intelligence?
For that matter, what the hell does deep learning have to do with AGI? It's the current new algorithmic technique, but why does it tell us any more about AGI than the Fourier Transform or the singular value decomposition? I would say this is a bias toward wanting to believe in AGI, and looking for what exists in the present as evidence of it, despite the lack of any actual connection.
Has 80000 Hours been bamboozled into thinking that playing Space Invaders represents intelligence, or are they doing the bamboozling? And if Bostrom is such a great thought leader, why isn't he saying "guys, stop turning my ideas into nonsense"?
People do sometimes overblow AlphaGo/DQN playing Atari, but it's not meaningless. These systems (and other deep-learning-based systems) can truly learn from scratch on a decent variety of environments. One of the most important unknowns is exactly how difficult various cognitive tasks will prove to be for a machine. Each task accomplished is another data point.
Deep Blue was hand-led directly and specifically to solve the problem of chess: It was provided with a library of opening moves, some sophisticated tactical algorithms relevant to the problem of chess, a library of strategies for chess, and so on. Many actual human masters of chess were consulted, directly or indirectly, to help with developing Deep Blue's approach to the problem.
DeepMind, on the other hand, was created as a "blank slate" with no more hard-wired instruction than "create optimal algorithms to achieve the winning state, given the inputs." Critically, its learning phase is completely self-directed. Essentially, the box is given access to the controls and the video screen content and then sent on its way.
It's instructive to note that this is pretty much exactly how, very generally speaking, evolution and intelligence solve the problem of survival: every organism has controls and a glimpse of "game state" and has to learn (collectively as a species, individually as an organism) to play the game successfully.
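That "controls plus a glimpse of game state, learn by trial and error" loop is the essence of reinforcement learning. A minimal tabular sketch, vastly simpler than DeepMind's DQN and purely illustrative (DQN replaces the table below with a deep network reading raw pixels):

```python
import random

random.seed(0)

# Minimal tabular Q-learning on a 5-state corridor: the agent sees only a
# state index and a reward signal, and learns by trial and error to walk
# right toward the goal. No strategy is hand-coded anywhere.
N_STATES = 5
ACTIONS = (0, 1)                 # 0 = move left, 1 = move right
GOAL = N_STATES - 1
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.3

for _ in range(500):             # 500 episodes of trial and error
    s = 0
    while s != GOAL:
        if random.random() < eps:                     # explore
            a = random.choice(ACTIONS)
        else:                                         # exploit current knowledge
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == GOAL else 0.0                # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy is "move right" in every state.
print(all(Q[s][1] > Q[s][0] for s in range(GOAL)))  # -> True
```

Nothing in the code says "right is good"; that knowledge is obtained purely from the reward signal, which is the distinction the comment above is drawing against Deep Blue.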
Have you seen the DeepMind algorithm, to be able to say this? Are there other people outside of Google who have seen the algorithm and can confirm Google's press release?
DeepMind announced they were switching over to TensorFlow not long ago.
TensorFlow is fully open-source, you can go and read the algorithms on Github. Neural networks aren't particularly hard to understand.
The post you are responding to paraphrases the situation, and neglects to mention goal states/conditions, but is otherwise a fair summary of how neural networks work.
Neural networks, particularly the 'deep' ones, do seem to operate almost like magick. Disbelief is to be expected, but I suggest you take a proper look at them yourself, and see what they can actually do.
If you can grok Python, I highly recommend playing with the Docker image of TensorFlow, which provides a Jupyter notebook (Python in a web interface) where you can easily make use of existing pre-trained models, extend them, and/or build your own models — suitable for application to all kinds of problems / problem domains.
Siraj Raval's videos also cover neural networks in almost every aspect - from coding your own NN from scratch for digit recognition (using just NumPy), through explaining what TensorFlow is, how it works, and how to use it — along with a lot of other machine-learning algorithms, libraries and cloud services.
Many of these topics are just a 5-10 minute video (some with longer live-coding 'deep dive' follow-ups), and many of the code examples are fully functional but just a screen or two of code.
There's also some great new videos on AI / machine learning by Google and Facebook on Youtube that are well worth a watch if the topic interests you.
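In the spirit of the "NN from scratch in just NumPy" exercises mentioned above, here is about the smallest possible version: a single neuron learning logical AND by gradient descent. It's a toy rather than digit recognition, but the forward-pass/gradient/update mechanics are the same:

```python
import numpy as np

# The smallest "neural net from scratch in just NumPy": one neuron
# learning logical AND by gradient descent. (A toy, not digit
# recognition, but the same mechanics.)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

w = np.zeros(2)
b = 0.0
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    out = sigmoid(X @ w + b)   # forward pass
    grad = out - y             # gradient of cross-entropy w.r.t. pre-activation
    w -= 0.5 * (X.T @ grad)    # gradient-descent step, learning rate 0.5
    b -= 0.5 * grad.sum()

print(np.round(sigmoid(X @ w + b)))  # -> [0. 0. 0. 1.]
```

Everything a deep network adds, more layers, more parameters, fancier optimizers, is elaboration on this loop.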
But then I wonder why you aren't asking the same question of my parent poster. Has he viewed the DeepMind code? Is he qualified to tell us it works the same as chess code? Having made that claim backed by even less evidence than mine, I'd say his burden of proof is somewhat greater.
I don't think Google has fundamentally different deep-learning technology than everyone else. In fact, TensorFlow indicates that they have the same kind of deep-learning technology as everyone else and they just want to do it more cleanly.
Deep learning is parameter optimization. There are more parameters now, and they optimize more things, but don't get caught up in wild visions of machines designing themselves. Would you consider the bzip2 algorithm to be "self-directed learning"? What's the difference, besides the number of parameters?
The PR people, when they say "blank slate", are discounting all the programming that went into the system because it sounds more impressive that way. This is unfortunate. It has happened in AI for decades. To be a responsible consumer of AI press releases, you need to understand this.
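To make "deep learning is parameter optimization" concrete: stripped of layers and data, a training loop is just nudging parameters downhill on a loss. Here with a single made-up parameter and the toy loss (θ − 3)²:

```python
# "Deep learning is parameter optimization": stripped of layers and data,
# a training loop is just nudging parameters downhill on a loss. Here with
# one made-up parameter and the loss (theta - 3)^2.

def grad(theta):
    return 2.0 * (theta - 3.0)   # derivative of the loss (theta - 3)^2

theta = 0.0                      # the lone "parameter"
for _ in range(100):
    theta -= 0.1 * grad(theta)   # gradient-descent update

print(round(theta, 6))  # -> 3.0, the loss minimum
```

A modern network differs in scale (millions of parameters, gradients via backpropagation), not in kind; nowhere in the loop does the system "program itself".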
That's true, and I never claimed otherwise, but that doesn't help you argue your point - in fact, you just proved yourself wrong. From IBM's press release:
> _Does Deep Blue use artificial intelligence?
The short answer is "no." Earlier computer designs that tried to mimic human thinking weren't very good at it. No formula exists for intuition. So Deep Blue's designers have gone "back to the future." Deep Blue relies more on computational power and a simpler search and evaluation function._
I'll summarize for you: Deep Blue and DeepMind, similar names notwithstanding, work in very different ways.
What I am saying is that I know how deep learning works, actual deep learning of the present, and it does not involve "programming itself".
You are trying to tell me that it must be programming itself, because a press release said so, and press releases would never lie or exaggerate. Based on the current state of AI, this is very improbable. You should focus less on trying to "prove" things with press releases.
I made the comparison to Deep Blue because there is little mystique around it now, and because IBM was even reasonably responsible about avoiding AI hype in their press at the time.
There are probably numerous variants of it that did not learn to play Atari games, and therefore were not announced.
To help you appreciate the difference, try answering this question: were expert players of Pong, Space Invaders, or any other video games consulted to contribute strategies for winning those particular games? Was a redesign required to play a new, different game?
If not, you'll need to tell me where the knowledge of how to win at those games came from. I hope you'll appreciate that the ability to autonomously obtain that knowledge by trial and error rather than design and programming constitutes the fundamental difference.
The DeepMind implementation learns how to play Atari games. You could even say that it's learning how to play one game, where your opponent chooses the game cartridge and your goal is to increment the score counter.
You would absolutely need to redesign DeepMind to play Super Mario Bros.
When you give DeepMind credit for designing itself, you are glossing over the fact that it was programmed by people, most likely for long hours.
If you deny the existence of any such thing, isn't that tantamount to saying that given the choice between two courses of action, there's never a reason to pick one over the other?
I mean, I feel like as humans, we're trying to follow fuzzy approximations of that function all our lives, whenever we're earning money, or spending time with people we love, or whatever, we're doing it because we think it will bring us happiness or life satisfaction or something.
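One way to formalize "a reason to pick one course of action over another" is expected utility. A toy sketch; the actions, probabilities, and utilities below are entirely made up for illustration:

```python
# One way to formalize "a reason to pick one action over another":
# expected utility. The outcomes, probabilities, and utilities below are
# entirely made up for illustration.
actions = {
    "save money":  [(0.9, 5.0), (0.1, 0.0)],   # (probability, utility) pairs
    "spend money": [(0.5, 8.0), (0.5, 0.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> save money  (EU 4.5 beats EU 4.0)
```

Humans plainly don't compute this consciously, but denying any such function exists does seem to amount to denying there's ever a reason to prefer one action.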
The genome is a kind of algorithm for hardware to implement a strategy to win at this game in a highly competitive environment.
Regarding point #1, while still not being formally wrong: world computing capability is growing at an exponential rate. Not even the end of Moore's law will stop that (e.g. 3D transistor stacking, strong semiconductor demand from consumer and industrial markets, etc.). Also, the author doesn't know whether there is already enough CPU capacity to match human intelligence: maybe the missing key is not the hardware but the software (efficient algorithms for "human" intelligence running on silicon).
Point #2 is clearly wrong. Demonstration: I, for one, if still alive and given the chance, will try to implement general-purpose intelligence "like our own". And, come on, I know no hacker able to resist that.
Again, point #3 is wrong, unless you believe we're smart because of a religious "soul".
Point #4 is a void argument: the Universe itself is finite.
Point #5 is right: a superintelligence may, or may not, care at all about our problems. In the same way, you have no guarantee of a human government caring about you (e.g. a totalitarian regime).
"I asked a lot of AI experts for evidence that intelligence performance is on an exponential gain, but all agreed we don’t have metrics for intelligence, and besides, it wasn’t working that way. When I asked Ray Kurzweil, the exponential wizard himself, where the evidence for exponential AI was, he wrote to me that AI does not increase explosively but rather by levels. He said: “It takes an exponential improvement both in computation and algorithmic complexity to add each additional level to the hierarchy…. So we can expect to add levels linearly because it requires exponentially more complexity to add each additional layer, and we are indeed making exponential progress in our ability to do this. We are not that many levels away from being comparable to what the neocortex can do, so my 2029 date continues to look comfortable to me.”
"What Ray seems to be saying is that it is not that the power of artificial intelligence is exploding exponentially, but that the effort to produce it is exploding exponentially, while the output is merely raising a level at a time. This is almost the opposite of the assumption that intelligence is exploding. This could change at some time in the future, but artificial intelligence is clearly not increasing exponentially now."
The last bit about requiring experiments in real time is also interesting.
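Kurzweil's claim can be restated in one line of math: if reaching level n costs on the order of baseⁿ effort, then exponentially growing effort buys levels only linearly, since levels ≈ log(effort). A toy illustration (the base and the effort curve are arbitrary assumptions):

```python
import math

# Kurzweil's claim in miniature: if reaching level n costs on the order of
# base**n effort, then exponentially growing effort buys levels only
# linearly, since levels ~ log(effort). Base and effort curve are
# arbitrary assumptions for illustration.

def levels(effort, cost_base=10):
    return math.log(effort, cost_base)

efforts = [10 ** year for year in range(1, 6)]   # effort grows 10x per step
print([round(levels(e)) for e in efforts])  # -> [1, 2, 3, 4, 5]
```

Exponential input, linear output: that's the opposite of an intelligence explosion, which is exactly the quoted author's point.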
He is fighting the original myth with his own myth, except that his myth is founded upon his own assumptions and intuitions as opposed to those of someone else.
It seems more likely that we simply don't know the answer to many of these questions yet, because we still have major disagreements about exactly what intelligence is. To use Richard Feynman's famous quote: if we can't yet build it, then we don't understand it.
AI doesn't need to exceed humans in every dimension to become a threat. Just sufficient dimensions.
Humanity is basically a bacteria colony in a petri dish with I/O. Disrupt infrastructure and you disrupt input, leading to changes in the size of the colony. And mind you, much of our infrastructure resides in the cloud.
Of course, it will be a while before this even becomes an issue, but this is basically how a machine would frame the problem.
Implementation-wise, AI doesn't need to be general. In its most inelegant (and not too distant) design, ML can be configured as a fractal of specific algorithms, with one on top tasked with designating goals and tasks, and subordinates spawning off generations and evaluating performance.
Andy Grove had a good saying: "anything that can be done will be done."
Autonomous AI, if it does not break the laws of physics, will exist. Its development will be spurred by our curiosity or profit.
If we didn't just have the results of scattered tests, but had an exhaustive set of docs written by the creator of the black box, it still wouldn't be easy. But we'd have a chance. This is why one of the main strands of AI value alignment research focuses on building an AI that we can understand. If we can build something we can understand, that gives us leverage to alter it to value our interests.
(What "our interests" are, in a well-specified technical sense, is a whole 'nother problem, and one that there's very little progress on.)
Well, there's the old saying, "If you can't explain it to a six-year-old, you don't understand it yourself." I think if there were a superhuman intelligence, it would likely understand where our limitations as humans lie, break down the components of its discoveries into the simplest terms, and be able to teach us, even if at an incredibly slower pace than it can process.
This reminds me of Gödel's proof that our current formal systems of mathematics cannot prove everything; it may even go as far as saying that every sufficiently powerful formal system will always leave some things it cannot prove. Obviously a robot has to use some formal system to come to its conclusions, so if it's really so smart, can it break down the system it used so that we can understand its basic building blocks? Of course, there's always the human rate-of-computation and memory problem in the way.
Of course if you're saying that the super human intelligence would be on an entirely different plane of thought impossible for us to understand then that's understandable, but probably less believable.
This line of thinking reminds me of the book Flatland by Edwin Abbott. Flatland is basically the story of a two-dimensional square who lives on a two-dimensional plane and knows his world only through his limited perception of it. One day he is visited by a three-dimensional sphere who explains his world in a way imperceivable to him. The sphere somehow magically takes him off his plane of existence and shows him his world from the sphere's own view, then takes him on a tour of worlds perceived in different dimensions: from Pointland to Lineland, then to three-dimensional space, and finally back to his home plane, where he is eventually locked in a mental institution for telling people of his adventures. Anyway, it's an interesting fantasy story; I recommend it.
Hate to go all metaphysical on you all, but basically the story shows that we only know things from our limited perspective of the things around us. If there are any higher "planes" of perception, it's entirely possible that we wouldn't know about them. Some things can only be known with certain perceptions, experiences, or knowledge.
It may sound ludicrous, but I would even say Christianity backs this idea to some degree attributing God to a higher plane of thought like in Isaiah 55:8-9.
Of course I would never put a robot on that level, but I could see some things being imperceptible to the human mind, so the principle is similar. Can robots achieve a higher plane to any degree? Beats me. Honestly, I suspect it wouldn't be so high that they couldn't explain their thoughts to us.
With superhuman AI, we would be in much the same situation as the great apes: their survival depends far more on what we do than on what they do. Just as the great apes cannot constrain our actions, we would not be able to constrain the superhuman AI's actions.
On a darker note: as Homo sapiens wiped out all the other hominids, there is a good chance that a superhuman AI would try to wipe out humanity, since we are probably the greatest threat to its independence and well-being.
Unfortunately he gets hung up on the definition of Intelligence - and not unreasonably so - because it is very ill defined and largely unknown. So all of what he says is true, but orthogonal to the argument he is trying to debunk.
It's basically setting up a pedantic straw man and then taking it apart.
There are other great and more compelling arguments against an all powerful superhuman AGI, unfortunately he doesn't make any of those.
These issues are mind-bending topics that stretch the imaginations of the most brilliant people I am aware of. It takes them a lifetime to build good intuitions and analogies. I wish that writers of this caliber felt as qualified to write one sentence about it as they actually are.
You can imagine this as a group of people who speak with each other. The faster they can communicate ideas, the more intelligent the group can potentially be. Machines could surpass the speed of this collective intelligence by orders of magnitude, even if everything else were exactly as in a human. This is exactly why we evolved to devote so many brain resources to language.
You do not, in fact, get general intelligence by accident by throwing a lot of connections in one place, just like you do not get a human just by throwing a lot of organic molecules and water in one place.
So it is possible that we could reach AI just by randomly permuting connections and weights. Of course it's more likely we would intelligently set (or "evolve") these connections and weights, but this lets us put an upper bound on the computation/time needed.
I don't think this was the case. Yes, there is evolution, but it is not random. Actually, most of the molecules sitting around did not evolve into intelligence.
In the case of our evolution, we had natural selection.
In the case of AI, we have artificial selection (selection made by humans), and even if we consider ourselves smart enough to do this, we cannot prove that we are able to make it happen (choosing the correct AIs/algorithms to survive) until it happens.
Maybe I can't express this clearly enough, but the advantage natural evolution has over artificial evolution is the huge number of "experiments" - meaning it had enough time to make a lot of small changes until something worked.
I think you have that backwards. Natural evolution is absurdly slow because it takes a very long time to cycle through generations of animals, whereas genetic programming on a computer can evolve algorithms billions of times a day, because computers are much faster at cycling through possibilities.
> meaning it had time enough to do a lot of small changes until something worked.
Computers can do it faster.
And I agree with you that AI can be much faster.
I still think artificial selection can be influenced by us humans, so we might introduce flaws into the system from the beginning.
Of course, AI could maybe learn to identify those flaws. But maybe not.
Like the case where, looking from inside a system, one cannot see what the system really is, and so cannot fix it.
Of course, what I'm stating are just hypotheses, nothing proven, and I think they are not yet falsifiable.
Well, not unless you have a couple billion years to sit around.
And there's at least some evidence that it's even worse than that -- intelligence does not appear to be common in the universe. So it might well be a couple of billion years times a million planets' worth of organic molecules and water (or a hundred million planets, or a billion planets).
Don't tell that to the deep learning cult! They'll have to think about how it all works!
I have not met any actual deep learning researchers who think they are creating AGI. I think that it's non-experts who write about deep learning research who think that deep learning is creating AGI.
And I'm not even sure I would call it "parameter fitting". I'd call it, "continuous circuit searching via gradient descent", because that's all you're doing: searching for a nice analog circuit for your task within a large, continuous space of such circuits.
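"Continuous circuit searching via gradient descent" can be shown in its barest form. This toy sketch (my own, not from the thread) fits the two parameters of a tiny linear "circuit" y = w*x + b by repeatedly nudging each parameter downhill on the squared error:

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # data generated by y = 2x + 1

w, b = 0.0, 0.0   # start somewhere in the space of circuits
lr = 0.05         # learning rate: step size of the search

for _ in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges near w=2, b=1
```

Deep learning is this same loop with millions of parameters instead of two: a search through a continuous space of circuits, not anything that obviously scales to general intelligence.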
Unless intelligence itself conforms to natural principles, unless it "cuts with the grain" of nature's preexisting laws.
Which, frankly, it does. Information theory and thermodynamics are a thing.
Further, to get better you need to optimize for something. You might get an AI that's really good at writing fiction, but how do you measure that and write one that's even better at writing fiction?
The thing about machinery is that it scales. If you have one AI that's really good at writing fiction, you can replicate it and generate all the fiction you like. Not true of human authors.
I do expect there will be other bottlenecks after think time is solved, but even without intelligence much beyond human, the ability to scale in other directions is pretty scary.
You're assuming that it's possible to do efficient knowledge transfer without losing general intelligence.
Maybe in order to have general intelligence about something, you need to learn it from scratch, over time, rather than just "copying" the information.
Another problem is that you're assuming that you can build a powerful enough computer, and still retain the possibility to have them be programmable. That is, that you have easily programmable memory cells within the computer which can change its behavior. But to have easily programmable memory cells, they need space and infrastructure to access them, and they will interfere with critical timing paths. That's why if you build a special purpose ASIC for a task, it will always be faster than a CPU or GPU.
Maybe all the things we consider useless for intelligence, like playing, sleeping, and exercising, are actually necessary. We've certainly started to see that those who focus too much on studying by cramming don't necessarily become smarter.
You can put me in the "skeptics" camp when it comes to superhuman intelligence. It may be possible, and it's fun to discuss, but it seems to me that the ones who fuss about it are making an incredible number of assumptions.
So it seems like the question is whether computers as we know them can do general intelligence at all? If they can, it will be easy to clone.
If they can't, then it would imply that general intelligence is something more than information processing as we know it; perhaps some kind of dualism?
It's not a software or hardware problem; it's a data problem, as it's not obvious what part of neural net A encodes fir and what part encodes cats, or how you map that onto B's encoding of fir and cats while connecting them.
Now, AI is not necessarily going to be neural nets, but it's also not necessarily understandable what all the little bits do, just that they work.
I'll be honest, I didn't read the arguments in detail (since they're just rebutting strawman arguments it hardly seemed worthwhile), but I was sort of surprised at how poorly reasoned the arguments were even for the parts I agree with.
But there's nothing to stop centralized computing from having distributed terminal-like i/o devices, cameras and waldos.
A cognitive argument for distribution is that a diversity of ideas, developed somewhat in isolation, using their own unique local observations and actions, is more likely to innovate. Many points of view will see more. However, this can be simulated with internal divisions.
What EVERYONE is missing, though, is that enhanced human intelligence is inevitable, and it will be vastly more "intelligent" than superhuman AI. Though as human intelligence increases, so will AI, naturally. I think enhanced human intelligence will have immeasurably greater impact than any conceivable technology, since it lets us engineer who we are. What is a world like that's inhabited by 7 billion Leonardos?