Just want to point out that this is true; however, these limits go astronomically high.
> what evidence do we have that the limit is not us?
We can measure the speed impulses travel through neurons, and compare that to, say, the speed of electrical impulses through silicon or light through fiber.
We can find the maximum head-size that fits through a vaginal canal, or the maximum metabolic rate a human body could support, and try to determine if these factors imposed any limitations on intelligence during human evolution.
We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs, and see if a pattern emerges where the artificial analogs tend to have similar limitations as their biological counterparts.
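On the first point, the raw numbers are striking even with rough ballpark figures (both speeds below are my own assumed ballpark values, not measurements from the thread):

```python
# rough signal-speed comparison; both figures are ballpark assumptions
neuron_speed = 120.0  # m/s, upper end for fast myelinated axons
fiber_speed = 2.0e8   # m/s, light in optical fiber (roughly 2/3 of c)

ratio = fiber_speed / neuron_speed
print(f"signals in fiber travel roughly {ratio:,.0f}x faster than in neurons")
```

That's around six orders of magnitude, before even considering switching speeds.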
There's no theoretical limit for temperature, and we believe spacetime could be infinite even though the observable universe is not. Nevertheless, this is a very silly argument: if the limit is finite but absurdly high, that's good enough for the purpose.
For all intents and purposes, however, it is preposterously enormous.
I suppose when the particles reach lightspeed?
There does, however, seem to be a quite widely assumed maximum temperature: the Planck temperature of about 1.417×10^32 Kelvin.
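For what it's worth, that figure falls straight out of fundamental constants; a quick sanity check using CODATA values:

```python
from math import sqrt

# CODATA constant values (small revisions wouldn't change the result much)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23      # Boltzmann constant, J/K

# Planck temperature: T_P = sqrt(hbar * c^5 / (G * k_B^2))
planck_temp = sqrt(HBAR * C**5 / (G * K_B**2))
print(f"{planck_temp:.3e} K")  # ≈ 1.417e32 K
```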
What are you heating if it's a vacuum?
How can you heat nothing?
The difficulty here is people don't often specify if they mean heat in the form of kinetic energy or radiation.
Much like how people say things like "releasing energy" without specifying the form of energy.
But what's more, if they both have guns then everything's about even. Strength doesn't really help if you've been shot, and most high-schoolers don't have much experience shooting guns.
Actually a lot do but probably not so much in the places that readers/commenters here live.
Computer control of farming decisions such as watering and fertilising. Link that in with insolation and wind prediction to minimise waste or maximise productivity.
This happens in the cloud thanks to insolation prediction being tied into wide-area networks of solar farms.
More such networks evolve independently, then one day some bright spark realises that prediction and control in domain X can be combined with control and prediction in domain Y to produce greater efficiencies.
Linking reproductive health and food production to maximise water savings, for example.
Next thing you know the computers decide that human population is best maintained in the order of hundreds of millions rather than billions.
And then we find out that Terminator was actually propaganda from the human rebels who didn't like the computers telling them not to breed, and the nuclear war was Skynet protecting its desired humans from the rebels, not protecting itself.
I'm not sure if you're including software analysis in your notion of "based on the available data", but if not, then this would absolutely not be true - unaided humans, even veteran growers who've been managing the same fields for decades, do not make anywhere close to optimal decisions. They make good ones, generally, and can be long-term profitable with relative ease, but it's absolutely not even close to what you get when you bring intelligent software into the mix.
And it should be something they want.
I suppose they also aren't a hardware manufacturer? Or a logistics company?
By using existing political structures and a mix of rhetoric, they have managed to get a lot more power than a typical person.
* Someone makes an AI which scores only 85 on IQ tests, but which can run in real time on common consumer hardware.
* It gets installed on every phone, PC, and tablet.
* Someone, possibly the AI itself if it is the type to generate its own goals, sets all instances of the AI the goal of convincing their user of some belief or policy or whatever.
* Everyone with IQ<85 is convinced, nobody with an IQ>85 is convinced.
* 16% of the population believe whatever it is the goal-setting mind wanted them to believe, at marginal cost. An election is swung, a bank suffers a run, a pyramid scheme makes millions, whatever it is.
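The 16% figure in the scenario above follows from the usual IQ convention (a normal distribution with mean 100 and standard deviation 15); a quick check using only the standard library:

```python
from math import erf, sqrt

def iq_fraction_below(iq, mean=100.0, sd=15.0):
    # cumulative normal distribution via the error function
    return 0.5 * (1 + erf((iq - mean) / (sd * sqrt(2))))

print(f"{iq_fraction_below(85):.1%}")  # → about 15.9%
```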
especially if the human-like intelligence doesn't need to pay rent, fear the police, follow laws, sleep, eat ... etc
But the point about food and rent is very economically important if they ever get to be cheaper than human labour.
The AI, when it becomes popular, is going to be a business-critical system. Potentially even intertwined with life-critical systems like self-driving cars and self-flying planes. If it isn't, then in "hidden malevolent hacker" phase it can just install trojans to make those systems dependent on it.
Petrol production is hazardous to our future. Can you "just turn it off"? Yes; shut down a fairly small number of refineries and the system will run dry within a week. CO2 emissions problem solved! It's just that, you know, people were depending on being able to get food delivered by truck.
Obviously the first malevolent AI is going to hide itself behind an offshore opaque-ownership company and hire a good set of PR and lobbying firms. Can you turn off Skynet when it's reported as "critical to national security" in all the news sources?
That seems a bit contradictory to me. And if not, highly contrived and unlikely.
That's not necessarily true at all. Let's say we come up with a way to model neurons in silicon like a very complex neural net to the point of emergent consciousness. We might not be able to look into memory location 0xFFA79348 and get the "human value" int to see if it is high enough.
It could very well be that we don't really understand the weighting in the net or the implications of those weights because it's too complicated. Even current neural nets of non-trivial complexity can be impenetrable to evaluate outside of the net itself.
I think that you overestimate our ability to understand what we can create. Even Deep Neural Net researchers consider the neural nets to be a black box that they don't really understand. Further... unlike human beings who are locked into their genetics and their neuronal pathways - an AI will have the instant ability to improve itself. It will be able to rewrite its own software faster than we can comprehend. In the hours from the time the AI would be created until a researcher even might realize what he had created, the AI will effectively have had years or tens of years of contemplation time and self-enhancement time. If the researcher goes to sleep with plans to work with the new AI the next morning, the AI will effectively have hundreds of years of time to plan and improve itself. If it has any kind of Internet access, it will be able to take over systems around the world, possibly parallelizing its own cognitive abilities and possibly backing up its consciousness in case the researcher decided to eventually wipe it.
The genie will be out of the bottle so fast that human beings will almost assuredly be caught flat-footed.
I always shake my head when I see sci-fi shows where the humans (Stargate, Star Trek, etc.) come up against advanced AI opponents. It's only through control of the script and plot contrivances that the humans have a chance. People just won't be able to compare to thinking and self-modifying AI.
This is an argument often deployed against "corporations" or "capitalism", both of which aren't quite intelligent but are definitely systems that we're dependent on and capable of disregarding inconvenient lives.
The AI doesn't have to hate humans, it just has to find that they get in the way of its quarterly targets.
What we don't know can hurt us.
Just like I'm not 'concealing my actions' by putting a blindfold over your eyes. My action is "putting a blindfold over your eyes", which must be very apparent for the entire action to have any value. Now if I hide that action by putting on an invisible blindfold, then you've got nothing to complain about anyway.
When the US government unleashed Stuxnet, did it announce what it was going to do?
Proof is hard, good enough gets deployed.
This assumes that all the processing that human brain does is by sending signals across the neurons. Which we really don't know. Right?
>We can look at other evolved/biological capabilities, like swimming or flying, and compare them to state-of-the-art artificial analogs..
Do we yet have an airplane that can come close to the efficiency of even the clumsiest of birds?
Regardless of efficiency, we have certainly exceeded the range, cargo capacity, top speed, and passenger space of most birds.
Similarly, even if signal propagation in the brain is not understood, we could conceivably beat the brain in other metrics.
This is the core point. Our technology may still suck compared to nature, but we've been beating nature consistently since before humans learned to write, because we don't have the same set of constraints that natural biological systems have.
So just like we can make inefficient flying machines that beat nature on most other conceivable metrics, we could presumably build an energy-inefficient but smarter mind at some point.
> we could presumably build an energy-inefficient but smarter mind at some point.
So this smarter mind will probably be our own, but that would also be achieved by some sort of artificial means.
> So this smarter mind will probably be our own
It's plausible for me too, but not because we can't do stuff outside what evolution did - just because it's usually easier to hack an existing system than to build a new one from scratch.
Sure it does, because physical laws are a fundamental part of Nature.
Sure, the very first calculators beat the brain in some metrics. But we are not talking about those metrics here.
It doesn't need to assume all, just some non-trivial amount. I don't know the ratio of inter-neuron to intra-neuron processing in brains, or if such a metric even exists. But it seems theoretically possible at least to measure the computing power of a neuron in isolation, and see if artificial analogs exist (or could be built) that exceed that by some margin. Ultimately we're just looking for avenues of evidence to explore.
The author makes a good point about intelligence not being one-dimensional, and I think the same thing holds for flying ability. Efficiency is one dimension, but you also have speed, range and so forth.
Head sizes have been larger in human history and some human relatives with larger brains died out. Energy consumption on the other hand is a limiting factor for every computational device and energy efficiency tends to be a key category where Nature still wins. In particular, the liver and enzymes of various plants and microbes can perform chemical reactions in water at ambient temperature that we can't do at all; resilin is still a more efficient elastic material than any synthetic elastomer; artificial piezoelectric muscles have less than a tenth the energy efficiency of real ones, and of course lots of animals are able to run around in polar and desert regions we still can't traverse because of the fuel and material costs etc.
One particularly revealing example is to calculate the ratio between:
* the performance per watt of the most efficient supercomputer (7 GFlops/watt) vs the human brain (38 petaflops / 20 watts = 1.9 petaflops per watt)
* the performance per watt of the human brain vs the Landauer limit at body temperature (don't remember)
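Filling in the missing number with a rough back-of-the-envelope calculation (this treats a synaptic event as loosely comparable to one irreversible bit operation, which is a big assumption; the 38 PFLOPS brain estimate is the one from the bullet above):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
T_BODY = 310.0      # body temperature, K

super_flops_per_watt = 7e9         # ~7 GFLOPS/W supercomputer figure above
brain_flops_per_watt = 38e15 / 20  # assumed 38 PFLOPS estimate over 20 W

# Landauer limit: minimum energy per irreversible bit operation at temperature T
landauer_j_per_bit = K_B * T_BODY * math.log(2)
landauer_bits_per_watt = 1 / landauer_j_per_bit

brain_vs_super = brain_flops_per_watt / super_flops_per_watt
landauer_vs_brain = landauer_bits_per_watt / brain_flops_per_watt

print(f"brain vs supercomputer: ~{brain_vs_super:.1e}")      # ~2.7e5
print(f"Landauer limit vs brain: ~{landauer_vs_brain:.1e}")  # ~1.8e5
```

Interestingly, by this crude measure the brain sits roughly as far above our best supercomputers as it sits below the thermodynamic limit.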
Encouraging, but AI wouldn't be constrained to an energy budget similar to a human. At the rates you describe, beating a human would take the output of only a single wind turbine. https://en.m.wikipedia.org/wiki/Orders_of_magnitude_(power)
This is just such manifestly wrongheaded reasoning as to be almost laughable.
I think there's a substantially better argument against this notion than to actually try to find evidence against the claim. I prefer it because it reveals the total wrong-headedness of this style of reasoning. The argument is to turn it around and say "what evidence do we have that the limit is us?" A human level of intelligence seems like an incredibly and oddly specific level to be the maximum permissible by physics. And oddly specific claims need to be treated with extreme skepticism, because they carry high entropy, in violation of Occam's razor.
If you want to know what x equals, you don't pick a number, say, 2, assume x=2, and ask people to provide you with reasons why it shouldn't be 2. There's an infinity of numbers, and the odds are just overwhelmingly against x happening to be 2. You start with the assumption that x could be literally any of the infinity of numbers, and start looking for evidence which narrows it down.
More concretely, if you want to know the physical limit to intelligence, you start with the assumption that it could be anything, constrained by the evidence that's currently available. In this case, the evidence is that human intelligence provides a lower bound for the maximum physically allowable intelligence.
Positing that our intelligence is the maximum possible becomes even more suspect when you consider the process by which our intelligence came about - evolution.
Firstly, evolution is a process which tends towards reproductive fitness, not necessarily towards intelligence. Intelligence is selected for to the extent that it makes something more likely to reproduce, and no more, and it probably isn't the case that more smarts universally leads to more reproduction. In fact, most modern research suggests the opposite, with people with higher IQs having less sex.
Secondly, we know that natural selection gets stuck in local optima. If your species is on top of a 10 smart-points hill, and in order to get to get to the 1000 smart-points hill over there your species has to mutate in a direction that makes it stupider before it gets smarter, your species is going to stay on the 10 smart-points hill. This is because individuals that mutate stupider in the direction of the 1000-point hill will be selected against.
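The local-optimum trap is easy to demonstrate with a toy hill climber; here's a sketch on a made-up two-peak fitness landscape (the landscape and mutation size are arbitrary choices for illustration):

```python
import random

def fitness(x):
    # toy landscape: a 10-point hill at x=0 and a 1000-point hill at x=50,
    # separated by a dead-flat valley of zero fitness
    return max(10 - abs(x), 0) + max(1000 - 100 * abs(x - 50), 0)

random.seed(0)
x = 0.0
for _ in range(100_000):
    mutant = x + random.gauss(0, 1)   # small random mutation
    if fitness(mutant) > fitness(x):  # selection keeps only improvements
        x = mutant

print(round(x, 3), round(fitness(x), 3))  # stuck near the 10-point local peak
```

Even after 100,000 generations the climber never crosses the valley, because every step toward the 1000-point hill is "stupider" first and gets selected against.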
Thirdly, natural selection just isn't a very powerful optimization process. It works by random mutation, and it takes a lifetime to make a minuscule change which may or may not be in the right direction. Even if natural selection was solely optimizing for intelligence, and it could somehow avoid local optima, there would be no particular reason to believe that the world had existed long enough for the smartest possible organism to evolve.
Fourthly, as _greim_ touches upon, evolution has a very limited medium to work with. All it can do is change the sequence of DNA base pairs, and produce inefficient meat machines that run on respiration. There are all sorts of complex arrangements of matter which might be intelligent, and biological ones are only a tiny subset of those possible arrangements. The odds of the smartest system physically possible residing in that little biological subset are tiny to begin with, and only get smaller as you consider the more direct arguments against biological systems mentioned above, like sluggish neural transmissions, etc.
That being said, I think his article shows extreme arrogance for one simple reason: To suppose that superhuman AI (AI smarter than us) won't exist is roughly the equivalent of saying that humans are at the limit on the spectrum of intelligence. Really? Nothing will ever be smarter than us?? Highly doubtful.
That should stand on its own, but I have other critiques. For instance, why does silicon have to be assumed? Why not germanium or graphite, or something else? I have little faith that a CPU circa 2050 will be built exclusively on silicon. By 2100, no way.
Second, there is a simple definition of intelligence that is applicable to many forms: intelligence is the ability to recognize patterns and make accurate judgements / predictions based on previously seen patterns. The higher the accuracy or the more complicated the pattern, the higher the intelligence.
My final point of contention is the idea that AI must emulate human thinking. Why? Maybe human thinking sucks. Maybe Dolphins have much better intelligence, but due to a lack of opposable thumbs, they don't rule the world like we do. And lest you think that less intelligent species can destroy others, could you really doubt that roaches and ants will be extinct before us?
Which is exactly what Kelly doesn't say. He says that the "smarter" concept is ill defined, and that our current fantasies of some universally superior AI galloping onto the scene and taking over everything may be just that - fantasies.
Which isn't a contradiction like he claims it is. It just means that there are many different ways that a future AI can be smarter than us. That intelligence could be multi-dimensional.
But guess what: we can easily take that multi-dimensional input and find a formula that reduces it to a single scalar value based on our practical valuation of these forms of intelligence (almost like an intelligence 'utility function' from economics), and problem solved. We're right back to a single dimension for ranking intelligence.
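A toy sketch of that reduction, with entirely made-up dimensions, weights, and scores:

```python
# hypothetical dimensions and weights, purely for illustration
weights = {"math": 0.4, "language": 0.3, "planning": 0.2, "perception": 0.1}

def utility(profile):
    # collapse a multi-dimensional intelligence profile into one scalar
    return sum(weights[dim] * score for dim, score in profile.items())

human = {"math": 60, "language": 90, "planning": 80, "perception": 95}
ai = {"math": 99, "language": 85, "planning": 70, "perception": 40}

print(utility(human), utility(ai))  # one scalar per mind, ready to rank
```

Of course the weights encode a value judgement, but that's exactly the point: pick any valuation you care about and the multi-dimensional objection dissolves into an ordinary ranking.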
It was a really weak argument he put forward.
Another weak argument was the branching/fan pattern of various species. Yes, all living species are at the peak of evolution for their environment, but they weren't all pressured to evolve more intelligence. Some evolved strength, speed, or flight suited to their environment.
If instead natural selection began selecting only for intelligence (like humans searching for AGI will), then you could definitely rank all animals linearly on a single path of intelligence.
A condensed way of saying precisely what Kelly is saying in the article. Allowing for the very real possibility that I am simply too dumb and not grasping your point.
but they weren't all pressured to evolve more intelligence
And it isn't claimed that they were. General evolution is used as an example of potential patterns in evolution of various intelligences.
> Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
This is poor reasoning. The fact that intelligence is multi-dimensional has no bearing on our ability to declare something smarter than us; it isn't at all meaningless. Yet because of this he claims that there will be no super-human AI.
Via analogy, he says: "you can't compare two football players because one may be stronger, while another is faster." So the concept of "better" is meaningless, and no player can be declared better.
My response is that's absurd. A simple counter-example: a single player can be both stronger and faster, and thus clearly better.
My original comment addressed that specific case.
Why a traditional computing architecture in the first place? Assuming you throw ethics out the window, the fastest path to superintelligence may be via biological material.
When considering logistics however, there may be. Digital and biological approaches might ultimately produce the same mind in the end, but their methods of getting there may differ considerably due to the underlying medium. That's where I suspect the ethical differences may lie.
Who knows, digital could end up being worse somehow—I certainly wouldn't want to be a HAL 9000 unit undergoing a shutdown procedure. Especially when I can just pay for singing lessons.
Human brains are incredibly small, a few pounds of matter. Any bigger and your mother would be killed giving birth, or you would take 10x as long to grow up. They are incredibly energy constrained, using only a few watts of power, because any more and you would starve to death. They are incredibly slow and energy inefficient; communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy. And they are not at all compact - neurons are enormous and filled with tons of junk that isn't used for computation. Compare that to our transistor technology, which is approaching the limits of physics and is built on an atom-by-atom scale.
That's just the hardware specs of the human computer. The software is hardly better. There are just more unknowns because we haven't finished reverse engineering it (but we are getting there, slowly.)
But beyond that, the human brain evolved to be good at surviving on the savannahs of Africa. We didn't evolve to be good at mathematics, or science, or engineering. It's really remarkable that our brains are capable of such things at all! We have terrible weaknesses in these areas, for instance a very limited working memory. We don't realize how bad we are, simply because we have nothing else to compare ourselves to.
Consider how even today, relatively primitive AIs are vastly superior to humans at games like chess. Human brains also didn't evolve to be good at chess after all. Even simple algorithms designed specifically for this game easily mop up humans. And play at a level of strategy far above what even the best human players can comprehend.
Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.
Human brains are energy inefficient? Well, that's a first ;)
"In 1990, the legendary Caltech engineer Carver Mead correctly predicted that our present-day computers would use ten million times more energy for a single instruction than the brain uses for a synaptic activation."
"Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"
The main difference is computer technology is designed to be very general purpose. The brain is more like an ASIC that's hardwired to run one specific algorithm. GPUs are also computing 16 or more bits of precision, when real neurons are very low precision. There are some other differences, like how real brains are incredibly sparse and most of the synapses at any given time are dormant and not using much energy. They are also very sparsely connected to each other. While our current NNs are very dense and need to spend energy to compute every single connection each cycle.
> "Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"
Let's say AlphaGo trained for a year; that would be 1 MWyr of energy consumed. And let's assume that Lee Se-dol's brain consumed 20 W over 34 years of his life doing nothing but working on Go; that would be 680 Wyr, still a factor 1000-ish smaller.
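Spelling out that arithmetic with the same assumed figures:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# assumed figures from the thread: ~1 MW for one year of training,
# vs a 20 W brain working on Go for 34 years
alphago_joules = 1e6 * 1 * SECONDS_PER_YEAR
brain_joules = 20 * 34 * SECONDS_PER_YEAR

print(round(alphago_joules / brain_joules))  # → 1471, i.e. "a factor 1000-ish"
```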
I mean, in principle, if you had two numbers, "human energy consumption from learning Go" and "AlphaGo energy consumption from learning Go", you could compare them. But in practice there's no way to come up with such numbers, so what's the point of comparing apples and orangutans?
You can expect a similar thing with Go at some point.
I'm sure any dollar store calculator spends way less energy performing long division than the average human.
And for another, calculation != AI, far from it actually.
Two, AI is applied statistics. What do you think AI is?
>I'm sure any dollar store calculator spends way less energy performing long division than the average human
That's the comment.
A calculator is a one role device, with exactly specified rules.
Similarly, with training you can too. You don't need to be special, other than being practiced, which is a fair requirement for a human being.
Here is a human being who could out perform it:
>In 1977, at Southern Methodist University, she gave the 23rd root of a 201-digit number in 50 seconds. Her answer—546,372,891—was confirmed by calculations done at the US Bureau of Standards by the UNIVAC 1101 computer, for which a special program had to be written to perform such a large calculation.
She could easily out-perform calculators because she never needed time to key in the commands (she only needed to hear the problem to solve it).
If we exclude that restriction, assume the commands magically float into the calculator, and ensure the problem is small enough to fit the calculator's limits, then yes, if those arbitrary conditions are met the calculator can out-perform her brain.
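The 201-digit claim checks out with exact integer arithmetic:

```python
# checking the quoted claim: 546,372,891 as the 23rd root of a 201-digit number
root = 546_372_891
power = root ** 23

print(len(str(power)))  # → 201 digits
```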
Which is precisely the type of “cows are round spheres” thinking that’s being decried in the article.
People can and regularly do out-perform calculators in speed, energy and complexity of computation.
Do note that calculators weren't allowed as exam tools in a lot of countries until a decade or so ago. Students learnt mental math techniques that have been known since ancient times (think Greece).
For a human brain the answer isn’t even calculation, it becomes pattern recognition. The square root of 25 is 5, which takes about the same neural load as it takes to recognize a letter.
The calculation you provided is harder, but that's a function of lack of training/practice, not complexity.
AI is not in the realm of what a calculator can pull off, is what I meant to say by the compute part.
edit: I tried your computation on a store calculator, it's beyond its ability to calculate (0.0000000027)
But you're right to say calculators are single purpose devices and that's magically inserting the question.
So I downloaded sklearn, which contains a set of labelled hand-written digits.
It takes about 0.17-0.2 seconds for my laptop to learn to read numbers, from scratch, and thereafter it can read digits at a rate of about 8,200 per second.
For reference, "a blink of an eye" is 0.1-0.4 seconds depending on who you ask.
Dollar store calculator? I'd never have said that myself because I know calculators are rubbish. But in the context of "AI: Will it ever beat humans?" hell yes, it will and in many cases it already does.
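For anyone who wants to reproduce this, a sketch along those lines (the classifier choice and exact timings are my assumptions, not necessarily what the parent used, and numbers will vary by machine):

```python
import time

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1800 labelled 8x8 hand-written digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

start = time.perf_counter()
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
train_time = time.perf_counter() - start

start = time.perf_counter()
clf.predict(X_test)
rate = len(X_test) / (time.perf_counter() - start)

print(f"trained in {train_time:.2f}s, reads ~{rate:.0f} digits/s, "
      f"accuracy {clf.score(X_test, y_test):.2f}")
```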
Granted, these are people at the top of their game, so may not qualify as "normal" per your argument, but this is to illustrate that the limits of numerical calculation in "normal" people may be the algorithms we are taught rather than the computational capacity of our brains.
I'm of the opinion that that's an assumption being made. An article I read at one point supposed that our brains could have, for example, some as-yet-undiscovered link to quantum mechanics that causes self-awareness. The point was not so much the mechanism, but that we can't yet rule out the existence of such a mechanism that our brain has but our silicon currently doesn't, because we have no real idea what consciousness is, and we've not constructed an AI that proves no such mechanism exists. If consciousness is intelligence, this could matter more (but this could also be another assumption).
(Unfortunately, I cannot find the article.)
(The article also went on to note that we can copy computer programs, but not brains, and wondered if perhaps there is some relation to the previous problem. It was also advocating that we need a testable hypothesis around consciousness; to either prove or disprove the question "is it possible to clone a brain?" would be an extraordinary discovery either way. I found it interesting.)
> Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.
I'm just wishing for an automatic soap dispenser that can reliably sense my hand.
Quantum-based self-awareness is a cop-out IMO, an open way to claim that it comes from God. Well, if that's true and it's beyond our reach, then we'll never build a self-aware AI ourselves. But assuming something is impossible doesn't help moving forward...
Not so much an assumption as a reasonable guess. Is it logically possible that there's something magical about human brains? Yes.
But there are countless examples of things we thought were magical historically, and, as far as I know, exactly zero that we've ended up concluding actually were magical, and a large number that we're pretty confident aren't at this point.
The smart money here is on "no, the human brain isn't magical."
If we've made a science out of it, no matter how advanced it is, it's not magic; e.g. if telekinesis were possible, it would have a scientific explanation and be a natural power. Which bears some similarity to the way things are called "artificial intelligence" while machines can't yet do them, only to be reduced to "trivial" when someone makes a machine do them: chess, Go, having a large vocabulary or knowing many languages, being good at arithmetic or being able to do calculus at all, writing music or news stories, trading on the stock market — all things that used to be synonymous with genius, yet all done by machines we don't call intelligent.
That's not an article but a whole book:
I just want to point out that some recent research indicates that this might not actually be true. Single neurons may be more complex than we thought since researchers managed to extract significant and complex functionality from single neurons (!). Not to mention glia seem to be important for cognition, and not just glue as we thought... So perhaps not as wasteful and kludgy as we might think :)
You can think of intelligence as a couple of dimensions rather than just one dimension as the author is arguing. However I don't agree with his/her logic.
You can think of the brain as a massively parallel computer. Intelligence can be gauged by total cores, memory and CPU per core, power usage and network bandwidth between cores. Sure different configurations will give different strengths but something optimized atomically for performance will be faster than a mushy brain.
Now imagine a billion of these little pods placed all over the planet, with amazing bandwidth between the computers akin to our internet.
Worst case: they think humans are the enemy and out think us to extinction.
Best case: studying intelligence helps us figure out how brains work. This helps us figure out how DNA nano-manufacturing works. We figure out brain-to-brain and brain-to-AI networking. Now all humans are collectively a supercomputer, able to share knowledge and abilities. Maybe this is extended so animals are also included, and Earth becomes a Pandora-like planet where every living thing is connected and they all live in a harmonious, resource-optimized world where AI tries to maximize the happiness of every living being.
The hype chamber in SV is on overdrive on AI at this point.
The singularity-AI concept has gone from the neat sci-fi idea it was to being treated as a serious possibility. This is absurd: it's the modern version of Malthus, and a community of people who pride themselves on their reason should apply a basic sniff test.
The author has it correct - human brains themselves are significantly impressive on a weight/power/capability scale.
But forget all of the intelligence debate, consider the things that people magically ignore: emotions.
Take a look at how people who are depressed function. They measure as being more aware (better assessment of their surroundings, relations, and themselves) than people who are not.
At the same time, they lack motivation, or desire - there is a lack of impulsive force to move forward.
Intelligence doesn't give human beings purpose. Emotion does.
This isn't feel-good psychobabble; it's a fundamental part of good mind/body housekeeping which a huge chunk of SV follows every day to reach "peak performance".
How are you going to create a general purpose AI which has any motive force?
It's actually MORE LIKELY that if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.
So would an explicit goal function, which is how we already give "purpose" to algorithms.
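For concreteness, here's a toy sketch of an explicit goal function driving a trivial optimizer. The goal ("get x near 3"), the names, and the numbers are all invented for the example; no emotion required, just a number to climb:

```python
import random

random.seed(0)

def goal(x):
    # The explicitly coded "purpose": higher is better, peaking at x = 3.
    return -(x - 3.0) ** 2

def hill_climb(steps=10_000, step_size=0.1):
    # A dumb optimizer: propose a random nudge, keep it if the goal improves.
    x = 0.0
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if goal(candidate) > goal(x):
            x = candidate
    return x

best = hill_climb()  # ends up very close to 3.0
```

The "motive force" here is nothing but the comparison in the `if` statement.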
> How are you going to create a general purpose AI which has any motive force?
In any of countless other ways. What makes you think that emotions are necessary to create a powerful optimization process?
The "motive force" is mostly a solved problem for now; we can code that explicitly (the issue is figuring out the right goal and how to write it down). AI development is mostly about the way it works, not why it should do the work.
No offense meant to you personally, but I find that most of the comments about "overdriven AI dangers focus in SV" reveal that their authors don't have a fucking clue what the issue is about, and never spent time actually reading up on the reasoning behind the AI X-risk potential.
I'll give an ultra-compressed super-TLDR of that reasoning for benefit of future conversations. It goes like this:
- intelligence is a super-strong optimization process; it doesn't necessarily have to look the way humans think (humans generally suck at reasoning, which is well established; see: cognitive biases, probability theory, decision theory, etc.)
- intelligence is an independent factor from values/goals, a mind can have any combination of the two - i.e. just because it's smart, it doesn't mean it will develop the same morality humans do, or any morality whatsoever; see https://wiki.lesswrong.com/wiki/Orthogonality_thesis
- combining the two, the danger of super-human AI is not something that's hostile to us - it's something that's indifferent about us, and is more powerful than us, the same way we don't give a second thought about e.g. ants
That's interesting. Do you have a reference for that?
This is me if I were an AI.
1. Bootstrap. In order to really take off, AI needs to be able to build better AI.
2. AI needs to be adaptable. Any human Go player can play on a 19x20 board. AlphaGo (a program that beats top professionals 60:0 on a 19x19 board) could not even make a legal move.
3. AI should be resilient. Right now it runs on hardware made by humans and blindly relies on that hardware working perfectly all the time.
There is nothing magical about anything. There are, however, a bunch of unknowns, and a lot of knowns that point in directions away from singularity babble.
This is exactly the point of the article - that we aren't headed for a singularity, but towards a "Cambrian explosion".
That may not even be true; we're just not smart enough to make the call one way or the other. Dolphins come to mind, as do several other creatures (squid, apes). The fact that we can't rule it out is in a way proof that maybe we're not as smart as we would like to think we are.
Same with the solar system. We can create simulated solar systems. We will probably be able to make simulated brains. What is your point?
Alright, assuming the first AI even is a simulated brain (I don't see why we can't invent intelligence from first principles first.) We could definitely tweak it. We know of drugs like amphetamines that enhance human ability. That's like Stone Age tech compared to what's possible if we had unlimited control over every neuron in the entire brain. Not to mention just adding more neurons and making a larger brain. This is just touching on what's possible, I could go on for ages on possible improvements.
In everyday life, we treat other minds more like "maps" than like "nuclear reactions". We interact with nuclear reactions by getting radiation sickness and dying. We interact with people through more indirect channels. A simulation could be at the other end of the channel and we wouldn't know. Maybe he wouldn't know, either!
We don't get electrocuted when our interlocutor's neurons are firing; the effects on us are only through his external behaviour. It's the abstract informational flow that seems to matter.
That sounds like a really interesting experiment. Have you published your data?
A nitpick: the neurons actually do use electricity to send signals and (looking at the neuronal model in its history¹) it actually evolved to send signals along as fast as it could. The chemical component, AFAIK, is the physical implementation for memory, so this is kind of the storage class regarding operation performance. Overall though, I agree that our cerebral organ may not be the apogee of computational performance.
¹ As far as I remember, dinosaur neurons didn't have myelin, which for some large animals caused non-trivial sensory delay.
What other intelligence evolved, and in what sense is the human brain first in evolution?
We've seen no other creatures on Earth being able to grow beyond using basic tools.
But this also somewhat implies (which is GP's point) that we're close to the dumbest possible mind that evolution could make that could still develop science. Or think of it as a crude MVP that ended up being a production system.
To claim something as first, there must be others that followed it. If there is no second, there is no first. Is there a second intelligence that invented science, or are we the only one?
When somebody comes up with a reasonable, detailed model of how my memories are stored (just to name an example), I shall once again start paying close attention to the whole AI field.
> Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new.
I'm not a historian, but I have read that some scientific discoveries are made through pure logic. Einstein and relativity come to mind, as he was not an empiricist. So perhaps there is some hope that AI can lead to scientific discoveries without experimentation.
>So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?
The idea that humans could, just by chance, be pushing the limits of intelligence strikes me as silly.
Imagining that there will be near-human IQ AIs that think orders of magnitude more quickly than humans puts the cart before the horse, as well.
The actual intelligence may be the same as (or a little higher than) average human intelligence, but the effect of the higher speed would likely be a measurable difference.
So when you say, "Here's how we'll get superhuman AI: We'll network together bunches of 80 IQ AI's that think 100x faster than a human," it's kind of assuming its own solution.
I think networks of AIs can become super powerful even without being nearly as smart as individual humans. AI will only need to approach certain aspects of human IQ; it can still be much dumber than us in most ways. Most of human progress has been built up through vague interactions and feedback loops via markets, religions, and nations.
Adding people to a late project makes it later.
Not all projects are beholden to the rules of the mythical man month.
And even for those that are the primary reason they are is basically the lack of a fast, efficient way to transfer built up institutional knowledge, which may be far less of a factor for AI collaboration.
If I've been working on a project for the past year as the sole developer and I could copy a snapshot of my brain to a portion of your brain that you can access without interrupting me, adding you on as a collaborator is very unlikely to make the project later and very likely to make it go much faster as you could very quickly 'hit the ground running'.
A lot of the problem with the current limits of human intelligence is really bandwidth problems with getting information transferred in and out of our meat brains effectively. Our bandwidth for knowledge import/export is DREADFUL.
AI collaboration (even assuming that bandwidth limitation goes away or is vastly reduced) would still introduce the issue of increased communication requirements, but compared to the initial transfer of acquired knowledge that one isn't as big a deal in my experience and only becomes problematic at great scale.
Perhaps more relevantly, I think there's a more basic questionable assumption - the idea that Church-Turing is the best computational basis for AI.
I strongly suspect this is not true - because even after all this time, our best abstract models of computation are still extremely brittle and fragile.
In a hand-wavey sense, as soon as you start allowing for arbitrary inputs and context awareness, you're no longer dealing with a simple Turing architecture but with something else entirely.
What you are talking about is a scientific hypothesis/theory, which obviously needs to be tested using experiments to make sure it works. Without experiment there is not much you can do in the real world.
Dunno, I don't really believe we are that close to building that sort of AI, but it doesn't seem fundamentally impossible, and it does seem like it could do things that to us would look as "more intelligent" than us. It might in the end be better at creating scientific knowledge in the way current computers are better than us at solving arithmetic: faster and capable of holding more in memory, rather than following any qualitatively different process. But even that would be enough.
It's conceivable that someone could construct a special-purpose AI which isn't superhuman but uses advanced ML and NLP algorithms to mine the scientific literature for previously unnoticed implications and relationships between seemingly unrelated articles. But I expect that would only give a suggested starting point for human researchers to formulate hypotheses and design experiments.
Don't get me wrong, I think it would be useful; it's just that the gap from human-level AI to 1,000x-human AI is simply not that huge. Let's say you moved someone from 2006 to 2016, or even from 1996. Yeah, sure, there is real progress, but not really. We have better drugs for AIDS, for example, but we are worse off in terms of antibiotics. Game graphics have improved, but Quake is from 1996, so we already had real 3D first-person shooters, and gameplay is not that different. Hell, FTP is arguably worse. Further, that's 20 years, so we are talking literally millions of man-years of effort and trillions of dollars worth of R&D for not much.
In terms of machines, the SR-71 is still the fastest manned aircraft, and there is no longer any supersonic passenger aircraft. The tallest building is much taller, but lacks much usable space on the top floors, making it more monument than utility; the Sears Tower has more usable space and a smaller footprint.
Maxwell's equations were a game changer. So were Newton's laws. So were Relativity and QM.
Church-Turing was also a game changer. But I don't think there's been anything equivalent in computing since.
There's been a lot of application, but no game-changing deep theoretical insights.
Quantum computing may - should? - eventually throw up something new.
It's going to have to. If it doesn't I think we're going to be stuck with much less progress than we're expecting.
Quantity when the difference is large enough becomes quality. The 9 orders of magnitude or so that computers have gone through in storage capacity and speed definitely count as a game-changer.
Human intuition was the major thing getting in the way, and even Einstein's intuition got in the way of his acceptance of quantum mechanics.
Simple artificial life of some kind seems much more likely in the foreseeable future. I've often wondered why no-one has written a virus or worm that is capable of mutation or even cross-over (sexual reproduction of a sort). Under selection pressure it might evolve into something resembling life. The Tierra experiments were a fascinating experiment in this area.
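The mutate-and-select core of that kind of experiment is only a few lines. Here's a hedged toy version (the target string, population size, and mutation rate are arbitrary choices of mine), showing blind copying errors accumulating under selection pressure:

```python
import random

random.seed(42)
TARGET = "replicate"                      # the "niche" selection rewards
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(genome):
    # Count characters that match the niche.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Blind copying errors: each character may flip to a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]
for generation in range(200):
    best = max(population, key=fitness)             # selection
    if best == TARGET:
        break
    population = [mutate(best) for _ in range(50)]  # replication with errors
```

A worm doing the same thing would just be replacing the string with its own code and fitness with survival in the wild.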
It's the wrong argument entirely. It has nothing to do with intelligence but everything to do with knowledge. Smarts are useless if you have no knowledge. And that's exactly where computers have the edge. Even a 'regular' human level intelligence with all the knowledge that currently exists at its beck and call would be a formidable adversary. Remember: Knowledge is power.
An AI can certainly produce a theory which matches observations, but no matter how powerful it is it won't be able to magically produce only correct useful theories any more than a human can.
We had over 200 years from Newton's laws of motion to produce relativity from pure logic alone, but we didn't manage it. We had to reason from the evidence, and even then we didn't accept relativity as "true" until it had been experimentally verified.
This approach does nothing to convince anyone who believes differently that they are wrong; all it does is convince them that the "true-believers" are themselves deluded.
This does nothing to advance society's understanding of the universe around it.
They say "give me evidence", because so far they have an enormous mountain of evidence that SR and GR describe reality well.
That doesn't excuse people being jerks -- science should be welcoming, not alienating -- but it does mean "well, have you considered X" is almost always coming from a place of ignorance, not well informed skepticism. Some impatience with the cranks of the world has to be excused.
I am also looking at a paper that purports to say that Einstein's GR, as used to calculate the precession of Mercury, relies on a couple of simplifications which, if not used, give GR calculating the precession of Mercury at twice (or thereabouts) what is measured. To test the veracity of the paper, I'll have to do some serious brushing up of my engineering mathematics.
Another example is related to the speed of light, c. This value is inversely proportional to the square root of (the electric permittivity of the environment times the magnetic permeability of the environment). The normal assumption is that outside of the atmosphere of the earth, the environment is a perfect vacuum, and it is assumed that the speed of light is therefore a constant in a perfect vacuum.
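As a sanity check on that relation, plugging in the standard CODATA vacuum values recovers the familiar constant:

```python
import math

eps_0 = 8.8541878128e-12   # vacuum permittivity, F/m (CODATA value)
mu_0 = 1.25663706212e-6    # vacuum permeability, H/m (CODATA value)

# c = 1 / sqrt(permittivity * permeability); a medium with larger values
# of either would carry light more slowly than the vacuum figure.
c = 1.0 / math.sqrt(eps_0 * mu_0)   # comes out near 2.998e8 m/s
```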
For many areas, this simplifying assumption is adequate, but it may not be true; it is only assumed. It may turn out that, due to variations in both the permittivity and the permeability of extraterrestrial environments, this assumption is not merely inadequate but grossly inadequate.
When doing any experimental work, one should keep in mind and have written down all the base assumptions, otherwise, one may garden path oneself.
This has led to the idea that there has to be additional unseen, unmeasured matter (aka dark matter) and unseen and unmeasured energy (aka dark energy) to explain the anomalous measurements of galactic movement that is not explained by the "gravity is the only significant force" model.
Now, it may be that there is such a beasty. But I have my serious doubts, because electromagnetic fields have more intense effects than gravity by many orders of magnitude (about 36).
In other words, some environments may be completely neutral and gravity is the major effect, in others, the environment may be very electrical and gravity is the least effect.
Our problem is that we can observe only at vast distances and are not able to take direct measurements to confirm our indirect modelling measurements.
Science is no different to any other aspect of people's lives: they will invest great amounts of energy in following their specific model of the world. When challenged, they mostly look at others as imbeciles incapable of seeing "the truth". I've seen this occurring in areas like string theory, black holes, dark matter, sub-atomic physics, climate science, immunology and vaccinations, medical surgical techniques, religion, politics, economics, sociology, finance and business, education, archaeology, evolution, etc., etc., etc.
Our models and theories are just that: models and theories. We hope and believe they give us insight into the universe around us, but they are not reality. Each has limits, and all use simplifying assumptions that are not accurate outside of the limits imposed by those assumptions.
If on the basis of our assumptions, we have to keep adding more and more "strange" entities to make our model fit the observations, then, maybe, we should go back and look at the underpinnings of our models and see if another model will work better (as in the kind of change that eventually occurred when going from the earth-centric epicyclic model to the sun-centred heliocentric model).
The fun is being able to learn more and more about how our universe operates and what we can do within it.
And the idea that all forms of computation (Turing machines, lambda calculus, etc.) are equivalent also seems pretty darn silly at first glance, too.
1. This doesn't really bother making an argument against superhuman intelligence. Yes, of course intelligence has many components (depending on how you measure it), but that's not an argument against superhuman intelligence. I'm reminded of the joke paper claiming machines can never surpass human largeness, because what does largeness even mean? Why it could mean height or weight, a combination of features, or even something more abstract, so how can you possibly say a machine is larger than a human?
2. Mainly arguing about the definition of 'general' without even trying to consider what the actual usage by Bostrom et al is (this was in the introduction or first chapter if I recall correctly). I agree that the different modes of thought that AI will likely make possible will probably be very useful and powerful, but that's an argument for superhuman ai.
3. Well he makes his first real claim, and it's a strong one: "the only way to get a very human-like thought process is to run the computation on very human-like wet tissue." He doesn't really explore this, or address the interesting technical questions about limits of computational strata, algorithm efficiency, human biological limitation, etc.
4. Few if any think intelligence is likely to be unbounded. Why are these arguments always 'x not infinite, therefore x already at the maximum?' He also seems to be creating counter examples to himself here.
5. Lots of strong, completely unbacked claims about impossibilities here. Some number of these may be true, but I doubt we have already extracted anything near the maximum possible inference about the physical world from the available data, which is basically what his claims boil down to.
Here's the thing. If Bostrom's claims about AI are so strong, why does everyone who's referring to his book as their source of beliefs about the future spout non-sequiturs about AI?
Here's an example. 80000 Hours has a mission that I generally agree with, to find the most important problems in the world and how people can most effectively work on them. But somehow -- unlike cooler-headed organizations like GiveWell -- they've decided that one of the biggest problems, bigger than malaria, bigger than global warming, is "AI risk" (by which they mean the threat of superhuman AGI, not the real but lesser threat that existing AI could make bad judgments). 
To illustrate this, they refer to what the wise Professor Bostrom has to say, and then show a video of a current AI playing Space Invaders. "At a super-human level", they say pointedly.
What the hell does Space Invaders have to do with artificial general intelligence?
For that matter, what the hell does deep learning have to do with AGI? It's the current new algorithmic technique, but why does it tell us any more about AGI than the Fourier Transform or the singular value decomposition? I would say this is a bias toward wanting to believe in AGI, and looking for what exists in the present as evidence of it, despite the lack of any actual connection.
Has 80000 Hours been bamboozled into thinking that playing Space Invaders represents intelligence, or are they doing the bamboozling? And if Bostrom is such a great thought leader, why isn't he saying "guys, stop turning my ideas into nonsense"?
People do sometimes overblow AlphaGo / DQN playing Atari, but it's not meaningless. These systems (and other deep-learning-based systems) can truly learn from scratch on a decent variety of environments. One of the most important unknowns is exactly how difficult various cognitive tasks will prove to be for a machine. Each task accomplished is another data point.
Deep Blue was hand-led directly and specifically to solve the problem of chess: It was provided with a library of opening moves, some sophisticated tactical algorithms relevant to the problem of chess, a library of strategies for chess, and so on. Many actual human masters of chess were consulted, directly or indirectly, to help with developing Deep Blue's approach to the problem.
DeepMind, on the other hand, was created as a "blank slate" with no more hard-wired instruction than "create optimal algorithms to achieve the winning state, given the inputs." Critically, its learning phase is completely self-directed. Essentially, the box is given access to the controls and the video screen content and then sent on its way.
It's instructive to note that this is pretty much exactly how, very generally speaking, evolution and intelligence solve the problem of survival: every organism has controls and a glimpse of "game state" and has to learn (collectively as a species, individually as an organism) to play the game successfully.
Have you seen DeepMind's algorithm, to be able to say this? Are there other people outside of Google who have seen the algorithm and can confirm Google's press release?
DeepMind announced they were switching over to TensorFlow not long ago.
TensorFlow is fully open-source, you can go and read the algorithms on Github. Neural networks aren't particularly hard to understand.
The post you are responding to paraphrases the situation, and neglects to mention goal states/conditions, but is otherwise a fair summary of how neural networks work.
Neural networks, particularly the 'deep' ones, do seem to operate almost like magick. Disbelief is to be expected, but I suggest you take a proper look at them yourself, and see what they can actually do.
If you can grok Python, I highly recommend playing with the Docker image of TensorFlow, which provides a Jupyter notebook (Python in a web interface) where you can easily make use of existing pre-trained models, extend them, and/or build your own models — suitable for application to all kinds of problems / problem domains.
Siraj Raval's videos also cover neural networks in almost every aspect - from coding your own NN from scratch for digit recognition (using just NumPy), through explaining what TensorFlow is, how it works, and how to use it — along with a lot of other machine-learning algorithms, libraries and cloud services.
Many of these topics are just a 5-10 minute video (some with longer live-coding 'deep dive' follow-ups), and many of the code examples are fully functional but just a screen or two of code.
There's also some great new videos on AI / machine learning by Google and Facebook on Youtube that are well worth a watch if the topic interests you.
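To back up the "not particularly hard to understand" point: here's a complete single-neuron "network", trained from scratch by gradient descent in plain Python (not even NumPy), learning the AND function. The task and hyperparameters are toy choices of mine, not anything from those tutorials:

```python
import math
import random

random.seed(1)

# The AND gate: a tiny, linearly separable toy task.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # two weights
b = 0.0                                             # one bias
lr = 0.5                                            # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

for epoch in range(10_000):
    for x, target in data:
        out = predict(x)
        # Gradient of squared error back through the sigmoid.
        grad = (out - target) * out * (1.0 - out)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad
```

Stack layers of these neurons and swap in automatic differentiation and you're most of the way to understanding what the deep-learning libraries do.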
But then I wonder why you aren't asking the same question of my parent poster. Has he viewed the DeepMind code, is he qualified to tell us it works the same as chess code? Having made that claim backed on even less evidence than I made mine, I'd say his burden of proof is somewhat greater.
I don't think Google has fundamentally different deep-learning technology than everyone else. In fact, TensorFlow indicates that they have the same kind of deep-learning technology as everyone else and they just want to do it more cleanly.
Deep learning is parameter optimization. There are more parameters now, and they optimize more things, but don't get caught up in wild visions of machines designing themselves. Would you consider the bzip2 algorithm to be "self-directed learning"? What's the difference, besides the number of parameters?
The PR people, when they say "blank slate", are discounting all the programming that went into the system because it sounds more impressive that way. This is unfortunate. It has happened in AI for decades. To be a responsible consumer of AI press releases, you need to understand this.
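To make "deep learning is parameter optimization" concrete, here is the same loop at toy scale: two parameters, a squared-error loss, and repeated downhill nudges (the data is invented for the example). Scale the parameter count up by millions and swap in a fancier function family and you have deep learning:

```python
# Fit y = a*x + b to points drawn from y = 2x + 1, by gradient descent.
points = [(x, 2 * x + 1) for x in range(10)]
a, b = 0.0, 0.0     # the "model": just two parameters
lr = 0.01           # learning rate

for step in range(5_000):
    grad_a = grad_b = 0.0
    for x, y in points:
        err = (a * x + b) - y     # prediction error on this point
        grad_a += 2 * err * x     # d(err^2)/da
        grad_b += 2 * err         # d(err^2)/db
    a -= lr * grad_a / len(points)   # nudge parameters downhill
    b -= lr * grad_b / len(points)
```

Nothing in the loop "designs itself"; the structure is fixed by the programmer and only the numbers move.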
That's true, and I never claimed otherwise, but that doesn't help you argue your point - in fact, you just proved yourself wrong. From IBM's press release:
> _Does Deep Blue use artificial intelligence?
The short answer is "no." Earlier computer designs that tried to mimic human thinking weren't very good at it. No formula exists for intuition. So Deep Blue's designers have gone "back to the future." Deep Blue relies more on computational power and a simpler search and evaluation function._
I'll summarize for you: Deep Blue and DeepMind, similar names notwithstanding, work in very different ways.
What I am saying is that I know how deep learning works, actual deep learning of the present, and it does not involve "programming itself".
You are trying to tell me that it must be programming itself, because a press release said so, and press releases would never lie or exaggerate. Based on the current state of AI, this is very improbable. You should focus less on trying to "prove" things with press releases.
I made the comparison to Deep Blue because there is little mystique around it now, and because IBM was even reasonably responsible about avoiding AI hype in their press at the time.
There are probably numerous variants of it that did not learn to play Atari games, and therefore were not announced.
To help you appreciate the difference, try answering this question: Were expert players of Pong, Space Invaders or any other video games consulted to contribute strategies for winning those particular games? Was a redesign required to play a new, different game?
If not, you'll need to tell me where the knowledge of how to win at those games came from. I hope you'll appreciate that the ability to autonomously obtain that knowledge by trial and error rather than design and programming constitutes the fundamental difference.
The DeepMind implementation learns how to play Atari games. You could even say that it's learning how to play one game, where your opponent chooses the game cartridge and your goal is to increment the score counter.
You would absolutely need to redesign DeepMind to play Super Mario Bros.
When you give DeepMind credit for designing itself, you are glossing over the fact that it was programmed by people, most likely for long hours.
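For reference, the trial-and-error scheme under discussion (reward-driven reinforcement learning) can be sketched in a few lines on a made-up toy "game": a five-cell corridor where reaching the right end scores a point. This is not DeepMind's code, just the textbook Q-learning idea it builds on:

```python
import random

random.seed(0)
N = 5                        # states 0..4; entering state 4 scores 1 point
ACTIONS = (1, -1)            # move right or left
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit learned values, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)               # walls clamp movement
        reward = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: move toward reward plus discounted lookahead.
        best_next = max(q[(s2, 1)], q[(s2, -1)])
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2
```

The knowledge of "go right" is nowhere in the program text; it ends up in the learned `q` table, which is the sense in which such systems acquire strategy by trial and error rather than by design.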
If you deny the existence of any such thing, isn't that tantamount to saying that given the choice between two courses of action, there's never a reason to pick one over the other?
I mean, I feel like as humans, we're trying to follow fuzzy approximations of that function all our lives, whenever we're earning money, or spending time with people we love, or whatever, we're doing it because we think it will bring us happiness or life satisfaction or something.
The genome is a kind of algorithm for hardware to implement a strategy to win at this game in a highly competitive environment.
Regarding point #1: while not formally wrong, world computing capability is growing at an exponential rate. Not even the end of Moore's law will stop that (e.g. 3D transistor stacking, strong semiconductor demand from the consumer and industrial markets, etc.). Also, the author doesn't know whether there is already enough CPU capacity to match human intelligence: maybe the missing key is not the hardware but the software (efficient algorithms for "human" intelligence running on silicon).
Point #2 is clearly wrong. Demonstration: I, for one, if still alive and given the chance, will try to implement general-purpose intelligence "like our own". And, come on, I know no hacker able to resist that.
Again, point #3 is wrong, unless you believe we're smart because of a religious "soul".
Point #4 is a void argument: the Universe itself is finite.
Point #5 is right: a superintelligence may, or may not, care at all about our problems. In the same level you don't have the guarantee of a human government caring about you (e.g. totalitarian regime).
"I asked a lot of AI experts for evidence that intelligence performance is on an exponential gain, but all agreed we don’t have metrics for intelligence, and besides, it wasn’t working that way. When I asked Ray Kurzweil, the exponential wizard himself, where the evidence for exponential AI was, he wrote to me that AI does not increase explosively but rather by levels. He said: “It takes an exponential improvement both in computation and algorithmic complexity to add each additional level to the hierarchy…. So we can expect to add levels linearly because it requires exponentially more complexity to add each additional layer, and we are indeed making exponential progress in our ability to do this. We are not that many levels away from being comparable to what the neocortex can do, so my 2029 date continues to look comfortable to me.”
"What Ray seems to be saying is that it is not that the power of artificial intelligence is exploding exponentially, but that the effort to produce it is exploding exponentially, while the output is merely raising a level at a time. This is almost the opposite of the assumption that intelligence is exploding. This could change at some time in the future, but artificial intelligence is clearly not increasing exponentially now."
The last bit about requiring experiments in real time is also interesting.
He is fighting the original myth with his own myth, except that his myth is founded upon his own assumptions and intuitions as opposed to those of someone else.
It seems more likely that we simply don't know the answer to many of these questions yet, because we still have major disagreements about exactly what intelligence is. To use Richard Feynman's famous line: if we can't yet build it, then we don't understand it.
AI doesn't need to exceed humans in every dimension to become a threat. Just sufficient dimensions.
Humanity is basically a bacteria colony in a petri dish with I/O. Disrupt infrastructure and you disrupt input, leading to changes in the size of the colony. And mind you, much of our infrastructure resides in the cloud.
Of course, it will be a while before this even becomes an issue, but this is basically how a machine would frame the problem.
Implementation-wise, AI doesn't need to be general. At its most inelegant (and not too distant) design, ML can be configured as a fractal of specific algorithms, with one on top tasked with designating goals and tasks, and subordinates spawning off generations and evaluating performance.
Andy Grove had a good saying: "anything that can be done will be done".
Autonomous AI, if it does not break the laws of physics, will exist. Its development will be spurred by our curiosity or by profit.
If we didn't just have the results of scattered tests, but had an exhaustive set of docs written by the creator of the black box, it still wouldn't be easy. But we'd have a chance. This is why one of the main strands of AI value alignment research focuses on building an AI that we can understand. If we can build something we can understand, that gives us leverage to alter it to value our interests.
(What "our interests" are, in a well-specified technical sense, is a whole 'nother problem, and one that there's very little progress on.)
Well, there's the old saying: "If you can't explain it to a six year old, you don't understand it yourself." I think if there were a superhuman intelligence, it would likely understand where our limitations as humans are, be able to break down the components of its discoveries into the simplest terms, and be able to teach us, even if at an incredibly slower pace than it can process.
This reminds me of Gödel's proof that our current formal systems of mathematics cannot prove everything; in fact, it goes further and says that every formal system will always leave some things it cannot prove. Obviously a robot would have to use some formal system to reach its conclusion, so if it's really so smart, can it break down the system it used so that we can understand its basic building blocks? Of course, the human limits on rate of computation and memory are always in the way.
Of course if you're saying that the super human intelligence would be on an entirely different plane of thought impossible for us to understand then that's understandable, but probably less believable.
This line of thinking reminds me of the book Flatland by Edwin Abbott. Flatland is the story of a two-dimensional square who lives on a two-dimensional plane and only knows his world through his limited perception of it. One day he is visited by a three-dimensional sphere who explains his world in a way imperceivable to him. The sphere somehow magically takes him off his plane of existence and shows him his world from the sphere's point of view, then takes him on a tour of worlds whose inhabitants perceive different numbers of dimensions: from Pointland to Lineland, then to three-dimensional space, and finally back to his home plane, where he is eventually locked in a mental institution for telling people about his adventures. Anyway, it's an interesting fantasy story; I recommend it.
Hate to go all metaphysical on you all, but basically the story shows that we only know things from our limited perspective on what's around us; if there are any higher "planes" of perception, it's entirely possible that we wouldn't know about them. Some things can only be known with certain perceptions/experiences/knowledge.
It may sound ludicrous, but I would even say Christianity backs this idea to some degree attributing God to a higher plane of thought like in Isaiah 55:8-9.
Of course I would never put a robot on that level, but I could see some things being imperceptible to the human mind, so the principle is similar. Can robots achieve a higher plane to any degree... beats me. Honestly, I suspect it wouldn't be so high that they couldn't explain their thoughts to us.
With superhuman AI, we would be in much the same situation as the great apes: their survival depends far more on what we do than on what they do. Just as the great apes cannot constrain our actions, we would not be able to constrain the superhuman AI's actions.
On a darker note, as Homo sapiens wiped out all the other hominids, there is a good chance that super human AI would try to wipe out humanity, as we are probably the greatest threat to their independence and well being.
Unfortunately he gets hung up on the definition of Intelligence - and not unreasonably so - because it is very ill defined and largely unknown. So all of what he says is true, but orthogonal to the argument he is trying to debunk.
It's basically setting up a pedantic straw man and then taking it apart.
There are other great and more compelling arguments against an all powerful superhuman AGI, unfortunately he doesn't make any of those.
These issues are mind-bending topics that stretch the imaginations of the most brilliant people I am aware of. It takes them a lifetime to build good intuitions and analogies. I wish that writers of this caliber felt as qualified to write one sentence about it as they actually are.
You can imagine this as a group of people speaking with each other. The faster they can communicate ideas, the more intelligent the group can potentially be. Machines can surpass the speed of this collective intelligence by orders of magnitude, even if everything else is exactly as in a human. This is exactly why we evolved to devote so many brain resources to language.
You do not, in fact, get general intelligence by accident by throwing a lot of connections in one place, just like you do not get a human just by throwing a lot of organic molecules and water in one place.
So it is possible that we reach AI just by randomly permuting connections and weights. Of course it's more likely we intelligently set (or "evolve") these connections and weights, but this allows us to set an upper bound on computation/time needed.
I don't think this was the case. Yes there is evolution but it is not random. Actually most of the molecules sitting around did not evolve to intelligence.
In the case of our own evolution we had natural selection.
In the case of AI we have artificial selection (selection made by humans), and even if we consider ourselves smart enough for this, we cannot prove that we are able to make it happen (choosing the correct AIs/algorithms to survive) until it happens.
Maybe I can't express this clearly enough, but the advantage natural evolution has over artificial evolution is the huge number of "experiments": it had enough time to make a lot of small changes until something worked.
I think you have that backwards; natural evolution is absurdly slow because it takes a very long time to cycle through generations of animals whereas genetic programming on a computer to evolve algorithms can happen billions of times a day because computers are much faster at cycling through possibilities.
> meaning it had time enough to do a lot of small changes until something worked.
Computers can do it faster.
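As a toy illustration of how quickly simulated generations can tick over, here is a minimal genetic-style search. The target value, mutation range, and population size are all arbitrary choices for the sketch; the point is only that thousands of "generations" run in a fraction of a second:

```python
import random

TARGET = 42

def fitness(x):
    return -abs(x - TARGET)  # higher is better

# Toy genetic search: mutate a population of integers toward TARGET.
population = [random.randint(0, 100) for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                 # keep the fittest half (elitism)
    children = [p + random.randint(-3, 3) for p in parents]
    population = parents + children

best = max(population, key=fitness)
```

Two hundred generations of this complete almost instantly; a biological population would need two hundred lifetimes.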
And I agree with you that AI can be much faster.
I still think artificial selection can be influenced by us humans, so we might add flaws into the system from the beginning.
Of course, maybe the AI can learn to identify them. But maybe not.
It's like the case where, looking from inside a system, one cannot see the system as it really is, and so cannot fix it.
Of course, what I'm saying is just hypothesis, nothing proven, and I don't think it's falsifiable yet.
Well, not unless you have a couple billion years to sit around.
And there's at least some evidence that it's even worse than that -- intelligence does not appear to be common in the universe. So it might well be a couple of billion years times a million planets' worth of organic molecules and water (or a hundred million planets, or a billion planets).
Don't tell that to the deep learning cult! They'll have to think about how it all works!
I have not met any actual deep learning researchers who think they are creating AGI. I think that it's non-experts who write about deep learning research who think that deep learning is creating AGI.
And I'm not even sure I would call it "parameter fitting". I'd call it, "continuous circuit searching via gradient descent", because that's all you're doing: searching for a nice analog circuit for your task within a large, continuous space of such circuits.
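That framing can be made concrete with the smallest possible "circuit": a single sigmoid unit whose weights are found by gradient descent, here fitted to the AND gate. The learning rate and epoch count are arbitrary choices for the sketch:

```python
import math

# Truth table for AND: inputs -> target output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = b = 0.0   # the "circuit parameters" we are searching over
lr = 1.0

def forward(x1, x2):
    """One sigmoid unit: a point in a continuous space of analog circuits."""
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

for _ in range(2000):
    for (x1, x2), y in data:
        p = forward(x1, x2)
        err = p - y          # gradient of cross-entropy loss w.r.t. the logit
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b  -= lr * err
```

Gradient descent here is literally walking through a continuous space of tiny circuits until one computes AND; deep learning is the same search with vastly more parameters.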
Unless intelligence itself conforms to natural principles, unless it "cuts with the grain" of nature's preexisting laws.
Which, frankly, it does. Information theory and thermodynamics are a thing.
Further, to get better you need to optimize for something. You might get an AI that's really good at writing fiction, but how do you measure that and write one that's even better at writing fiction?
The thing about machinery is that it scales. If you have one AI that's really good at writing fiction, you can replicate it and generate all the fiction you like. Not true of human authors.
I do expect there will be other bottlenecks after think time is solved, but even without intelligence much beyond human, the ability to scale in other directions is pretty scary.
You're assuming that it's possible to do efficient knowledge transfer without losing general intelligence.
Maybe in order to have general intelligence about something, you need to learn it from scratch, over time, rather than just "copying" the information.
Another problem is that you're assuming that you can build a powerful enough computer, and still retain the possibility to have them be programmable. That is, that you have easily programmable memory cells within the computer which can change its behavior. But to have easily programmable memory cells, they need space and infrastructure to access them, and they will interfere with critical timing paths. That's why if you build a special purpose ASIC for a task, it will always be faster than a CPU or GPU.
Maybe all the things we consider useless for intelligence, like playing, sleeping, exercising, etc., are actually necessary. We've certainly started to see that those who focus too much on studying by cramming don't necessarily become smarter.
You can put me in the "skeptics" camp when it comes to superhuman intelligence. It may be possible, and it's fun to discuss, but it seems to me that the ones who fuss about it are making an incredible number of assumptions.
So it seems like the question is whether computers as we know them can do general intelligence at all? If they can, it will be easy to clone.
If they can't, then it would imply that general intelligence is something more than information processing as we know it; perhaps some kind of dualism?
It's not a software or hardware problem; it's a data problem. It's not obvious which part of neural net A encodes "fir" and which part encodes "cat", nor how you map that onto B's encoding of firs and cats while connecting them.
Now, AI is not necessarily going to be neural nets, but it's also not necessarily understandable what all the little bits do, just that they work.
I'll be honest, I didn't read the arguments in detail (since they're just rebutting strawman arguments it hardly seemed worthwhile), but I was sort of surprised at how poorly reasoned the arguments were even for the parts I agree with.
But there's nothing to stop centralized computing from having distributed terminal-like i/o devices, cameras and waldos.
A cognitive argument for distribution is that a diversity of ideas, developed somewhat in isolation, each drawing on its own unique local observations and actions, is more likely to innovate. Many points of view will see more. However, this can be simulated with internal divisions.
What EVERYONE is missing, though, is that enhanced human intelligence is inevitable, and will be vastly more "intelligent" than superhuman AI (though as human intelligence increases, so, naturally, will AI). I think enhanced human intelligence will have immeasurably greater impact than any other conceivable technology, since it lets us engineer who we are. What is a world like that's inhabited by 7 billion Leonardos?
* The idea of exponential growth, which seems like an important underpinning of the singularity theory. Nothing in nature, as far as I am aware, grows exponentially. Some growth trajectories may for a time look exponential, but they always turn out to be the bottom of an s-curve. The idea that the first AI with superhuman intelligence will necessarily beget one of even greater intelligence is deeply flawed for this reason. It is analogous to predicting that Moore's law will continue forever because ever faster and more capable computers will assist in designing the next generation of chips. At some point the laws of physics will constrain further advances, and we will encounter the upper half of the s-curve.
* The idea of AI in a box. It's the idea that anything we would call intelligence can evolve divorced from a high-bandwidth, real-time sensory apparatus, and divorced from a way to directly manipulate its environment.
* The idea that more intelligence always makes a significant difference. If we look at thinking disciplines where computers are already better than the best humans (chess, heads-up poker, recognizing certain kinds of objects, etc.), the differences are small. If the best human's decisions are, say, 99% optimal, a computer's may be 99.9% or 99.999% optimal. The point being that a computer can never make decisions that are better than 100% optimal.
2. I agree that it's a very strange and unpredictable scenario.
3. How optimal do you think humans are in the real world? In chess you have a very limited number of possible actions at each point, but in reality your possible moves are almost limitless.
3. That obviously depends on the endeavour. But humans are pretty great at a lot of stuff. We learn quick too.
It is a staple of AGI speculation that a computer program with even near-human IQ would spark the singularity since, at least, the hardware running it could be improved so that the AGI would be able to perform person-months of cognitive labor in days. Since the first target of this labor would be improving the AGI program and hardware, compounding improvements are expected.
Biological brains achieve their ends with relatively low speeds and energies. Even simplistic substitution with equivalent electronic components would be hundreds of times faster, and I'm sure we'll do better than that. I don't see the difficulty in the conjecture.
> I also doubt very much that cognitive labor is the constraint that limits human progress
That statement is impossible to discuss without defining "human progress", but if the work of the world's universities for the next 100 years was available in one year's time, at the very least someone with access to that information would have a significant competitive advantage. It seems clear that the next 100 years will include significant advances in automated fabrication, at which point physical labor also essentially becomes cognitive labor.
The way you state that one wonders why we haven't already achieved superhuman, general AI.
if the work of the world's universities for the next 100 years was available in one year's time
But that's a fundamental misunderstanding of the character of the work that goes on in universities. Knowledge is only rarely produced by people thinking about problems. It's produced by mundane trial and error experimentation. Experiments that take time. And money.
Brain emulation is still expensive ...
It'll be interesting to see what we can learn at the different levels of emulation.
> It's produced by mundane trial and error experimentation.
Fair point, although experiments in computer science and applied mathematics can usually be carried out without constructing physical apparatus. Also identifying and designing experiments to efficiently characterize a problem space is a large part of experimentation. And again, once automated fabrication improves, some automated experimentation in physics and chemistry becomes possible.
Actually a lot of things grow exponentially, at least for a while. Consider zygotes: with every cell-division period, the cell count doubles.
That said, AI is exactly not "natural". A major point a lot of folks miss thinking about this is AI is amenable to engineering. This means that AIs can be intentionally designed to not have the limitations of natural systems. For instance, even today we know how to make effectively infallible, very fast memories of arbitrary size.
The other thing that makes strong AIs seem likely to improve exponentially is the idea that the first strong AIs could be focused on creating better strong AIs - and so on. Since AIs may also be replicated at will, a whole lot of distraction-free, singleminded intellect will be brought to bear on the next-gen AI design, and there is no reason that process couldn't be repeated indefinitely.
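The doubling-versus-s-curve tension in this subthread can be seen in a few lines: pure doubling blows past any ceiling, while adding a resource limit produces the logistic bend. The ceiling value here is arbitrary:

```python
# Exponential growth vs. the logistic s-curve: doubling looks
# exponential early on, but a resource ceiling bends it over.
CEILING = 1_000_000

exponential = [1.0]
logistic = [1.0]
for _ in range(40):
    exponential.append(exponential[-1] * 2)
    x = logistic[-1]
    logistic.append(x + x * (1 - x / CEILING))  # growth slows near the ceiling
```

For the first twenty or so steps the two curves are nearly indistinguishable, which is exactly why "it looks exponential so far" settles nothing about where the ceiling is.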
"At some point the laws of physics will contrain further advances, and we will encounter the upper half of the s-curve."
That might be true, however we also might be at IQ 1,000 (or IQ 1,000,000) AIs by then...
"The idea of AI in a box. It's the idea that anything we would call intelligence can evolve divorced from a high-bandwidth, real-time sensory apparatus, and divorced from a way to directly manipulate its environment."
This could be approached from many directions, from teleoperated robotics to VR.
"The idea that more intelligence always makes a significant difference. If we look at thinking disciplines where computers are already better than the best humans (chess, heads-up poker, recognizing certain kinds of objects etc.), the differences are small. If the best human's decisions are 99% optimal, say, a computer's may be 99.9% or 99.999% optimal. The point being that a computer can never makes decisions that are better than 100% optimal."
This seems a poor argument. I'd argue that greater intelligence introduces qualitative differences. For instance, the capabilities of a human are far beyond those of a dog. The same can be said of dog versus a grasshopper. Even the difference between an Albert Einstein and an average person seems profound.
I believe it's quite impossible for even an IQ 200 human to imagine the capabilities of an IQ 1,000 AI. What does a dog think about the reasoning capacity of a human?
- It somehow needs to be distributed, that is, composed of smaller computing parts, because there is a physical limit to what you can do in a unit of space.
- It needs to change to adapt to environment (learn), and so all the parts need to potentially change.
From this follows that the parts will be subject to evolution, even if they don't reproduce. And so the existence of the parts will depend on their survival. This, in my opinion, inevitably leads to evolution of parts that are "interested" in their own survival, at the expense of the "superintelligent" whole. And it leads to conflict, which can eventually eat up all the improvements in the intelligence.
Look at humans. Humanity (or biosphere in general) didn't become a superintelligent whole, capable of following some single unified goal. Instead, we became fighting factions of different units, and most of the actual intelligence is spent on arms races.
Anyhow, even if superintelligence is possible, I believe the problem of friendly AGI has a simple solution. We simply need to make sure that the AGI doesn't optimize anything, but instead takes the saying "all things in moderation" to its heart. That means, every once in a while, AGI should stop whatever goals it pursues and reflect on purpose of those goals, if it is not, by some measure, going too far.
You can argue that we don't actually know how to make AI to stop and think. I would respond, AI that cannot do that, and only pursues some pre-programmed optimum mindlessly, is not really general.
That doesn't follow. Just because there is a physical limit doesn't mean you can't build something vastly smarter than a human without hitting that limit.
> It needs to change to adapt to environment (learn), and so all the parts need to potentially change.
What do you mean by parts? Presumably you aren't including hardware, because we already have machine learning algorithms that learn (not quite like a human, but they do learn), and the hardware doesn't change. Do you mean something more like software modules?
> From this follows that the parts will be subject to evolution, even if they don't reproduce.
Evolution means something. It means that when things replicate with errors (and perhaps sometimes die), there will be more of the kinds of things that replicate more. While this concept is pretty general, replication is central to it. I don't know what you mean otherwise.
So there's an AI that sometimes upgrades its parts (software modules?). How does this lead to parts that are "interested" in their own survival?
> You can argue that we don't actually know how to make AI to stop and think. I would respond, AI that cannot do that, and only pursues some pre-programmed optimum mindlessly, is not really general.
Sure. But how do you specify "stop and think" formally in an algorithm, and will we figure that out before we construct a super intelligence or after? Right now all of our ML algorithms pursue optima.
I think it would be nice to have another word for it. But you can imagine that changing (or not changing) the state of something is the same as replicating it, possibly with a modification.
The superintelligence will have to be economic somehow, and at some point decide which parts will be built more of (replicated), which parts will stay, and which parts will be removed (the fact that they can be physically reused doesn't change things). That's when the "evolution" happens.
> So there's an AI that sometimes upgrades its parts (software modules?). How does this lead to parts that are "interested" in their own survival?
Well, it needs to detect which parts to upgrade and how, without understanding them fully (otherwise they wouldn't be needed). This can be influenced by the behavior of those parts, so the parts that can successfully pretend to be useful will be kept, regardless of their actual usefulness.
> Sure. But how do you specify "stop and think" formally in an algorithm, and will we figure that out before we construct a super intelligence or after? Right now all of our ML algorithms pursue optima.
I think we'll figure that out when we figure out AGI. We humans can reflect on our goals; our current ML algorithms can't, and I think that's precisely why they are not general.
In other words, an AGI will need to be capable of becoming "bored". I don't think a system that can never become bored with doing something can be truly generally intelligent. (It has to do with economics, IMHO.)
This may be easier said than done.
This is a fairly common proposal, actually. The term used is "satisficer". Instead of having the AI optimize a quantity, you have it satisfice it, i.e. attempt to reach at least a particular value.
But, iirc, the standard reply is that satisficers are not stable under self modification; a satisficer may self modify into an optimizer for some quantity, because this will also satisfice whatever it is a satisficer for.
However, there are setups which have been shown to be stable under self modification, and don't optimize the expected value of a function. Iirc, one which is stable is one that maximizes a function applied to the expected value of the utility function (so, optimizes f(E(u))), where the function f is increasing up to a certain point, and after that point it is decreasing.
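A tiny sketch of that last setup, with f increasing up to a set point and decreasing beyond it. The action names and expected utilities are made up for illustration:

```python
# f(E(u)): instead of maximizing expected utility directly, maximize f
# applied to it, where f rises to a set point and falls afterward, so
# actions that overshoot the set point are penalized.
SET_POINT = 10.0

def f(v):
    return -(v - SET_POINT) ** 2  # increasing below SET_POINT, decreasing above

# Hypothetical actions with their expected utilities E(u).
expected_utility = {"modest": 9.0, "greedy": 100.0, "timid": 2.0}

optimizer  = max(expected_utility, key=expected_utility.get)
satisficer = max(expected_utility, key=lambda a: f(expected_utility[a]))
```

The plain optimizer picks the "greedy" action; the f(E(u)) agent prefers the "modest" one that lands nearest the set point, which is the moderating behavior the comment describes.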
At some point (ignoring us destroying ourselves first) we will be able to accurately simulate a cell in software. As computing gets cheaper, we will be able to simulate a human brain's worth of cells. We'll feed it inputs and give it outputs just like a brain would have. The only technological challenge here is scanning and reading data from a live brain. A very small challenge in the grand scheme of things.
Once that thing works it's a brain, and an artificial intelligence. Any other discussion simply complicates the situation. Accurately simulate a larger number of individual interconnected neurons and you're running an intelligence.
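For flavor, here is the basic simulation loop for a single idealized neuron, a leaky integrate-and-fire model. Real neurons are vastly richer than this, and the threshold, leak, and input values are arbitrary; the sketch only shows that the bare loop is straightforward to write down:

```python
# Leaky integrate-and-fire neuron: potential accumulates input,
# leaks over time, and fires (then resets) when it crosses a threshold.
THRESHOLD = 1.0
LEAK = 0.9       # fraction of membrane potential retained each step
INPUT = 0.15     # constant input current (arbitrary units)

potential = 0.0
spikes = []
for t in range(100):
    potential = potential * LEAK + INPUT
    if potential >= THRESHOLD:
        spikes.append(t)
        potential = 0.0   # reset after a spike
```

Whether stacks of such simplified units capture what an actual cell contributes to intelligence is exactly the open question raised in the replies below.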
No, I'm not 100% certain we'll ever be able to program intelligence the way we do symbolic reasoning in math, but we sure as heck can engineer one.
If a cell's emergent behavior depends on quantum, chaotic, or otherwise subtle phenomena, such a simulation may not accurately simulate a cell. The only hope we have of simulating a cell is by throwing away low-level information and then experimentally comparing our simulation to the real deal. At this stage, I don't think this is possible, so I consider the possibility of accurate cell simulation an open question. Do we have any way of knowing at this point? I'd be happy to be educated if I'm missing anything.
I don't know that the car I get in tomorrow will drive, but I can project that it will based on history. Similarly, we don't know that we can simulate a cell; however, we can look at the rate of improvement in technology, see no physical limitations, and project that in the future we likely will. In contrast, if the question had been whether we can travel faster than light, while still unknowable, we could look at the known limits of physics and project that we won't.
The other important question which you raise is "what does accurately mean?". The two values of accurate that I see are what I'll call the "Turing Model" of accurate and the "Psychic Model" of accurate.
The Turing Model of accurate is that if I were to observe the behaviors of this simulation of you, I wouldn't be able to tell whether it was the simulation or the real you.
The "Psychic Model" of accurate would be that if I were to feed the simulation the exact same inputs as you, it would produce identical outputs until the day you both (simultaneously) die.
I argue that the only meaning that matters for practical use is the Turing Model of accurate. If no one can tell it's not human, then it's equivalent to a human.
So back to your original question of accuracy w.r.t. quantum phenomena. I do see the reading of quantum data as a possible physical limitation. But my best (admittedly still layman's) understanding of quantum behavior is that results can be accurately modeled probabilistically. We can easily produce the same result distribution.
So my final argument rests on the question "Do we think that human intelligence depends only on the probability distribution of quantum events, or is there a hidden interplay in the specific "values" produced?"
Based on what we've seen so far from QM, I lean towards a dependency on the distribution as sufficient - ie there isn't an underlying structure hidden in the values we need to try to replicate.
There are completed projects in which all the neurons of smaller organisms have been simulated, and there haven't been any revelations.
Our brain doesn't contain any 'data', so whoever decides to extract data from it will have to decide what the data will be and why it would be of any use. There have also been many objections to the effect that brain correlates are more or less useless for questions of mind/intelligence.
Well, now, a simulated heartbeat implemented in a pacemaker and connected to a pump does cause real blood to be pumped.
A real brain adds real numbers, and so does a computer brain. A real brain composes real words, creates real emails, issues payments, requests shipments. Those are not less real than the words, emails, procedures, payments, and shipment requests that computers make today.
The thinking involved for a person issuing a shipment is not the same as what a computer does. These are not simulations, just similar-looking interfaces. Like an animated cartoon human compared to a real human.
It absolutely does! Where do you think our memories are stored?
It doesn't store it in 1s and 0s like computers, but I think it's fairly non-controversial to say that all of everything you know is encoded in the physical state of the cells and atoms of your body.
You are applying computer metaphor and then asking where the 'memory' is 'stored' or 'encoded'. Metaphors/abstractions are useful tool, but when talking about differences, we should be more careful.
But it is an assumption.
If an AI begins replicating then its part will come into conflict only if its "immune system" is unsuccessful at beating back mutations and/or it's terrible at replication.
I don't think we have reason to believe either of those things will happen. High-fidelity replication will be part of a successful AI's self-programming, and it's likely that a superintelligence will figure out how to run an immune system effectively.
I think the "all things in moderation" approach might make sense, but how do we ensure humans abide by it when there is so much wealth and power to be had by edging out your competitors just a little bit?
Our economy heavily rewards optimization.
A superintelligent AI would be so useful for warfare and making money that we could never hope to impose limits on what some people will do with it; they will let it optimize.
The problem may be inevitable, but that doesn't mean it's insurmountable.
But: look at humans. We are each a vast empire of trillions of cells which, for the most part, work together for the survival of the whole, and which certainly possesses intelligence beyond the capabilities of the cleverest single-celled organism.
The point is, I believe a superintelligence will break into factions, and most of its energy will be spent on these factions fighting each other, ignoring any common goal of self-improvement.
So I don't see the so-called "singularity" as a possibility; at most as another, faster type of evolution, which will ignore humans due to its own self-centeredness, and which humans will not comprehend. But just as our culture depends on nature, this superintelligence will probably depend on us (human culture) to sustain its internal struggles.
Then, there are aspects that are greatly overlooked in all these narratives:
Human geniuses occur very rarely and take literally decades to learn, while the AI equivalent could be consistently "instanced" multiple times, live forever, evolve after birth and work 24/7 without sleep.
Then, humans have crappy I/O. AI is not bounded by the shortcomings of writing/reading/typing/talking at low rates of words per minute...
Generally speaking, AI theoretically has a substantial advantage over humans. Even if AI remains dumber for a time, these advantages are enough to make it prevail.
Some people might then argue that the human experience is also about creativity, entertainment, learning, social interaction, spirituality, fitness and a long list of things.
Part of it is because the brain is not only raw problem-solving power... it is embodied in a human body, with bodily needs and hardwired behaviors coming from older brain structures: an intuition for what may look like a predator or threat (and therefore evokes fear), what may be fine to eat, etc., and other stuff correlated with survival that guided our evolution but doesn't necessarily have to do with survival anymore.
But AI can be embodied into anything, and the equivalent to its primitive mind can be played with. While we have many learned behaviors, there are aspects that are not learned. AI will be different. What AI ends up developing into will greatly depend on how that is done.
The author must not have met many humans.
Our minds are "general purpose" compared to, say, a chess playing computer program. But they're not necessarily "general purpose" in the most, well, general, sense. They're evolved with specific capabilities and talents that are geared towards helping humanoid, bipedal, mammalian creatures survive and replicate on a specific small blue planet, orbiting a particular yellow star.
As he pointed out in the article, there are examples of animals, like squirrels, demonstrating "intelligence" of a form that humans don't even remotely come close to having.
So, whether or not we have "general purpose mind" depends on how generally you define "general purpose." Which I think is actually an interesting point, in the context of what the author was driving at.
Our star actually casts white light. It just looks yellow from Earth. If the sun were yellow, then the moon would look yellow when it was straight overhead. The moon looks white overhead, because the light from our star is white.
My fear is being crushed by an amazing strategist that won't ever get poetry, for instance.
Technology assets generally decline in price as more efficient means of production and distribution become available (i.e. cloud) and the cost of technology components becomes more commoditized.
> I will extend that further to claim that the only way to get a very human-like thought process is to run the computation on very human-like wet tissue.
I think wetware (or human-like wet tissue) needs to be created first, before any judgement can be made about its costs.
Note: obviously there's nothing special about the meat between a human's ears, so _one_ day someone in theory should be able to build a machine that matches and exceeds a human's thinking ability. But that's not going to happen in any of our lifetimes.
Hah, I don't think so. For sure, future systems we design will have multiple kinds of intelligence. And then we'll slap some pattern matching onto the front end to help the system recognise which intelligence to apply to which situation, much like how you recognise a math problem and pick up a calculator, or encounter a concept you don't recognise and pick up a dictionary.
So we'll develop systems that have many more of these intelligences, which will each have superior abilities to what we have now (think infinite precision math library vs a cheap handheld calculator) running at high speed and in general able to handle much larger problems.
How is this not a superintelligence by any reasonable definition?
The main difference between these machines and biology is that, once an improvement is discovered, it can be downloaded very quickly and cheaply onto all the machines.
Copying is perfect and can be checksummed, unlike learning in a university, say.
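The perfect-copy point can be sketched with a checksum: a hypothetical byte string standing in for trained model weights, verified bit-for-bit after copying (the `sha256_of` helper and the weight bytes are invented for illustration).

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Pretend these bytes are the learned weights of a trained model.
original = b"weights: 0.12 -0.98 3.40"
copy = bytes(original)  # a bit-for-bit copy

# Matching digests prove the copy is identical: nothing was lost in transfer.
print(sha256_of(original) == sha256_of(copy))  # True
```

No analogue of this exists for knowledge passing between two human brains.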
This is also what enables things like deep learning across all the world's medical data for Watson. A doctor somewhere can't know all the news everywhere and discover statistics and patterns on command, while Watson can not only ingest all this info but also upload the results to all the places.
This ability to perfectly replicate a program also makes the "self preservation" aspect and the "identity" aspect of computers different than that of biological organisms. What is identity, after all, if a program can be replicated in many places at once?
What if exactly this flawed way of copying information allowed us to make discoveries?
I mean, what if it's exactly because a human transmits the information or theory with less confidence than the one who discovered or invented it that there is room to doubt it, making the next discovery more possible?
Another approach is to train several different neural networks (an ensemble) on a task, and then train a final neural network based on the average of the ensemble. [https://arxiv.org/pdf/1503.02531v1.pdf]
So you can probably replicate the useful features of flawed copying between humans.
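A minimal sketch of the ensemble-averaging idea from the linked distillation paper, assuming tiny logistic-regression "networks" on toy data rather than real deep nets (the data, hyperparameters, and helper names are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (a stand-in for a real task).
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, targets=None, steps=500, lr=0.5, seed=0):
    """Gradient-descent logistic regression; `targets` may be soft labels."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    t = y if targets is None else targets
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad = p - t                     # cross-entropy gradient w.r.t. logits
        w -= lr * X.T @ grad / len(X)
        b -= lr * grad.mean()
    return w, b

# Train an ensemble of differently initialised models ("flawed copies").
ensemble = [train_logreg(X, y, seed=s) for s in range(5)]
avg_probs = np.mean([sigmoid(X @ w + b) for w, b in ensemble], axis=0)

# Distil: train one final model on the ensemble's averaged soft outputs.
w_s, b_s = train_logreg(X, y, targets=avg_probs, seed=99)
student_probs = sigmoid(X @ w_s + b_s)
print(np.abs(student_probs - avg_probs).mean())  # small gap: student tracks ensemble
```

The variation across random initialisations plays a role loosely analogous to imperfect human-to-human transmission, and the averaging step recovers a single consensus model.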
The leverage and exploitation of advanced AI in the hands of malicious people, corporations, and states are on a much closer timeline than any "superhuman AI" could be.
So initiatives of the OpenAI kind are very important to balance things out. Somehow, I am not optimistic.
And this article basically gives a redefinition and builds its interpretation on it.
Possibly enhanced by smelling their own saliva? Just guessing
I am here to tell you that such lunacy rests on seven misconceptions. While I will freely grant that perhaps it is possible to apply a lever, yet it is human power and human power alone that moves that lever. The idea that anything but a human could do work is absurd on its face. Nobody will ever get from one town to another except on foot, or perhaps on a horse. To allow the idea that a machine could do this or any other task is as deranged as suggesting that machines will fly like birds across continents, carrying people, or that one day men will simply climb up and into the atmosphere and go and land and walk upon the moon. It is clear from first principles that raising or moving anything takes work and power: it is just as clear that nobody but man shall ever provide that power, let alone any more.
I do not have time to rewrite the above: substitute a hundred billion neurons doing chemical reactions, and add that it is clear computers can never do either the same or even less so, any more, and you will see how completely wrong the author is in every way.
Nobody but a man can ever do work, and nothing but a hundred billion neurons can or will ever think.
I blame Iain M. Banks for all the AGI kerfuffle.
That being said there is no evidence that an AI that is fundamentally different (and potentially inferior) from humans could not be much more effective at controlling human behaviors, thoughts, viewpoints or actions.
Furthermore it may be the case that an AI can sense or understand information we cannot simply because we do not have the "sensors" to understand such information. The actual "intelligence" does not need to be very high if the data is that much richer.
From another perspective: the AI may not be as intelligent but may have more control over the environment than humans (e.g. controlling the smart grid, traffic routing, etc.); because of this, its ability to influence human behavior is larger.
Either of these two cases could be deemed "greater intelligence" ... just intelligence of a different kind. We need to look at intelligence less in terms of human constructs and more in terms of "ability to manipulate human behavior" -- this would be a human-centric definition.
The only assumption required is that intelligence can be expanded just beyond human limits, which I think is a much less controversial claim.
1. Intelligence is information processing.
2. We will continue to improve our intelligent machines.
3. We are not near the peak of intelligence.
The author's first counterpoint is:
Intelligence is information processing so "smarter than humans" just means better information processing: higher rate, volume, and quality of input and output. Aren't some humans smarter than others? And isn't that a power that can be abused or used for good? We don't have to worry about it being like us and smarter; it just has to be smart enough to outsmart any human.
He then talks about generality like it's a structural component that no one has been able to locate. It's a property, and it just means transferable learning across domains. We're so young in our understanding of our own intelligence architecture that it's ridiculous to build a claim around there being no chance of implementing generality.
This statement is also incredibly weak:
> There is no other physical dimension in the universe that is infinite, as far as science knows so far...There is finite space and time.
There is evidence that matter might be able to be created out of nothing, which would mean space can go on forever. We might only be able to interact with finite space, but that isn't to say all of nature is constrained to finite dimensions.
Even still, he doesn't make sense of why we need infinite domains. You only need to reach a point where a programmer AI is marginally better at programming AIs than any human or team of humans. Then we would no longer be in the pilot's seat.
That's a claim. What is your support for this? Computers in the 1940s could process information. Were they intelligent?
> just means better information processing: higher rate, volume, and quality of input and output.
Computers have been able to perform information processing better than humans from the beginning, thus the reason for their creation. Information processors are tools to extend human intelligence.
This is, I think, what you were trying to say with your point about computers in the 1940s. Sure, they could add really fast. Did that make them intelligent? Or is intelligence, even if it is information processing, a specific kind of information processing? If so, do we even know what kind of information processing it is? I am inclined to say no, we don't. But then, I'm not sure that I buy that intelligence is (just) information processing, either.
It's the scientific definition rather than the colloquial definition.
If I put a bunch of random reactive chemicals in a large beaker - I don't know what your "information" or "processing" would be there, but there should be a lot of that happening - I still wouldn't call it intelligent.
Bubble sort on a fast computer vs. quicksort? Moving bathwater with a teaspoon very, very fast vs. using buckets?
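The sorting analogy can be made concrete by counting comparisons: on 1,000 random values, bubble sort performs roughly half a million while a simple quicksort needs on the order of fifteen thousand, so no hardware speedup rescues the worse algorithm at scale (this toy benchmark is illustrative, not from the comment):

```python
import random

def bubble_sort_comparisons(a):
    """Plain bubble sort; returns (sorted list, comparison count). O(n^2)."""
    a, count = list(a), 0
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, count

def quicksort_comparisons(a):
    """Simple quicksort; returns (sorted list, comparison count). O(n log n) average."""
    count = 0
    def qs(xs):
        nonlocal count
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        count += len(rest)
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return qs(left) + [pivot] + qs(right)
    return qs(list(a)), count

random.seed(1)
data = [random.random() for _ in range(1000)]
sorted_b, bubble_ops = bubble_sort_comparisons(data)
sorted_q, quick_ops = quicksort_comparisons(data)
print(bubble_ops, quick_ops)  # roughly 500,000 vs roughly 15,000
```

The gap widens as n grows, which is the point: a better algorithm beats a faster substrate.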
We've already achieved super intelligence. It's us empowered by our organizations and technology.
You can quibble about what an AI is; if you draw a box around human civilization and observe its leverage and rate of change, well, the evidence is that we are riding the superhuman takeoff.
His central argument seems to be that intelligence is not a single thing, and although he doesn't say it directly, I think he doesn't believe in IQ.
He's committing the same kind of fallacy committed by certain radical ideologues, which basically goes something like: since you cannot define something with 100% accuracy, any statement about the thing is equally invalid.
We don't have to engage in this kind of meaningless argument about semantics.
There are clear and easy to understand examples of scenarios where super AIs can cause harm to human societies that speakers like Sam Harris have articulated pretty well.
1. The claims by economists that AI-- even if it's not "strong AI"-- will put lots of people out of a job with potentially severe societal/economic repercussions
2. The claims by the Bostrom/Yudkowsky/etc. crowd that an AI intelligence explosion will cause the extinction of humanity
Without saying anything about the plausibility or lack thereof of either 1 or 2, I think we can all agree that they are very different claims and need to be analyzed separately. Right from the very first sentence the author seems to muddle the two, so I don't think there's much of cogent analysis in here.
If you read past the first sentence, it's quite obvious that the article is mostly concerned with claims of your type 2). Also I'm not sure why it would be so important to carefully separate the two categories of claims - they both rely on the assumption that AI can perform a wide variety of cognitive tasks better and cheaper than humans, which is addressed at length and cogently in the article. Also I believe that the economists you mention are concerned more about robotics than about AI per se, so I'm not even sure why you've brought in that point.
Maybe someone can correct me if I'm wrong here, but I have a hard time understanding what /any/ "utility function" would be, that the super-AI people talk about. It can't be a passive deep learning network that parses information and gives an output; it has to be some kind of complex perception/action loop of many neural nets and actuators in the real world that somehow leads to intelligent, self-improving behavior? I guess you could make a deep learning controller for self-driving cars, say, and if an input to many cars is wrong, all the cars crash and create a big cascading mess of wrong input values, but that kind of accident is a far cry from an intelligent chain of events where every link in the chain is an intelligent decision but the ultimate goal is bad.
And, do we even know any way to chain many deep learning networks together that accurately give correct output values, that we then can hook up to a controller to give a utility function, which can then lead to a cascade of intelligent decisions across domains?
That was my point: AI has been conflated with the deep learning gains of the past few years, even though the hard part, how to coordinate all those inputs and also give the right commands to actuators, is still nowhere near solved, much less a completely virtual superintelligence that has its own "virtual" goals and virtual simulations of both its inputs and potential actuators in the real world. That would be akin to a kind of intelligence algorithm, not deep learning vector algorithms that reveal structures in data.
My guess is that paperclipping the world takes magnitudes more intelligence than understanding what a human means when saying something ambiguous.
For example, maybe you inadvertently programmed it to "be totally literally 100% certain that you've completed the task I told you to do". Then from its perspective, there's always a tiny, tiny chance that its sensors are being fooled or malfunctioning, so it can't be literally 100% certain that it's ever made a paperclip successfully, so by your programming, it should keep making more paperclips. This is independent of whether or not you wanted it to do that: it's a result of what you programmed it to do and what you told it to do.
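A toy sketch of that failure mode, using exact rational arithmetic so the agent's certainty provably never reaches 1 (the agent, its sensor-error rate, and the step cap are all hypothetical):

```python
from fractions import Fraction

def run_agent(max_steps):
    """Hypothetical agent told: 'be literally 100% certain you made a paperclip.'
    Each sensor check is right with probability 999/1000, so its certainty
    climbs toward 1 but (in exact arithmetic) never reaches it."""
    miss = Fraction(1, 1000)     # chance that every reading so far was wrong
    certainty = Fraction(0)
    clips = 0
    while certainty < 1 and clips < max_steps:
        clips += 1               # make (and re-verify) another paperclip
        certainty = 1 - miss ** clips
    return clips, certainty

clips, certainty = run_agent(1000)
print(clips, certainty < 1)  # 1000 True: it used every step and still isn't certain
```

The halting condition, not any malice, is what keeps it producing paperclips forever.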
We also know that paperclipping the world will get in the way of other goals, like going to the show or making money.
What is "ridiculous" about continuing a task because we're not certain that it's done yet? It's only your human moral system saying that. Just because humans usually don't value things enough to pursue them to the exclusion of all else…
It understands some to mean more than one and less than infinity. In fact, some means less than "a lot". The meaning of a lot depends on the context, which happens to be paperclips for me.
What is "some" paperclips for me? It depends on how many papers I might need to clip (or whatever use I might have for paperclips). My super intelligent assistant would be able to work out a good estimate.
After having an estimate, it can go make me "some" paperclips, and then stop somewhere short of paper clipping the entire world.
Alternatively, it could just ask me how many "some" means.
What an agent considers to be "good" is orthogonal to how intelligent that agent is. An agent of arbitrary intelligence can have arbitrary goals; the goals of an intelligent agent need not in principle look anything like those of a human. The only reason a superintelligent AI's goals would look like those of a human is because the humans very, very carefully programmed them into the AI somehow. Very, very careful programming is not a feature of how humans currently approach AI research.
I don't pretend to have any sort of expertise in these sorts of discussions, so I thought I would throw out some easily wikipedia'd terms that seem to back your thoughts regarding convergence vs. exponentiation.
Of course, the obvious solution is: "make the objective function equivalent to 'do what I want you to do'", but the problem is we might not know how to encode that without help from a super-AI.
Doesn't matter how determined the machine is to make them, the rest of the world won't exactly lay down and let it. Nor would its goals often be possible with the amount of resources available in its surroundings.
It's basically the cat problems from this tongue-in-cheek article:
All he is doing is trying to convince us all that it is not possible to create AI.
Hopefully nobody is convinced by this post not to try to create a superhuman AI. Most of us will fail but at least one will succeed. I don't think it is any exaggeration to say that this will probably be our last great invention, for good or for bad. Of course, I may just be biased given my own interests in AI.
Basically every point he makes in this post is just fundamentally wrong in one way or another. He clearly has no understanding whatsoever of what he's talking about, on the technical, biological, or psychological sides. He's just saying things that seem true to him, with zero context or understanding of any of the issues involved.
Multi-dimensional vectors have magnitudes just like scalars do. When will people get over this whole "intelligence is not one thing, therefore you can't say anything at all about it" nonsense?
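For instance, even if "intelligence" were a vector of distinct capacities rather than a scalar, overall magnitudes would still be comparable via a norm (the capability dimensions and numbers below are invented purely for illustration):

```python
import math

# Hypothetical capability vectors: e.g. (language, planning, perception).
human = [9.0, 7.5, 8.0]
machine = [4.0, 12.0, 3.0]

def magnitude(v):
    """Euclidean (L2) norm of a capability vector."""
    return math.sqrt(sum(x * x for x in v))

# Different profiles, yet the overall magnitudes remain comparable.
print(magnitude(human) > magnitude(machine))  # True
```

"Many dimensions" is an argument for choosing the comparison carefully, not for declaring comparison impossible.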
> Humans do not have general purpose minds, and neither will AIs.
False absolutism. Human minds are certainly more general purpose than any current AI. When an AI has a mind that is more general purpose than ours, I think it's fair to call it a general purpose AI.
> Emulation of human thinking in other media will be constrained by cost.
According to who? The only person that could answer that would be someone who already knew how to emulate the human brain. Although, come to think of it, some 50% of the human population are able to create new brains, at quite little cost. So it is empirically possible to synthesize new brains extremely cheaply.
> Dimensions of intelligence are not infinite.
Lol, according to who? What does this even mean?
> Intelligences are only one factor in progress
Sure. So what?
There are plenty of perfectly legitimate, well thought out, informed critiques of AI fear mongering. This, however, is not one of them. This is garbage.
Can you give an example of an intelligence that is more general purpose than human intelligence?