
This is completely silly. Superhuman AI is inevitable because there is nothing magical about human brains. The human brain is only the very first intelligence to evolve. We are probably very far away from the peak of what is possible.

Human brains are incredibly small, a few pounds of matter. Any bigger and your mother would be killed giving birth, or you would take 10x as long to grow up. They are incredibly energy constrained, using only about 20 watts of power, because any more and you would starve to death. They are incredibly slow and energy inefficient; communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy. And they are not at all compact - neurons are enormous and filled with tons of useless junk that isn't used for computation. Compare that to our transistor technology, which is approaching the limits of physics and is built at an atom-by-atom scale.

That's just the hardware specs of the human computer. The software is hardly better. There are just more unknowns because we haven't finished reverse engineering it (but we are getting there, slowly).

But beyond that, the human brain evolved to be good at surviving on the savannahs of Africa. We didn't evolve to be good at mathematics, or science, or engineering. It's really remarkable that our brains are capable of such things at all! We have terrible weaknesses in these areas, for instance a very limited working memory. We don't realize how bad we are, simply because we have nothing else to compare ourselves to.

Consider how even today, relatively primitive AIs are vastly superior to humans at games like chess. Human brains didn't evolve to be good at chess either, after all. Even simple algorithms designed specifically for this game easily mop up humans, and play at a level of strategy far beyond what even the best human players can comprehend.

Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.




>They are incredibly slow and energy inefficient;

Human brains are energy inefficient? Well, that's a first ;)

"In 1990, the legendary Caltech engineer Carver Mead correctly predicted that our present-day computers would use ten million times more energy for a single instruction than the brain uses for a synaptic activation."

"Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"

[1] http://www.newyorker.com/tech/elements/a-computer-to-rival-t...


In terms of energy consumed for individual computations, yes. Neurons use chemical reactions to communicate and this is terribly inefficient. Transistors use very small amounts of electricity in comparison.

The main difference is that computer technology is designed to be very general purpose, while the brain is more like an ASIC that's hardwired to run one specific algorithm. GPUs also compute with 16 or more bits of precision, when real neurons are very low precision. There are some other differences, like how real brains are incredibly sparse: most of the synapses at any given time are dormant and not using much energy, and neurons are very sparsely connected to each other, while our current NNs are very dense and need to spend energy computing every single connection each cycle.
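To put a toy number on that density point, here's a quick Python comparison of the multiply-accumulates a dense layer does versus an event-driven sparse one (the layer size and the 2% activity figure are illustrative assumptions, not measurements):

    neurons = 10_000
    dense_macs = neurons * neurons       # every connection computed every cycle
    active = int(neurons * 0.02)         # assume only ~2% of units fire at a time
    sparse_macs = active * neurons       # only active units' outputs propagate
    print(dense_macs // sparse_macs)     # -> 50, i.e. ~50x less work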


It seems premature to make judgements about efficiency when there is so much we do not understand about brain function and consciousness. When you can replicate all brain function, let's compare efficiency. Comparing to an ASIC reveals the source of your error rather than defending your position.


For that to be a fair comparison, wouldn't you need to look at all the energy consumed by the human brain over the many hours it took them to become a Go champion?


I think that's a fair argument, but from the quote above

> "Last March, AlphaGo, a program created by Google DeepMind, was able to beat a world-champion human player of Go, but only after it had trained on a database of thirty million moves, running on approximately a million watts. (Its opponent’s brain, by contrast, would have been about fifty thousand times more energy-thrifty, consuming twenty watts.)"

Let's say AlphaGo trained for a year; that would be 1 MWyr of energy consumed. And let's assume that Lee Se-dol's brain consumed 20 W over the 34 years of his life doing nothing but working on Go; that would be 680 Wyr, still a factor of ~1500 smaller.
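A quick sanity check of that arithmetic in Python (all the inputs are this comment's assumptions, not measured figures):

    alphago_watts, alphago_years = 1e6, 1   # 1 MW for an assumed year of training
    brain_watts, brain_years = 20, 34       # Lee Se-dol's whole life spent on Go
    ratio = (alphago_watts * alphago_years) / (brain_watts * brain_years)
    print(round(ratio))                     # -> 1471, so a factor of ~1500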


Totally, I'm sure it's correct, and even if you were to bring the comparison into line, the outcome is still "computer is watt hungry". The point is that the original statement, while correct, doesn't really say anything useful.


How would you know what amount of energy the human brain spent in learning to play Go, specifically? In the same time it was doing that, the human brain was also learning to do, and doing, a whole bunch of other things that AlphaGo was never even trained on- from moving limbs and controlling bodily functions to using language and recognising faces and so on. How would you isolate the amount of energy needed for training in Go, specifically?

I mean, in principle, if you had two numbers, "human energy consumption from learning Go" and "AlphaGo energy consumption from learning Go", you could compare them. But in practice there's no way to come up with such numbers, so what's the point of comparing apples and orangutans?


That's not really the point; it's more that it was originally not an apples-to-apples comparison and therefore doesn't really tell us anything. I have no doubt that the statement is correct; it's whether or not that statement has any meaning. As another comment pointed out, even in a more accurate (although completely theoretical) comparison of the hours spent, the computer still uses a few orders of magnitude more energy.


AlphaGo took several months of self play on a large cluster, so training probably consumed many times more energy than a human in a lifetime.


The gain is in being able to clone the bot perfectly. Once trained, you can make many of them. Also, if you look at what happened in chess, the lessons learned from the large machines were absorbed, and the result is that your smartphone can now outclass the human world champion.

You can expect a similar thing with Go at some point.


Same goes for the bot, then. A back-of-the-envelope calculation suggests Lee's brain consumed as much energy in an 80-year lifetime as AlphaGo did in half a day.
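That roughly checks out with the same assumed figures as above (a 20 W brain, a 1 MW cluster):

    brain_mwh = 20 * 80 * 8766 / 1e6    # 20 W for 80 years ~= 14 MWh (8766 h/yr)
    cluster_mwh = 1e6 * 12 / 1e6        # 1 MW for half a day (12 h) = 12 MWh
    print(brain_mwh, cluster_mwh)       # -> 14.0256 12.0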


Not trying to say it isn't a correct statement, or that the outcome would be different if you lined everything up properly, only that the original statement doesn't really say anything meaningful.


Not surprising that a computer expends more energy to perform a task that we [previously] thought required human-like intelligence.

I'm sure any dollar store calculator spends way less energy performing long division than the average human.


For one, a normal human can do long division as fast as a calculator, and can handle numbers that will bork many calculators. (edit - look at human calculators, and the era before calculators were commonplace. Even now, elders I know can eyeball numbers and calculate percentages, factorials, and ratios.)

And for another, calculation != AI, far from it actually.


One, what normal human being can perform long division as fast as a calculator?

12/43523523452. Go.

Two, AI is applied statistics. What do you think AI is?


You could, with practice

>I'm sure any dollar store calculator spends way less energy performing long division than the average human

That's the comment.

A calculator is a one role device, with exactly specified rules.

Similarly, with training, you can too. You don't need to be special, other than being practiced, which is a fair requirement for a human being.

Here is a human being who could outperform it: https://en.wikipedia.org/wiki/Shakuntala_Devi

>In 1977, at Southern Methodist University, she gave the 23rd root of a 201-digit number in 50 seconds.[1][4] Her answer—546,372,891—was confirmed by calculations done at the US Bureau of Standards by the UNIVAC 1101 computer, for which a special program had to be written to perform such a large calculation.[10]

She could easily outperform calculators because she never needed time to key in the commands (she only needed to hear the problem to solve it).

If we exclude that restriction, so the commands magically float into the calculator, and the problem is small enough to match the calculator's limits, then yes, if those arbitrary conditions are met the calculator can outperform her brain.

Which is precisely the type of “cows are round spheres” thinking that’s being decried in the article.

People can and regularly do out-perform calculators in speed, energy and complexity of computation.

Do note that calculators weren't allowed as exam tools in a lot of countries until a decade or so ago. Students learnt mental math techniques that have been known since ancient times (think Greece).

For a human brain the answer isn’t even calculation, it becomes pattern recognition. The square root of 25 is 5, which takes about the same neural load as it takes to recognize a letter.

The calculation you provided is harder, but that's a function of lack of training/practice, not complexity.

----

AI is not in the realm of what a calculator can pull off, is what I meant to say by the compute part.

edit: I tried your computation on a store calculator; it's beyond its ability to calculate (0.0000000027).


Your example is from 1977; we've had 40 years of Moore's law since then. In the time it takes for you to recognise that you're even looking at a number (~0.08 seconds), the cheapest computer you can buy (the $5 Raspberry Pi Zero) can do around 1.92 billion normal floating-point maths operations. Sure, 201-digit numbers are a little slower: on my laptop, in Python, I can only do that particular calculation just under one hundred million times in the fifty seconds it took her to do it once.
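For the curious, a minimal sketch of that timing experiment (my reconstruction, not the commenter's actual code; note that a float root is only an approximation, unlike Devi's exact answer):

    import timeit

    n = 546372891 ** 23          # the 201-digit number from the Devi anecdote
    f = float(n)                 # convert once; a float keeps only ~15 digits
    reps = 1_000_000
    t = timeit.timeit(lambda: f ** (1 / 23), number=reps)
    print(f"{reps / t:,.0f} approximate 23rd roots per second")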

But you're right to say calculators are single-purpose devices and that I'm magically inserting the question.

So I downloaded sklearn, which contains a set of labelled hand-written digits.

It takes about 0.17-0.2 seconds for my laptop to learn to read numbers, from scratch, and thereafter it can read digits at a rate of about 8,200 per second.
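Something along these lines reproduces that experiment; the estimator choice is my assumption, since I didn't note which one was used:

    import time
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)   # 1,797 labelled 8x8 digit images
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    t0 = time.perf_counter()
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(f"trained in {time.perf_counter() - t0:.2f}s")

    t0 = time.perf_counter()
    clf.predict(Xte)
    print(f"~{len(Xte) / (time.perf_counter() - t0):,.0f} digits read per second")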

For reference, "a blink of an eye" is 0.1-0.4 seconds depending on who you ask.

Dollar store calculator? I'd never have said that myself, because I know calculators are rubbish. But in the context of "AI: Will it ever beat humans?", hell yes, it will, and in many cases it already does.


There is the practice of anzan in China and Japan, which shifts the burden of calculation into the visual cortex by visualizing a mental abacus. Advanced practitioners can thus rapidly and correctly evaluate calculations like the one you have given. As you can see in the video below, Tsujikubo's speed at division appears to be limited by her ability to physically write out the answer.

https://youtu.be/-9T3P--ubQc?t=6m10s

Granted, these are people at the top of their game, so may not qualify as "normal" per your argument, but this is to illustrate that the limits of numerical calculation in "normal" people may be the algorithms we are taught rather than the computational capacity of our brains.


And it can only play Go... what a waste for such a big AI. AI can beat humans at some specific tasks, but they are designed by us, so they are wrong and bad, especially when all you need to do to kill them is unplug the battery. Too easy. Come on, AI, do something more than that...


And it can only play well on a 19x19 board, if I got that right.


> Superhuman AI is inevitable because there is nothing magical about human brains.

I'm of the opinion that that's an assumption being made. An article I read at one point supposed, as an example, that our brains could have some link to quantum mechanics that causes self-awareness, one we are not aware of because we have not yet discovered it. The point was not so much the mechanism, but that we can't yet rule out such a mechanism existing that our brain could have but our silicon currently doesn't, because we have no real idea what consciousness is, and we've not constructed an AI that proves no such mechanism exists. If consciousness is intelligence, this could matter more (but this could also be another assumption).

(Unfortunately, I cannot find the article.)

(The article also went on to note that we can copy computer programs, but not brains, and wondered if perhaps there is some relation to the previous problem. It was also advocating that we need a testable hypothesis around consciousness; answering the question "is it possible to clone a brain?" either way would be an extraordinary discovery. I found it interesting.)

> Imagine an AI brain that is optimized for the purpose of mathematics, or computer programming, science, or engineering. Or at doing AI research... Imagine how much better it could be at these tasks than humans. It could quickly solve problems that would take the greatest human minds generations. It could manage levels of complexity that would drive humans crazy.

I'm just wishing for an automatic soap dispenser that can reliably sense my hand.


That's indeed an assumption. But so far there isn't much evidence pointing at the quantum solution, and there are theories of how self-awareness can arise that don't require magic. It's reasonable to take the less magical assumptions until presented with an (appropriate amount of) evidence to the contrary.

Quantum-based self-awareness is a cop-out IMO, an open way to claim that it comes from God. Well, if that's true and it's beyond our reach, then we'll never build a self-aware AI ourselves. But assuming something is impossible doesn't help us move forward...


> I'm of the opinion that that's an assumption being made.

Not so much an assumption as a reasonable guess. Is it logically possible that there's something magical about human brains? Yes.

But there are countless examples of things we thought were magical historically, and, as far as I know, exactly zero that we've ended up concluding actually were magical, and a large number that we're pretty confident aren't at this point.

The smart money here is on "no, the human brain isn't magical."


I think there may be a problem with the word "magical" in this context, just like there is a problem with nature-worshipers talking of "supernatural powers".

If we've made a science out of it, no matter how advanced it is, it's not magic; e.g. if telekinesis were possible, it would have a scientific explanation and be a natural power. That bears some similarity to the way things are called "artificial intelligence" while they're impossible, only to be reduced to "trivial" when someone makes a machine do them: chess, Go, having a large vocabulary or knowing many languages, being good at arithmetic or being able to do calculus at all, writing music or news stories, trading on the stock market. All things that used to be synonymous with genius, yet all done by machines we don't call intelligent.


>An article I read at one point supposed that our brains could have, as a way of example, a linking to quantum mechanics that we are not aware of due to not having yet discovered it that causes self-awareness.

That's not an article but a whole book:

https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind


Extremely close! It was actually posted on today's thread about consciousness[1], and what I was thinking of was [2].

[1]: https://news.ycombinator.com/item?id=14263842

[2]: http://www.scottaaronson.com/blog/?p=2756


Suppose there's some quantum effect in our brain that makes us smart (unlikely, IMO, for a variety of reasons). Why shouldn't we be able to understand this effect, and leverage it when we engineer our machines?


> neurons are enormous and filled with tons of useless junk that isn't used for computation

I just want to point out that some recent research indicates this might not actually be true. Single neurons may be more complex than we thought, since researchers managed to extract significant and complex functionality from single neurons (!). Not to mention that glia seem to be important for cognition, and not just glue as we thought... So perhaps not as wasteful and kludgy as we might think :)


I think AI fear is a bit hyped up in the Valley right now, but the fundamentals are true.

You can think of intelligence as a couple of dimensions rather than just one, as the author argues. However, I don't agree with his/her logic.

You can think of the brain as a massively parallel computer. Intelligence can be gauged by total cores, memory and CPU per core, power usage, and network bandwidth between cores. Sure, different configurations will give different strengths, but something optimized at the atomic level for performance will be faster than a mushy brain.

Now imagine a billion of these little pods placed all over the planet, with amazing bandwidth between them, akin to our internet.

Worst case: they think humans are the enemy and out think us to extinction.

Best case: studying intelligence helps us figure out how brains work. This helps us figure out how DNA nano-manufacturing works. We figure out brain-to-brain and brain-to-AI networking. Now all humans are collectively a supercomputer, able to share knowledge and abilities. Maybe this is extended so animals are also included, and Earth becomes a Pandora-like planet where every living thing is connected and they all live in a harmonious, resource-optimized world where AI tries to maximize the happiness of every living being.


No.

The hype chamber in SV is on overdrive on AI at this point.

The singularity AI concept has gone from the neat sci-fi idea it was to being treated as a serious possibility. This is absurd, the modern version of Malthus, and a community of people who pride themselves on their reason should apply a basic sniff test.

The author has it correct - human brains themselves are significantly impressive on a weight/power/capability scale.

But forget all of the intelligence debate; consider the thing that people magically ignore: emotions.

Take a look at how people who are depressed function. They have been measured as being more aware (better assessment of their surroundings, relations, and themselves) than people who were not.

At the same time, they lack motivation, or desire - there is a lack of impulsive force to move forward.

Intelligence doesn't give human beings purpose. Emotion does.

This isn't feel-good psychobabble; it's a fundamental part of good mind/body housekeeping, which a huge chunk of SV follows every day to reach "peak performance".

How are you going to create a general purpose AI which has any motive force?

It's actually MORE LIKELY that if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.


> Intelligence doesn't give human beings purpose. Emotion does.

So would an explicit goal function, which is how we already give "purpose" to algorithms.

> How are you going to create a general purpose AI which has any motive force?

In any one of countless ways to do that. What makes you think that emotions are necessary to create a powerful optimization process?

The "motive force" is mostly a solved problem for now, we can code that explicitly (the issue is with figuring out a right goal and how to write it down). AI development is mostly about the way it works, not why it should do the work.

--

No offense meant to you personally, but I find that most of the comments about "overdriven AI-danger focus in SV" reveal that their authors don't have a fucking clue what the issue is about, and never spent time actually reading up on the reasoning behind the AI X-risk potential.

I'll give an ultra-compressed super-TLDR of that reasoning for the benefit of future conversations. It goes like this:

- intelligence is a super-strong optimization process; it doesn't necessarily have to look the way humans think (humans generally suck at reasoning, which is well established; see: cognitive biases, probability theory, decision theory, etc.)

- intelligence is an independent factor from values/goals, a mind can have any combination of the two - i.e. just because it's smart, it doesn't mean it will develop the same morality humans do, or any morality whatsoever; see https://wiki.lesswrong.com/wiki/Orthogonality_thesis

- combining the two, the danger of super-human AI is not something that's hostile to us - it's something that's indifferent to us, and is more powerful than us, the same way we don't give a second thought to e.g. ants


> Take a look at how people who are depressed function. They have been measured as being more aware (better assessment of their surroundings, relations, and themselves) than people who were not.

That's interesting. Do you have a reference for that?



>It's actually MORE LIKELY that if a GP AI is created, it's just going to sit on its ass and pretend not to understand instructions.

This is me if I were an AI.


The fundamentals are still missing.

1. Bootstrap. In order to really take off, AI needs to be able to build better AI.

2. AI needs to be adaptable. Any human Go player can play on a 19x20 board. AlphaGo (a program that beats top professionals 60:0 on a 19x19 board) could not even make a legal move.

3. AI should be resilient. Right now it runs on hardware made by humans and blindly relies on that hardware working perfectly all the time.


Would individual humans still have their own consciousness, or would all of humanity be one giant, highly optimized organism that has no individual needs, desires, dreams, or fears? The concept of a human could become irrelevant, and our whole existence with it.


This is exactly the kind of blind, non-scientific belief that the article is decrying.

There is nothing magical about anything. There are, however, a bunch of unknowns, and a lot of knowns that point in directions away from singularity babble.

This is exactly the point of the article - that we aren't headed for a singularity, we are headed for a "Cambrian explosion".


> The human brain is only the very first intelligence to evolve.

That may not even be true; we're just not smart enough to make the call one way or the other. Dolphins come to mind, as well as several other creatures (squid, apes). The fact that we can't rule it out is in a way proof that maybe we're not as smart as we would like to think we are.


It is always the case that Superhuman AI proponents come up with the argument that "there is nothing magical about the human brain". Yes, everybody agrees with that, even people who oppose Superhuman AI and are sane enough not to invoke the mystical-powers argument. The problems are:

1. Understand the brain and simulate it to get intelligence. Sounds simple, right? How about simulating a nuclear reaction, which we already understand, on my computer and generating nuclear power?

2. Once you understand something, you can create it. This is a fallacy, a big one. We understand the solar system; let's create one, what's the problem?

3. Simulating a brain will give you a brain, with all the limitations and problems that we have with the human brain. Oh, then we can tweak it to make it more intelligent? Sorry, you can't do that without changing the whole model completely, in which case it will no longer be a "brain".


If you simulate a nuclear reaction you get... maybe a nice video of atoms smashing together? If you simulate a brain you get an intelligent computer program.

Same with the solar system. We can create simulated solar systems. We will probably be able to make simulated brains. What is your point?

Alright, assuming the first AI even is a simulated brain (I don't see why we couldn't invent intelligence from first principles first), we could definitely tweak it. We know of drugs like amphetamines that enhance human ability, and that's like Stone Age tech compared to what's possible if we had unlimited control over every neuron in the entire brain. Not to mention just adding more neurons and making a larger brain. This is just touching on what's possible; I could go on for ages about possible improvements.


My point regarding the simulation argument is that there is a difference between computational simulation and physical simulation.


If I simulate a map I get a map. Are minds like maps, or like nuclear reactions?


I have no idea what you are trying to say. A simulation is about "changes in the state of a system over time". I'm not sure how you could simulate a map.


I meant something like taking a physical map and representing it as bits in the memory of a computer. That's still a map; you can orient yourself with it. I'm just saying that there are things whose simulation is still a thing of the same kind. Perhaps minds are like that.

In everyday life, we treat other minds more like "maps" than like "nuclear reactions". We interact with nuclear reactions by getting radiation sickness and dying. We interact with people through more indirect channels. A simulation could be at the other end of the channel and we wouldn't know. Maybe he wouldn't know, either!

We don't get electrocuted when our interlocutor's neurons are firing; the effects on us come only through his external behaviour. It's the abstract informational flow that seems to matter.


What can a real brain do that a simulated brain can't?


Only one way to find out - create a simulated brain and you will get the answer.


If you simulate a brain you get an intelligent computer program

That sounds like a really interesting experiment. Have you published your data?


"communication in the brain is done with chemical signals that are orders of magnitude slower than electricity and use much more energy"

A nitpick: the neurons actually do use electricity to send signals, and (looking at the history of the neuronal model¹) they actually evolved to send signals along as fast as they could. The chemical component, AFAIK, is the physical implementation of memory, so it is kind of the storage class as far as operation performance goes. Overall though, I agree that our cerebral organ may not be the apogee of computational performance.

¹ As far as I remember, dinosaur neurons didn't have myelin, which for some large animals caused non-trivial sensory delays.


> The human brain is only the very first intelligence to evolve.

What other intelligence evolved, and in what sense is the human brain the first intelligence to evolve?


s/first/first smart enough to invent science/.

We've seen no other creatures on Earth being able to grow beyond using basic tools.

But this also somewhat implies (which is GP's point) that we're close to the dumbest possible mind that evolution could make that could still develop science. Or think of it as a crude MVP that ended up being a production system.


> s/first/first smart enough to invent science/.

To claim something as first, there must be others in reality to have followed it. If there is no second, there is no first. Is there a second intelligence to have invented science, or are we the only ones?


Intelligence as an optimization process works orders of magnitude faster than biological evolution. We've dominated the optimization happening on this planet. If we were to stop doing technology and wait a million years, then maybe we'd see a second species develop comparable intelligence.


Maybe, maybe not. Your opinion is based on assumptions, just like mine and everybody else's. Or at least I assume so :-/

When somebody comes up with a reasonable, detailed model of how my memories are stored (just to name an example), I shall once again start paying close attention to the whole AI field.


The assumption is that a singular intelligence (the superhuman AI) will eclipse that of many intelligences (humans plus tools). Any given intelligence might be more capable than me, but it's constrained by all the other intelligences in the environment.



