To Keep Up with AI We Need Bigger Brains (wsj.com)
19 points by jkuria on Oct 29, 2017 | 16 comments



I think the first thing we should do is to understand how superlative human intellectual performance as it exists today even works, and try to make it commodity knowledge taught in everyday classrooms.

In a typical physics, mathematics, or engineering curriculum, we study many fields: mechanics, electrodynamics, calculus, signal processing, control theory, and so on. But we don't learn about the human cognitive processes that went into the discovery of those fields. At best we get a caricatured view of the subject's history. Thus, you might learn that Galois came up with the idea of groups. But what was he thinking when he did that? What challenges was he up against that inspired the idea? In short, how did he think, and how might we emulate him on our own problems? Ditto for Maxwell, Riemann, etc.

In all the courses I took, not one explicitly taught how the mind solves challenging problems or arrives at creative solutions. This knowledge is supposed to be absorbed indirectly by learning the fields themselves, which I think is very inefficient. We need to make the dissemination of this wisdom more systematic.

A parallel is the field of strength training, in which specific exercises, equipment, and nutritional methods were discovered over a relatively brief period. The professional bodybuilders of today are giants compared with their peers of a century ago. Something similar needs to happen for cognitive skills as well.


Bodybuilders today are on mass quantities of steroids, insulin, human growth hormone, etc. The old strongmen (Görner, Sandow, etc.) compare favorably with pretty much anyone not on PEDs today. In general they had smaller chests, but I think that was an aesthetic choice; I've heard large chests were considered somewhat effeminate at the time.


Not only bigger brains. Human I/O, as it stands, is limited.

Communicating one word at a time, one picture at a time or one sound at a time is slow and has significant overhead.

AI doesn't need to transfer information one word, picture, or sound at a time. It can potentially pass along arbitrary representations, no matter how large, at massive speed.

Then, going back to humans: a human expert takes decades to train. Once trained, transferring that expertise means starting the process over again. Not so with AI. Take the state of an AI, serialize it, instantiate it thousands of times: you've got thousands of experts.
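
To make the serialize-and-clone point concrete, here is a minimal Python sketch; the "expert" is just a toy stand-in (a dict of made-up parameters), not any real model:

    import pickle

    # Toy stand-in for a trained model: a dict of learned parameters.
    expert = {"weights": [0.12, -0.3, 0.7], "bias": 0.05}

    blob = pickle.dumps(expert)                         # serialize the expert once
    clones = [pickle.loads(blob) for _ in range(1000)]  # instantiate 1000 copies

    # Each clone is an independent, fully "trained" copy.
    assert clones[0] == expert and clones[0] is not expert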


'One word at a time' seems almost a little unfair. Many single words convey things that are quite complex.

Point still remains, but it still seems a wee bit unfair.


> Communicating one word at a time, one picture at a time or one sound at a time is slow and has significant overhead.

We don't. We have massive parallelism in how we communicate. The aphorism "a picture is worth a thousand words" is based on that.

All in all, AIs beat humans on very specific problems, but they're worthless for most tasks - and an AI that's specialised to one task is very rarely suitable for another. You can get a human to do quite complex tasks and use initiative with only the simplest of instructions. Don't even need to be at expert-level. Where is the AI that can wash, dry, and fold the clothes, change the baby, make lunch and dinner, fix the leaky sink, and bring the dog in at night? Yet that's a very humdrum day for a human.


A man in the middle ages imagines the future as a time with more castles, more knights, more kings. What you described is similar: an extrapolation of the current state of AI carried on forever, despite the overwhelming evidence that, in just a few decades, AI has progressed far faster than evolution ever did. This discussion is not about the current state of AI.

Despite the skepticism, strong AI will happen, maybe not in your lifetime but close enough. After that, given that our fitness function includes dumb variables such as having nice hair, my money is on the AI. Given a human and a strong AI (an AI with human-level intelligence), the AI would prevail.

E.g.: when a human reads text, the human has to reconstruct the semantic meaning from the text. An AI can transfer semantic meaning directly. It can even share trained neural ensembles, or any computational device, to perform a specific task.

And by the way, there is already a basic form of narrow intelligence built into your washing machine; it's called fuzzy logic. So technically you already have a form of AI washing your clothes.
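
For the curious, a toy Python sketch of the fuzzy-control idea, with made-up membership functions and a single made-up rule (real washing-machine controllers are tuned very differently):

    # Toy fuzzy controller: made-up membership functions, illustrative only.
    def heaviness(load_kg):
        # Degree (0..1) to which the load counts as "heavy".
        return max(0.0, min(1.0, (load_kg - 2.0) / 4.0))

    def dirtiness(turbidity):
        # Degree (0..1) to which the wash water counts as "dirty".
        return max(0.0, min(1.0, turbidity / 100.0))

    def wash_minutes(load_kg, turbidity):
        # One fuzzy rule: the heavier OR dirtier, the longer the cycle.
        strength = max(heaviness(load_kg), dirtiness(turbidity))
        return 20 + 40 * strength  # defuzzify onto a 20..60 minute range

    print(wash_minutes(5.0, 70.0))  # -> 50.0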


Hi partycoder. Your (quote) "man in the middle ages" actually envisioned the future as biblical apocalypse, coming real soon now, not more kings and castles. And your contemporary (quote) "human" somehow also faces an apocalypse of sorts, unless he personally does something to prevent (or hasten) it. May I put it to you that you are much more like a middle-ages man than you think? Apocalypses are convenient exonerators.


It seems you didn't get the point of the analogy. My point is that the outcomes of paradigm shifts are hard to predict. It would have been hard for a person in the middle ages to predict the Renaissance and everything that followed.


> "The AI can transfer semantic meaning directly"

What does this mean? An AI that reads text still has to decipher it. Show it a newspaper and it still has to analyse what it sees and convert it into something meaningful; it doesn't have mystical powers that absorb the information.

Even if you're talking wire protocols, it still needs to decipher the incoming data. This series of 1s and 0s makes these ASCII characters. This series of ASCII characters makes this sentence. This sentence means 'foo'.
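
A minimal Python sketch of that decoding chain (the byte values here are arbitrary, chosen for illustration):

    raw = bytes([0x66, 0x6f, 0x6f])  # the 1s and 0s arriving on the wire
    text = raw.decode("ascii")       # bytes -> ASCII characters
    print(text)                      # "foo" -- the meaning still needs interpreting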

> The AI can even share trained neural ensembles or any computation device to perform a specific task

It was a massive coup when they designed a robot that could fold laundry. Not do the laundry, just fold it. How many other AIs has that been shared with?

I can only read the first paragraph of the article due to the paywall, but it seems to be making the case that 'computers are faster than humans', and they are... for certain tasks, particularly ones with clear-cut rules. Machine learning and neural networks can do a lot of 'fuzzy' thinking, but they also make a lot of errors that fans just gloss over, errors a human would never make (see the Google Photos image of two black people that got auto-tagged as 'gorillas' for one that made the news). Google did machine learning with their sketch program and it could tell if you drew a bicycle or a car... if you drew them in the stereotyped, cartoonish, side-on way. It had no idea what was going on if you drew them from a non-stereotyped angle. Fancy pattern-matching just doesn't compare to human understanding on that task.

> A man in the middle ages thinks of the future of a time with more castles, more knights, more kings.

A man in the middle ages also thought there were devils in the cemetery, that unicorns were a thing, and that alchemy was a way to get rich quick if you could just find out how... and all of these things turned out to be bupkis. He was never in danger of being abducted by devils when he took a shortcut through the church's side yard.

> Given a human and a strong AI (human level intelligence AI), the AI would prevail.

Until the AI has full control of its supply chain, from the tantalum mines in Africa, to the chip fabs in China, to the power plants in <locale>, it will not prevail. Parts break; it needs to replace and repair them. It needs 'food' (power) to think. The only future where we seriously have to worry about a strong AI is one where it is controlled by (or working with) a hostile party, i.e. it's humans vs. humans + AI, not humans vs. AI.

Or, if you mean on a personal level, it depends where we are. If it controls the environment (doors, etc.), then it can just imprison me, and the strength of the AI (or whether it is an AI at all) is irrelevant. But if it can't do that, well... I can just go and flip the breakers, and wait for the backup power to drain.

Or, if you want an android-shaped AI, keep in mind that AlphaGo ran on 1,200 CPUs and 200 GPUs, and all it could do was play Go (albeit rather well). That's the kind of computational power required to do things 'the computer way', with commensurate power and cooling requirements. Moore's Law has already buckled, and chips are already being designed down to the atomic scale; there isn't much more miniaturisation to be done. Trying to fit a 'super AI' capable of all the things fans want into the form factor of a human just ain't going to happen (kinda like alchemy).


> An AI that reads text still has to decipher it.

Actually, no. E.g.: a word embedding, a document corpus, etc.
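
To illustrate the word-embedding point, a toy Python sketch with made-up 4-dimensional vectors (real embeddings are learned from a corpus and have hundreds of dimensions):

    import numpy as np

    # Made-up embedding vectors; real ones come from training on a corpus.
    king  = np.array([0.8, 0.3, 0.1, 0.9])
    queen = np.array([0.7, 0.4, 0.2, 0.9])

    def cosine(a, b):
        # Cosine similarity: values near 1.0 mean similar "meaning".
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Two systems that share these vectors exchange "meaning" directly,
    # with no text to parse on the receiving end.
    print(cosine(king, queen))  # close to 1.0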

> It was a massive coup when they designed a robot that could fold laundry.

It took humans millions of years to evolve. Our closest evolutionary relative, the chimpanzee, may have issues understanding how to fold laundry.

In contrast, computers as we know them appeared no more than 80 years ago. Look at how fast things have moved in such a short period of time, especially the last 20 years.

> Moore's Law has already buckled

Moore's Law applies to how many transistors you can pack into an integrated circuit. Prior to the invention of the transistor, the best we had was the vacuum tube. That limited what we could do.

Memristors could potentially cause a leap like the one we experienced when we moved from vacuum tubes to transistors. If that doesn't happen, then something else may, e.g. more efficient algorithms or architectures.

...

Finally... when the Wright brothers flew their clunky first airplane in 1903, one that could only stay up for less than a minute at low altitude, nobody thought that a few decades later, in 1947, you would have a plane flying faster than the speed of sound.

Right now you are like the people in 1903 saying "haha! Look at those dumb guys with this ridiculous machine!".


More sinisterly, armies used planes to drop bombs on cities and shoot soldiers on battlefields in Europe as early as 1914.

https://en.m.wikipedia.org/wiki/Strategic_bombing_during_Wor...


There is a subtle difference between knowledge and intelligence.


I can't read the full story because of the paywall, so I can't be sure of the point.

From the title, it sounds about as sensible as saying we need bigger legs to keep up with cars. However, there are fundamental differences between moving around and thinking. I'd be surprised to get a bigger brain in time to be on par with AIs. I'd be more surprised to have the time, or the will, to go through all the training that would make me an expert in several fields. Yet we already have specialized AIs in many fields. An example: I've played a few thousand games of Go; a typical pro has played tens of thousands; AlphaGo, millions. It had nothing else to do. I'll never match that, even with a bigger brain.


I assume it's some superficial point about robot AI being too advanced for humans to debug.



Or, you know, people could stand up for themselves and stop working on AI.



