
Milking neural networks completely is pretty much AI as depicted in the movies. If we can milk them completely, there probably isn't a need for the next epiphany.



You're basically saying that there's no task (including passing the Turing test, programming web apps, etc.) which requires intelligence and is best tackled either with something other than a neural network or with an NN combined with something else. I think that's a pretty bold statement, one that's really hard to back up with anything but a hunch.


Our current assertion is that neural networks basically replicate the brain's function, so under this paradigm "milking neural networks" should match or exceed human general-purpose intelligence.

I believe hmate9 is correct. If this paradigm is exploited to the full then, unless we've missed something fundamental about how the brain works, we won't need to bother inventing the next paradigm (of which there will no doubt be many), because one of the results of the current paradigm will be either an AGI (Artificial General Intelligence) that runs faster and better than human intelligence or, more likely, an ASI (Artificial Super Intelligence). Either of those will be more capable than we are at inventing the next paradigm.


No deep learning researcher believes neural networks "basically replicate" the brain's function. Neural nets do a ton of things brains don't do (nobody believes the brain is doing stochastic gradient descent on a million data points in mini-batches). Brains also do a billion things that neural nets don't do. I've never even taken a neuroscience class, and I can still think of the following: synaptic gaps, neurotransmitters, the concept of time, theta oscillations, all-or-nothing action potentials, Schwann cells.
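To be concrete about what "stochastic gradient descent on mini-batches" means, here's a minimal sketch (plain NumPy on a made-up linear model; the data and hyperparameters are invented for illustration, and a real network just has a more complicated gradient):

    import numpy as np

    # Minimal mini-batch SGD sketch on a linear least-squares model.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))      # 1000 examples, 10 features
    y = X @ rng.normal(size=10)          # synthetic targets
    w = np.zeros(10)                     # parameters being fitted
    lr, batch_size = 0.01, 32

    for epoch in range(10):
        order = rng.permutation(len(X))
        for start in range(0, len(X), batch_size):
            idx = order[start:start + batch_size]
            xb, yb = X[idx], y[idx]
            grad = 2 * xb.T @ (xb @ w - yb) / len(idx)  # gradient of mean squared error
            w -= lr * grad                              # one small step per mini-batch

Nothing like this loop is plausibly happening in a skull, which is the point.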

You have missed something fundamental about how the brain works. Namely, neuroscientists don't really know how it works; in particular, they do not fully understand how neurons in our brain learn.

According to Andrew Ng (https://www.quora.com/What-does-Andrew-Ng-think-about-Deep-L...):

"Because we fundamentally don't know how the brain works, attempts to blindly replicate what little we know in a computer also has not resulted in particularly useful AI systems. Instead, the most effective deep learning work today has made its progress by drawing from CS and engineering principles and at most a touch of biological inspiration, rather than try to blindly copy biology.

Concretely, if you hear someone say "The brain does X. My system also does X. Thus we're on a path to building the brain," my advice is to run away!"


You are right: we do not know everything about the brain. Not even close. But neural networks are modelled on what we do know of the brain, and "milking" neural networks completely means we have created an artificial brain.


Did you just ignore the first few lines of argonaut's comment?

Recently, we have also introduced activation functions in our neural nets, like rectified linear and maxout, purely for their nice mathematical properties, without any regard to biological plausibility. And they work better than what we had before.
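For reference, here's roughly what those two activations compute (a NumPy sketch; the shapes and group size are just for illustration, and there's nothing biological about any of it):

    import numpy as np

    def relu(x):
        # Rectified linear unit: zero for negative inputs, identity otherwise.
        return np.maximum(0.0, x)

    def maxout(x, num_pieces=4):
        # Maxout: take the max over groups of `num_pieces` linear pieces.
        # Assumes the last dimension of x is num_units * num_pieces.
        *lead, d = x.shape
        return x.reshape(*lead, d // num_pieces, num_pieces).max(axis=-1)

    relu(np.array([-1.0, 0.5, 2.0]))            # -> [0. , 0.5, 2. ]
    maxout(np.array([[1.0, -2.0, 3.0, 0.5]]))   # -> [[3.]]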


"unless we've missed something fundamental about how the brain works"

But we don't know how the brain works. I think you're extrapolating too far. Just because a machine learning technique is inspired by our squishy connectome does not mean it's anything like it.

I'm willing to bet there are isomorphisms of dynamics between an organic brain and a neural net running on silicon, but as far as I know none have been found yet, or at least none have been named specifically (please correct me).


   Our current assertion is that neural networks basically replicate the brain's function
No. Just, no. This was never really a claim made by people who understood neural nets (there was a little perceptron confusion in the 60s iirc).


> Our current assertion is that neural networks basically replicate the brain's function

come on, that's hyperbole


Or, at the very least, the next epiphany need not be human-designed. Just train a neural network in the art of creating AI paradigms and implementations that can do general-purpose AI. Once that's "milked", the era of human technological evolution is finished.


I don't want to be mean, but that's like saying you'll train a magic neural net with the mystical flavour of unicorn tears and then the era of making rainbows out of them will be finished. Or something.

I mean, come on - "the art of creating AI paradigms"? What is that, even? You're going to find data on this where, and train on it how, exactly?

Sorry to take this out on you but the level of hand-waving and magical thinking is reaching critical mass lately, and it's starting to obscure the significance of the AlphaGo achievement.

Edit: not to mention, the crazy hype surrounding ANNs in the popular press (not least because they're the subject of SF stories, as someone notes above) risks killing nascent ideas and technologies that may well have the potential to be the next big breakthrough. If we get to the point where everyone thinks all our AI problems are solved if we just throw a few more neural layers at them, then we're in trouble. Hint: they're not.


I totally see your point, and my purpose is definitely not to be alarmist and claim that Skynet is about to come out of AlphaGo or some other equivalent neural net. But I think the opposite attitude is also mistaken.

As others have pointed out, we don't really know how the brain works. Neural nets represent one of our best attempts to model brains. Whether or not it's good enough to create real intelligence is completely unknown. Maybe it is, maybe it's not.

Intelligence appears to be an emergent property and we don't know the circumstances under which it emerges. It could come out of a neural network. Or maybe it could not. The only way we'll find out is by trying to make it happen.

Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

This is Hacker News, not a mass newspaper, so I think we can take the more nuanced and complex view here.


>> Neural nets represent one of our best attempts to model brains.

See, now, that's one of the misconceptions. ANNs are not modelled on the brain - not anymore, and not ever since the poor single-layer Perceptron, which itself was modelled after an early model of neuronal activation. What ANNs really are is algorithms for optimising systems of functions, and that includes things like Support Vector Machines and Radial Basis Function networks, which don't even fit the usual multi-layer network diagram particularly well.
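To put the "function optimisation" point in symbols (my own notation, nothing being quoted): training a feed-forward net is just solving

    \[
      \min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f_\theta(x_i),\, y_i\bigr),
      \qquad
      f_\theta = g_L \circ g_{L-1} \circ \dots \circ g_1 ,
    \]

where each g_k is a simple parameterised map (an affine transform followed by a nonlinearity) and ℓ is a loss function. SVMs and RBF networks fit the same template with a different f_θ and ℓ, which is exactly why they belong to the same family.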

It's unfortunate that this sort of language and imagery is still used abundantly, by people who should know better no less, but I guess "it's an artificial brain" sounds more magical than "it's function optimisation". You shouldn't let it mislead you though.

>> Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

I don't agree. It's a subject that's informed by a solid understanding of the fundamental concepts - function optimisation, again. There's uncertainty because there are theoretical results that are hard to test in practice: for example, that multi-layer perceptrons with three neural layers (one hidden layer) can approximate any continuous function given a sufficient number of hidden units, or, on the opposite side, that superfinite classes of languages are _not_ learnable in the limit from positive examples (not ANN-specific, but limiting what any algorithm can learn), etc. But the arguments on either side are, well, arguments. Nobody is being "blind". People defend their ideas, is all.
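For reference, the approximation result I'm gesturing at (Cybenko/Hornik, stated roughly from memory) is:

    \[
      \forall f \in C(K),\ \forall \varepsilon > 0\ \ \exists N,\ \{v_j, w_j, b_j\} :\quad
      \sup_{x \in K} \Bigl| f(x) - \sum_{j=1}^{N} v_j\, \sigma\!\left(w_j^{\top} x + b_j\right) \Bigr| < \varepsilon
    \]

for K compact and σ a suitable (e.g. sigmoidal) activation. Note that it says nothing about how large N has to be or whether gradient descent will actually find those weights - that's the "hard to test" part.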


Convolutional neural nets are the most accurate model of the ventral stream, numerically speaking. See work by Yamins, DiCarlo etc.


We don't really know how AI works either. NNs (for example) do stuff, and sometimes it's hard to see why.

>Taking a position that neural networks cannot ever result in strong AI is as blind as taking a position that they must.

Not really. Right now it amounts to taking the position that there is no practical path anyone can imagine from a go-bot, which works in a very restricted problem space, to a magical self-improving AI-squared god-bot, which would work in a problem space of completely unknown shape, boundaries, and inner properties.

Meta-AI isn't even a thing yet. There are some obvious things that could be tried - like trying to evolve a god-bot out of a gigantic pre-Cambrian soup of micro-bots, where each bot is a variation on one of the many possible AI implementations - but at the moment basic AI is too resource-intensive to make those kinds of experiments possible.
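For what it's worth, the evolutionary loop itself is trivial to write down; the hard and expensive part is the fitness evaluation, which in this toy sketch is a made-up stand-in for "how good an AI is this bot":

    import random

    def fitness(bot):
        # Stand-in objective; the real thing would mean evaluating a whole AI.
        return -sum((x - 0.5) ** 2 for x in bot)

    def mutate(bot, rate=0.1):
        return [x + random.gauss(0, rate) for x in bot]

    # "Soup" of 100 bots, each just a list of 8 numbers for illustration.
    population = [[random.random() for _ in range(8)] for _ in range(100)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:20]                        # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(80)]      # variation
    best = max(population, key=fitness)

The line that dominates the cost in practice is fitness(), which is why this isn't feasible yet.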

And there's no guarantee anything we can think of today will work.


That sounds like a bad idea.


It's the core idea of AI, the primary reason why it is suspected that developing strong AI will inevitably lead to the end of the human era of evolution.



