
Have you looked at how we write code? It is directly related to the hardware architecture. Operators over here, memory over here.

Why wouldn't there be a benefit in having the hardware mimic the type of software they plan on using, namely, neural networks? That way, from the ground up, you've got these "neurons" that can compute and store information locally, which is pretty much what you've got in a neural network.

I'm really excited to see what kind of ground-up software architecture is going to come from using memristors as opposed to the combination of logical operators and separate memory. It sure as hell isn't going to look anything like Assembly. :)




>Have you looked at how we write code? It is directly related to the hardware architecture. Operators over here, memory over here.

This argument really doesn't make sense.

My desktop PC runs a Haskell interpreter, a Prolog environment, and neural network code that I wrote.

I can write using functional programming, do maths operations, do connectionist computing.

Once the underlying computer is Turing complete, all these things are possible.

Maybe having the better hardware will run the appropriate programs faster, but how we write code is not directly related to the hardware architecture.
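To make the point concrete, here's a minimal sketch (plain Python, hand-picked illustrative weights, nothing learned) of "connectionist computing" running happily on ordinary von Neumann hardware: a two-layer perceptron that computes XOR.

```python
# A tiny hand-weighted neural net on conventional hardware.
# The weights and thresholds below are illustrative, not trained.

def step(z):
    """Threshold activation: fires iff the weighted sum is positive."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # hidden unit acting as OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit acting as AND
    return step(h_or - h_and - 0.5)  # OR-but-not-AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

No special hardware required; the question is only whether memristor hardware would run this sort of thing faster.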

The hard part will be figuring out what to tell the memristors to do.


There is really no mathematical advantage that NNs have over other approaches, namely statistical, optimization, and formal symbolic approaches/algorithms, which are (imho) better engineering tools as they are more amenable to analysis. At the same time, NNs are not mathematically inferior to other approaches. The only advantages of NNs are sounding cool and being superficially similar to our brains. Consider this simple and often-repeated analogy: sticking a beak and feathers on your toy airplane won't make it go faster. In real AI research, NNs are just a bullet point in the huge field of Machine Learning (http://ijcai-11.iiia.csic.es/calls/call_for_papers).
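One way to see the "no mathematical advantage" point: the simplest neural unit, a single sigmoid neuron, computes exactly the same function as logistic regression, a standard statistical model. A hedged sketch (the inputs and weights below are arbitrary example values):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nn_neuron(x, w, b):
    """Single sigmoid unit: the simplest 'neural' model."""
    return sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)

def logistic_regression(x, w, b):
    """Statistical formulation: P(y=1 | x) under a logistic model."""
    return sigmoid(sum(xi * wi for xi, wi in zip(x, w)) + b)

x, w, b = [0.5, -1.2], [2.0, 0.7], 0.1
print(nn_neuron(x, w, b) == logistic_regression(x, w, b))  # same function
```

Same math, different branding; which framing you prefer is an engineering and analysis question, not a mathematical one.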

AI people should stop throwing around cool names and instead build things which are real (Please do not start another AI winter.) Watson is a refreshing step in the right direction.


In this case, neural networks have the advantage of looking similar enough to networks of memristors that you may be able to cram a big neural net into a small, low-power chip. It's not a huge breakthrough in AI, but it could prove very handy for some applications.


There wouldn't be a benefit because we don't know how to use neural networks to do AI yet.


Would you please clarify this statement?

I thought that there was plenty of progress in the field...

http://en.wikipedia.org/wiki/Artificial_neural_network


I think the problem is a disagreement over what "AI" means. We don't know how to do strong, general-purpose AI with artificial neural nets. We do know how to do a lot of smaller tasks with neural nets. Just because they're not currently a panacea doesn't mean they're not useful.


I once read something along the lines of "AI is whatever computers can't currently do". There is a ring of truth to this. We once thought that it would take AI for a computer to be the best chess player on the planet, but now most phones can beat 99.99% of people on the planet without trying. One day the Turing Test will be easily passed and I am sure we (humans) still won't be satisfied.


> I think the problem is a disagreement over what "AI" means.

Simple: "If it works, it isn't AI."

Remember when playing chess was a sign of undeniable intelligence? Remember when playing Jeopardy was?


No, neural nets topped out long before that saying came into effect. They aren't useless, but don't be fooled by facile analogies to the human brain: neural nets, as we actually know how to build them, are rarely the best solution for useful tasks. We don't know how to train complicated ones very well, and the simple ones we know how to train are still very slow to learn compared with many other techniques, yet bring very little to the table that isn't done better by numerous other techniques.

Reminds me of genetic algorithms and programming; no, they aren't necessarily useless, but remove the facile analogies to real-world processes and they carry a lot of expense for not a lot of gain. Only rarely are they the right choice.

Goals aren't results.


Your comment reminded me (for some reason) of Robert Full's 2002 TED Talk about embedding computation in the structure and material properties of a robot's legs.

http://www.ted.com/talks/robert_full_on_engineering_and_evol...



