DARPA's new memristor-based approach to AI (ieee.org)
42 points by boh on Feb 11, 2011 | 24 comments



Yeah, that's like saying that you've got a new approach to writing the world's greatest novel based on using the letter 'y' in place of 'i'. AI is a software problem and if you don't know how general intelligence works, having a new kind of transistor really, really, really ain't gonna help you much. Wrong level of organization.


Have you looked at how we write code? It is directly related to the hardware architecture. Operators over here, memory over there.

Why wouldn't there be a benefit in having the hardware mimic the type of software they plan on using, namely, neural networks? That way, from the ground up, you've got these "neurons" that can compute and store information locally, which is pretty much what you've got in a neural network.

I'm really excited to see what kind of ground-up software architecture is going to come from using memristors as opposed to the combination of logical operators and separate memory. It sure as hell isn't going to look anything like Assembly. :)
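
To make the "compute and store locally" idea concrete, here is a toy sketch in Python (nothing to do with the actual chip design): each unit owns its weights and updates them in place, instead of shuttling everything between a separate ALU and main memory.

    class Neuron:
        """Toy unit whose memory (weights) and compute live in one place."""

        def __init__(self, n_inputs):
            # local state: the weights are stored "inside" the neuron
            self.weights = [0.0] * n_inputs

        def fire(self, inputs):
            # local compute: multiply-accumulate over the stored weights
            total = sum(w * x for w, x in zip(self.weights, inputs))
            return 1.0 if total > 0 else 0.0

        def adjust(self, inputs, error, rate=0.1):
            # local learning: the unit rewrites its own memory in place
            self.weights = [w + rate * error * x
                            for w, x in zip(self.weights, inputs)]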


>Have you looked at how we write code? It is directly related to the hardware architecture. Operators over here, memory over there.

This argument really doesn't make sense.

My desktop PC has a Haskell interpreter, a Prolog environment, and neural network code that I program on it.

I can write using functional programming, do maths operations, do connectionist computing.

Once the underlying computer is Turing complete, all of these things are possible.

Maybe having the better hardware will run the appropriate programs faster, but how we write code is not directly related to the hardware architecture.

The hard part will be figuring out what to tell the memristors to do.


There is really no mathematical advantage that NNs have over other approaches, namely statistical, optimization, and formal symbolic approaches/algorithms, which are (imho) better engineering tools as they are more amenable to analysis. At the same time, NNs are not mathematically inferior to other approaches. The only advantages of NNs are sounding cool and being superficially similar to our brains. Consider this simple and often repeated analogy: sticking a beak and feathers on your toy airplane won't make it go faster. In real AI research, NNs are just a bullet point in the huge field of Machine Learning (http://ijcai-11.iiia.csic.es/calls/call_for_papers).

AI people should stop throwing around cool names and instead build things that are real (please do not start another AI winter). Watson is a refreshing step in the right direction.


In this case, neural networks have the advantage of looking similar enough to networks of memristors that you may be able to cram a big neural net into a small, low-power chip. It's not a huge breakthrough in AI, but it could prove very handy for some applications.
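
The article doesn't spell out the mapping, but the usual picture is that a memristor crossbar gives you a neural-net layer's matrix-vector product almost for free in analog: the conductances are the weights, the applied row voltages are the inputs, and the current summed on each column is the output (Ohm's law plus Kirchhoff's current law). A toy Python simulation of that arithmetic, just to illustrate:

    def crossbar_output(G, V):
        """G: conductances (one row per input line), V: input voltages."""
        n_cols = len(G[0])
        return [sum(G[i][j] * V[i] for i in range(len(V)))
                for j in range(n_cols)]

    # 3 input lines, 2 output lines
    G = [[0.2, 0.9],
         [0.5, 0.1],
         [0.7, 0.4]]
    V = [1.0, 0.0, 0.5]
    print(crossbar_output(G, V))  # -> roughly [0.55, 1.1]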


There wouldn't be a benefit because we don't know how to use neural networks to do AI yet.


Would you please clarify this statement?

I thought that there was plenty of progress in the field...

http://en.wikipedia.org/wiki/Artificial_neural_network


I think the problem is a disagreement over what "AI" means. We don't know how to do strong, general-purpose AI with artificial neural nets. We do know how to do a lot of smaller tasks with neural nets. Just because they're not currently a panacea doesn't mean they're not useful.


I once read something along the lines of "AI is whatever computers can't currently do". There is a ring of truth to this. We once thought that it would take AI for a computer to be the best chess player on the planet, but now most phones can beat 99.99% of people on the planet without trying. One day the Turing Test will be easily passed and I am sure we (humans) still won't be satisfied.


> I think the problem is a disagreement over what "AI" means.

Simple: "If it works, it isn't AI."

Remember when playing chess was a sign of undeniable intelligence? Remember when playing Jeopardy was?


No, neural nets topped out long before that saying came into effect. They aren't useless, but don't be fooled by facile analogies to the human brain; the neural nets we actually know how to build are rarely the best solution for useful tasks. We don't know how to train complicated ones very well, and the simple ones we do know how to train are still very slow to learn compared with many other techniques, yet simultaneously bring very little to the table that isn't done better by numerous other approaches.

Reminds me of genetic algorithms and genetic programming: no, they aren't necessarily useless, but remove the facile analogies to real-world processes and they carry a lot of expense for not a lot of gain. Only rarely are they the right choice.

Goals aren't results.


Your comment reminded me (for some reason) of Robert Full's 2002 TED Talk about embedding computation in the structure and material properties of a robot's legs.

http://www.ted.com/talks/robert_full_on_engineering_and_evol...


"AI is a software problem and if you don't know how general intelligence works, having a new kind of transistor really, really, really ain't gonna help you much."

I'm not sure anyone is arguing against the latter part of your statement, but you contradict yourself by arguing that it's a software problem first. AI is a biological, psychological, cog sci, EE/hardware, et al. problem because, as you state, understanding how general intelligence works is important--as it is important to know how to build the technology that will run it (i.e., possibly the memristor + something else, of course).

Sure you can program some sort of AI, but try doing it at the massively parallel scale of a brain. What I think the memristor opens up is the very ability to do this. I'm not exactly sure how--not sure that anyone is yet--but "in-memory" processing and moving away from the von Neumann (serial) architecture might be a good step, as I'm not sure we can easily develop highly parallel software without letting the low-level architecture handle that automatically in more of a stochastic way.


> I'm not sure anyone is arguing against the latter part of your statement, but you contradict yourself by arguing that it's a software problem first. AI is a biological, psychological, cog sci, EE/hardware, et al. problem because, as you state, understanding how general intelligence works is important--as it is important to know how to build the technology that will run it (i.e., possibly the memristor + something else, of course).

AI is a software (or other implementation of a specific algorithm) problem first, because general intelligence boils down to having an algorithm along with the right sort of inductive bias, and then running that in a world where it has input and its outputs matter.

While the other fields you mention may have something useful to say about what that inductive bias is, that doesn't mean that the core problem isn't one of software. The parent's point, as I read it, is that the specific type of general purpose hardware you run it on is irrelevant to solving that problem, and thus that this article is really rather sensational.

Apart from that, since we don't know what algorithms we need to solve that problem, it seems like premature optimization to start coming up with new hardware configurations for it. Although I do agree that parallelism seems to be important (e.g. for evolutionary methods) there's no reason that something standard like a GPU can't do the trick as well as it does for other types of computations that need parallelism, or at least prove the concept.

I imagine the DARPA grant is for something more sensible though, like for being able to run known algorithms (or adaptations thereof) which already have known uses, much, much more quickly; or at greater scale.


If you look at how the brain operates, it's clear that its implementation is far from a "software problem". The line between the software and hardware is as blurred as it can be in a biological brain. AI is only a software problem in silicon because our hardware is static. With memristors we'll be able to more closely mimic how the brain actually operates. This is a potential game changer when it comes to creating true AI.


Why is that clear?

Just because the lowest level of computation occurs in a 'mix of hardware and software' does not mean that all the important abstractions don't run at a higher level.

And indeed, some evolutionary arguments about complex systems suppose that higher level structure is likely: it's easier, evolutionarily, to build systems from many levels of subcomponent, rather than from the lowest level component.

Maybe with memristors we'll be able to simulate the particular high level process that occurs in the human brain faster. But unless we know what to simulate, that doesn't solve the hard problem. The game changer will be when we know what to simulate/run; after that we can work on finding a computational substrate (which may be memristors) that is optimised for it.


I agree with the gist of your response. I think we just disagree on the semantics of software in this case. Software being "instructions" that control the operation of hardware doesn't apply to a biological brain because the instructions are a combination of the state of each neuron and the connections each neuron makes with its neighbors, which themselves change over time. There may be higher level abstractions that can exist independent of a neural network as its basis, but I wouldn't label that the software in the case of a biological brain.

Even if there are higher level abstractions yet to be discovered, neural nets can open the door to them by allowing us to create more life-like simulations and then run experiments on them. I can imagine this is rather difficult in an actual biological entity.


If you look in the later pages of the article, once they've finished with the hyperbole and pop-science spin, their actual plans look pretty reasonable. They want to use a chip combining memristors and conventional microprocessors to control a robot in a complex obstacle course. The hope is that some kind of neural network will be helpful for such tasks, if they figure out the right layout for the neurons and interconnections. If that's the case, then they have the hardware to make this potentially faster, more compact, and more power-efficient than more conventional approaches.

tldr: This isn't about general intelligence. It's about improving robotics: computer vision, motion planning, integrating multiple senses -- that sort of thing.


> If that's the case, then they have the hardware to make this potentially faster, more compact, and more power-efficient than more conventional approaches.

In which cases are those the limits that keep us from achieving "AI"?

For example, what robots are limited by those factors as opposed to the actuators?


Substrates matter.


This is so bollocks. And from the IEEE no less. Shame on them.

A Turing-complete architecture is a Turing-complete architecture. There is no difference in computational ability between Turing-complete architectures.

The advance here is in hardware architecture, not in AI. Make no mistake, it's an interesting approach - possibly the memristor will allow a neural network to be a practical chip, which has not proven feasible to date. But it's not going to open the magic gates to Big AI.

The mathematical, philosophical, and algorithmic aspects of AI are simply not understood in any great depth. Say, what is intelligence anyway (rhetorically)?

I look forward to seeing the memristor research results, and I hope some useful hardware neural network advances come from it.


The article comes across as either boastful or gushing about memristors, and hand-waves over as many hard unsolved problems as old AI predictions ever did. This makes it quite hard to take seriously. Prefix all these bold statements with: "If MoNETA is successful in its goals..."

I have no doubt that trying to simulate mammalian brains with memristors will lead to some great new AI advances, but I won't hold my breath that it will solve all AI problems simultaneously, and neither should you.


I'm staying healthy as long as possible. An exciting future awaits.

Forget going to Mars... the next frontier is understanding and emulating a mammalian brain, preferably the human brain with its amazing cerebral cortex. And boy will the memristor and related technology come at a great time when we're bombarded with an exponential increase in data that we don't yet fully exploit.


+1 for staying as healthy as possible for as long as possible - not that I'm actually all that healthy.

There's a good chance that human lifespans will increase dramatically over the next few decades.



