This whole hoopla about AI using memristors seems misguided to me. Memristors are interesting because they can more efficiently do things that currently require collections of transistors and capacitors, but they don't have any particular advantage in AI.
Certainly AI may benefit from some specialized neural network style hardware, and memristors may be particularly well suited to this, but general specialized hardware like this isn't going to give us AI that can do anything that couldn't be done before memristors. (Unless as a pure consequence of performance improvements.)
The human brain has about 10^15 synapses [1] and 2.5 petabytes of storage [2]. A modern quad-core processor has about 7x10^8 transistors [3].
Now the brain's switches and storage probably can't be directly equated to a processor's, but presumably they are within a few orders of magnitude. A conservative estimate would require about 10^6 times today's density to match the human brain on a chip. 10^6 ~ 2^20, so that's about 20 years assuming Moore's Law holds.
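The arithmetic above can be checked with a quick back-of-envelope script; the synapse and transistor counts are the figures cited in this comment, not independent measurements, and the one-doubling-per-year pace is the assumption the comment makes:

```python
import math

# Figures cited above -- not independent measurements.
synapses = 1e15       # human brain synapse count [1]
transistors = 7e8     # transistors in a modern quad-core processor [3]

gap = synapses / transistors   # density shortfall, roughly 1.4e6
doublings = math.log2(gap)     # roughly 20 doublings needed

# Assuming about one density doubling per year, as above:
print(f"gap: {gap:.1e}, doublings: {doublings:.1f}")
```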
So in terms of raw performance, we have a ways to go.
Memristors are currently about 10 times as dense as transistors [4]. They also offer non-volatile storage in addition to processing power, and the possibility of fitting into a three-dimensional structure. I won't guess what this implies, but I assume it's going to get us much closer to a circuit resembling a human brain.
Am I wrong? Probably somewhere. Does any of this matter? I have no idea. We won't know until we try. Personally, I'm VERY excited about the possibilities behind memristors, and wish I understood them better. I can't think of a development in the past 20 years that has more potential to advance computers.
Following your reasoning: on the one hand adjusting for volume easily gains three orders of magnitude.
On the other hand, there is the issue of power. The human brain takes about the same amount of power as a top of the line quad core processor.
For now, I do not think either of these matter, though. We simply do not know enough of how the human brain works to warrant attempting to build one. Physicists didn't start by building the LHC, either.
I believe the common argument is that memristors are more similar to neurons in that they can be used for processing as well as storage.
Not that it's a good argument: needing more kinds of pieces was never an obstacle.
Memristors (both in their neuromorphic forms and in their FPGA forms) will offer many more people access to very high-performance hardware, and will greatly decrease the time and cost of experimenting with AI algorithms.
So while it isn't certain that they'll solve the AI problem, they'll probably do a lot to advance the field.
Uh, the argument that is being made is that moving memory and processing power much closer together allows a much more direct simulation of brain-like functions. Do you have any counter-argument to this?
Yes, of course it's a "consequence of performance improvements"; that's always the reason for advances due to new hardware. It seems a memristor could act directly as a cellular automaton or conventional AI neural network, so these would not be the awkward simulations that they are today. It is plausible that this could result in notable advances. If you don't agree, perhaps you could say why.
1. Will memristors give a performance improvement? Yes, they probably will. Will this be helpful to AI? Yes, it will, because at least some aspects of AI are limited by performance right now.
2. Are memristors particularly well suited to modeling the brain? Though there are small scale similarities, I think this is akin to saying a mechanical computer based on gears turning is uniquely suited to doing CAD for automobiles, or a wooden keyboard helps with furniture design.
The brain consists of a huge number of slow processing elements. Electronic computers consist of orders of magnitude fewer processing elements running orders of magnitude faster. Even if we approach AI from neuron-level simulations, we're better off using the ~1000 floating point ops per second per transistor to simulate a set of neurons serially than wasting 5 or 10 memristors per neuron and enormous interconnect bandwidth trying to build an electronic analogue of the brain.
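The time-multiplexing point above can be made numerically. The neuron count and firing rate below are rough, commonly quoted figures, and the per-update cost and machine throughput are pure assumptions for illustration:

```python
# Rough figures -- assumptions for illustration only.
neurons = 1e11            # neurons in a human brain (commonly quoted)
update_hz = 100           # effective update rate of a biological neuron
flops_per_update = 100    # assumed cost to update one simulated neuron

required_flops = neurons * update_hz * flops_per_update  # ~1e15 FLOP/s total

# One fast serial pipeline can stand in for many slow neurons:
machine_flops = 1e11      # ~100 GFLOP/s, an assumed modern multicore figure
neurons_per_machine = machine_flops / (update_hz * flops_per_update)
print(f"{neurons_per_machine:.0e} neurons time-multiplexed per machine")
```

Under these assumptions one machine serially simulates about ten million slow neurons, which is the sense in which fast transistors are fungible with many dedicated slow elements.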
Edit: You quoted one thing I said, but you cut the crucial word "pure". If memristors are uniquely suited to AI, then this is not a "pure consequence of performance improvements". When I say that I mean that performance is fungible. Memristors or transistors doesn't matter for brains any more than it matters for high frequency trading or rendering graphics.
(I'll assume that, here, performance means speed of memory access and calculation.) If all that memristors offer is performance, and you propose memristors will lead to AI, you are suggesting that what is currently preventing an artificial intelligence from succeeding is speed of calculation, and further that the particular kind of calculation needed is the same kind potentially improved by memristors.

I can't completely demonstrate that to be untrue without referencing every attempt at AI ever made, because you're asking for a negative proof --- that NO current AI research fulfills those two requirements. I haven't read all the AI papers ever, let alone talked to researchers who haven't published. But I've read what I can, and I... I guess I just don't feel like the evidence suggests that current research is hindered solely by speed, NOR that "moving memory and processing power closer together" has much to do with any somewhat-successful attempts at machine intelligence.
So what DO you think is holding us back? My understanding is the memristor is a much closer model of a synapse. So if the problem isn't raw computing power and it's not the synapse model then what do you think it is? What's holding us back from simulating a brain? Do we just not know enough about the design of a brain? Or do we need to understand something about consciousness itself?
For what it's worth, this project seems to think we are only 10-20 years away from simulating it with conventional computing:
If we knew precisely what was holding us back, it wouldn't be holding us back. You're asking a question you can always just move the goalposts for. (I'm not saying you will, but you can.)
I'm a computational molecular biologist, and I don't understand the cell or the brain. Nobody does. What's holding us back? The proteome, interactome, connectome, etc. etc.
You'll see biologically-inspired AI after we solve a MUCH simpler set of problems: cancer, aging, and viruses. I doubt you'll still be young when that happens. (Not that I don't hope for this to happen as soon as possible.)
I wouldn't concern myself with AI anyway. With the rate at which our machines are becoming more parallel and gaining in throughput, I might expect to see an accurate model of a single neuron in a few decades.
Get over the Kurzweil fantasy and live each day as best as you can. If we don't get to live forever it isn't the end of the world. (And if you believe it is then maybe you should be working on this problem instead.)
I think that Kurzweil has already answered arguments such as this. My rendition of that argument would be that...
<argument> Scientists who study these things are absolutely correct in saying that we won't achieve quick progress given the walls they can see and the tools that they have. Yes, the thing is that we won't see notable progress in the experimental biological realm; rather, we will first see super-quick progress from something like a very crude, biologically inspired AI that indeed doesn't require us to understand the brain but works well enough at the exa-scale. From that, we'll have far more powerful tools to actually start to understand the brain. </argument>
I'm not qualified to say whether Kurzweil's answer here is correct. However, I don't think anyone is going to be able to make final judgment until a "singularity" happens or until computing progress definitively hits a wall we can't go around.
The living-forever question, of course, is a whole other discussion.
The world is changing very rapidly. Given this, any extrapolation thirty years into the future is fantasy. Things may be very much nicer, very much more unpleasant or in-between. My argument is for uncertainty, not certainty. The idea that things will remain about the same seems like at least as much of a "fantasy" as the idea that the singularity will come and solve all of our problems.
What's holding us back in our attempts to simulate a brain is that we don't know how the brain works. That's a much more important hurdle to overcome than performance, I believe.
I'm not even sure that a single helium atom can be accurately modeled, if you take into account all of the subatomic particles and processes involved (which are still in the process of being discovered and understood).
Well... you've upped the ante a lot from my earlier post. I only asked why people didn't think memristors would lead to a breakthrough when various people argued that they would. I'm not certain they will, but I'm also not certain they won't.
Still... maybe, maybe, just by being slightly optimistic about this development, I deserve to be tagged as "propos[ing] memristors will lead to AI".
Fine, tag me as claiming that memristors will jump us to the singularity fifteen years earlier than Ray Kurzweil claims, with ponies for everyone!
But seriously...
Neural networks have been a pretty standard AI model for a while now. Memristors claim to be able to simulate them on a chip rather than forcing them to go back and forth between disk and chip. Why wouldn't that lead to some level of advance?
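The usual sketch of why memristors map onto neural networks is the analog crossbar: stored conductances act as the weights, and Ohm's and Kirchhoff's laws perform the multiply-accumulate in place, with no trip to separate memory. A pure software analogue of that operation (illustration only, not a real device model; the conductance and voltage values are made up):

```python
# Illustrative model of a memristor crossbar doing a matrix-vector product.
# Each conductance G[i][j] plays the role of a stored synaptic weight;
# applying voltages V on the rows sums currents I = G^T V on the columns.
G = [[0.2, 0.5],
     [0.1, 0.4],
     [0.3, 0.6]]       # 3 input rows x 2 output columns (conductances)
V = [1.0, 0.5, 2.0]    # input voltages driven onto the rows

# Column currents are the same weighted sums a neural-net layer computes.
I = [sum(G[i][j] * V[i] for i in range(len(V))) for j in range(len(G[0]))]
print(I)
```

The point of the analogy is that the weights never move: computation happens where the state is stored, which is what "no back-and-forth between disk and chip" amounts to.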
Nonsense. We have no idea what we're reconstructing -- the governing dynamics of cortical, much less subcortical, interactions are still nearly totally unknown to us -- and memristors do nothing to shed light on that. Further, neural codes are digital and memristors are analog elements. There's no property of memristors that makes the job of "building a brain" easier than it would otherwise be. The point that the metaphors are closer because there's no separation between computation and storage misses the larger issues.