This whole hoopla about AI using memristors seems misguided to me. Memristors are interesting because they can more efficiently do things that currently require collections of transistors and capacitors, but they don't have any particular advantage in AI.
Uh, the argument that is being made is that moving memory and processing power much closer together allows a much more direct simulation of brain-like functions. Do you have any counter-argument to this?
Yes, of course it's a "consequence of performance improvements"; that's always the reason for advances due to new hardware. It seems a memristor could act directly as a cellular automaton or a conventional AI neural network, so these would not be the awkward simulations that they are today. It's plausible that this could result in notable advances. If you don't agree, perhaps you could say why.
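To make the "act directly as a neural network" idea concrete: in the linear-drift memristor model from the HP group (Strukov et al., 2008), conductance is a state variable that drifts with the charge pushed through the device, which is the property people liken to synaptic plasticity. Here's a minimal simulation sketch; all the device parameters below are assumed ballpark values, not measurements:

```python
import numpy as np

R_ON, R_OFF = 100.0, 16e3    # assumed low/high resistance states (ohms)
D = 10e-9                    # assumed device thickness (m)
MU_V = 1e-14                 # assumed dopant mobility (m^2 / (s*V))

def simulate(voltages, dt=1e-3, w0=0.5 * D):
    """Integrate the linear-drift state equation dw/dt = MU_V*(R_ON/D)*i(t)."""
    w, g = w0, []
    for v in voltages:
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)   # memristance M(w)
        i = v / m                                    # current through device
        w = min(max(w + MU_V * (R_ON / D) * i * dt, 0.0), D)
        g.append(1.0 / m)
    return np.array(g)

# Positive pulses push conductance up ("potentiation"); negative pulses
# pull it back down ("depression"): the loose analogy to a synapse.
pulses = np.concatenate([np.full(500, 1.0), np.full(500, -1.0)])
g = simulate(pulses)
print(f"g: start {g[0]:.2e} S, after + pulses {g[499]:.2e} S, end {g[-1]:.2e} S")
```

The point isn't that this is how a brain works, only that the device's state update happens in the physics for free, instead of in a fetch-compute-store loop.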
1. Will memristors give a performance improvement? Yes, they probably will. Will this be helpful to AI? Yes, it will, because at least some aspects of AI are limited by performance right now.
2. Are memristors particularly well suited to modeling the brain? Though there are small-scale similarities, I think this is akin to saying a mechanical computer based on turning gears is uniquely suited to doing CAD for automobiles, or that a wooden keyboard helps with furniture design.
The brain consists of a huge number of slow processing elements. Electronic computers consist of orders of magnitude fewer processing elements running orders of magnitude faster. Even if we approach AI from neuron-level simulations, we're better off using the ~1,000 floating-point ops per second per transistor to simulate a set of neurons serially than wasting 5 or 10 memristors per neuron and enormous interconnect bandwidth trying to build an electronic analogue of the brain.
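To put a rough sketch behind the time-multiplexing claim (the population size, timestep, and neuron parameters below are all assumed for illustration): biological neurons update on roughly millisecond timescales, so one fast serial processor can step a large population of simple leaky integrate-and-fire neurons in real time with no per-neuron hardware at all:

```python
import numpy as np

N = 1_000_000                          # assumed population size
DT = 1e-3                              # 1 ms step, roughly biological pace
TAU, V_TH, V_RESET = 20e-3, 1.0, 0.0   # assumed LIF parameters

rng = np.random.default_rng(0)
v = np.zeros(N)

def step(v, drive):
    """One leaky integrate-and-fire update for the whole population."""
    v = v + DT * (-v / TAU + drive)    # leak toward rest, integrate input
    fired = v >= V_TH                  # who crossed threshold this step
    v[fired] = V_RESET                 # spike and reset
    return v, fired

for _ in range(100):                   # 100 ms of simulated time
    v, fired = step(v, drive=rng.normal(60.0, 20.0, N))
print(f"{fired.sum()} of {N:,} neurons fired in the final 1 ms step")
```

One processor is doing a million neurons' worth of updates per millisecond here, which is the fungibility argument in miniature.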
Edit: You quoted one thing I said, but you cut the crucial word "pure". If memristors are uniquely suited to AI, then this is not a "pure consequence of performance improvements". When I say that, I mean that performance is fungible. Whether it's memristors or transistors doesn't matter for brains any more than it matters for high-frequency trading or rendering graphics.
(I'll assume that, here, performance means speed of memory access and calculation.) If all that memristors offer is performance, and you propose memristors will lead to AI, you are suggesting that what is currently preventing artificial intelligence from succeeding is speed of calculation, and further that the particular kind of calculation needed is the same kind memristors would improve.

I can't completely demonstrate that to be untrue without referencing every attempt at AI ever made, because you're asking for a negative proof --- that NO current AI research fulfills those two requirements. I haven't read every AI paper ever, let alone talked to researchers who haven't published. But I've read what I can, and I just don't feel the evidence suggests that current research is hindered solely by speed, NOR that "moving memory and processing power closer together" has much to do with any somewhat-successful attempt at machine intelligence.
So what DO you think is holding us back? My understanding is the memristor is a much closer model of a synapse. So if the problem isn't raw computing power and it's not the synapse model then what do you think it is? What's holding us back from simulating a brain? Do we just not know enough about the design of a brain? Or do we need to understand something about consciousness itself?
For what it's worth, this project seems to think we are only 10-20 years away from simulating it with conventional computing:
If we knew precisely what was holding us back, it wouldn't be holding us back. You're asking a question you can always just move the goalposts for. (I'm not saying you will, but you can.)
I'm a computational molecular biologist, and I don't understand the cell or the brain. Nobody does. What's holding us back? The proteome, interactome, connectome, etc. etc.
You'll see biologically-inspired AI after we solve a MUCH simpler set of problems: cancer, aging, and viruses. I doubt you'll still be young when that happens. (Not that I don't hope for this to happen as soon as possible.)
I wouldn't concern myself with AI anyway. With the rate at which our machines are becoming more parallel and gaining in throughput, I might expect to see an accurate model of a single neuron in a few decades.
Get over the Kurzweil fantasy and live each day as best as you can. If we don't get to live forever it isn't the end of the world. (And if you believe it is then maybe you should be working on this problem instead.)
I think that Kurzweil has already answered arguments such as this. My rendition of that argument would be that...
<argument> Scientists who study these things are absolutely correct in saying that we won't achieve quick progress against the walls they can see with the tools that they have. The thing is that we won't first see notable progress in the experimental biological realm; rather, we will first see super-quick progress from something like a very crude, biologically inspired AI that indeed doesn't require us to understand the brain but works well enough at the exascale. From that, we'll have far more powerful tools to actually start to understand the brain. </argument>
I'm not qualified to say whether Kurzweil's answer here is correct. However, I don't think anyone is going to be able to make final judgment until a "singularity" happens or until computing progress definitively hits a wall we can't go around.
The living-forever question, of course, is a whole other discussion.
The world is changing very rapidly. Given this, any extrapolation thirty years into the future is fantasy. Things may be very much nicer, very much more unpleasant, or somewhere in between. My argument is for uncertainty, not certainty. The idea that things will remain about the same seems like at least as much of a "fantasy" as the idea that the singularity will come and solve all of our problems.
What's holding us back in our attempts to simulate a brain is that we don't know how the brain works. That's a much more important hurdle to overcome than performance, I believe.
I'm not even sure that a single helium atom can be accurately modeled, if you take into account all of the subatomic particles and processes involved (which are still in the process of being discovered and understood).
Well... you've upped the ante a lot from my earlier post. I only asked why people thought memristors wouldn't lead to a breakthrough when various people argued that they would. I'm not certain they will, but I'm also not certain they won't.
Still... maybe, just maybe, by being slightly optimistic about this development, I deserve to be tagged as "propos[ing] memristors will lead to AI".
Fine, tag me as claiming that memristors will jump us to the singularity fifteen years earlier than Ray Kurzweil claims, with ponies for everyone!
But seriously...
Neural networks have been a pretty standard AI model for a while now. Memristors promise to let them run directly on a chip rather than forcing weights to shuttle back and forth between memory and processor. Why wouldn't that lead to some level of advance?
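For what it's worth, here's a back-of-envelope sketch of that traffic argument (the layer size and hardware figures are assumed ballpark numbers, not measurements): a fully-connected layer does about two flops per weight but has to fetch every weight from memory, so a conventional processor spends most of its time waiting on the memory bus, which is exactly the movement an in-memory crossbar would eliminate:

```python
M, N = 4096, 4096            # assumed fully-connected layer size
BYTES_PER_WEIGHT = 4         # fp32

flops = 2 * M * N                        # one multiply + one add per weight
bytes_moved = M * N * BYTES_PER_WEIGHT   # every weight streamed in once
intensity = flops / bytes_moved          # flops per byte of weight traffic

PEAK_FLOPS = 1e12            # assumed ~1 Tflop/s of compute
PEAK_BW = 50e9               # assumed ~50 GB/s memory bandwidth

t_compute = flops / PEAK_FLOPS
t_memory = bytes_moved / PEAK_BW
print(f"arithmetic intensity: {intensity:.1f} flop/byte")
print(f"compute {t_compute * 1e6:.0f} us vs memory {t_memory * 1e6:.0f} us per pass")
# -> memory time dominates by ~40x; the compute units mostly sit idle.
```

Whether removing that bottleneck counts as "just performance" or as something qualitatively new is, I suppose, the whole disagreement.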