
"AI is a software problem and if you don't know how general intelligence works, having a new kind of transistor really, really, really ain't gonna help you much."

I'm not sure anyone is arguing against the latter part of your statement, but you seem to contradict yourself by arguing that it's a software problem first. AI is a biological, psychological, cog-sci, EE/hardware, et al. problem because, as you state, understanding how general intelligence works is important--as is knowing how to build the technology that will run it (i.e., possibly the memristor plus something else, of course).

Sure, you can program some sort of AI, but try doing it at the massively parallel scale of a brain. What I think the memristor opens up is the ability to do exactly that. I'm not exactly sure how--I'm not sure anyone is yet--but "in-memory" processing and moving away from the von Neumann (serial) architecture might be a good step, since I'm not sure we can easily develop highly parallel software without letting the low-level architecture handle that parallelism automatically, in a more stochastic way.




> I'm not sure anyone is arguing against the latter part of your statement, but you seem to contradict yourself by arguing that it's a software problem first. AI is a biological, psychological, cog-sci, EE/hardware, et al. problem because, as you state, understanding how general intelligence works is important--as is knowing how to build the technology that will run it (i.e., possibly the memristor plus something else, of course).

AI is a software problem first (or, more precisely, a problem of implementing a specific algorithm), because general intelligence boils down to having an algorithm with the right sort of inductive bias and then running it in a world where it receives inputs and its outputs matter.
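
To make that concrete, here's a purely illustrative sketch of what I mean by "an algorithm running in a world where its outputs matter" (toy world, made-up names, not any particular AI method--the interesting part would be whatever inductive bias replaces the random policy):

    import random

    def policy(observation, memory):
        # Toy "algorithm": act at random and remember what happened.
        # A real inductive bias would live in how this generalizes.
        action = random.choice([-1, 1])
        memory.append((observation, action))
        return action

    def environment(action, state):
        # Toy "world": the agent's output actually changes the state.
        state += action
        reward = 1 if state == 0 else 0
        return state, reward

    state, memory = 5, []
    for _ in range(100):
        action = policy(state, memory)
        state, reward = environment(action, state)

Note that nothing in that loop cares what hardware it runs on, which is the point.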

While the other fields you mention may have something useful to say about what that inductive bias is, that doesn't mean the core problem isn't one of software. The parent's point, as I read it, is that the specific type of general-purpose hardware you run it on is irrelevant to solving that problem, and thus that this article is really rather sensational.

Apart from that, since we don't know what algorithms we need to solve that problem, it seems like premature optimization to start designing new hardware configurations for it. Although I do agree that parallelism seems to be important (e.g. for evolutionary methods), there's no reason something standard like a GPU can't do the trick as well as it does for other kinds of parallel computation, or at least prove the concept.
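
For instance (just a toy sketch, not a claim about any real system): one generation of an evolution-strategy-style update where the whole population is evaluated as a single array operation. The fitness function and sizes are made up; the point is only that this kind of array code maps naturally onto a GPU (e.g. via a numpy-compatible library like CuPy or jax.numpy):

    import numpy as np

    rng = np.random.default_rng(0)
    dim, pop_size = 32, 1024            # made-up sizes

    parent = np.zeros(dim)
    # Mutate: the whole population is one (pop_size, dim) array.
    population = parent + 0.1 * rng.standard_normal((pop_size, dim))

    # Toy fitness, evaluated for all candidates at once (this is the
    # embarrassingly parallel part a GPU handles well).
    target = np.ones(dim)
    fitness = -np.sum((population - target) ** 2, axis=1)

    # Select the top quarter and average them into the next parent.
    elite = population[np.argsort(fitness)[-pop_size // 4:]]
    parent = elite.mean(axis=0)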

I imagine the DARPA grant is for something more sensible, though, like running known algorithms (or adaptations thereof) that already have known uses much, much more quickly, or at greater scale.



