
The convolutional neural network code that pylearn2 and the Toronto group use is specifically tuned for GTX580 cards - users have reported 2x-10x slowdowns on Kepler-series cards. In general, most users (of pylearn2 at least) highly recommend a GTX5xx device.
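
If you are not sure which architecture a card belongs to, the CUDA runtime will tell you: Fermi parts (GTX 4xx/5xx) report compute capability 2.x, Kepler parts (GTX 6xx / TITAN) report 3.x. A minimal check using the standard runtime API - nothing specific to pylearn2 or cuda-convnet:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // Fermi = compute capability 2.x, Kepler = 3.x
            printf("Device %d: %s, compute capability %d.%d (%s)\n",
                   i, prop.name, prop.major, prop.minor,
                   prop.major == 2 ? "Fermi" : prop.major == 3 ? "Kepler" : "other");
        }
        return 0;
    }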

I personally use a GTX570, and it is pretty decent, though not spectacular. It is reasonably priced and "good enough" for most of the networks I have tried (minus ImageNet...)

A key problem is the limited GPU memory on the Fermi series, since it is difficult to fit a truly monstrous network on a single card. Krizhevsky's ImageNet work needed some very tricky cross-GPU code to spread the network across two GTX580s, and training still took a very, very long time.
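
To get a feel for the memory side: the GTX 580 ships with 1.5 GB (3 GB on the variants Krizhevsky used, IIRC), and an ImageNet-scale net has on the order of 60M parameters before you even count the activations for a minibatch. A rough sketch using the standard cudaMemGetInfo call - the 60M figure is a ballpark, not an exact count:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        size_t free_bytes = 0, total_bytes = 0;
        cudaMemGetInfo(&free_bytes, &total_bytes);

        // Parameters alone, in single precision, keeping a value, a gradient,
        // and a momentum term per parameter. Activations come on top of this.
        const double params = 60e6;                 // ~60M parameters (ballpark)
        const double param_bytes = params * 4 * 3;

        printf("GPU memory: %.2f GB free of %.2f GB total\n",
               free_bytes / 1e9, total_bytes / 1e9);
        printf("Rough parameter footprint: %.2f GB (before activations)\n",
               param_bytes / 1e9);
        return 0;
    }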




I didn't realize the performance dip was so dramatic. Lately I've been considering acquiring some hardware, and I guess I'm going to have to keep this in mind. If I were to buy one today, I don't think I could even find stores that sell the GTX 580. I'd probably need to search on eBay.

Thanks for the heads up, friend!


Check out this discussion - it may help you decide what card to get. There was also an email somewhere saying that the TITAN is currently not any faster than a 580, though it gave no hard numbers.

https://groups.google.com/forum/#!topic/pylearn-dev/cODL9RXP...

Once again, my 570 is slower than a 580 (about 2x), but "good enough" for now.


Huh. Any theories as to why that is? Highly tuned coalesced reads that backfire on the Kepler arch?


From what I understand, it is because the training code is written to exploit registers and architectural features specific to Fermi. The code was actually updated once already, from the GTX280 series to the GTX580 series IIRC, so it will likely be updated again at some point by a motivated researcher or group. I suspect there simply isn't a need for most labs to update right now (though TITAN / TITAN LE / TITAN II may change that). Also, Alex Krizhevsky now works for Google :), so someone else may need to do the updating.

http://www.wired.com/wiredenterprise/2013/03/google_hinton/

You can check out the code here - it is really good IMO.

http://code.google.com/p/cuda-convnet/
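
To give a flavor of what "tuned for one architecture" means (illustrative only - this is a generic tiled kernel, not anything from cuda-convnet): constants like tile size, threads per block, and unroll factor get hand-picked around one SM's shared memory and register file. The choices below are the sort you would make for Fermi (GF110: 48 KB shared memory, 32K registers, and 32 cores per SM); on Kepler (192 cores per SMX, a different register file and scheduler) the same constants leave the hardware underutilized, which fits the slowdowns people report.

    #include <cuda_runtime.h>

    #define TILE 16  // 16x16 threads, 2 KB of shared memory per tile - a Fermi-era choice

    __global__ void matmul_tiled(const float* A, const float* B, float* C, int n) {
        __shared__ float As[TILE][TILE];
        __shared__ float Bs[TILE][TILE];

        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;

        for (int t = 0; t < n / TILE; ++t) {
            // Coalesced loads: consecutive threads read consecutive addresses.
            As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
            Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
            __syncthreads();

            #pragma unroll  // unroll factor is implicitly tied to the per-thread register budget
            for (int k = 0; k < TILE; ++k)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();
        }
        C[row * n + col] = acc;  // assumes n is a multiple of TILE
    }

    int main() {
        const int n = 1024;  // multiple of TILE
        const size_t bytes = n * n * sizeof(float);
        float *A, *B, *C;
        cudaMalloc(&A, bytes);
        cudaMalloc(&B, bytes);
        cudaMalloc(&C, bytes);

        dim3 block(TILE, TILE);         // these launch constants are part of the "tuning"
        dim3 grid(n / TILE, n / TILE);
        matmul_tiled<<<grid, block>>>(A, B, C, n);
        cudaDeviceSynchronize();

        cudaFree(A); cudaFree(B); cudaFree(C);
        return 0;
    }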



