Hacker News

It's going to need to use CUDA or it won't be competitive with the alternatives. CUDA makes training networks more than an order of magnitude faster.



But that may or may not matter, depending on what you're doing and how often you do it. If I have a network that I only retrain once a month, I can deal with it taking a day or two to train. Heck, it could take a week as far as that goes.

OTOH, it obviously matters a lot if you're constantly iterating and training multiple times a day or whatever.


The difference is between training taking a week, and training taking 10 weeks.

It takes a week to train a standard AlexNet model on 1 GPU on ImageNet (and this is pretty far from state of the art).

It takes 4 GPUs 2 weeks to train a marginally-below-state-of-the-art image classifier on ImageNet (http://torch.ch/blog/2016/02/04/resnets.html) - the 101-layer deep residual network. This would be 20 weeks on an ensemble of CPUs. (State of the art is 152 layers; I don't have the numbers, but I'd guesstimate 3-4 weeks to train on 4 GPUs.)
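The "20 weeks" figure follows from the thread's own order-of-magnitude assumption. A quick back-of-envelope sketch (the 10x per-device slowdown is the thread's estimate, not a measured number):

```python
# Scale the reported ResNet-101 numbers from the linked Torch blog post:
# 2 weeks of wall-clock training on 4 GPUs.
gpu_training_days = 14
cpu_slowdown = 10          # "more than an order of magnitude" - the thread's claim

# Swapping each GPU for a CPU of ~1/10th the throughput stretches the
# same 4-device run by that factor.
cpu_training_weeks = gpu_training_days * cpu_slowdown / 7
print(cpu_training_weeks)  # 20.0 - matching the comment's estimate
```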


For state-of-the-art work, "a day or two" is pretty fast for a production network, and that's on one or more big GPUs. Not using CUDA is definitely a dealbreaker for any kind of real deep learning beyond the MNIST tutorials. It's common to leave a Titan X running over a weekend; that would be weeks on a CPU.


Well, not using CUDA isn't necessarily synonymous with "using a CPU" - there is OpenCL. But still, you have a point, even if we might quibble over details. This is why I'm very much hoping AMD gets serious about machine learning and that OpenCL on AMD chips eventually reaches parity (or near parity) with CUDA on nVidia hardware.
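In practice, "CUDA or nothing" shows up in frameworks as a runtime device check with a CPU fallback. A minimal sketch of that pattern, using PyTorch purely as a stand-in (the thread itself discusses Lua Torch, so treat the framework choice as an assumption):

```python
# Hedged sketch: prefer CUDA when the framework and a GPU are available,
# otherwise fall back to the CPU. This is the fallback path that makes
# training "work but be 10x slower" rather than fail outright.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    # No GPU framework installed at all - CPU is the only option.
    device = "cpu"

print(f"training would run on: {device}")
```

An OpenCL backend would slot into the same spot as another branch of the check, which is exactly why backend parity matters for breaking the monoculture.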


It's unlikely that AMD is going to be able to make serious inroads in the near future. nVidia has built quite a lead, not just in chips but in tooling. I had thought a couple of years ago that AMD should be building a competitor to the Tesla; it should be able to build a more hybrid solution than nVidia can, given its in-house CPU development talent. But I haven't seen them building anything like that, and a competitor to nVidia may have to come from somewhere else. In the absence of a serious competitor, OpenCL is not very interesting.


Yeah, and that's sad. I really hate to see this whole monoculture thing, especially since CUDA isn't open source. :-(


It's really a hardware problem.



