
Now anyone can train Imagenet in 18 minutes - stablemap
http://www.fast.ai/2018/08/10/fastai-diu-imagenet/
======
alfalfasprout
Training is rarely the issue though. Multi-GPU training has been at the
point where you can train models in a reasonable amount of time on a
current-generation 8-GPU box for a while now.
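
To make that concrete, here's a minimal single-box data-parallel training
sketch, assuming PyTorch and torchvision (the random-tensor dataset is just
a stand-in for a real ImageNet loader, and at least one CUDA GPU is
assumed):

    import torch
    import torch.nn as nn
    import torchvision.models as models
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in data; a real setup would use an ImageNet DataLoader.
    dummy = TensorDataset(torch.randn(64, 3, 224, 224),
                          torch.randint(0, 1000, (64,)))
    train_loader = DataLoader(dummy, batch_size=32)

    model = models.resnet50()
    if torch.cuda.device_count() > 1:
        # nn.DataParallel splits each batch across all visible GPUs.
        model = nn.DataParallel(model)
    model = model.cuda()

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    for images, labels in train_loader:
        images, labels = images.cuda(), labels.cuda()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()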

The more annoying issue is inference on large numbers of images. CPU
inference is slow, and distributed GPU inference is tricky (Spark + GPUs
is not a fun prospect). Then providing stripped-down versions for
realtime inference is a whole 'nother can of worms.
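
One common route to those stripped-down realtime versions is exporting the
trained network to a portable format that a lighter serving runtime can
load. A minimal sketch, assuming PyTorch's ONNX exporter with a torchvision
ResNet-50 as the example model:

    import torch
    import torchvision.models as models

    model = models.resnet50()
    model.eval()  # inference mode: fixes batchnorm/dropout behavior

    # Trace the model with an example input and write an ONNX file
    # that a lightweight runtime can serve without Python/PyTorch.
    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy_input, "resnet50.onnx",
                      input_names=["image"], output_names=["logits"])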

~~~
ithkuil
I'm confused. I always thought that once you've trained a model you can
use it to do inference, and at that point the model is just a read-only
data structure, so it's trivial to scale out inference.

I guess you're talking about something else. Could you please elaborate?

~~~
alfalfasprout
Inference is still very computationally taxing for deep learning models.
Using GPUs in a cluster for inference often proves far more efficient
than large numbers of CPUs (e.g. Spark), but then you run into issues
like I/O bottlenecks, memory bandwidth bottlenecks, etc.

So yeah, it's "read-only", but it's not trivial to scale naively due to
the large amount of computation involved and the often enormous
datasets.
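
The per-box part is the easy half. A minimal single-GPU batched-inference
sketch, assuming PyTorch and torchvision (random tensors stand in for a
real dataset); the genuinely hard parts described above, sharding an
enormous dataset across machines and keeping the GPUs fed, are left out:

    import torch
    import torchvision.models as models
    from torch.utils.data import DataLoader, TensorDataset

    model = models.resnet50().cuda().eval()  # assumes a CUDA GPU

    images = torch.randn(256, 3, 224, 224)  # stand-in for real data
    loader = DataLoader(TensorDataset(images), batch_size=64)

    preds = []
    with torch.no_grad():  # no gradient bookkeeping during inference
        for (batch,) in loader:
            logits = model(batch.cuda(non_blocking=True))
            preds.append(logits.argmax(dim=1).cpu())
    preds = torch.cat(preds)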

------
crunchlibrarian
I get really wary of any "solution" provided or supported by Google in
this space; it's just a matter of time before they turn.

