

Deep Learning With Commodity Off-The-Shelf High Performance Computing [pdf] - ironchief
http://stanford.edu/~acoates/papers/CoatesHuvalWangWuNgCatanzaro_icml2013.pdf

======
nkurz
Abstract:

    
    
      Scaling up deep learning algorithms has been
      shown to lead to increased performance in
      benchmark tasks and to enable discovery of
      complex high-level features. Recent efforts
      to train extremely large networks (with over
      1 billion parameters) have relied on cloud-like
      computing infrastructure and thousands of
      CPU cores. In this paper, we present technical
      details and results from our own system based
      on Commodity Off-The-Shelf High Performance
      Computing (COTS HPC) technology: a cluster of
      GPU servers with Infiniband interconnects and
      MPI. Our system is able to train 1 billion
      parameter networks on just 3 machines in a
      couple of days, and we show that it can scale
      to networks with over 11 billion parameters
      using just 16 machines. As this infrastructure
      is much more easily marshaled by others, the
      approach enables much wider-spread research
      with extremely large neural networks.
    

For $20,000, they were able to build a 1-billion-connection system comparable
to the $1MM system they built the previous year. The paper also details how,
for $100,000, Andrew Ng and others created an 11-billion-connection deep
learning system with 16 commodity servers, each loaded with 4 Nvidia GTX 680
GPU cards.

~~~
cr4zy
This means that emulating the near one quadrillion connections in the human
brain would take about $10B today, assuming they can scale this system by 5
orders of magnitude at the same connections per dollar. They did, though,
manage to more than double their connections per dollar going from the
1-billion to the 11-billion-connection system.
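
A quick sanity check of that arithmetic in plain Python (figures are the ones
quoted in this thread; the ~1 quadrillion synapse count is a rough common
estimate for the human brain, not a number from the paper):

    # The paper's two configurations, as summarized above.
    cost_1b,  connections_1b  = 20_000,  1e9    # $20k, 1B-connection system
    cost_11b, connections_11b = 100_000, 11e9   # $100k, 11B-connection system
    brain_connections = 1e15                    # rough human-brain estimate

    rate_1b  = connections_1b  / cost_1b        # 50,000 connections per dollar
    rate_11b = connections_11b / cost_11b       # 110,000 connections per dollar
    print(rate_11b / rate_1b)                   # 2.2x: better than doubled

    print(brain_connections / connections_11b)  # ~91,000x: ~5 orders of magnitude
    print(brain_connections / rate_11b)         # ~9.1e9 dollars: roughly $10B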

~~~
duaneb
I would assume that at some point latency would be a larger issue.

~~~
waps
The great thing about pulse-based neural networks (like the human brain) is
that latency, as long as it's constant, is not an issue. In fact, you'd want
connections with a wide variety of different latencies. They should be random,
but remain constant over the life of the simulation.

Plus, you have to keep in mind that this is a simulation. If the computer
can't keep up, run it at half "real-time".
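
As a toy illustration of that point, here is a minimal event-driven sketch
(plain Python, all numbers made up for illustration) of an integrate-and-fire
network where every connection gets a random delay that is drawn once and then
held fixed for the whole run:

    import heapq, random

    random.seed(42)
    N = 5
    THRESHOLD = 2.0  # spikes needed before a neuron fires

    # One delay per directed connection: random, but fixed for the whole run.
    delay = {(i, j): random.uniform(1.0, 5.0)
             for i in range(N) for j in range(N) if i != j}

    potential = [0.0] * N

    # Seed each neuron with two external input spikes to start activity.
    events = [(0.0, i) for i in range(N)] + [(0.1, i) for i in range(N)]
    heapq.heapify(events)  # priority queue of (arrival_time_ms, target)

    while events:
        t, n = heapq.heappop(events)
        if t > 10.0:                   # simulate 10 ms, then stop
            break
        potential[n] += 1.0
        if potential[n] >= THRESHOLD:  # integrate-and-fire, no leak
            potential[n] = 0.0
            print(f"t={t:5.2f} ms: neuron {n} fires")
            for dst in range(N):
                if dst != n:
                    heapq.heappush(events, (t + delay[(n, dst)], dst))

Since simulated time is just a number on the event queue, the same spike
trains come out whether the simulation runs at real-time, half real-time, or
any other speed.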

------
visarga
Weird that a team led by Andrew Ng couldn't do better with the
11-billion-parameter model than with the 1-billion one.

This is becoming accessible to everyone. It's both exciting and terrifying:
it has the potential to save humanity from itself or to condemn us to
totalitarianism. I am sure machine learning, NLP, and statistical models are
what enable them to analyze the data they collect on us.

Big, fast NoSQL tables, clustering technology (MapReduce), and machine
learning are what allow these guys to do what they do. Our most prized toys
have become our enemies.

~~~
dumitrue
It's possible that the underlying model is just not particularly good at
learning from data. 11B parameters is a lot of free parameters to learn. For
instance, the main competitor to that paradigm is the work by Krizhevsky et
al., which uses convolutional networks with lots of parameter sharing, and I
think they get better performance (on a comparable task) with ~60M free
parameters.
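
For a sense of how much parameter sharing buys you, a back-of-the-envelope
comparison (layer sizes made up for illustration, not taken from either
paper) of a fully connected layer versus a convolutional layer over the same
image:

    # Illustrative sizes: 256x256 RGB input, 64 output feature maps,
    # 11x11 filters (AlexNet-style); none of these come from the papers.
    H = W = 256; C_in = 3; C_out = 64; k = 11

    # Fully connected: every output unit has its own weight per input value.
    fc_params = (H * W * C_in) * (H * W * C_out)
    print(f"{fc_params:.1e}")   # ~8.2e11 parameters

    # Convolutional: one k x k x C_in filter per output channel, shared
    # across all spatial positions.
    conv_params = k * k * C_in * C_out
    print(conv_params)          # 23,232 parameters

Weight sharing is the whole difference: the conv layer reuses the same ~23K
weights at every image location instead of learning a separate weight per
position.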

