

Deep Learning Courses - cjdulberger
https://developer.nvidia.com/deep-learning-courses

======
krat0sprakhar
Few other Machine Learning / Deep Learning courses here -
[https://github.com/prakhar1989/awesome-courses#machine-learning](https://github.com/prakhar1989/awesome-courses#machine-learning)
</shameless-plug>

------
ojbyrne
So I see a link at the bottom: "Andrew Ng's Coursera course provides a good
introduction to deep learning" which links to his "Machine Learning" class.
This leads me to believe that "deep learning" is a synonym for "machine
learning." Honest question: is that the case? Just a rebranding?

~~~
vonnik
"Deep learning" is used a couple ways. In the loose sense employed by some
journalists, deep learning is probably synonymous with both machine learning
and AI. It's at the cutting edge of the current hype.

In the strict sense, deep learning refers to neural networks with more than
one hidden layer. The depth of neural networks is equal to the longest path
between input and output nodes. "Shallow" networks like a simple autoencoder
might have two layers. They're not considered deep or part of deep learning.
But if you stack them together, you have a deep net; e.g. many restricted
Boltzmann machines form a deep-belief network. [1]

As @paulsutter mentioned, one aspect of having several hidden, or
intermediate, layers in a neural network is that you can combine relatively
simple, granular features (like individual pixels or words) into more complex
combinations. Neural networks recombine simple features automatically, and
then learn which groupings should be lent significance as signals through the
backpropagation of error.
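To make the strict definition above concrete, here's a toy sketch in plain NumPy (layer sizes, learning rate, and the XOR task are my own illustrative choices, not anything from the course): a network with two hidden layers, trained by backpropagation, learning a target that depends on a combination of its input features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: the target depends on a *combination* of the two input features,
# which no model without hidden layers can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two hidden layers of 4 units each -- "deep" in the strict sense above.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 4)); b2 = np.zeros(4)
W3 = rng.normal(0, 1, (4, 1)); b3 = np.zeros(1)

lr = 0.5
losses = []
for step in range(5000):
    # Forward pass: each layer recombines the previous layer's features.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backpropagation: the error signal flows back through every layer,
    # adjusting which feature combinations get weight.
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    W3 -= lr * h2.T @ d_out; b3 -= lr * d_out.sum(0)
    W2 -= lr * h1.T @ d_h2;  b2 -= lr * d_h2.sum(0)
    W1 -= lr * X.T @ d_h1;   b1 -= lr * d_h1.sum(0)

print("loss went from %.3f to %.3f" % (losses[0], losses[-1]))
```

The point isn't the accuracy on this tiny task, just the shape of the mechanism: simple inputs get recombined layer by layer, and backprop decides which combinations matter.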

They're attracting all this hype because they actually do something amazing,
albeit through brute force computation. Many in AI scoff at neural networks
because they've been around a long time and have no particular elegance, but
we're now in a historical moment where we have the hardware to make them work,
and they're breaking records in almost every data type; e.g. images, sound,
time series, etc.

So no, it's not a rebranding, it's a thing. We can now replicate the human
faculty of perception with machines in many domains, and that's going to make
the future quite weird.

[1]
[http://deeplearning4j.org/restrictedboltzmannmachine.html](http://deeplearning4j.org/restrictedboltzmannmachine.html)

------
ericmo
Glad to know there are recordings of this. I registered, but I've been missing
the lectures because of timezone differences.

------
ris
Translation: CUDA indoctrination courses.

~~~
itsnotlupus
Is indoctrination even needed at this stage?

Are there openCL equivalents to the popular GPU-accelerated NN
frameworks/libraries?

You only need to convince people when they have another choice.

~~~
agibsonccc
Disclaimer: it's my project, but I run an open source project called
deeplearning4j, whose algorithms have a hardware abstraction layer built into
them called nd4j. You get numpy-like functionality on the JVM, with hardware
support, as a jar file. Deeplearning4j itself is built on top of that. I'd
love to help spread deep learning to different runtimes.

Here's the current work being done on opencl:
[https://github.com/deeplearning4j/nd4j/tree/master/nd4j-jocl-parent](https://github.com/deeplearning4j/nd4j/tree/master/nd4j-jocl-parent)

We'd love to get this finished. A bit more to do yet, though... definitely
looking for contributors here. You'll get OpenCL neural nets for free.

~~~
dharma1
interesting. What's the performance like on the same/equivalent GPU when
comparing CUDA to OpenCL?

~~~
agibsonccc
Need to run empirical benchmarks. CUDA is usually faster. I'd like to run my
own benchmarks with nd4j, though. We have our own benchmark setup that works
for every backend, which allows us to do some interesting things. CUDA itself
usually also wins on data transfer latency [1].

Looking forward to running these ourselves after our OpenCL support kicks in
(only the kernels are written =/)

I plan on basing the OpenCL work on our CUDA work, which is fairly well
established at this point (mainly optimizations, not much change in
architecture).

[1]: [http://arxiv.org/pdf/1005.2581.pdf](http://arxiv.org/pdf/1005.2581.pdf)
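For what such "empirical benchmarks" might look like in the simplest case, here is a generic micro-benchmark harness sketch in Python. The two "backends" here are both just NumPy matmul, standing in for hypothetical CUDA/OpenCL implementations of the same op; none of this reflects nd4j's actual benchmark setup.

```python
import time
import numpy as np

def benchmark(fn, *args, warmup=3, reps=10):
    # Warm-up runs absorb one-time costs (allocation, kernel compilation,
    # caches), which would otherwise dominate the first measurement.
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    # Best-of-N is less sensitive to scheduler noise than the mean.
    return min(times)

# Stand-in workload: a 256x256 matrix multiply. In a real comparison this
# would be the same op dispatched to each hardware backend.
a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
t = benchmark(np.matmul, a, b)
print("matmul 256x256 best-of-10: %.6fs" % t)
```

Note that a fair cross-backend comparison also has to decide whether device data-transfer time counts toward the measurement, which is exactly the latency point in [1].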

~~~
ris
"CUDA is usually faster."

I wouldn't be surprised if this turned out not to be accidental. I mean, it
wouldn't work against NVidia for OpenCL to continue to be seen as the "slower
option". So I'm sure their efforts to improve their OpenCL implementation
aren't considered as important from a business point of view.

~~~
dharma1
True, though if you compare roughly equivalent Nvidia and AMD GPUs, my
impression is that the CUDA implementation on Nvidia still outperforms OpenCL
on AMD for deep learning. Is this right?

AMD cards are a lot cheaper, would be good to be able to use them for deep
learning too -
[https://www.reddit.com/r/linux/comments/2zgpj8/15000_nvidia_...](https://www.reddit.com/r/linux/comments/2zgpj8/15000_nvidia_developer_system/cpjjff5)

And new fast, low-power FPGAs from Altera support OpenCL.
[https://www.altera.com/products/design-software/embedded-software-developers/opencl/overview.highResolutionDisplay.html](https://www.altera.com/products/design-software/embedded-software-developers/opencl/overview.highResolutionDisplay.html)

~~~
ris
"Is this right?"

Yeah, I think this is the previously mentioned float performance thing.

