
Dive into Deep Learning: Berkeley Course
http://d2l.ai/chapter_introduction/index.html
======
minimaxir
This course appears to be above average in terminology and code quality for a
free online course (although the DL part focuses on MXNet, which is not as
ubiquitous as TensorFlow/PyTorch).

~~~
gm3dmo
Spelling not so much.

~~~
EForEndeavour
Feel free to fix any typos you find via pull request!
[https://github.com/d2l-ai/d2l-en](https://github.com/d2l-ai/d2l-en)

------
astonzhang
Thanks.

1\. We have just translated content from the Chinese book
[https://github.com/d2l-ai/d2l-zh](https://github.com/d2l-ai/d2l-zh). However,
the translation quality is not yet good enough, so we are still editing. As
@EForEndeavour put it, you are more than welcome to become a contributor to
the book if you spot any issue:
[https://github.com/d2l-ai/d2l-en](https://github.com/d2l-ai/d2l-en). Your
help is valuable for making the book better for everyone.

2\. Indeed, recently we were also asked by a few instructors why we use MXNet
in 'Dive into Deep Learning' (D2L). Here is what we think:

a) Traditional deep learning (DL) textbooks often illustrate algorithms
without implementations.

b) In view of this, D2L features both algorithms and implementations for DL.
Doing so does not require exclusive features of any deep learning framework.

c) Thus, even when re-implementing the algorithms in the book with other DL
frameworks, the code descriptions won't be too different. We use MXNet because
we are familiar with it. No matter which DL framework one uses, it should be
easy to switch to another one.

As a concrete example, in the case of applying RNNs to language models, the
implementation includes data preprocessing, model construction, and training
loops. D2L will guide you through how to transform text data to allow
efficient mini-batch iteration, how to implement an RNN (with or without using
the RNN API), and how to efficiently and effectively train a language model.
On one hand, even if a DL novice can memorize the algorithms in a traditional
textbook, it is still hard to apply them to a real project without knowing
implementation details. On the other hand, such implementations are general:
the code will be similar even when re-implemented with another framework.
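The preprocessing step mentioned above (transforming text for efficient mini-batch iteration) is framework-agnostic, which illustrates the point about portability. Below is a minimal sketch in plain Python; the function names and the character-level setup are my own assumptions for illustration, not code from the book.

```python
# Hypothetical sketch of language-model preprocessing: map characters to
# integer ids, then yield fixed-length mini-batches where the targets are
# the inputs shifted by one position (next-character prediction).

def build_vocab(text):
    """Map each distinct character to an integer id."""
    chars = sorted(set(text))
    return {ch: i for i, ch in enumerate(chars)}

def batch_iter(text, vocab, batch_size, seq_len):
    """Yield (inputs, targets) mini-batches of shape [batch_size][seq_len]."""
    ids = [vocab[ch] for ch in text]
    # Trim so the corpus splits evenly into batch_size contiguous streams,
    # leaving one extra token for the shifted targets.
    n = (len(ids) - 1) // (batch_size * seq_len) * (batch_size * seq_len)
    xs, ys = ids[:n], ids[1:n + 1]
    stream = n // batch_size  # tokens per contiguous stream
    for start in range(0, stream, seq_len):
        x_batch, y_batch = [], []
        for b in range(batch_size):
            offset = b * stream + start
            x_batch.append(xs[offset:offset + seq_len])
            y_batch.append(ys[offset:offset + seq_len])
        yield x_batch, y_batch

text = "the quick brown fox jumps over the lazy dog " * 4
vocab = build_vocab(text)
batches = list(batch_iter(text, vocab, batch_size=2, seq_len=5))
x0, y0 = batches[0]
```

Feeding the resulting batches into an RNN cell would look the same whether that cell is written in MXNet, PyTorch, or TensorFlow, which is the sense in which "the code will be similar" across frameworks.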

3\. We thank institutions for adopting or recommending D2L in their courses,
such as UCLA CS 269 Foundations of Deep Learning, University of Science and
Technology of China Deep Learning, UIUC CS 498 Introduction to Deep Learning,
and UW CSE 599W Systems for ML. When we wrote the book in Chinese, it
benefited from a lot of feedback at
[https://discuss.gluon.ai/latest?order=views](https://discuss.gluon.ai/latest?order=views)
and pull requests from 120+ contributors. It would be very helpful to get
feedback and help from more readers while we are editing.

Happy weekend!

------
meow_mix
Could we get a good course on something _other than_ machine learning / ML?
Maybe hardware?

~~~
gnulinux
Even something niche. I want a great course in computability theory. HN is
almost 100% ML these days, as if ML is the only exciting thing happening in CS
world. Very disappointing.

~~~
philzook
Not computability exactly, but Ryan O'Donnell from CMU has put up videos of
his undergraduate and graduate complexity courses. Might be of interest.
[https://www.youtube.com/channel/UCWnu2XymDtORV--qG2uG5eQ/playlists](https://www.youtube.com/channel/UCWnu2XymDtORV--qG2uG5eQ/playlists)

------
m0zg
Looks pretty great, except for the fact that it would require you to learn
MXNet, which hardly anybody uses. Paid for by Amazon I guess?

~~~
maldeh
The instructors include core contributors to MXNet. I don't imagine one would
expect, say, François Chollet to do a course on PyTorch :)

~~~
m0zg
Keras is quite popular as well. My point is, these courses already carry quite
a high cognitive load even without having to deal with an unfamiliar
framework.

Don't get me wrong, it seems like a great course and anyone interested in the
field will certainly get a lot of mileage out of it. But by choosing a rather
obscure framework it kind of shoots itself in the foot compared to e.g. the
Fast.ai course, which is jam-packed with practical advice and uses PyTorch.
Like it or not, there are two dominant frameworks right now: TF and PyTorch;
the latter is my personal favorite by far. A more practical (and fairly low-
effort) approach, therefore, would be to duplicate the code samples in
PyTorch.

That said, Fast.ai shoots itself in the foot a little too, by requiring Python
3.6 and up, which a lot of people don't have out of the box. I understand why
they do it (type annotations), but still. They also hide PyTorch behind a
rather large ball of Python with a cognitive load of its own.

~~~
maldeh
Your point is understood, and reiterated in other posts here. I just wanted to
add context: the course is presented by an author of MXNet, who is free to
present it in his own framework.

Anyway, there's no dearth of courses in the popular frameworks already. It
doesn't really help enrich the community if all educational resources only
taught the current "market winner"; we might be missing other opportunities or
ways of thinking. Imagine if all schools only taught JavaScript or Java!

