1. We have just translated the contents from the Chinese book https://github.com/d2l-ai/d2l-zh. However, the translation quality is not good enough yet; thus, we are still editing. As @EForEndeavour put it, you are more than welcome to become a contributor to the book if you spot any issue: https://github.com/d2l-ai/d2l-en Your help is valuable for making the book better for everyone.
2. Indeed, a few instructors have recently asked us why we use MXNet in 'Dive into Deep Learning' (D2L). Here is what we think:
a) Traditional deep learning (DL) textbooks often illustrate algorithms without implementations.
b) In view of this, D2L features both algorithms and implementations for DL. Doing so does not require features exclusive to any particular deep learning framework.
c) Thus, even when re-implementing the algorithms in the book with other DL frameworks, the code won't look too different. We use MXNet because we are familiar with it. No matter which DL framework one starts with, it should be easy to switch to another.
As a concrete example, in the case of applying RNNs to language models, the implementation includes data preprocessing, model construction, and the training loop. D2L guides you through how to transform text data to allow efficient mini-batch iteration, how to implement an RNN (with or without the built-in RNN API), and how to efficiently and effectively train a language model. On one hand, even if a DL novice memorizes the algorithms in a traditional textbook, it is still hard to apply them to a real project without knowing the implementation details. On the other hand, such implementations are general: the code will be similar even when re-implemented with another framework.
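To make the "transform text data for efficient mini-batch iteration" step concrete, here is a minimal, framework-agnostic sketch in plain Python. The function name `seq_data_iter` and all variable names are illustrative (not the book's actual code): it partitions a sequence of token indices into (input, target) mini-batches, where each target is the input shifted one token ahead, so the model learns next-token prediction at every position.

```python
def seq_data_iter(corpus, batch_size, num_steps):
    """Yield (X, Y) mini-batches from a list of token indices.

    X has shape (batch_size, num_steps) as nested lists; Y is X shifted
    one token ahead, so position t of Y is the prediction target for
    position t of X.
    """
    # Number of full subsequences of length num_steps we can form,
    # reserving one extra token at the end for the shifted targets.
    num_subseqs = (len(corpus) - 1) // num_steps
    # Starting indices of the subsequences.
    starts = [i * num_steps for i in range(num_subseqs)]
    # Group consecutive subsequences into batches of batch_size.
    for i in range(0, len(starts) - batch_size + 1, batch_size):
        batch_starts = starts[i:i + batch_size]
        X = [corpus[s:s + num_steps] for s in batch_starts]
        Y = [corpus[s + 1:s + 1 + num_steps] for s in batch_starts]
        yield X, Y

# Usage: a toy "corpus" of token indices 0..24.
for X, Y in seq_data_iter(list(range(25)), batch_size=2, num_steps=5):
    print(X, Y)
```

Once the data is in this shape, swapping frameworks only changes how the nested lists are wrapped into tensors; the batching logic itself stays the same.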
3. We thank the institutions that have adopted or recommended D2L in their courses, such as UCLA CS 269 Foundations of Deep Learning, University of Science and Technology of China Deep Learning, UIUC CS 498 Introduction to Deep Learning, and UW CSE 599W Systems for ML. When we wrote the book in Chinese, it benefited from a lot of feedback at https://discuss.gluon.ai/latest?order=views and from pull requests by 120+ contributors. It would be very helpful to get feedback and help from more readers while we are editing.
Don't get me wrong, it seems like a great course, and anyone interested in the field will certainly get a lot of mileage out of it. But by choosing a rather obscure framework it kind of shoots itself in the foot compared to, e.g., the Fast.ai course, which is jam-packed with practical advice and uses PyTorch. Like it or not, there are two dominant frameworks right now: TF and PyTorch, and the latter is my personal favorite by far. A more practical (and fairly low-effort) approach, therefore, would be to duplicate the code samples in PyTorch.
That said, Fast.ai shoots itself in the foot a little too, by requiring Python 3.6 and up, which a lot of people don't have out of the box. I understand why they do it (type annotations), but still. They also hide PyTorch behind a rather large ball of Python with cognitive load of its own.