I've been trying to teach myself ML and AI for a while now, and though it's only tangentially relevant to the article, perhaps others can take a few tips from my experience. First, I didn't really have a breakthrough until I ditched the 'recommended' textbooks and video courses and just started picking a topic and learning by doing. Anything I don't understand when trying to implement it, I google/YouTube/Wikipedia, and I keep messing with it until I know how it works at a conceptual level. That's where resources like this article really come in handy.
For the heavier math parts, I just write a short summary of the formula and what its various parameters do, and try to make a mental note of it. I certainly don't try to use the formulas to solve complex mathematical problems or write an implementation in Python; I chalk that task up to `someday when I have the time`.
Finally, I'll get a dataset and try to solve various problems with the new skill I just learned, using R/Python & co.
This method has dramatically accelerated the pace at which I've been able to acquire ML/AI skills that I also know how to apply in the real world. Before, I felt like I was moving at a snail's pace.
This method might not work well for everyone, but it's at least an interesting alternative to the usual recommendation of doing online courses A, B, C and reading books X, Y, Z.
I personally found it very hands-on: it jumps into practical application right off the bat, which helps keep the motivation steady. Having said that, it's not easy or dumbed down in any sense.
You should probably also include the gensim package https://radimrehurek.com/gensim/, since it has the most popular Python word2vec implementation. One of the links I clicked on mentions it and walks you through using it, but I think it would make sense to point people to it directly in case they don't have time for a tutorial.
There is also a new higher-level library built on spaCy which looks good: https://textacy.readthedocs.io/en/latest/
I skimmed through spaCy briefly and it looks great.
The v2 English models are more accurate, and can assign vectors to any word, including unknown words, using the context and the word shape. Overall it's much better -- but it's still in alpha. The docs are already better, though.
(This is an awesome resource, thank you for compiling it.)
These were written to demonstrate modern techniques with readable code, after seeing way too many indecipherable tangles of models. PyTorch plays a big part in that readability.
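The models themselves aren't reproduced here, but for readers who haven't seen why PyTorch gets credit for readability, here is a generic sketch (my own illustration, not from the author's code) of a tiny classifier whose forward pass reads top to bottom as plain Python:

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A deliberately small model: each step of forward() is one readable line."""

    def __init__(self, vocab_size=100, embed_dim=16, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        vectors = self.embed(token_ids)   # (batch, seq_len, embed_dim)
        pooled = vectors.mean(dim=1)      # average over the sequence
        return self.fc(pooled)            # (batch, num_classes) logits

model = TinyClassifier()
logits = model(torch.randint(0, 100, (4, 7)))  # batch of 4 sequences, length 7
```

Because `forward` is ordinary eager Python, you can drop a print or a debugger breakpoint between any two lines, which is a big part of why these models stay decipherable.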
P.S. It's only been 1 day since I installed Jupyter to recreate graphs and implement formulas.
P.P.S. I thought about implementing a C# kernel for Jupyter, but I think it's better to continue with Python.
On Twitter I saw mention of GDGS: Gradient Descent by Grad Student :)
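For anyone who hasn't met the non-joke version: plain gradient descent is only a few lines. A sketch minimizing the toy function f(x) = (x - 3)^2, whose gradient is 2(x - 3):

```python
def grad_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient of the objective."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)**2; the minimum is at x = 3.
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The grad-student variant replaces the inner loop with a human staring at loss curves and nudging hyperparameters by hand.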
Before you say that OCR is a solved problem because of Tesseract, please read this:
Machine learning - no one will tell you the answer because they're working on it (whatever "it" is) right now. But here's a good tip from Andrew Ng:
"...almost anything a typical human can do with less than one second of mental thought, we can probably [do] now or in the near future automate using AI... Take a security guard looking at a video feed and saying, “Are there people in this? Are they doing something suspicious?” That task is actually a lot of one-second judgment thoughts strung together, so I think a lot of it can be automated."
So that's the level of current ML.
NLP - natural language is where the most valuable applications are.