Direct link to 2018-2019:
You're right in saying that Stanford, UCB, UW, and other schools have had a much stronger track record producing deep learning models that have revolutionized certain fields (image classification, segmentation, NLP, etc.). I think MIT has hired some seriously talented researchers who have chosen to invest their time in different avenues of research that align more closely with the "engineering" side of deep learning.
For example, Eyeriss kickstarted the AI accelerator race. Halide is a DSL+runtime used as the basis in a lot of deep learning compilation tools, like Tensor Comprehensions. I don't think you're grossly misinformed, I just think that while other schools have invested in the theory, MIT is betting on a level of abstraction that's a bit lower.
1 = http://eyeriss.mit.edu/
2 = https://halide-lang.org/
3 = https://research.fb.com/blog/2018/02/announcing-tensor-compr...
4 = https://openai.com/blog/ai-and-compute/
In 2019, the top 5 institutions at NeurIPS were:
1. Google (USA) — 167.3
2. Stanford University (USA) — 82.3
3. MIT (USA) — 69.8
4. Carnegie Mellon University (USA) — 67.7
5. UC Berkeley (USA) — 54.0
NeurIPS is a big conference covering lots of topics, though. All of this says relatively little about MIT's true impact on deep learning in the last eight years.
PyTorch is one of the biggest frameworks, but it's unreasonable to suggest it is the only one that matters.
Also, for an intro class, PyTorch would have been a better choice imo, because it has a simpler and cleaner API. In my experience, it was a lot easier to get up and running with PyTorch compared to TensorFlow.
1 - https://openai.com/blog/openai-pytorch/
2 - https://news.ycombinator.com/item?id=21216200
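To illustrate the "easier to get up and running" point: here's a minimal sketch of a complete PyTorch training loop, fitting a one-parameter linear model to toy data. The data, learning rate, and step count are illustrative choices, not anything from the thread above.

```python
import torch
import torch.nn as nn

# Toy regression: fit y = 2x with a single linear layer.
model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.tensor([[1.0], [2.0], [3.0]])
y = 2 * x  # target: slope 2, intercept 0

for _ in range(500):
    optimizer.zero_grad()        # clear accumulated gradients
    loss = loss_fn(model(x), y)  # forward pass + loss
    loss.backward()              # backprop
    optimizer.step()             # gradient descent update
```

The whole define-model / forward / backward / step cycle is explicit and imperative, which is a big part of why people find it approachable for teaching compared to graph-based APIs.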