I’m a bit surprised that this doesn’t mention suffix (index) notation, especially since multilinear maps are so common in machine learning and suffix notation is such a natural way to work with them.
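For anyone who hasn't seen it: suffix notation carries over almost verbatim to code via NumPy's `einsum` (the example below is purely illustrative, not from the article):

```python
import numpy as np

# A bilinear map B(x, y) = sum_ij x_i A_ij y_j, written exactly as in
# suffix notation via the subscript string 'i,ij,j->'.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(3)
y = rng.standard_normal(4)

b = np.einsum('i,ij,j->', x, A, y)

# Same result via ordinary matrix products: x^T A y.
assert np.isclose(b, x @ A @ y)
```

The subscript string is the whole point: repeated indices are summed, so the code reads the same as the maths on paper.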
I'm still at the beginning, but so far I'd recommend the book. That said, the steep price (almost 100 USD over here) makes it a bit difficult to recommend. I might buy Boyd and Vandenberghe's Introduction to Applied Linear Algebra later and write a comparison.
For further reading, try Matrix Analysis and Applied Linear Algebra by Meyer.
- It is inaccurate because n doesn't enter into the definition (and thus all Euclidean spaces would be the same).
- You want ordered n-tuples, not sets. If you actually defined elements of R^n as sets of n real numbers, you would not be able to represent the diagonal (a point like (c, c, ..., c) collapses to the one-element set {c}), or, if you fix that, you would not be able to distinguish certain points on the diagonal of R^n from certain points on the diagonals of some of its subspaces.
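The point is easy to make concrete in code (Python here, purely as illustration):

```python
# Ordered pairs distinguish (0, 1) from (1, 0); sets cannot.
assert (0, 1) != (1, 0)
assert {0, 1} == {1, 0}

# Worse, a point on the diagonal of R^2 such as (1, 1) collapses
# to the one-element set {1}, so the coordinates are lost entirely.
assert {1, 1} == {1}
assert len({1, 1}) == 1
```

So "a set of n real numbers" is simply the wrong object: it forgets both order and multiplicity.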
Moreover, figure 5 talks about "overlapping vectors". This is highly non-standard and would definitely need to be defined.
Further, you're setting your readers up for trouble by defining vector arithmetic in terms of bases.
The first is more an error of understanding (for lack of a better term), whereas objecting to tuples being treated as de facto sets is truly a matter of pedantry.
After stumbling across the author's twitter where he already complains that people are "being mean on HN", and seeing his responses there, I have some serious doubts about whether he is fit to be teaching people mathematics. I honestly applaud him for writing the article, and don't hold the math mistake against him at all. But the incredible defensiveness when confronted with a small mistake is absurd.
I mean no offense by saying that, and it's human nature to be wrong.
However, this is not a "conceptual inaccuracy". It's wrong. Straight up, old-fashioned wrong. And don't tell me it's of "little to no relevance" that your definition of R^2 does not distinguish between (0,1) and (1,0).
She's a phenomenal teacher, with real grounding in the practice of applying it to ML and deep learning.
On the topic of open source learning, I take every chance I can to heartily recommend fast.ai’s course. It’s a good intro to Deep Learning that leaves you informed enough to build things, and equips you to ask follow-on questions and dive deeper when/where you need to.
It has a lot more detail on things like floating-point storage, memory layout, sparse matrices, iterative methods, etc. than most linear algebra courses, but doesn't go much into proofs, geometric interpretations, and other material that's less needed for algorithm design and implementation.
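To give a taste of the storage-format side of that material, here is a minimal sketch of compressed sparse row (CSR) storage and a matrix-vector product (plain Python; the function names are mine, not the course's):

```python
def to_csr(dense):
    """Convert a dense row-major matrix (list of lists) to CSR arrays.

    CSR stores only the nonzeros: `data` holds the values, `indices`
    their column positions, and `indptr[i]:indptr[i+1]` delimits row i.
    """
    data, indices, indptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return data, indices, indptr

def csr_matvec(data, indices, indptr, x):
    """Multiply a CSR matrix by vector x, touching only the nonzeros."""
    y = []
    for i in range(len(indptr) - 1):
        y.append(sum(data[k] * x[indices[k]]
                     for k in range(indptr[i], indptr[i + 1])))
    return y

A = [[1, 0, 0],
     [0, 0, 2],
     [0, 3, 0]]
data, indices, indptr = to_csr(A)
assert (data, indices, indptr) == ([1, 2, 3], [0, 2, 1], [0, 1, 2, 3])
assert csr_matvec(data, indices, indptr, [1, 1, 1]) == [1, 2, 3]
```

For a matrix that's mostly zeros, this stores and touches O(nnz) entries instead of O(mn), which is exactly the kind of implementation concern the course emphasizes.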
(Disclaimer, I'm from fast.ai.)
The rendering of some of the LaTeX does seem to be a bit of a challenge; I see some intermingled code/text (on the latest Mozilla on Ubuntu 18). CPU usage also shoots up.
What scripts are rendering the LaTeX?
Just another example of the huge power in creating accurate digital models of the world.
As an aside, my background is in optimal control/control theory and instrumentation, and operational calculus, state-space representations, and matrices were a huge part of the "Jiu-Jitsu" we did. We tinkered with RST control and robust systems, and found optimal systems and sequences (Hamilton-Jacobi-Bellman-Pontryagin), with matrices flying all over our exam papers as we solved these systems. It was a nice abstraction.
Very, very powerful tools that command immense respect for Laplace, Lagrange, and people like them who invented things to solve problems we're facing now.
It's on OCW, so it's free(!), including the videos and handouts.
Please don't gatekeep education like this. I've caught teachers from established institutions in mistakes hundreds of times in my life, and often just as defensive. And I've found "random blogs" on the internet that explain things far clearer than the school recommended book. The only thing that is important is the quality of the resource, not where it came from.
Strang's free videos are great. That's the important thing to share.
(I edited to clarify which part of the parent comment I took issue with).
Absolutely! And I've had medical doctors make lots of mistakes, while random blogs on the Internet have been right. Buuuut, you know.
> The only thing that is important is the quality of the resource, not where it came from.
Right. And the blog author we're discussing is flat out refusing to accept that his material has basic mistakes. That's a gigantic red flag. Again: it's not about being wrong. It's about refusing to accept that he's wrong.
No, I don't. Could you please explain? If random blogs have helped you catch doctors making mistakes (as they have me), then that would seem to strengthen my point: gatekeeping sources of valuable information based on their origin instead of their quality is a mistaken approach.
> If you have the option of listening to a world-renowned mechanic instead.
With the note that "world-renowned" has almost nothing to do with "education from established institutions" I feel like we've made progress here, so I'll leave it at that.
I also really appreciate the links to other learning materials at the beginning (both free and non-free).