In particular, ~300+ pages are spent on techniques for parsing and lexing.
Now, if I want to write a baby yacc [sic], this might be useful. But in this modern era, parsing is a very well understood problem, with lots of easy-to-use tools. Yes, in a production compiler, helpful syntax and type error messages are key, but when you're learning the compiling part, you want a book that doesn't spend half of its volume on that topic. Also, the 2nd edition doesn't seem to have a single level of reader in mind.
The intro book people should look at (as mentioned elsewhere) is Appel's Modern Compiler Implementation in ML, and for advanced stuff folks should look at things like Appel's Compiling with Continuations, the Muchnick book, and one or two others.
I think the point is that (1) most folks' exposure to the dragon book predates the 2nd edition, and (2) in your experience, most of the learning sounds like it came from the lecture notes and problem sets rather than the text (presumably used in practice as a reference supplement).
Also, IDE/editor support like IntelliSense can benefit greatly from integration with a recursive descent parser. If you encode the cursor's position as a special token, the parser can handle that token and do a deep return (throw an exception, longjmp, whatever) with relevant context: active scopes, etc.
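A minimal sketch of that deep-return trick, with a toy brace-block grammar and invented names (CURSOR, CompletionHit are mine, not from any real IDE): the parser tracks a stack of scopes, and when it tries to consume the cursor token it bails out via an exception carrying the scopes that were active at that point.

```python
CURSOR = "<cursor>"

class CompletionHit(Exception):
    """The 'deep return': carries the scopes active at the cursor."""
    def __init__(self, scopes):
        self.scopes = scopes

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0
        self.scopes = [set()]          # stack of active scopes

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self):
        tok = self.peek()
        if tok == CURSOR:              # abort with context instead of parsing on
            raise CompletionHit([s.copy() for s in self.scopes])
        self.pos += 1
        return tok

    def parse_block(self):
        # block := "{" (name | block)* "}"  -- each name declares into the block
        assert self.eat() == "{"
        self.scopes.append(set())
        while self.peek() != "}":
            if self.peek() == "{":
                self.parse_block()
            else:
                self.scopes[-1].add(self.eat())
        self.eat()                     # consume "}"
        self.scopes.pop()

def complete(tokens):
    """Return the names visible at the cursor, or [] if no cursor was hit."""
    try:
        Parser(tokens).parse_block()
    except CompletionHit as hit:
        return sorted(set().union(*hit.scopes))
    return []

# x is declared in the outer block, y in the inner one; both are in scope
print(complete(["{", "x", "{", "y", CURSOR, "}", "}"]))  # ['x', 'y']
```

A real implementation would of course carry richer context (types, expected tokens) and recover rather than abort, but the control-flow shape is the same.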
I certainly don't begrudge you using the existing tools, but speaking as someone writing a "baby yacc", I don't think parsing is quite the solved problem you make it out to be.
Yes, there is TONS of literature on the subject, but new techniques and algorithms are still being discovered. ANTLR's LL(*) parsing hasn't been published yet (though I believe he's working on it), and only three years ago Frost, Hafiz and Callaghan published an algorithm for generalized top-down parsing in polynomial time. There's also the idea of PEGs, published by Bryan Ford in 2004, and a guy at UPenn who is carving out a class of languages between the regular and push-down languages (http://www.cis.upenn.edu/~alur/nw.html).
All of this is to say: we're still discovering things about parsing. It's not a settled subject.
Also, here's a good blog post in which the author, trying to use parsing tools to syntax-highlight text as it's being edited, quickly found himself at the frontiers of parsing research.
Actually, I don't have any personal experience with the Stanford course; I'm really just reading the book for fun. (For all its faults, it's quite engrossing!) I just thought it would be relevant to investigate the context in which this book is generally used.