I audited Might's "Compilers" this spring. He live-coded a parser that parsed with derivatives, returning all possible parse trees whenever there were ambiguities in the grammar.  (Try getting that from yacc, or basically any other tool in existence right now.)
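Might's implementation is in Racket and handles full context-free grammars; as a much smaller illustration of the underlying idea, here is a sketch (my own, not from the course) of Brzozowski derivatives for plain regular expressions, which is the base case the CFG technique generalizes:

```python
# Minimal sketch of Brzozowski derivatives for *regular* languages.
# Parsing with derivatives extends this same D/nullable machinery to CFGs.
from functools import lru_cache  # not needed here, but essential for the CFG case

class Empty:                         # matches nothing
    pass

class Eps:                           # matches the empty string
    pass

class Char:
    def __init__(self, c): self.c = c

class Alt:
    def __init__(self, a, b): self.a, self.b = a, b

class Cat:
    def __init__(self, a, b): self.a, self.b = a, b

class Star:
    def __init__(self, a): self.a = a

def nullable(r):
    """Does r match the empty string?"""
    if isinstance(r, (Empty, Char)): return False
    if isinstance(r, (Eps, Star)):   return True
    if isinstance(r, Alt): return nullable(r.a) or nullable(r.b)
    if isinstance(r, Cat): return nullable(r.a) and nullable(r.b)

def D(c, r):
    """Derivative of r with respect to character c: the language of
    suffixes s such that c+s is in the language of r."""
    if isinstance(r, (Empty, Eps)): return Empty()
    if isinstance(r, Char): return Eps() if r.c == c else Empty()
    if isinstance(r, Alt):  return Alt(D(c, r.a), D(c, r.b))
    if isinstance(r, Cat):
        d = Cat(D(c, r.a), r.b)
        return Alt(d, D(c, r.b)) if nullable(r.a) else d
    if isinstance(r, Star): return Cat(D(c, r.a), r)

def matches(r, s):
    for c in s:                      # take one derivative per input character
        r = D(c, r)
    return nullable(r)               # accept iff the residue matches epsilon

# (ab)*a
r = Cat(Star(Cat(Char('a'), Char('b'))), Char('a'))
print(matches(r, "aba"))    # True
print(matches(r, "abab"))   # False
```

The CFG version needs memoization and laziness (recursive grammars make naive derivatives diverge), which is where the Racket implementation earns its keep.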
All of his coding was done in Racket scheme. At the beginning he told us we could use whatever for the projects, but doing certain things in C++ / Java / Python / some imperative language was "like bringing a knife to a gun-fight."
The final project was a working Python -> C translator.
Really badass class.
Any GLR-based tool can do that trivially, including recent versions of Bison (derived from yacc): http://www.gnu.org/software/bison/manual/html_node/GLR-Parse...
Ultimately I don't think this is the best way to develop parsers, particularly when you are designing the language itself (as opposed to writing a grammar for an existing language), because it gives you no hint that ambiguity exists until you actually encounter an ambiguous string (since the question of whether a grammar is ambiguous is undecidable).
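To make the "no hint until you hit an ambiguous string" point concrete, here is a toy demonstration (mine, not from the blog post): the grammar E → E '-' E | n is ambiguous, but a brute-force enumeration of all parse trees only reveals that once the input contains two or more operators.

```python
# Enumerate every parse tree of a token list under E -> E '-' E | n.
from functools import lru_cache

def parses(toks):
    toks = tuple(toks)
    @lru_cache(None)
    def go(i, j):
        trees = []
        if j - i == 1 and toks[i] == 'n':       # E -> n
            trees.append('n')
        for k in range(i + 1, j):               # E -> E '-' E, split at k
            if toks[k] == '-':
                for left in go(i, k):
                    for right in go(k + 1, j):
                        trees.append((left, '-', right))
        return trees
    return go(0, len(toks))

print(len(parses(['n', '-', 'n'])))             # 1: no ambiguity visible yet
for t in parses(['n', '-', 'n', '-', 'n']):     # 2 trees: (n-n)-n and n-(n-n)
    print(t)
```

An LR generator would have flagged the shift/reduce conflict at grammar-construction time; a GLR or derivative-based parser happily returns both trees at runtime.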
I wrote more about this on my blog: http://blog.reverberate.org/2013/09/ll-and-lr-in-context-why...
Actually, when using such an approach you have to fight the power of the resulting parser: you have to restrict it, or it will retain data for too long.
Including generator functions? That would be impressive. Well, one could solve it by putting all locals into a heap-allocated object and using the "switch over the whole function body" trick to continue execution at the correct position.
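For what that trick looks like, here is a hypothetical hand-lowering, written in Python rather than generated C: the generator `def counter(n): i = 0; while i < n: yield i; i += 1` becomes an object whose locals live on the heap and whose resume point is a saved state label.

```python
# Hypothetical lowering of a generator via the "switch over the
# function body" trick: locals become fields, the yield becomes a
# return, and a state label says where to resume.

class Counter:
    def __init__(self, n):
        self.n = n          # all "locals" live on the object (the heap)
        self.i = None
        self.state = 0      # resume label: 0 = start, 1 = after yield, 2 = done

    def next(self):
        if self.state == 0:         # entry: run the code before the loop
            self.i = 0
            self.state = 1
        elif self.state == 1:       # resumed just after the yield
            self.i += 1
        else:
            raise StopIteration
        if self.i < self.n:         # the while condition
            return self.i           # the yield
        self.state = 2
        raise StopIteration

c = Counter(3)
print(c.next(), c.next(), c.next())   # 0 1 2
```

In emitted C the `if`/`elif` chain would be a literal `switch (self->state)`, which is exactly why all locals must be hoisted out of the stack frame first.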
While the teacher was walking students through how to do loops, I got permission to hack away in the back of the room on this. I ended up building a BASIC-like interpreter with a decent graphics API. By the end of the class, my project was a multi-level breakout game I'd written in the interpreter I'd written. TL;DR: two years later, I talked to a girl.
That same game project has kept going ever since, with multiple compilers and VMs written along the way to power parts of the game. It's ended up an incredibly complex monstrosity with procedurally generated worlds and randomly altered scripts (some actions just won't be available on some play-throughs).
I spent probably 30 minutes on it two or three times a week for a couple of months. Most of that was adding features to my interpreter and coding the game itself. I recall it being extremely clear to me, even without any formal CS education. Even things like working with the stack and recursion were clear to me at the time.
If you have never built a compiler before, I cannot think of a better place to start.
Afterward, if you're curious about theory and advanced topics, I recommend heading to Compilers: Principles, Techniques, and Tools by Aho, Sethi, and Ullman (which covers a lot of theory associated with front-ends), then proceeding to Modern Compiler Implementation in ML by Appel (which covers some more advanced topics and back-end stuff). Then you can continue reading about more specific/advanced topics if you like.
Taking this course was a great way to learn more about compilers and fill a hole in my CS curriculum. Professor Alex Aiken is a great instructor and covers a good amount of material. I learned a lot about compiler construction despite having toyed with my own compiler before starting the course. The programming assignments were particularly tough, giving me useful experience in building compilers and a great sense of achievement.
(TL;DR from my full blog post: http://dirkjan.ochtman.nl/writing/2012/07/21/compilers-on-co...)
Edit: I don't say that to disparage it; I actually think that's an impressive accomplishment.
It's not the same but Vidar Hokstad has been writing a series for several years now in Ruby: http://www.hokstad.com/compiler/ .. and other resources aplenty: http://stackoverflow.com/a/1672/3951
The really great part of the Crenshaw tutorial is that it's so cohesive and concise. It's reminiscent of Wirth's compiler construction texts, but much simpler to follow.
Pascal is pretty much directly derived from early dialects of Algol, from which most modern programming language syntaxes derive; it's the original block-structured syntax, as opposed to line-structured assembly and early FORTRAN and the fully-bracketed Lisp syntax (which is also block-structured if you indent it sanely).
This is quite interesting: Pascal and its Successors - Niklaus Wirth
Lisp in Small Pieces is also a useful book, for those interested in Lisp/Scheme. It covers much of the same stuff as in the PDF I mentioned.
Easy to read, concise, and good for beginners.
I haven't gone down to the bare-metal level since (I've used parser generators instead), but it's a great piece of work that gives you some understanding of what yacc and family do under the covers (even though they produce a different type of parser). I continually recommend it as a starting point for anyone who wants to learn how to write parsers.
pdf - http://www.penguin.cz/~radek/book/lets_build_a_compiler.pdf
html - http://www.penguin.cz/~radek/book/lets-build-a-compiler/
Reading that reminded me why I'll never make predictions on computing, especially on what can't be done.
Not least because no language includes sufficient semantic information for the compiler to be able to safely optimize all the parts that the programmer can.
It's still mostly true - just not worth the effort for anything but smaller fragments.
I realize that many people believe that computers will some day be able to truly think on a human level. I just don't happen to be one of those people.
At the bottom of the page.