[sorry for the plug, but I think my tutorial is more comprehensive in explaining how the thing actually works]
For example, the "(" token handles grouping in prefix position and function calls in infix position, and those two uses have nothing to do with each other, so I split PrefixParselets and InfixParselets into entirely separate dictionaries.
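A minimal sketch of that split, in Python (hypothetical names, not the tutorial's actual code): two dictionaries keyed by token kind, with "(" registered in both, meaning grouping as a prefix parselet and a function call as an infix parselet.

```python
import re

def tokenize(src):
    # numbers, identifiers, and single-character operators
    return re.findall(r"\d+|\w+|[()+*,\-]", src) + ["<eof>"]

def tok_kind(tok):
    if tok.isdigit():
        return "num"
    if tok.isalpha():
        return "name"
    return tok

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def next(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def expect(self, tok):
        assert self.next() == tok

    def parse(self, min_bp=0):
        tok = self.next()
        # Prefix position: look the token up in PREFIX ("(" means grouping here).
        left = PREFIX[tok_kind(tok)](self, tok)
        # Infix position: look it up in INFIX ("(" means a call here).
        while True:
            bp = INFIX_BP.get(tok_kind(self.peek()))
            if bp is None or bp <= min_bp:
                break
            op = self.next()
            left = INFIX[tok_kind(op)](self, left, op)
        return left

def parse_call(p, left, t):
    # "(" after an expression is a call: parse comma-separated arguments.
    args = []
    if p.peek() != ")":
        args.append(p.parse(0))
        while p.peek() == ",":
            p.next()
            args.append(p.parse(0))
    p.expect(")")
    return ("call", left, args)

# Prefix parselets: "(" here is grouping.
PREFIX = {
    "num": lambda p, t: int(t),
    "name": lambda p, t: t,
    "(": lambda p, t: (p.parse(0), p.expect(")"))[0],
    "-": lambda p, t: ("neg", p.parse(70)),
}

# Infix parselets and their binding powers: "(" here is a call.
INFIX_BP = {"+": 10, "*": 20, "(": 80}
INFIX = {
    "+": lambda p, l, t: ("+", l, p.parse(10)),
    "*": lambda p, l, t: ("*", l, p.parse(20)),
    "(": parse_call,
}

def parse_expr(src):
    return Parser(tokenize(src)).parse()
```

The point is that `parse` never has to special-case "(": it just dispatches through whichever dictionary matches the token's current position, so the two meanings stay completely independent.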
This perfectly describes all parsing discussions and papers I've read. I love it.
Edit: I also have a generalized version that can take pretty much any input and any output (F#) here: https://github.com/fholm/Vaughan
Good post. I need to ruminate on this some more but it might well apply to something I'm working on.
Delphi uses this parsing method for expressions, and is one of the reasons it's so fast at compiling.
However, I'd be skeptical that the parser is one of the main reasons for its fast compile times. In my experience, parsing accounts for a tiny fraction of compilation time. The VB.NET compiler, for example, uses the same parsing technique, and it is ... "not fast" at compiling. It spends most of its time binding symbols.
The parser (or more accurately, the lexer) is the bit of the compiler which ultimately limits its performance, because it needs to see every character of the source. The compiler can never be faster than linear in the length of its input. Metrics on the Delphi compiler actually show that string hashing is the hottest piece of code, and that's already optimized about as far as it can go.
Delphi does indeed inherit unit syntax from Turbo Pascal. This is both a blessing and a curse; it makes writing well-structured programs easier by enforcing a logical consistency in ordering (programs more or less have to be written in a procedurally decomposed way), but it also makes writing programs with complex interdependencies between parts more awkward. The unit format is also one of the things that makes the compiler fast (or not slow); relinking a binary unit based on a partial compile is very quick, and as it's not a dumb C-style object format, the compiler can be intelligent about dependencies. Every symbol has a version, a kind of hash, associated with it, computed from its type, definition, etc. When units are relinked after a recompile of a unit, only the symbols need be looked up and their versions compared; if there is no version mismatch between imports and exports, then dependent units don't need recompilation.
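That symbol-versioning scheme can be sketched in a few lines of Python. This is a toy illustration of the idea described above, with hypothetical names and hashing, not Delphi's actual unit format: each exported symbol's version is a hash of its observable interface, and a dependent unit needs recompiling only if an imported version changed.

```python
import hashlib

def symbol_version(name, signature):
    # Hash the parts of a symbol that dependents can observe: name and type.
    return hashlib.sha256(f"{name}:{signature}".encode()).hexdigest()[:16]

def needs_recompile(imported_versions, exporting_unit):
    # imported_versions: {symbol: version} recorded when the dependent unit
    # was last compiled. exporting_unit: {symbol: signature} as of now.
    for name, old_version in imported_versions.items():
        sig = exporting_unit.get(name)
        if sig is None:
            return True  # imported symbol was removed
        if symbol_version(name, sig) != old_version:
            return True  # interface changed, version mismatch
    return False  # interfaces match: relinking the binary unit suffices

# Unit A exports two symbols; unit B records their versions when it imports them.
unit_a = {"Add": "func(Integer, Integer): Integer", "Pi": "const Double"}
b_imports = {n: symbol_version(n, s) for n, s in unit_a.items()}
```

Changing only a routine's body leaves its signature, and therefore its version, unchanged, so `needs_recompile` returns False and dependents are merely relinked; changing the signature bumps the version and forces recompilation.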
Random factoid: the guy who wrote the original source for the current Delphi compiler was also one of the developers of the .NET GC, and a CLR performance architect (Peter Sollich).
The Pratt parser is just one way of implementing such a parser.
To be technical, it is a form of left-corner parsing.