Jack Crenshaw's series of articles is an interesting example of this from a bygone era (his example was a compiler written in Turbo Pascal, directly generating Motorola 68000 assembly), but one close to our own time in the ways that count. For example, he emphasizes the fast turnaround you get when your compiler runs on a PC instead of a mainframe across campus.
His little digression in the eighth part is particularly on-topic here.
He mentions the multi-pass design common to early compilers (63 passes for a compiler written for the IBM 1401!) as being a consequence of having to fit the compiler into very limited RAM. Similarly, Crenshaw's compiler relies on a fairly capacious stack to let the parser be recursive; on a 1950s-era mainframe without a stack, or a 1960s-era minicomputer where procedure calls were not re-entrant, you'd be left doing a lot more bookkeeping by hand.
Then you have the massive emphasis on error detection and reporting forced on programmers by the slow turnaround inherent in batch systems: You can't just die at the first error with a one-line message if it'll take two weeks to run the program again; soldiering on and verbosely reporting every step was the only way to get the most out of your precious computer time.
And, of course, less-optimized code is easier (and faster!) to generate, but that can only be justified if your machine is fast enough that the optimizations aren't actually needed. tcc, the Tiny C Compiler, is a modern incarnation of this idea; it compiles fast enough that you can throw the compiled code away and rerun the compiler every time, the way you do in Perl or Ruby.
And he mentions standardization: his compiler can only handle languages whose grammars can be expressed cleanly in BNF, which is okay because those kinds of grammars are dominant now anyway. Kinda hard to predict that ahead of time, though.
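For a concrete sense of what "express cleanly in BNF" means, here's a hypothetical three-rule expression grammar of the kind recursive descent handles comfortably (illustrative only, not taken from Crenshaw's series):

```
expr   ::= term   { "+" term }
term   ::= factor { "*" factor }
factor ::= digit  | "(" expr ")"
```

Languages that resist this kind of layered, context-free description (say, ones where parsing depends on runtime or whitespace context) are the ones such a compiler can't handle.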