In that sense, the architecture of the underlying machine, even a VM, is an incidental detail. Who cares what's there as long as it executes?
Stack discipline is not a compiler thing, but an Algol thing. Wanna figure out how to implement lexical scope on a modern processor without much fuss? Implement Oberon/Pascal and be done with it. You will get your usual stack and base pointers, and you can even have fun with the C kids and read the Aleph1 and Mudge papers on smashing the stack.
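Something like this, in C clothing (my own illustrative layout, not any particular compiler's): each activation record carries a static link to the frame of the lexically enclosing procedure, and you reach an outer local by chasing that link.

    #include <stdio.h>

    struct frame {
        struct frame *static_link;  /* enclosing procedure's frame */
        int x;                      /* a local of the outer procedure */
    };

    /* Stands in for a procedure nested inside outer(): it reads the
       enclosing local by following the static link. */
    static int inner(struct frame *fp)
    {
        return fp->static_link->x + 1;
    }

    static int outer(void)
    {
        struct frame outer_fp = { NULL, 41 };
        struct frame inner_fp = { &outer_fp, 0 };
        return inner(&inner_fp);
    }

    int main(void)
    {
        printf("%d\n", outer());  /* prints 42 */
        return 0;
    }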
But there is more to control semantics than Algol's petty static/dynamic links, which implement nested functions and dynamic environments without first-class closures, let alone continuations. "push ebp; mov ebp, esp" is trite when your "procedures" are heap-allocated objects that can change at runtime, even get garbage collected.
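A minimal sketch of why, with made-up names: once procedures are first-class, the captured environment has to outlive the call that created it, so it moves to the heap, and the fixed prologue stops being the whole story.

    #include <stdio.h>
    #include <stdlib.h>

    struct env {
        int value;              /* one captured binding, for brevity */
    };

    struct closure {
        int (*code)(struct env *, int);
        struct env *env;        /* heap-allocated; a GC would manage it */
    };

    static int add_code(struct env *env, int y)
    {
        return env->value + y;
    }

    /* Builds a "procedure" at runtime.  Its frame cannot live on the
       stack, because the closure is returned and used later. */
    static struct closure make_adder(int x)
    {
        struct env *e = malloc(sizeof *e);
        e->value = x;
        return (struct closure){ add_code, e };
    }

    int main(void)
    {
        struct closure add3 = make_adder(3);      /* closure construction */
        printf("%d\n", add3.code(add3.env, 4));   /* application: prints 7 */
        free(add3.env);                           /* the part a GC automates */
        return 0;
    }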
So a change in the complexity and the semantic geography of the problem necessitates a departure from more primitive implementation techniques.
As for IRs: the traditional Lisp intermediate representations are just the primitives for binding, closure construction, and closure application. Straight out of the theory. This is what the lambda conversions spit out, and they're directly executable by your underlying system. So now you have more time to tinker with the semantics, improve the optimization phases, and generally do more.
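For the flavor of it, here is a toy rendering of such an IR in C -- my sketch, not any actual Lisp system's: three node kinds cover binding, closure construction, and closure application.

    #include <stdio.h>

    enum kind { VAR, BIND, MAKE_CLOSURE, APPLY };

    struct node {
        enum kind kind;
        const char *name;           /* for VAR, BIND, MAKE_CLOSURE */
        const struct node *a, *b;   /* children; meaning depends on kind */
    };

    static void show(const struct node *n)
    {
        switch (n->kind) {
        case VAR:
            printf("%s", n->name);
            break;
        case BIND:                   /* (bind name value body) */
            printf("(bind %s ", n->name);
            show(n->a); printf(" "); show(n->b); printf(")");
            break;
        case MAKE_CLOSURE:           /* (closure (param) body) */
            printf("(closure (%s) ", n->name);
            show(n->a); printf(")");
            break;
        case APPLY:                  /* (apply fn arg) */
            printf("(apply ");
            show(n->a); printf(" "); show(n->b); printf(")");
            break;
        }
    }

    /* Roughly what (let ((f (lambda (x) x))) (f f)) converts to. */
    int main(void)
    {
        struct node x    = { VAR, "x", NULL, NULL };
        struct node lam  = { MAKE_CLOSURE, "x", &x, NULL };
        struct node f    = { VAR, "f", NULL, NULL };
        struct node app  = { APPLY, NULL, &f, &f };
        struct node bind = { BIND, "f", &lam, &app };
        show(&bind);
        printf("\n");  /* (bind f (closure (x) x) (apply f f)) */
        return 0;
    }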
Having said that, you will recall that SICP both implements a register machine and has a section on evaluating reverse Polish expressions :-)
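The RPN exercise in miniature, transliterated to C (SICP does it over Scheme lists, of course): a token stream and an explicit operand stack.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *tokens[] = { "3", "4", "+", "2", "*" };  /* (3 + 4) * 2 */
        int stack[16], sp = 0;

        for (size_t i = 0; i < sizeof tokens / sizeof *tokens; i++) {
            const char *t = tokens[i];
            if (strcmp(t, "+") == 0 || strcmp(t, "*") == 0) {
                int b = stack[--sp], a = stack[--sp];   /* pop two operands */
                stack[sp++] = (*t == '+') ? a + b : a * b;
            } else {
                stack[sp++] = atoi(t);                  /* push a number */
            }
        }
        printf("%d\n", stack[0]);  /* prints 14 */
        return 0;
    }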
That's an absurd statement. That's like saying "text" is simply one representation of a program. While technically true, it covers a wide enough portion of the space that it's interesting to anyone writing real programs.
Furthermore, I'd say it has more to do with what you're targeting than with the source language. In any case, it captures such a wide space that it's naive to think it's just a detail.
And I'm not sure what compilers you've written, but modern compilers... even less modern compilers, like gcc, use IRs for more than simply that -- unless your definition of binding is exceptionally wide. And even the most primitive compilers use the IR for control flow. Some use it to model all possible control flow. Others use it to capture data flow.
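For instance, here is a cartoon of the control-flow side of an IR -- basic blocks with successor edges, which is roughly the shape that analyses hang off of in something like gcc. The structs are my own toy, not gcc's.

    #include <stdio.h>

    struct block {
        const char *label;
        struct block *succ[2];   /* taken / fallthrough edges */
    };

    /* The CFG of a while loop: the header tests the condition and
       either enters the body or exits; the body jumps back. */
    int main(void)
    {
        struct block exit_blk = { "exit",   { NULL, NULL } };
        struct block body     = { "body",   { NULL, NULL } };
        struct block header   = { "header", { &body, &exit_blk } };
        body.succ[0] = &header;  /* the loop's back edge */

        printf("%s -> %s\n", header.label, header.succ[0]->label);
        printf("%s -> %s\n", header.label, header.succ[1]->label);
        printf("%s -> %s\n", body.label,   body.succ[0]->label);
        return 0;
    }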
Unless your goal is to write an HP 48S calculator or reimplement Scheme, I think you'll need a fair bit more than SICP.
Compiler construction != native machine code generation.
At a lower level, function composition is a form of stack discipline, where ordering is important but placement less so.
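To make that concrete: composing f and g pins down that g runs first, but the intermediate value can live anywhere -- a hidden temporary, a named one, or a slot on an explicit stack. All three variants below compute the same thing.

    #include <stdio.h>

    static int g(int x) { return x + 1; }
    static int f(int x) { return x * 2; }

    int main(void)
    {
        int stack[4], sp = 0;

        int a = f(g(10));       /* intermediate is implicit */

        int t = g(10);          /* intermediate in a named temporary */
        int b = f(t);

        stack[sp++] = g(10);    /* intermediate on an explicit stack */
        int c = f(stack[--sp]);

        printf("%d %d %d\n", a, b, c);  /* 22 22 22 */
        return 0;
    }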
In any case, I'm sure you can write a compiler that translates from X to Y, for some X and for some Y, using only Tic-Tac-Toe as your language, with readings from the back of Lucky Charms as the totality of your knowledge base. My point was that to really know compilers, you need to know a bit more than that.
What it is, is a gateway drug to serious computing, including compiler construction.
However, let me just come back to your point that stack discipline is necessary for control structures and non-local transfer of control.
Well, that depends on the execution model of the machine, and I would say explicit stacks are a performance hack, made to model the mathematical concepts of closure and application.
Sorry, I had to remove my wordy examples explaining "Scope and Extent" and how they can be represented with environment pointers and heap-allocated activation records.
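The compressed version of the idea, with illustrative names: a heap-allocated activation record has indefinite extent, so procedures can keep sharing it long after their creator returned.

    #include <stdio.h>
    #include <stdlib.h>

    struct env { int count; };   /* a heap-allocated activation record */

    static int bump(struct env *e) { return ++e->count; }
    static int peek(struct env *e) { return e->count; }

    static struct env *make_counter(void)
    {
        struct env *e = malloc(sizeof *e);
        e->count = 0;
        return e;   /* the frame outlives the call that built it */
    }

    int main(void)
    {
        struct env *c = make_counter();
        bump(c);
        bump(c);
        printf("%d\n", peek(c));  /* prints 2: one shared frame */
        free(c);
        return 0;
    }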
TODO: Point yet again to Dybvig's "Three Implementation Models for Scheme" paper, Scheme being the cleanest Algol variant this side of modern computing.
Then I found this:
I was reacting to this statement you made, "Few chapters of a Lisp book will spare you volumes of traditional compiler construction techniques".
S. P. Jones' text was more interesting. I also like "Modern Compiler Design" (http://www.cs.vu.nl/~dick/MCD.html), which covers the imperative and OO paradigms as well as the logic and functional ones.