the core dilemma of computer science is this: conceptually simple systems are built on staggeringly complex abstractions; conceptually complex systems are built on simple abstractions. which is to say the more work your system does for you, the harder it was to build.
there are no stacks that are pure from top to bottom. I guarantee you, the old LISP Machine developers at Symbolics had a hard time designing their stuff as well.
And I wouldn't say PPC lost. IBM has competitive CPUs in the market; they're just not in consumer devices. But they're just that: "competitive". They aren't much better (actually, pretty much nothing is better than Sandy Bridge right now).
To me TFA is about a reassessment of fundamental assumptions, and it's about exploration. It doesn't suggest concrete solutions because nobody knows what they are, but it does suggest that our efforts to better the art have been short-sighted. Right now the next Intel or ARM chip is just another target for compilation, just another language or library to fight with, instead of something that solves real problems - old problems, not just the latest new/interesting/imagined ones.
(FWIW, this particular example doesn't excite me too much - if the future is DWIM, it almost certainly has to be done first in software, even if it is eventually supported by specialised hardware.)