And that was the 90s, after Lisp had been around for decades...
It's hard to say how much time was spent on GC in a Lisp program in the '70s. The installations were different, and the programs were different. How much time did a Multics Emacs written in Maclisp spend in GC? I have no idea. A Macsyma run lasting some hours? An early rule-based system? Benchmark numbers were probably taken, but one would need to check the literature to find them.
To see how widely benchmark numbers varied:
There are also pathological cases where memory is mostly full and time is spent mostly in GC, or where a lot of virtual memory is used and much time is spent in I/O, paging memory in and out.
Note also that the invention of generational GCs was triggered by the availability of larger virtual-memory systems and the programs that used them, once such systems reached the market outside of the original research labs. Before that, most systems were quite a bit smaller, and the large ones were very rare. Machines like the Symbolics 3600 were extremely rare and expensive. When they first shipped, their GCs were copying and compacting, but not generational. Systems like that had not existed on the market before.
The claims being debated here are, as I understand them, my half-remembered claim that it was common to spend ⅓ to ½ of your execution time in the garbage collector until generational GC, and Simon Brooke's claim that actually 5% was normal and 10% was in pathologically bad cases. I think Gabriel's book shows conclusively that allocation-heavy programs could easily spend ⅓ to ½ of their time in GC, or 80% to 90% in pathological cases. But Brooke could still be right about the typical case, since when allocation is expensive, people avoid it in performance-critical code when possible.
I agree that the problem mostly went away once generational (aka ephemeral) GC was shipped. In the essay, I dated that to Lieberman and Hewitt in 1980, but in a more recent discussion with Allen Wirfs-Brock, I learned that actual generational GC didn't ship until 1986. (And Lieberman and Hewitt's paper, though submitted in 1980, wasn't published until 1983.)
Given the question of whether non-generational GC would cost more like 5% or more like 25% of your performance, the performance of generational GCs is somewhat of a false trail. Hopefully nobody was claiming generational GCs would eat double-digit fractions of your CPU time. I wasn't, at least.
Ephemeral GC, btw., means something slightly different: an ephemeral GC is mostly concerned with detecting and collecting short-lived objects in main memory.
The main problem is that architectures, implementations, actual computers, and programs were so different that you could find all kinds of numbers. Even a generational GC can get extremely busy, and it may not prevent full GCs over a large virtual memory from time to time. It also makes a huge difference whether the GC was incremental (the work spread over the whole computation) or the system stopped for much of the GC work.
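As a side note, the generational hypothesis those designs rely on (most objects die young, so sweeping the young generation often is cheap) is easy to observe in any modern generational collector. Here is a minimal sketch using CPython, whose cycle collector happens to be generational; the allocation count of 100,000 and the variable names are just illustrative choices, not anything from the discussion above:

```python
import gc

# CPython's cycle collector is generational: generation 0 holds the
# youngest container objects and is collected far more often than the
# older generations.
gc.collect()  # start from a known state
gen0_before = gc.get_stats()[0]["collections"]

# Allocate a burst of container objects, keeping them alive long enough
# to cross the generation-0 allocation threshold repeatedly.
buf = [[i] for i in range(100_000)]

gen0_after = gc.get_stats()[0]["collections"]
print("young-generation collections during the burst:",
      gen0_after - gen0_before)
```

The exact count depends on the interpreter's thresholds (see `gc.get_threshold()`), but the young generation gets swept many times during the burst while full collections stay rare, which is the behavior the generational designs of the mid-'80s were after.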
But come on, I have used Emacs for 25 years, and on a daily basis it stalls for annoying amounts of time while I am just doing something simple like editing a .cpp file. Today. In the year 2017.