Exactly, which is about as common as nonstop paging on a Unix system, and about as sensible as a criticism.



When I was in college in the early 90s, Unix systems were nonstop-paging quite often, and Lisp systems would frequently pause while you were interacting with them, for nontrivial amounts of time, like multiple seconds.

And that was the 90s, after Lisp had been around for decades...


Commercial Common Lisps, or the text-based ones developed by students writing master's theses?


That's why my university had a site license for Allegro CL on its Sun SPARC cluster, so every student had access to a useful Allegro CL installation.


Please note that SPARCs didn't ship until after generational GC had already been invented and shipped in commercial products.


Above, the 90s were mentioned. SPARC systems shipped in the late 80s, around 1987. By the 90s, Lisp systems on stock hardware already had quite usable GCs, especially the big three commercial CL implementations on Unix (Lucid, Allegro, and LispWorks). Even MCL on Mac OS had a useful ephemeral GC.

It's hard to say how much time was spent on GC in a Lisp program in the 70s. The installations were different, and the programs were different. How much time did a Multics Emacs written in Maclisp spend in GC? I have no idea. A Macsyma running for some hours? An early rule-based system? Some benchmark numbers were probably taken, but one would need to check the literature to find them.

To see how different the benchmark numbers were:

http://rpgpoet.com/Files/Timrep.pdf

There are also pathological cases where memory is mostly full and time is spent mostly in GC, or where a lot of virtual memory is used and much time is spent in I/O, paging memory in and out.

Note also that the invention of generational GCs was triggered by the availability of larger virtual-memory-based systems, and of programs using that memory, to the market outside the original research labs. Before that, most systems were quite a bit smaller or very rare. Systems like the Symbolics 3600 were extremely rare and expensive; when they first shipped, their GCs were copying and compacting, but not generational. Systems like that had not existed on the market before.


Thank you for the link to Gabriel's book! I paged through it just now and summarized what I found in https://news.ycombinator.com/edit?id=13307012.

The claims being debated here are, as I understand them, my half-remembered claim that it was common to spend ⅓ to ½ of your execution time in the garbage collector until generational GC, and Simon Brooke's claim that actually 5% was normal, with 10% in pathologically bad cases. I think Gabriel's book shows conclusively that allocation-heavy programs could easily spend ⅓ to ½ of their time in GC, or 80% to 90% in pathological cases. But Brooke could still be right about the typical case, since when allocation is expensive, people avoid it in performance-critical code when possible.
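
If you want to reproduce this kind of measurement today, here is a minimal sketch in Common Lisp (assuming a modern implementation such as SBCL, whose TIME macro breaks out GC time separately; other implementations report it differently, and with a generational collector the fraction will of course be far below the historical ⅓ to ½). The function name and constants are made up for illustration:

    (defun churn (n)
      "Allocate lots of short-lived lists, keeping only a few alive."
      (let ((keep '()))
        (dotimes (i n keep)
          (let ((l (make-list 1000)))      ; garbage on most iterations
            (when (zerop (mod i 1000))     ; retain roughly one in a thousand
              (push l keep))))))

    (time (churn 100000))
    ;; SBCL's report includes a line like
    ;;   [ Run times consist of X.XXX seconds GC time, and X.XXX seconds non-GC time. ]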

I agree that the problem mostly went away once generational (aka ephemeral) GC was shipped. In the essay, I dated that to Lieberman and Hewitt in 1980, but in a more recent discussion with Allen Wirfs-Brock, I learned that actual generational GC didn't ship until 1986. (And Lieberman and Hewitt's paper, though submitted in 1980, wasn't published until 1983.)

For the question of whether non-generational GC would cost more like 5% or more like 25% of your performance, the performance of generational GCs is something of a false trail. Hopefully nobody was claiming generational GCs would eat double-digit fractions of your CPU time; I wasn't, at least.


See also ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-1417.pdf for an early perspective on generational GC.

Ephemeral GC, btw, means something slightly different: an ephemeral GC is mostly concerned with the detection and collection of short-lived objects in main memory.
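
To make that concrete, here is a self-contained toy simulation in Common Lisp (illustrative only, not how any real collector works) of the generational hypothesis that ephemeral GC exploits: most objects die young, so a cheap scan of recent allocations reclaims most garbage without touching the rest of the heap. The function name and the 5% survival rate are invented for the example:

    (defun simulate-minor-gcs (allocations &key (survival-rate 0.05))
      "Toy model: count objects dying young versus surviving to be tenured."
      (let ((reclaimed-young 0)
            (promoted 0))
        (dotimes (i allocations)
          (if (< (random 1.0) survival-rate)
              (incf promoted)           ; survives its youth, would be tenured
              (incf reclaimed-young)))  ; dies young, freed by a cheap minor GC
        (format t "~D of ~D objects reclaimable by minor GCs alone (~,1F%)~%"
                reclaimed-young allocations
                (/ (* 100.0 reclaimed-young) allocations))
        promoted))

    (simulate-minor-gcs 1000000)  ; with 5% survival, ~95% never needs a full GC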

The main problem is that architectures, implementations, actual computers, and programs were so different that you could find all kinds of numbers. Even a generational GC can get extremely busy and may not prevent occasional full GCs over a large virtual memory. It also makes a huge difference whether the GC was incremental (the work spread over the whole computation) or whether the system stopped for much of the GC work.
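
As a back-of-the-envelope illustration (the numbers here are made up, not measurements), the same total GC overhead feels very different depending on how it is spread:

    (let* ((gc-fraction 0.25)                 ; suppose GC takes 25% of run time
           (run-seconds 10.0)
           (one-big-pause (* gc-fraction run-seconds))
           (slice-ms 10.0)                    ; incremental: work in 10 ms slices
           (gc-per-slice (* gc-fraction slice-ms)))
      (format t "Stop-the-world: a ~,1F s pause somewhere in a ~,0F s run~%"
              one-big-pause run-seconds)
      (format t "Incremental:    ~,1F ms of GC inside every ~,0F ms slice~%"
              gc-per-slice slice-ms))

Throughput is identical in both cases; only the interactive experience differs, which is part of why the two kinds of systems produced such different anecdotes.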


This feels to me like moving the goalposts.

But come on, I have used emacs for 25 years, and on a daily basis it stalls for annoying amounts of time while I am just doing something simple like editing a .cpp file. Today. In the year 2017.



