Quantifying the Performance of Garbage Collection (2005) [pdf] (umass.edu)
38 points by pplonski86 89 days ago | 9 comments

This is from 2005. The state of the art in GC has advanced substantially since then.

As has the state of the art in explicit memory management, though I'm quite sure the ideas weren't completely unheard of at the time of this writing.

Do you have any quick references for someone not privy to this state of the art, but who has experience reading academic papers, code, and documentation?

If you want to dive into some dense research, I recommend learning about affine type systems. Maybe play with Rust a bit too.
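To make the connection concrete: Rust's move semantics are essentially an affine discipline, in that a value of a non-`Copy` type may be used at most once, which lets the compiler insert exactly one free instead of relying on a collector. A minimal sketch (the `Buffer`/`consume` names are just for illustration):

```rust
// Affine typing via Rust move semantics: a non-Copy value
// can be consumed at most once, so deallocation is static.

struct Buffer {
    data: Vec<u8>,
}

// Taking `Buffer` by value moves it into `consume`; the caller
// can no longer touch it, so double use is a compile-time error.
fn consume(buf: Buffer) -> usize {
    buf.data.len()
} // `buf` is dropped (freed) exactly once, right here.

fn main() {
    let buf = Buffer { data: vec![0u8; 16] };
    let n = consume(buf);
    // consume(buf); // error[E0382]: use of moved value: `buf`
    println!("{}", n);
}
```

This is "explicit" memory management in the sense that frees happen at statically known points, but without the manual bookkeeping the 2005 paper's explicit-management baseline assumes.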

Afaict that's the paper that says code using GC needs five times as much memory to have performance comparable to manual management.

What's the figure these days?

What has exceeded Azul's pauseless GC, which was released in 2005?

They do not even consider C4 or region-based collectors in that paper. In fact, they only measure single-threaded execution.

As for the state of the art: low-pause/pauseless collectors (G1 and Shenandoah) have been gaining the ability to reduce footprint based on allocation pressure, i.e. the heap size is load-following, which can be important in cloud environments with usage-based billing.

I'm not talking about their paper, I'm talking about their execution. That absolutely was multithreaded, for instance; it was designed for a custom chip that allowed 768 cores in a system in 2005.

Any recent references you can link to, please?

