I haven’t checked since 1.10, but at that time you could definitely see what I presume was the impact of the lack of compaction on heap allocations over time. That is, allocations took longer the longer your program ran.

This is mitigated quite a bit by how much gets stack-allocated, but I imagine there are workloads that this hampers a lot, in particular large in-memory data sets (a small escape-analysis sketch follows below).

That’s not to disagree with your point; the GC improvements are great for my workloads. It’s only to point out that there is a cost to them.
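For illustration only, here is a minimal sketch (my own example, not from the thread) of the escape analysis that keeps so much off the GC heap. Building with `go build -gcflags=-m` reports which values stay on the goroutine stack and which escape to the heap; the exact wording of the -m output varies by compiler version.

    // escape.go - minimal sketch of Go escape analysis.
    // Build with `go build -gcflags=-m escape.go` to see which values
    // stay on the stack and which escape to the GC heap.
    package main

    import "fmt"

    type point struct{ x, y int }

    // Returning a value: the struct can live on the goroutine stack,
    // so it never adds to the (non-compacted) GC heap.
    func stackOnly() point {
        return point{x: 1, y: 2}
    }

    // Returning a pointer to a local: the compiler reports that p is
    // "moved to heap", so this allocation is managed by the GC.
    // noinline keeps the escape visible here rather than letting the
    // call be folded into main by inlining.
    //go:noinline
    func escapes() *point {
        p := point{x: 1, y: 2}
        return &p
    }

    func main() {
        a := stackOnly()
        b := escapes()
        fmt.Println(a, *b)
    }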




> the impact of the lack of compaction on heap allocations over time.

On the other hand, heap compaction is itself very expensive, as you are stopping the world while moving data around in the heap.

With Go allocation being so stack-oriented, not using compaction could very well be the best choice.
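To make the pause cost concrete, here is a rough sketch (an assumed example using only the standard runtime package) of watching the collector from inside a program. Go's GC does not compact, but it still has short stop-the-world phases, and runtime.ReadMemStats reports their durations; running with GODEBUG=gctrace=1 prints similar per-cycle information.

    // gcpause.go - rough sketch for observing the Go GC from inside a program.
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Churn the heap so several GC cycles run, while keeping the
        // live set bounded so older allocations become garbage.
        var keep [][]byte
        for i := 0; i < 1_000_000; i++ {
            keep = append(keep, make([]byte, 1024))
            if len(keep) > 10_000 {
                keep = keep[1:]
            }
        }

        var m runtime.MemStats
        runtime.ReadMemStats(&m)
        fmt.Printf("GC cycles:         %d\n", m.NumGC)
        fmt.Printf("total STW pause:   %d ns\n", m.PauseTotalNs)
        fmt.Printf("most recent pause: %d ns\n", m.PauseNs[(m.NumGC+255)%256])
        fmt.Printf("heap in use:       %d bytes\n", m.HeapInuse)
    }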


There are GC paradigms that do compaction and do not stop the world (https://www.azul.com/products/zing/). They are expensive in other dimensions though (actual cash).

Again, I'm not disagreeing with the choices the Go team is making on GC; they work great for my workloads, but they aren't without trade-offs for other kinds of workloads.


Actually, you can now get two open-source pauseless compacting collectors: ZGC and Shenandoah.



