
Manual heap allocation can slow the program down non-trivially compared to using an arena which is cleaned up all at once; hence, manual heap allocation is a kind of GC. Checkmate atheists.



>Manual heap allocation can slow the program down non-trivially compared to using an arena

But GC has nothing to do with whether heap allocations 'slow the program down'; it's about who owns the lifetime of the allocated object - with GC it's not the programmer, but the runtime.

If I create an object, and then soon after release it ([MyObj release]), then I know it will dealloc immediately after. I'm basically in control of its lifetime, even though it's ref counted.

If I call [MyObj release] and it's still owned by another object (e.g. I added it to some collection), that's OK because the object is still useful and has a lifetime outside of my control, and its destruction will be deferred until that's no longer the case.

But with a GC object, if I call new MyObj() and then soon after try my best to destroy it, I can't, because I'm not in control of its lifetime; the runtime is.

That's what I see as the distinction between GC and ref counting (not GC), and why I mostly don't agree with so many people here insisting that ref counting is garbage collection. How can it be, when I can easily be explicitly in control of an object's lifetime? I create it, then I destroy it, and it happens exactly in the sequence that I dictate.

For sure, ARC makes it a bit more subtle, but even then, I can reliably predict when an object will be destroyed and factor that into the sequence of events in my program.
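
For illustration, a minimal Rust sketch of that deterministic behavior, with Rc standing in for manual retain/release (the MyObj name and print statements are made up): the destructor runs exactly when the last reference is released, in the sequence the code dictates.

    use std::rc::Rc;

    struct MyObj;

    impl Drop for MyObj {
        fn drop(&mut self) {
            // Runs deterministically when the refcount hits zero.
            println!("MyObj deallocated");
        }
    }

    fn main() {
        let a = Rc::new(MyObj);  // refcount = 1
        let b = Rc::clone(&a);   // refcount = 2, e.g. "added to some collection"
        drop(a);                 // refcount = 1: destruction deferred, object still in use
        println!("released my reference");
        drop(b);                 // refcount = 0: Drop runs here, exactly at this point
        println!("last reference released");
    }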

-An Atheist.


I went down a path researching the viability of region based memory management (a form of arenas).

A language based on such a paradigm can be provably memory safe, and regions can have their own allocators and optionally provide locking when the regions are shared.

This approach obviates the need for reference counting individual allocations (since regions are tracked as a whole), but it suffers from excess memory usage when there are many short-lived allocations (i.e. they leak until the entire region goes out of scope). Then again, that kind of allocation pattern can be problematic in any systems language, as it can eventually cause memory fragmentation.
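
As a rough sketch of the idea in Rust (not any particular proposal; the Region type and its methods are invented for illustration): every allocation lives until the whole region is dropped, and the borrow checker keeps references from outliving the region, which is where the memory-safety argument comes from.

    // Minimal, illustrative region/arena: allocations are freed all at
    // once when the region is dropped, and borrows cannot outlive it.
    struct Region<T> {
        items: Vec<Box<T>>,
    }

    impl<T> Region<T> {
        fn new() -> Self {
            Region { items: Vec::new() }
        }

        // The returned reference is tied to the region's lifetime.
        // (Real arenas use interior mutability so several live
        // references can coexist; &mut self keeps this sketch short.)
        fn alloc(&mut self, value: T) -> &T {
            self.items.push(Box::new(value));
            self.items.last().unwrap()
        }
    }

    fn main() {
        let mut region = Region::new();
        let x = region.alloc(42);
        println!("allocated {x} in the region");
        // Short-lived values like `x` are not reclaimed individually;
        // they "leak" until `region` is dropped here, freeing
        // everything at once.
    }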

That problem can be mitigated with per-allocation reference counting, though that can incur a heavy performance hit; not having to use it everywhere keeps the impact down.

The plus side is you don't have to worry about borrow checking, so such a language can be more flexible than Rust, while still maintaining the memory safety aspect.

The question, as always, is: is the juice worth the squeeze?

Truthfully, I suspect no. The Rust train has left the station and has a decade head start. Even if it is a pain in the ass, lol.


What’s old is new again. Cyclone, one of the influences on Rust, was a systems language that used regions for memory management.


You may be interested in Rust language proposals for local memory allocators and "storages"; they may be enough for something very much like this. The lifetime concept in Rust is quite close already to that of a "region", i.e. an area of memory that the lifetime can pertain to.


Depending on the semantics of the implementation, something like that would go a long way toward eliminating one of my biggest issues with Rust. A low-level systems language needs to offer two things, both of which are currently a pain in Rust: (1) you must be able to "materialize" a struct at an arbitrary location. Why? Because hardware tables exist at a hardware-specified location and are provided by the hardware; they are not created or instantiated in the host language. And (2) you must be able to reference structs from other structs, which immediately triggers lifetime annotations, which begin to color everything they touch, much like async does to functions.
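
For point (1), here is roughly what that looks like in today's Rust; the UartRegs layout and UART_BASE address are invented for illustration, and the volatile read is only sound on hardware where that address is actually mapped.

    // Sketch: overlaying a register block on a hardware-specified address.
    #[repr(C)]
    struct UartRegs {
        data: u32,
        status: u32,
        control: u32,
    }

    // Hypothetical base address of a memory-mapped UART.
    const UART_BASE: usize = 0x4000_0000;

    fn uart_status() -> u32 {
        let regs = UART_BASE as *const UartRegs;
        // Safety: only valid if UART_BASE really is a mapped, aligned
        // register block on the target; the struct is never created or
        // instantiated by Rust itself.
        unsafe { core::ptr::read_volatile(core::ptr::addr_of!((*regs).status)) }
    }

    fn main() {
        // Not called here: on a hosted OS this address is not mapped
        // and reading it would fault. The point is only that the struct
        // is "materialized" over memory the hardware provides.
        let _ = uart_status;
    }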

And I admit, I loathe the borrow checker. Ironically, I never really have problems with it, because I do understand it; it's just that I find it too limiting. Not everything I want to do that it rejects is actually unsafe, and I hate the way it has made people think that if you believe you know better than the borrow checker, you must clearly be doing something wrong and should re-architect your code. It's insulting.


GC is bad because of memory fragmentation due to pointer aliasing. (Not GC pauses per se.)

Theoretically you could have a GC'd language that cared about scope, lifetimes, stack vs heap and value vs reference distinctions, but no such thing was ever seen in the wild. (Perhaps because people use GC'd languages precisely because they don't want to care about these distinctions.)



