Baby's First Garbage Collector (stuffwithstuff.com)
217 points by daw___ on Dec 8, 2013 | 84 comments



So every once in a while I come across old-timey C optimizations in the spirit of Duff's device or bit twiddling to swap variables, etc. etc...

While they have a certain kind of charm to them, they seem to be almost universally bested by increasingly mature compilers and complex (or virtualized) hardware.

So I'm kind of coming to the conclusion that clever pointer arithmetic games and even manual malloc/free are increasingly futile unless you're targeting embedded devices. If a young language like Go gets you at least in the same ballpark as C with a relatively immature GC and even toy implementations like the OP take you rather far these days... when is coding without a garbage collector even really defensible anymore?

I'm not trolling. I think pointer arithmetic is neat and the aforementioned "optimizations" are magical and possibly my generation has missed out on something wonderful. Seems it just isn't often practical to do that sort of thing anymore.

Edit: I'd like to stress that I wasn't talking in absolutes, and asking rather than telling. I'm not sure how to phrase my post better, made some edits nonetheless.


Let me link to a (very longwinded, sorry) post I made a while back explaining how a particular type of memory pool allocator works: https://news.ycombinator.com/item?id=6176091

tl;dr if you think "manual memory management" means "malloc thing, use thing, free thing" you're kind of missing the point.
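
For the curious, here's a minimal sketch of that kind of pool: a slab carved into fixed-size chunks threaded onto a free list, so allocation and deallocation are constant-time pointer pops and pushes. This is just an illustration with made-up names, not the allocator from the linked post.

    /* Toy fixed-size-block pool allocator (illustrative sketch). */
    #include <stdlib.h>

    typedef union Chunk { union Chunk* next; char payload[64]; } Chunk;
    typedef struct { Chunk* slab; Chunk* free_list; } Pool;

    int pool_init(Pool* p, size_t count) {
        p->slab = malloc(count * sizeof(Chunk));
        if (!p->slab) return -1;
        p->free_list = NULL;
        for (size_t i = 0; i < count; i++) {   /* thread every chunk onto the free list */
            p->slab[i].next = p->free_list;
            p->free_list = &p->slab[i];
        }
        return 0;
    }

    void* pool_alloc(Pool* p) {                /* O(1): pop a chunk, NULL if exhausted */
        Chunk* c = p->free_list;
        if (c) p->free_list = c->next;
        return c;
    }

    void pool_free(Pool* p, void* ptr) {       /* O(1): push the chunk back */
        Chunk* c = ptr;
        c->next = p->free_list;
        p->free_list = c;
    }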

Also, tricks like Duff's device and XOR swap have long since become pessimizations, not optimizations, on desktop and mobile-class hardware. Duff's device, for instance, wastes instruction cache when any decent CPU's branch predictor can eliminate most of a simple loop's overhead anyway. With the exception of manually vectorizing tight loops to take advantage of SIMD hardware (which is more of assembly's domain anyway), I think the "optimized C" of today looks pretty straightforward and unobfuscated, just with more attention paid to the details that higher level languages mostly gloss over for you.

> ...and possibly my generation has missed out on something wonderful...

I'm pretty certain I'm younger than you, I just grew up really wanting to make games and know how computers work. Maybe you don't need or even want to think about this kind of stuff, but it's not like the problems just magically disappeared one day. Or, more bluntly, "if C and C++ are so dumb then why isn't your browser written in Python?"


I think the HN crowd honestly believes that the only reason the next browser hasn't been written in a GC language is momentum and time. That the next one will very obviously be written in Ruby or Python. I've seen discussions here to that effect.

I find it hard to read HN discussions about C. One faction is totally amazed by the C equivalent of being able to tie their shoes. The other faction believes shoelaces are obsolete and everyone should wear velcro.

This "my generation has missed out" thing is really odd... If you are honestly interested in it, go learn how it works! The commenter you're replying to should quit making excuses.


Sorry for the poor phrasing, maybe you've misunderstood: I've mentioned that I'm interested, am fond of it, and have learned how it works. Not sure what excuses you're talking about.

OTOH, when I'm not writing a browser, an HFT system, or the next-gen video game engine, the trade-offs are increasingly not worth it. On top of that, we don't have Amigas to have fun with and stay close to the hardware on anymore (I guess the Raspberry Pi qualifies as some sort of substitute), and so that particular brand of hacker, a small group to begin with, keeps on shrinking. That's what I meant by missing out.


> Or, more bluntly, "if C and C++ are so dumb then why isn't your browser written in Python?"

Because Python's standard implementation, CPython, is an interpreter?


I grew up on assembler, cycle counting, and computer graphics. You're right, a lot of the old-timey optimizations are not applicable on today's hardware, but this is just the march of technology. I didn't have to worry about generating graphics scanline-by-scanline like those poor Atari 2600 programmers. They didn't have to worry about instruction pairing on the Pentium though. Each generation has their tricks.

As for 'manual' management, consider script engines that need to temporarily store the AST while compiling/interpreting a piece of script. These typically allocate each tree node by pointer-bumping from a large pre-allocated block of memory, and free the entire thing once done by resetting the pointer. Basically this is the same idea as a nursery in a generational collector, with the difference that you already know beforehand that you're not going to promote any of the elements.
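
A rough sketch of that pattern, with made-up names (not taken from any particular script engine): one big block, a bump pointer for allocation, and a reset that "frees" every node at once.

    /* Toy bump/arena allocator for AST nodes (illustrative sketch). */
    #include <stdlib.h>

    typedef struct { char* base; size_t size; size_t used; } Arena;

    void* arena_alloc(Arena* a, size_t n) {
        n = (n + 7) & ~(size_t)7;               /* keep allocations 8-byte aligned */
        if (a->used + n > a->size) return NULL; /* arena exhausted */
        void* p = a->base + a->used;
        a->used += n;                           /* the "pointer bump" */
        return p;
    }

    void arena_reset(Arena* a) {                /* frees everything at once */
        a->used = 0;
    }

    /* Usage: Arena a = { malloc(1 << 20), 1 << 20, 0 };
       ...allocate AST nodes with arena_alloc(&a, sizeof(Node))...
       then arena_reset(&a) when compilation of that script is done. */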

What GC algorithm is best (mark/sweep, generational, soft real-time, hard real-time) depends on your needs I guess. In that light, it's too bad MSR canned Singularity: http://research.microsoft.com/pubs/69431/osr2007_rethinkings...


> In that light, it's too bad MSR canned Singularity:

Or that few people are aware of the Oberon and Spin projects outside their respective universities.

As for Singularity at least parts of the native C# (Sing#) compiler landed on the Windows Phone 8 .NET native compiler (MDIL).

I would really like to know how project Midori relates to Singularity, but it is still kept internal.


Someone has to implement those garbage-collected languages. If you ever want to be one of those someones, you need to learn manual memory management and raw-memory manipulation.


Depends. Do you want, for example, a browser that pauses for upwards of 50 ms at a time? How about a game? Go's garbage collector can very much result in 50 ms pauses.

When The Verge reviews new phones, they definitely dock points for a laggy OS, and dropping three frames in a row regularly is considered laggy these days...


There are certainly GCs that don't pause for 50ms at a time -- Go's GC is not very good, and they even admit it.

Felix Klock's PhD is an interesting exploration of this: http://www.ccs.neu.edu/home/pnkfelix/thesis/ although his collector does have some long pauses on particularly evil programs (those programs cause much longer pauses on conventional collectors).


There are research prototypes for such GCs, and usually the performance decreases if you want timing guarantees. As far as I know there is no GC that could be used in a AAA FPS game. It would be convincing if CryEngine, Unreal Engine, idTech, Source Engine, etc. started using a GC in their rendering, physics, and sound parts.


John Carmack suggested that someone should write an engine in a more functional way (with GC).

So you would reconstruct all the objects given access to the last frame's memory (which would be immutable), then run the GC between frames. This may cause problems if a slower computer never has time between frames to run the GC. (Running at 27fps trying to reach 30fps, my old laptop would do this most of the time.)


The Witcher 2 for the Xbox 360 has a GC in its engine, although they did face some issues.

http://www.makinggames.de/index.php/magazin/2155_porting_the...


> There are certainly GCs that don't pause for 50ms at a time -- Go's GC is not very good, and they even admit it.

Of course. I'm just refuting the specific claim of "ballpark C" performance with an immature GC.


So? The fact that there are good garbage collectors doesn't save us from popular languages and runtimes with bad GCs. As long as there are people who expect to be taken seriously when they release a GC'd language with a bad GC, manual memory management should not be treated as an obsolete, obscure, or archaic skill.


You act like most of those phones aren't Java based. Android's GC is actually pretty nice these days.


But all Android phones are still strictly laggier than the iPhone, and part of that is choice of runtime implementation.


The optimization tricks have merely changed, instead of the whole concept being redundant.

Nowadays you should aspire to reduce branches as much as possible to keep branch prediction happy. Dependent memory accesses should be kept to a minimum. Loops should be written in a way the autovectorizer can easily convert to a bunch of SIMD instructions. One should understand what the compiler can optimize and write code that allows it to do its job as efficiently as possible.

The same principles apply as they did in the ages gone by. There is a place for scripting languages and there is a place for the heavily optimized library. It's just that the way to write those heavily optimized libraries has changed a bit. Instead of hand-writing assembler, one just has to write C in a way that allows the compiler to do it for you.
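
As a small illustration of "write C so the compiler can do it for you": a branch-free loop with restrict-qualified pointers is the kind of thing an autovectorizer can usually turn into SIMD on its own. Whether it actually does depends on your compiler and optimization flags, so treat this as a sketch rather than a guarantee.

    /* The restrict qualifiers tell the compiler x and y don't overlap,
       which is what lets it vectorize the loop safely. */
    void saxpy(float* restrict y, const float* restrict x, float a, int n) {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }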

From a web development point of view this means that in general it's good to use <insert the framework that is currently fashionable> and, for the parts that require some oomph, it's good to have something really optimized like ImageMagick in the background.

From a mobile perspective it doesn't really matter in what way <It's like Yelp but for old shoes with location awareness> is written, but you'd better hope that the basic system is optimized as much as possible, because that translates directly into battery life.


I was introduced to programming through python. Now that I'm in school, I've been doing most of my work in C++. I have to say, as much as I like programming in python, my knowledge of programming is made so much better because I've had to write C++. From an educational perspective, I think non-GC languages will always have a place (if only for teaching about computers).

In production environments, I don't think that any programming practice should ever need to be "defended." Non-GC languages have their ups and downs, just as GC'ed languages have their ups and downs. Saying "it's indefensible to use one over the other" feels like saying "it's indefensible to fly somewhere instead of driving there." It doesn't make sense, since each is useful for some things and not others.

Once again, when asked the question "which tool is better" my answer is "the one that's best for the job."


I think the parent comment was pointing out that there's less and less real reason to need C++. The performance of safer languages is usually far more than adequate, while requiring less code (fewer bugs) and blocking entire classes of bugs.

Even Tim Sweeney of Epic (Unreal Engine) said they'd gladly switch languages to improve productivity. And game engines are one of the few places that still do need to eke out as much perf as possible.

In short, something like C++ increasingly has more downs than ups. I like C, I've just seen it used in so many places where it makes the code far more verbose, introduces security holes, and has a near-negligible performance benefit.

"Best tool for the job" is misleading - some tools are nearly completely better than others. Just like globals and goto: they have uses, but less than more.


I once read a comment that said that C++ is like a very sharp knife. If you know how to use it, it's safer than a dull knife; if you don't, then you need to have your totin' chip taken away.

I like to compare it to the F4U Corsair: anyone but a talented and experienced pilot at the yoke is headed for certain disaster. But the thing is just so powerful and so absurdly versatile that until something decisively and significantly better comes along it remains a mainstay of any halfway decent air force.

A lot of people confuse being the Corsair of programming languages with being the Corvair of programming languages -- "unsafe at any speed". They're full of it. Their ignorance and lack of experience prevents them from seeing the scenarios in which the utility of C++ is absolutely indispensable.


That argument seems rather incorrectly biased. The gist is "C++ is awesome, and if you have any problems, it's because you're not awesome enough".

Tim Sweeney's talk[1] indicates that a majority of their bugs are due to low-level "unsafe" code issues. If the people writing Unreal have those problems, I think it's safe to say that it's a problem with the language. Unless you want to make the argument that Epic doesn't hire "talented and experienced" pilots.

C++ sure has places where it might be needed, but those places are growing fewer every day. And that's more due to momentum than to the inherent merits of C++. If, say, Microsoft threw their weight behind a language like, say, Rust, the landscape could change. Or if they really pushed on the CLR to make the type system encompass more things and really beefed up codegen, we might get somewhere. (The CLR can already easily slide into low-level code, even hand-written assembly; the general codegen and limited memory management options are what kill it.) Instead, we'll see more things grafted onto C++ (like the recent type inference and lambdas) that make C++ somewhat less painful and further reduce the pressure to switch.

1: http://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-...


The CLR seems to be getting some new toys beyond Visual Studio 2013.

There is the new JIT compiler based on the Visual C++ backend, and more targets besides Windows Phone 8 will be supported for native code deployment.


With C++ you trade slightly better performance for a truckload of complexity, busy work, and unsafety. Whether that trade-off is worth it depends on the application, but the area in which this is a good trade-off is getting smaller and smaller. The analogy with the knife doesn't make much sense. A sharp knife is safer because you have to apply less pressure, and muscles are more precise and controlled when applying less pressure. C++ is the opposite.


Not all GC-enabled languages are like Python.

Have a look at Oberon, for example: a systems programming language with GC, used to program a fully working single-user desktop system in the mid-90s.

Used at many universities around the world.

Yes, it required the use of assembly for hardware communication, but so do C and C++.


Given that calls to malloc aren't permitted in big swaths of the code in my current (non-embedded) job, I think it's safe to say that there are still places where GC is not appropriate.


I never claimed otherwise. I enjoy this sort of coding, but increasingly GC languages seem to be the pragmatic choice, and that only makes usage like yours more precious and interesting.

As that is what I was asking about I'd love for you to elaborate. :)

Sorry for the confusion, I guess I could have worded my post better.


Heh, fair enough. My day job is in HFT.


What you are asking about is essentially the areas of computing where you want to get the most out of the hardware. Non-casual games are one such area. Some parts of finance are another.

I'm not sure why you are referring to it as "games". The high-level programming you can enjoy today relies on efficient low-level implementations. You might not see them, but they are still there and still being developed.


"manual malloc/free are increasingly futile unless you're targeting embedded devices"

Or have any algorithm that uses a lot of memory, or if you want control over how much memory the algorithm uses at a time.

In the article (which is great, by the way), you can see that mark-and-sweep means scanning all of the active objects. That does not have great scalability properties. CPUs get faster, but memory gets larger, so matters won't improve over time.

GCs are great for processes that are tiny. As soon as you have a memory-hungry algorithm, it starts looking worse.

Also, as others have pointed out, manual memory management is not synonymous with malloc/free. There are many other choices available, and region-oriented allocators are often nice for algorithms that use a lot of memory. PostgreSQL internally uses "memory contexts" where you allocate objects with the same lifetime (e.g. transaction-local state) in the same context, and wipe it all at once.
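
A toy version of that "allocate into a context, wipe the whole thing at once" idea, with hypothetical names rather than PostgreSQL's actual MemoryContext API:

    /* Each context keeps an intrusive list of everything allocated in it,
       so objects with the same lifetime can all be freed in one call. */
    #include <stdlib.h>

    typedef struct Block { struct Block* next; } Block;
    typedef struct { Block* head; } MemContext;

    void* ctx_alloc(MemContext* ctx, size_t n) {
        Block* b = malloc(sizeof(Block) + n);
        if (!b) return NULL;
        b->next = ctx->head;      /* remember the block for later cleanup */
        ctx->head = b;
        return b + 1;             /* caller's memory starts right after the header */
    }

    void ctx_wipe(MemContext* ctx) {
        while (ctx->head) {       /* free the whole lifetime's worth at once */
            Block* next = ctx->head->next;
            free(ctx->head);
            ctx->head = next;
        }
    }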


For example: It's nice to be able to write a big hunk of state to persistent store and read it back in again with a single operation (like for app pause/resume) vs. iterating through a huge object graph and serializing. Of course it'd be rad if languages could help you not screw this up (maybe Rust does?)

GC doesn't solve all problems, for instance some of the weak reference bugs crop up just as often in Java as they do in Objective-C w/ ARC and ref counting.


No, but it does simplify coding a lot.

In the presence of a GC many algorithms and code refactorings become much easier if you don't have to track all the time who is responsible for releasing a chunk of memory.

Especially in the presence of multicore algorithms and big code bases.

Add to that the fact that in today's industry developers tend to know just a little bit of the code base, given the size of some projects and team attrition.


Not having studied the topic in depth, my first thought is whether true pointers and a garbage collector can coexist (without artificially removing parts of the language). The big condition for knowing if something is still in use is whether any references exist to it. But in a language like C, you could cast a pointer to something else (like a void pointer for some quasi-generic linked list), store it somewhere, and then cast it back later after garbage collection has run. That seems like a problematic situation. So is completely giving up on pointers (or just never doing tricky things like casting or pointer arithmetic) a requirement to having a garbage collector?


C# has pointer goodness, in that you can do pointer arithmetic and bypass type safety.

However, you have to declare an 'unsafe' context in which you'll do it. Unsafe contexts are tied to a local scope, which means that you can't do anything like having classes with fields of pointer types. Pointers don't outlive the function in which you use them.

Also, before you can get a pointer to an object you have to "pin" it, which tells the garbage collector not to move it around. This is necessary, of course, but it can have severe performance implications since it basically torpedoes all the garbage collector's best performance optimizations.

So pointers are there when you need them... but the language makes them extremely inconvenient to use more than you absolutely need to. A pretty solid compromise, IMO.


You can't store a raw pointer, but you can store the 'safe' forms of pointers (IntPtr, etc) which allows you to do some of the stuff you'd want to do - allocate a buffer of given size in unmanaged heap, then manipulate it entirely using struct pointers and pointer arithmetic. The key/value store I wrote in C# does this - no data lives in the managed heap, it's all sitting in unmanaged memory (mapped files, as it happens).


> You can't store a raw pointer

You can if you mark the containing class/struct "unsafe", which allows both type safety and increased ease of use.


Hmmm, I've never seen this done. Thanks for the heads-up, I'll have to try it out. Are you sure you can do it with classes? I would not be shocked by being able to do it with structs.


Sure am.

  using System;
  
  namespace Test2012
  {
  	unsafe class C  { public byte* P; public override string ToString( ) { return new IntPtr(P).ToString( ); } }
  	unsafe struct S { public byte* P; public override string ToString( ) { return new IntPtr(P).ToString( ); } }
  
  	static class Program
  	{
  		static unsafe void Main( )
  		{
  			fixed( byte* test = new byte[] { 1, 2, 3 } )
  			{
  				var c = new C( ) { P = test };
  				var s = new S( ) { P = test };
  				Console.WriteLine( "Test: {0}", c );
  				Console.WriteLine( "Test: {0}", s );
  			}
  		}
  	}
  }
Result:

  Test: 40510364
  Test: 40510364
  Press any key to continue . . .


C# can also use managed pointers (type& instead of type*). All out/ref parameters use this type of pointer, and it'll get updated by the runtime, like an object reference - no pinning necessary.


Interesting, can you link to an example of their use? I've never seen them in the documentation or examples, but I remember that Managed C++ and C++/CLI have something similar.

Or are you just saying that you can 'use' them via out/ref? I interpreted your post to mean that you can store them in fields, which would be stellar but seems like it would break reference lifetime guarantees.


You only use them via out/ref. Section 13.4.2 of the CLI spec says:

- Managed pointers can be passed as arguments, stored in local variables, and returned as values.

- Managed pointers cannot be stored in static variables, array elements, or fields of objects or value types.

So you're right, they have very limited scope as it'd be messy otherwise. It's my understanding that the JVM does not have pointers even at this level, hence Java can't do out/ref parameters. (Sure, tuples are a better way to handle many cases of out params, but sometimes a well chosen ref can really be nicer.)

I think C++ also uses the term "managed pointer" but not in the same technical sense (I think it just means reference.) Using "safe" C++/CLI generation limits the scope of pointers just like we'd expect.


You don't need to give up pointers, or pointer arithmetic, just the ability to obscure what a pointer points to. Casting to void* does not do this (and indeed, has no run-time effect at all). Metadata is associated with the pointed-to block of memory, so as long as the address is recognizable as a pointer at runtime, you can do GC.

Some of the things that break GC aren't even technically valid C code. For example, say I allocate three memory areas with malloc() then hold on only to the area with the lowest address, and reference the other two areas with offsets relative to the first area. This is illegal in C, though it will work on most existing implementations, because it's illegal to dereference a pointer past the end of an allocation.

Specific GC algorithms might not be able to handle say pointers into the interiors of objects, but that's not a general limitation for GC.


> it's illegal to dereference a pointer past the end of an allocation.

Yes, but there's a legal way to do this.

> reference the other two areas with offsets relative to the first area

If you cast the pointers to uintptr_t, perform your arithmetic on uintptr_t, and cast your final pointer back to (void *) before using it, what you've done is perfectly legal and safe (albeit weird), since uintptr_t is an unsigned integer type with all the flexibility of unsigned integers, and if x = (uintptr_t)p then it's guaranteed that p == (void *)x.

In other words:

    char * x = malloc(100);
    char * y = malloc(100);
    ptrdiff_t yminusx = y - x;

    x[yminusx] = '\0';
is undefined behaviour, but

    char * x = malloc(100);
    char * y = malloc(100);
    uintptr_t yminusx = (uintptr_t)(y) - (uintptr_t)(x);

    *(char *)(void *)((uintptr_t)(x) + yminusx) = '\0';
is valid C (and has the same meaning on a DWIM compiler).


It looks like the second is still valid C, but a C++11 compiler is allowed to break it if it declares that "strict pointer safety" rules are in effect at runtime (which a GC implementation would presumably do).


What if it's not a DWIM compiler? I'm pretty sure (happy to be corrected) pointer <-> integer conversions are implementation defined. Therefore casting between could perform some reversible operation that ensured round tripping worked, but does not require that ptr(a - b) == a - b even though a == int(ptr(a)) holds.


> I'm pretty sure (happy to be corrected) pointer <-> integer conversions are implementation defined.

Quite correct, subject to the requirement that converting the same uintptr_t value back to (void *) will return the original pointer. So the conversion could be implemented as a hash table if the compiler author wanted.

But if you look carefully at what I wrote, I'm never converting a value which was not previously returned by a (void *)->(uintptr_t) conversion.


I see. What I was worried about was a version of the code you didn't actually write (casting the ptrdiff_t result to a [u]intptr_t, not casting the pointers first).


Right. You could also convert a pointer to a uintptr_t, XOR it with some value, then at some later point XOR it again to get back the original address and cast it back to a pointer and dereference it, and it would be legal despite breaking GC. However, a number of things that also break GC, like your first example, also happen not to be legal C even though they'll probably work on any existing implementation.


That's very close to the classical example of code that maims pointers: the doubly linked list that uses only one 'pointer' in each node (http://www.geeksforgeeks.org/xor-linked-list-a-memory-effici...)


Does this still work if the GC compacts memory? I don't know much about GCs, but it seems like once your GC is advanced enough with generational collecting, compacting, etc., then allowing raw pointers seems at odds with what the GC is trying to do. I'm sure it's possible, but it's probably a heck of a lot easier just to not allow pointers.


It works fine as long as you have enough type information to know what is and is not a pointer.


When you hear talk of "conservative" Garbage Collectors, what the "conservative" refers to is that anything on the stack that looks like a pointer is treated as if it is a pointer. After all, it doesn't matter what you cast a pointer to, it's all just 1s and 0s on the stack...
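
Roughly, the conservative part looks something like this. It's only a sketch: is_heap_object and mark are assumed to exist inside the collector, and how you find the stack bounds is platform-specific.

    /* Conservative root scanning: walk the stack word by word and treat
       anything that could be a pointer into the heap as if it were one. */
    #include <stdint.h>

    extern int  is_heap_object(uintptr_t addr);  /* assumed: does addr fall inside a live allocation? */
    extern void mark(uintptr_t addr);            /* assumed: mark the object containing addr */

    void scan_stack(uintptr_t* lo, uintptr_t* hi) {
        for (uintptr_t* p = lo; p < hi; p++) {
            if (is_heap_object(*p))
                mark(*p);   /* false positives only keep some garbage alive, never free live data */
        }
    }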


Of course, you can make something look like not a pointer and then make it a pointer again if you actually change the zeros and ones.


Most languages don't give you any guarantees if you do that, though. In C, for example, pointer arithmetic on a void*, or on a pointer cast to an integer, is not guaranteed to produce sensible results. The standard only provides guarantees for pointer arithmetic if it's performed on a non-void pointer that points within an array, and the results remain within the bounds of the same array (arithmetic that produces a pointer pointing outside the array of the base pointer is undefined, with the exception that pointing to one element past the end of the array is defined).


In ordinary C, without a garbage collector, I'd be baffled to find an implementation where one could not:

    1) Cast a pointer to an integer of sufficient size.
    2) Xor that integer with another value.
    3) Xor the result with the same value.
    4) Cast back to a pointer.
    5) Dereference.
It's sufficiently corner-case that I am not certain what the standard says about it. In any event, pointers matter when coding in assembly, where language standards aren't even relevant.
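
Spelled out, the round trip looks something like this (a sketch of the steps above; it assumes uintptr_t exists and is wide enough, which the standard makes optional):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int* p = malloc(sizeof *p);
        *p = 42;

        uintptr_t hidden = (uintptr_t)(void*)p ^ 0xdeadbeef;  /* 1-2: cast to integer, xor */
        int* q = (void*)(hidden ^ 0xdeadbeef);                /* 3-4: xor again, cast back */

        printf("%d\n", *q);                                   /* 5: dereference; prints 42 */
        free(p);
        return 0;
    }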


That is actually allowed in the standard; it specifically says that casting a pointer to an integral type of sufficient size is a reversible operation (though I'm not sure whether something like int i[2]; (int *)((int)i) + sizeof(int) is well defined).


Could you have a standards-conforming implementation where pointers were big-endian and other integral values little-endian? That would break your example code. I don't know of any architecture that would motivate that, of course...


You can serialize it to a hex string, put it in a json request that you send off to some cloud service, and then an hour later parse it back out of the json reply, with nothing left in RAM that looks like that pointer for the time inbetween.


I don't know of any language that has pointers and a GC. Are there any? C# lets you use pointers but only if you tell the GC to not move your object and promise to behave.


> "I don't know of any language that has pointers and a GC. Are there any?"

Objective-C used to have a garbage collector, though (at least with regards to iOS) Apple removed this capability with the introduction of ARC. At the following URL is some discussion of the Objective-C garbage collector: http://cocoasamurai.blogspot.nl/2010/12/objective-c-memory-m...


iOS never got the GC, only the desktop version got it, and afaik it was deprecated rather than removed entirely.


It's been removed entirely in Xcode 5.1.


OCaml, though in a very restricted sense:

http://www.cs.cornell.edu/courses/cs3110/2011sp/recitations/...


The Boehm–Demers–Weiser garbage collector is a fairly famous implementation of a GC for vanilla C, pointers and all: http://en.wikipedia.org/wiki/Boehm_garbage_collector


Yeah I'm aware of optional GCs. What I meant was languages that ship with a GC as standard and also have pointers. In the case of Boehm, since you choose to bring it in yourself, it's up to you to make sure you don't abuse it.


Well, pretty much any language with a GC can abuse the GC, regardless of whether it's optional or designed in from the start. It just happens to be a lot harder to abuse a GC than to forget to free/delete something.


There's optional GC[1] in a language like C++, and Rust has pointers with task-local GC.

[1] - http://en.wikipedia.org/wiki/Boehm_GC


> I don't know of any language that has pointers and a GC. Are there any? C# lets you use pointers but only if you tell the GC to not move your object and promise to behave

Oberon and Oberon-2 allow you to use non-GC pointers via the SYSTEM package.

Active Oberon additionally adds syntax support for untraced pointers.

Modula-3 has unsafe modules which allow you to use untraced pointers.

D has unsafe pointers in system blocks.

Ada and C++11 have support for optional GC defined in the language standard. However, so far there isn't any compiler vendor that really offers it.

In Java you can do a bit of dirty pointer tricks via the com.sun.unsafe package, but it is Oracle's VM specific. Although there are plans to make it part of the official library after Java 8.


You can have a GC without compaction.

Go has a GC and pointers, including internal pointers (e.g. to a field of a struct). Its GC started out fully conservative, then precise for the heap and conservative for the stack, and future versions will be fully precise.


Various Lisp dialects over the years have supported the use of locatives, or pointer-like objects.

In general, objects which can be moved or copied around by the runtime behind the programmer's back cannot have indefinite-extent hardware-address pointers taken directly into them. This includes objects managed by most halfway-decent garbage collectors (which tend to be implemented as stop-and-copy, not mark-and-sweep), but also includes such things as drawing surfaces in SDL and DirectDraw. Many runtimes take the C# approach -- provide a memory locking operation that fixes the object in memory while you do some pointer stuff on it and a dual unlocking operation that releases the object so it can be kicked around by the runtime again.


Some languages in the Wirth family have pointers and GC, e.g. Modula-3. It has a module-level notion of safety, so that modules using potentially unsafe pointer manipulation are marked as unsafe and break runtime safety guarantees. It also doesn't guarantee that all pointers will be roots for GC.


Objective-C has pointers and a GC.


See this: http://www.hpl.hp.com/personal/Hans_Boehm/gc/

But also with a very small number of restrictions, you can make C have non-conservative GC.

ANSI C forbids aliasing an object as any type other than its own, except through a character type. Furthermore, unions are to be interpreted as the last type they were accessed as.

The three places you run into issues are:

1) You can treat two structs that begin with the same values interchangeably so long as you only access the common beginning of the structures.

2) You can freely convert pointers to integers and back.

3) Storage returned by malloc() can be used heterogeneously (however, with strict aliasing rules, you can't switch types of the data once you've accessed it)

If you forbid those three things, then you can have a strict GC by adding runtime overhead to accessing data (once malloc'd data is accessed as a non-character type, you can treat it as that type forever).

However, lots and lots of code violates these rules (the most famous code that does is probably the fast inverse square root).[1]

http://betterexplained.com/articles/understanding-quakes-fas...


Now THIS is the kind of article I want to see on HN.

Saved for later reading, seems very interesting and well explained.



Nope, thanks for the link!


I don't understand. When sweep frees an object, it invalidates the "next" object of the previous object in the linked list, breaking the traversal next time a gc() is called. Doesn't the linked list have to be patched? (e.g., keep track of "prev_object" and set its next to unreached->next before freeing unreached)


I think `*object = unreached->next;` does that, since it's using a pointer to a pointer.


That's correct. It patches the "next" pointer of the previous object to point past the freed object.

It's a pointer to a pointer to handle the case where you're freeing the first object in the list. In that case, it's "firstObject" that needs to be modified, not some Object's "next" pointer.

Using a pointer to a pointer (while admittedly harder to read at first) lets you handle both of those cases with the same code.
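
For anyone who hasn't read the article yet, the shape of that sweep loop is roughly the following. This is paraphrased from memory with a minimal Object definition, not the article's exact code.

    #include <stdlib.h>

    typedef struct Object {
        struct Object* next;      /* intrusive list of every allocated object */
        unsigned char marked;
        /* ...value payload elided... */
    } Object;

    /* "object" always points at the slot that refers to the current node,
       whether that slot is the list head or a previous node's "next" field. */
    void sweep(Object** object) {             /* called as sweep(&firstObject) */
        while (*object) {
            if (!(*object)->marked) {
                Object* unreached = *object;
                *object = unreached->next;    /* patch whichever slot pointed at it */
                free(unreached);
            } else {
                (*object)->marked = 0;        /* clear the mark for the next collection */
                object = &(*object)->next;    /* advance to the next slot */
            }
        }
    }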


This technique is talked about in this Stack Overflow answer: http://stackoverflow.com/questions/12914917/using-pointers-t..., which also references this Slashdot interview (http://meta.slashdot.org/story/12/10/11/0030249/linus-torval...) with Linus Torvalds where he gives this as his "favorite hack".


Yup, I got the idea for this from Linus. Before then, I'd always done the uglier "if this is the first node then ..." special-case branch instead.


That was actually very easy to understand!

Thanks for that. I'm thinking of building a toy language, and this makes one bit of the magic die.


Storing the 'mark' bit with the objects themselves is not so good for COW semantics, but it should be just fine for this toy example, I think...


Nice post! It's very precise. I didn't have a hard time understanding it. Thanks for sharing!



