Rust has a static “garbage collector” (steveklabnik.com)





> I don’t generally find writing Rust to be significantly harder than using a GC’d language. I’ve been trying to figure out why that is.

That's quite surprising! Here are some examples of where things would be easier to write if Rust had a GC:

1. Any sort of lock free algorithm. This is a big one - hazard pointers and the like are much harder than letting the GC clean up.

2. Data structures which may contain "back" pointers (e.g. parent pointers in a tree).

3. Data structures which may be cyclic. For example the classic LRU cache is best implemented with a cyclic linked list which is hard to express in Rust.

4. Any sort of refactoring that may adjust ownership. E.g. going from T to Rc<T>. GC requires fewer choices so these refactorings are easier.

Surely this pain is real, even if Rustaceans think it's worth tolerating?

In fairness, precious resource cleanup (file descriptors, etc.) is easier without a GC.


Crossbeam makes lock-free data structures easier to write than in any other mainstream language I know of.

Regarding graphs, just use indices and vectors. It's often better to use indices for graph nodes anyway, for example in games, where an ECS design using IDs is generally preferable to direct references to objects.
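
For concreteness, a rough sketch of what that can look like (the Node/Graph types here are invented for illustration, not taken from any particular crate):

  // Edges are stored as indices into the `nodes` Vec rather than as
  // references, so no borrows are held between nodes.
  struct Node {
    value: u32,
    edges: Vec<usize>, // indices of neighbouring nodes
  }

  struct Graph {
    nodes: Vec<Node>,
  }

  impl Graph {
    fn add_node(&mut self, value: u32) -> usize {
      self.nodes.push(Node { value, edges: Vec::new() });
      self.nodes.len() - 1 // the new node's index doubles as its id
    }

    fn add_edge(&mut self, from: usize, to: usize) {
      self.nodes[from].edges.push(to);
      self.nodes[to].edges.push(from); // a "back" edge is just another index
    }
  }

Cycles and back edges are unproblematic because an index borrows nothing; the trade-off, discussed at length below, is that a stale index becomes a logic bug rather than a compile error.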

For a while, I was kind of obsessed with showing that Rust could do doubly linked trees and graphs just as well as C++ could. I now realize that this was a mistake, and I made a big mess of a lot of code in the process. Having a single owner and using IDs for "secondary" references is often preferable even in languages that easily allow multiple strong references to objects. In the small, direct references and an OO style can be convenient, but in the large, you often want to break up your graph code into components and systems anyway, and it's kind of nice to have the language push you to front loading that kind of design.


I know this works, but whenever I hear this advice, I can't help thinking: Aren't indexes and vectors basically just a way of smuggling pointers past the borrow checker?

It's not bypassing the borrow checker; it's bypassing references. C++ doesn't have a borrow checker, and the same pattern of forgoing references for integer IDs is seen there. References are just a tool to express certain pointer patterns, and like every other tool they have contexts for which they work well and contexts for which they don't. Really the most surprising result of all this is the broadening realization that references aren't the end-all abstraction for reasoning about pointer-like behavior (familiarity with smart pointers like C++'s shared_ptr makes this less of a surprising realization, but it's easy to initially dismiss a library-level feature as being less intrinsically fundamental than a language-level feature).

This isn't to say that futzing with integers is the best imaginable solution for these tasks. Language-level support (or at least stdlib support) for generational indices would be an interesting subject to pursue.


I think the point is that if you're juggling integers, you open yourself up to the huge correctness messes from before, even if theoretically you've gotten rid of your memory mess.

Granted, you could `Option`-up all your operations, but the dynamic GC languages (or even C++) will let you operate on stuff "correctly", so long as your implementation is right.

You can go `unsafe` and implement correct stuff and still be better than C++. I really feel like reference IDs aren't a great way to solve something like a doubly-linked list. Reference IDs make you lose almost all guarantees of correctness; unsafe + a good API will at least give you a fighting chance.

There are problems (like entity system stuff) where reference IDs are just the way to go, though


> I think the point is that if you're juggling integers

But I don't think anyone should be juggling integers themselves; they should be using a library (just like a library-defined smart pointer) that handles that (the presentation at https://www.youtube.com/watch?v=P9u8x13W7UE may be of interest).

> There are problems (like entity system stuff) where reference IDs are just the way to go, though

Sure, and there isn't anything stopping anyone from both using them where they're practical and not using them where they're not. Some of the comments in this thread seem to be suggesting that you must choose one or the other, but that's simply not the case; using references in one part of one's program and indices in another is totally fine, as is using references and indices together in the same part of the program. They're not exclusive.

In particular, nobody is saying that one should be reaching for indices regularly; I've written Rust for years and have never, ever needed them. References work just fine for plenty of applications. But we have other smart pointers for good reasons, and sometimes an application will call for the use of one of those smart pointers, and that's totally cool; again, references may be thoroughly useful, but they are not the end-all be-all of pointer-like abstractions.


Some of us have to write those libraries.

And the GCs too, so that they actually do not cause performance problems (e.g. unpredictable latencies) and help with debugging them instead of getting in the way.

> References are just a tool to express certain pointer patterns

I'd phrase it the other way - pointers are just a tool to express certain reference patterns. The solution you proposed - integers - is another. Conceptually, the way both are used, both are references - a way to indirectly access another object.

But if you do it with indices into another data structure, then you do, in fact, just work around the borrow checker, and escape to the land of freedom where you do whatever you want - and pay the price. Now you can pass them with impunity, sure... but now you also have to deal with indices that reference non-existent elements, or (depending on your usage patterns) indices that were not properly updated when an element was deleted from the middle etc.


> The solution you proposed - integers

I explicitly said that I don't think integers are the solution; it's simply that for tasks like e.g. managing the thoroughly interconnected world state of a video game, where you have a web of dynamic entities, references aren't the right solution, and so people start to concoct other solutions using the primitives they have available. I mention that I think generational indices are the solution (or at least a solution).

> But if you do it with indices into another data structure, then you do, in fact, just work around the borrow checker

Why is it "working around the borrow checker" when it happens in Rust, but "employing an ECS architecture" when it happens in C++? The fundamental insight is that pointers and references as we know them kinda sorta suck at managing arbitrarily-interconnected webs of entities with dynamic lifetime; a garbage collector neatly solves this use case (dynamic lifetimes are their entire jam), but if you're using Rust or C++ then we're assuming that you have stricter performance requirements than a typical garbage collector provides (and if you don't, then you should consider using a GC'd language). An ECS-like approach using indices (which, if you squint and turn your head, resembles a very specialized form of garbage collection) is a way to keep entity management tractable without going all-in on GC.

> now you also have to deal with indices that reference non-existent elements, or (depending on your usage patterns) indices that were not properly updated when an element was deleted from the middle etc.

Generational indices address these problems. Again, I don't think anyone should use integers directly; you should seek to leverage abstractions that other people have already sat down and thought about.


I don't see how regular pointers don't solve the problem with non-tree-like data structures in general. In C++, for the most part, for a cyclical or back-linking data structure, you'd just do that, and then have a single "real owner" of the whole thing that knows how to tear it down; and that one gets an owning reference. The class of problems that can't be solved in this manner is substantially smaller than the class of problems that can't be solved with tree-like structures only.

Anyway, the point still stands: this is bypassing the borrow checker, because something cannot be adequately handled by it. And by doing that, you're back to square one with all the problems that the borrow checker was supposed to solve.


C++ is getting one via static analyzers though.

If this is referring to the proposed C++ lifetime profile, it seems as though recently it has scaled back its ambitions from detecting all misuses of references to now detecting only common misuses of references. Still useful for C++ programmers, certainly, but it's no longer really comparable to Rust's borrow checker. And even with it, it's not going to change anyone from using an ECS to using a reference scheme; the point is that references aren't the best pointer abstraction for these sorts of thoroughly-dynamic graphs with elements of dynamic lifetime.

Yes, I am referring to that proposal. As for ECS, indexes are not a must.

https://www.gamedev.net/blogs/entry/2265481-oop-is-dead-long...

In this particular case, currently Rust's borrow checker isn't of much help either, hence the workaround with generations.


A broken one.

I find this argument deeply frustrating. C++ doesn't even try to make graph memory management safe. Rust has several different strategies for graphs, all of which have some drawbacks, but which are all nonetheless safe. But because they aren't quite as ergonomic as C++, C++ somehow wins. No. C++ graph management is unsafe, regardless of whether you're using the lifetime profile.


My devil's advocate comments are exactly about "But because they aren't quite as ergonomic as C++, C++ somehow wins", better language ergonomics are critical to a language's adoption.

One reason why Pascal, Ada and Modula-2 lost against C was that they weren't ergonomic enough in the eyes of many developers.

The typical magazine articles of the time argued that using them felt like ceremony and straitjackets, written by those whose security wasn't among their top concerns.

So I have a bit of experience being on the losing side, arguing that using array indexes with bounds checking (which could be disabled if really needed) was a better option than just incrementing away bare-bones pointers. Or why explicit type casts were a better option.

Ideally, I would be using C# or Java for everything I do, so it isn't about C++ vs Rust as such, rather making the point that it is not only about grammar and semantics when comparing programming languages.

Given Biscuit's paper, I would say Go is on the good path if their Go 2.0 plans actually play out as promised. Which remains to be seen, nonetheless.

In the end it is a matter of ergonomics and perceptions.


Yes, but that's kind of the point: Rust's borrow checker only really works for data-ownership graphs that are trees, and although it does pretty well for that use-case, it's not perfect and research continues on how to make it less conservative while still being safe (consider the non-lexical-lifetimes feature arriving in a near-future release of Rust). A static borrow-checker that works for directed-acyclic or even cyclic ownership graphs would be wonderful, but that's definitely an open research topic, not a production-quality-compiler topic.

So if you can't prove your ownership graph is safe at compile time, you have to do it at runtime. For some ownership graphs, reference counting with Rc<T> is the right choice, but in the general case indexes and vectors are the least-verbose way to represent a runtime ownership graph.


Kind of but not really. You still can't access an undeclared element, and you still can't take a mutable and an immutable reference to an element by index. The borrow checker only cares about memory safety, and index accessors are implemented in a memory safe manner (whether that's panic!ing or returning an Option<>). If you want… umm… "data safety" (I'll call it for now), so you can't use an index to access an element that you didn't intend to access because stuff got shifted around for instance, then generational indexes will solve that issue for you.
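
As a tiny illustration of what "memory safe manner" means here (plain std, nothing project-specific):

  fn main() {
    let v = vec![10, 20, 30];
    let stale: usize = 7; // an index that no longer points at anything

    // Both forms are memory safe; neither can read out of bounds.
    match v.get(stale) {
      Some(x) => println!("got {}", x),
      None => println!("index {} is not valid any more", stale),
    }
    // v[stale] would panic! here instead of returning None.
  }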

And to clarify something that's probably obvious. A direct pointer to a vector element could easily end up becoming invalid when the vector has to be copied somewhere else for it to grow or if the element gets deleted in another thread, so you can't just easily use pointers to vector (or array) elements in Rust.


> The borrow checker only cares about memory safety, and index accessors are implemented in a memory safe manner (whether that's panic!ing or returning an Option<>).

Reductio ad absurdum: Implement the entire heap as a single array with typed views and you have the asm.js model. Memory safety is ultimately just another class of bug and it's worth keeping the end goal in mind. (And the asm.js approach is resistant against smashing the return address even if the hosted application has buffer overflows out of every orifice, and of course it provides isolation from the rest of the host process, so segregating the heap into a separate memory space has real security value.) I use indices and tagged handles instead of pointers in low-level C code all the time for sound engineering reasons as it's often the superior solution (and the move to 64-bit pointers created further incentives), but I think skepticism is warranted towards the growing Rust prescription of indexed arrays whenever you're dealing with non-tree-structured data.

Probably the greatest thing about pointers is that they enable generic code without any abstraction cost. It's painful enough to work in a language like Java with object references but without interior pointers to array elements or structure fields. You may feel tempted to introduce a case-specific 'fat pointer' pairing of an array reference with an index, which is not only awkward to work with but puts you in the absurd situation of having an index type which will often round up to 16 effective bytes on a 64-bit platform due to packing alignment for the next value in memory, when often one of the goals of replacing pointers with indices on a 64-bit platform should be to reduce the size from 8 bytes to 4 or 2 bytes.


> the growing Rust prescription of indexed arrays

I think there might be a disconnect here between people who talk about Rust and people who use Rust, because the use cases that are amenable to indexing into arrays are few and far between. I have never encountered such an approach in any Rust project that I have contributed to; tracking indexes is far, far less common than e.g. something like the Rc smart pointer. It does get talked about a lot though, for some reason.


I think the point is that Rust's checking allows you to statically know you don't have a dangling pointer. Indices explicitly can be dangling or accidentally incorrectly point at a new value that replaced the same slot or whatever. You suddenly need to manually decide when to free values.

It does avoid the worst security issues (bad reads are at least contained to the same data structure, which can easily be a serious security issue but at least not arbitrary memory reads) but for a lot of purposes it has the same properties that you have with C style pointers.

It seems like a giant problem that Rust advocates are strangely quick to gloss over as a good pattern.


Usually programmers expect to destroy objects in some specific location. For example, if a widget is removed from a window and then goes out of scope, then most programmers would expect it to be destroyed shortly thereafter. If there are actually references to the object remaining, contrary to programmers' intent, then one of four things can happen:

1. Use-after-free (C, C++). This is a logic bug and a security problem.

2. "Logically" dangling pointers (safe languages with non-generational indices). This is a logic bug, but not a security problem.

3. A memory leak (safe languages with garbage collectors). This is a bug that can be difficult to track down.

4. Panic/exception (safe languages with generational indices or weak pointers). This is a runtime failure.

I'd argue that, of these four options, (4) is generally the best. The ideal would of course be a static guarantee instead of any of these. However, static guarantees always come with restrictions of some kind. For the truly unrestricted case, in which your data references are completely irregular, I'm not sure you can really do better.


The point still stands. If you hold an index into an array of widgets, and the widget you reference is removed or replaced in the array, you could get behaviours 1, 2 or 4 depending on your implementation. Rust's index-into-array pattern removes the possibility of memory leaks, but it also removes most of the benefit of having a borrow checker in the first place.

It's basically a less performant, less ergonomic version of using raw C pointers. Less ergonomic because you need to write your own allocator for your array. And it's less performant because fetching the associated struct from memory requires 2 fetches rather than 1. If you're convinced dynamic memory management is the only solution to your data structure, rust's unsafe{} seems a better choice. It's in the language for a reason.


You can't get security vulnerabilities due to use-after-free if you use indices. Don't use unsafe for this.

Sure you can.

If my index 2 now contains the contents of, let's say, index 5, due to array rearrangement after deletion of the element at 2, then whatever happens with data[idx].is_data_valid() is not what the developer would be expecting.


Easily solved with generational indexes as described in https://www.youtube.com/watch?v=aKLntZcp27M

I have seen that talk.

To me it feels a bit like a workaround for something that cannot be validated by the borrow checker, because it is something that developers have to go the extra mile to implement, or get a third party library, which someone has to validate that actually works as expected.

In a way, it isn't much different than expecting C and C++ developers to use static analysis for lifetime validations.

Meaning, using a tool outside of the core language for added safety.


> To me it feels a bit like a workaround for something that cannot be validated by the borrow checker

Is it so hard to believe that references are not the correct abstraction for 100% of use cases? Reaching for something other than references when references are the wrong tool for the job is not working around the borrow checker; it's choosing the right abstraction for the right task. Being hung up on the borrow checker is missing the forest for the trees.

> get a third party library, which someone has to validate that actually works as expected.

How is this different from using any third-party library, ever?

> Meaning, using a tool outside of the core language for added safety.

Rust explicitly supports users defining their own smart pointer types to provide pointer-like abstractions with custom semantics; using tools outside of the core language for added safety (for whatever definition of "safety" one wants) is completely expected and encouraged.


The point being, that for developers focused on managed languages, C++ takes the role of the unsafe layer.

Meaning Java/C++, .NET + (C++/CLI | C++/WinRT), Node.js + C++, Swift / Objective-C++.

So with C++ improving its safety story, usually with ideas taken from Rust, Rust's ergonomics and tooling need to have a better story than C++'s to replace it in the above stacks.


Interesting, because in my experience people use Java/C, Node/C, Swift/Objective-C, Python/C, Ruby/C... rarely do I encounter anyone using C++ as an extension language (I encounter Rust being used more often than C++, in fact, but perhaps that's an artifact of the circles I'm in).

As for C++ adding more static analysis, its lifetime analysis is a nice-to-have, but doesn't compare to Rust's borrow checker. You simply can't tack on a sound borrow checker to C++, because the language wasn't designed to accommodate one, and trying to impose the concomitant rules regarding mutability, aliasing, and single-ownership would break every C++ program ever written. For anyone who prioritizes sound static analysis WRT lifetime verification, C++ isn't a competitor to Rust. And there are plenty of people for whom that isn't the case, and they will continue to use C++, and that's not a problem. Rust exists to provide an alternative systems language for people who favor memory safety, and it's pretty good at that. :)


I guess you spend more time in UNIX platforms?

As for being an alternative systems language for people who favor memory safety, I fully agree; my point is that it still needs to improve its productivity and ecosystem.

At the CppCon 2018 Embedded Development panel, one theme was that only now are companies slowly becoming willing to migrate from C to C++11(!), a language that allows for a progressive rewrite from C while keeping the existing toolchains.

Another productivity example, with .NET I can get the safety I advocate, while C++/CLI/CX/WinRT allow for a seamless interoperability story with native code.

So even if the lifetime analysis is a subset of what Rust is capable of, mixed debugging and seamless CLR/COM/UWP integration are more attractive than rewriting that code in Rust, without having VS integration and WIP integration with Windows APIs.

I think Rust, in its current state, is better suited for GC-free scenarios with either CLI or headless execution.


> I think Rust, in its current state, is better suited for GC-free scenarios with either CLI or headless execution.

That's funny, because the largest deployment of Rust is in Firefox, which has a UI.


So can you point us to an example in Firefox's source of how to create a UI widget in pure Rust code?

As far as I am aware, Rust is only being used for low level rendering, not widgets.


I can confidently predict that the lifetime profile for C++ will get almost no real-world use.

Requiring a library is only really a big deal in the C/C++ world though. Everywhere else it's trivial, and most projects will depend on some foundational libraries.

That's pretty different to static analysis tools which most likely won't always work.


Nobody is asking for generational indices to be added to the core language. There would be zero benefit. We have a package manager for a reason.

Maybe we could uplift them to the nursery, but again, generational indices are nowhere near the top 10 crates on crates.io.


That is not use-after-free in the way that the security community typically uses the term. This kind of problem is an order of magnitude less likely to lead to RCE.

> 2. "Logically" dangling pointers (safe languages with non-generational indices). This is a logic bug, but not a security problem.

Many logic bugs become security issues. This specific one -- by way of TOCTOU races -- is one of the richest sources of security issues in Unix. The proposed "dormant index" solution is likely to generate a lot of these as well.


> The proposed "dormant index" solution is likely to generate a lot of these as well.

What is a dormant index? I've not seen anything like this proposed, or heard the terminology before.


I borrowed the terminology from ryani’s comment above; it’s the same thing you are discussing iirc, weak reference expressed by an index into a container and some way to identify that container (expressed or implied).

I like the “dormant” name because unlike a weak pointer (say, in Java or Python), you can’t use it directly - you have to borrow the actual object to use it; so it lies “dormant” until you wake it up. It’s not (necessarily) weak because there is no automatic destruction once all other references are gone.


I think the disconnect for me is that circular references just aren't "irregular", they are the natural expression for doubly linked lists, trees with references to parents, etc: these usage patterns are the primary cause of memory issues that I come across in C++.

In other words, static safety in the face of circular references is what would provide me a major clear advantage for better-than-C++, and unfortunately that doesn't seem to be something that is even seen as an opportunity for Rust given all of the rhetoric about that being an abnormal pattern and index-into-array being safe enough under that world.


> In other words, static safety in the face of circular references is what would provide me a major clear advantage for better-than-C++, and unfortunately that doesn't seem to be something that is even seen as an opportunity for Rust.

Having a compiler able to solve the halting problem would also be “a major clear advantage for better-than-C++”, but too bad it's impossible … Arbitrary circular graphs can't be statically managed, that's it. You need runtime support, and it will either be a garbage collector or some kind of construction with arrays and indices.


Circular graphs are perfectly possible to detect and manage at compile time. It just takes a massively more complex borrow checker running full dependency analysis to find where and if the cycle is broken or the whole thing released.

Checking for graph cycle is elementary CS - planning the entry/exit nodes is not. It is similar to link time optimization to implement.


> Checking for graph cycle is elementary CS

Yeah, but that's for built graphs, not graphs that will be built at runtime? Seems really, really different.


Indexes in an ECS system are basically a way to implement weak references. If you use generational indexes, it's a checked weak reference. Otherwise it's unchecked. Perhaps some future Rust-like language will have weak references built in?

Even with unchecked weak references, you can't get arbitrary undefined behavior. It's more like Java where you can defeat the type system due to the way generics work, but it's still memory-safe.

Also, relational databases work the same way. Sometimes people use constraints to avoid dangling references, but not always.


Well it kind of reminds me of this - back when BASIC had just plain arrays (DIM), one had to do tree structures using indices. Later (with Pascal/C/C++) you would tend to use pointers, but then back in SQL land you would go back to indices (foreign keys?).

So if SQL can deal with it, and BASIC too, what would be the issue in Rust? Even in Pascal/C/C++, especially on 64-bit platforms, encoding an index may be the better thing to do, and in garbage-collected languages, the fewer pointers to explore, the less work the GC has to do.


In SQL, foreign keys are semantically closer to references in many ways, not least because deleting a row doesn't change the keys of other rows. So it's not really an index - it's just a row ID. And then with ON DELETE you can make it safe in a sense of not having dangling references (because it'd either block deletion if such were to appear, or cascade to remove all rows that have them). Still a bit closer to raw pointers in that you can treat it as a raw number and point at some random rows this way. But still.

In BASIC, yeah, you pretty much had to use indices. And it was very much not fun! BASIC was actually the first thing I remembered in the context of this discussion, because I wrote a lot of that kind of code.

The other thing that it reminded me of is J2ME. People used indices into arrays of primitives there to avoid GC, because it was not really practical given the memory constraints of your average phone back then.


You have lots of options for non-borrowing references to instances of T: Indices, Arc<T>, RefCell<T>, Cell<T>, dictionary keys, paths to files containing serialized Ts... the list is unbounded. This is all perfectly fine: It's not the goal of the borrow checker to prevent simultaneous references to data, that'd be ridiculous and useless. The goal is to prevent unsafe simultaneous access of data.

And accessing data through an index still requires you to first acquire e.g. a borrow, and the useful invariants of that borrow still hold: for the duration of that borrow, you have freedom from data races - there will be no other borrows to that instance when you're holding a &mut T, or there will be no other mutable borrows to that instance when you're holding a &T.

An index may allow you to reduce the scope and lifetimes of your borrows, but it does not allow you to eliminate your borrows.
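
A minimal example of that last point, assuming nothing beyond std:

  fn main() {
    let mut items = vec![String::from("a"), String::from("b")];
    let i: usize = 0; // the index itself is a plain integer and borrows nothing

    // Using the index means borrowing `items` at the point of access...
    let first: &mut String = &mut items[i];
    first.push('!');
    // ...and while that &mut borrow is alive, no other borrow of `items`
    // is allowed, exactly as with any other mutable borrow.
    println!("{}", items[i]);
  }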


A nice side effect: the index can now be "compressed", e.g. to just 1 byte, 2, 4, or more. Such an approach comes in useful in GC'd languages too, as you limit the work a GC has to do going through the roots, checking anything that can be a pointer.

Indeed. If you take that to the logical conclusion, you reach the K/J/APL mindset - objects are not a useful abstraction. Just use arrays with correlated indexes and be done with it.

So I used to think this, but I've come around a bit. What indexes do is let you put a pointer "to sleep"; it can be woken up only by pairing it with a live data structure which you correctly have safe ownership of at the time. This "dormant" pointer can be safely stored in data structures that have no concept of the containing object's lifetime, which in games is exactly the common use of entity handles (usually implemented as generational indices), hibernating until the original object is really needed.

I think it's possible to write a DormantPtr library that encapsulates this pattern with better ergonomics than vec<> + generational index.


Kind of like a WeakRef in Java?

It depends on what you mean by “smuggling.” This implies something bad; I don’t think this pattern is inherently bad. It’s just different. It’s another example of moving a compile-time check to a runtime check, and depending on what you’re doing, can have better cache locality and better performance.

It has worse ergonomics, though. Most obviously, every time you want to access data through an index ‘reference’, you need to name two things: the array and the index. With a pointer you only have to name one thing. Thus, you might have to write `self.widgets[widget].foo` instead of `widget.foo`: not the end of the world, but annoying if you have a lot of references floating around.

Further, even if Rust guarantees memory safety, the approach is still less safe in the broader sense of guaranteeing program correctness. For one thing, if you just use raw integers for your indices, you basically have the equivalent of a C void pointer, a pointer to some unknown type. The compiler doesn’t know what the index is for and can’t catch you if you accidentally index into the wrong array. You can partially solve this by making a newtype wrapper for indices into each type of array, but even that can’t differentiate between multiple arrays of the same type.
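
To make the newtype point concrete, a sketch of what that partial solution might look like (Widgets/Nodes and their ID types are invented names):

  // Distinct index types, so a NodeId can't be used to index widgets by accident.
  #[derive(Clone, Copy)]
  struct WidgetId(usize);
  #[derive(Clone, Copy)]
  struct NodeId(usize);

  struct Widget { label: String }
  struct Node { weight: f32 }

  struct Widgets(Vec<Widget>);
  struct Nodes(Vec<Node>);

  impl std::ops::Index<WidgetId> for Widgets {
    type Output = Widget;
    fn index(&self, id: WidgetId) -> &Widget { &self.0[id.0] }
  }

  impl std::ops::Index<NodeId> for Nodes {
    type Output = Node;
    fn index(&self, id: NodeId) -> &Node { &self.0[id.0] }
  }

  fn demo(widgets: &Widgets, nodes: &Nodes, w: WidgetId, n: NodeId) {
    let _label = &widgets[w].label;
    let _weight = nodes[n].weight;
    // let _oops = &widgets[n]; // compile error: NodeId can't index Widgets
  }

As noted, though, this still can't tell apart two different collections of the same element type.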

Also, if you have a system for ‘freeing’ array indices and reusing those array slots for new data, you run the risk of keeping an index around too long and causing a semantic use-after-free: not as bad as a traditional memory use-after-free, usually, but certainly a source of incorrect and unpredictable behavior. And whereas tools like ASan let you ‘flip a switch’ to catch traditional use-after-frees if they happen in development builds, with arrays you have to add the extra checking manually. (On the other hand, if you do design extra checks, they might be cheap enough to run in production, where ASan is probably not. And yes, I know, when it comes to security, “if they happen in development builds” is quite meager comfort.)

But I’m not saying all this just to be negative. Personally, it’s my hope that someday in the future, Niko and co. will find ways to make the borrow checker more expressive when it comes to parent-child relationships, and in other situations where it currently struggles. If so, it won’t just help with pointers, but with array-based designs as well: it’ll become a more viable approach to have the borrow checker check array indices, by adding lifetimes to those array index newtypes. That would remove all of the aforementioned safety issues, while keeping the performance benefit of arrays – indeed, increasing it, since with the right design you would be able to safely disable the bounds check when indexing.


> Also, if you have a system for ‘freeing’ array indices and reusing those array slots for new data, you run the risk of keeping an index around too long and causing a semantic use-after-free

One of the many great things described by Catherine's keynote about ECS's in Rust: generational indices. Generational indices are a dynamic fix for this problem.
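
Roughly, the idea looks like this (a sketch from memory, not the keynote's actual code):

  // Handles carry a generation; each slot remembers the generation of its
  // current occupant, so a stale handle is detected instead of silently
  // aliasing whatever reused the slot.
  #[derive(Clone, Copy, PartialEq)]
  struct Handle { index: usize, generation: u64 }

  struct Slot<T> { generation: u64, value: Option<T> }

  struct Arena<T> { slots: Vec<Slot<T>> }

  impl<T> Arena<T> {
    fn get(&self, h: Handle) -> Option<&T> {
      let slot = self.slots.get(h.index)?;
      if slot.generation == h.generation { slot.value.as_ref() } else { None }
    }

    fn remove(&mut self, h: Handle) -> Option<T> {
      let slot = self.slots.get_mut(h.index)?;
      if slot.generation != h.generation { return None; }
      slot.generation += 1; // invalidates every outstanding handle to this slot
      slot.value.take()
    }
  }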

> Personally, it’s my hope that someday in the future, Niko and co. will find ways to make the borrow checker more expressive when it comes to parent-child relationships, and in other situations where it currently struggles.

I agree it'd be nice, and I've given it a lot of thought, as has Niko. In the limit, though, I think you're always going to have the situation in which there just aren't any static guarantees and the graph structure is truly unrestricted. In those cases, I'm not sure you can really do much better than dynamic checks of some kind, and it's hard to get much more efficient than a single bounds check.


> One of the many great things described by Catherine's keynote about ECS's in Rust: generational indices. Generational indices are a dynamic fix for this problem.

Hmm… I'd heard of stuffing a generation ID into unused bits (which can help but is limited), but I see this proposes having the index type be a full (index: usize, generation: u64). Well, that works, but now instead of indices potentially being half the size of pointers (if you don't need more than 2^32 objects), they're double the size. Also, each object must store a generation, and you have to check that each time you use the index. (So it's not just a single bounds check, though I suppose you could turn the generation check off in release builds or something.)

If you're willing to accept that overhead, it seems to me that you may as well just store a pointer instead of an index; design the arena to ensure that the pointers remain valid as long as the arena itself lives, and use lifetimes to track that (much easier than managing lifetimes of each object). That way you don't need to keep track of the array and the index separately, and you also save on the bounds check. Well… I guess that with an array you can have unique mutable access, whereas with pointers you'd have to rely on interior mutability. But it sounds like a pain to ensure uniqueness when threading around &mut references to the arrays (especially if there's one big object with all the arrays); you could never keep references across calls even when that would be completely safe. I would rather rely on interior mutability. (Have I mentioned that I think Cell should be built into the language?)


Yeah, I've given a lot of thought to ideas similar to your last paragraph, but all my attempts ended up having worse usability than just using arrays and indices. I eventually gave up and decided to stop worrying and love the indices. :)

I should note that one nice thing about the ECS style is that it's built around threading around Systems, which consist of references to the various Components. So it's a natural fit for the borrow checker.


Though one of the nice things about using indexes is that you can lookup multiple arrays with the same index. This can be very useful for performance to break up a large data structure into smaller ones (e.g. to fit more elements in a cache line when you only use a subset of the structure for a complex operation) whilst still keeping their relationship intact. Further it’s also useful for all the reasons a relational database is in terms of querying data relationships.

Indeed. As a colleague of mine once noted, “eventually you write FORTRAN code, no matter what programming language you happen to do that in”.

FWIW, it is possible to make it a bit more ergonomic:

  // Assumed minimal definitions of Container and Widget so the sketch compiles:
  pub struct Widget {
    name: String,
    children: Vec<usize>, // indices into Container::widgets
  }
  pub struct Container {
    widgets: Vec<Widget>,
  }

  pub struct SmartRef<'a> {
    container: &'a Container,
    widget: usize,
  }
  impl <'a> std::ops::Deref for SmartRef<'a> {
    type Target = Widget; // Widget provides non-traversing functionality
    fn deref(&self) -> &Widget {
      &self.container.widgets[self.widget]
    }
  }
  impl <'a> SmartRef<'a> {
    fn children(&'a self) -> impl Iterator<Item = SmartRef<'a>> {
      self.container
        .widgets[self.widget]
        .children.iter().map(move |w| SmartRef { container: self.container, widget: *w })
    }
  }

  fn boo(container: &Container) {
    let root = SmartRef { container, widget: 0 };
    println!("name: {}", root.name);
    for child in root.children() {
      println!("child name: {}", child.name);
    }
  }
(removed GAT remark -- it does not apply here; was thinking about generalizing this with traits)

Hmm… that would work, but at the cost of requiring the data to be immutable or use interior mutability. It also removes the size advantage of storing indices over pointers, unless you only make SmartRefs temporarily rather than storing them in your data structures.

Yes, the idea is that you only create them temporarily.

Mutability is also possible to some extent with these "smart" pointers. It gets a bit trickier and less ergonomic, though. See https://play.rust-lang.org/?gist=fbf1c24397e7020c95774bf0906...

Another option would be to store something like "Rc<RefCell<Container>>" instead of "&'a mut Container", in which case you will be able to achieve something that behaves like multiple mutable references (with all the concurrency issues of them).


Yeah, that's a pattern we used all over the place in games where we also abused the hell out of pointers (some in-place loaders I've seen would turn your hair white) because the pattern has other merits.

If you think about it, an index is just a pointer that's checked for validity every time it's used. That's precisely what you want in the case of irregular memory access patterns.

I feel like raw pointers and indices are about the same. Indices can be runtime bounds-checked, with a violation resulting in a crash. Raw pointers—if they're single heap allocations—are also runtime validity-checked, in the sense that violations will (probabilistically) result in a General Protection Fault, given a large address space and ASLR.

References (i.e. pointers the compiler knows about), on the other hand, are strictly better than either. Indices can only be bounds-checked at runtime, while references can be both alias-checked and bounds-checked (if they're references into a slice of memory that is known to the compiler) at compile-time.

But, of course, you can't do math on references. Construct a pointer through an integer cast + math, and now the compiler has no idea what that thing is, what it's inside of, or how many other pointers point to the same place it does.


> Raw pointers—if they're single heap allocations—are also runtime validity-checked

I prefer my runtime validity checking to not come with CVE numbers, the corruption of instances of completely unrelated types, and other such heisenbugs.

> I feel like raw pointers and indices are about the same

On large codebases with lots of contributors, there's several orders of magnitude of difference between the number of bounds-checked indices that could be "corrupting" your instances of type T (only those used to index arrays of T or things containing them) and the number of pointers that could be corrupting your instances of type T (basically any pointer in the program whatsoever).

Frequently with a similar "several orders of magnitude" difference in debug times.

Conceptually similar, practically not.


> > Raw pointers—if they're single heap allocations—are also runtime validity-checked

> I prefer my runtime validity checking to not come with CVE numbers, the corruption of instances of completely unrelated types, and other such heisenbugs.

I agree with your point, but DoS attacks do get you CVEs as well.

Personally, whenever I've gotten into writing Rust, I was quite worried about the relaxed way you were told to use runtime-checked structures after many pages describing how Rust has such strong statically-checked safety guarantees.

I get that you need both because compilers and compiler research aren't close to proving many cases that programmers need to make use of, but selling both the ease of runtime-checked structures and how strong the static checking is feels a bit too much like trying to have it both ways. Sure, both are true, but then strong static checking doesn't really mean the same thing (that you're sure your program won't do certain things), because you now have to do the same kind of reasoning and debugging you would in other runtime-checked languages.


> selling both the ease of runtime-checked structures and how strong the static checking is feels a bit too much like trying to have it both ways. Sure, both are true, but then strong static checking doesn't really mean the same thing

I don't quite understand this sentiment. If you try hard enough, you can verify anything at compile-time (modulo the halting problem, etc.). If you want, you can also use a language that verifies nothing at compile-time and does all verification at runtime. Where we choose to draw the line between static and dynamic verification depends on our requirements. The existence of dynamically-verified entities does not obviate the usefulness of statically-verified ones; Rust statically guarantees memory safety and data-race safety, and using e.g. Rc or indices into an array doesn't change any of that. If you're simply looking for a systems language with even stronger static guarantees than Rust, then look at ATS.


ASLR loads segments into random places, but it doesn't split the heap up. Memory allocators typically (always?) arrange things contiguously for ease of management.

> Raw pointers—if they're single heap allocations—are also runtime validity-checked, in the sense that violations will (probabilistically) result in a General Protection Fault, given a large address space and ASLR.

Not for off-by-constant-offset errors it won't. Even with address sanitization there's still a significant chance your overflowed offset will correspond with a valid part of some other array--not really detectable at runtime in C.


This was extensively discussed on reddit and a few other places (not HN afaik, or I couldn't find the thread), after the video [1] Jonathan Blow posted on the topic.

I think the tl;dr is: opinions vary.

Using indexes manually doesn't bypass the borrow checker; it's just a different, manual memory allocation and management strategy.

It preserves memory safety, but it's questionable whether you're better or worse off in terms of correctness of application logic when you use it.

...you're probably better off using the borrow checker (that's why it exists) or an abstraction in a crate to deal with this sort of problem, regardless of whatever implementation strategy it uses internally (unsafe, this, etc).

[1] - https://www.youtube.com/watch?v=4t1K66dMhWk


No. On 64bit machines, a 32bit index saves you significant memory, and if you can use a 16bit... even more.

These savings matter a lot in some domains of computing.


An index into any std data structure has type usize, though, and usize is 64 bits on a 64 bit machine.

They don't have to be stored as a usize. This is slightly annoying because it may add casts, but not if one already has a newtype wrapper around the integers: the type stored in the wrapper could be a u32, with the conversions to usize hidden by the indexing APIs.

2 cycles or so, I think. In my experience, this cost is usually overwhelmed by the inherent (to this class of algorithms) cache misses.

2 cycles for a 32-bit to 64-bit extension? I don't think that makes sense: for instance, on x86-64, there's no cast necessary, just use the %rax register instead of the %eax one (plus, memory addressing works fine with 32 bit registers directly). I believe the same is true of AArch64 (64-bit ARM) with the 32-bit register W0 being the bottom half of the 64-bit register X0.

Yeah, your intuition seems correct to me. I wonder if I was thinking about 64 to 32 signed, or if I just didn't know.

Using an index/ID rather than a direct object reference basically excludes the object from any kind of automatic memory management. Aren’t you just arguing that they should all be managed manually?

The object will still be freed when its owner is dropped... It's just the accesses to the object that are manually managed, not memory.

There is no way to tell if there is an index to the object or not, so obviously its lifetime has to be managed manually.

Indexes aren't references to objects, they're references to slots. The lifetimes of the objects are determined by whoever puts them into and out of those slots. That doesn't have to be determined manually.

Sure, objects aren't deallocated when the index is dropped, but that doesn't mean it isn't being automatically deallocated somewhere else for some other reason.


Wherever the object is deallocated, it can't be done safely via the language or the language's runtime, because those indices are completely opaque. I'm not saying this is a bad thing, it definitely has a use case, but it isn't really showing off a viable alternative to GC, and it is very weird for it to be stated as such. All manual memory management schemes work as you describe; they aren't classed as automatic because the language runtime (or standard libs) can't provide it for free.

I don't see any reason why you couldn't have a Rust ECS that keeps track of the number of outstanding references to each entity and doesn't free them while there are any references left. You'd have an opaque "reference" object instead of an index. (Care would have to be taken to make sure the references can't go dangling when the vector is resized, but it should be possible.)

But usually programmers intend to destroy entities in one place; for example, in UI code, a programmer typically expects a widget to be destroyed once it's removed from the window. It can be beneficial to have the runtime system diagnose if a reference to an object thought to be destroyed is actually used. So I haven't seen anyone actually go to the trouble…


At this point you're basically writing a garbage collector. Which you might as well do with raw pointers, if it's a part of the language, and it will be more efficient.

Crossbeam is awesome. That said; I think his point is more on the "how hard is it to write a crossbeam-like library in Rust compared to a language with a GC".

From crossbeam docs [1]: "Programming languages that come with garbage collectors solve this problem trivially."

[1]: https://docs.rs/crossbeam-epoch/0.6.0/crossbeam_epoch/


It's not just the core mechanism of crossbeam-epoch that I'm talking about: it's the entire ecosystem that surrounds it. Crossbeam-epoch adds some syntactic overhead compared to GC, but it comes with a whole bunch of tools that most GC'd languages don't have, including a compiler that ensures that you take locks when you're supposed to.

> for example in games, where an ECS design using IDs is generally preferable to direct references

In scripting/gameplay parts of games maybe.

In rendering or other performance-critical components, pointers are just faster. They are a single RAM reference; IDs are 2 RAM references (and in the case of languages like C# or Rust, plus bounds checking).


Sure, and fortunately ECSes are usually for managing world state, leaving one free to use references for graphics, physics, AI, etc. An ECS-esque design for one component of the system does not preclude using pointers elsewhere.

> An ECS-esque design for one component of the system does not preclude using pointers elsewhere.

I know how ECS work, but GP was advocating Rust's way i.e. not using pointers at all.


Not using pointers at all for graphs.

I suspect a lot of the data where you want to use pointers for efficiency, is already in stricter shapes than graphs.


> Not using pointers at all for graphs.

And trees.

> I suspect a lot of the data where you want to use pointers for efficiency, is already in stricter shapes than graphs.

In games, graphs are used for pathfinding and other AI, for skeletal animation incl. IK.

Trees are everywhere: scene graph, bounding volumes, space partitioning, many others.


If you have a tree then your data necessarily isn't cyclic or self-referential, in which case you likely won't have any problem using references.

> then your data necessarily isn't cyclic or self-referential, in which case you likely won't have any problem using references.

Because of the cache hierarchy, I usually want tree nodes to be located in nearby areas of RAM, i.e. a small arena allocator per tree/graph. This creates cycles: nodes are owned by the arena and yet they need to have pointers between them.

I know about custom allocators in Rust, but still, such a data structure is much simpler to express in C++ with unsafe pointers. Games often know maximum sizes at compile time (e.g. in GTA5 there’s a hard limit of 255 skeletal bones) so that thing becomes a trivially simple wrapper around std::array.

Another problem with Rust references for trees: sometimes nodes need to have pointers to parents. That again creates cycles.


Could you elaborate on the "nearby areas of RAM" part? From what I've been learning, cache lines are only 64 bytes, so if you can't fit things in the same 64 bytes then it's not worth thinking about. What am I missing?

1. If these structures take 100 bytes, they will span 2-3 cache lines. You access an item, the CPU caches these 2-3 lines. If shortly after that you access the neighbor one, because your algorithm walked the tree/graph pointers and came to a neighbor item, you save some RAM latency. If these items are 1MB each the win will be very small, but for small structures the performance difference can be huge.

2. MMUs in modern CPUs have prefetcher silicon in them. If the CPU detects you’re doing something resembling sequential access, it will prefetch more cache lines after that.

3. Modern CPUs also have TLBs https://en.wikipedia.org/wiki/Translation_lookaside_buffer Accessing data within the same page (platform-specific, on Windows often 4kb) is faster than accessing random locations because the virtual address->physical address mapping for that page will be in the cache.

4. Last but not least, with small arenas per tree/graph, memory allocations and deallocations will be faster than even jemalloc; from the point of view of the C runtime, you’ll only call malloc/free once per graph, not once per item.

Look at the data in my repository: https://github.com/Const-me/CollectionMicrobench As you see, adding my custom allocator to these standard C++ collections improved performance substantially.

Update: also, with 1 arena per tree, it becomes orders of magnitude faster to copy the tree. You just memcpy and then sequentially walk through the arena adjusting the pointers. Or combine both in a single step.


> advocating Rust's way i.e. not using pointers at all

I'm not sure where anyone could have gotten this impression, because 99.99999% of Rust code uses pointers (via references, which are statically verified and compile down to raw pointers). Even the people making graphs out of indices will be using references in some capacity.


In hot code, base will be cached in a register.

I don’t think so.

There's a very small chance it will be cached. Or it can be cached if the developer did profile-guided optimization. Otherwise it’ll be evicted from these registers pretty often. There aren't that many general-purpose registers, so unless the processing code is trivially simple, like CRC32, the compiler will reuse these registers for something else.


I wrote a lot of the Rust compiler. Slice base and length is almost always cached if it matters for performance.

OK, but wasting 2 general purpose registers on that, out of just 15 available, also matters for performance.

non-ruster question: does rust let you easily define a new type to use for the index, so you're not just using ints everywhere (with all the lack of type safety and clarity that implies)?


how would you use it in practice? i'm guessing, but seems like it would be somewhat heavyweight. does anyone end up doing this, or just using raw ints?

thinking in comparison of something like go, where you'd just do

> type NodeIndex int


Newtypes are actually zero-overhead abstractions; prominent examples are the structs Instant and Duration from std::time: https://doc.rust-lang.org/std/time/index.html
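
For the index case it can be as small as this (NodeIndex is an invented name, mirroring the Go example above):

  // Same size and representation as the u32 it wraps; the wrapper exists
  // only at the type level.
  #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
  struct NodeIndex(u32);

  struct Node { /* ... */ }

  fn node(nodes: &[Node], i: NodeIndex) -> &Node {
    &nodes[i.0 as usize] // the cast to usize lives in one place
  }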

Or they could just implement integer subtypes like Ada has had since 1983, but it seems that Rustaceans prefer to write boilerplate instead.

Ada's integer subtypes are newtypes, but you don't need integer subtypes to have newtypes. Furthermore, newtypes are applicable to more than just integers; e.g. one might newtype String to represent an HTTP header.

That's incorrect. Ada has subtypes and new types for scalars and they behave differently. For example, a new integer type is:

type MyInteger is new Integer;

declares a new integer type that needs to be explicitly converted to any other integer type. You can also declare a new range integer type as in

type MyPositive is new Integer range 1 .. Integer'Last;

But you can also declare integer and float subtypes:

subtype Day_Number is integer range 1 .. 31;

or (a bit pointless)

subtype MyInt is integer;

The difference between new types and subtypes is that no operations of the base type are defined for the new type since it's an entirely new type. A subtype on the other hand allows the operations of the base type, and their range is checked at compile time if possible and runtime if necessary.

In addition to this, Ada also has modular integer types, real types, floating point types, fixed point types, and decimal types in the numeric type system and all of them can be subtyped.

Of course, you can create a new type for any other type in Ada as well, but that's not what I meant when I was talking about integer subtypes.


1. Rust is such a godsend for concurrent programming that any pain associated with lack of GC is worth it. We also have GC like tricks to make it not that bad: https://docs.rs/crossbeam-epoch/0.6.0/crossbeam_epoch/

2. I agree on this one. The basic solution is "well, instead of a pointer, use an index into a vector", which has some correctness advantages (but fewer than "normal" Rust does), but generally a bit of extra programmer pain.

3. See 2.

4. The compiler takes good enough care of you during things like this that it's basically painless.


#4 highly depends on a project size. On big projects, some of these decisions might become effectively one way doors.

I've been struggling a bit with these issues on a project of ~70k lines. I cannot even imagine what the refactoring would look like if we had, let's say, 1 million LOC.

To be fair, though, we use Rust the way it wasn't specifically designed for (large "enterprise" software, think Java-like enterprise).

I think, potentially, Rust could offer a much better story for this kind of software (assuming we are not mad and the issues we are facing are not because we are doing something completely wrong :) ). In my opinion, the key thing would be to allow building "bridges" between pieces of the system which are "ownership-incompatible", so your decisions around ownership are not "one way doors" anymore (at the cost of a translation / adapter layer).

Some random things which I think would be helpful:

1. Better self-referential structs, to allow going from "owned A + borrowed B" into "fully owned A+B" (rental crate helps here, though). Basically, hiding lifetimes in scenarios where you cannot easily change the original data structure to "own".

2. GATs. Honestly, this one is my speculation, but it seems like certain patterns which are hard to express now (abstraction of a "mutable reference", for example) would be possible with GATs. In our case, this would allow us to bridge the gap between the "trait object" world and the "parametric over trait" world. The issue I was having is that it is hard to express "mutably borrow from self" with traits (this is something similar to the issue the "streaming iterator" crate solves). I was able to hack something using arbitrary self types, but it's quite... hacky.

3. A stable(r) ABI for trait objects. Again, purely my speculation, but it would allow going back from the "trait reference" world into the "trait object" world. I won't go into details here, but trait objects want to "borrow" from something and that is not always easily possible (think of that favorite vector+indices data structure) -- being able to "fake" those borrows would be nice (maybe).

Issues #2 and #3 specifically happen around deciding on a data structure: regular structs have one set of tradeoffs, vectors with indexes another. In big enterprise software, I would like to have the option to use whatever works in a particular spot and still have it API-compatible with the rest of the system.
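For point 2, the rough sketch mentioned above -- this is approximately the "streaming iterator" shape that GATs make expressible as a plain trait (toy types, not our actual code):

    // With a generic associated type, each item can borrow mutably from
    // the iterator itself -- "mutably borrow from self" per call.
    trait StreamingIterator {
        type Item<'a> where Self: 'a;
        fn next(&mut self) -> Option<Self::Item<'_>>;
    }

    // A toy implementor that hands out mutable windows into its own buffer.
    struct Windows {
        buf: Vec<u8>,
        pos: usize,
    }

    impl StreamingIterator for Windows {
        type Item<'a> = &'a mut [u8] where Self: 'a;

        fn next(&mut self) -> Option<Self::Item<'_>> {
            if self.pos >= self.buf.len() {
                return None;
            }
            let start = self.pos;
            self.pos += 1;
            Some(&mut self.buf[start..])
        }
    }

    fn main() {
        let mut w = Windows { buf: vec![1, 2, 3], pos: 0 };
        while let Some(window) = w.next() {
            window[0] += 10; // each window borrows from `w` only for this iteration
        }
        println!("{:?}", w.buf); // [11, 12, 13]
    }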


> #4 depends highly on project size. On big projects, some of these decisions might become effectively one-way doors.

The transformations between T, Box<T>, Rc<T>, Arc<T>, etc. are mechanical, so I expect someone will write a refactoring tool that makes a giant PR for you automatically. (Subject to certain limits, like if you're actually cloning the ref-counted pointer, it's indeed harder to go back.) Would that satisfy your need?
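To illustrate what "mechanical" means here, under made-up names (not from any real codebase): the change is mostly wrapping at construction sites and cloning the handle instead of moving the value.

    use std::rc::Rc;

    struct Config {
        verbose: bool,
    }

    // Before: the value is moved around by ownership.
    fn use_config(cfg: Config) -> bool {
        cfg.verbose
    }

    // After: wrap in Rc at the construction site; callers clone the
    // handle (a cheap pointer copy) instead of moving the value.
    fn use_shared_config(cfg: &Rc<Config>) -> bool {
        let handle = Rc::clone(cfg);
        handle.verbose
    }

    fn main() {
        println!("{}", use_config(Config { verbose: true }));
        let shared = Rc::new(Config { verbose: false });
        println!("{}", use_shared_config(&shared));
    }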

> To be fair, though, we use Rust in a way it wasn't specifically designed for (large "enterprise" software, think Java-like enterprise).

IMHO, this is a valid use case for Rust. I'm not saying everyone should stop using Java (in some cases I think it's significantly faster to write) but Rust has some strong performance advantages and no data races in safe code.


>Would that satisfy your need?

It's not always possible to change the data structure -- different ways of modeling data have different trade-offs. So, for me it is more about not having to make a choice than about tools that will help you change your mind.

Also, it could be something like a structure coming from a third-party crate that uses borrowing, which you want to stick into an Arc of some sort. Or put it (with the thing it borrows from) into a lifetime-less struct, so you don't have to care about these lifetimes.

>IMHO, this is a valid use case for Rust.

I very much hope so :)


> It's not always possible to change the data structure -- different ways of modeling data have different trade-offs.

I agree with "different ways of modeling data have different trade-offs", but I don't understand how that leads to "it's not always possible to change the data structure". I revisit trade-offs all the time.

Could you explain? I might need a concrete example.

> Also, it could be something like a structure coming from a third-party crate that uses borrowing, which you want to stick into an Arc of some sort. Or put it (with the thing it borrows from) into a lifetime-less struct, so you don't have to care about these lifetimes.

Yeah, certainly the refactoring becomes harder (maybe implausible to do automatically) when you can't change both sides in one PR, and when you have to convince someone else to change their interface / bump the major version. It still can be done (partially?) by hand at least; it's just a matter of cost/benefit.


>Could you explain? I might need a concrete example.

You might want different trade-offs in different places.

Like, in our case, the conflict is between three different representations:

1. Typed Rust structs

2. Vectors with indexes

3. Untyped structs (a HashMap of strings to values, essentially)

None of them covers 100% of the use cases we have (though we are also not sure whether these are the use cases we will have a year from now, or three years from now), and some parts of the system need to work with all of them.

>Yeah, certainly the refactoring becomes harder (maybe implausible to do automatically) when you can't change both sides in one PR, and when you have to convince someone else to change their interface / bump the major version. It still can be done (partially?) by hand at least; it's just a matter of cost/benefit.

One case was Transaction from the postgres crate, which uses a lifetime. But I want to stuff it into an Arc. That would be possible if Transaction itself used Arc instead of borrowing, but there are about zero reasons for them to change the API that way.


I think the typical way to move from one lifetime to another, especially for a borrowed object, is copying?

Cloning is not always possible (performance reasons, non-cloneable data, etc.) and would not necessarily remove the lifetime (for example, it could be a struct, defined somewhere else, with a lifetime parameter).

The other posters have sorta said this, but the key is Cargo, and strong generics. I don’t write these things, I use a package where someone else has written them. The difficulty of writing this stuff does not impact my day-to-day getting stuff done. I don’t write Hashes in Ruby, I don’t write HashMaps in Rust.

I'd just like to add, most engineers aren't writing data structures all day. There are certainly some sharp corners, some of these Rust will help you with though: re: multithreaded algorithms. Robust libraries are helpful also.

I'm a relatively new Rustacean but surely that statement is hyperbole -- one of Rust's only drawbacks is its learning curve.

That statement more reflects how many feel (myself included) after working with Rust for a little while, internalizing the beneficial nature of the extra pain (being more strict about how you pass around memory, etc.), and getting used to being more explicit. Also, other languages have difficulties that eclipse Rust's and are more endemic, so there's probably some consideration of that too.


It’s not really hyperbole; I don’t find Rust particularly painful.

Heck, I used to say “rust will never be a good choice for web apps” but by now I’ve written several. Times change!


Hey I mean this in the nicest way, but I don't think you can think in an unbiased manner, your skill/involvement with rust is just too high.

I read and enjoy a bunch of your articles when it gets posted here and in r/rust and you've been doing rust[0] too long to reasonably be in touch with what newcomers face IMO.

To summarize, thanks for all your contributions to rust -- I still think I'm right about it being a tiny bit hard for newcomers :)

[0]: https://words.steveklabnik.com/five-years-with-rust


I never said rust is easy for newcomers. I said that once you develop some skill, it’s not particularly onerous.

(And, I interact with beginners all the time; I know that they struggle.)


You can down vote this comment all you like, but it's just Steve's opinion, and you can't be wrong about your own opinion. :)

However, this is, anecdotally, what people say after they start using rust heavily.

I've heard it from quite a lot of people.

It's like turning on all the jslint/tslint's as hard errors.

Yes, it's painful to start with... but, as you relax, and let the compiler do the hard work of checking things, you can confidently write code without worrying about a certain entire set of domain concerns, because the tooling is taking care of it.

Sure, I'd love it if there was an 'alt-enter, fix this' for Rust errors in [my editor of choice for Rust], and no one is saying, 'oh hey, Rust is super easy to learn!'.

...but more and more (as the ecosystem matures) rust is being used to Get Stuff Done, not just to write little toys; and the people using it to Get Stuff Done aren't complaining about how hard it is, they're <3'ing it.

(Of course, I disagree on the web-apps front; I feel like that's still very painful with the 15+ frameworks out there fighting / breaking changes / being abandoned; but at the very least it's starting to look like a few plausible stable options are emerging; and I admit this is survivorship bias, where the people who didn't like it gave up before they started to get productive... but, I think the idea that 'rust is hard always' isn't true at all)


Oh yeah, there’s still plenty of work to do. But it’s far closer than what I previously imagined was possible, that’s all I’m saying. You can see my HN comment history change over the years.

Hey just to add one more to the 15 web frameworks you've seen -- I've been using actix_web[0] but recently heard about tower[1] and want to give it a try very badly

[0]: https://actix.rs/

[1]: https://github.com/carllerche/tower-web


> In fairness precious resource cleanup (file descriptors, etc) is easier without a GC.

Not really, as proven by languages like Mesa/Cedar, Modula-3 or more recently D, Swift, ParaSail or Chapel.

Somehow I think many CS degrees fail at teaching a proper history of programming languages.

Just because there is a GC involved doesn't mean that it is the only language feature available for resource management.

As for the other points, that is why I would advocate the use of other ML-like compiled languages such as Swift, OCaml, or Haskell, if making use of a GC is not an impediment for the application being deployed.

If memory management is an absolute no-go, e.g. MISRA, High Integrity, DOJ certifications, real-time kernels, high-performance graphics, then Rust is a very good tool.


Precious resource management is easier without pervasive sharing and with (modern) C++ or Rust-style ownership. Most GC'd (or RC'd, like Swift) languages end up attaching resources to their heap object type, with the free sharing that implies. Even in a reference-counted implementation, code can squirrel away a strong pointer to keep a resource alive beyond its obvious syntactic region, or keep trying to access it after it has been invalidated.

Those languages do offer some tools and conventions that help, but they're typically opt in.
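To make the ownership point concrete, a small sketch (the file name is made up):

    use std::fs::File;
    use std::io::Read;

    fn read_config() -> std::io::Result<String> {
        let mut f = File::open("config.toml")?; // hypothetical path
        let mut contents = String::new();
        f.read_to_string(&mut contents)?;
        Ok(contents)
    } // `f` is dropped here and the file descriptor is closed,
      // including on the early-return error paths above

    fn main() {
        match read_config() {
            Ok(s) => println!("read {} bytes", s.len()),
            Err(e) => println!("no config: {}", e),
        }
    }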


> or more recently D, Swift, ParaSail or Chapel.

Gotta love Python's 'with' statement too, not that you'd necessarily choose Python where resources are precious, but it depends on what the resources are...


1. As your other reply said, crossbeam is Rust's gold-standard for this.

2. Make the forward pointers Rc's or Arc's, then use the downgrade() method to construct the back-pointers as Weak references (sketch after this list).

3. Array-based ring buffer instead, perhaps?

4. Like any refactoring, the compiler errors guide you to the places that you haven't yet changed.
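On point 2, roughly what that looks like (a toy Node type, similar in spirit to the examples in the standard library docs):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        value: i32,
        parent: RefCell<Weak<Node>>,          // back-pointer: weak, so no cycle leak
        children: RefCell<Vec<Rc<Node>>>,     // forward pointers: strong
    }

    fn main() {
        let root = Rc::new(Node {
            value: 0,
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(vec![]),
        });
        let child = Rc::new(Node {
            value: 1,
            parent: RefCell::new(Rc::downgrade(&root)), // the back-pointer
            children: RefCell::new(vec![]),
        });
        root.children.borrow_mut().push(Rc::clone(&child));

        // Upgrading the Weak yields Option<Rc<Node>>: None if the parent
        // has already been dropped, so no dangling back-pointers.
        if let Some(p) = child.parent.borrow().upgrade() {
            println!("child {} has parent {}", child.value, p.value);
        }
    }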


> 4. Any sort of refactoring that may adjust ownership. E.g. going from T to Rc<T>. GC requires fewer choices so these refactorings are easier.

This pain is real. I've been using Rust for four years now, and it hasn't gotten any easier. And unfortunately, the best strategy I have for refactoring is still "start refactoring; then see what issues come up and if I have to revert".


In a lockfree data structure, how do you ensure GC stalls never happen in the real-time code?

I write a lot of lockfree C++ that requires 100 nanosecond-magnitude latencies 100% of the time. But in doing so, my algorithms are always implemented with a cache coherent wait-free allocator pool as well.

I don’t see how you can achieve the same thing in a GC language without a lot of manual tuning or a similar memory pool design.


Multiple options are possible depending on the language.

1 - Disabling the GC altogether during access to the data structure

2 - Ensuring that the GC won't run during the region (GC.TryStartNoGCRegion() in .NET or a real-time thread in Real-Time Java)

3 - Or if the language supports it, keep part of the structure away from the GC, using RAII to keep track of the blocks


If you need sub-microsecond latencies then the requirements can be summarized as "don't allocate, don't deallocate." Now the memory management technique is irrelevant, because the goal is to avoid memory management at all costs.

Yes you'll need a memory pool design in whatever language. But most GC'd languages only collect at allocation sites, and so if you don't allocate you won't trigger a GC.
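A hedged sketch of what "don't allocate on the hot path" tends to look like, regardless of language (names made up):

    // Preallocate everything up front; the hot path only pops and pushes
    // existing buffers, so it never touches the allocator.
    struct Pool {
        free: Vec<Vec<u8>>,
    }

    impl Pool {
        fn new(count: usize, buf_len: usize) -> Pool {
            Pool { free: (0..count).map(|_| vec![0u8; buf_len]).collect() }
        }

        fn checkout(&mut self) -> Option<Vec<u8>> {
            self.free.pop() // no allocation here
        }

        fn give_back(&mut self, buf: Vec<u8>) {
            self.free.push(buf); // returned for reuse, not freed
        }
    }

    fn main() {
        let mut pool = Pool::new(8, 4096);
        if let Some(mut buf) = pool.checkout() {
            buf[0] = 42; // ... latency-critical work with the buffer ...
            pool.give_back(buf);
        }
    }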


Non sequitur, but in my experience the parent pointer in a tree can almost always be passed as an argument in whatever recursive function is traversing the tree.

This doesn't work for a mutating traversal in general (although it can work if the tree structure is split from the data being stored, so only a subset of the parent needs to be passed into the recursive calls).

I have read The Rust Programming Language from start to finish and afterwards decided to use Go instead because Rust does not have a garbage collector. If somebody added a GC to it, including syntactic sugar that doesn't make it feel 'glued on', then I'd be happy to use it.

Something else ultimately turned me away from investing more time into it, though: Chapter 17 of the book, which spends a great deal of time trying to advertise why its OOP support is so impoverished. In comparison, Donovan & Kernighan's The Go Programming Language is much more honest about the (many) misfeatures of Go.

That being said, the already existing libraries and Rust's tooling are excellent, and so I'll surely return to the language. Not because it has no optional GC, but despite it.


What did you find dishonest? Chapter 17 was one of the hardest to write.

First of all, I didn't use the word "dishonest", there is nothing dishonest about your book. It is a fine book. :)

What I found problematic was the section on "Inheritance as a Type System and as Code Sharing", especially the paragraph "Inheritance has recently fallen out of favor..." While there is some truth to the claim, you forget to mention that there are also metric tons of highly successful OOP based software libraries out there, that it is perfectly feasible to use inheritance (and multiple inheritance) in good and productive ways, and that the Rust community is still quibbling about how to implement GUI frameworks in a "Rust way" and how to interface to foreign OOP libraries in general because of Rust's OOP limitations. It is a drawback not to have full OOP with inheritance and multiple dynamic dispatch. A non-biased and informed language user may choose Rust despite this limitation, but not because of it.

Another thing I found problematic was the "newtype pattern" pp. 439-441. Whether the wrapper is optimized away or not, the newtype pattern is clearly a cumbersome solution for a language limitation. At least to a beginner who has seen many other languages, this just looks like a horrible workaround for the lack of scalar subtypes, as e.g. Ada has them. To be honest, pp. 440-441 were probably the final reason why I decided to postpone writing software in Rust. (Coincidentally, Ada also allows you to define type aliases that are discussed on the following pages, but in Ada they are normally considered bad style and only used for clearly identifiable abbreviations.)

That being said, I found your book very informative and overall good reading, and I'm planning to come back to Rust once I have the time.


Rust would be a worse language if it had traditional OO support. I sort of proposed such a thing at one point and we ended up backing it out.

Newtypes are better for Rust's purposes than subtypes because they interact better with type inference, which is deeply important, especially when trait matching is concerned. Subtyping makes type inference much more complicated, and in fact a long-term goal in rustc is to remove all subtyping from the typechecker and relegate it to the lifetime pass.


I don't buy this, things like that are not a matter of "perspective".

1. A language that has an optional GC and a borrow-checker is better than a language that only has a GC or borrow-checker.

2. A language that has scalar subtypes with range checking and new types is better than a language that has only new types and no range checking.

So subtyping is complicated to implement in the typechecker. And? If GNAT can do it, why can't the Rust team do it?

3. A language that has full support for OOP including inheritance and traits (or mixins) is better than a language that has less of these features.

I'm not claiming that every language should offer everything but a general purpose language that wants to be a viable substitute for C++ should at least have its OO capabilities.


> 1. A language that has an optional GC and a borrow-checker is better than a language that only has a GC or borrow-checker.

Disagree. It is worse to have two worlds that are poorly integrated with one another. Dividing the ecosystem in two would be a terrible idea.

> If GNAT can do it, why can't the Rust team do it?

Because Ada isn't Rust?

It's not that we can't do it. It's that typechecking complexity has a cost, and Rust is already operating at near maximum feasible typechecking complexity.

> 3. A language that has full support for OOP including inheritance and traits (or mixins) is better than a language that has less of these features.

No. Not every language needs to have every feature. Every feature has a cost. Not enough people are asking for inheritance to justify its cost at this time.


That's true, but you did say that the other book is "more honest", which to me implies dishonesty. No worries regardless.

> A non-biased and informed language user may choose Rust despite this limitation, but not because of it.

Yeah, I guess this comes down to a difference in perspective; many people do prefer that Rust has no inheritance, and if it did, would choose Rust despite that, not because of it :)

> the newtype pattern is clearly a cumbersome solution for a language limitation.

It depends on your view too; that is, sub-typing has a lot of other problems. Sometimes, this "cumbersome"ness is what you want, because you don't want everything automatically forwarded through. For situations where you do, there has been talk of adding delegation support in some fashion.

Anyway, thank you, this is all good to know :)


Rust will never have a garbage collector in the sense that other languages do. It just would not interoperate well with ownership and borrowing.

One thing to note is that borrowing in Rust is more powerful than escape analysis can ever be in other languages, because in Rust it's in the type system and therefore works well with separate compilation and higher-order functions.

To give an example, suppose we have the following:

    struct A { ... }
    
    fn g(a: &A);

    fn f() {
        g(&A { ... });
    }
And let's say g() is defined in another crate and separately compiled. In Rust, we can safely allocate the instance A in f()'s stack, because we know via the type system that that instance can never escape. But compare the equivalent example in, say, pseudo-Java:

    class A { ... }

    class G {
        public static void g(A a);
    }

    class F {
        public static void f() {
            G.g(new A());
        }
    }
Can we promote A to the stack? Well, it depends. If the compiler can see the source of G.g and prove that A never escapes, then it can. Otherwise, it has to conservatively assume that A could escape.

(Incidentally, this sort of thing is one of the main reasons why the JVM usually uses a JIT: because Java allows you to replace the bodies of classes at runtime via classloaders, you really want to be able to do these kinds of interprocedural optimizations based on the information you know at the time, but reserve the right to back them out if the class bodies change. Only a JIT is able to do this.)

This gets even more difficult when you get to higher-order functions:

    class A { ... }

    interface G {
        void g(A a);
    }

    class F {
        public static void f(G g) {
            g.g(new A());
        }
    }
Can we allocate A on the stack? Well, it depends on whether any possible instance of G could possibly have its argument escape. Java's HotSpot compiler is quite clever here and can actually make assumptions based on the classes that are currently loaded (as a side effect of devirtualization). But Go, for example, will always allocate that A instance on the heap, as far as I'm aware.

This is not a problem for Rust, because the type system ensures that A can always be allocated on the stack in the analogous code:

    struct A { ... }
    
    trait G {
        fn g(&self, a: &A);
    }

    fn f(g: Box<dyn G>) {
        g.g(&A { ... });
    }
Because the signature of the method G.g() ensures that every implementer must not let the instance of A escape, the compiler can soundly place A on the stack. In this way, lifting the escaping behavior of values into the type system is a very powerful technique that allows Rust to go beyond what typical escape analysis can do.

This claim is too strong, because Rust can't reduce heap allocations to stack allocations. Java, Swift, and others can optimize apparent heap allocations into stack allocations. But (in my understanding) Box, Rc, etc. can never allocate on the stack.

I'm confused; in Rust, you use Box when you want a value to escape the function. That's the whole point of Box! Likewise I can't think of any situation where you'd ever want to construct a value in an Rc without that value escaping somehow, because in a single scope you're better off just handing out shared references (I think you may be confusing Rc with RefCell, since they are often used together and RefCell can be useful within a single scope--but note that RefCell never allocates). You have no need to cleverly promote these things to the stack because nobody is using them for data that they don't want to live on the heap; you have to go out of your way to use them! :)

They could allocate on the stack; we just don't implement that optimization yet. That's because, as kibwen said, it would almost never kick in in practice.

It feels like the title abuses the term "garbage collector", which implies something automatic and dynamic. That sounds like calling a bike a "human-powered motorcycle".

However the article is well-written and very informative. Just need to skip the title :-)


Ok, I took a crack at replacing the title with a representative sentence from the article. If anyone can suggest a better title, we can change it again.

"Better" here means more accurate and neutral, and preferably using representative language from the article.


What about

    Rust has a static "garbage collector"
It reflects Klabnik's thesis here, whereas the current "I don't find writing Rust to be significantly harder..." title is probably more controversial.

Ok, we'll give that a try.

This is better than the last one, for sure. Thanks.

This title does not really reflect the entire point of the piece. The point is automatic memory management; the difficulty thing is tangential.

If you'd like to take a crack at a title that is neither misleading nor linkbait, in accordance with the site guidelines (https://news.ycombinator.com/newsguidelines.html), we'd be happy to change it again.

It's hard for us to guess perfectly every time, but an imperfect guess is usually better than leaving a misleading or baity title up.


I don't feel the original title is misleading at all.

But, I also don't think this is a huge deal, given that it's no longer on the front page.


> Historically, there has been two major forms of GC: reference counting, and tracing. The argument happens because, with the rise of tracing garbage collectors in many popular programming languages, for many programmers, “garbage collection” is synonymous with “tracing garbage collection.” For this reason, I’ve been coming around to the term “automatic memory management”, as it doesn’t carry this baggage.

It's fairly confusing to refer to this as automatic memory management. That term already exists to refer to stack variables getting allocated and initialized in C/C++.


I think the comparison is quite apt (and may be deliberate). Automatic memory management for locals in C/C++ uses the scope of a variable to determine when reclamation should occur. Likewise, ownership-based automatic memory management in Rust also simply leverages scope to determine when memory should be freed (though you can pass the memory into other scopes, but at the end of the day it's all just lexical scoping). It's RAII for memory.
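A small illustration of that, including the "pass the memory into other scopes" part (toy types):

    struct Payload {
        data: Vec<u8>,
    }

    fn consume(p: Payload) {
        println!("got {} bytes", p.data.len());
    } // `p` (and its heap buffer) is freed right here, at the end of the
      // scope that ended up owning it

    fn main() {
        let payload = Payload { data: vec![0u8; 1024] };
        consume(payload); // ownership moves into consume(); nothing is freed in main()
        // `payload` can no longer be used here; the compiler enforces that.
    }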

Just out of curiosity, how many rust users have run into memory fragmentation problems with long running processes? I love the performance and low memory profile of deterministic/static memory allocation, but without a compactor, I can't see it turning out well with long running processes. Has it turned out to be a problem in practice?

Isn't that one of the reasons Rust compiles with jemalloc? Fragmentation avoidance is one of its main features. Otherwise, using arenas or other strategies works very well for avoiding fragmentation. I generally haven't run into issues, mainly because Rust allows me to use a lot less memory, which helps.

Rust code also has less dynamic memory allocation in general. You allocate for dynamically sized data structures and long-lived data, but not like C# or Java where each non-value class type is allocated on the heap (unless the compiler is smart enough to use escape analysis).


It's a good question. As others have noted, jemalloc attempts to mitigate fragmentation, but you can't ever compact in a language where you can hand out pointers willy-nilly, so you would expect fragmentation to strictly increase as uptime approaches infinity. That said, Rust code doesn't tend to be especially allocation-heavy in the first place, and repeatedly allocating/freeing in a hot loop is already something that tends to be avoided as a core tenet of performance tuning, so allocator traffic may not be very heavy in the first place.

I don't have any long-running Rust code myself, but I do know people who have had Rust servers that have been running for upwards of six months and they seem to be pleased at both how little memory it consumes and at how reliable it's been (you don't ever get tremendous uptime if your code isn't robust in the first place :P ).


I'm definitely sympathetic to the argument that by avoiding allocation in the first place that you don't have to worry as much about memory profile.

I allocate and destroy a bunch of arrays in my code (~100MB every minute), so it's not huge but definitely quite a few pages every time it happens. And so far I've got one process that's been running for 6 months. For the most part, whenever I get new data, I take a slice of an old array and append the new data to it to create a new array. It's fast enough, but more importantly it's just so much easier to do it that way and with a compacting GC I don't have to worry about anything.

Of course I don't know how good jemalloc is at avoiding fragmentation, and I don't have the time to rewrite my code (maybe enough time to simulate, not sure). But my code creates a ton of fragment-y garbage, and I would imagine that with Rust I would end up not just translating my code, but changing semantics to mutate in place, just to avoid fragmentation. I guess maybe /r/rust would be the place to ask.


Mostly, what I’ve heard about long-running processes is that, if they don’t have a leak somewhere, they tend to be pretty stable, so compaction isn’t a huge issue.

What purpose does compaction serve in a 64 bit virtual address space on Intel? The processor addresses and caches by cache line and the address space is huge enough to handle most allocations.

Not to say it’s impossible in C/C++, but I’ve only ever seen fragmentation issues in old versions of Java and C# where the runtime repeatedly commits large swathes of contiguous memory. The key differentiator here is the VM’s insistence on contiguous allocations, whereas malloc has no such requirements.


Has anyone used Rust as a replacement for their Go management utilities? I'd like to find something that could be used in a set of different contexts, but using the same language.

I prefer the correct definition of a GC:

> [1]: Garbage collection is simulating a computer with an infinite amount of memory.

Chen even provides the "memory reclamation" definition in his post, but points out that it is incomplete. The article could be made much shorter (although the full read is still very interesting): as Rust has no free() call, Rust simulates infinite memory and is hence garbage collected.

[1]: https://blogs.msdn.microsoft.com/oldnewthing/20100809-00/?p=...


> [1]: Garbage collection is simulating a computer with an infinite amount of memory.

But that definition also applies to virtual memory. On any modern OS, you can simulate a computer with an infinite amount of memory in C and C++ by just never calling free(). It'll even work pretty well for programs with small working sets, as the kernel will start paging out unused memory to swap.

I think any definition of garbage collection needs to specify that it is a form of memory management: it can automatically determine what memory will be used later so that it can recycle memory that will not.


> just never calling free()

So your code would look a lot like it was running under a garbage collector? The OS memory manager is the garbage collector in this case.


No, because the OS isn't specifically reclaiming the memory, and this only works with overcommit and/or swap. The memory isn't actually being reclaimed, it's being paged out. If your application actually begins to use the memory but never uses the data again (e.g., it can't reference it), it will continue to eat memory until the OOM killer kicks in or the system crashes.

Well, unless your program is particularly long lived, it will be reclaimed. There are many tools that could reasonably forego `free` for their entire runtimes.

I presume we're talking about long-running programs, otherwise the conversation isn't really super useful, since for any program, short or long, memory is reclaimed when the program exits (well, at least on any modern OS).

This is what the D compiler does to keep its speed lmao.

Major memory leaks, but who cares it compiles hella fast.


"GC: a simulation of a computer with an infinite amount of memory."

This is not a correct definition.


That definition doesn’t work when presented with a program with a working set that grows arbitrarily large.

I always think that GCs are incomplete in a way. They can reclaim memory in cases such as:

    x = [1, 2, 3]
    ...
    x = []
But not in cases such as:

    x = [1, 2, 3]
    ... (x not used)
Of course, determining if x is used might be uncomputable in general, but in practice it might be computable in a lot of cases.

The article I linked deals with this. GCs typically guarantee cleanup in a scope. With managed languages, this is whenever the runtime decides to run the GC (probably when memory is needed and isn't available). With Rust's RAII this is when the owner goes out of scope (or, for Rc, when the reference count reaches zero), which by default is at the end of the current scope (you can force earlier cleanup with a naked scope if you need[1]).

[1]: https://play.rust-lang.org/?gist=66740557884b17683b868483e52...
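Roughly the shape of that naked-scope trick (my own snippet, similar in spirit to the linked playground):

    fn main() {
        {
            let big = vec![0u8; 10_000_000];
            println!("using {} bytes", big.len());
            // ... work that needs `big` ...
        } // `big` is dropped and its memory reclaimed right here

        // ... lots more work that no longer needs `big` ...
        println!("big is long gone");
    }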


A compiler can determine that a local variable x has no "next use", and insert the x = [] at that point to help garbage collection (or else provide some map of what is live).

https://en.wikipedia.org/wiki/Live_variable_analysis


Yes, but it's too simple. Consider:

    a = { x: [1, 2, 3], y: [1] }
    ... (a.y used, but a.x not used)

This is theoretically solvable, assuming those arrays are by reference.

However, it's just not worth the effort of a significant increase in GC complexity.


I don't understand what you mean. Any GC will reclaim `x` after it is unreachable from a root.

Unreachable is not the same as unused

It could still be reachable, just never read from. Static code analysis can detect some forms of this, but not all e.g.

if (someUnknownCondition) { // access x }


In this case, you cannot free the memory manually either without making your program potentially invalid. It is unreasonable to expect a GC to do runtime control flow analysis, which is also more than you would do in a normal case of manual memory management.

If you have tricky situations where you know that program flow dictates that an expensive but reachable variable is no longer needed, you would insert a "x = nil" or similar in the position where you would otherwise have made a free.

However, this is solely a case of fine-tuned optimization, rather than a case of correctness.


> without making your program potentially invalid

You'd only need the GC to disconnect that local from the stack frame, as anything that escapes would be rooted elsewhere. This can be done by having the compiler/JIT emit a liveness map: instruction pointer ranges indicating which variables are uninitialized, live, or dead.

Otherwise, exactly.

Rust could do this with zero runtime overhead (as the map would be used by borrowck). This is pretty simple to do but generally isn't, precisely because of this: "this is solely a case of fine-tuned optimization."

If you are part of the 0.1% of people who are actually solving one of the 0.001% of problems where this level of control is _genuinely_ important, then use a language that supports this. Assuming that the method is executing for a very long time (>μs), because this is the only time you'd probably care about memory management at this level, any FFI is virtually free.

However, even if you are part of the 0.1% of people who have a legitimate concern for this, and you are working on a 0.001% problem, why on earth are you doing so many allocations and frees? Against your own better judgement? If you care enough to worry about when memory is freed, you should know enough that you should re-use memory as much as possible without involving the allocator.


> This can be done by having the compiler/JIT emit a liveness map: instruction pointer ranges indicating which variables are uninitialized, live, or dead.

I do not see this as improving the situation. I certainly do not see it fixing the case where a runtime condition means that a variable is no longer needed, as branches touching it will no longer be hit.

A JIT could theoretically deal with this, but only if the code is retraced and recompiled after the runtime condition changed, which is not generally something you want.


Well, as the person I was replying to points out, this is uncomputable in general (`someUnknownCondition` might be brute-forcing all possible mathematical proofs to see whether the Riemann hypothesis is true), and I would expect optimizing compilers to already handle this in the cases where it is feasibly computable. Is that not the case?

I'd suggest that uniqueness typing is a special case of refcounting..

[flagged]


> Downvoters either explain yourself or go fuck off.

Oh, please let's not do this.

https://news.ycombinator.com/newsguidelines.html


> Go knows that doPerson can't modify person so it is free to allocate the variable on the stack

Escape analysis does not care about modification. It only cares about whether the scope of the variable may end up expanded past the end of the function. Whether that scope is read or write is not relevant.

Causes of escape would be returning the value from the function, referencing the value in a closure, or using a reference to the value in a context with a different lifetime (channels, structs outliving the function). Calling a function with an argument as reference is not a cause of escape.

Escape analysis is tricky and fragile. Go often has to bail out to the heap.

Also, you don't really pass around structs by value much in real code, so the point that this is a benefit to GC performance is kind of moot.


True! What matters is that doPerson() doesn't get hold of a reference to the object which could cause escape. Since it doesn't get hold of a reference, it of course can't modify the object either. Had the call been doPerson(&person) it would have gotten a reference and Go would be forced to heap-allocate person. I agree that escape analysis is tricky because it is easier than one might think for objects to escape. That Go is better than Java at imprisoning objects I think is mostly because of different usage patterns between programmers of the two languages.

> I liked the article! But I don't think that Rust has static gc because it would imply that C has it too. I

Could you clarify how C would count?

From my understanding of the article's premise, a key part is whether the memory management is automatic or not. In C, everything on the heap is manual, you have to explicitly free and ensure it lives long enough for all owners and borrowers

C++ adds RAII. Going by the article, you might be able to refer to this as gradual automatic memory management except it only helps with the owners and not borrowers.

Rust has owner and borrow tracking; it just verifies it all at compile time rather than at runtime like traditional GCs (unless you opt in to Rc). This puts it in a weird middle ground where I can see it being referred to as either automatic or manual, depending on how you look at it.
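For the Rc opt-in case, the runtime part really is just a reference count (a small illustration):

    use std::rc::Rc;

    fn main() {
        let a = Rc::new(String::from("shared"));
        let b = Rc::clone(&a);                // count goes 1 -> 2, at runtime
        println!("{}", Rc::strong_count(&a)); // prints 2
        drop(b);                              // count goes 2 -> 1
        println!("{}", Rc::strong_count(&a)); // prints 1
    } // last Rc dropped here; the String is freed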


> From my understanding of the article's premise, a key part is whether the memory management is automatic or not. In C, everything on the heap is manual, you have to explicitly free and ensure it lives long enough for all owners and borrowers

But everything on the stack is automatic, so if any type of automatic memory management counts as a "GC", then C has a GC (stack), paired with an unsafe mode (heap).


> then C has a GC (stack)

IIUC the C11 standards etc actually just say "automatic storage" and stacks are never specifically mentioned.


Ah, yes, I forgot the term.

C goes to great lengths to avoid defining/touching machine details, which is quite ironic considering how it has gained reputation in this day and age as a very low-level language.


Because C has automatic variables. https://en.wikipedia.org/wiki/Automatic_variable For the argument "that's a different kind of memory" see this thread from last week: https://news.ycombinator.com/item?id=18124431 It is entirely possible to write complex C programs using nothing but automatic variables.

This is not garbage collection because these are stack (or statically) allocated. The "garbage" in "GC" are heap allocations.

If we ignore the heap in C it doesn't make it a GC'd language because the concept of GC isn't even defined with no heap.

> It is entirely possible to write complex C programs using nothing but automatic variables

For a very narrow subset of "complex". As soon as a dynamically sized type is introduced, you need the heap.

It's also surprisingly inefficient: suddenly all your buffers (including strings) must be as long as the largest buffer you could imagine you'd need.

Which also introduces artificial limits.


C has VLAs https://en.wikipedia.org/wiki/Variable-length_array Apparently I suck at linking because this was the thread I was looking for: https://news.ycombinator.com/item?id=18124708 As you can see, there is no heap in C so we are free to ignore it.

I pretty much agree with that thread's replies, not much to add.

I didn't know VLAs though (they didn't exist back in the day when I used C). Even though you still have no growable buffers/reallocation (AFAICT), I concede those open up a wider class of complex programs.

Thanks for the pointer (pun intended) on VLAs, will check them out.

Still no garbage collection though (and I agree with you that neither is Rust's).


From the wiki:

> In C, using the storage class register is a hint to the compiler to cache the variable in a processor register. Other than not allowing the referencing operator (&) to be used on the variable or any of its subcomponents, the compiler is free to ignore the hint.

Doesn't sound like a garbage collector to me, by any stretch of the imagination.


The register hint is a bit of a semi-unrelated sidetrack, which you would only ever use in a really hot location when proven necessary by profiling + inspection of the generated code, and the compiler is more often than not smarter than you anyway so it's rarely necessary. It's not relevant to GPs point about regular locals being stored on the stack. See https://en.wikipedia.org/wiki/C_syntax#Storage_class_specifi... for a better overview.

The author explicitly defines his notion of static garbage collection in opposition to what it's like programming in C:

> This also opens up two more things about why I don’t feel that “Rust is too hard and so you should use a GC’d language when you can afford it.” First of all, Rust’s strong, static checking of memory provides me a similar peace of mind to a runtime GC. To make an analogy to types and tests, when I program in a language like C, I have to think a lot about the details of memory management, because I don’t really have anything at all checking my work. Its memory management is completely unchecked, similar to (compile time) types in a Ruby program. But with automatic memory management, I don’t have to worry about this, as the memory management system has my back. I won’t get a use after free, I won’t get dangling pointers.



