Hacker News

Although object lifetimes and borrowing are mentioned, he focuses on pattern expressions, which he's likely to get in any modern language.

Rust has many great qualities and I've been looking for a project to use it on, but it just doesn't seem like the right model for long-running processes with immutable data structures and complex/dynamic/interlinked lifetimes. It seems like dev effort is better spent on functionality than on managing lifetimes, especially given that modern GC is so fast. (And I've seen C++ ports to C# become faster, thanks to the elimination of unneeded copy-constructor calls, among other goodness.)

I can totally see the case for Rust in systems code where lifetime is gated by a system/API call and you want to safely pass buffers around for the duration of the call, for inner-loop game code, for embedded, etc. But since it's en vogue, there are many examples of Rust code where it doesn't seem like the right tool -- but then again, I feel that way about much code written in C++ as well.




Lifetimes aren't a worse version of GC, just like static typing isn't a worse version of dynamic typing. They're different trade-offs.

Rust can express ownership and lifetime of objects statically at compile time, as opposed to dynamically at run time. For beginners this is a new aspect of programming they need to learn, and it adds to the learning curve (same as you need to think about types in C++, instead of winging it in JavaScript), but it has very nice side benefits:

• Deterministic destruction. You don't need to remember to close handles or run cleanups, and everything is freed immediately when its scope ends (rather than "finalized" sometime later). Some languages have `defer` or `withFoo(callback)` to handle these. In Rust it's automatic and more flexible.

• Ownership adds useful information to the program. You know which args will be kept or shared, so you don't get shared mutable state by accident. Libraries don't have to copy their args just in case the caller changes them unexpectedly later. And as a caller you know what each function does to the data you pass in.
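The first point can be sketched like this (a minimal example; `Connection` is a made-up type, and the `Rc<RefCell<...>>` log exists only so the drop order can be observed):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A stand-in for any resource needing cleanup (file, socket, lock...).
// On drop it records its name, so we can observe destruction order.
struct Connection {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Connection {
    // Runs deterministically when the value goes out of scope --
    // no finalizer queue, no close() call to forget.
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let outer = Connection { name: "outer", log: Rc::clone(&log) };
    {
        let _inner = Connection { name: "inner", log: Rc::clone(&log) };
    } // _inner is freed right here, at the end of its scope
    drop(outer); // dropping explicitly also works
    let order = log.borrow().clone();
    order
}

fn main() {
    assert_eq!(drop_order(), vec!["inner", "outer"]);
}
```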

The last point is especially useful for passing buffers around, because you control at all times when you give exclusive or shared access, and how long each part of the program can keep them.
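A sketch of how signatures carry that information (function names here are made up for illustration):

```rust
// Shared borrow: may read the buffer, keeps nothing, changes nothing.
fn checksum(buf: &[u8]) -> u32 {
    buf.iter().map(|&b| b as u32).sum()
}

// Exclusive borrow: may modify in place, gives the buffer back when done.
fn zero_out(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        *b = 0;
    }
}

// By value: takes ownership; the caller can no longer touch the buffer.
fn consume(buf: Vec<u8>) -> usize {
    buf.len()
}

fn main() {
    let mut data = vec![1u8, 2, 3];
    let sum = checksum(&data); // data still fully usable afterwards
    zero_out(&mut data);       // data modified, still usable
    let n = consume(data);     // data moved; using it below would not compile
    assert_eq!((sum, n), (6, 3));
}
```

The caller can tell all of this from the prototypes alone, without reading the bodies.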


> Lifetimes aren't a worse version of GC, just like static typing isn't a worse version of dynamic typing. They're different trade-offs.

This is a completely bogus comparison.

First, static typing is mostly about finding bugs. GC-induced overhead is not a _bug_.

Second, it is theoretically impossible to create an algorithm that checks almost any interesting property of a program in a Turing-complete language in finite time without rejecting some valid programs (Rice's theorem). In most (non-academic) statically typed programming languages, the compiler checks assumptions about the _data_ and doesn't wade into control flow at all. Data is more or less "static" by itself (the types of variables remain the same in most algorithms, with the exceptions covered by ADTs), which is why static typing is practically useful.

Lifetime control is a problem on a whole different level. Roughly speaking, the lifetime of an object _easily_ depends on whether or not some part of the program terminates, so, from the get-go, your static checker has to solve the halting problem, which it can't.

I went through several Rust projects to see how they deal with this. And there are two solutions: 1) sprinkling the code with `unsafe`, 2) resorting to reference counting. GC is safer than both alternatives, and its amortized cost may be lower than that of ARC.

The absolute majority of code written doesn't have hard real-time requirements, and should just rely on GC as a safer and more convenient option.


Lifetimes and ownership are about finding use-after-free and double-free bugs. The fact that they enable automatic memory management without tracing-GC overhead is just a cherry on top.

For a long time languages were stuck in a false dichotomy: either try to be 100% safe (and pay the cost of a VM/sandbox/GC), or say "halting problem" and give up. Rust resolved this dilemma by allowing safe abstractions around unsafe blocks. It works like a charm -- I get the speed I got from C, but don't have to use gdb.
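A small sketch of the idea -- a function with a safe signature whose body uses `unsafe`, roughly what the standard library's `split_at_mut` does internally; callers get two non-overlapping `&mut` views without ever writing `unsafe` themselves:

```rust
// Hand out mutable halves of one slice. The borrow checker can't prove
// the two &mut regions are disjoint, but we can, so the unsafe block is
// wrapped in an interface that is impossible to misuse from outside.
fn split_halves(v: &mut [i32]) -> (&mut [i32], &mut [i32]) {
    let mid = v.len() / 2;
    let len = v.len();
    let ptr = v.as_mut_ptr();
    unsafe {
        // Sound because [0, mid) and [mid, len) never overlap.
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut v = [1, 2, 3, 4];
    let (a, b) = split_halves(&mut v);
    a[0] = 10;
    b[0] = 30;
    assert_eq!(v, [10, 2, 30, 4]);
}
```

The unsafety is contained in one auditable spot instead of leaking into every caller.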

There's option 3): use a program architecture that has clearer ownership and less shared mutable state. The borrow checker doesn't like doubly linked lists or mutual parent-child relationships, so instead you use different containers and DAG structures.

And when you still have to have shared ownership without a clear scope, you use refcounting. The nice thing in Rust is that (unlike in C++/ObjC/Swift) it doesn't always have to be atomic in multi-threaded programs, and objects can be borrowed and used without touching the refcount.
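For instance (a minimal sketch; `read_len` is a made-up helper): a plain `Rc` count is non-atomic, and functions that only borrow never see the refcount at all:

```rust
use std::rc::Rc;

// Takes a plain &str borrow; it neither knows nor cares that the
// caller's string happens to be shared via Rc.
fn read_len(s: &str) -> usize {
    s.len()
}

fn main() {
    // Non-atomic refcount: cheap, single-threaded (Arc is the atomic one).
    let config = Rc::new(String::from("verbose=true"));
    let other_owner = Rc::clone(&config); // this bumps the count...
    assert_eq!(Rc::strong_count(&config), 2);

    let len = read_len(&config); // ...but borrowing does not
    assert_eq!(Rc::strong_count(&config), 2);
    assert_eq!(len, 12);

    drop(other_owner);
    assert_eq!(Rc::strong_count(&config), 1);
}
```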


What false dichotomy?

Modula-3, Mesa/Cedar, Component Pascal, System C#, ...

You get the productivity of automatic memory management, RAII, stack and global memory allocation and unsafe blocks just like Rust, with C's speed as well.


> Absolute majority of the code written doesn't have hard real-time requirements, and should just rely GC as a safer and more convenient option

Yes.

Most Rust code deals with "trivial" lifetime issues -- not trivial in the sense of how it gets used (usage can still be in a "serious" context, such as systems programming), but in how the lifetime is controlled. Only certain models lend themselves to being expressed as "borrowing from an owner", and ref-counting will break down with general object graphs.

Most garbage is gen-0, which gets recycled with little overhead; when scavenging is needed, it's fairly quick and tunable these days, and can optionally take advantage of multiple cores.

Another "trick" I see in Rust code is to use a symbol table of sorts to represent object graphs via names and lookups, but that's just inventing your own, slower version of object references -- and then there's the question of who owns the symbol table, so in the general case you're only passing the buck.
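The pattern being described looks roughly like this (all names made up for illustration):

```rust
use std::collections::HashMap;

// Instead of Node -> Node references, the graph owns every node and
// edges refer to nodes by name -- a DIY symbol table.
struct Node {
    neighbors: Vec<String>, // "references" are lookups, not pointers
}

struct Graph {
    nodes: HashMap<String, Node>,
}

impl Graph {
    // Every edge traversal pays a hash lookup -- the "slower version of
    // object references" in question. A stale name is just a miss.
    fn neighbor_count(&self, name: &str) -> usize {
        self.nodes.get(name).map_or(0, |n| n.neighbors.len())
    }
}

fn main() {
    let mut nodes = HashMap::new();
    nodes.insert("a".to_string(), Node { neighbors: vec!["b".into(), "c".into()] });
    nodes.insert("b".to_string(), Node { neighbors: vec![] });
    let g = Graph { nodes };
    assert_eq!(g.neighbor_count("a"), 2);
    assert_eq!(g.neighbor_count("gone"), 0);
}
```

And someone still has to own the `Graph` itself, which is the passing-the-buck part.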


You don't spend developer time "thinking" about lifetimes as you write Rust; it becomes totally automatic, no more than you spend time "thinking" about functions in C#. The benefits you get far outweigh the negligible overhead: things like data-race prevention across threads, in addition to memory safety, at incredibly low cost. Modern GCs are faster, but they're by no means free -- and totally unnecessary if you just give the compiler the information it needs to do its job statically, instead of continuing to strap more rockets to the rocket-powered GC horse. It's the fastest horse, but it's still a horse, not a car.


> Modern GCs are faster but they're by no means free, and totally unnecessary if you just give the compiler the information it needs to do its job statically

These are not mutually exclusive.

(1) A GC'd language can still do static analysis at compile time or JIT time to avoid GC altogether when possible. (2) GC'd languages can have RAII/static-scoping capabilities, such as immutable value-typed (non-ref) structs that require initialization, and using{}/after blocks for scoped lifetimes. (3) GC'd languages can have owned-object semantics, like C#'s Span<>, to avoid GC. (4) You can use unsafe code and pointers in GC'd languages as well.

So in a GC'd language you can still program with an eye toward efficiency for any code that shows perf issues, using many of the same constructs you do in Rust -- but you can fall back on GC when lifetime issues are complex or hard real-time perf is not a concern.

Again, I'm not bad-mouthing Rust at all -- I'd far prefer it to C++ in the context where the latter is appropriate.


Try writing a sizeable Gtk-rs application to see how automatic it gets if you don't create your own clone macro to deal with all the Rc<RefCell<>> instances, like the Gtk-rs samples show.


I’m not sure a single example of a C wrapper that wasn't idiomatically designed is a good measure of the potential of a language. It's like pointing to the source of any given STL implementation as quintessential C++ coding style. RefCells tend to be considered an anti-pattern AFAIK, since they defer invariant checking to run time. It's meant as a hack/patch for when you can't actually design things idiomatically for some reason -- such as, in this case, FFI to a C programming model.


I can point to other UI examples, like game engines written in pure Rust, which use array indexes with clocks checked on access, so that a use-after-free through an outdated index is caught instead of corrupting memory.
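That scheme (often called generational indices; crates like `slotmap` refine it) can be sketched roughly like this -- the types here are made up:

```rust
// A handle is an index plus the generation ("clock") it was issued at.
#[derive(Clone, Copy)]
struct Handle {
    index: usize,
    generation: u32,
}

struct Arena<T> {
    slots: Vec<(u32, Option<T>)>, // (current generation, value)
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { slots: Vec::new() }
    }

    fn insert(&mut self, value: T) -> Handle {
        // Sketch-level simplification: always append, never reuse slots.
        self.slots.push((0, Some(value)));
        Handle { index: self.slots.len() - 1, generation: 0 }
    }

    fn remove(&mut self, h: Handle) {
        if let Some(slot) = self.slots.get_mut(h.index) {
            if slot.0 == h.generation {
                slot.0 += 1; // advance the clock: old handles go stale
                slot.1 = None;
            }
        }
    }

    // A stale handle's generation no longer matches, so a would-be
    // use-after-free becomes a safe lookup failure instead of UB.
    fn get(&self, h: Handle) -> Option<&T> {
        let slot = self.slots.get(h.index)?;
        if slot.0 == h.generation { slot.1.as_ref() } else { None }
    }
}

fn main() {
    let mut arena = Arena::new();
    let h = arena.insert("player");
    assert_eq!(arena.get(h), Some(&"player"));
    arena.remove(h);
    assert_eq!(arena.get(h), None); // outdated index detected
}
```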



