
Why the developers who use Rust love it so much - cyber1
https://stackoverflow.blog/2020/06/05/why-the-developers-who-use-rust-love-it-so-much/
======
nickm12
I love this quote from a friend of mine who was learning Rust:

"It's hard but I love it. Dealing with the compiler felt like being the novice
in an old kung fu movie who spends day after day being tortured by his new
master (rustc) for no apparent reason until one day it clicks and he realizes
that he knows kung fu."

~~~
WJW
Sounds a lot like programming in Haskell. Applying higher-order functions or
condensing a parser into a oneliner just makes me _feel smart_ in a way that
generating a new Rails controller never does.

My personal theory is that a lot of Rust and Haskell (and many other
"advanced" languages) usage is caused by people chasing that feeling. The
publicly expounded benefits like memory and type safety are true, but the real
reason is that people want to use the language for their own reasons and "I
really like it" usually does not have enough weight in a business context.

~~~
gameswithgo
Unlike Haskell, Rust offers a unique combination of safety and runtime
performance, which does provide real business value in some domains.

~~~
UK-Al05
Algebraic data types, monads, and things based around those have tons of
business value for writing correct business logic for non-performance critical
code. You get that in haskell.

Though admittedly you can get them in other functional languages that are not
quite as different, like OCaml.

~~~
jolux
Monads are so cumbersome to implement and consume in OCaml that they are not
very commonly used. Jane Street Core has monadic abstractions for things like
async but it’s a pain without syntax extensions.

~~~
lpage
That was true until 4.08. OCaml now supports monadic let bindings, and more
generally, user defined let bindings [1].

You get the benefits of ppx_let (Jane Street) but in a more general and
syntactically nicer form. I still wouldn’t call it as nice as do notation in
Haskell but it makes monads highly usable in OCaml without the need for an
external dependency.

[1]: https://github.com/ocaml/ocaml/pull/1947

~~~
jolux
Wow, I completely missed this! Nice.

------
atoav
Coding in Rust isn't easy, but where it is hard, it is hard in a good way. It
is like being on a journey with a good friend who deeply cares about not
letting you shoot yourself in the foot, and who explains why without judging
you.

Even if Rust were wiped off the face of the earth tomorrow, the things I
learned from it have definitely made me a much better programmer.

~~~
logicchains
> explains why without judging you

Yep, the compiler doesn't judge you, that role is left to the community when
they discover you posted a library on Github that uses more unsafe code than
they'd like.

~~~
uryga
for anyone not following Rust, the drama around the (popular) Actix web
framework was a prominent instance of this. iirc people opened issues and PRs
re: the project's (significant) usage of `unsafe`, but the maintainer (Nikolay
Kim) wasn't receptive; then some folks got nasty about it on reddit,
escalating into full-blown drama. sadly, it ended with Kim posting he's "done
with open source" and quitting the project.

a more detailed (and probably more accurate) account: [A sad day for
Rust](https://words.steveklabnik.com/a-sad-day-for-rust)

~~~
chrismorgan
By no means is it _only_ about Actix; that’s just one of the more notable
times that something like that has happened. The eminently reasonable
criticism boils down to baulking at people undermining Rust’s safety
guarantees with _demonstrably wrong_ unsafe code, while publishing said code
in such a way that you’re suggesting that others use it ( _i.e._ it’s not just
private code). It is also then often taken further to a general unease at
gratuitous use of unsafe code, which I consider fairly reasonable because it’s
so hard to get right (it’s unsafe for a reason). Then sometimes a _few_ people
take it beyond what might be considered socially reasonable.
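To make the criticism concrete, here is a hedged sketch (the function names are made up) of the kind of unsound pattern at issue: a safe-looking API wrapping an `unsafe` block whose precondition is never checked, so entirely safe callers can trigger undefined behavior.

```rust
// Hypothetical library function illustrating the criticism: a "safe"
// signature wrapping an `unsafe` block whose precondition it never enforces.
fn byte_at_unsound(s: &[u8], i: usize) -> u8 {
    // UNSOUND: `get_unchecked` requires i < s.len(), but nothing checks it,
    // so an out-of-bounds call is undefined behavior, not a clean panic.
    unsafe { *s.get_unchecked(i) }
}

// The sound version makes the failure case explicit instead.
fn byte_at(s: &[u8], i: usize) -> Option<u8> {
    s.get(i).copied()
}

fn main() {
    let data = b"rust";
    // Only called with an in-bounds index here; the danger is that the
    // compiler cannot stop other callers from passing a bad one.
    println!("{} {:?}", byte_at_unsound(data, 0), byte_at(data, 99));
}
```

Publishing the first function in a library is what turns a private footgun into an ecosystem problem.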

~~~
Dowwie
It is a mistake to marginalize what a small number of people achieved. An
organized team successfully "cancelled" the Actix project and smeared its
author after repeated attempts-- three distinct episodes over 12 months. This
was a campaign that spanned social media and github.

It is worth noting that to this date, none of the members of the anti-actix
effort have contributed towards a viable alternative, nor have they attempted
to resolve their concerns in the actix ecosystem. It turns out that the
patrons who complained about free beer never intended to brew their own.

~~~
chrismorgan
I never followed the matter _closely_ , but from the parts that I _did_ follow
or investigate, your comment doesn’t feel particularly accurate.

• I don’t believe there was any organised team; it arose organically.

• In each case the complaints were of concrete problems that incorrect usage
of unsafe caused. _After that_ , people did then tend to pile on and baulk at
other unsafe code that was probably OK, _because the author had been proven
untrustworthy in using unsafe code_. (“Unsafe” says “trust me, it’s OK”, and
that trust had been broken.) And there _were_ one or two people that made it
more direct personal attacks—but probably only one or two.

• Your final paragraph just seems _completely_ wrong. Various of those that
complained _did_ offer alternatives, _some of which were turned down_. And
people _did_ go with viable alternatives, switching to competitors to Actix.
Even apart from all that, if you can prove that by some metric a piece of
software is bad, why would the burden lie with you to fix it? ( _Especially_
if your patches are rejected.) If the problem won’t be fixed, recommending
that others avoid it seems perfectly reasonable. You’re presenting a common
logical fallacy: that you can’t criticise something unless you can provide
an alternative. I don’t need to be able to build a better bridge to be able to
point out that it’s falling down.

Feel free to correct me if I’m wrong, but I intend to engage no further on
this. The points have been hashed out before and there is nothing new to say;
things just tend to get heated. Have an enjoyable day. :)

~~~
Dowwie
Have a great weekend

------
ncmncm
Money quote: "Rust benefits here that very few people are being forced to use
Rust."

Probably more people pick up C++ for the first time, in any given week, than
the total who use Rust in production today.

Rust also benefits from the limited historical baggage that comes with being
new and incompatible. Unlike Java, which was in the same position, Rust
adopted very few old mistakes, and especially unlike Java made few new ones.
But as the language approaches industrial maturity (possibly within 10 years)
early mistakes will become evident, and cruft will be seen to accumulate.

Rust designers have consciously chosen to keep the language's abstraction
capacity limited, which makes it more approachable, but reduces what is
possible to express in libraries. Libraries that are possible, even easy, in C++ cannot
be coded in Rust. The language will adopt more new, powerful features as it
matures, losing some of its approachability and coherence. But Rust has
already passed a key milestone: there is little risk, anymore, that it could
"jump the shark".

The language is gunning for C++'s seat. Whether it becomes a viable
alternative, industrially, is purely a numbers game: can it pick up users and
uses fast enough? The libraries being coded in C++ today will never be
callable from Rust.

Go proved that the world will make room for a less capable language (in Go's
case, than Java) if it is simpler. Rust is much more capable than C, Go, or
Java, and the world would certainly be a better place if everybody coding
those switched to Rust. So, my prediction is that Rust and C++ will coexist
for decades. The most ambitious work will continue to be done in C++, but a
growing number will have their first industrial coding experience in Rust
instead of C, and many will find no reason to graduate to C++.

~~~
acln
> and the world would certainly be a better place if everybody coding C or Go
> switched to Rust

Perhaps. Let's engage in a thought experiment. Sorry for moving slightly off-
topic, but the line I quoted made me think about this.

Someone fashions a magic wand, which you can wave over C, C++, and Go programs
/ libraries to instantly re-materialize them as idiomatic Rust, while
preserving all of the "good" output they produce, and simultaneously removing
the "bad": all memory safety and data race related bugs they exhibit.

You get to use this magic wand on any program you like, instantaneously. You
do so, creating linux-rs, glibc-rs, chromium-rs, etc. in the process. You
cargo build all of this new software and replace the old C / C++ versions with
it, in-place.

In the brave new Rust-powered software world, does your day-to-day computing
experience change? Is it materially better?

Speaking for myself, the answer is "no", unfortunately. Perhaps this message
is coming from a place of frustration with my own day-to-day computing
experience. Most software I use is much more fundamentally broken, in a way
that doesn't seem to be dictated by the programming language of choice. The
brokenness has to do with poor design, way too many layers of absolutely
incomprehensible complexity, incompatibility, and so forth. I don't remember
the last time I saw a Linux kernel oops or data corruption on my machine, but
I am waiting _seconds_ to type a character into Slack sometimes.

I like most of the ideas behind Rust (I don't like the language itself and
some of the choices the authors made, but that is another discussion).
However, I think there is only so much you can fix with the shiny and sharp
new tools, because it seems to me that most issues have little to do with low
level matters of programming language or technology, but with higher level
matters of design, taste, tolerance for slowness / brokenness /
incompatibility, etc.

~~~
toyg
Part of the reason your Slack is so slow is that a lot of stuff is built to
protect from problems that Rust might eventually solve.

Slack builds the UI on web technology that got widespread in part because it
solves awkward problems with deployment (self-contained and consistent graphic
libraries, so you don’t have to worry about how your DE compiled this or that
other toolkit) and safety (web tech is heavily sandboxed so that crashes and
executions won’t open doors to bad actor). In the long run, Rust will
definitely make the latter less cumbersome (less worrying about crashes ->
simpler, lighter, faster sandboxes) and possibly help with the former a bit
(desktop environments and their libraries could shed some complexity when
moving to Rust and make it easier for programs to access them safely).

I think it’s a noticeable step forward. Will it solve everything? No, some of
the problems with Slack-like situations are due to economic factors (browsers
sticking to JS will forever continue to make JS programmers cheaper and more
plentiful than basically any other type of programmer) that Rust is unlikely
to affect. But perfect is the enemy of good in this sort of thing: incremental
progress is better than no progress.

~~~
skohan
But I think Rust is also quite vulnerable to the layering problem the previous
commenter is speaking about. One of the best things about Rust is how easy
Cargo makes it to include 3rd party code in a project, but this is also one of
Rust's biggest risks. It's already common for Rust projects to have massive
lists of dependencies, and that's something which generally gets worse as time
goes on rather than better.

Rust as a language may have favorable properties with respect to speed and
safety, but programs which run on top of a massive tree of third party code
which has been written by god-knows-who tend not to be very fast or very
secure.

NPM has already shown that dependencies can be used as an attack vector, and
unless Rust can solve this problem, I don't think it's going to bring us some
brave new world where we don't have to sandbox anymore.

~~~
nindalf
> programs which run on top of a massive tree of third party code which has
> been written by god-knows-who tend not to be very fast or very secure.

You have a point about security, but not about the speed. I can probably link
5 "we rewrote in Rust and it was much faster" articles. All of these used
third party libraries. ripgrep for example, is faster than grep, despite
having more dependencies. In reality, it just promotes better code reuse
without impacting run time speed. If anything, separating your code into
crates improves incremental compilation times.

It's possible that you might pull in a large dependency with many features.
Compiling all of this and removing the unused code will cause a compile time
penalty and no run time penalty. In practice, Rust crates that expose multiple
features have a way to opt-out/opt-in to exactly what you need. No penalty at
all. In any case, most rust crates err towards being small and doing one thing
well.
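As a concrete illustration of opting in to only what you need, this is what feature selection looks like in a Cargo.toml (the crate name and feature here are hypothetical placeholders):

```toml
[dependencies]
# Skip the crate's default feature set and enable only the pieces you use;
# unused code paths are never compiled in.
some-big-crate = { version = "1", default-features = false, features = ["json"] }
```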

Examples

\- https://blog.mozilla.org/nnethercote/2020/04/15/better-stack-fixing-for-firefox/

\- https://hacks.mozilla.org/2018/01/oxidizing-source-maps-with-rust-and-webassembly/

~~~
skohan
I agree that Rust has very favorable characteristics when it comes to
performance. My argument would be that language choice is not a panacea. It's
certainly possible to write performant code which leans on dependencies, but
the style of development which relies heavily on piecing together 3rd party
libraries and frameworks without knowledge of their implementation details is
not a recipe for optimal performance.

------
FlyingSnake
Rust is a fantastic language, and once past the borrow checker level, it can
be quite productive. One interesting compatriot of Rust is Swift, which also
ticks most of the checks in that list. Also Swift has IMO a better development
experience, due to Apple’s initiatives. I wonder what Rust developers think
about swift.

~~~
the_duke
Fun fact: several early Rust developers, including the initial author Graydon
Hoare, switched to Apple and started working on Swift.

The languages have a shared legacy and feel quite similar in certain aspects,
with Swift being a higher level adaptation.

Swift is a lot easier to use, with automatic reference counting, quite a bit
more convenience and syntax sugar, and a class system.

It's a lovely language.

It lacks what are the defining features of Rust (for me) though: low level
control available when required but usually hidden behind nice abstractions,
borrow checker, concurrency safeguards (Send/Sync in Rust), trait system over
classes, and a good macro system.

The biggest conceptual downside for me is the class system with inheritance,
overrides, etc: Rust has traits that are somewhat similar to Haskell type
classes, and are much nicer and more coherent to use in many domains. (it's
far from perfect, especially around the severely limited trait objects and
dynamic dispatch, but that is a longer topic)
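As a rough sketch of the trait-over-classes point (the types and names here are invented for illustration): behavior is declared separately and implemented per type, with no inheritance hierarchy involved.

```rust
// A trait declares behavior; any type can implement it after the fact.
trait Area {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { side: f64 }

impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

impl Area for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

// Static dispatch via a generic bound; `&dyn Area` would give the
// dynamic-dispatch trait object mentioned above.
fn total_area<T: Area>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let squares = [Square { side: 2.0 }, Square { side: 3.0 }];
    println!("{}", total_area(&squares));
}
```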

The biggest practical downsides are Apple's disregard for other platforms
(Linux/Windows are very much second/third class citizens) and the Objective-C
compatibility baggage, which makes the language a bit messy.

But overall I think it is easy to like and appreciate both languages.

~~~
FlyingSnake
Yes, there's a lot of cross-pollination between Rust and Swift, and Swift devs
pragmatically use features from the Rust ecosystem. IMO Rust has much better FFI
than Swift, but that's primarily due to Apple. I'm really stoked for both
languages to gain a larger space in the developer ecosystem.

> The biggest practical downsides are Apple's disregard for other platforms.

This is unfortunately true, but I see the Swift committee making serious efforts
to overcome that. I hope it gets better with time. Swift on the backend
(Vapor/Kitura) is simply ages behind Actix.

Another language that's syntactically close is Kotlin/native, even though it
has quite different goals. These 3 languages have brought much excitement to
development in the past few years.

~~~
zozbot234
> IMO Rust has much better FFI than Swift, but that's primarily due to Apple.

How so? Swift has a stable ABI that, among other things, enables it to expose
FFI bindings to many other languages. Rust has made the pragmatic choice of
having no stable ABI beyond the C one, so if you want to set up FFI to a Rust
library you'll have to write C-compatible wrapping code in Rust and expose
that.

~~~
FlyingSnake
Swift got a stable ABI in version 5, which is a bit late IMO. Also the community
efforts to improve Rust interop (e.g. Rustler/Elixir) are way ahead when
compared to Swift's. That being said, I personally am happy for both languages
to have first-class FFI.

~~~
skohan
> that’s bit late IMO

It's pretty rare for a language to have a stable ABI, no? I mean Rust still
doesn't have one.

------
Skunkleton
Also: almost everyone who uses rust does so by their own choice.

~~~
pengaru
I don't get this comment, rust is a new language that isn't widely embraced by
employers yet.

How would rust have a significant number of users forced to use it?

~~~
mcintyre1994
It’s discussed in the article but that quality kind of ‘games’ the metric SO
are using - what % of people using it want to keep using it? If people were
forced to use it for work then you’d likely see a lower number regardless of how
good it is, just because it wouldn’t be every single person’s preference.

It’s not a bad thing - a new language that doesn’t reach that point probably
dies - but it does make it easier to score highly on their metric.

------
smabie
"In type checking, only the signature of functions are considered. There’s no
relying on the implementation for determining if callers are correct (like you
can do in Scala, or Haskell)"

What does this mean?

~~~
twic
You can know the exact parameter and return types of a function by reading its
signature. You can't do that in Scala, because of type inference. This is
legal Scala:

    def twice(x: Int) = { x * 2 }

But you can't know the return type without reading the function body. That
example is not so bad. I routinely used to confront things like:

    def applyComputation(x: Int) = { determineComputation().compute(x) }

And now you're off on an adventure to work out the return type.

EDIT: There's sort of a deviation from this with "impl trait" returns. There
the signature says that the function returns some type which implements a
certain trait, but you can't tell exactly what.
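For comparison, the Rust counterparts of the Scala snippets might look like this (my own illustrative translation): the return type is always spelled out in the signature, with `impl Trait` as the partial exception.

```rust
// The return type must appear in the signature; the body never needs to be
// read to know it.
fn twice(x: i32) -> i32 { x * 2 }

// `impl Trait` is the partial exception: the caller learns only "some
// Iterator of i32", not the concrete type.
fn evens() -> impl Iterator<Item = i32> {
    (0..).filter(|n| n % 2 == 0)
}

fn main() {
    println!("{}", twice(21));
    let first: Vec<i32> = evens().take(3).collect();
    println!("{:?}", first);
}
```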

~~~
jfim
It's been a while since I've written any Scala, but if I recall correctly it's
frowned upon to use type inference for return values in method signatures.

~~~
MrBuddyCasino
> it's frowned upon to use type inference for return values in method
> signatures

The language shouldn't allow it in the first place. It's a symptom of not
balancing power with complexity and shows up elsewhere in Scala. They "were so
preoccupied with whether or not they could that they didn't stop to think if
they should."

~~~
logicchains
OCaml also does global type inference like this, and it generally works out
fine (unlike Haskell, it's not idiomatic to write a type signature for every
single top level function in OCaml). Maybe because the type system is more
principled compared to Scala.

~~~
momentoftop
Ocaml and the MLs have a very strong commitment to global type inference, much
stronger than Haskell's. Scala has no commitment.

That said, you do see a lot of types in Ocaml code. Ocaml source files
typically have an accompanying signature file (with extension ".mli" rather
than ".ml"). The signature file gives explicit types for all of the
definitions (fields) in the structure file. Often, you need to write these
signature files because you want to hide implementation details from the user
and so give narrower types than the ones inferred by the compiler.

You can and do write Ocaml without ".mli" files, and there, you are relying
heavily on global type inference, and built-in Ocaml tools to tell you what
your most general ".mli" file would look like. You can and do get the compiler
to write them for you and then add your restrictions and documentation. As
such, Ocaml programmers are very used to reading these signatures as the entry
point to understanding a library.

This doesn't work so well in Haskell, because Haskell doesn't have global type
inference, and annotations are sometimes mandatory. Consider the expression

    show (read "1")

Without saying what type "read" is supposed to return here, there's no way to
know what this code is supposed to do.
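Rust sits in a similar spot: local inference is pervasive, but some expressions are ambiguous without an annotation. A small sketch:

```rust
fn main() {
    // `parse` is generic over its return type, so without an annotation the
    // compiler has nothing to infer it from (the Rust analogue of the
    // ambiguous `show (read "1")`):
    // let n = "1".parse().unwrap(); // error: type annotations needed
    let n: i32 = "1".parse().unwrap();
    println!("{}", n + 1);
}
```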

~~~
logicchains
>because Haskell doesn't have global type inference

I'm curious what you mean by that. Is it because Haskell's additions mean that
its version of Hindley-Milner type inference is not able to infer types for
all expressions?

Is this also why the OCaml Emacs mode is much better, even though Haskell has
more developers?

~~~
tome
I'm curious what about Ocaml's Emacs mode is better. I use dante for Haskell
in Emacs, which is good, but I'm always interested in hearing about better
technologies.

~~~
momentoftop
I'm not the parent, but thanks for the link to Dante.

One of the main things I like about Ocaml and merlin is how robustly it can
tell you the types of expressions by hitting "C-t". It usually works on
incomplete code, and it will tell you the type of arbitrary subexpressions
(not just identifiers) in your selected region.

It will do automatic destructuring of an identifier (producing a match/case
expression with the patterns in the ADTs filled in for you). It's not perfect,
but I use it a lot for complex ADTs.

The autocompletion is great too. It will complete for local variables in
scope, and it must be doing some fairly complex stuff in the
background, since it'll autocomplete for local modules applied to functors.
For example, you can write

    let foo x =
      let open Foo(Bar) in
      ...

and when you autocomplete inside the "...", it will bring in completions from
the module generated by applying Foo to Bar.

I'd be interested to hear how dante compares.

~~~
tome
Thanks!

> I'd be interested to hear how dante compares.

> it can tell you the types of expressions

dante has flaky support for this

> It will do automatic destructuring of an identifier

dante does support this. It's a bit hokey because the code it inserts doesn't
match pre-existing indentation, but it is useful.

> The autocompletion is great too. It will complete for local variables in
> scope

That sounds very cool. I don't think dante does that, although I've never
tried it.

------
cassepipe
Rust is great but it could gain from explaining itself in terms of memory and
pointers; otherwise, why does a String have the Clone trait and a u32 the Copy
trait? I tried to learn it as a first language and it was hard until I started
learning C and grokking the stack, the heap, and pointers. I think all the
tutorials out there are ill suited since they hide away so much. It's harder to
remember stuff if you don't understand why it is that way; at least that holds
true for me. So I would love to see a Rust tutorial aimed at C beginners. Maybe
I'll write it someday. I really got discouraged when it got into the ugly
lifetime syntax. But I will definitely come back to it. (Unless Zig proves to be
just as safe and less verbose.)

~~~
ssokolow
Fair point.

The Copy on u32 and lack of Copy on String _is_ confusing until you've grasped
that things containing heap allocations are ineligible for Copy, and that Copy
is primarily intended for data types the same size as or smaller than a
pointer.
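A minimal sketch of the distinction being described:

```rust
fn main() {
    let a: u32 = 5;
    let b = a;          // u32 is Copy: the bits are duplicated, `a` stays usable
    println!("{} {}", a, b);

    let s = String::from("hi");
    let t = s.clone();  // String owns a heap allocation, so it is Clone but
                        // not Copy; without `.clone()`, `let t = s;` would
                        // move `s` and make it unusable afterwards
    println!("{} {}", s, t);
}
```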

------
turbinerneiter
It feels like being part of a village that learns to love the dragon it
battles.

------
devit
It's the only production-ready language that is both memory safe and has zero-
cost abstractions (i.e. for any C code you have Rust code that compiles to
equivalent assembly, and using more abstractions in Rust does not make the
assembly less efficient unless the abstraction can't be implemented
otherwise).
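A hedged illustration of the claim: the two functions below demonstrably compute the same result, and the "equivalent assembly" part is the comment's claim about what the optimizer does with them, not something verified here.

```rust
// Abstraction: an iterator pipeline.
fn sum_squares_iter(n: u64) -> u64 {
    (1..=n).map(|x| x * x).sum()
}

// The equivalent hand-written loop a C programmer might reach for.
fn sum_squares_loop(n: u64) -> u64 {
    let mut total = 0;
    let mut x = 1;
    while x <= n {
        total += x * x;
        x += 1;
    }
    total
}

fn main() {
    println!("{} {}", sum_squares_iter(10), sum_squares_loop(10));
}
```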

Also as long as you accept not having dependent types (at least for the short
and mid-term) and several currently unimplemented features, Rust is the
optimal way a programming language can be designed other than assorted minor
warts.

~~~
Dowwie
Batteries are very much already included. The missing features you are
referring to are likely not show stoppers.

------
orthoxerox
It's a niche language that dominates its niche.

I wouldn't write a LoB application in Rust, for example. But if I wrote
programs with really tight speed and memory requirements for a living, I would
pick Rust for the task.

If people were forced to write their website backends in Rust (or even their
frontends in Rust targeting WASM) they would hate it. Its performance is
overkill for 99.9% of backends, but the means of getting this performance kill
your productivity.

~~~
majewsky
My current side project has a frontend in Rust via WASM, and I love it. Way
better than the huge mess that is the JS ecosystem.

~~~
Dowwie
Rust WASM is still very bleeding edge. Let's not give anyone a false
impression of what they can manage to build today.

------
robotmay
I've been casually playing with Rust for a few years now. Wrote a few small
things in my previous job that still run a large part of their business, which
is pretty satisfying, but only very recently have I found a couple of hobby
projects where it just feels like the right tool for the job (for me). Web
stuff I'd still much rather write in Ruby, if I'm honest, but for playing
around on systems Rust is super fun. I ended up making
https://git.sr.ht/~robotmay/amdgpu-fancontrol, of which there's
already an equivalent in Python, but the lack of dependencies when installing
a piece of Rust software makes it feel very portable and neat.

My favourite metaphor for Rust is that it's like a friendly bare-knuckle fist-
fight with the compiler. It's not as user-friendly as, say, Elm, but it's
streets ahead of Haskell's errors.

~~~
slowwriter
Can I just say, I really appreciate the Community reference

Edit: Btw, if you have to ask what I mean you’re streets behind

------
kumarvvr
A newbie question.

As a seasoned C#, Python and JS programmer, what conceptual foundations in CS
will make me use rust more effectively?

Say I want to create a new database service, on top of Postgresql, using rust.
Would the design of rust help me in a specific way?

I want to learn and use rust, for systems programming, the kind where I build
a high performance underlying system, called by other languages, but it always
feels I need to learn quite a bit of theory to _effectively_ use rust.

I never felt the same with C# or python. A bit of OO stuff was usually enough
to be productive with them.

------
lbj
Where's a good place to start with Rust? Which domains is it particularly good
in ?

------
andi999
Also the same reason why people love C++: Stockholm syndrome.

~~~
nindalf
86.1% of people using Rust love it. For C++ it's 43.4%. [1] You'd have to
explain the disparity in the two numbers if you think they're loved for the
same reason.

Further, Rust is mostly used by people who choose to do so. There are very few
people out there forced into maintaining shitty, legacy codebases in Rust
because there aren't very many such code bases ... yet.

[1] - https://insights.stackoverflow.com/survey/2020#technology-most-loved-dreaded-and-wanted-languages-loved

~~~
drewcoo
"It occurs when hostages or abuse victims bond with their captors or abusers."
[1] The stories speak of abuse and an unwillingness to leave. This does not
detail how they came to be abused or captives. I'm not saying they were asking
for it, dressed that way, but maybe these Rust victims' behavior led them to
entrapment. That doesn't mean it's not real. This seems a lot like Stockholm
syndrome.

[1] https://www.healthline.com/health/mental-health/stockholm-syndrome#definition

------
moonchild
I don't believe that rust solves the right problems in the right ways. This is
specifically with respect to the single-owner raii/lifetime system; the rest
of the language is imo pretty nice (aside from the error messages, which are
an implementation problem).

For starters, ATS[1] and f-star[2] both provide much stronger safety
guarantees, so if you want the strongest possible guarantees that your low-
level code is correct, you can't stop at rust.

    
    
      _____________________________________________
    

Beyond that, it's helpful to look at the bigger picture of what
characteristics a program needs to have, and what characteristics a language
can have to help facilitate that. I propose that there are broadly three
program characteristics that are affected by a language's ownership/lifetime
system: throughput, resource use, and ease of use/correctness. That is: how
long does the code take to run, how much memory does it use, and how likely is
it to do the right thing / how much work does it take to massage your code to
be accepted by the compiler. This last is admittedly rather nebulous. It
depends quite a lot on an individual's experience with a given language, as
well as overall experience and attention to detail. Even leaving aside
specific language experience, different individuals may rank different
languages differently, simply due to different approaches and thinking styles.
So I hope you will forgive my speaking a little bit generally and loosely
about the topic of ease-of-use/correctness.

The primary resource that programs need to manage is memory[3]. We have
several strategies for managing memory:

(Note: implicit/explicit below refers to whether something is an explicit
part of the type system, not an explicit part of user code.)

\- implicitly managed global heap, as with malloc/free in c

\- implicit stack-based raii with automatically freed memory, as in c++, or c
with alloca (note: though this is not usually a general-purpose solution, it
can be[4]. But more interestingly, it can be composed with other strategies.)

\- explicitly managed single-owner abstraction over the global heap and
possibly the stack, as in rust

\- explicit automatic reference counting as an abstraction over the global
heap and possibly the stack, as in swift

\- implicit memory pools/regions

\- explicit automatic tracing garbage collector as an abstraction over the
global heap, possibly the stack, possibly memory regions (as in a nursery gc),
possibly a compactor (as in a compacting gc). (Java)

\- custom allocators, which may have arbitrarily complicated designs, be
arbitrarily composed, arbitrarily explicit, etc. Not possible to enumerate
them all here.

I mentioned before there are three attributes relevant to a memory management
scheme. But there is a separate axis along which we have to consider each one:
worst case vs average case. A tracing GC will usually have higher throughput
than an automatic reference counter, but the automatic reference counter will
usually have very consistent performance. On the other hand, an automatic
reference counter is usually implemented on top of something like malloc.
Garbage collectors generally need a bigger heap than malloc, but malloc has a
pathological fragmentation problem which a compacting garbage collector is
able to avoid.

This comment is getting very long already, and comparing all of the above
systems would be out of scope. But I'll make a few specific observations and
field further arguments as they come:

\- Because of the fragmentation problem mentioned above, memory pools and
special-purpose allocators will always outperform a malloc-based system both
in resource usage and throughput (memory management is constant-time + better
cache coherency)

\- Additionally, implicitly managed memory pools are usually easier to use
than an implicitly managed global heap, because you don't have to think about
the lifetime of each individual object.

\- Implicit malloc/free in c should generally perform similarly to an explicit
single-owner system like rust's, because most of the allocation time is spent
in malloc, and they have little (or no) runtime performance hit on top of
that. The implicit system may have a slight edge because it has more flexible
data structures; then again, the explicit single-owner system may have a
slight edge because it has more opportunity to allocate locally defined
objects directly on the stack if their ownership is not given away. But these
are marginal gains either way.
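The stack-allocation edge mentioned above can be sketched in Rust (a hypothetical illustration, not part of the original comment):

```rust
struct Point {
    x: i64,
    y: i64,
}

// A locally owned value whose ownership never leaves the function can live
// entirely on the stack; no allocator is involved at all.
fn sum_of_squares() -> i64 {
    let p = Point { x: 3, y: 4 }; // stack-allocated: never boxed, never escapes
    p.x * p.x + p.y * p.y
}

// When ownership IS given away, the value must outlive this stack frame,
// so a heap allocation becomes necessary.
fn escapes() -> Box<Point> {
    Box::new(Point { x: 1, y: 2 })
}

fn main() {
    assert_eq!(sum_of_squares(), 25);
    assert_eq!(escapes().x, 1);
}
```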

\- Naïve reference counting will involve a significant performance hit
compared to any of the above systems. _However_ , there is a heavy caveat.
Consider what happens if you take your single-owner verified code, remove all
the lifetime annotations, and give it to a reference-counting compiler.
Assuming it has access to all your source code (which is a reasonable
assumption; the single-owner compiler has that), then if it performs even
_basic_ optimizations—this isn't a sufficiently smart compiler[5]-type case—it
will elide all the reference counting overhead. Granted, most reference-
counted code isn't written like this, but it means that reference counting
isn't a performance dead end, and it's not difficult to squeeze your rc code
to remove some of the rc overhead if you have to.

\- It's possible to have shared mutable references, but forbid sharing them
across threads.

\- The flexibility gains from having shared mutable references are not
trivial, and can significantly improve ease of use.

\- Correctness improvements from strictly defined lifetimes are a myth.
Lifetimes aren't an inherent part of any algorithm, they're an artifact of the
fact that computers have limited memory and need to reuse it.

To summarize:

\- When maximum performance is needed, pools or special-purpose allocators
will always beat single-owner systems.

\- For all other cases, the performance cap on reference counting is identical
to that of single-owner systems, while the flexibility cap is much higher.

    
    
_____________________________________________
    

1\. [http://www.ats-lang.org/](http://www.ats-lang.org/)

2\. [https://fstar-lang.org/](https://fstar-lang.org/)

3\. File handles and mutex locks also come up, but those require different
strategies. Happy to talk about those too, but tl;dr file handles should be
avoided where possible and refcounted where not; mutexes should also be
avoided where possible, and be scoped where not.

4\.
[https://degaz.io/blog/632020/post.html](https://degaz.io/blog/632020/post.html)

5\.
[https://wiki.c2.com/?SufficientlySmartCompiler](https://wiki.c2.com/?SufficientlySmartCompiler)

~~~
rcxdude
> then if it performs even basic optimizations—this isn't a sufficiently smart
> compiler[5]-type case—it will elide all the reference counting overhead.
> Granted, most reference-counted code isn't written like this, but it means
> that reference counting isn't a performance dead end, and it's not difficult
> to squeeze your rc code to remove some of the rc overhead if you have to.

This is only the case if the compiler can effectively inline all functions.
When compiling a function on its own, the compiler has no idea if the function
incrementing a reference count is the first to do so or not. In rust the type
signatures of the called functions are all that is needed to verify the type
and lifetime correctness of a given function implementation.

> Correctness improvements from strictly defined lifetimes are a myth.
> Lifetimes aren't an inherent part of any algorithm, they're an artifact of
> the fact that computers have limited memory and need to reuse it.

Rust's lifetime analysis and 'mutability xor sharing' semantics are also
useful for correctness, both in threading (as you mention) and in the case of
unexpected mutation within a single thread: iterator invalidation is probably
the most obvious example (and it's not just because 'computers have finite
memory'; it's intrinsic to how a lot of data structures work).

What's more, Rust's lifetime and ownership system works neatly with pools and
other special-purpose allocators, and implementing such patterns safely is
frequently done in Rust (in some cases Rust lets you get away with patterns
that would be so wildly unsafe in C++ as to be impractical). If Rust didn't
care about allowing such control over memory allocation, it probably wouldn't
have many of the features it does.

~~~
moonchild
> > rc elision is super trivial

> no it's not

Fair enough.

It's still not a very difficult problem, though. You don't have to inline all
functions (which you don't want to do anyway); you can infer lifetime
attributes for each function.

RC has another benefit: it's easier to build a naïve compiler for it; the
result will just produce slow code. Whereas a naïve single-owner compiler
implementation (e.g. mrustc, which skips borrow checking) will accept
incorrect code.

------
dirtydroog
Never before has a programming language received so much marketing. It's very
odd.

~~~
nickm12
I take it you weren't programming when Java was the new hotness?

~~~
HugoDaniel
or Ruby, or Haskell, or Elixir... Rust just happens to appeal to the
front-end crowd as much as to the backend people, and they are leveraging
those windows of opportunity better than any other language or community.
wasm-bindgen is a breath of fresh air, and it even works very well with
TypeScript.

