
Two Years with Rust - anotherevan
http://brooker.co.za/blog/2020/03/22/rust.html
======
herodotus
Here is an historical note for those who might be interested. In 1979 I did a
Post-doc with Rod Burstall. I helped implement the programming language Hope,
which was the first functional programming language to use pattern-matching in
function definitions. The implementation was written in an older language
called POP-2. Robin Milner and his group were at the time working on LCF
and ML, and they eventually incorporated Burstall's pattern ideas. Eventually,
I recall there was a decision within the UK functional programming community
to consolidate their efforts into what became Haskell. And of course we see
how much Haskell has influenced the newest crop of imperative languages.

It is interesting (to me) to compare factorial in Rust with factorial in Hope:

Rust:

fn factorial(i: u64) -> u64 {
    match i {
        0 => 1,
        n => n * factorial(n - 1)
    }
}

Hope:

dec factorial : num -> num;

--- factorial 0 <= 1;

--- factorial n <= n * factorial(n-1);

Note that, in Hope (unlike its own inspiration, Prolog, and, I think, unlike
Rust), the order of the rules does not matter: the most specific pattern takes
precedence. Hope was primitive with respect to types, and did not use Milner's
type inference ideas. I don't think Burstall ever intended it to be a
"real" programming language. When I left Edinburgh, Don Sannella took over from
me, so I was not involved in writing the paper about Hope, but my contribution
is acknowledged.

I implemented the pattern matching code, and the idea of pattern compilation
occurred to me. I remember showing it to Rod. He freaked out at first, but
after about a five minute harangue, the penny dropped, and I remember him
saying "clever Michael", "clever Michael" in his charming way.

~~~
steveklabnik
Very cool, thank you for this history!

> I think unlike Rust

Yes, this is correct: Rust looks at patterns in order, top to bottom.
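A minimal sketch of that order-dependence (the `classify` function is a made-up illustration; swapping the two arms would make the catch-all shadow the literal pattern, which rustc flags as unreachable):

```rust
fn classify(n: u64) -> &'static str {
    match n {
        // Arms are tried top to bottom: the literal pattern must come
        // before the catch-all, or it would never be reached.
        0 => "zero",
        _ => "nonzero",
    }
}

fn main() {
    assert_eq!(classify(0), "zero");
    assert_eq!(classify(7), "nonzero");
}
```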

Incidentally, some folks have asked for a syntax closer to Hope (and Haskell,
and others): [https://github.com/rust-lang/rfcs/pull/1564](https://github.com/rust-lang/rfcs/pull/1564)

    
    
      fn factorial(0: u64) -> u64 { 1 }
      fn factorial(n: u64) -> u64 { n * factorial(n - 1) }
    

I doubt this will ever happen, though.

~~~
hota_mazi
I really hope this doesn't happen.

I've never understood the infatuation for this Haskell type syntax where you
need to repeat the name of the function for each and every single match case.
It's so needlessly verbose.

~~~
macintux
I find it dramatically cleaner.

Each clause is categorically isolated, unlike a case statement (the
alternative in Erlang, with which I’m most familiar) where code can precede
the case statement and thus introduce bindings that can muck up the logic.

~~~
hota_mazi
Yes but why repeat the name of the function every time?

It might make more sense if such clauses are scattered through the source, but
they are pretty much all the time grouped together, so the name of the
function doesn't need to be repeated all the time.

For example, instead of

    
    
        factorial 0 => 1;
        factorial n => n * factorial(n-1);
    

Something like

    
    
        factorial {
            0 => 1;
            n => n * factorial(n-1);
        }

~~~
macintux
As much as I like short functions, not all function clauses are just one line.
I don’t see your alternative as helpful for longer clauses.

~~~
hota_mazi
Whatever syntactic means you use to specify long clauses with the current
syntax you can reuse with my syntax.

~~~
macintux
Yes, but function names are a useful way to separate long clauses.

------
twsted
> It's been over 10 years since I last worked with C++ every day, and I'm
> nowhere near being a competent C++ programmer anymore. Part of that is
> because C++ has evolved, which is a very good thing. Part of it is because
> C++ is huge. From a decade away, it seems hard to be a competent part-time
> C++ programmer: you need to be fully immersed, or you'll never fit the whole
> thing in your head.

This is really true. I have worked with C++ for 15-20 years, reading and
studying everything I could.

I've reached a good competence but every time I stopped for a while I
immediately fell behind.

C++ is a difficult beast.

~~~
yaktubi
I think to get good at a given systems language these days the effort is the
same. For instance, concurrency still requires the same underlying ideas to be
understood whether C++ or Rust. After all the machine underneath is the same.

Rust has some nice-to-haves of course, but C++ has been able to grow up and
evolve because of the rich feature-set the template meta-programming language
has provided it.

One can simply stick to C++11 or 14 and work with those features alone to make
strides in development. Hell, people have used C++98 for decades.

Personally, I think what C++ offers is the same as Rust. Both need robust
testing and code-coverage tools for correctness, but the end result simply
requires good development practices and love for the work being done.

~~~
quietbritishjim
> Personally, I think what C++ offers is the same as Rust.

I think this dismisses incidental complexity that can exist in general, and
certainly does exist in C++.

Just look at how complex move semantics are in C++:

* you have & and && references,

* there are all sorts of categories (e.g. glvalues),

* in a template && means something different (but not always),

* std::move by itself doesn't actually do anything (yes I know it's a cast but this is still confusing at least upfront),

* even if you do pass the result of std::move (or otherwise know you have an rvalue reference) to a constructor then it's still possible that the object will actually be copied with no error or even warning.

In Rust, the difference is enormous:

* When you move from a value, it's guaranteed that the old object's destructor will not be called, so you don't need a move constructor that hackily sets up an empty value so its destructor won't do anything;

* Move is _always_ a bitwise copy, so it's easier for the caller to understand and free for the implementor to implement (this is enabled by the previous point);

* Move is the default instead of copy, which makes a million times more sense: you can implement copy as a regular method that happens to return (move!) its result (whereas you could never implement move as something that returns its result by copying), and if you actually want to pass a copy of a value to a function you can just call the copy method and move that value in.

This final one is really key: it's not possible in C++ because of its history
and backwards-compatibility requirements, and that's why it had to come up with
the crazy system it now has. Rust has many nice features, but really the move
vs copy stuff by itself is a huge saving in complexity, and the fact that Rust
manages it at all proves that it is not intrinsic to the problem.
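The Rust side of the contrast can be sketched with nothing but standard library types:

```rust
fn main() {
    let s = String::from("hello");
    let t = s; // move: a bitwise copy of the String's (ptr, len, capacity)
               // header; the compiler statically disables s's destructor,
               // so nothing is freed twice and no "emptied-out" state is needed
    // println!("{}", s); // compile-time error: use of moved value `s`

    let u = t.clone(); // copying is just an ordinary, explicit method call
    assert_eq!(t, u);
}
```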

(Disclaimer: I have worked with C++ on and off full time for more than 15
years, whereas I have only tinkered with Rust.)

~~~
pjmlp
You forgot about SFINAE, CTAD, decltype vs auto, perfectly forwarding
references, guaranteed RVO (but only in certain ISO versions), initialization
semantics, ...

~~~
WesternStar
He didn't forget perfectly forwarding references.

~~~
quietbritishjim
But that does show how confusing the notation is!

~~~
WesternStar
I guess I watched the Back to Basics C++ series on move semantics and it's not
really all that bad. I did struggle with it beforehand though. I've never
worked in C++ tbh. It does take the 2 hours but you have it when you're done.
[https://youtu.be/St0MNEU5b0o](https://youtu.be/St0MNEU5b0o)

~~~
quietbritishjim
Really? You can tell off the top of your head what the difference between an
xvalue and a glvalue is? Which function has higher precedence in the overload
list out of foo(string&) and foo(string&&)? (Note that's not a const string&!)
That these are so obvious to you that there's no chance you would make a
mistake about them when you're in a hurry trying to solve an actual problem
that has its own complexities rather than playing games with C++'s unnecessary
complexity? The one where foo(std::move(x)) doesn't necessarily move x is a
particular killer (although at least it only causes a performance problem
rather than a noticeable change in behaviour).

I have worked with C++ for a hell of a lot more than 2 hours since C++11 came
out (the first version with move semantics) and worked with many others that
have too, and I can tell you that "once you have it you're done" is just not
true. I fully understand all the concepts but I can still make mistakes.

~~~
WesternStar
Is there more? Yeah. Do you need it for most code? No. Also, I lied: I have
coded professionally in C++.

~~~
parkovski
Yeah, we totally believe you. Just keep on digging your heels in on this one,
cause that's how you get people to take you seriously and think you're smart.

------
rvz
I would expect Rust to do well in these sorts of areas involving systems
programming and low level development including projects like Firecracker.

The features that are very compelling are that installation is painless, static
linking is encouraged and widely used, and its APIs aren't tied to any specific
platform, making it a true cross-platform language done right. It would be
better to compare it to C++ rather than C, since both interface with C and
surpass it in complexity.

While the language is mature, the author fails to mention that most of the
crates ecosystem is immature and some crates are unsafe. Especially in the
domain the author is working in, there's a degree of risk in importing some
crates which can compromise the safety of the project (use at your own risk).

Cross-compilation is there in Rust, but it requires downloading the toolchain
for the specific platform, whereas Go has it truly built-in, so I'll give them
that one. Lastly, it may be possible to use a cross-platform GUI library like
gtk-rs, but it isn't widely adopted unlike Qt, Electron and Flutter. The
question around that is whether Rust is suitable for that use-case. As many
ideas and crates for Rust GUI development are coming, for now I'd say soon.

The author is certainly bullish on Rust in general and in low-level
development and so am I.

~~~
Random_ernest
> The author is certainly bullish on Rust in general and in low-level
> development and so am I.

While I share your sentiment, I've recently talked to about 5 people who write
software for low-level, security-relevant things in airplanes. Imho the best
application for Rust one could think of. None of them had even heard of Rust.
But this is highly anecdotal of course.

~~~
daxfohl
There's a number of reasons. Anything in a regulated industry like that has to
have everything approved by regulators. The whole compiler toolchain, all
libraries, blah blah. All have to be certified versions before you can use
them. You can't just pick up github latest compiler and expect to ship a
safety critical device with it. Coders in regulated industries may have never
even heard of github, much less rust. Different mindset.

Second, it's typically not x86 architectures. They'll have a specific CPU or
SOC that they use, from a specific vendor, and other specific vendors that
provide the (certified) compiler and possibly RTOS that is used to target that
CPU. Those vendors have decades of investment in their C code. Some small
change in the asm Rust produces vs C (and I'd expect the difference to be much
more than a small change) could just break everything in a finely tuned RTOS.

Third (and last one I can think of offhand), there are tons of things like
static analyzers and such that can be used against C code and have been
developed over decades to find many of the things Rust has built in. They're
not as good as Rust at some things but better in others.

Oh, fourth, these companies already have huge codebases and libraries they've
already written and are used to. Rewrites / refactors are less common in
regulated industries because of all the documentation they require.

Okay, fifth, and perhaps the biggest one, at least in my experience, we didn't
ever malloc / new in the app code anyway, because of the potential for out of
memory errors. We created a couple big buffers up front and used those
exclusively. So Rust's ownership model wouldn't even help there, iiuc. I
imagine most safety critical devices are similar?

None of this is to say that Rust will never be useful in a regulated context,
but it has a lot of hurdles to jump.

~~~
Ar-Curunir
Rust’s ownership model isn’t useful just for heap-allocated things; it’s
useful for all kinds of other safety guarantees, such as preventing
unnecessary mutability.

~~~
steveklabnik
To elaborate on this, nothing about ownership or borrowing has anything
directly to do with heap or stack allocation. Allocation fits into the
ownership and borrowing rules, not the other way around.

~~~
daxfohl
Nice. Is it frequently used in contexts outside of allocation? I assume
allocation is the primary use, or at least it's the most talked about, but I
have never done anything complex in rust.

~~~
nicoburns
One example of where it's used: to enforce correct usage of mutexes. Rust's
`Mutex<T>` owns the data it protects. When you lock the Mutex, you get a
reference to the data inside it, but it's impossible (a compile-time error) to
store a copy of the reference beyond the point where you release the lock.
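A minimal sketch of how that looks in practice (the counter is illustrative; the commented-out line is the kind of escape that fails to compile):

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0u32);
    {
        // lock() returns a MutexGuard; the data is only reachable
        // through the guard, so access can't outlive the lock.
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // guard dropped here, releasing the lock

    // Keeping a reference past the guard is a compile-time error:
    // let stale = { let g = counter.lock().unwrap(); &*g };
    assert_eq!(*counter.lock().unwrap(), 1);
}
```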

~~~
daxfohl
Cool. So the fifth point is largely nullified, which is a big one -- that the
features of Rust would at least be _useful_ in typical safety-critical code.

------
manifoldgeo

       "Static linking and cross-compiling are built-in."
    

I found out that the above is not completely true. According to the official
Rust documentation, only Rust dependencies are statically linked by default.
But compiled Rust programs depend on more than Rust dependencies, as they also
use shared C libraries. The _capability_ is built-in, but it's definitely not
straightforward.

"Pure-Rust dependencies are statically linked by default so you can use
created binaries and libraries without installing Rust everywhere. By
contrast, native libraries (e.g. libc and libm) are usually dynamically
linked..."[1].

I did a small experiment to make a from-scratch Docker container with no
dependencies and a single Rust binary and found out that I could not do that
without jumping through some hoops[2]. I had to have it include a lightweight
implementation of libc called "musl". See my write-up here[3]. If anyone has
found another way around this I would love to hear about it and make a
correction in my write-up.

References:

[1]: [https://doc.rust-lang.org/1.13.0/book/advanced-linking.html#...](https://doc.rust-lang.org/1.13.0/book/advanced-linking.html#static-linking)

[2]: [https://doc.rust-lang.org/1.13.0/book/advanced-linking.html#...](https://doc.rust-lang.org/1.13.0/book/advanced-linking.html#linux)

[3]: [https://bxbrenden.github.io](https://bxbrenden.github.io)

~~~
kouteiheika
To be honest you're overcomplicating the whole thing by involving Docker in
the build process. Assuming you have Rust installed (which is only one
`apt-get` or `curl` + `sh` invocation away) and your project does not depend on
any external C libraries (except libc), then building a fully-static Linux
executable is as simple as these two commands:

    
    
        rustup target add x86_64-unknown-linux-musl
        cargo build --release --target=x86_64-unknown-linux-musl
    

There are no hoops to jump through here. I can totally understand using Docker
for something which can be a pain to set up a toolchain for or to install all
of the dependencies for, but for the Rust compiler I see very little reason to
run it inside of a container except in a very few niche cases.

~~~
staticassertion
At one point docker was the only way to do this, I think. So if you start
searching the internet for how to do static linking with musl you're very
likely to find links to the docker'd approach.

~~~
steveklabnik
I don't think docker has ever been the only way to do this, but it is a
popular one. It helps when you go beyond pure Rust.

~~~
staticassertion
Maybe not the only way, but it was the easiest and most widely recommended way
when I was figuring out how to do musl builds around 2 years ago.

~~~
steveklabnik
Huh, I've just been pointing people to [https://blog.rust-lang.org/2015/04/24/Rust-Once-Run-Everywhe...](https://blog.rust-lang.org/2015/04/24/Rust-Once-Run-Everywhere.html) for the last five years. Maybe someone was recommending Docker a lot and I just missed it.

~~~
staticassertion
Maybe it's the more niche use case of deploying to AWS Lambda before custom
runtimes.

------
jksmith
"I've also found that programs seem more likely to work on their first run,
but haven't made any effort to quantify that." I had the same experience with
Modula-2, once I got all the compile errors cleared.

I think there is a real need for the current generation of business
programmers to start focusing more on safety and rigor as they transition into
IoT. For instance, I don't see most programmers I've worked with as capable or
willing to use enough rigor to make autonomous cars safe. All these
programmers have generally been slinging Java for years, without even much
effort to expose themselves to other languages unless there is a business
requirement to do so.

So the concern I have with Rust (as a newbie) is that it isn't consumable
enough to have its value adopted by most business programmers out there, and
I've wondered if Ada would serve that role better because it seems generally
easier to grok, percentage-wise.

~~~
batter
Whatever bad things you see in this world, they were done by Java developers.
Go, Scala, Kotlin, JavaScript, and Python developers would never do anything
as bad as Java developers. I see that a lot of people stick to that mantra.

~~~
barrkel
Java is used on projects with more scale (business process complexity) than
the other languages you list, though.

There are reasons for the things that are done in the Java community. We might
not like the result, but there are reasons, and most of them have to do with
scale and process at scale.

There are other ways to build out scale, but they may require different kinds
of organizations to build them, with a different mix of people. Big Java
projects tend to be structured to make efficient use of developers with 0-5
years of experience, because at scale those developers are reasonably easy to
hire. Those developers need to colour inside the lines (=> framework, we'll
call you, follow the patterns, don't invent abstractions), create code which
can be tested in isolation (=> inversion of control), and have their output
glued into position (=> dependency injection) in a much larger solution. Most
of the structure of the Java ecosystem follows from this.

------
BooneJS
I've been immersed in C++ at the office for over a year. It is a big language,
but like all big languages, one ends up finding the parts of it they need to
use and getting really good at them.

The more languages I learn, the more I dislike language zealots. They're great
to leverage when learning, but they're too blinded by their own opinions to
see that they're making mountains out of molehills.

~~~
6gvONxR4sf7o
I think part of the tension is that some people are in it for a better hammer,
while others are in it as an end unto itself. PL is just plain interesting as
a hobby interest.

------
abinaya_codes
Recently we've added support to curate Rust programming remote jobs at
Remote Leaf[1]. We've seen a surge in Rust remote jobs in recent months. Maybe
because it's being accepted by a wide variety of developers and hiring
companies have also started using it?

[1] - [https://remoteleaf.com](https://remoteleaf.com)

------
nrclark
I see a lot of people comparing Rust favorably against C++, largely due to
C++'s complexity. And yet at the same time, the Rust dev team are changing and
expanding the language very quickly.

Is there any plan to feature-freeze Rust?

Otherwise it'll just become C++ 2.0 - a giant mass of features that are
progressively designed to replace each other, until the language becomes too
complex for any one person to master.

~~~
nicoburns
Notably almost none of Rust's new features are replacing old ones. Most of
them are either opening up new capabilities (e.g. async-await), or making
existing mechanisms more general/flexible (e.g. const generics, GATs).

It might accumulate cruft eventually, but I think it's at an inherent
advantage over C++ due to its heavy functional influence, which is all based
on math / PL theory. On the other hand C has always been a quick-and-dirty
language, and C++ inherited a lot of that legacy.

The macro system also helps a lot, as features can be prototyped as macros,
and only stabilised once the design has been iterated and used. More niche
features can stay as macros in 3rd party libraries.

~~~
_bxg1
It also has years of crucial hindsight in language-theory. OOP and FP are both
fairly mature at this point, and Rust started out of the gate by elegantly
weaving them into a single coherent model, compared with C++ which started
with _neither_ (C) and had to monkey-patch both of them on, over the very
decades when the ideas behind them were most actively evolving.

------
eternalban
I find it interesting that both Go and Rust, in practice, step on one of their
respective foundational concepts. (2010: [https://blog.golang.org/codelab-share](https://blog.golang.org/codelab-share), for Go, and I suppose I don't need to cite "safe system code" for Rust.)

For example, the Go community could have stuck to their guns about building
server-side concurrent code only using channels and value objects, performance
impact be damned, and we would not have high-performance server code written
in Go (using locks and shared memory). And as OP points out, "system level"
code written in Rust will likely have (possibly opaque) unsafe code segments.

The intent here isn't to rag on either language. It's more musing out loud
about the impact of conceptual consistency on product success in the course of
development: is it a (practical) mistake to insist on it? Based on the Go and
Rust teams' decisions to date, it seems it pays to be pragmatic. (Or are we
simply throwing in the towel too early?)

~~~
steveklabnik
I'm not entirely sure. I think this line of inquiry is interesting, but I'm
also not entirely sure that you're declaring Rust's original principles
correctly. That is, the safe/unsafe dichotomy was always there. It's
impossible to build real systems without it, and so I think that's why people
gloss over it a bit.

See slide 16 of [http://venge.net/graydon/talks/intro-talk-2.pdf](http://venge.net/graydon/talks/intro-talk-2.pdf), the original
presentation of Rust to the world.

~~~
leshow
Funny that the talk says "we are not rewriting the browser", given the current
projects at Mozilla.

~~~
steveklabnik
I always took that to mean "we are not doing the Big Rewrite."

------
metreo
I've had a much shorter time with Rust, and I have to say I really enjoy the
learning experience: learning the language is genuinely pleasurable. Other
languages with larger, more established communities are often unable to
replicate this same feeling of inclusion and value as a member of the
ecosystem of contributors.

------
WilliamEdward
A South African domain, but a blogger from Seattle...

You don't see that every day :)

~~~
mjb
Hi, I'm the post author. As you guessed, I'm South African. I worked at Amazon
in Cape Town in the early days of EC2, and live in Seattle now. A lot of core
development on EC2 (and other AWS products) still happens in Cape Town.

------
modernerd
Everyone always seems so positive about Rust. I'd love to try it for some
personal projects. Are there any downsides beyond the niggle the author
mentioned?

Are compilation speeds an issue for anyone?

Is there much that can be done to improve this? (Both in Rust itself and at a
developer level; presumably a faster dev machine helps?)

~~~
kouteiheika
> Is there much that can be done to improve this?

I have a 50k+ lines of code project which usually recompiles after a change in
~2 seconds in release mode.

There are many tricks which can be used to improve compile times to the point
that even on medium-sized projects the compile time is not an issue. But you
need to keep a certain discipline to adhere to these.

1) Use LLD instead of the system linker.

2) Don't add dependencies willy-nilly. Especially for trivial stuff which you
don't need pedal-to-the-metal optimized. (e.g. do you really need to add that
4000 lines long SIMD-optimized base64 encoder/decoder, or can you live with a
naive 10-lines long version you can write yourself in a few minutes?)

3) Feature-flag gate dependencies/features not necessary during development.
(e.g. do you actually need HTTPS support during development, or can you test
your webapp on HTTP and only compile-in the TLS stack for production
deployment?)

4) Avoid heavy dependencies. (e.g. there are some popular web frameworks for
Rust which have 100+ dependencies by default; if you pick such a framework
then your compile times are obviously going to be very heavily affected)

5) Use dynamic dispatch (&dyn T and Box<dyn T>) instead of static dispatch
(impl T) when accepting generic arguments in cases where you don't need pedal-
to-the-metal performance.

6) If you absolutely need to use static dispatch purely for ergonomics (and
not because of the performance) then create two functions - a dynamically
dispatched private one which accepts a &dyn T and contains the actual
functionality, and a public one which accepts impl T and is a one-line wrapper
around the private one.

7) Don't use #[inline] annotations if you don't absolutely need them.

8) Split your project into multiple crates.
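As a sketch of point 3, a hypothetical Cargo.toml that compiles the TLS dependency only when a `tls` feature is enabled (the crate name and version are illustrative, not from the original comment):

```toml
[dependencies]
# only built when the "tls" feature is requested
native-tls = { version = "0.2", optional = true }

[features]
default = []           # day-to-day dev builds skip the TLS stack
tls = ["native-tls"]   # production: cargo build --features tls
```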

~~~
leshow
> 6) If you absolutely need to use static dispatch purely for ergonomics (and
> not because of the performance) then create two functions - a dynamically
> dispatched private one which accepts a &dyn T and contains the actual
> functionality, and a public one which accepts impl T and is a one-line
> wrapper around the private one.

Would you really recommend this as common advice? It doesn't seem like a good
idea to me. If all you cared about was binary size and compilation speed,
maybe, but not otherwise. Same with blanketly recommending the use of &dyn T
instead of <T>. There are other problems with dynamic dispatch in Rust, namely
that it's kind of a pain if you need to `+ OtherTrait` with it.

~~~
kouteiheika
Yes I would. In general people tend to overuse static dispatch even when it's
not really necessary. Of course the issue is a little bit more nuanced than
"always use X unless Y" and there are tradeoffs in play here that need to be
balanced.

For example, if your function is really small - yeah, it's probably fine to
just use static dispatch. If you're writing a generic data structure - you
most likely also want it to be a statically dispatched Struct<T>, but with a
healthy dose of #[cold] annotated non-generic functions for the cold paths.
However, let's say that you have a function that accepts a filesystem path and
loads a PNG from it - you do _not_ want the PNG loading code to be 1)
duplicated in every compilation unit (compilation time bloat), and 2)
monomorphised three times just because you passed an `&str` once to it, a
`String` another time, and a `PathBuf` yet another time (compilation time and
executable size bloat), so you definitely do want dynamic dispatch here (at
least under the hood with the two-function trick).

I do think the default should indeed be &dyn T, and you should only go for
impl T when you can actually clearly substantiate _why_ you should use it,
instead of the other way around, which is the default now in the Rust
ecosystem. (Which is how you end up with the 20+ second edit-compile cycles
one of the sibling comments mentioned.)
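A sketch of the two-function trick for a path-taking loader like the one described above (`load_bytes` and its body are illustrative stand-ins, not a real PNG decoder):

```rust
use std::path::Path;

// Private, dynamically dispatched: the body is compiled exactly once,
// no matter how many path-like types callers use.
fn load_bytes_impl(path: &Path) -> Vec<u8> {
    std::fs::read(path).expect("read failed")
}

// Public, statically dispatched one-liner: keeps the ergonomic
// `impl AsRef<Path>` signature, but only this thin shim gets
// monomorphised per caller type, not the real work.
pub fn load_bytes(path: impl AsRef<Path>) -> Vec<u8> {
    load_bytes_impl(path.as_ref())
}

fn main() {
    let p = std::env::temp_dir().join("two_fn_demo.bin");
    std::fs::write(&p, b"png?").unwrap();
    // &PathBuf and &str both go through the same compiled body
    assert_eq!(load_bytes(&p), b"png?".to_vec());
    assert_eq!(load_bytes(p.to_str().unwrap()), b"png?".to_vec());
}
```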

~~~
eximius
Could a proc macro be written that uses dynamic dispatch in debug and static
dispatch in release? That would be optimal for dev compilation speed and
binary speed, at the cost of binary size. It seems like a pretty good tradeoff
for most cases.

~~~
kouteiheika
IIRC there was a crate with a procedural macro which did something similar to
this. However, currently that isn't really optimal for quite a few reasons:
1) having a procedural macro by itself pulls in extra dependencies; 2) it
introduces extra work for the compiler, so it does negatively impact compile
times (procedural macros don't operate directly on the AST, so they have to
parse the token stream into an AST, process it, serialize it back into a raw
token stream, and the compiler has to parse it again); 3) Rust's current
procedural macro machinery doesn't yet support emitting proper error messages
from a procedural macro, so if you get an error or a warning from a piece of
code generated by a procedural macro it will just point to the
#[name_of_the_macro] annotation instead of the actual location where the issue
originated.

------
rstuart4133
> The biggest long-term issue in my mind is unsafe

I'd say that, but for different reasons. It's depressing how often you have to
use unsafe. I would not mind if it was hidden in libraries tested within an
inch of their life 99.9% of the time, but it didn't work out that way for me.
Recursively defined data structures like trees were just a nightmare to do
without unsafe's.

I thought that was perhaps because I just sucked at Rust, but then I listened
to a talk from one of the Mozilla core devs working on Servo. Their code had
more unsafe's than mine. The amount of parallelism they were trying to get was
extreme of course, so it wasn't really much of a comparison. It made me feel
much better all the same.

------
tannhaeuser
Seeing as the prime (and arguably only essential ^1) use cases for C are 1.
O/S kernels and drivers, 2. embedded code for tiny microcontrollers, 3.
language interpreters/runtimes and JITters, 4. bootstrap compilers, 5.
portable command-line utilities, 6. low-level/mixed asm routines for
performance, and 7. the large body of legacy apps of course, are there any
examples for successful implementations for these categories in Rust? I'd
imagine developing a nontrivial language runtime using Rust's memory model
could be hard or impossible to do performantly. But that is or was the
mainstay of C programming on Unix - implementing higher-level languages such
as shell or awk.

^1: This is of course opinionated, but my reasoning for not including
application code, and in particular long-lived evented or multithreaded app
servers, is that I personally think these are beyond C's memory management
capabilities (even if you get malloc/free 100% right, there's still the
problem of memory fragmentation) and are basically erratic outcomes of 1990s
multithreading code originating from coroutines in GUI programs.
~~~
joppy
Don’t C++ and Rust still have the memory fragmentation issue, since they have
to use an allocator at some point?

~~~
tannhaeuser
Exactly. That's why I don't think close-to-metal languages are the way to go
for _most_ apps (as in "C is _not_ a general-purpose language"), except the
ones I listed, and probably some others. I admire the Rust community's energy
and persistence in creating a new zero-overhead language, but, for me at
least, they're ultimately falling victim to the idea that an app must be a
single giant monolithic binary.

C and Unix grew strong because of small, composable programs, where each
program's memory was manageable in a way that multithreaded, let alone
async/evented server-like programs, aren't (if they need dynamic memory at
all). But ever since, idk, Java app servers? people have had this idea that
they're better at memory management than the kernel + MMU, when that isn't
clearly demonstrated at all, considering that e.g. GC overhead isn't
insignificant in long-running programs.

What I'd rather like to see is bringing down process-spawning overhead
(response latency), or at least a rational discussion backed by benchmarks of
why everybody (except unikernel folks) blindly follows the big fat app server
idea when, from the outset, a process-per-request model with
memory/permission/resource isolation clearly looks much saner given
service-oriented workloads and the troubles of recent years (e.g. side-channel
attacks, DoSing, leaks).

------
krebs_liebhaber
This is only tangentially related to the article, but why does any discussion
of Rust and/or Go asymptotically approach "RUST VERSUS GO RUST VERSUS GO", or
at least have everyone and their grandma chip in on which one they prefer
(like this very post)?

I'll admit that I only have experience with the former, but they seem like
totally different beasts to me, intended for very different domains / target
audiences. I figured that the holy war would be between Rust and the other
trendy systems languages (Nim, Zig, Crystal) or between Go and other Web-
oriented / glue languages (Java, Python, Perl, et cetera).

~~~
lmkg
It's an accident of history and messaging. They were both publicly announced
at the same time, and both described themselves as "systems programming
languages."

As it turns out, the two languages have entirely different ideas of what
"systems programming" means. And Go was announced very close to its 1.0 launch
while Rust was announced very, very early in its development cycle. So in
reality they actually align neither in timing, nor in target domain. Still,
the initial contrast seems to have stuck.

The only real overlap is that both aim to be C successors. But in such
different ways that the head-to-head comparisons make little sense.

There's also the perception that they're the only recently-created languages
that have gained any real traction. You don't see nearly the ink spilt on Nim
or Zig as you do on Rust or Go. So there's maybe some sibling rivalry there.
(In this vein, I occasionally see comparisons of Rust vs Swift.)

~~~
reacharavindh
Learning Go and Rust recently, it feels like Go is aiming to be a better Java
(GC, excellent libraries, feature-rich std library, fast enough but easier on
devs) and Rust is aiming to be a safer C.

~~~
WesternStar
It's weird people don't compare Go to Kotlin. It has channels and coroutines.
I think Kotlin does better with contexts, and I think flows are a really cool
addition. There does seem to be a lot there, but honestly, it reminds me a lot
of Go in its pick-up-and-play nature.

~~~
pjmlp
Maybe because on the JVM it is a guest language and on LLVM, Kotlin/Native is
barely production ready.

------
jpz
Is it just me and my 30" monitor, or is the font for this webpage extremely
small?

~~~
Veen
Not just you. It's tiny. I also wish people would turn on hyphenation if they
insist on justifying text on the web.

------
bitfield
"I want off Mr Rust's wild ride."

