
"Rust does not have a stable ABI" - caution
https://people.gnome.org/~federico/blog/rust-stable-abi.html
======
young_unixer
This whole ideology of "the user should get all their software from their
Linux distribution", and its implicit consequence that there's no clear
difference between system software (internal tooling) and application
software installed by the user (Audacity and friends), should just die
already.

I want my OS to just provide a decent interface over which I can install
application packages myself, packages that I get from my own sources, just
like on Windows. If those packages are statically linked, fine. I know most
Linux users disagree, but I don't want the relationship between software
vendor and user to be distorted by some distro maintainer, or to be limited
to a package manager. I want to be able to store application installers in my
filesystem.

I also want my distribution to hide its Python binary from me so I can
install my own Python without breaking the OS.

Basically: stop assuming that I want to live under your wing. I just want you
to give me a nice desktop environment, a terminal, and a well-documented way
to install third-party software.

I know distro developers don't owe me anything, and it's fine if they do
something else, but this is the actual reason why Linux isn't used on the
desktop.

~~~
bsaul
I think the original reason for shared libraries, and the only true one, is
that they're meant to save hard drive (and maybe memory) space. But the ratio
of assets to code is now so big (media files, or data in data-intensive
algorithms), with code representing almost nothing, that I don't think the
optimisation is really worth it anymore.

~~~
teruakohatu
There are also security implications. Update one shared lib with a security
patch and all applications that use it are now using a patched version.

~~~
posix_me_less
This is a very often repeated argument in favor of giving control over shared
libraries to the distribution, but it is mostly distribution marketing. In
practice, most users don't _really_ care about security. The very small
portion of users who do care about security don't wait two weeks or more for
the distribution to make fixes available; they fix or mitigate it themselves
as soon as possible.

It is true that it is easier to fix one version of a library than 10 different
versions. But if you need 10 different versions for different applications,
you probably do not need to patch all 10 of them.

~~~
ithkuil
Most people don't care about security ... until they do.

Most people don't understand security and are not equipped with the necessary
knowledge to correctly judge risk. As long as security is just something that
gets in the way of getting some job done, most people will just plow ahead,
since not getting the job done right now has a high and easy-to-understand
cost.

------
phh
The article is pretty interesting and I learned quite a few things, but it
looks like the author is knowingly not answering the issue they themselves
raise.

In my opinion, the most important thing distros do that is incompatible with
how Rust currently works is handling security/bug updates.

The one libjpeg.so for everyone is meant to fix libjpeg's flaws for everyone.
And it has many security flaws. And it has many users. There is no denying
that the way distros do this is good.

Now, to pick the author's code, one of its dependencies is a CSS parser,
which is prone to flaws. (Maybe not /security/ flaws, but still.) The
question is: how is the distro supposed to handle that?

I know Rust has tooling for that, but it seems to me that with the
exact-version-match crate build system, every dependency will happily break
its API. So let's say the author no longer has time to develop the Rust
librsvg, and the cssparser crate has a major flaw which is fixed only in a
new API-rewrite branch; then what? Are distros supposed to fix that
themselves? Sounds like much more work for them.

~~~
posix_me_less
> There is no denying the way this is done by distros is good.

Let me tell you, the way it is done by distros (CentOS, Debian) is far from
good. You will get the fix a long time after the bug is published. And you
only get it if your system is recent enough.

------
jpm_sd
This is a bunch of nonsense. Rust prefers static linking because it is
predictable. These supposedly "huge" binaries are laughably small on a modern
>1TB hard drive. If you're building a tiny embedded system, by all means
optimize your builds system-wide, you have total control! But for a desktop,
is this really a concern?

~~~
dTal
Yes, it's a concern. Firstly, hard drive space isn't the only reason to make
binaries small - you have RAM pressure, cache pressure, and bandwidth to save.
Secondly and more importantly, waste adds up. If you replaced every binary on
the system with a Rust equivalent - which, to listen to some advocates, is the
eventual goal - you could end up with a base system that's many times larger.

In a larger sense, something that sets out to be a "systems programming
language" needs to be exactly the sort of thing suitable for a tiny embedded
system, even if it isn't running on one, because everything else builds on top
of it. The attitude that "we have tons of power, why not waste it" just
doesn't fly at the very lowest levels. You can write a desktop application in
Python, and it's broadly fine - but try writing an OS kernel!

~~~
zaarn
There are patches that let Rust run on ESP32 systems, so I think it's
entirely suitable for tiny embedded things. What makes it bloaty is the
linked-in standard library, but that's not an unsolvable crisis; you can
dynamically link against glibc, and there is a crate for core Rust IIRC.
That'll get you reasonably sized Rust applications.

And for writing kernels the same applies; without the stdlib, it gets a lot
smaller very fast. I've done it, so I think I can count myself as having some
experience there. The biggest part of my kernel is a 128KB scratch-space
variable it uses during boot as temporary memory, until it has read the
memory mappings from the firmware and bootstrapped memory management on a
basic level. The remainder of the kernel (minus initramfs) is then about 1MB,
with the largest part after the 128KB scratch space using about 96KB.

~~~
nwallin
Regarding embedded, you generally only have one program. So dynamic linking
buys you precisely nothing in that space.

~~~
zaarn
Depends on if embedded refers to microcontrollers or just single board
computers in various devices, but it wouldn't buy you much either way, yes.

------
jokoon
Wouldn't it be possible to have a C-like language that is somewhat backward
compatible with C and has the nice security features of Rust?

I get that Rust is awesome, but I'm not certain you need to make an entire new
language just to have the security stuff.

Of course it might be complicated to do, but in the end, aren't there linters
or other validators that can give the same security results Rust has, but
with C or even C++?

~~~
pornel
There is Cyclone, Checked C, Deputy. Such "C-but-weird" languages have an
"uncanny valley" problem:

• "C, but safer" on its own is not very enticing. With no other benefits, it's
easy not to switch, and instead promise to try harder writing standard C
safely, or settle for analysis tools or sandboxes.

• People who use C often have backwards compatibility constraints. Switching
to another compiler and a dialect that isn't standard C is a tough sell. You
can still find C programmers who think adopting C99 is too radical.

• Programming patterns commonly used in C (rich in pointer juggling, casts,
and textual macros) are inherently risky, but if you try to replace them (e.g.
with generics, iterators), it stops looking like C anyway.

So "safer C" is unattractive to people who are tied to C implementations or
don't want to learn a new language.

But people who do want to learn a new language and use modern tooling don't
want all the unfixable legacy baggage of C.

Rust dodges these problems by not being a "weird C". It's a clean design, with
enough features to be attractive on its own, and safety is just a cherry on
top.
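To make the third bullet concrete, here is a small before/after sketch (the function and inputs are invented for illustration): the C-style pointer-juggling loop appears as a comment, and the Rust replacement uses an iterator chain, at which point it no longer looks like C at all:

```rust
// C idiom, full of raw pointer arithmetic:
//   for (p = buf; *p; p++) if (*p == ',') n++;
// The Rust replacement has no raw pointers in sight:
fn count_commas(s: &str) -> usize {
    s.bytes().filter(|&b| b == b',').count()
}

fn main() {
    println!("{}", count_commas("a,b,c")); // prints 2
}
```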

~~~
Nuoji
Cyclone was not fully realized though.

And there are languages that try to keep to C and add some minor safety
improvements, e.g. my language C3 (subarrays/slices, contracts, runtime
checks for all UB in debug builds, and more).

------
seanhunter
Not an expert, but if you want a stable Rust-to-non-Rust ABI, you can use the
C ABI, as the article mentions. If you want a stable Rust-to-Rust ABI for
FFI, there's a crate for that:
[https://crates.io/crates/abi_stable](https://crates.io/crates/abi_stable)
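A minimal sketch of the first option, exporting a Rust function over the C ABI (the function name here is made up for illustration):

```rust
// #[no_mangle] keeps the symbol name unmangled and stable, and
// extern "C" selects the platform's C calling convention, so any
// language with C FFI can call this function.
#[no_mangle]
pub extern "C" fn mylib_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // From C the declaration would be:
    //   int32_t mylib_add(int32_t a, int32_t b);
    println!("{}", mylib_add(2, 3)); // prints 5
}
```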

It seems somewhat unrealistic to expect a really new language to commit,
across the board, to the same sort of ABI stability as a decades-old language
such as C.

------
zaro
With everything slowly (or sometimes rapidly) moving into containers (Docker,
systemd portable services, Flatpak, snaps), I think the concept of a system
library will probably become irrelevant at some point not that far into the
future.

~~~
arp242
I don't think we're anywhere near a future where "ls" will be run in a
container.

~~~
krageon
I used to think we were nowhere near a future where most applications would
come bundled with an entire browser, yet here we are.

------
amluto
> While C++ had the problem of "lots of template code in header files", Rust
> has the problem that monomorphization of generics creates a lot of compiled
> code. There are tricks to avoid this and they are all the decision of the
> library/crate author.

Is there any research on having compilers do some of these tricks
automatically? A compiler should, at least in principle, be able to tell what
aspects of a type parameter are used in a given piece of code. Such a compiler
could plausibly produce code that is partially or fully type-erased
automatically without losing efficiency. In some cases, I would believe that
code size, runtime performance (due to improved cache behavior), and compile
times would all improve.
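For context, a sketch of the two compilation strategies in question (trait and type names invented for illustration): rustc emits a fresh copy of the generic function for every concrete type, while the `dyn` version is compiled once and dispatches through a vtable — the kind of type erasure a compiler could, in principle, apply automatically:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square { side: f64 }
impl Shape for Square {
    fn area(&self) -> f64 { self.side * self.side }
}

struct Circle { radius: f64 }
impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}

// Monomorphized: the compiler emits one copy of this function's machine
// code per concrete T it is instantiated with (Square, Circle, ...).
fn area_generic<T: Shape>(shape: &T) -> f64 {
    shape.area()
}

// Type-erased: a single copy; the call dispatches through the vtable
// carried by the &dyn fat pointer, trading an indirect call for less code.
fn area_erased(shape: &dyn Shape) -> f64 {
    shape.area()
}

fn main() {
    println!("{}", area_generic(&Square { side: 3.0 }));  // prints 9
    println!("{}", area_erased(&Circle { radius: 1.0 })); // prints pi
}
```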

~~~
sanxiyn
Implementing type erasure for Rust compiler is a research problem. In
principle it ought to be possible, but I am not aware of any prior work.

------
ckok
In a way I'm happy Rust does not have a stable ABI. Swift does, but the
stability is "whatever Apple's Swift emits". There is very little
documentation, and what's there is out of date, so the only practical
language that can interact with Swift is Swift. To interact from another
language, one would have to parse Swift and make the semantics of all types
and generics match exactly, just to do the simplest things. (For example,
Array and String, two core types, are chock full of generics and protocols.)

I'd hate to have the same happen for Rust.

------
dthul
I didn't know that it was possible to export Rust enums with a C ABI like
that, that's nifty!
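For readers who skipped the article, the trick is roughly this (a sketch with invented names, not the article's actual code): `#[repr(C)]` on a fieldless enum gives it the layout of a C enum, so it can be passed by value across an `extern "C"` boundary:

```rust
// A fieldless enum with #[repr(C)] has C-compatible layout, so it is
// safe to pass by value over the C ABI.
#[repr(C)]
#[derive(Clone, Copy, PartialEq)]
pub enum LoadStatus {
    Ok = 0,
    ParseError = 1,
    IoError = 2,
}

// From C this looks like: bool load_status_is_ok(LoadStatus s);
#[no_mangle]
pub extern "C" fn load_status_is_ok(status: LoadStatus) -> bool {
    matches!(status, LoadStatus::Ok)
}

fn main() {
    println!("{}", load_status_is_ok(LoadStatus::Ok)); // prints true
}
```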

------
choeger
It might simply be the time to invent a more modern ABI.

If for some reason modern languages insist on monomorphization, we should be
able to design an ABI that suits that need.

Rough sketch:

A modern shared library is code that generates code (very much like a dynamic
linker is actually code that links code).

The interface of the library would consist of:

a) a description language for the shape of data types (not types themselves,
mind you)

b) a list of generators for functions

c) a list of functions applied to shapes, as requirements

The job of the modern dynamic linker would then be to invoke all the
generators on the necessary shapes, put the resulting code into memory, and
link it. It might be useful to support this with some kind of caching
mechanism.

------
mixmastamyk
There's a lot of either/or false dichotomy being discussed in here. Distro
packaging (or not) comes with various tradeoffs of course, some good or bad
depending on perspective.

To get to the point, I quite prefer the package-manager way of installing
software to the "hunt down a single release" app installers of Windows/Mac.
The only issue is that sometimes software is a bit out of date. That's what
the snap/flatpak/AppImage projects are trying to solve.

As soon as one of those three gets its user-hostile issues fixed, it will be
a software paradise. :D

------
georgyo
The optimized and stripped library in Rust was about 8 times the size of the
C version. While 9MB is not a lot by itself, if a significant portion of
libraries decide they want to switch to Rust, that would explode disk usage!

Though I think my problem with Rust is that they make breaking changes in
their compiler and spec every release. People regularly build on Rust
unstable to get features not yet released. This all makes things complicated
for a distro.

But the points about making breaking changes at the bottom resonated with me.
Stability is what has allowed the Linux ecosystem to grow so well: many
interconnected parts all moving in unison. Not being able to fix a bad design
decision because of this does suck. Still, having everything work is, sadly,
more important than perfect design.

~~~
the_duke
> Though I think my problem with rust is that they make breaking changes in
> their compiler and spec every release

That's not accurate.

Rust is backwards compatible. Most 1.0 code would still compile perfectly
fine today. There have been some minor breaking changes for soundness
reasons, if I remember correctly. There has also been one edition upgrade
(the 2018 edition), but every new compiler still supports the old edition,
and most new stuff actually works in the old edition as well.

There are experimental features, which are only available on the nightly
compiler and have to be opted in to with a "#![feature(...)]" attribute. But
those are very clearly labelled as experimental, unstable, and evolving, with
no stability guarantees whatsoever.

The criticism I would share is that while Rust is backwards compatible, it is
obviously not forwards compatible. Rust has evolved dramatically since 1.0,
and many developers jump on new features once available. So compiling
actively maintained projects with an older compiler, in distros like Debian,
is not fun.

But the rate of change has slowed down _a lot_ over the past year or so.

~~~
clktmr
I think the concerns are more towards the ecosystem and community, which are
very curiosity-driven. I tried Rust about one year ago and _nothing_ would
compile without the nightly toolchain. After reading the Rust book there was
this realization that I now have to go and read all the new language
proposals, because everybody is using them already. This and the excessive use
of metaprogramming by some users drove me away from the language.

~~~
zozbot234
You might want to try again, one year is a _very_ long time in rust land.
We've seen some very high profile stabilizations quite recently.

~~~
clktmr
> one year is a very long time in rust land

That's the problem and stabilization doesn't solve it. Only rejecting language
changes will.

~~~
nindalf
> nothing would compile without the nightly toolchain.

This problem is solved though. Almost all nightly users shifted to stable
after async-await was stabilized. The last holdout (IIRC) was Rocket, which
also compiles on stable now. I can't think of any popular libraries or
frameworks that require nightly. I can't think of any popular feature that
people would want to use nightly for.

I think you could learn Rust now, write your code and then not worry about any
new features that are added, ever. Your code will work without breaking. Keep
upgrading the compiler every 6 weeks and your code still won't break. That's a
guarantee.

~~~
ezekiel68
Rocket should _soon_ not need nightly, but the documentation for currently
available releases from GitHub states that it still does[1]. (The compiler is
ready, but the code/Cargo.toml hasn't yet caught up.)

[1] [https://rocket.rs/v0.4/guide/getting-started/#installing-rust](https://rocket.rs/v0.4/guide/getting-started/#installing-rust)

------
tannernelson
I really don’t see the problem with just statically linking everything.

------
Ijumfs
Why are people even using Rust and Go, aside from employer say-so? They're not
formally defined, there aren't multiple functioning implementations, it's just
not a good idea.

~~~
sanxiyn
Go has a supported alternative implementation, gccgo.

------
fluffy87
Two issues with the post:

\- An ABI is not a PL feature, but a platform feature. I.e., it is not that
Rust does not have a stable ABI, but that e.g. Linux does not have a stable
ABI for Rust (it has one for C, and you can use this ABI from Rust).

\- You can export generic Rust APIs with a stable C ABI by using trait
objects, and it is often very easy to do this. So the claim that Rust and C++
are in the same boat wrt generics/instantiations is not true.
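A sketch of that pattern (all names invented for illustration): hide the generic machinery behind a trait object, and hand the C side an opaque thin pointer plus a handful of `extern "C"` functions:

```rust
// A generic-ish API: any type implementing Sink can sit behind the
// handle, but the C side only ever sees an opaque pointer.
trait Sink {
    fn push(&mut self, byte: u8);
    fn total(&self) -> u64;
}

struct SumSink { acc: u64 }
impl Sink for SumSink {
    fn push(&mut self, byte: u8) { self.acc += byte as u64; }
    fn total(&self) -> u64 { self.acc }
}

// Box<dyn Sink> is a fat pointer (data + vtable), so we box it once
// more to get a thin, FFI-safe pointer to hand across the C ABI.
#[no_mangle]
pub extern "C" fn sink_new() -> *mut Box<dyn Sink> {
    Box::into_raw(Box::new(Box::new(SumSink { acc: 0 }) as Box<dyn Sink>))
}

#[no_mangle]
pub extern "C" fn sink_push(sink: *mut Box<dyn Sink>, byte: u8) {
    unsafe { (*sink).push(byte) }
}

#[no_mangle]
pub extern "C" fn sink_total(sink: *const Box<dyn Sink>) -> u64 {
    unsafe { (*sink).total() }
}

#[no_mangle]
pub extern "C" fn sink_free(sink: *mut Box<dyn Sink>) {
    unsafe { drop(Box::from_raw(sink)) }
}

fn main() {
    let s = sink_new();
    sink_push(s, 40);
    sink_push(s, 2);
    println!("{}", sink_total(s)); // prints 42
    sink_free(s);
}
```

The key point: the `extern "C"` surface stays fixed no matter which concrete `Sink` implementation sits behind it, so generic Rust internals never leak into the ABI.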

~~~
moonchild
> An ABI is not a PL feature, but a platform feature, I.e., it is not that
> Rust does not have a stable ABI, but that eg Linux does not have a stable
> ABI for Rust (it has one for C and you can use this ABI from Rust).

This is somewhat of a disingenuous thing to say, because it implies that the
fault lies with Linux for not providing a stable ABI to Rust. An ABI comprises
various conventions wrt calling convention, name mangling, data layout, etc.;
these are provided variably by the operating system, language specification,
and language compiler. And, as TFA mentions, the rust compiler explicitly does
not provide a stable ABI.

~~~
fluffy87
I don’t agree.

The C ABI is specified in a spec that Linux adopts (e.g. the x86-psABI), and
it is what allows all software using this ABI (from assembly to C to Rust)
to, e.g., interface with each other. Linux could write an ABI spec for Rust
on its platform today and add a patch to the Rust compiler (or to a C++
compiler) to adhere to this ABI.

Nobody has done this, and from many points of view it does not make much
sense to do, but it's up to the platform to specify how binary software
communicates. Linux only specifies this for C, and that's what Rust software
currently does and has to use on Linux.

~~~
bregma
In GNU/Linux it's the GNU part that supplies the C ABI. The GNU compiler
collection provides it, and the GNU libc uses it to talk with the Linux
kernel, usually through traps appropriate to the CPU. The LLVM toolchain (and
other alternatives, like ICC) conform to this de facto standard. The GNU devs
designed their ABI long before Linux came along; it is mostly inherited from
even older OSes like BSD and SVR4, and was developed, refined, and adapted
over decades by a common community of interests.

If you're going to criticise GNU/Linux for not providing a Rust ABI, make sure
you're aiming at the GNU part. The Linux part doesn't care.

~~~
SAI_Peregrinus
At the same time, the C ABI is not part of the C standard. It's a GNU thing
(or a Microsoft thing for Windows, or an Apple thing for Mac, or...). While
they require the compiler to provide a stable ABI, there's nothing preventing
C2x (or future versions) from breaking existing C ABIs. It's not really right
to talk about "the" C ABI; rather, there's the x86 GCC ABI and the x86_64 GCC
ABI and the x86 MSVC ABI and the MIPS GCC ABI and the...

That's why compilers have target "triples" (now more than 3 items): <CPU
architecture><subarchitecture>-<vendor>-<os/system>-<abi>. So you might have
ARMv7m-st-none-eabi for some embedded STM32 bare-metal code and
x86_64-pc-linux-gnu for Linux. All C, all different ABIs.

------
PudgePacket
Off-topic, but it seems the quotes around the title don't render on the front
page, though they do render on this page? It flips the entire tone of the
article.

Initially, on reading the title, I just eye-rolled, but clicking through and
seeing that it was a response to that quote (and actually in quote marks)
made much more sense!

~~~
cube00
I see quotes on both pages on Firefox 79 on Windows

~~~
_Microft
The submitter might have edited the title.

HN fixes a number of things in titles on submission but allows you to undo
these automatic changes manually.

------
temac
"Why do distros expect all the living organisms on your machine to share The
World's Single Lungs Service, and The World's Single Stomach Service, and The
World's Single Liver Service?"

This has been debated for years, and part of the answer is right above. Also,
software is not a collection of biological organisms, and the local variables
are not shared, so WTF anyway. The analogy makes no sense. Everybody is
already neatly separated.

Proponents of everything-static have yet to show non-toy / very specialized
systems where everything is actually static.

Let's avoid the strawman anyway. In this case, yes, some static linking can
have its uses, especially for some small utility / metaprogramming / etc.
packages, although it has and will always have drawbacks too, especially for
higher-level feature support (e.g. a codec). You have to go into the
specifics to understand which is more important depending on the context.
Probably a mix is needed.

For a Linux distro, I suspect some people will go crazy if the fix for a
security vuln in a small piece of code ends up downloading hundreds of MB,
but maybe there are advantages so great that this is something we can live
with. The net performance impact is extremely hard to predict and measure.
You will duplicate tons of code, but arguably e.g. the cache overhead might
not be extremely bad; we now have tons of memory, so maybe we can waste some,
etc.

Note however that if a Linux distro is competing with other kinds of
platforms, there is the risk of putting the Linux distro at a disadvantage if
static vs. dynamic (maybe on a package-per-package basis) is chosen
improperly, because other platforms make the distinction between platform and
application, their platform typically provides a very large API, and they
won't go the insane way and switch to static.

The lack of a proper dynamic linkage story for Rust is a problem that needs
to be fixed to enable some kinds of usage. It's not something that can always
be worked around (sometimes it can, and for some crates you really want
static to begin with anyway).

~~~
rollcat
> For a Linux distro, I suspect some people will go crazy if the fix for a
> security vuln of a small piece of code ends up downloading hundreds of MB
> [...].

I always wondered about this problem; you could distribute the .o/.a's the
same way you currently distribute the .so's, and integrate the linker with the
package manager. This theoretically seems to share most of the benefits of
both static and dynamic linking: push complexity away from the kernel/dynamic
loader, smaller updates = easier patching (compared to fully static binaries),
etc. And it works for closed source.

OpenBSD does something similar already for libc and kernel (for boot-time
address layout randomisation) and it works great.

~~~
phs2501
Except that this shares all the same ABI issues as shared libraries. If they
wouldn't link at runtime, they won't link at package-install time either.

