
Thoughts on Rust bloat - xenocratus
https://raphlinus.github.io/rust/2019/08/21/rust-bloat.html
======
ajxs
At the risk of being slightly tangential, I've been sorely wanting to air this
particular grievance with Rust for some time. It's somewhat related, since the
author mentions their package system. Its package ecosystem isn't nearly in
the horrible state that node's is, but having a package system shouldn't be a
substitute for designing a useful standard library for a language. I think
that the attraction to 'small languages' is very much misplaced. If I can't
get through Rust's official documentation without being recommended the use of
third party packages for basic functionality (getopt, interfacing with static
libraries, etc.), then the designers have made a terrible error.

~~~
burntsushi
Opinions on this are a dime a dozen. You often see the reverse of it too, for
example, you might have heard that "Python's standard library is where things
go to die." You could just as easily call that a "terrible error." The fact
that Python's standard library has an HTTP client in it, for example, doesn't
stop everyone from using requests (and, consequently, urllib3) for all their
HTTP client needs. So despite the fact that the standard library provides a lot of
the same _functionality_ as a third party dependency, folks are _still_ using
the third party dependency.

I think the size of the standard library is just one of possibly many
contributing factors that leads to a large number of dependencies. I think a
part of it is culture, but another part of it is that the tooling _enables_
it. It's so incredibly easy to write some code, push it to crates.io and let
everyone else use it. That's generally a good thing, but it winds up creating
this spiral where there's almost no backpressure _against_ including a
dependency in a project. This means there's very little standing in the way of
letting the fullest expression of DRY run wild. There are some notable
examples in the NPM ecosystem where it reaches ridiculous levels. But putting
the extremes aside, there's a ton of grey area and it can be pretty difficult
to convince someone to write a bit more code when something else might work
off the shelf. (And I mean this in the most charitable way possible. I find
myself in that situation.)

I do hope we can turn the Rust ecosystem around and stop regularly having
dependency trees with hundreds of crates, but it's going to be a long and
difficult road. For example, not everyone even agrees with my perspective that
this is actually a bad thing.

~~~
mantap
Python's standard library is where things go to die because of the terrible
ad-hoc versioning system (the module name _is_ the version number), and
because dynamic typing means they are afraid to change anything. But even then
it's still better than having no standard library at all.

The advantage of a standard library is that you only need to learn one API
instead of a dozen different APIs for doing the same thing, which means you
can develop a degree of mastery over it. It also reduces the friction for
using better abstractions. For example, every professional Python programmer
knows defaultdict, whereas I rarely see that data structure used in other
programming languages: it's too much of a leap to install a dependency just to
save a few if statements, but it all adds up.
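
For comparison, Rust's standard HashMap covers the same ground through its
entry API, so the defaultdict pattern doesn't need a dependency there either.
A minimal sketch (the word-counting function is an invented example):

```rust
use std::collections::HashMap;

// Count word occurrences. In Python you'd reach for
// collections.defaultdict(int); in Rust the entry API fills the same
// role without any extra type or dependency.
fn word_counts<'a>(words: &[&'a str]) -> HashMap<&'a str, u32> {
    let mut counts = HashMap::new();
    for w in words {
        // `or_insert(0)` supplies the default, like defaultdict's factory.
        *counts.entry(*w).or_insert(0) += 1;
    }
    counts
}

fn main() {
    println!("{:?}", word_counts(&["a", "b", "a"]));
}
```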

~~~
swsieber
> The advantage of a standard library is that you only need to learn one API
> instead of a dozen different APIs for doing the same thing, which means you
> can develop a degree of mastery over it.

The Rust ecosystem has done well to converge on certain crates as a sort of
replacement for missing std features.

In practice (at least in the rust ecosystem), I only need to learn one
interface for:

* regex (regex)

* serialization (serde)

* network requests (request)

There are de-facto base crates in the ecosystem.
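
For instance, pulling in two of those de-facto crates is a single Cargo.toml
stanza (version numbers illustrative):

```toml
[dependencies]
regex = "1"
serde = { version = "1", features = ["derive"] }
```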

~~~
kd5bjo
As a relative outsider, it’s not obvious at all that these are the right
crates to choose. I appreciate the commitment to long-term stability that the
standard library appears to have, but that benefit goes out the window if I
accidentally rely on a third-party crate that changes its API every six
months.

Looking at crates.io, regex looks pretty safe, as it’s authored by “The Rust
Project Developers” and includes explicit future compatibility policies.
Unfortunately, I can’t find an index of only the crates maintained by the Rust
team.

Serde is obviously popular, but at first glance is a giant Swiss Army knife
that will likely have lots of updates to keep track of that are completely
unrelated to my project (whatever it is). If I search for JSON, I get an exact
match result of the json crate, followed by a bunch of serde-adjacent crates,
but not serde itself.

Request hasn’t been updated in 4 years, and has a total of less than 7000
downloads.

~~~
Freak_NL
That sounds like something that could be solved by having crates.io provide a
curated list of common popular crates for certain features. That is, this
seems to be mostly a documentation issue.

~~~
kd5bjo
It’s really a reputation bootstrapping problem, for which popularity can be a
useful proxy. For me to use third-party code, I have to trust that the future
behavior of the developers will be reasonable: I want my side projects that
don’t get touched for months or years to still mostly work when I get back
around to them.

Not everyone or every project will have the same desires, though. Sometimes, a
fast-moving experimental library is the right choice. The trouble is figuring
out which I’m looking at.

~~~
xenocratus
I'm not sure I follow these concerns about "working in the future" - as long
as you specify versions that work for you in your Cargo.toml file, that should
work at any point in the future, given that you use Rust 1.x.

If you want to always be on the latest version of each crate, then the
discomfort of things potentially breaking is part of the price.
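
To make that concrete: a bare version requirement in Cargo.toml is a caret
requirement, and you can pin harder if you want (crate versions illustrative).
Either way, Cargo.lock records the exact versions that were resolved, so an
old build should resolve identically later.

```toml
[dependencies]
regex = "1.3"      # caret requirement: any >=1.3.0, <2.0.0
serde = "=1.0.99"  # pinned to an exact version
```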

~~~
kd5bjo
If I come back to something, it’s because I want to resume active development.
Keeping a dependency pinned at an old version makes that more difficult in
various ways, so I personally value forward compatibility.

Not everyone does, and that’s fine. I just want to know what a library
developer’s stance on it is before I try to use their library.

------
Qasaur
_Use polymorphism sparingly_

I think it is a little ironic that he speaks of performance culture but
simultaneously advises using dynamic dispatch and avoiding polymorphism. I can
see the justification in non-critical code paths, but serialisation is a
pretty important part of most networked software nowadays so I do not think
that smaller binaries and faster compilation times (better developer
experience) justifies a performance hit in the form of dynamic dispatch
through crates like miniserde.

~~~
raphlinus
Performance culture has you measure the actual performance implications, then
make an informed decision. Is the code on a performance-critical path? Maybe
some of your serialization code is, but it's extremely unlikely that a dynamic
dispatch when parsing command line args is the reason your app is slow. Also
be aware that highly inlined code does nicely in microbenchmarks but might
have significantly negative performance implications in a larger system when
it blows out the I-cache.

~~~
pcwalton
> Also be aware that highly inlined code does nicely in microbenchmarks but
> might have significantly negative performance implications in a larger
> system when it blows out the I-cache.

I see this assertion a lot, but I have never actually seen a system in which
inlining that would otherwise be a win in terms of performance becomes a loss
in a large system. LLVM developers seem to agree, because LLVM is quite
aggressive in inlining (the joke is that LLVM's inlining heuristic is "yes").

I'd be curious to see any examples of I$ effects from inlining specifically
mattering in practice in large systems.

~~~
Jasper_
Fiora refactored the MMU code emitted by Dolphin to a far jump, which had
significant performance improvements over inlining the code [0]. She had an
article about it in PoC || GTFO [1].

[0] [https://dolphin-emu.org/blog/2014/09/30/dolphin-progress-rep...](https://dolphin-emu.org/blog/2014/09/30/dolphin-progress-report-september-2014/#40-3194-fioras-fantastic-faster-mmu-by-fiora)

[1]
[https://github.com/angea/pocorgtfo/blob/master/contents/issu...](https://github.com/angea/pocorgtfo/blob/master/contents/issue06.pdf#page=6)

~~~
pcwalton
Interesting, that's a good case. Though it's a bit of an extreme one, because
it's jitcode for a CPU emulator. I'm not sure how relevant that is to Rust,
though it's certainly worth keeping in mind.

~~~
Jasper_
In my experience, i$ is much bigger than everyone thinks, and people
over-emphasize optimizing for it whenever someone brings up code size. It can
soak up a lot. That said, for JITs, where code is not accessed very often and
in weird patterns, it can matter quite a lot.

~~~
jcelerier
Hm, I've run a lot of profiling of various software through the years, and
never once have instruction cache misses been a problem, even in large,
template-rich, Boost-heavy C++ codebases.

------
pornel
It looks like the author is most bothered by compile times of dependencies.

Cargo needs to do better with shared caches (so you compile each dep at most
once per machine) or ability to get precompiled crates (so you don't even
compile it).

Incremental improvements of compiler speed or trimming of individual
dependencies won't bring the 10x improvement it needs.
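
One coarse approximation of a shared cache that works today is pointing every
project at a single target directory via Cargo's user-level config (a sketch;
the path is illustrative, and reuse still depends on matching compiler
versions and flags):

```toml
# ~/.cargo/config.toml
[build]
target-dir = "/home/user/.cache/cargo-target"
```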

~~~
Scuds
Cargo compiles dependencies in parallel so if you have lots of cores you'll
hear your fans spin up.

I see a lot of difference between my i5 laptop and i9 desktop.

~~~
Macha
As a case in point, I have a project that uses amethyst and nalgebra (and
their 200 transitive dependencies).

On upgrading from an OC i5-4670k to a r9-3900x, my compile times for a clean
release build went from 20 minutes to 2.

~~~
lasagnaphil
But two minutes is still a lot of time to wait for a build, especially if
you’re doing gamedev and want to prototype something fast.

It seems nalgebra is the culprit here: because Rust doesn’t yet support const
generics, it has to use some hacky type-level metaprogramming to represent
numbers, and that will definitely destroy build times.

~~~
dpc_pw
2 minutes for a full build from scratch. Incremental builds afterwards take
seconds, though unfortunately linking of big projects can still sometimes take
up to around a minute.

~~~
pjmlp
My toy project that I ported from a Gtkmm article done in the days of "The
C/C++ Users Journal" takes around 25 minutes to build from scratch on an Asus
1215B netbook (dual core, 8GB, HDD).

The original code, after being migrated to an up-to-date version of Gtkmm,
takes a couple of seconds with GCC 7, a minute at most.

The big difference? I don't need to compile from scratch all the 3rd party
dependencies.

With every release of Rust I do a clean build to assess how much it has
improved.

It was much worse, so congrats on the work achieved thus far, but it is still
a pain to set up a project from scratch.

------
svnpenn
Rust bloat is a serious issue that I think is not being taken seriously. I
raised the issue about platform size in June:

[https://github.com/rust-lang/rust/issues/61978](https://github.com/rust-lang/rust/issues/61978)

and it's actually gotten worse since then, significantly worse. In the 2
months since then the installer has increased from 203 MB to 299 MB. Also,
unbelievably, Rust has failed to address package balkanization, which I would
say has ruined the Node community. A popular package is "cargo-edit", which
currently pulls in 239 other crates:

[https://github.com/rust-lang/cargo/issues/2179#issuecomment-...](https://github.com/rust-lang/cargo/issues/2179#issuecomment-523567384)

~~~
Qasaur
_Also unbelievably, Rust has failed to address package balkanization which I
would say has ruined the Node community._

I'd say that this is more of a culture thing rather than a language thing.

~~~
thesuperbigfrog
Whether culture or language, if it is not addressed then expect similar
results :(

~~~
dom96
The primary way I can see to address this is to make packaging more difficult.
Are there other steps that a language can take to avoid this problem?

~~~
thesuperbigfrog
A few ideas: 1) Remove the ability to unpublish / yank crates. A published
crate should be immutable, but the crate's metadata should always be
updateable by the maintainer.

2) Improve the metadata that describes a crate so that it is easy to tell if a
crate should be used. For example, is the crate beta quality? Was a serious
error found in the crate and it needs to be marked as "not safe"? Is it a
Long-Term Support release? Etc.

3) As a culture, disallow trivial crates. No "is-odd" or similarly low-effort
crates. These just add bloat, since they have so little functionality compared
to their overhead. If your crate's toml is larger than the crate's code, you
are doing it wrong.

~~~
chc
Ironically, "no trivial crates" is almost exactly the opposite of what the
article seems to want, which is only small crates so you're not importing lots
of needless bloat. It's hard to please everyone!

~~~
raphlinus
I talk about this a bit: I'm in favor of at least medium-granularity crates,
but if they break down into smaller features, where different use cases will
meaningfully choose different sets of features, use feature gates. So, for
example, you might have a "string formatting utilities" crate with a
"left-pad" feature. (Note: this particular example is unlikely because the
`format!` macro in the standard library can do it just fine)
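
Sticking with that hypothetical string-utilities crate, the producer side of
feature gating looks something like this (crate and feature names invented for
illustration):

```toml
# Cargo.toml of a hypothetical string-utils crate
[dependencies]
unicode-segmentation = { version = "1", optional = true }

[features]
default = []
left-pad = []                       # gates code behind #[cfg(feature = "left-pad")]
unicode = ["unicode-segmentation"]  # enables the optional dependency
```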

------
weinzierl
> There’s also an effort to analyze binary sizes more systematically. I
> applaud such efforts and would love it if they were even more visible.
> Ideally, crates.io would include some kind of bloat report along with its
> other metadata, [...]

This is what I always wanted (for Rust as well as for C) but never got around
to hacking together myself. I dreamt it up more as a feature of cargo though,
something like 'cargo stats' or so. Shouldn't be too hard, and cargo is
extensible.

~~~
sitkack
You might be looking for `cargo install cargo-bloat`

    
    
        $ cargo bloat
        Compiling ...
        Analyzing target/debug/mdbook
    
        File  .text     Size                 Crate Name
        0.5%   1.3% 166.4KiB                 regex <regex::exec::ExecNoSync as regex::re_trait::RegularExpression>::captures_read_at
        0.5%   1.3% 165.2KiB                  idna unicode_normalization::tables::compatibility_fully_decomposed
        0.3%   0.7%  95.0KiB               ammonia html5ever::tree_builder::TreeBuilder<Handle,Sink>::step
        0.3%   0.7%  92.5KiB                  idna unicode_normalization::tables::canonical_fully_decomposed
        0.2%   0.5%  64.6KiB               unicase unicase::unicode::map::lookup
        0.2%   0.5%  63.4KiB                  idna unicode_normalization::tables::is_combining_mark
        0.2%   0.4%  55.8KiB                 regex <regex::re_trait::Matches<R> as core::iter::traits::iterator::Iterator>::next
        0.2%   0.4%  55.5KiB                 regex regex::re_unicode::Regex::find_at
        0.1%   0.3%  41.7KiB unicode_normalization unicode_normalization::tables::composition_table
        0.1%   0.3%  36.2KiB               rand_hc rand_hc::hc128::Hc128Core::sixteen_steps
        0.1%   0.3%  33.8KiB                 regex regex::re_unicode::Regex::shortest_match_at
        0.1%   0.3%  32.4KiB               rand_hc <rand_hc::hc128::Hc128Core as rand_core::block::BlockRngCore>::generate
        0.1%   0.2%  31.4KiB               ammonia html5ever::tokenizer::Tokenizer<Sink>::step
        0.1%   0.2%  24.5KiB                  idna unicode_normalization::tables::canonical_combining_class
        0.1%   0.2%  21.8KiB                 regex aho_corasick::ahocorasick::AhoCorasick<S>::find
        0.1%   0.2%  21.6KiB                 regex aho_corasick::ahocorasick::AhoCorasick<S>::find
        0.1%   0.2%  21.0KiB                  clap clap::app::parser::Parser::get_matches_with
        0.1%   0.2%  20.3KiB                    ws ws::io::Handler<F>::handle_queue
        0.1%   0.1%  19.0KiB                    ws ws::connection::Connection<H>::read_frames
        0.1%   0.1%  18.7KiB            env_logger termcolor::Ansi<W>::write_color
        35.3%  91.6%  11.4MiB                       And 58189 smaller methods. Use -n N to show more.
        38.6% 100.0%  12.5MiB                       .text section size, the file size is 32.4MiB

~~~
weinzierl
Awesome.

~~~
sitkack
It also has a wealth of options for tracking compilation time down to the
dependent crate level.

------
neonate
In case anyone else was wondering what "druid" is, it's
[https://github.com/xi-editor/druid](https://github.com/xi-editor/druid).

------
mapgrep
>the release binary is now 5.9M

A typical smartphone ships with around 10,000 times this much storage capacity
and enough RAM to hold it 100 times over.

This is bloat?

I mean, I get it, the binary used to be only 2MB, 1/3rd the size. But are
numbers this low really worth worrying about? I think a GUI app in 6MB is
hugely impressive.

I genuinely thought he was going to say it was 100MB or something higher.

~~~
userbinator
_I think a GUI app in 6MB is hugely impressive._

Windows 3.11 required 4MB of RAM and the whole install took <20MB of disk
space, and that's _the entire OS_ with all of its utilities and libraries.

 _A typical smartphone ships with around 10,000 times this much storage
capacity and enough RAM to hold it 100 times over._

The fact that it can hold that much does not make it right to waste resources.
By contrast, video or audio is generally a good use of the space it takes up,
because there has been and continues to be research in compressing that data,
and it's pretty close to being as small as it can practically be. Apps are not
a good use of space because we know roughly what the lower limit is --- and
the current average is a few orders of magnitude more than that.

~~~
orf
> Windows 3.11 required 4MB of RAM and the whole install took <20MB of disk
> space, and that's the entire OS with all of its utilities and libraries.

Sure, and the moon landing used computers with less processing power than
your kid's calculator. That doesn't mean we should use those, rather than
faster hardware, to put people on the moon.

Does the fact that older, slower and smaller hardware and software once
existed mean we should spend time, resources and potentially sacrifice
features to... what? Hark back to the old days where we had 128kb of memory
and hard disks the size of vinyl records?

A 6MB GUI app _is_ impressive for right now. At some point in time it would
have been absolutely massive, and the way it's going, at some point in the
future it may well be absolutely minuscule. And that's not a bad thing.

~~~
adrianN
If today's computers did things a million times better, or did a million more
things than 25 years ago, I'd agree with you, but from a user perspective a
modern computer is not really all that different from a Windows 3.11 machine.
The screens are bigger and we have Internet now, but the experience of, e.g.,
writing a letter in Word is basically the same.

~~~
floatboth
The screens alone are responsible for a lot of size increases (framebuffers in
RAM, high res media) but also, unlike in Windows 3.11, modern Word allows you
to mix English, Japanese and Arabic in a document, allows use of a screen
reader, and has a thousand features that you personally don't need but
everyone has some set of features that they use, and taking away any of them
would offend _someone_.

~~~
pjmlp
And me thinking I already was able to do that with Word on NT 4.0 and Dragon
NaturallySpeaking.

------
nerdponx
It's a little unfortunate that some of the cool features of Rust need to be
put into the "use sparingly" category. I know that polymorphism, async, etc.
should be used judiciously anyway, but still.

 _One recent case we saw a similar tradeoff was the observation that the
unicase dep adds 50k to the binary size for pulldown-cmark. In this case, the
CommonMark spec demands Unicode case-folding, and without that, it’s no longer
complying with the standard. I understand the temptation to cut this corner,
but I think having versions out there that are not spec-compliant is a bad
thing, especially unfriendly to the majority of people in the world whose
native language is other than English._

In the Python world, you can install an "extra" along with a package. So you
can make the deliberate decision to omit Unicode case folding from your
CommonMark parser. Maybe something like that is possible with a crate?
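
Cargo's rough equivalent of a Python "extra" is crate features, which the
consumer turns on or off in Cargo.toml. Whether pulldown-cmark actually gates
case folding behind a feature is an assumption here; this just shows the
mechanism:

```toml
[dependencies]
pulldown-cmark = { version = "0.5", default-features = false }
```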

That said, I think this is a non-feature in the spec, if anything. I see the
value in recommending (but not requiring) Unicode normalization, but I don't
see the added value of Unicode-aware case-insensitivity. Maybe it's more
important in non-Latin text.

~~~
msbarnett
> It's a little unfortunate that some of the cool features of Rust need to be
> put into the "use sparingly" category.

This is just the reality of engineering -- there are no silver bullets.

Sure, it'd be nice if any language could give you the space efficiency of
dynamic dispatch with the runtime efficiency of monomorphized generics, but
those two things are _fundamentally in tension_. Neither Rust nor any other
language can fix that.

Rust at least gives you a fairly easy choice of which you want in any given
circumstance. Most languages just pick one or the other universally.
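
That choice is visible right in the function signatures. A minimal sketch of
the two dispatch styles (trait and types invented for illustration):

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);

impl Shape for Square {
    fn area(&self) -> f64 {
        self.0 * self.0
    }
}

// Monomorphized: a copy of this function is generated per concrete type,
// so calls are static (and inlinable) at the cost of code size.
fn area_generic<T: Shape>(s: &T) -> f64 {
    s.area()
}

// Dynamic dispatch: one copy of the function; calls go through a vtable.
fn area_dyn(s: &dyn Shape) -> f64 {
    s.area()
}

fn main() {
    let sq = Square(3.0);
    println!("{} {}", area_generic(&sq), area_dyn(&sq));
}
```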

~~~
heavenlyblue
One way to fix it is jitting.

I am wondering if Rust has a project for shipping the Rust compiler embedded
in your executable, so that one could compile sym into code.

~~~
msbarnett
> One way to fix it is jitting.

Also not a silver bullet, because there are non-zero costs in memory, CPU
overhead, I$, etc. from the statistics gathering, the
stop-and-jit-the-dynamic-call-and-change-the-call-sites machinery, and so
forth.

In long running server processes this can mostly amortize out nicely over a
very long run, but for interactive applications it can add noticeable lag.

------
geofft
> _For one, it’s common that you get different versions anyway (the Zola build
> currently has two versions each of unicase, parking_lot, parking_lot_core,
> crossbeam-deque, toml, derive_more, lock_api, scopeguard, and winapi)._

This seems like a specific thing that would be a measurable win and a not
uncommon problem (e.g., I have a test kernel module that doesn't have many
dependencies, and it still has two generic-arrays, two proc-macro2s, and two
unicode-xids). Is there something that could be done here technically, such as
pull requests to bump common crates to using the same version of even-more-
common dependencies?

~~~
MaulingMonkey
> generic-array

This is up to 0.13.2... plenty of supposedly breaking API churn, no wonder you
have two copies.

Yes, your dependencies probably rely on different 0.x versions, and a pull
request could fix that. You can inspect your Cargo.lock file to figure out
which ones are to blame.

> Is there something that could be done here technically, such as pull
> requests to bump common crates to using the same version of even-more-common
> dependencies?

Yep, that's the fix.

README.md badges like [https://deps.rs/repo/github/rust-lang/cargo](https://deps.rs/repo/github/rust-lang/cargo)
can alert people to out-of-date dependencies.

You can even automate some of the pull requests with something like
[https://github.com/marketplace/dependabot-preview](https://github.com/marketplace/dependabot-preview)
if you control the repositories.

------
home_project123
How many people would use a Rust cloud compiler?

Suppose it cuts build time from 5 minutes to 30 seconds....

Technically: a transparently mirrored file system, a strong compiler cluster
(memory, cores, etc.), and some predictive ML, with a binary-equivalent
(verifiable) output at the end.

Any thoughts?

~~~
Macha
I wouldn't use it. Incremental builds aren't that bad, and I'd have a hard
time trusting third party compiled libs enough to include in a release from a
new service that hasn't built up trust the way that say a Linux distro has.

While you could assume any maliciousness or security compromise would be
caught, as you can see from the rubygems news today this is not instant and it
adds another point of failure.

------
Animats
Do dynamic libraries really help? They only save space in the case where 1)
you are running many different programs and 2) they all use the same libraries
in the same versions. If you're running multiple copies of the same program,
they share code, at least on Linux.

~~~
raphlinus
Depends on the use case. I'm thinking about GUI, where on Linux you can write
code that links against (say) Gtk, and thus can pull in lots of libraries
while the executable itself is tiny. With the current state of the Rust
ecosystem, you basically have to build the GUI toolkit and bundle it with your
app.
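
For completeness, Cargo can ask rustc for a dynamic library via crate-type,
though with no stable Rust ABI the result is only shareable between artifacts
built by the same compiler version:

```toml
[lib]
crate-type = ["dylib", "rlib"]
```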

------
xpe
To respond to one particular point from the post:

> Once you accept bloat, it’s very hard to claw it back. If your project has
> multi-minute compiles, people won’t even notice a 10s regression in compile
> time. Then these pile up, and it gets harder and harder to motivate the work
> to reduce bloat, because each second gained in compile time becomes such a
> small fraction of the total.

This particular problem can be addressed head-on, I think. It would seem
feasible to have the compiler distinguish between the target application (or
library) and the dependencies. Then it could report the compile times as
separate values. This could be built into CI/CD tools as a way to catch
application-level compile-time changes.

Of course, this approach wouldn’t tell the entire story, but it would likely
serve as a canary in the coal mine at least.

------
leeoniya
> Digging into xi-editor, the biggest single source of bloat is serde

flatbuffers seems to be the go-to serialization lib, in terms of speed and mem
efficiency:

[https://docs.rs/flatbuffers/0.6.0/flatbuffers/](https://docs.rs/flatbuffers/0.6.0/flatbuffers/)

------
qaq
It highly depends on the use case. If I have a web application running on 100
nodes, 5 MB vs. even 10 MB has such a minuscule cost that it's not even worth
calculating. Meanwhile, the ops benefit of just having to push a single static
binary without external dependencies is fairly decent.

------
ameixaseca
Good points, but I'd like to point out that nothing in this post is
Rust-specific.

You can have the same issue if programming in any other language, including C:
excessive indirections, inefficient algorithms, bad abstractions, excessive
use of unnecessary libraries, etc.

It's indeed easier to "bloat" your resulting binary in C++ or Rust given how
easy it is to build higher-level abstractions; since you can more easily
program complex solutions, you also need to consider your design and the
trade-offs in your code.

I'd also like to point out that, in comparison:

* Rust bloat is a speck compared to hundreds of megabytes for a similar Python program + runtime including the same amount of code.

* Rust bloat can be mostly optimized away for systems that really care about excess/unused code - you have #![no_std], disabling of backtraces, aborting on panic, and a lot of other optimizations that throw away a big portion of extra functionality not needed for things like embedded. You have little alternative on things like Go besides removing debugging symbols and doing tricks like "dynamic decompression" (which could also be applied to Rust programs to further reduce their size, btw).

Bottom line is: Rust makes it easier for you to "just add a new library" and
it also makes you more mindful of the bloat, but we need to keep it in
perspective.
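
Several of the optimizations listed above are just release-profile switches in
Cargo.toml; a sketch of common size-trimming settings:

```toml
[profile.release]
panic = "abort"    # drop the unwinding machinery
lto = true         # let the linker discard more dead code
opt-level = "z"    # optimize for size rather than speed
codegen-units = 1  # better whole-crate optimization, slower builds
```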

------
ogoffart
It looks like the `fluent` library is to blame for the bloat here.

I have myself made another localization library that uses a simpler model and
compiles quite fast when using the Rust `gettext` backend:
[https://github.com/woboq/tr](https://github.com/woboq/tr)

------
je42
I don't understand the advice: "Use async sparingly"

It doesn't make sense. Either your complete code base is async or it is not.
If you have a single blocking call in it, it is not async anymore, since it
can't handle any other scheduled async tasks while waiting for the blocking
call.

~~~
raphlinus
Probably could have worded it better. I didn't mean, "only use a little bit of
async," I agree that doesn't make sense. I meant, "use async if your problem
really needs it, otherwise avoid it."

~~~
je42
The main thing is that you can't really add async when you need it: once you
have written enough sync code, turning it into async amounts to a major
rewrite/refactor, including different choices of dependencies, different
patterns for parallel code execution, etc.

------
kanishkarj
I'm new to Rust, but I do get your concern. For a small part of a project, I
used a tokio-based library, and it just dramatically increased my build time.

What if Rust could support something like dynamic linking/loading? That is,
have some crates globally installed, and at build time link to the global copy
instead of fetching all the crates locally, like C/C++ does. Right?

------
kissgyorgy
Serialization will always be slow for obvious reasons; that's why binary
protocols and messages started to emerge (HTTP/2, gRPC) instead of
serializing/deserializing everything to JSON and back.

~~~
floatboth
gRPC uses protobuf, which still does traditional serialization. The cool stuff
is capnp/flatbuffers, where you write to and read from the "serial" memory
directly.
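
A toy illustration of that zero-copy idea, using an invented two-field
little-endian layout (real flatbuffers/capnp access goes through generated
accessors and offset tables, not hand-written offsets like this):

```rust
use std::convert::TryInto;

// Read a field straight out of the wire bytes at a fixed offset, instead
// of deserializing the whole message into an owned struct first.
fn read_u32_at(buf: &[u8], offset: usize) -> u32 {
    let bytes: [u8; 4] = buf[offset..offset + 4].try_into().unwrap();
    u32::from_le_bytes(bytes)
}

fn main() {
    // "Message": two little-endian u32 fields back to back.
    let wire = [1u8, 0, 0, 0, 42, 0, 0, 0];
    println!("{} {}", read_u32_at(&wire, 0), read_u32_at(&wire, 4));
}
```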

------
carterschonwald
Seems like a lot of this comes from rust defaulting to allowing several
different versions of a library to be linked in ... there’s def some other
pieces. But that seems like a biggie

~~~
lwhsiao
Do you know how to not do this "default"? I would love to try it.

~~~
MaulingMonkey
So far in my toy projects I've mostly seen this crop up from depending on a
lot of 0.x crates still going through significant version churn and legitimate
breaking changes, where A and B legitimately can't use the same version of C
as-is due to API changes, because they haven't been keeping their dependencies
up to date.

The fix is simple, when it happens: Patch A/B to use the latest major version
of C, fixing the source code as necessary. You can [patch] locally until
upstream accepts your Pull Request - which might include a
[https://deps.rs/repo/github/rust-lang/cargo](https://deps.rs/repo/github/rust-lang/cargo)
badge in their README.md, to encourage them to continue to keep C up-to-date.
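
The [patch] override mentioned above goes in the top-level manifest and swaps
your fork in across the whole dependency graph (crate name, URL, and branch
hypothetical):

```toml
[patch.crates-io]
# Use your fork of dependency A (already bumped to the latest C)
# until its maintainer merges your pull request.
some-dep = { git = "https://github.com/you/some-dep", branch = "bump-c" }
```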

------
ijiiijji1
Maybe rust core/std/alloc should be built from many smaller crates that are
cached better and have fewer dependencies?

Also: [https://github.com/johnthagen/min-sized-rust](https://github.com/johnthagen/min-sized-rust)

------
axilmar
Since it's not possible for a Standard Library to have everything a programmer
needs, programming languages shouldn't have Standard Libraries but Standard
Repositories.

------
jokoon
I just don't feel like rust can replace C/C++, because the syntax is not
simple enough. Rust has a lot of cool things, but to me syntax simplicity is
more important.

Maybe I have a hard time adapting myself to rust? Maybe my brain is too much
"wired" to a C style syntax. It still seems that to me, C-style syntax is just
better.

I would rather prefer a C++-like language that breaks down from backward
compatibility with C while keeping its simplicity, has STL containers, and is
simpler to read and use.

Rust is cool, but I'm just curious whether it can really be adopted for large
projects enough to justify rewriting existing code.

~~~
timw4mail
C++ has simple syntax? Since when?

~~~
jokoon
Since you're not really required to use templates or inheritance at every
corner.

