
Announcing Rust 1.36.0 - mark-simulacrum
https://blog.rust-lang.org/2019/07/04/Rust-1.36.0.html
======
pimeys
std::future is stable. Async/await in about 12 weeks. Been writing loads of
code trying out the new futures, and the possibility of using &self in an
async context in particular is a huge benefit.

Beware, though: use an executor that can drive the new futures, and watch out
for certain libraries using `tokio::spawn`, which will cause panics.

Some executors for the new futures:

[https://docs.rs/futures-preview/0.3.0-alpha.17/futures/executor/struct.ThreadPool.html](https://docs.rs/futures-preview/0.3.0-alpha.17/futures/executor/struct.ThreadPool.html)

[https://github.com/withoutboats/juliex](https://github.com/withoutboats/juliex)

And a web server to try out async/await on nightly:

[https://github.com/rustasync/tide](https://github.com/rustasync/tide)

Compatibility layer from 0.1 to 0.3 and back is in futures-util-preview if
compiled with the feature flag `compat`.

[https://docs.rs/futures-util-preview/0.3.0-alpha.17/futures_util/compat/index.html](https://docs.rs/futures-util-preview/0.3.0-alpha.17/futures_util/compat/index.html)

~~~
jkarneges
For anyone wanting to understand how to implement a basic engine for
std::future from scratch, I mashed some code until it worked:
[https://gist.github.com/jkarneges/cb1ee686ef97bb05ebe04b5fc6...](https://gist.github.com/jkarneges/cb1ee686ef97bb05ebe04b5fc67536f4)

It's based mostly on this article, which predates std::future:
[https://www.viget.com/articles/understanding-futures-in-rust-part-1/](https://www.viget.com/articles/understanding-futures-in-rust-part-1/)

~~~
jadbox
Thank you. We really need more examples... and hopefully a full guide to
futures/async. As someone currently on the outskirts of the Rust community,
it's really hard to 'peer inside' what's happening and how to use it in
practical applications. It's a matter of time, but it's just so exciting xD

~~~
steveklabnik
There is an “async book” in the works, by the working group. You’re 100% right
that good docs will be important here!

------
mathieubordere
Maybe I'm the only one, but I have a very hard time grasping all the
functionality/concepts offered by Rust.

I really like the safety guarantees offered at compile time, and really do
think that we should move away from C-like languages if we ever want to
control the tsunami of security flaws, but I can't stop wondering if Rust
isn't (perhaps needlessly) complicating things and scaring off (non-C++)
programmers.

~~~
m12k
I mean, C++ programmers are exactly the correct audience for a language like
this - there are many other options for memory safety if you can live with a
garbage collector[1]. But writing safe C++ is much, much, much more complex
than writing okay-ish C++. The way I see it, Rust has basically taken a
lot of the best practices required to write sane C++ (e.g. RAII) and
formalized them in a way where the compiler can enforce them. That means in
order to write ANY Rust code at all, you have to adopt a lot of best practices
all at once. That's not very beginner friendly, and will probably lead to
cognitive overload in most - you certainly don't get the same freedom you get
in other languages where you can implement something in dozens of ways,
because most of those ways won't compile here. So I'm not saying they
shouldn't keep working on the ergonomics and learnability of the language, but
I think a lot of these complexities are essential to the task of writing sane
programs while dealing with raw memory, and the fact that they have been
named, formalized and checked by the compiler is entirely a good thing - and
if that means the programmer has to know about them, then that's ok.

[1] Sidenote - I find it really fascinating how Rust can also use the stronger
static checks to prevent things like race conditions in a way few (/no?) other
languages can.

~~~
jcranmer
> But writing safe C++ is much, much, much more complex than writing
> okay-ish C++. The way I see it, Rust has basically taken a lot of the best
> practices required to write sane C++ (e.g. RAII) and formalized them in a
> way where the compiler can enforce them.

Here's a concrete example I ran into recently while writing C++ code. I
figured that, for safety reasons, I needed to make my type move-only. I then
had to spend about two hours trying to figure out why the program was blowing
up. The reason was that I was reusing the variable after moving from it, and
the compiler never gave any warning (even with -Wall -Werror) telling me that
what I was doing was wrong. In Rust, the same situation would be a compiler
error.

~~~
lazulicurio
Yep. As much as people extol lifetimes, my personal opinion is that Rust's
aliasing rules are its true golden goose. C/C++'s lax approach to aliasing
causes a whole host of issues that Rust is able to avoid by being more strict.

------
ChrisSD
MaybeUninit<T> is very welcome, considering mem::uninitialized turned out to
be such a mistake. I only tried using the latter once, which dissuaded me from
trying again, and that was probably for the best.

I'm still looking forward to const generics and a more usable const fn. In a
way it's a shame Rust doesn't have a purely constant function in the interim.
But a hybrid function will be more versatile once it allows some form of
looping.

The last thing on my wishlist is extern types (aka opaque types aka void *).
The current workaround, using a pointer to an [i8; 0] type, relies on LLVM's
particular handling of such pointers and always looks weird in Rust.

~~~
SCHiM
I might be revealing myself as a novice Rust programmer here, but can't you
use `ptr: *const ()` as an opaque pointer type?

It's how I interface with C and C++ callback functions.

~~~
steveklabnik
You can, but there are reasons a real extern type feature is useful:
[https://github.com/rust-lang/rfcs/blob/master/text/1861-extern-types.md](https://github.com/rust-lang/rfcs/blob/master/text/1861-extern-types.md)

------
mmastrac
We've been slowly rewriting our core instrumentation code at FullStory to take
advantage of the new futures and async/await.

I'd love to blog on this at some point but I think that the real big win here
was being able to use ? to early exit in async code.

I'm excited to see what the future brings here - we're still pretty new with
async/await and building our own internal patterns.

~~~
pimeys
Did you try to use an executor that can drive the new futures? The ability to
use &self in an async context is so much nicer than playing around with Arc
with things that really don't need one.

Also very happy not to be forced to write .map_err ever again.

~~~
mmastrac
We're in a tricky spot with our async code because we need to interop with
Android JNI, Objective-C threads, and some diagnostics code that uses
tokio/websockets. For the first pass there was a lot of "let's get this
working with a modern executor and make it perfect later".

When we get some spare bandwidth we'll definitely see if we can get some extra
productivity out of using &self. So much of our existing futures code is
either self-less or uses some macro code to generate glue to allow us to use
Arc-typed self - this is to allow a bunch of async core code to interop with
these async platform drivers.

Been on a crash course getting better at architecting Rust programs for nine
months. Luckily the Rust ecosystem and toolchain are getting even more amazing
with each release, so we can justify some work to refactor and try new
approaches.

~~~
pimeys
Do you have experience doing async ffi to other runtimes from Rust? Such as
passing down futures from Rust to JavaScript or Java so you can use them from
their context. I'd love to read a blog post about that subject...

~~~
mmastrac
I can definitely talk about how we did it - maybe see if we can get something
up on our blog.

Our current approach du jour uses callback handles in combination with
channels to let the FFI code trigger a real Rust future's completion. This has
worked reasonably well, but I'm sure we'll experiment with a few other
patterns.

We don't specifically interface with Java Futures (no particular reason other
than it hasn't seemed necessary to add that complexity), but that would be a
pretty cool library to build on top of the existing Rust jni crate.

One thing I'd like to pass by the Rust community is our internal "teleporter"
that allows you to borrow an object mutably on one thread and then "teleport"
an immutable ref to that object to any other thread using only a u64 handle
(with obviously huge unsafe flags). This has been very handy for some of our
async ffi work.

I'm hoping to get a few more Rustaceans onboard (aggressively hiring!) over
the next few months so we can focus more deeply on some of these interesting
problems.

~~~
pjmlp
If your Java usage is constrained to Android there is a JNI workaround that
many NDK folks tend to use.

Instead of doing JNI calls, send Android messages between NDK and Framework
threads.

There is the setup of a MessageHandler on both sides, but long term it is
more productive than JNI boilerplate.

~~~
mmastrac
Interesting - do you mean using the Android handler/message infrastructure? I
hadn't considered that at all. Do you have any references?

~~~
pjmlp
Yes.

One example would be SDL, although they use a mix of JNI and messages (search
for SDLCommandHandler).

[http://hg.libsdl.org/SDL/file/abb47c384db3/android-project/app/src/main/java/org/libsdl/app/SDLActivity.java](http://hg.libsdl.org/SDL/file/abb47c384db3/android-project/app/src/main/java/org/libsdl/app/SDLActivity.java)

[http://hg.libsdl.org/SDL/file/abb47c384db3/src/core/android/...](http://hg.libsdl.org/SDL/file/abb47c384db3/src/core/android/SDL_android.c)

EDIT: Sorry forgot about the C side (counterpart is Android_JNI_SendMessage).

------
thinkpad20
> In Rust 1.36.0, the HashMap<K, V> implementation has been replaced with the
> one in the hashbrown crate which is based on the SwissTable design. While
> the interface is the same, the HashMap<K, V> implementation is now faster on
> average and has lower memory overhead. Note that unlike the hashbrown crate,
> the implementation in std still defaults to the SipHash 1-3 hashing
> algorithm.

The wording here confuses me. They say they took the implementation from
hashbrown, but then finish by saying that the implementation is different.
What am I missing?

~~~
xyzzyz
The hash table implementation (the data structure) is changed, but the hash
function (the one which generates the hashes) is kept the same.

~~~
Twirrim
Maybe I misunderstood the speed complaints about HashMap in Rust. I thought it
was the hash function that was the slow bit? What are the anticipated
improvements from using SwissTable?

~~~
steveklabnik
We _do_ choose a hash function not designed for speed by default, but that
doesn’t mean that the implementation of the table can’t be improved. This is
effectively the third re-write of it.

------
RaycatRakittra
If someone inexperienced in systems programming chose Rust as their first
systems language, would there be difficulty adapting to other languages like
C++? It seems like I'm caught in this back and forth between "C++ isn't pretty
but it makes you money" and "Rust is so nice but where are the jobs".

~~~
kibwen
On the contrary, I'd say that learning Rust is a fabulous stepping stone to
"modern" C++ (much of which served as the philosophical foundation for Rust in
the first place). And once you get good enough at Rust that you've
internalized the rules regarding memory ownership, you'll be able to
instinctively apply those same rules successfully in C++, where the compiler
proves fewer things for you.

~~~
neilv
I appreciate what you're saying, and there's some truth to it, but I think
there are at least two components to good allocation management in C++ (or C,
or another memory-unsafe language)...

The first component is conventions and idioms for managing allocations, and
Rust will force you into (and support) some good (but nontrivial) ones.

The second component is self-discipline. Look at the long history of
vulnerabilities in C and C++ code that are due to carelessness -- of an oops
that a programmer made when they knew better.

If what's being considered is Rust as a stepping stone to C++, how much does
Rust help with the first component, and is Rust even counterproductive for the
second component?

Regarding counterproductive for the second component, you might've seen a
conventional practice of grinding the Rust Clippy until the code compiles. I
don't know how that affects the development of self-discipline (e.g., maybe
some people try to make a practice of being Clippy-free on every compile
attempt?), but it seems a reasonable and interesting question to ask.

(I'm not dissing Rust for this. I mostly like Rust, and would be happy to be
working in/on it.)

~~~
kibwen
I'd say that concerns regarding self-discipline are overblown in this case.
Experienced Rust programmers aren't simply typing blindly into their editor
and hoping that their code will compile. When writing Rust one comes to learn
the code that the compiler likes, and strives to write code that is free of
compiler errors in the first place. This is itself an expression of self-
discipline, except that the discipline comes in the form of compiler errors
rather than runtime errors. There's less of a penalty for making an error in
Rust than in C++, but that's going to be true regardless of whether one's
background is "I already know Rust" or "I don't already know Rust", which is
what the parent commenter appears to be concerned about.

~~~
neilv
> _When writing Rust one comes to learn the code that the compiler likes, and
> strives to write code that is free of compiler errors in the first place._

I suggested that possibility, but is it generally true, or something
personally true for you, or are you advocating that it would be good if people
did that?

~~~
unrealhoang
When I started learning Rust and low-level programming (I'm coming from Ruby),
my strategy was to spam changes, search for errors, fix the code, and wait for
`cargo build` to turn green. As I gained more experience with Rust I became
more and more aware of the ownership/lifetime of everything I'm using; the
compiler errors appear less and less, and most of the time it's a typo or a
missing `mut` now, not lifetime issues anymore.

So yes, if you work enough with the borrow checker, your brain will form a
logical one of its own, and you can use that one when writing C/C++ code. I
have much more confidence now in learning/writing C/C++ than I did before
learning Rust, because I feel I can form a Rust-like design (tree-based, with
clear ownership/lifetimes for objects) and express it in C/C++ syntax.

Definitely recommend using Rust as a stepping stone to learning
production-grade C/C++.

------
CameronNemo
The alloc crate stabilization should provide serde with improved options.
Currently they maintain two json modules, one that does heap allocation and
one that does not.

[https://serde.rs/no-std.html](https://serde.rs/no-std.html)

[https://blog.rust-lang.org/2019/07/04/Rust-1.36.0.html#the-alloc-crate-is-stable](https://blog.rust-lang.org/2019/07/04/Rust-1.36.0.html#the-alloc-crate-is-stable)

~~~
masklinn
Wouldn’t the non-allocating module remain useful and the allocating one just
get lowered to depend on alloc rather than std?

------
MuffinFlavored
Future being stabilized is confusing to me. You still need `tokio` or a
runtime to spawn futures into an executor in order to do anything with them,
right?

So you have a standard trait from the language officially, that is useless
without a third party library?

~~~
steveklabnik
Sort of yes and sort of no. What you need is to call poll at the appropriate
time. Doing so does not, strictly speaking, require external libraries. That
said, you probably don’t want to write that code yourself; the naive
implementation will be extremely inefficient. This is where external libraries
come in.

The reason they’re external is, depending on what you want to do, you’ll want
an executor with different characteristics. An embedded executor has very
different needs than a network IO executor than a GUI event loop. By
stabilizing the trait, we can ensure library compatibility: everyone agrees on
the same interface.

Given that we’ve invested so much in making it easy to add libraries to your
project, including a single one wouldn’t be appropriate.

~~~
MuffinFlavored
I'm not sure what this is worth, but I'd personally feel better and take Rust
more seriously if they had an implementation as good as Tokio's available as
part of the standard library instead of it being split across a bunch of third
party libraries.

Are there talks to make that a reality in the next 18 months?

Is `async / .await` going to be just syntactic sugar around `Future` or is it
going to necessitate an executor lives in the standard library?

~~~
GolDDranks
I don't think it's going to happen. The "Rust way" is to simply live with the
fact that it's not "batteries included". Cargo means that Rust comes with
"batteries reliably delivered" kind of service, and I think that has been more
beneficial for the community in the long run.

async / .await are going to turn functions into Futures, and they by
themselves don't necessitate an executor any more than the Future trait
itself.

~~~
pjmlp
Only if those libraries are held to a specific quality level and available on
every single platform that Rust is able to target.

Here Java, .NET and future C++ are a clear winner, given that they are part of
the standard library.

------
kspp
Is there any way to financially support the project? I have only been able to
find a couple of Patreon pages of people working on specific libraries I
haven't been using personally.

~~~
GolDDranks
I don’t think there exists a way to support the Rust project directly, but
there are some indirect ways. Supporting Mozilla, which employs many of the
core developers, is one. Another, announced just this week, is to support Rust
Analyzer, a project to create a next-generation IDE-compatible Rust compiler:
[https://opencollective.com/rust-analyzer/expenses/new](https://opencollective.com/rust-analyzer/expenses/new)

~~~
xvilka
Agree on supporting Rust Analyzer. The project recently published an update[1]
on the status and future plans.

[1] [https://ferrous-systems.com/blog/rust-analyzer-status-opencollective/](https://ferrous-systems.com/blog/rust-analyzer-status-opencollective/)

------
damck
Writing modern, safe C++ isn't really the hassle everyone makes it out to be.
Besides smart pointers, Clang's sanitizers go a long way. I did try to pitch
Rust at my corp, but the aforementioned safety checks are considered enough
against the overhead of learning a new language, and I agree. Personally I
don't like the Rc and Box syntax that's required to get even the simplest
homebrew linked list going; C++'s metaprogramming hacks rival that.

I wish the stigma against "unsafe" C++ was a bit more rational. People who use
it aren't the kind fresh out of bootcamps and mostly realise the gains and
risks. But maybe I'm skewed by my job which uses C++ and takes any risks
seriously.

~~~
Lev1a
Seriously, what is this fascination with linked lists?

In comparison to array-based lists they're:

- less memory-efficient,
- not random-access,
- worse for cache locality (so they can be up to orders of magnitude slower), and
- more complex.

They _are_ nice to learn some principles in the context of an Intro to FP
course but apart from that, meh.

~~~
jnordwick
The linked list is just a lowest-common-denominator example. I think he and
others (and me, when I bring it up) mean mutually linked data structures.

Almost any kind of data structure in Rust is extremely painful to do
efficiently. You either go the unsafe route or you drown in a sea of boxes
and cells.

On Reddit recently somebody made the ludicrous claim that you shouldn't have
to write your own data structures in Rust - the Rust standard library should
have everything you need.

~~~
roca
On the real, large projects I have worked on for years (Firefox, rr, Pernosco)
in C++ and Rust I have spent negligible time writing container data
structures. Of course I create data structures, but almost always by combining
hashtables, arrays and smart pointers and occasionally something more exotic
from a library.

It's unfortunate that a lot of programming instruction has people implement
data structures from scratch. It gives the false impression that that's what
programming is largely about.

~~~
jnordwick
Maybe the project should have implemented more from scratch instead of
cobbling together some Frankenstein data structure (and Firefox wouldn't be
such a massive memory hog with poor performance)?

I guess it really depends on your job, skill level, and mentality. While I do
use a lot of off-the-shelf pieces, their relationships don't always fit
neatly, and shoehorning them can cause performance issues. (I'm not going to
pay for a double indirection when I can avoid it entirely.)

But then again, I think this cookie-cutter approach to software is poor
craftsmanship and often results in bloated, slow code that is way larger than
it needs to be. I want to write something better than everybody else, not just
make the same paint-by-numbers piece everybody else does.

~~~
roca
I have a PhD in computer science from CMU, I have published many academic
papers, and I was a distinguished engineer at Mozilla. The issue isn't skill
level.

Randomly lashing out at Firefox is silly, especially at this time when it's
getting so much praise for performance compared to Chrome. Firefox does indeed
contain some complex, micro-optimized data structures for its core data (e.g.
the CSS fragment tree and the DOM). It's just that it also contains _a lot
more_ code besides.

You wouldn't use an off-the-shelf hashtable to implement the mapping from a
DOM node to its attributes. You _should_ use an off-the-shelf hashtable to
track, say, the set of images a document is currently loading. Like any kind
of optimization, you optimize your data structures where it matters and you
write simple, maintainable code everywhere else.

~~~
jnordwick
Slow down there, turbo. Nobody said anything about your skill (although a PhD
doesn't particularly mean a talented developer - some of the worst code I've
seen has come from CS PhDs, where some only understand the highest polynomial
in big-O but forget the other factors). And nobody cares a cent about you
getting whatever award from Moz.

Nobody said anything about optimizing in inappropriate areas (honestly, where
did you get that from?). This entire thread started because somebody didn't
understand why people often use linked lists as an example of something
difficult in Rust.

> Of course I create data structures, but almost always by combining
> hashtables, arrays and smart pointers and occasionally something more exotic
> from a library.

But that does scream "I don't really do a lot of performance oriented work".
That you can somehow cobble together an apple out of a banana and a cat by
probably using a metric ton of boxes and refcounts (that are just used to get
around the borrow checker) doesn't surprise me if you are willing to make the
readability and performance sacrifices.

------
davnicwil
Can anyone recommend an http/rest api framework for Rust? I'd love to look
into using one. I last checked a couple of years ago and don't remember
finding anything that looked particularly stable / production ready.

~~~
steveklabnik
Actix web and rocket are the two most popular.

HTTP stuff benefits a lot from asynchronicity, and so there’s been a lot of
churn over the past few years as this story shakes out. We’re almost there
though!

~~~
tiniuclx
I can second the suggestion for actix-web, it is a joy to use and I believe it
is the only HTTP framework that has reached version 1.0 :)

Have a look at the sort of things you can do with it
[https://github.com/actix/examples](https://github.com/actix/examples)

------
gaogao
Just that page seems to be down right now.

~~~
Flott
It's working for me.

~~~
gaogao
Strange. I'm just going to read the markdown instead,
[https://github.com/rust-lang/blog.rust-
lang.org/commit/d7214...](https://github.com/rust-lang/blog.rust-
lang.org/commit/d72140e70a32ebf8701657cbcdda244873047ef2)

~~~
Flott
You could also try this:
[https://web.archive.org/web/20190704140112/https://blog.rust-lang.org/2019/07/04/Rust-1.36.0.html](https://web.archive.org/web/20190704140112/https://blog.rust-lang.org/2019/07/04/Rust-1.36.0.html)

------
AlleyTrotter
Can someone point to an explanation of how to speak/read Rust? Exactly how
would you say "use std::fs::File"? How does one pronounce ::? It seems simple
enough, but things get much more complex by the third or fourth chapter of
the Book.

~~~
Diggsey
I would just read the words, "use" "standard" "fs" "file" and wouldn't
pronounce the "::" at all.

~~~
AlleyTrotter
so the " :: ' has no verbal meaning, it's just a way of linking traits,
libraries, functions,or crates? Not easy to explain

~~~
atombender
It's a namespace separator. "a::b::c" is analogous to a file system path
/a/b/c.

------
bigmit37
Does Rust have its own linear algebra, image processing, and computer vision
libraries written in pure Rust?

I would love to see how such libraries are built from scratch in a low level
language. I feel like I would learn a lot as well.

------
johnklos
Oooh! Maybe this version can be compiled deterministically!

~~~
steveklabnik
Is there something specific you’re thinking of here? We do generally try to
keep things reproducible, though sometimes things slip in. There’s a tracking
issue for this, IIRC.

