
Async-await on stable Rust - pietroalbini
https://blog.rust-lang.org/2019/11/07/Async-await-stable.html
======
Dowwie
This is a major milestone for Rust usability and developer productivity.

It was really hard to build asynchronous code until now. You had to clone
objects used within futures. You had to chain asynchronous calls together. You
had to bend over backwards to support conditional returns. Error messages
weren't very explanatory. You had limited access to documentation and
tutorials to figure everything out. It was a process of walking over hot coals
before becoming productive with asynchronous Rust.

Now, the story is different. Furthermore, a few heroes of the community are
actively writing more educational materials to make it even easier for
newcomers to become productive with async programming, much faster than it
took the early adopters.

Refactoring legacy asynchronous code to async-await syntax offers improved
readability, maintainability, functionality, and performance. It's totally
worth the effort. Do your due diligence in advance, though, and ensure that
your work is eligible for refactoring. Niko wasn't kidding about this being a
minimum viable product.

~~~
Kaladin
Are there any resources you could point to for learning more about this style
of async programming?

~~~
Dowwie
[https://rust-lang.github.io/async-book](https://rust-lang.github.io/async-book)

[https://book.async.rs](https://book.async.rs)

[https://tokio.rs](https://tokio.rs)

~~~
larusso
Thanks for the links. But they are not clickable :)

~~~
Dowwie
done.

------
ComputerGuru
I’ve been playing with async/await in a vertical that's the polar opposite of
its typical use case (high-TPS web backends) and believe this was the missing
piece to further unlock great ergonomic and productivity gains for systems
development: embedded no_std.

Async/await lets you write non-blocking but highly interleaved firmware/apps
in allocation-free, single-threaded environments (bare-metal programming
without an OS). The abstractions around stack snapshots allow seamless
coroutines, and I believe they will make Rust pretty much the easiest
low-level platform to develop for.

~~~
vanderZwan
Have you ever heard of Esterel or Céu? They follow the _synchronous_
concurrency paradigm, which apparently has specific trade-offs that give it
great advantages on embedded. IIRC the memory overhead per Céu "trail" is much
lower than for async tasks, fibers, or whatnot (on the order of _bytes_), but
computationally it scales worse with the number of trails.

Céu is the more recent one of the two and is a research language that was
designed with embedded systems in mind, with the PhD theses to show for it
[2][3].

I wish other languages would adopt ideas from Céu. I have a feeling that if
there was a language that supports both kinds of concurrency and allows for
the GALS approach (globally asynchronous (meaning threads in this context),
locally synchronous) you would have something really powerful on your hands.

EDIT: Er... sorry, this may have been a bit of an inappropriate comment,
shifting the focus away from the Rust celebration. I'm really happy for Rust
for finally landing this! (but could you pretty please start experimenting
with synchronous concurrency too? ;) )

[0] [http://ceu-lang.org/](http://ceu-lang.org/)

[1]
[https://en.wikipedia.org/wiki/Esterel](https://en.wikipedia.org/wiki/Esterel)

[2] [http://ceu-lang.org/chico/ceu_phd.pdf](http://ceu-lang.org/chico/ceu_phd.pdf)

[3] [http://sunsite.informatik.rwth-aachen.de/Publications/AIB/20...](http://sunsite.informatik.rwth-aachen.de/Publications/AIB/2018/2018-05.pdf)

~~~
mamcx
I also think async (the paradigm) is kind of weird in the Rust world. I agree
with
[https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...](https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/).

~~~
pcwalton
The solution suggested by that article is to use M:N threading, which was
tried in Rust and turned out to be slower than plain old 1:1 threading.

If you don't want to deal with async functions, then you can use threads!
That's what they're there for. On Linux they're quite fast. Async is for when
you need more performance than what 1:1 or M:N threading can provide.

~~~
mamcx
> If you don't want to deal with async functions, then you can use threads!

Truly? If some very popular libs I use become async (like actix or reqwest),
can I TRULY ignore it and not split my world into async/sync?

~~~
pcwalton
You can easily convert async to sync by just blocking on the result. The other
way (sync to async) is more difficult and requires proxying out to a thread
pool, but it's also doable.

~~~
BatmanAoD
An `async` function _can_ call blocking functions, of course; it just blocks
the entire thread of execution, which could otherwise continue making progress
by polling another future.

------
fooyc
This is a big improvement; however, this is still explicit/userland
asynchronous programming: if anything down the callstack is synchronous, it
blocks everything. This requires every component of a program, including
every dependency, to be specifically designed for this kind of concurrency.

Async I/O gives awesome performance, but further abstractions would make it
easier and less risky to use. Designing everything around the fact that a
program uses async I/O, including things that have nothing to do with I/O, is
crazy.

Programming languages have the power to implement concurrency patterns that
offer the same kind of performance, without the hassle.

~~~
littlestymaar
> Programming languages have the power to implement concurrency patterns that
> offer the same kind of performance, without the hassle.

Can you give one that reaches this goal? Go is often cited in that regard, but
it doesn't really fit your description since it trades performance for
convenience (interactions with native libraries are really slow because of
that) and still doesn't solve all problems, since hot loops can block a whole
OS thread, slowing down unrelated goroutines. (There's some work in progress
to make the scheduler able to preempt tight loops, though.)

~~~
bulldoa
> since hot loops can block a whole OS thread

Asking as a beginner, what does the above mean?

I'm not sure what a hot loop is, or why it would block an OS thread.

~~~
brandonbloom
Go creates the illusion of preemptive multithreading by having implicit
safe-points for cooperative multithreading. Each IO operation is such a
safe-point. If you write an infinite loop like `for {}` with no IO operations
in the loop body, it will block indefinitely. This will prevent the underlying
OS thread from being available to other goroutines. The same thing can happen
even if you do have IO operations in there, but the work being performed is
dominated by CPU time instead of IO.

~~~
crawshaw
Note that this is being fixed in the next release:
[https://golang.org/issue/10958](https://golang.org/issue/10958)

~~~
littlestymaar
That's cool it's finally coming; this issue has been open since 2015! I wonder
what performance impact this will have.

------
GolDDranks
This is big! Turns out that syntactic support for asynchronous programming in
Rust isn't just syntactic: it enables the compiler to reason about the
lifetimes in asynchronous code in a way that wasn't possible to implement in
libraries. The end result of having async/await syntax is that async code
reads just like normal Rust, which definitely wasn't the case before. This is
a huge improvement in usability.

~~~
takeda
Why would it be different for async code than sync code? The goal of Rust's
checker is to track lifetime of an object so for example it knows that at the
end of a function the object should be freed. Async shouldn't matter here.

~~~
GolDDranks
The point is that Rust's borrow checker can't reason about lifetimes very well
over function boundaries. It can reason about coarse things that are
expressible in the type language, but everything more nuanced than that, such
as reasoning about how control flow affects the lifetimes, is limited to
inside function bodies.

The difference between synchronous code and async code implemented as
libraries is that async code involves jumping in and out of functions a lot,
while employing runtime library code in between. A piece of code that is
conceptually straightforward may, in the async case, involve multiple returns
and restores. In the sync case it doesn't need to do that, since it just
blocks the thread and does the processing in other threads and in kernel land.

Rust's async/await support makes it possible to write code that is
structurally "straightforward" in a way similar to how synchronous code would
be. That allows the borrow checker to reason about it in a similar way it
would reason about sync code.

~~~
dunkelheit
> The point is that Rust's borrow checker can't reason about lifetimes very
> well over function boundaries. It can reason about coarse things that are
> expressible in the type language, but everything more nuanced than that,
> such as reasoning about how control flow affects the lifetimes, is limited
> to inside function bodies.

BTW this is a big pain point for me (unrelated to async). Code like this:

    
    
      let r = &mut self.field;
      self.helper_mutating_another_field();
      do_something(r);
    

gets rejected because self.helper_mutating_another_field() will mutably borrow
the whole struct. The workaround is either to inline the helper or to factor
out a smaller substruct so that the helper can borrow just that, which doesn't
always look good.

Of course it is preferable that all the information the caller needs to check
whether a call is correct is contained in the function signature, but it truly
is frustrating to see the function body right there, know that it doesn't
violate borrowing rules, and still get the code calling it rejected.

~~~
entropicdrifter
Couldn't you just pass in the (other) borrowed field as an argument for the
function? If you need it to work without adding the argument when called
outside of the class, you could overload it with a version that borrows the
field and passes it to the version that takes the field as an argument, right?

I'm newish to Rust, so this is just an intuitive guess. Please let me know if
I'm wrong.
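A sketch of that suggestion (all names invented for illustration): the helper
borrows only the field it mutates, so calling it no longer conflicts with
borrows of other fields of `self`.

```rust
struct Counter {
    hits: u32,
    label: String,
}

impl Counter {
    // Takes just the field it needs instead of `&mut self`.
    fn bump(hits: &mut u32) {
        *hits += 1;
    }

    fn record(&mut self) {
        let label = &self.label;    // borrow of one field...
        Self::bump(&mut self.hits); // ...mutably borrow a *different* field: OK
        println!("recorded a hit for {}", label);
    }
}

fn main() {
    let mut c = Counter { hits: 0, label: String::from("demo") };
    c.record();
    assert_eq!(c.hits, 1);
}
```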

~~~
dunkelheit
Yes, it works, but feels unnatural. I also don't like the possibility (quite
remote, I admit) that the function gets called with a field from another
instance.

~~~
esotericn
The latter case should be impossible if the method is private?

~~~
saghm
In Rust, privacy isn't enforced like that. Private just means that things
outside the module can't access them. There's no concept of privacy at the
instance level.

~~~
esotericn
Yes. So don't make any methods that could potentially be passed incorrect
arguments public.

~~~
saghm
That still doesn't preclude the possibility that within the module the
function gets called with a field from another instance. I think the idea is
that by making it a method that just takes a reference to self, it's
impossible to accidentally mutate a field on a different instance, while
taking a reference to the field itself doesn't prevent the programmer from
accidentally calling it with the wrong instance of the field.

------
ralusek
Isn't it kind of a poor design choice that Rust will not actually begin
execution of the function until `.await` is called? If I didn't want to
execute the function yet, I wouldn't have invoked it. Awaiting is a completely
different concept than invoking, why overload it?

If you want to defer execution of a promise until you await it, you can always
do that, but this paradigm forces you to do that. The problem is then, how do
I do parallel execution of asynchronous tasks?

In JavaScript I could do

    
    
       const results = await Promise.all([
         asyncTaskA(),
         asyncTaskB(),
         asyncTaskC()
       ]);
    

and those will execute simultaneously and await all results.

And that's me deferring execution to the point that I'd like to await it, but
in JavaScript you could additionally do

    
    
       const results = await Promise.all([
         alreadyExecutingPromiseA,
         alreadyExecutingPromiseB,
         alreadyExecutingPromiseC
       ]);
    

Where I pass in the actual promises which have returned from having called the
functions at some point previously.

So how is parallel execution handled in Rust?

~~~
roblabla
In Rust, you can use a future adapter that does this:

    
    
        let (a, b, c) = futures::join!(async_task_a(), async_task_b(), async_task_c());
    

See the join macro of futures[0]. The way it works is, it will create a future
that, when polled, will call the underlying poll function of all three
futures, saving the eventual result into a tuple.

This will allow making progress on all three futures at the same time.

[0]
[https://docs.rs/futures/0.3.0/futures/macro.join.html](https://docs.rs/futures/0.3.0/futures/macro.join.html)

~~~
ralusek
I don't like this at all. Having to rely on futures::join! means that I don't
have the flexibility to control the execution of these things unless Rust adds
that specific utility, right?

In JS, for example, the `bluebird` library is a third party utility for
managing execution of functions. You can do things like

    
    
        const results = await Promise.map(users, user => saveUserToDBAsync(user), { concurrency: 5});
    

And I pass in thousands of users, and can specify `concurrency: 5` to ensure
that it will execute no more than 5 simultaneously.

Implementation of this behavior in user space is trivial in JS, is it possible
in Rust?

~~~
hguant
It feels like you're complaining about things in the Rust language without
taking the time to understand how the language idioms work. RTFM.

Additionally, you're making snarky comments about how you don't like how the
base language doesn't handle something like JS...then reference a third party
JS library. Base JS doesn't solve your 'problem' either.

To answer your question, async/await provides hooks for an executor (tokio
being the most common) to run your code. You do things like that in the
executor.

[https://docs.rs/tokio/0.2.0-alpha.6/tokio/executor/index.htm...](https://docs.rs/tokio/0.2.0-alpha.6/tokio/executor/index.html)

~~~
ralusek
I reference a third-party library to show that you have full control of this
stuff in user-space. Implementing Promise.all or Promise.map in userspace is
trivial.

I'm not complaining about things without taking the time to understand how the
language works, I'm giving examples of things that don't seem possible based
off of my understanding of how the language works...in hopes that someone will
either clarify or accept that this is a shortcoming.

~~~
roblabla
Rust and JS have very, very different execution models. You can absolutely
control how many futures are allowed to make progress at the same time, fully
in userspace. If you're joining N futures and want to only allow M futures to
make progress at a time, make an adapter that only calls the poll function of
M futures at a time, until those futures return Ready.

Rust gives you all the flexibility you need here. It might not be trivial yet
because all the adapters might not be written yet, but that's purely a
maturity problem.

The `join` macro does nothing magical. Go check out its implementation, and it
will make it obvious how to implement a concurrency argument.

------
sudeepj
This is going to open the floodgates. I am sure a lot of people were just
waiting for this moment for Rust adoption. I for one was definitely in this
boat.

Also, this has all the goodness: open source, high-quality engineering, design
in the open, and a large contributor base for a complex piece of software.
Truly inspiring!

~~~
ChrisSD
Are there that many people looking for a new low level language for server
side software?

~~~
nicoburns
Rust isn't only great because it's low level. Things like sum types (called
enums in rust), pattern matching and expression orientation mean that it is
often much more expressive than other languages for high level code.

~~~
umanwizard
ML-inspired languages have all these features too; is the advantage of Rust
over those just that it’s more mainstream, the ecosystem is bigger, etc.?

~~~
nicoburns
That, and it has better support for imperative features than most ML
languages. You can combine your fancy combinators with mutable variables and
for-loops when you just want to get something done quickly.

In general, Rust just has all the little details right. It's hard to describe
that in concrete terms, but it makes using it a very smooth and satisfying
process. I get a similar feeling when using postgres: there's usually a nice
way of doing what I want, and I rarely come up against unwelcome surprises.

~~~
momentoftop
ML has always had easy mutables in the form of references, and the closest
deployed languages to Standard ML (OCaml and Reason) have always had for-loops
and while-loops. Mutable references are used frequently.

Rust is great because it's low-level, high-performance, non-garbage-collected,
_and_ its primary inspiration for higher-level programming is languages like
ML and Haskell.

~~~
scns
Having used OCaml and Reason, I'd say the documentation and compiler messages
of Rust are more helpful, IMHO.

------
kodablah
I've been working with alpha futures, tokio, hyper, etc. with async/await
support on Rust beta (didn't use async-std yet) and can attest to them working
quite well. There was quite a learning curve for me to know when to use Arc,
Mutex, and the different stream combinators, and to understand raw polling,
but after I did, writing the code became a bit easier. I suggest anyone
wanting to learn grab a TCP-level networking project/idea/protocol and grind
on it for days, heh.

~~~
aashcan
Could you share some resources? I'm trying to move a Hyper + Tokio-core +
Futures project to the newer versions and am struggling.

~~~
kodablah
I don't really have any, except to use the alphas of tokio and toy with them.
The tokio alpha docs and the async book[0] have some info, but both are a bit
incomplete.

0 - [https://rust-lang.github.io/async-book/](https://rust-lang.github.io/async-book/)

------
MuffinFlavored
For JavaScript developers expecting to jump over to Rust and be productive now
that async/await is stable:

I'm pretty sure the state of affairs for async programming is still a bit
"different" in Rust land. Don't you need to spawn async tasks into an
executor, etc.?

Coming from JavaScript, the built-in event loop handles all of that. In Rust,
the "event loop", so to speak, is typically a third-party library/package,
_not_ something provided by the language/standard library itself.

~~~
steveklabnik
> I'm pretty sure the state of affairs for async programming is still a bit
> "different" in Rust land.

There are some differences, yes.

> Don't you need to spawn async tasks into an executor, etc.?

Correct, though many executors have added attributes you can tack onto main
that do this for you via macro magic, so it'll feel a bit closer to JS.

------
pimeys
The same day as async/await hits stable, the next Prisma alpha is released and
is the first alpha that's based on Futures and async/await.

[https://github.com/prisma/prisma-engine/](https://github.com/prisma/prisma-engine/)

Been working with the ecosystem since the first version of futures some years
ago, and I must say that the way things are right now, it's definitely much,
much easier.

There are still optimizations to be made, but IO is starting to be in good
shape!

~~~
maccam912
What is this? The Github repo doesn't offer a description.

~~~
pimeys
Maybe the website tells more? [https://www.prisma.io/](https://www.prisma.io/)

Basically we offer a code generator for typescript, migrations and a query
engine to simplify data workflows. Go support is coming next.

~~~
ClumsyPilot
Maybe, but you linked the repo, and it doesn't even have a link to the
website, let alone a description.

Looks like a great project, best of luck.

~~~
pimeys
Sorry, I didn't mean this to be a product advertisement, so I wanted to just
link to the core code.

The user-facing products are TypeScript and Go, in different repositories. The
backend is Rust, and we jumped on the async/await train some months ago.
Wanted to share some experience and how quickly, in the end, we were able to
get a working system out with the new APIs.

~~~
ClumsyPilot
I am perfectly happy for you to advertise; it's just that most people reading
this probably are not Rust developers, so it would be great to know what the
project is about.

------
losvedir
Exciting! I know this has been a long time coming, so congrats to everyone for
finally landing it in stable.

As a rust noob, small question based on the example given: Why does
`another_function` have to be defined with `async fn`? Naively, I would expect
that because it calls `future.await` on its own async call, that from the
"outside" it doesn't seem like an async function at all. Or do you have to tag
any function as async if it calls an async function, whether or not it returns
a future?

~~~
jkarneges
An async function compiles down to a function that returns an iterable-ish
state machine thing (a Future) that needs to be stepped through. The await
keyword indicates a yield point.

It's kind of like how if you declare a Python function containing the yield
keyword, then the function returns an iterable rather than simply executing
top to bottom.

In the same way that Python yield only makes sense from within an iterable,
Rust's await keyword only makes sense inside of a Future. Outside of a Future,
there'd be no concept of a yield. This is why the "outer" function must be
declared async.

------
lenkite
Does Rust offer _composable_ futures? Something like Java's CompletableFuture
or Clojure's
[https://github.com/leonardoborges/imminent](https://github.com/leonardoborges/imminent)?

~~~
yazaddaruvala
Yes, you can use functions like CompletableFuture’s thenApply.

However, Rust Futures have a very different implementation compared to
CompletableFuture.

------
pornel
If you're just starting to learn Rust, I suggest waiting a little before using
async. It's awesome, BUT libraries, tutorials, etc. will need a while to
update from the prototype version of Futures (AKA v0.1) to the final version
(std::future). Changes made during standardization were relatively minor, but
there's no point learning two versions of Futures and dealing with temporary
chaos while the ecosystem switches to the final one.

------
stubish
This seems very similar to Python's approach, which I've been finding poor to
use.

I was wondering if a more pleasant approach would be to add a 'defer' keyword
to return a future from an async call, and have the default call await and
return the result (setting up a default scheduler if necessary). Requiring the
await keyword to be inserted in the majority of locations seems like poor UX,
as does requiring call sites to all be updated when you change your
synchronous API to async.

------
cdbattags
This is massive, and props to the team! I have a small prototype for interop
between 0.1 and 0.3 futures, which is also compatible with async/await.

Excited for where this takes us! Can't wait for tokio 2.0 now.

[https://github.com/cdbattags/async-actix-web](https://github.com/cdbattags/async-actix-web)

------
person_of_color
Can someone explain the benefit of this programming paradigm to other
applications besides web servers/IO bound tasks?

~~~
jdance
My other question of a similar nature also goes unanswered :(

Maybe the primary benefit is that it's new and sexy.

------
faitswulff
Will the Rust Programming Language book be updated with async/await?

~~~
steveklabnik
At some point, yes. Carol and I have not figured out how we want to do it.

~~~
bora_gonul
Quick please :)

------
overthemoon
I have honestly never enjoyed learning a language more than I've been enjoying
Rust, the docs and material are so thorough and clear. Very excited to tackle
this topic.

------
sergiotapia
How does Rust compare to Nim? It seems Nim is as fast as C, with static
binaries and an ergonomic UX. Whereas Rust looks like C++ mixed with bath
salts.

~~~
mratsim
Nim dev here.

If you are talking about async/await:

- For concurrency, it has been in the standard library for a couple of years.
Also, you can implement it as a library without compiler support.

- For parallelism, Rust is ahead. Nim has a simple threadpool with async/await
(spawn/^); it works, but it needs a revamp, as there is no load balancing at
all.

You can also fall back on raw pthreads/Windows fibers and/or OpenMP for your
needs, or even OpenCL and CUDA.

Regarding the revamp, you can follow the very detailed Picasso RFC at
[https://github.com/nim-lang/RFCs/issues/160](https://github.com/nim-lang/RFCs/issues/160)
and the repo where I'm currently building the runtime at
[https://github.com/mratsim/weave](https://github.com/mratsim/weave).

Obviously I am biased as a Nim dev who uses Nim for both work and hobby, so
I'd rather have others who have tried both comment on their experience.

------
brunt
Looks like someone needs to update
[https://areweasyncyet.rs/](https://areweasyncyet.rs/)

~~~
angrygoat
I've opened an issue on github :)

[https://github.com/rustasync/areweasyncyet.rs/issues/28](https://github.com/rustasync/areweasyncyet.rs/issues/28)

~~~
angrygoat
And they've fixed it! :)

------
jdance
I have never used async/await and can't really understand it. Is it like
coroutines in Lua? It seems very similar. But I guess it's not limited to one
thread like Lua? What makes it better than threaded IO? Losing the overhead of
threads?

I like coroutines, but that's mostly because they are not threads; they only
switch execution on yield, and that makes them easy to reason about :)

~~~
uryga
> Is it like coroutines in lua?

idk about Lua, but afaik python's async is pretty much implemented on top of
coroutines ("generators"). `await` is basically `yield`

> I like coroutines but thats mostly because they are not threads, they only
> switch execution on yield, and that makes them easy to reason about :)

pretty sure i've heard the same thing said about async io!

~~~
jdance
I guess the thing that puzzles me is that if they are run threaded
(concurrently), they seem just as hard to reason about as threaded IO to me.

And that would leave the motivation to be performance gains from being able to
reduce the number of threads, I guess.

(And that it is cool of course :)

~~~
uryga
the coroutines are run sequentially by an "event loop"/"coroutine runner" that
wakes them up and lets them run for a bit when appropriate, kind of like an
OS's scheduler (on a single core machine). if the runner is the OS, `await
foo(..)` is kind of like a syscall, where the coroutine is suspended and
control is handed back to the runner to do whatever is requested.

i guess the difference from normal threads (preemptive multitasking) is that
you explicitly mark your "yield points" – places where your code gives control
back to the runner – with `await` (cooperative multitasking). some believe
that this makes async stuff easier to reason about, since in theory you can
see the points where stuff might happen concurrently

honestly i'm out of my depth re: async IO, haven't used it all that much :/
but if you're comfortable with python and want to dig into the mechanism of
async/await, i really recommend this article: [https://snarky.ca/how-the-heck-
does-async-await-work-in-pyth...](https://snarky.ca/how-the-heck-does-async-
await-work-in-python-3-5/)

it's long, but i found it very helpful – it actually explains how it all works
without handwaviness. at the end the author implements a toy "event loop" that
can run a few timers concurrently, which really made it click for me!

------
golergka
This is great news!

I tried out Rust for a typical server-side app over a year ago (JSON API and
PostgreSQL backend), and the lack of async-await was the main reason I
switched back to Typescript afterwards, even though Diesel is probably the
best ORM I've ever worked with. Time to give it a try again.

------
tracker1
This is awesome... been waiting on this... been looking a lot at rocket and
yew (really new to rust), and had been waiting to see the async stuff shake
out before continuing (been holding off for a few months now). May take a bit
of time over the weekend to start up again.

------
MrBra
Can anybody share their thoughts on which key features / libs are still
missing in Rust?

~~~
fnord123
Binary crates. Sandboxed builds (I'll continue to use crates that have a
build.rs which can do anything it wants, but I don't like it). Namespacing
crates (a la Java) to deal with the name squatting issue.

------
C14L
An interesting presentation on the history of futures in Rust (from RustLatam
2019):

[https://www.youtube.com/watch?v=skos4B5x7qE](https://www.youtube.com/watch?v=skos4B5x7qE)

------
sbmthakur
Rust beginner here who writes a lot of async code in Node.js. If I am to start
writing async code in Rust, should I directly pick up async-await? Or should I
first understand how it is done in the current way?

~~~
steveklabnik
You should start with async-await, but know that the ecosystem is in the
middle of catching up, and so you may run into packages that are more awkward
to use at the moment.

------
ldng
I would be interested to know if a performance comparison between async and
sync APIs exists. I have this intuitive but probably wrong feeling that async
often implies memory overhead.

------
aashcan
All we need are a few migration guides for Tokio, Hyper, Futures...

------
wiineeth
Is there anyone here who has used Rust instead of C++? What's your opinion on
it?

------
adgasf
Can anyone help explain whether Rust futures are hot (as in JavaScript, C++)
or cold (as in F#)?

~~~
steveklabnik
Cold.

------
trpc
Thank you Alex Crichton, Niko Matsakis and all other core devs, Rust is by far
the most well designed programming language I've ever dealt with. It's a
masterpiece of software engineering IMO.

------
bullen
How does Rust perform in parallel on the same memory?

I heard it uses locks?

This is not on the same memory right?
[https://news.ycombinator.com/item?id=21469295](https://news.ycombinator.com/item?id=21469295)

If you want to do joint (on the same memory) parallel HTTP with Java I have a
stable solution for you:
[https://github.com/tinspin/rupy](https://github.com/tinspin/rupy)

~~~
littlestymaar
What do you mean by “on the same memory” exactly?

If you want two threads running in parallel to concurrently access the same
memory location, you don't need synchronization if you only perform reads, but
you do need it if there is at least one write. Like in any other language
(this comes directly from how CPUs work).

The good thing with Rust is that you can't shoot yourself in the foot: you
can't accidentally have an unsynchronized mutable variable accessible from two
threads: the compiler will show you an error (unless you explicitly opt out of
this safety by using _unsafe_ primitives, in which case the borrow checker
will let you go).

~~~
bullen
I mean just what it says: I want two (or more) threads to write to the same
memory at the same time. This is a problem Java "solved"/"worked around" with
a complete memory model rewrite for the whole JDK and the concurrency package
in 2004 (1.5).
in 2004 (1.5).

The solutions range from mutexes to copy-on-write and more.

~~~
oconnor663
My understanding is that Rust is basically equivalent to C in this regard, in
terms of concurrent writes to the same memory being per se undefined behavior.
I think the JVM translates concurrent memory writes into hardware-appropriate
atomics, so maybe a better translation from Java to Rust would be a large
`Vec<AtomicUsize>` or something like that, rather than raw memory?

------
eeZah7Ux
The level of fanboyism in the comments is saddening. Many other fast and
productive languages have had async for a while.

~~~
pixel_fcker
Or maybe there are just a lot of people who like using rust because it fits
their use cases very well and are excited about the release of a big new
feature that’s been in development for a long time?

~~~
kgraves
>...because it fits their use cases very well and are excited about the
release of a big new feature.

a bit _too_ excited. GP isn't wrong, Go pretty much has the same thing and I
have never seen so much fanboyism for a single feature ever in my career.

I don't get it, but that might be because I am a manager.

~~~
monocasa
Go's is a little different. I can't run Go on something with 2 KB of RAM like
an Arduino, but Rust's async structure is actually extra helpful there.

~~~
0xdead
Can Rust's async even work on an Arduino or any bare-metal system?

~~~
steveklabnik
Yes. Right now the implementation requires TLS but that will be going away.

~~~
monocasa
To whoever downvoted Steve, he's talking about thread local storage, not
transport level security.

~~~
steveklabnik
Thanks for the clarification! I didn't even notice I was downvoted, and now
you're downvoted... I don't get it.

~~~
monocasa
HN be a harsh mistress.

