
TRust-DNS: implementing futures-rs and tokio-rs support - killercup
http://bluejekyll.github.io/blog/rust/2016/12/03/trust-dns-into-future.html
======
Perceptes
Great article. Very thorough. I'm really looking forward to Hyper (and in
turn, Iron, or whatever replaces it) being rebuilt on futures-rs and Tokio.
I'm eager to start using this approach in my HTTP programs.

~~~
bluejekyll
Yeah. It looks like it's in progress; I'm really excited for HTTP futures
as well. It will make it exceptionally easy to build highly performant web
servers.

~~~
steveklabnik
It is!
[https://github.com/hyperium/hyper/tree/tokio](https://github.com/hyperium/hyper/tree/tokio)

~~~
parley
It would be so great if Sean got to work full time on hyper, but of course I
understand that he is needed elsewhere at Mozilla and he's doing a great job
with the time he has. I would hazard a guess that there are so many besides me
waiting for the upcoming HTTP client/server improvements that it would
probably not be the worst idea if Mozilla (or some other org betting on Rust)
threw some money at it.

I'm extremely grateful to everyone working on Rust and its ecosystem (paid or
otherwise) for the work that you do. It sure sounds cheesy, but after trying
lots of stuff over the years, Rust feels like coming home. There, I said it.
I'm pushing it at my employer, and for every piece of tooling or important
crate that matures, it gets easier to evangelize.

~~~
steveklabnik
Yeah, I hear you. That said, money is being thrown, but at tokio itself,
rather than at hyper. You have to finish the lower bits before the higher
ones.

(also, <3)

------
steveklabnik
So! To recap the bits, and where this all stands now:

The end goal is to implement "Your Server is a Function"
[https://monkey.org/~marius/funsrv.pdf](https://monkey.org/~marius/funsrv.pdf)
, that is, built on top of three primitives:

1. Futures

2. Services, which are functions of Request -> Future<Response>

3. Filters, which take a Request and a Service, and return a
Future<Response>
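Those three primitives can be sketched in plain Rust. This is a deliberately simplified, synchronous stand-in: a plain Response where the real thing returns Future<Response>, and the Request, Hello, and Logger types are illustrative names of mine, not part of tokio-service:

```rust
// A minimal, synchronous sketch of "your server is a function".
// The real tokio-service trait returns a Future<Response>; a plain
// Response stands in here so the shape is visible without an executor.

struct Request { path: String }
struct Response { body: String }

// A Service is a function from Request to Response.
trait Service {
    fn call(&self, req: Request) -> Response;
}

// A leaf service.
struct Hello;
impl Service for Hello {
    fn call(&self, req: Request) -> Response {
        Response { body: format!("hello from {}", req.path) }
    }
}

// A Filter wraps a Service and can act before and/or after it --
// this one just logs the request path, then delegates.
struct Logger<S: Service>(S);
impl<S: Service> Service for Logger<S> {
    fn call(&self, req: Request) -> Response {
        println!("-> {}", req.path);
        self.0.call(req)
    }
}

fn main() {
    let svc = Logger(Hello);
    let resp = svc.call(Request { path: "/".to_string() });
    println!("{}", resp.body); // prints "hello from /"
}
```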

To do this, Tokio is built up of multiple packages that each have a focus.

At the lowest layer, you have two primitive libraries: futures and mio.
A future is a generic, no-allocation abstraction that looks something like
this:
[https://docs.rs/futures/0.1.6/futures/future/trait.Future.ht...](https://docs.rs/futures/0.1.6/futures/future/trait.Future.html)

    
    
      pub trait Future {
          type Item;
          type Error;
          fn poll(&mut self) -> Poll<Self::Item, Self::Error>;
      }
    

... and a bunch of more interesting methods built on top of poll. Futures, as
you can see, are generic over the kind of value they produce, as well as over
the possible errors while producing said value. poll drives the future forward
and should never block. Usually you don't call poll directly; you create a
Task, which represents a chain of futures, and tell it to run. It will then
handle doing the right thing, calling the right methods in the right ways.
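As a toy illustration of the poll model (simplified types of my own, with no error channel; the real crate adds combinators and proper Task wakeups rather than the busy-loop below):

```rust
// A stripped-down Poll/Future pair, for illustration only.
enum Poll<T> { Ready(T), NotReady }

trait Future {
    type Item;
    fn poll(&mut self) -> Poll<Self::Item>;
}

// A future that needs to be polled `n` more times before it's ready.
struct CountDown { n: u32 }
impl Future for CountDown {
    type Item = u32;
    fn poll(&mut self) -> Poll<u32> {
        if self.n == 0 {
            Poll::Ready(42)
        } else {
            self.n -= 1;
            Poll::NotReady
        }
    }
}

fn main() {
    // A naive driver that spins on poll. A real Task parks itself
    // instead, and the event loop wakes it when progress is possible
    // -- which is why poll itself must never block.
    let mut f = CountDown { n: 3 };
    loop {
        match f.poll() {
            Poll::Ready(v) => { println!("ready: {}", v); break; }
            Poll::NotReady => continue,
        }
    }
}
```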

The second component is mio. Mio is a low-level, asynchronous I/O library,
that gives you the standard event loop stuff. In other words, it's a very thin
wrapper around epoll/kqueue, and has an adapter to make iocp fit. There are
deeper reasons that readiness was chosen over completion as a model, but I
won't get into that right now.
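You can see the readiness model in miniature with nothing but the standard library: a non-blocking socket reports "not ready yet" via a WouldBlock error instead of parking the thread. mio's job is to layer an event loop on top, so you only retry once the OS reports the socket ready:

```rust
use std::io::ErrorKind;
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("127.0.0.1:0")?;
    socket.set_nonblocking(true)?;

    let mut buf = [0u8; 64];
    // Nothing has been sent to us, so under the readiness model this
    // returns immediately with WouldBlock rather than blocking.
    match socket.recv_from(&mut buf) {
        Ok((n, from)) => println!("got {} bytes from {}", n, from),
        Err(e) if e.kind() == ErrorKind::WouldBlock => {
            println!("not ready; an event loop would register interest and retry later");
        }
        Err(e) => return Err(e),
    }
    Ok(())
}
```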

So, the lowest layer of tokio proper is tokio-core, which combines futures and
mio to give you the ability to say "give me an event loop. Chain some futures
together. Run this chain on the event loop." But that's still a fairly low-
level interface. And it's what's being shown off here.

At the same level, tokio-service is what gives you the Service abstraction.

At the sort of middle layer, there's a bunch of libraries that you can use,
like tokio-proto, which gives a slightly higher-level interface for
implementing network protocols.

Finally, the unreleased tokio package combines this ecosystem into an easy-to-
use way to build servers; it's the whole package. This "tons of tiny packages"
approach means that if you want to extend tokio in some way, you pick the
appropriate level of the stack for your task, plug it in, and everything up
the chain can benefit. It's very modular and extensible.
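In dependency terms, that modularity means a project pulls in exactly the layers it needs; something like this hypothetical Cargo.toml fragment (crate names from the description above, versions illustrative):

```toml
[dependencies]
futures = "0.1"        # the Future trait and its combinators
tokio-core = "0.1"     # the event loop: futures wired up to mio
tokio-service = "0.1"  # the Service abstraction
tokio-proto = "0.1"    # mid-level building blocks for protocols
```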

In graphical form, check out this image:
[https://twitter.com/rustconf/status/774734062636249089](https://twitter.com/rustconf/status/774734062636249089)
which comes from this talk:
[https://www.youtube.com/watch?v=bcrzfivXpc4](https://www.youtube.com/watch?v=bcrzfivXpc4)

The key enabler here is Rust's zero-cost abstractions: last time we measured,
tokio-core had a very small (less than half a percent, IIRC) overhead compared
to writing a mio event loop by hand. And that's before significant profiling
effort has been done. So while in many languages, all of these layers would
add up to significantly reduced speed, the idea here is that in Rust, they
won't. A significant portion of this is zero-allocation or single-allocation,
for example.

~~~
loeg
> There are deeper reasons that readiness was chosen over completion as a
> model, but I won't get into that right now.

If you've got some time, would you go into those reasons a little bit? Thanks!

~~~
steveklabnik
Part of why I didn't go into it is because the post was already long, another
part is because frankly, it's not my area of expertise and I didn't want to
misrepresent it. Please remember that I'm not directly involved in tokio's
development, and so this is always my understanding, I might be wrong in
places, and you should ask someone on the team to be 100% sure. (The post
above is stuff I'm sure about.)

What I will say for sure is that it wasn't "hey we use Unix so we're doing
what we know and we'll tack on Windows support", it was made purely on
technical merits of the two models. I also know that it was related to
allocations.

At a high level, my understanding is that the situation is this: the unices
have a readiness model, Windows has a completion model. If you want a cross-
platform abstraction, you can't get around needing to map one onto the other.
So the question becomes "is it cheaper to simulate readiness with completion,
or completion with readiness?"

I _believe_ that the situation is something like this: the completion model
requires that you allocate a buffer up front, whereas the readiness model
doesn't. So to map readiness to completion, you'd end up allocating the
buffer, then doing your calls, then filling the buffer. But to map completion
to readiness, you can do a trick: make a call for a zero-byte read, and when
that comes back, allocate the buffer, and make another call. There's still a
small amount of overhead here, but it's less than the other direction.

Again, I might be wrong here.

oh also, shout out to wio, which is mio, but focusing on just windows:
[https://github.com/retep998/wio-rs](https://github.com/retep998/wio-rs) I
haven't kept up with it as much as this stuff, but I like the idea.

------
IshKebab
The thing I don't get is why Rust sockets have a `set_nonblocking()` function
at all. Shouldn't there just be separate `recv_blocking()` and
`recv_nonblocking()` functions? It would be much simpler. Sadly, you can't
even build your own interface like that, because there is no
`get_nonblocking()` function.

Also is HN ever going to support markdown?

~~~
steveklabnik
The standard library's abstractions tend to be relatively thin mappers over OS
functionality:
[https://doc.rust-lang.org/stable/std/net/struct.UdpSocket.ht...](https://doc.rust-lang.org/stable/std/net/struct.UdpSocket.html#method.set_nonblocking)

So, in my understanding, it has this interface because that's the interface
that the OS gives you.

Why couldn't you write this interface on top? You'd be writing that function
yourself, no?
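A sketch of that wrapper, tracking the mode yourself since std exposes no getter. The `recv_blocking`/`recv_nonblocking` names come from the comment above; they are not std APIs:

```rust
use std::io;
use std::net::{SocketAddr, UdpSocket};

// Hypothetical wrapper: remembers the current mode itself, and only
// calls set_nonblocking when the mode actually needs to change.
struct Socket {
    inner: UdpSocket,
    nonblocking: bool,
}

impl Socket {
    fn bind(addr: &str) -> io::Result<Socket> {
        let inner = UdpSocket::bind(addr)?;
        // UdpSocket::bind hands back a blocking socket by default.
        Ok(Socket { inner, nonblocking: false })
    }

    fn recv_blocking(&mut self, buf: &mut [u8]) -> io::Result<(usize, SocketAddr)> {
        self.set_mode(false)?;
        self.inner.recv_from(buf)
    }

    fn recv_nonblocking(&mut self, buf: &mut [u8]) -> io::Result<(usize, SocketAddr)> {
        self.set_mode(true)?;
        self.inner.recv_from(buf)
    }

    fn set_mode(&mut self, nonblocking: bool) -> io::Result<()> {
        if self.nonblocking != nonblocking {
            self.inner.set_nonblocking(nonblocking)?;
            self.nonblocking = nonblocking;
        }
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut s = Socket::bind("127.0.0.1:0")?;
    let mut buf = [0u8; 16];
    // Nothing was sent, so this fails fast with WouldBlock instead of
    // hanging the way recv_blocking would.
    match s.recv_nonblocking(&mut buf) {
        Err(e) if e.kind() == io::ErrorKind::WouldBlock => println!("would block"),
        other => println!("{:?}", other),
    }
    Ok(())
}
```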

------
markdog12
Any chance Rust is going to get async/await keywords?

~~~
steveklabnik
It's quite possible! And even if it doesn't, approaches like
[https://github.com/erickt/stateful](https://github.com/erickt/stateful) might
work as well.

