
Announcing Tokio 0.1 - steveklabnik
https://tokio.rs/blog/tokio-0-1/
======
koolba
As someone who hasn't actually used Rust for anything yet, the example on the
root of the site is surprisingly legible: [https://tokio.rs/](https://tokio.rs/)

Question for the more knowledgeable Rust folks: Does this code (the echo
server example) not handle the situation where the port is already in use?
What's the return type of the socket bind in that situation?

EDIT: Wow six responses in as many minutes. I think I've created a new
objective measure of how popular a programming language will be!

~~~
sidlls
This is actually something about the code samples in Rust docs that irks me a
bit.

My view is that sample code such as this should be as idiomatic as possible
and that means providing a sample demonstrating a _typical real use case_. So
seeing "unwrap" in this context doesn't sit well with me.

In fairness, this isn't specific to Rust. I find sample code like this in many
projects regardless of language. However Rust is billing itself as a safe
systems programming alternative to C and C++. It would help its marketing
efforts, in my opinion, by having more robust samples than the competition.
And let's be honest: given the competition is C and C++, that's a low, low bar
--and this comes from a guy who's about as big a fanboy of those two languages
as possible.

Edit for more disclaimer: documenting Rust code is a real pleasure. The team
has done a stellar job at making documentation an easy thing to do while
writing the code. It's already better in most cases than many other projects I
can think of.

~~~
steveklabnik
So, to be clear, unwrap is safe. It is never going to cause the sorts of
memory issues that an uncontrolled crash can.

That is, in terms of safety, this is the same thing as explicitly handling the
error and then terminating the process. Which is the only real way you're
going to handle this error anyway, unless you wanted some sort of retry logic.
Which you might!
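
For illustration, a minimal sketch of what handling the bind error explicitly (instead of unwrap) might look like, using std's synchronous `TcpListener` rather than tokio for brevity; port 0 asks the OS for any free port:

```rust
use std::net::TcpListener;
use std::process;

fn main() {
    // bind returns io::Result<TcpListener>; a port already in use
    // yields Err with ErrorKind::AddrInUse.
    let listener = match TcpListener::bind("127.0.0.1:0") {
        Ok(l) => l,
        Err(e) => {
            // Retry logic could go here instead of exiting.
            eprintln!("failed to bind: {}", e);
            process::exit(1);
        }
    };
    println!("listening on {}", listener.local_addr().unwrap());
}
```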

~~~
sidlls
I agree that in terms of safety it's equivalent. However, in terms of how an
actual program would be written, it's... less than optimal. I almost never
simply allow a panic-like termination in a program like this, if it can be
trapped, without doing _something_ else, even if it's just to spit out a
log/stdout message with a descriptive reason. Then again the plural of
anecdote isn't data, and maybe I'm the oddball here.

~~~
steveklabnik
> even if it's just to spit out a log/stdout message.

This will print an error out already.

    
    
      $ ./target/debug/tokiotest
      thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Os { code: 98, message: "Address already in use" } }', ../src/libcore/result.rs:837
    

Printing out a _better_ error might be helpful, though, I'll agree :) You
could use expect for that, which lets you supply your own panic message
easily.
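
A small sketch of that, again with std's blocking listener rather than tokio:

```rust
use std::net::TcpListener;

fn main() {
    // expect behaves like unwrap but panics with your message,
    // followed by the underlying io::Error.
    let listener = TcpListener::bind("127.0.0.1:0")
        .expect("could not bind; is the port already in use?");
    println!("listening on {}", listener.local_addr().unwrap());
}
```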

This really shows a fundamental tension though, in documentation. Is this
example supposed to be demonstrating error handling? Or just get you going?
Does adding more complex error handling distract from the point it's trying to
teach? These are sort of open-ended questions.

~~~
rat87
Wasn't there an attempt at one point to get a variation of main that returned
a Result (which would presumably print the error and set the exit code on
error)? Back before "?".

If Rust had that, examples would be even shorter, using ? instead of try! or
unwrap.
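
(This did eventually land, in Rust 1.26. A sketch of how it reads:)

```rust
use std::error::Error;
use std::net::TcpListener;

// With a Result-returning main, `?` replaces unwrap/try!: on Err the
// error's Debug form is printed and the process exits with a nonzero code.
fn main() -> Result<(), Box<dyn Error>> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    println!("listening on {}", listener.local_addr()?);
    Ok(())
}
```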

~~~
steveklabnik
Yes, and that discussion is still ongoing. The devil, as always, is in the
details.

~~~
JoshTriplett
`-> impl Trait` seems useful in that context, so that main could have a
variety of appropriate return value types: `()`, `i32` (for an exit code), or
`Result<T, E>` where T implements ReturnValue and E implements Error.

`quick_error` uses a trait like that to determine the exit code of `main()`,
allowing it to return either () or i32.
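
A hypothetical sketch of that trait idea (the names here are invented for illustration, not quick_error's actual API): a trait mapping main's return value to an exit code, implemented for both `()` and `i32`:

```rust
// Invented trait for illustration: turns a return value into an exit code.
trait ReturnValue {
    fn exit_code(&self) -> i32;
}

impl ReturnValue for () {
    fn exit_code(&self) -> i32 { 0 }   // () means success
}

impl ReturnValue for i32 {
    fn exit_code(&self) -> i32 { *self } // an i32 is the exit code itself
}

fn main() {
    assert_eq!(().exit_code(), 0);
    assert_eq!(3i32.exit_code(), 3);
}
```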

------
openasocket
This is all very impressive, especially since AFAICT Tokio is only about 5
months old and consists of tens of thousands of lines of code spread across a
half dozen different repositories. I'm anxious for the day when I can use Rust
at work. All I'm really missing to justify using it is a more mature
ecosystem, especially a full-featured, stable AWS SDK.

~~~
steveklabnik
Have you tried
[https://crates.io/crates/rusoto](https://crates.io/crates/rusoto) ? I
haven't, interested to hear what people say.

It was announced roughly in August, so yeah, five months. Though some work had
to happen for that initial release, of course. There was a joke though, about
this rapid development. Someone said something like "That they keep re-writing
tokio is really annoying as a user, but I guess if Rust is so productive that
you can throw out huge chunks and re-build it that quickly, well, that says
good things about the language." Of course, as the post says, things will be
more stable now.

~~~
curun1r
Rusoto seemed like a good effort last time I looked, but it didn't have very
full coverage of the AWS API. Amazon just has so many APIs that it's a huge
dev effort to implement it all and keep up with changes. This is a problem in
other languages where Amazon doesn't contribute to keeping the client up-to-
date. I'm hopeful that with macros 1.1, the situation will improve since it
will enable compile-time code generation based on the official json files in
botocore. Just add that repo as a git submodule and then:

    
    
      #[derive(Aws!("s3/2006-03-01/service-2.json"))]
      struct S3;
    

Becomes up-to-date with the latest changes from Amazon the moment they're
released (recompile required).

Procedural macros are really exciting with regard to writing/consuming APIs as
they enable both the client and the server interface to be implemented in an
API specification language (Amazon uses custom JSON, but Swagger/RAML could
work too) instead of Rust with zero performance penalty (gotta love those
zero-cost abstractions :-)

~~~
matthewkmayer
Thanks for the feedback.

We're tracking the missing services:
[https://github.com/rusoto/rusoto/issues/436](https://github.com/rusoto/rusoto/issues/436)
. There's plenty of work to do and we're concentrating on getting them
implemented. Sometimes progress is slow since it's a side project, but it's
still moving forward.

If we can make the derive statements work as your code snippet shows, I'd be
really happy. We'll take a look at how we can improve codegen when new
features are available. Some of our codegen is dated, using what was available
when it was written.

------
anonyfox
I really want to know whether the full-blown web framework that eventually
arises in Rust can still claim "zero cost abstractions", and whether that
would translate into basically "the most performant/efficient way to write a
web app".

~~~
steveklabnik
So, when what was to become tokio was first announced, we did a check to see
how it compared to writing mio by hand. The two were within 0.3% (not a typo,
one third of a percent) of each other. That was before any optimization work
was done.

One of the core premises of tokio is that this compiles down to the state
machine you'd have to write by hand if you wanted to do asynchronous stuff. So
yes, this is very much intended as a zero-cost abstraction.
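
A hedged illustration of that premise: the hand-written equivalent of a chained async operation is a plain enum state machine, roughly like this (simplified; real futures also carry data and wakeups between states):

```rust
// Each step of the async operation becomes a variant; "polling"
// advances the machine one step at a time.
enum ConnectState {
    Resolving,
    Connecting,
    Done,
}

fn poll_step(state: ConnectState) -> ConnectState {
    match state {
        ConnectState::Resolving => ConnectState::Connecting,
        ConnectState::Connecting => ConnectState::Done,
        ConnectState::Done => ConnectState::Done,
    }
}

fn main() {
    let s = poll_step(poll_step(ConnectState::Resolving));
    assert!(matches!(s, ConnectState::Done));
}
```

The claim is that combinator chains compile down to something like this enum, with no allocation or indirection added on top.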

~~~
anonyfox
Yes, I got that. So to build a "small" web framework like express.js, one
could build a routing layer based on macros (like Phoenix/Elixir or rocket.rs
do), plus JSON-serialization (Serde might do it) and a compile-time HTML-
templating-engine, all of those leveraging zero-cost abstractions once Macros
1.1 land in stable, right?

Then stuff like spinning up 1 OS-thread for every CPU core and scheduling
requests in round-robin (?) style, and then wrap it all up into a developer-
ergonomic thing.

Since you probably have a way better insight into rust-dev and the ecosystem,
can we stay tuned that this eventually will happen or are there still
showstoppers somewhere?

~~~
steveklabnik
Yup! So for the next step up, see this example of using hyper with tokio:
[https://github.com/hyperium/hyper/blob/tokio/examples/server...](https://github.com/hyperium/hyper/blob/tokio/examples/server.rs)

You could see how that match block functions as a router...
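
For illustration, a hedged sketch (plain Rust, no hyper dependency, types simplified) of how matching on the request path acts as routing:

```rust
// A toy "router": map a path to a (status code, body) pair.
fn route(path: &str) -> (u16, &'static str) {
    match path {
        "/" => (200, "home"),
        "/about" => (200, "about us"),
        _ => (404, "not found"),
    }
}

fn main() {
    assert_eq!(route("/"), (200, "home"));
    assert_eq!(route("/missing"), (404, "not found"));
}
```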

I was literally playing around with this last night. There's a lot of
experimentation going on in the server-side Rust web framework space right
now, and I expect it to heat up even more now that tokio has had a release.

> once Macros 1.1 land in stable, right?

So to be clear, macros 1.1 gets you custom Derive, which is Serde/Diesel.
It'll be stable in the next release in ~3 weeks. It won't get you the full
ability to use any custom attribute, like
[https://rocket.rs/](https://rocket.rs/) uses.

> Then stuff like spinning up 1 OS-thread for every CPU core

This is not implemented in tokio yet, but in my understanding, it's coming.

~~~
anonyfox
> a lot of experimentation going on in the server-side Rust web framework
> space

This alone is awesome news! Having worked with literally dozens of web
frameworks, I have a sincere interest in seeing what emerges from the Rust
space. I can't pinpoint or prove it, but many developers seem to care about
doing stuff "the right way" instead of getting something out the door that
maybe works OK. I really look forward to the "this is it"/"we couldn't
possibly solve this better" moments that might arise from such
experimentation.

> macros 1.1 gets you custom Derive, which is Serde/Diesel

... basic building blocks for all things web I'd say. Looks like the next
stable release will arrive just-in-time for things to emerge.

And then there might be the next wave of rust-users/developers that want to
built upon such rock-solid foundations, like me. Really, I'm excited like I
haven't been in many years!

~~~
jononor
Really minor thing, but I for instance like how in Hyper.rs types are used to
enforce (at compile time) that one does not try to write headers after having
started or sent the body of a response.
[http://hyper.rs/hyper/v0.10.0/hyper/server/index.html#an-aside-write-status](http://hyper.rs/hyper/v0.10.0/hyper/server/index.html#an-aside-write-status)
Good sign that we are starting to leverage the compiler to verify such
invariants.
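
A sketch of the typestate idea being described (simplified, not hyper's actual API): header writing is only available before the body starts, enforced by consuming the value.

```rust
// Response before the body has started: headers may still be added.
struct Fresh {
    headers: Vec<(String, String)>,
}

// Response after the body has started: no header() method exists here,
// so writing headers now is a compile-time error, not a runtime one.
struct Streaming {
    headers: Vec<(String, String)>,
    body: String,
}

impl Fresh {
    fn new() -> Fresh {
        Fresh { headers: Vec::new() }
    }
    fn header(mut self, k: &str, v: &str) -> Fresh {
        self.headers.push((k.to_string(), v.to_string()));
        self
    }
    // start() consumes the Fresh value; only body writes remain.
    fn start(self) -> Streaming {
        Streaming { headers: self.headers, body: String::new() }
    }
}

impl Streaming {
    fn write(mut self, s: &str) -> Streaming {
        self.body.push_str(s);
        self
    }
}

fn main() {
    let resp = Fresh::new()
        .header("Content-Type", "text/plain")
        .start()
        .write("hello");
    assert_eq!(resp.body, "hello");
}
```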

------
jononor
That is a serious amount of documentation for a 0.1 release.

~~~
wyldfire
The rust community suffers here from humility and a modest default (`cargo
new` puts you at 0.1.0 by default).

~~~
JoshTriplett
The Rust community also takes semver quite seriously. 0.1.0 can break API
compatibility when it moves to 0.2.0, so prototypes that want to iterate on
API stay pre-1.0 until they feel confident in their API. The Cargo ecosystem
has shockingly few 2.x or 3.x versions.

~~~
kzrdude
When you say this, what experience are you comparing to? Any particular
package ecosystems, or a general rule? Thanks

~~~
JoshTriplett
By comparison to the long tail of the C library ecosystem (SONAME handling),
and the package ecosystems of many other languages, most of which do not use
sufficiently well-defined versioning schemes to allow expressing dependencies
like "1.4 or any compatible version". In other languages, I've done the
equivalent of "cargo update && cargo build" and encountered build errors due
to API changes. In Cargo, I've found that exceptionally rare.
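
For illustration, the Cargo side of this (crate names here are just examples): a bare version requirement has caret semantics, so "this or any compatible version" is expressible directly:

```toml
[dependencies]
# "1.4" means ^1.4: any version >= 1.4.0 and < 2.0.0.
some-crate = "1.4"
# Pre-1.0, the compatible range is narrower: "0.1" means
# >= 0.1.0 and < 0.2.0, so a 0.2 release may break APIs.
tokio-core = "0.1"
```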

------
JoshTriplett
I really look forward to the first production web framework based on futures.

~~~
estsauver
Scala's play framework is pretty good if you can live on the JVM. I've been
pretty happy with it.

~~~
JoshTriplett
To clarify, I was specifically looking forward to the first production _Rust_
web framework based on futures. I'd like to write web applications in Rust, to
integrate with many other libraries and parts of the ecosystem, and I look
forward to doing so via a futures-based framework that integrates well with
other futures-based Rust code.

------
arthursilva
Amazing work from the contributors, kudos!

------
z3ugma
I'm getting an SSL / certificate error...anyone else?

~~~
steveklabnik
Someone on Reddit is having the same issue. DNS was changed 12 hours ago, but
possibly has not propagated to you yet? Sorry about that!

------
EugeneOZ
Would be interesting to read story about naming :)

~~~
steveklabnik
"Tokio" is one way to romanize 東京, the capital of Japan. [1] "Tokyo" is more
accurate in a sense, and so is more popular today, but doesn't contain the I/O
pun. The logo also references its metropolitan crest. [2].

1:
[http://english.stackexchange.com/a/207016](http://english.stackexchange.com/a/207016)

2:
[https://en.wikipedia.org/wiki/Symbols_of_Tokyo](https://en.wikipedia.org/wiki/Symbols_of_Tokyo)

~~~
wccrawford
It's a really odd way, though, since "kyo" is a single syllable in Japanese
and not pronounced "kee oh" like most English speakers do.

~~~
steveklabnik
I edited my post with a link to some history, you might want to check it out.
As someone with very limited 日本語 skills, I find this stuff very fascinating.
(And yes, un-learning kyo vs kee-ooh was tough...)

~~~
wccrawford
Yeah, I think I still say it wrong sometimes. To me, it's a word I learned in
English, and so I think of it the English way. In that way, loan-words are
troublesome in both English and Japanese for words from both languages.

Don't even get me started about learning French words in either of them and
then learning their pronunciation in the other, and then the actual French. As
an example, the Atelier game series.

~~~
grzm
I don't know if it's entirely correct to call it wrong. The nearer a word
comes to its equivalent in its source language, the higher the expectation
that it's pronounced the same as in the source language. This expectation is
arguably correct when the word is not commonly used in the host language. Some
words are very different: Germany/Deutschland, Florence/Firenze. Some are
closer, even having the same spelling: Mexico/Mexico. I don't think it's
wrong per se to pronounce a word the way it's typically pronounced in the
language you're speaking.

------
shmerl
Interesting usage of Serbian domain (.rs).

~~~
steveklabnik
They're pretty common in Rust, since that's the filename extension.

------
wildchild
Instead of adding concurrency to rust adding javascript-like hell to it.
Brilliant.

~~~
fooyc
They don't want to add abstractions to the language (e.g. Green threads). So,
they are doomed to add abstractions to the libs instead, and one way to do
cheap concurrency without the help of the language is to use async I/O.

And by cheap I mean cheaper than OS threads. The reactor pattern still adds a
lot of CPU overhead to each I/O operation.

What's interesting is that Rust had green threads at one point. They
implemented that by making std::io async-capable under the hood. But they
didn't like it: too much abstraction cost, and it made adding more native
I/O features difficult.

So they ditched it, and now they are doing exactly the same thing to I/O, just
in user space, and without green threads.

~~~
mercurial
> And by cheap I mean cheaper than OS threads. The reactor pattern still adds
> a lot of CPU overhead to each I/O operation.

You need to pay for concurrency one way or another; there is always going to
be some bookkeeping overhead. Whether you pay the price in your language's
runtime (like Go) or in a library doesn't mean you're getting it for free.
However, it looks like the design of Rust's futures will let it reach very
high throughput.

------
zimbatm
The term "zero-cost abstraction" is a bit annoying. By definition an
abstraction has an obfuscation and compilation cost even if it doesn't add to
the runtime cost. Too many of these and the program is no longer
understandable and takes hours to compile.

~~~
p0nce
I wish the term wouldn't exist.

I've measured "zero-cost abstractions" that weren't quite so when profiled.

What it really means is "if the inlining goes like the programmer expects,
then it should have no cost". The term expresses hope.

But inlining is best left to the compiler, and hope is of little value until
measurement is done with a profiler.

~~~
stymaar
In Rust you can tell the compiler to always inline a given function. [1]

But you don't necessarily want to do that since you may not make the most
efficient use of the CPU cache.

[1] [https://doc.rust-lang.org/reference.html#inline-attributes](https://doc.rust-lang.org/reference.html#inline-attributes)
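
A minimal sketch of those attributes; they are hints/directives to the optimizer and don't change what the code computes:

```rust
// #[inline(always)] asks the compiler to inline this at every call site.
#[inline(always)]
fn square(x: u32) -> u32 {
    x * x
}

// #[inline(never)] forbids inlining, e.g. to keep code size down.
#[inline(never)]
fn cube(x: u32) -> u32 {
    x * square(x)
}

fn main() {
    assert_eq!(square(4), 16);
    assert_eq!(cube(3), 27);
}
```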

------
fooyc
Asynchronous programming is hard, even when hidden behind abstractions like
futures or promises.

It was cool and trendy when libevent and Nodejs came out, but now we should
all have the experience and knowledge that tell us to stop doing asynchronous
I/O.

Go, Haskell, and Erlang do a great job at concurrency, without the async I/O
craziness.

~~~
zzzcpan
No, this is not true, and you got a couple of things wrong. Erlang's model is
more like asynchronous event-driven programming on steroids, while idiomatic
Go is synchronous and is merely traditional multithreading, which is known to
be unusable for non-trivial concurrent problems. Here's the thing, though:
people don't really understand these things; they always want something easy
that solves a simple problem they have in mind, and always fail to grasp how
much flexibility they sacrifice and how much harder or even impossible it
will get for more complex problems. And believe me, if you get into
concurrency you're gonna have a lot of problems that are very hard or even
impossible to solve synchronously. The world is an asynchronous place.

~~~
ecnahc515
Go certainly does not do traditional threading. It has goroutines, which are
lightweight "threads" that get multiplexed on top of real OS threads. It's
basically implementing async I/O at the runtime level.

~~~
stymaar
In terms of programming interface, you deal with them as if they were
traditional threads, with the usual pros and cons (pro: all your code is
sequential, so it's intuitive to understand what the programmer wanted to do
in a specific scenario; con: data races are hiding in every corner :/).

Being a green thread with a growing stack is an implementation detail for the
developer writing Go code.

