
Rust 1.0: Status report and final timeline - steveklabnik
http://blog.rust-lang.org/2015/02/13/Final-1.0-timeline.html
======
AlyssaRowan
Rust looks interesting.

One thing I'm not clear on, and that I'm interested in, is whether it can do
secure destructors.

Say I'm handling crypto, and I'm carting around an ephemeral key. When this
goes out of scope, I definitely no matter what, want this _zeroised_ by its
destructor - as opposed to just having it (or a temporary copy made by a
compiler optimisation!) lingering around the heap, stack, or forgotten unused
xmm registers because the compiler figured that since I don't reference it
again, the memory's contents no longer matter.

Current approaches to this involve explicit_bzero(), or other similar
memset(0)-and-I-really-mean-it-don't-optimise-this-out techniques. (And a fair
bit of testing and prayer when it comes to potential temporary copies or
registers.) But unless you're doing it in assembly language, you don't really
_know_. (The stack beneath you, such as the OS, any hypervisors, SMM, AMT,
SGX, µcode etc, aside, of course!)

I'm not quite clear what Rust's behaviour with this scenario is. If it can do
this easily, even potentially, I am _very_ interested…?
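For what it's worth, a zeroing destructor can at least be sketched in (post-1.0) Rust with `Drop` plus volatile writes. This is a hedged illustration, not a guarantee: it does not address the register and temporary-copy concerns raised above, and `EphemeralKey` is a made-up example type.

```rust
use std::ptr;

// Best-effort wipe: volatile writes make it much harder for the optimizer
// to delete stores to memory it believes is dead. This does NOT cover
// copies left in registers or in other stack frames.
fn zeroize(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        unsafe { ptr::write_volatile(b, 0) };
    }
}

// Hypothetical wrapper for an ephemeral key.
struct EphemeralKey {
    bytes: [u8; 32],
}

impl Drop for EphemeralKey {
    // Drop runs deterministically when the key goes out of scope.
    fn drop(&mut self) {
        zeroize(&mut self.bytes);
    }
}

fn main() {
    {
        let _key = EphemeralKey { bytes: [0xAB; 32] };
        // ... use the key ...
    } // `drop` fires here and wipes the buffer.
}
```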

~~~
pslam
This keeps coming up, but I think it's a _very, very bad idea_. It's false
security.

If you are running in an environment where you don't trust code running in the
same compartment/sandbox/process, then it's futile to zero out memory. The
caller could have prepared things such that the memset doesn't work, if the
key material went somewhere else.

If you ever find yourself thinking you need to do this, what you actually
need is a helper process whose sole purpose is to perform primitive
operations with sensitive key material.

Particularly as Rust is already a "safe" language - it doesn't even make sense
to zero memory which by definition another piece of code can't access. Unless
there's declared "unsafe" code lying around, but you wouldn't put that in the
same process, would you? At which point, what are you even protecting against?
If an in-process threat is that advanced, then you're not achieving anything.

~~~
duaneb
Well, Heartbleed is a fiasco that would have been avoided by this: a
vulnerability that Rust shares without these secure destructors.

~~~
mbrubeck
No, it wouldn't. For example, the "Heartbleed in Rust" blog post [1] re-used a
buffer without freeing it. No destructor runs in between the two uses, so a
zeroing destructor could not possibly prevent the bug.

Maybe zeroing destructors make sense as defense-in-depth, but I don't see how
they can fix a Heartbleed-style exploit in Rust. In code where the buffer is
freed and its destructor runs, Rust's memory safety guarantees already prevent
it from being accessed after free. In vulnerable code that just uses the same
buffer twice, the destructor never has a chance to run so its behavior doesn't
matter.

The _real_ Heartbleed vulnerability (CVE-2014-0160 in OpenSSL) involved
reading into uninitialized memory in a newly-allocated buffer, which safe Rust
code already prevents [2].

[1]: [http://www.tedunangst.com/flak/post/heartbleed-in-rust](http://www.tedunangst.com/flak/post/heartbleed-in-rust)

[2]: [https://news.ycombinator.com/item?id=8984169](https://news.ycombinator.com/item?id=8984169)
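As a small illustration of the distinction drawn above (a hedged sketch, not the OpenSSL or blog-post code): safe Rust gives you no way to conjure an uninitialized buffer, so every byte you can read is a byte you chose, while buffer *reuse* never triggers a destructor at all.

```rust
fn main() {
    // Allocating a "new" buffer in safe Rust forces initialization,
    // so there is no stale heap memory to leak (the real Heartbleed case).
    let response = vec![0u8; 1024];
    assert!(response.iter().all(|&b| b == 0));

    // The reuse bug is different: the *same* buffer is used across
    // requests without being freed, so no destructor runs between the
    // two uses. A zeroing destructor would never fire here.
    let mut reused: Vec<u8> = Vec::new();
    reused.extend_from_slice(b"secret");
    reused.clear(); // len = 0, but capacity (and old bytes) are kept
    assert_eq!(reused.len(), 0);
}
```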

~~~
pslam
Thanks - that's a good example of what I was trying to convey.

The point is Rust already provides safety guarantees. If you don't trust the
runtime, then why would you trust the built-in zero'ing? I get the "defense in
depth" argument, but it feels a bit like doing this:

    
    
        {
          int a = secret;  // Get secret.
          assert(a == secret);  // Check "a" is actually that.
          a = 0;  // Ensure "a" is zero'd on exit.
          assert(a == 0);  // Just because.
        }
    

And yes, I get that you can build this into the language so it's not quite as
ridiculous - you actually wipe tainted stack, for example.

But the point is: the runtime has an ABI and a machine model. Information is
allowed to leak across function boundaries because _it doesn't matter_.
Without using the "unsafe" keyword, there are no methods of getting around
the machine model and dipping into the underlying actual machine.

Even if you don't have a "safe" language and runtime, it's still of limited
value. It protects against threats involving data or control-flow corruption
_after_ key usage, and only where there isn't sufficient control of the
program to perturb the secret-consuming functions. That's more of an
annoyance than prevention. On the other hand, it gives the programmer a false
sense that secrets are being properly wiped.

~~~
erickt
It is very probable that a sufficiently smart optimizer could see that the
assertion was always true and delete it, then see that no one reads "a" and
delete that as well. In certain circumstances this can leave a secret behind
in, say, a register, making our safe function unsafe. You need to be very
careful writing secure code, and probably need to go down to the level of
writing assembly to be sure the optimizer isn't turning your safe code into
unsafe code.

We actually have an interesting project in Rust where someone is writing a
syntax extension to take Rust-like code and generate assembly [0]. It's
probably unsafe to use right now, but if sufficiently well implemented it
could be the foundation of a lot of interesting cryptography work.

[0]: [https://github.com/klutzy/nadeko](https://github.com/klutzy/nadeko)

~~~
pslam
Sorry, I wasn't clear enough that my code was intended as sarcasm. It's
obviously silly to zero variables, because the compiler is free to ignore
you. The point is, the underlying machine is going to do the same.

There are many ways to dig out stale memory if you're running at sufficient
privilege: direct cache introspection or bypass, for example. Zeroizing alone
is not strong enough to mitigate the threats people imagine it works against.

------
jarrettc
Even though 1.0 isn't out yet, today you can use Rust for many real projects.
I'm unsure whether I'd bet my business on it yet, but I'd be open to the idea.
And I'm usually a very late adopter.

I've been building a 3d game with Rust and OpenGL, ported from a C++ codebase.
So far, my experience has been very positive. Despite Rust's supposed
immaturity, it feels more polished than C++ in many ways. Forward progress has
been much faster than it was with C++.

Does anyone else have a story (positive or negative) about using Rust in real
projects?

~~~
MichaelGG
I'm betting my business on it. I've written a network search engine in F# and
it's in use on the VoIP arm of one of the public telcos. VoIP can generate
terabytes of _signalling_ data a day, even while not making much money. (In
wholesale, many calls simply don't complete, so you've got a huge amount of
data and transactions that pay you $0.)

The challenge with F# is controlling memory usage. Even one extra allocation
per packet can make a measurable difference in performance. I ended up doing
a ton of unsafe code and manually managing most of the heap. Rust allows me to
write fairly high-level code (not as expressive as F# yet but whatever) while
getting "best" performance. Inline asm is a bonus, as there are some
algorithms for integer compression that can use SIMD for big wins (I can do
that in .NET, but it's ugly, and doing it safely means a ~30-instruction
thunk). And sometimes in tight loops, I've found it difficult to get .NET to
do acceptable codegen, causing double-digit-percent impacts.

There are also safety issues with writing unsafe code for network-exposed
traffic. So Rust is actually _more_ safe than .NET, because I have to toss
.NET's safety to gain performance.

The backend management code I can continue to write in F#, and Rust's
C-compatibility means it's trivial to interop the code. So I can do
"orchestration" of indexing daemons and management APIs and such things in a
higher-level language, then for actual indexing and whatnot, just jump over to
Rust, seamlessly.
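The C-compatible interop described here can be sketched as follows. This is an illustrative example (the function name and byte-summing body are invented, not the poster's code); a real F# P/Invoke declaration would mirror the signature.

```rust
use std::slice;

// Exported with an unmangled name and the C ABI, so .NET (or C, or F#)
// can call it directly. `index_packet` is a made-up example name.
#[no_mangle]
pub extern "C" fn index_packet(data: *const u8, len: usize) -> u64 {
    // Safety contract: the caller passes a valid pointer/length pair.
    let bytes = unsafe { slice::from_raw_parts(data, len) };
    bytes.iter().map(|&b| u64::from(b)).sum()
}

fn main() {
    let pkt = [1u8, 2, 3];
    assert_eq!(index_packet(pkt.as_ptr(), pkt.len()), 6);
}
```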

Finally the static compilation means a smoother installation experience for
customers. And if I ever ship a closed-source module that executes on the
client, I don't need to license Mono for static linking. So that's nice. And
the safety guarantees are good, because similar, existing, software in C has
put customers at risk before. (I'm not sure if I can effectively market that
last part, but hey.)

Rust would appear to have a unique value proposition and I'm very pleased to
see it progressing so damn well.

~~~
omega_rythm
Did you give a shot at other languages in the same category as F#, like
Haskell, Ocaml, Clojure or Elixir?

~~~
MichaelGG
To replace the F# side of things? F# is pretty unique as far as
performance/language/tooling goes. It's outclassed in specific cases, but
overall it's a great package.

The alternatives you listed aren't known for supporting top-performance
idiomatic code (I've got something _working_ in F#, but it's ugly,
non-idiomatic code). The overhead of a GC is just too much to pay when doing
line-rate networking. Rust lets me keep a nice, high-level, idiomatic style
without paying any overhead. I can account for almost every byte.

------
drobilla
I'm very happy to see Rust stabilize, about time we get a systems(ish)
programming language with a half decent type system. With that said... I need
to get some bikeshedding off my chest:

I hate to let such a triviality lower my enthusiasm for a language so much,
but I just cannot get over that awful, inconsistent closure syntax :/

I don't get it. Almost everything else has a nice, unique keyword syntax: fn
uses (args, in, parentheses), the proc syntax made consistent sense, but then
lambda is this crazy || line-noise thing that doesn't fit in at all. The
"borrow the good ideas from other languages" approach has resulted in a great
language, but "cram in random syntax from other languages that doesn't fit"
doesn't work out so well.

~~~
pcwalton
The natural thing to want in a C-like syntax is the "arrow function" closure
syntax (like ES6 or C#), but that required too much lookahead to parse. Having
a keyword discourages functional style, which would be a shame in a language
with a powerful iterator library. So Rust went with the Ruby/Smalltalk-style
bars, which are nice, concise, and easy to parse.
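For readers who haven't seen the bar syntax, a minimal sketch of how these closures compose with iterator adapters (shown in post-1.0 syntax):

```rust
fn main() {
    // Bar-delimited closure, concise enough to use inline:
    let evens: Vec<i32> = (1..10).filter(|n| n % 2 == 0).collect();
    assert_eq!(evens, vec![2, 4, 6, 8]);

    // A named function works too; the closure form just avoids the ceremony.
    fn is_even(n: &i32) -> bool {
        *n % 2 == 0
    }
    let same: Vec<i32> = (1..10).filter(is_even).collect();
    assert_eq!(same, evens);
}
```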

~~~
alextgordon
It seems like the human parser should be given priority over the computer
parser, when considering what is easy and what is hard. The machines work for
us!

~~~
gnuvince
As far as I know, Rust now has an LL(1) grammar, which means that writing
parsers for it can be done by hand (or with the more powerful LALR(1) and
LR(1) parser generators). This is very important for humans too, because it
means more people are likely to write tools to process Rust code. If you hope
to have automatic indentation, auto-completion, refactoring, formatting
tools, etc., keeping the syntax simple is really important.

~~~
nostrademons
Can't they just expose the parser as a library? Actually, it looks like they
did, with the rustc crate.

Hand-writing a parser for some other language leads to madness - just ask the
folks who've done SWIG, GDB, or most IDE syntax-checkers. You'll inevitably
get some corner-cases wrong, or the language definition will change underneath
you long after you've ceased to maintain the tool. Instead, the language
should just expose its compiler front-end as a library, and then you can
either serialize the AST to some common format for analysis outside the
language or build your tools directly on top of that library.

~~~
jpgvm
You missed the whole point.

By making the language simple you can easily implement your own parser. This
opens up the ability to write native parsers in other languages, say
Vimscript. By keeping it super simple there _are_ no corner cases.

There are many benefits to this (like the formatters etc that others have
alluded to) from things like IDE integration (imagine lifetime elision
visualisation, invalid move notifications, etc) static analysis tools and
more. None of these tools then need to be written in Rust. It also means it's
easier to implement support in pre-existing multi-language tools.

Don't underestimate the necessity of a simple, parseable grammar. Besides,
people have endured much worse syntactic slights (see: Erlang).

~~~
nostrademons
The vim formatters/syntax checkers I've used that actually try to parse the
language - other than Lisp, which is the limiting case - are generally
terrible. They all miss some corner case that makes them useless for daily
work, since they generate too many false-positives on real code.

The ones I actually use all call out to the actual compiler - Python, Go, or
Clang for C++.

Just because people write their own parsers doesn't make it a good idea. It
may've been necessary when most compilers were proprietary and people didn't
have an idea how to make a good API for a parser. But now - just don't do it.
You'll save both you and your users a lot of pain.

------
netcraft
From the outside looking in, I am mostly impressed with the governance
structure of the whole endeavor. It seems to me that it is a great model for
other open source projects.

Edit: as someone involved with other, less mature (and less ambitious) open
source projects, if you know of pain points in the governance of rust, i'd be
interested in learning about them.

~~~
codys
There's a governance structure? That's news to me.

I was under the impression that a few primary contributors (mostly/all mozilla
employees?) are gatekeepers to merging anything.

Having an "RFC" issue tracker isn't the same as having a governance structure.

Edit: I suppose you could call the above a 'governance structure', but I'm
having a hard time seeing anything impressive/different about it from other
open source projects

~~~
Gankro
In addition to what aturon said, for actual patches these are the people who
decide on merging: [https://github.com/orgs/rust-lang/teams/rust-push](https://github.com/orgs/rust-lang/teams/rust-push)

Maybe a third to a half are Mozilla employees (although it's infamously hard
to tell who actually works at Mozilla and who is just weirdly into
maintaining Rust).

~~~
dbaupp
FWIW, the rust-push team doesn't match the set of reviewers (the people whose
approval the integration bot bors will react to by merging a PR). Being on
that list offers powers like issue tagging and the ability to push to the
'try' branch, but manually merging a PR or pushing straight to master is
essentially banned (and would be reverted immediately).

~~~
Gankro
Huh. I thought it was basically a bijection, though?

~~~
dbaupp
There is an injection from the set of reviewers to the rust-push team (or, at
least, there should be), but e.g. people have gotten privs because they've
been doing a lot of triage or need try-push access, without having review
powers.

------
dkhenry
I am going to be happy when I can program in Rust and not spend the first
hour making all my code work with the latest compiler changes. I really do
like what Rust is offering, but tracking HEAD has been difficult.

~~~
breckinloggins
Me too! I have yet another NES-emulator-in-rust project [1] and I've been
hesitating on picking it back up because I only want to convert to "new rust"
once.

It looks like April will be a good month.

[1]
[https://github.com/breckinloggins/rusticom](https://github.com/breckinloggins/rusticom)

------
haberman
I saw that integer overflow has been revised: it now defaults to checking
overflow in unoptimized builds. I got a little nervous about this when reading
the performance-related objections of @thestinger here:
[https://github.com/rust-lang/rfcs/pull/560](https://github.com/rust-lang/rfcs/pull/560)

At least optimized builds aren't affected, but it sounds like a lot of code
(including the Rust nightlies) isn't built optimized.
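For code that wants a specific overflow behavior in every build profile, the standard library's explicit integer methods sidestep the debug-only check entirely. A brief sketch (post-1.0 method names):

```rust
fn main() {
    let a: u8 = 250;

    // These behave identically with and without debug assertions:
    assert_eq!(a.checked_add(10), None);       // overflow reported as None
    assert_eq!(a.wrapping_add(10), 4);         // two's-complement wraparound
    assert_eq!(a.saturating_add(10), u8::MAX); // clamps at 255
}
```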

~~~
nikomatsakis
The final draft made an effort to address those concerns -- but also they are
completely inapplicable to optimized builds, as kibwen points out. (And if you
care how fast your code runs, you really _do_ want optimizations...)

~~~
haberman
AFAICS these concerns apply even to optimized builds unless they are also
"ndebug", right?

------
moonchrome
I'm excited by what Rust will eventually bring to the table if it ever gets
popular - a higher level C++ (tools for writing safe native code) replacement
without the legacy crap (header files...)

At the same time I don't think I would use Rust 1.0 in production for two
reasons :

* the language doesn't seem to be mature enough to be highly productive; e.g. the type system isn't powerful enough to express stuff like Iterable or Vector<T, N>, and I'm sure there is plenty of tedious stuff like that, along with pains from the ownership system

* tools and libs are obviously not there

So I guess I'll wait for early adopters to write the libs and give feedback on
their painpoints to the devs.

I've said this before - I think Rust 1.0 is something that I could use (ie.
working and stable) but I don't think it's something that I'd want to use yet.

------
gtaylor
I haven't been tinkering with Rust, merely keeping an eye on it until 1.0
lands. For those who are currently more in touch with the situation, does
this release date look realistic without quality suffering? Is the 1.0
release premature, aggressive, or very realistic?

It seems like the standard library has seen a ton of work in the last month or
two. I'm surprised at how aggressive the release schedule is, given how things
are still churning a good deal.

~~~
kibwen

      > For those who are currently more in touch with the situation, does this 
      > release date look realistic, without quality suffering? Is the 1.0 
      > release premature, aggressive, or very realistic?
    

I think it's aggressive, but I don't think it's unrealistic. Personally I had
hoped for a late June/early July release to give more time to solidify the
docs and to shake out bugs in the compiler.

There will definitely be people who say that the release is premature, but I'm
personally not one of them. The core of the language is ready, even if some
pieces around the edges could still use some refinement (and will see
refinement, backwards-compatibly, in the coming releases).

    
    
      > I'm surprised at how aggressive the release schedule is, 
      > given how things are still churning a good deal.
    

You'd be surprised at how much of a motivator a concrete release date is. :)
Churn is happening now _because_ of the impending release, not despite it. The
language intends to have a solid compatibility story (via semver) for post-1.0
releases, so everyone who'd been holding off on changes for the past few years
has suddenly come out of the woodwork to implement them.

~~~
gtaylor
That's reassuring to hear. Thanks!

------
q2
Is it possible to estimate the amount of manpower and time required to
develop a new language from scratch until it is stable and reasonably
production-ready? Adoption of a language is a different topic, since it
depends on users.

Rust and Go are two reasonably new languages. I understand the scope and
priorities of each language may be different, but my idea is to get some
approximation/rule of thumb before starting a similar journey.

As per Github and wikipedia:

Number of contributors for Rust and time taken so far: 840, and 2 years.
Number of contributors for Go language and time taken so far: 424 and 6 years.

Financial details are not known.

It seems developing new language and bringing it to reasonable level is not
trivial effort.

1\. Is the above data correct, i.e. are those contributors working full-time
on those languages? Is it the full-time job of those people?

2\. Can we get details like the number of developers/test engineers/
documentation writers, etc.?

3\. Is it possible to know the total amount of financial resources consumed so
far in the effort?

4\. Is there any research into resources required for new language development
in terms of man power, time, financial resources for various languages?

It is fascinating to see a new language developed in front of us.

~~~
Artemis2
Both Google and Mozilla have teams dedicated to their languages, but they
represent a very small portion of the total number of contributors to the
language. I couldn't find exact lists of the team members inside both
organizations.

In terms of volume of contributions, for Go, Google employees are by far the
most active:
[https://github.com/golang/go/pulse](https://github.com/golang/go/pulse). In
this graph, the 7 top contributors to the project are Google employees. The
Rust pulse graph shows the same trend ([https://github.com/rust-
lang/rust/pulse](https://github.com/rust-lang/rust/pulse)), with the top 6
contributors being Mozillians (according to a few Google searches).

Something that's noteworthy about Go is the "quality" of the team members:
Google has Ken Thompson, Rob Pike and Russ Cox working full time on the
language. Mozilla may have a few great developers on Rust too, but Google is
very serious about Go.

I don't have any information about how financial and human resources are used
by Google and Mozilla for the development.

~~~
kibwen
In the linked Rust graph, eddyb (the third-highest committer) is a volunteer
(an unimaginably prolific one), not a Mozilla employee. kmcallister (the
fifth-highest committer) is a Mozilla employee, but not actually on the Rust
team (they work primarily on Servo (though there is a fair bit of spillover
between the two projects)). Rust has a few other full-time Mozilla employees
that aren't represented on that chart for whatever reason (working on feature
branches, perhaps?), such as nrc and pcwalton.

~~~
lastontheboat
sfackler is a volunteer, too.

~~~
kibwen
Just goes to show how blurry the line is that I've never actually noticed that
he's not an employee. :P

------
andrewflnr
It's good to hear that IO reform has finally landed. I expected to see an
announcement in /r/rust. Did I just miss it? brb, updating my code...

------
breckinloggins
One thing I'm less clear on is what will happen _after_ 1.0.

Is there a "post 1.0 wishlist" somewhere?

~~~
kzrdude
A feature that has already seen many proposed RFCs and long discussions is
"efficient code reuse" (a.k.a. some kind of inheritance), summarized here:
[https://github.com/rust-lang/rfcs/issues/349](https://github.com/rust-lang/rfcs/issues/349). It was explicitly postponed until after 1.0.

~~~
breckinloggins
I just wrote a comment on that issue with some half-baked ideas, but I really
think that this is one of those "line in the sand" features that will
determine (at least for me) whether rust is really staying true to its
emerging identity or whether it's on the road to becoming another opinionated
kitchen sink language.

The thing about "efficient code reuse" is it probably requires dynamic
dispatch. Once you have dynamic dispatch, you suddenly have vtables. But who
decides what those vtables look like? Where do they reside in memory? What's
the layout of that? If a struct suddenly has an is-a pointer, where is that
mentioned in the code? Now my struct isn't just a struct.

I love the Rust idea that tons of modern language design still allows for
zero-cost abstraction. Inheritance-style dispatch starts getting into the
land of "putting a lot more stuff in my binary than I asked you to", and I
would argue that it's this property more than anything else that keeps
embedded programmers and kernel guys safely in the minimalistic land of C.

It would be nice to have a new C, finally. But the more a language has an
opinion on runtime layout, behavior, and symbol names, the less C-like it
becomes.

Rust got this right when it decided that GC was NOT the correct default
behavior for a language. The reason you see people playing with OS kernels in
Rust, and not as much in D, is, I think, mainly this decision. I think Rust
should continue carrying this torch.

If not for the ability to have the best of both worlds (a modern language and
access to to-the-metal programming with a controllable runtime layout and
deterministic performance profile), where exactly is the value in learning how
to use the borrow checker?

~~~
pcwalton
> The thing about "efficient code reuse" is it probably requires dynamic
> dispatch. Once you have dynamic dispatch, you suddenly have vtables. But who
> decides what those vtables look like? Where do they reside in memory? What's
> the layout of that? If a struct suddenly has an is-a pointer, where is that
> mentioned in the code? Now my struct isn't just a struct.

1\. We already have vtables through trait objects (though not for structs), so
this would be nothing new. It's important that we have them, because otherwise
common dynamic dispatch would be very annoying to write.

2\. Structure layout is already not defined. The compiler is permitted to
reorder structure fields as it likes. However, you can force it to adopt your
specified in-memory order with the `#[repr(C)]` annotation.
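Both points can be seen in a few lines of current Rust (a sketch using post-1.0 `dyn` syntax; the trait and struct names are illustrative):

```rust
// Point 1: trait objects give opt-in dynamic dispatch through a vtable.
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);

impl Shape for Square {
    fn area(&self) -> f64 {
        self.0 * self.0
    }
}

// A `Box<dyn Shape>` carries a (data, vtable) pointer pair; code that
// sticks to generics or concrete types pays nothing for it.
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Point 2: `#[repr(C)]` pins field order and layout for FFI; without it
// the compiler is free to reorder fields.
#[repr(C)]
pub struct Header {
    pub kind: u8,
    pub len: u32,
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> =
        vec![Box::new(Square(2.0)), Box::new(Square(3.0))];
    assert_eq!(total_area(&shapes), 13.0);
}
```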

> It would be nice to have a new C, finally. But the more a language has an
> opinion on runtime layout, behavior, and symbol names, the less C-like it
> becomes.

The language already has an opinion on runtime layout and symbol names.
However, you can specify the layout and symbol names manually if you like
(through `#[repr(C)]` in the former case and `#[no_mangle]` in the latter
case).

> Rust got this right when it decided that GC was NOT the correct default
> behavior for a language. The reason you see people playing with OS kernels
> in rust and not as much in D is, I think, mainly due to this decision. I
> think rust should continue carrying this torch.

GC has performance costs, while trait objects and symbol names do not, as long
as they're opt-in. Garbage collection and virtual dispatch are completely
different things; having one in no way moves us closer to the other.

~~~
breckinloggins
> Garbage collection and virtual dispatch are completely different things;
> having one in no way moves us closer to the other.

Right, of course not. The comparison was philosophical rather than technical.

My point was that I believe there's a "sweet spot" for a language that is
expressive and convenient and modern, but also tries hard not to stray too far
from C's spartan abstract machine model (and when it does, it exposes that
complexity in a composed pluggable fashion).

I'm beginning to believe rust really has a shot at replacing C and needs to
court "bare metal" programmers as well as higher-level programmers to do it;
I'm just preemptively registering my wish that rust continue to head down that
path.

~~~
kibwen
The best way to make that wish come true is to use Rust in a project where you
would normally use C, and report your experience to help us discover the best
ways to support that use case. :)

~~~
breckinloggins
I think what personally got me excited about this direction were the many "OS
kernel in rust" hobby projects [1, 2]. For some reason these strike me as a
sort of reverse canary-in-a-coal-mine to judge whether a language is seen as a
potential C replacement.

Prior to rust the hobby OS dev community was primarily C / asm with some
honorable mentions for other languages. It's also something of a stand-in for
the requirements of the professional embedded community.

For these use cases, it's just really cool to be able to use more and more
"layers" of the language as you implement more of the underlying abstract
machine model.

[1] [http://jvns.ca/blog/2014/03/12/the-rust-os-story/](http://jvns.ca/blog/2014/03/12/the-rust-os-story/)

[2] [https://github.com/rust-lang/rust/wiki/Operating-system-development](https://github.com/rust-lang/rust/wiki/Operating-system-development)

------
qznc
Anybody wants to bet some play-money on the release date?
[http://www.knewthenews.com/Market/44777/Will%20Rust%201.0%20...](http://www.knewthenews.com/Market/44777/Will%20Rust%201.0%20be%20released%20before%20May%2016%3F)

------
cw0
Is it safe to assume that anyone wishing to learn Rust can, by alpha2, study
The Rust Programming Language and Rust By Example and not need to relearn
anything within that scope after 1.0 lands?

~~~
jroesch
The core language is almost completely fixed, and the only real changes will
be in unstable areas like associated types. There might be some small API
changes but all APIs are marked with stability levels so you should be able to
figure out what is stable, and what is not.

------
TwoBit
Some language guy told me that Rust's ownership system is untenable. He said
that researchers tried the same thing years ago and it was concluded to be
impossible to work well. I know that's vague, but he claimed to know what he
was talking about.

~~~
pcwalton
Rust goes a lot farther than academic languages that used regions like Cyclone
or the ML Kit. In particular, Rust's regions are only used to enforce stack
discipline, and the basic memory management is taken from C++. This way, we
avoid the well-known limitations of "classical" region-based memory
management.

Besides, consider the fact that we've written hundreds of thousands of lines
of code for working, non-toy projects in the language, including the Rust
compiler, crates.io, Servo, etc.
