
The Road to Rust 1.0 - steveklabnik
http://blog.rust-lang.org/2014/09/15/Rust-1.0.html
======
tomdale
We've been using Rust in production for Skylight
([https://www.skylight.io](https://www.skylight.io)) for many months now, and
we've been very happy with it.

Being one of the first to deploy a new programming language into production is
scary, and keeping up with the rapid changes was painful at times, but I'm
extremely impressed with the Rust team's dedication to simplifying the
language. It's much easier to pick up today than it was 6 months ago.

The biggest win for us is how low-resource the compiled binaries are.

Skylight relies on running an agent that collects performance information from
our customers' Rails apps (à la New Relic, if you're more familiar with that).
Previously, we wrote the agent in Ruby, because it was interacting with Ruby
code and we were familiar with the language.

However, Ruby's memory and CPU performance are not great, especially in long-
running processes like ours.

What's awesome about Rust is that it combines low-level performance with high-
level memory safety. We end up being able to do many more stack allocations,
with less memory fragmentation and more predictable performance, while never
having to worry about segfaults.

Put succinctly, we get the memory safety of a GCed language with the
performance of a low-level language like C. Given that we need to run inside
other people's processes, the combination of these guarantees is extremely
powerful.

Because Rust is so low-level, and makes guarantees about how memory is laid
out (unlike e.g. Go), we can build Rust code that interacts with Ruby using
the Ruby C API.
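
As a rough sketch of how that can look (in current Rust syntax; `skylight_track_event` is a hypothetical name, not the agent's real API), Rust can export a plain C-ABI function that a Ruby C extension can declare and call like any other C symbol:

```rust
// A minimal sketch of exposing a Rust function with a C ABI.
// In a real cdylib/staticlib you would also add `#[no_mangle]` so the
// symbol keeps this exact name for the C side to link against.
pub extern "C" fn skylight_track_event(duration_us: u64) -> u64 {
    // A real agent would record the event; here we just derive a value.
    duration_us / 1000
}

fn main() {
    // Callable from Rust too; a Ruby C extension would declare the same
    // symbol in a header and call it directly.
    assert_eq!(skylight_track_event(2_500), 2);
}
```

Compiled as a `cdylib` or `staticlib`, the symbol links straight into the extension, with no Rust runtime to initialise first.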

I'm excited to see Rust continue to improve. Its combination of high-level
expressiveness with low-level control is unique, and for many use cases where
you'd previously use C or C++, I think Rust is a compelling alternative.

~~~
chrisseaton
"We end up being able to do many more stack allocations, with less memory
fragmentation and more predictable performance, while never having to worry
about segfaults."

But isn't this stuff the job of the VM? There's no reason why a program
written in Ruby can't do the same stack allocations, reduced memory
fragmentation and predictable performance automatically - if the Ruby VM was
better designed.

If Ruby had a better VM, would you choose to use it over Rust? In Rust, are
you doing things that the VM could be doing for you?

~~~
pcwalton
> There's no reason why a program written in Ruby can't do the same stack
> allocations, reduced memory fragmentation and predictable performance
> automatically - if the Ruby VM was better designed.

Escape analysis falls down pretty regularly. No escape analysis I know of can
dynamically infer the kinds of properties that Rust lets you state to the
compiler (for example, returning structures on the heap that point to
structures on the stack in an interprocedural fashion).

This is not in any way meant to imply that garbage collection or escape
analysis are bad, only that they have limitations.

~~~
pkolaczk
Rust must statically prove, at compile time, that the lifetime of a reference
to a stack-allocated variable does not exceed the lifetime of the variable it
points to, in order to be 100% memory safe. How is that different from just an
advanced escape analysis? Theoretically a VM could do much more, because it
could do speculative escape analysis (I heard the Azul guys were working on an
experimental version of this, called escape detection) and even move variables
from the stack to the heap once it turns out they do escape.

~~~
tomp
You could do escape analysis equivalent to Rust's only if you inlined all the
functions. Sometimes, that's simply not possible, e.g. with separate
compilation.

On the other hand, Rust is still able to perform its kind of escape analysis
(via lifetime-tracking type system), because the relevant facts are embedded
in type signatures of Rust functions, and as such must be present even for
separate compilation (even if the actual implementation of the function is
unknown).
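
To make that concrete (current syntax): the lifetime in a function's signature is exactly the fact a caller needs, so the check works one function at a time, with no inlining or whole-program analysis required:

```rust
// The lifetime in this signature tells the compiler (and every caller)
// that the returned reference borrows from `v` -- no need to see the
// function body, so separate compilation is no obstacle.
fn first<'a>(v: &'a [i32]) -> &'a i32 {
    &v[0]
}

fn main() {
    let data = vec![10, 20, 30];
    let r = first(&data);
    assert_eq!(*r, 10);
    // The compiler rejects any caller that drops `data` while `r` is
    // still live -- checked per function, from signatures alone.
}
```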

~~~
pkolaczk
A VM could see all the functions and do whole-program analysis. Inlining is
an orthogonal concept.

~~~
pcwalton
You could in theory, but that analysis would likely be really slow.

In any case, without the type system and compiler to enforce the discipline
the programmer is going to lose a lot of control and predictability.

~~~
pkolaczk
Not necessarily. As you said, Rust encodes that information in type
signatures. Exactly the same information could be used by a VM, which could
then do escape analysis one method at a time.

~~~
tomp
That's true. Maybe it would work, but I wonder if anyone has attempted it
before...

------
JoshTriplett
I used to describe my preferred family of languages as:

\- C when I absolutely had to (kernel/modules/plumbing).

\- Python for scripting and broad accessibility.

\- Haskell when I had the choice and I knew everybody who would work on the
project.

I was skeptical of Rust when it first came out, due in large part to the many
different kinds of pointers it originally had, many of which involved
significant manual memory management. But now, with a strong static type
system, garbage collection, pattern matching, associated types, and many other
features, Rust is looking like a serious contender to replace all three of
those languages for me.

Still waiting to see if it develops a strong following, community, and
batteries-included library ecosystem, but I need to start doing more
experiments with Rust.

Disappointing to see yet another language-specific package management system
(Cargo), though.

~~~
dbaupp
As others have said, having a package management system that deeply
understands the language and tooling is awesome. Examples:

\- Rust has a distributed-by-default documentation generator (rustdoc), cargo
knows this and provides `cargo doc` to render a library's docs with it.

\- rustdoc can run code examples in the documentation as tests, to check that
everything is up-to-date, `cargo test` does this (along with running the in-
source unit tests and any external tests).

\- the Rust compiler allows for plugins, which are dynamic libraries loaded
into the compiler and can be used for things like custom macros (aka
procedural macros aka syntax extensions) and custom compiler warnings. cargo
understands these, allows them to be 'imported' via the normal dependency
mechanism, and specifying `plugin = true` in a package makes cargo do the
right thing, e.g. building as a dynamic library (static libraries are the
default) and compiling for the correct target when cross-compiling.

I'm sure all of this is possible with other systems, but it seems unlikely to
be _so_ nice to use.

~~~
seabee
It's very similar to Racket, and yes, it is nice to use!

Other systems can get you much of the way there (node, Python are the only
ones I'm really familiar with) but I suspect you need a little language help
to achieve the same kind of convenience.

------
pacala
The ownership idioms are very similar to idiomatic C++11 and std::unique_ptr.
Which is to say that Rust has an industrial-strength safe memory management
system.

But Rust stands out because the rest of the language is such a joy to use,
compared to pretty much any other 'systems' language out there.

Congratulations to the team!

~~~
wycats
One of the cool things about the Rust ownership system is the lease/borrow
system. Moving is cool, but much of the time you want to synchronously call
another piece of code and give it a temporary (stack-frame-long) lease for
that pointer.

Rust starts with ownership, but makes it easy to ergonomically _and safely_
lend out that ownership (including one-at-a-time mutable leases) of both stack
and heap allocated pointers.

I've been programming with Rust since last December, and I have had
essentially zero segfaults coming from Rust code during that time frame,
roughly equivalent to what I would have expected writing code in a language
whose safety guarantees come with a runtime cost (GC or ARC).

------
Ixiaus
> _The key to all these changes has been a focus on the core concepts of
> ownership and borrowing. Initially, we introduced ownership as a means of
> transferring data safely and efficiently between tasks, but over time we
> have realized that the same mechanism allows us to move all sorts of things
> out of the language and into libraries. The resulting design is not only
> simpler to learn, but it is also much “closer to the metal” than we ever
> thought possible before. All Rust language constructs have a very direct
> mapping to machine operations, and Rust has no required runtime or external
> dependencies._

Almost sounds like they borrowed this thinking from Exokernel design... I
think Rust is shaping up to be a very exciting language.

------
MichaelGG
Rust looks fantastic, and has a lot of things I wish I could do while in a
higher level language like F#.

I just wish Rust was a bit less verbose. Requiring, for instance, type
annotations on function arguments because it's sometimes helpful is such a
weird decision. Let the programmer decide when an annotation is needed. This
gets annoying when you get into functions with complex arguments. Especially
for local functions where the signature could be messy, but the limited scope
means annotations just clutter things. I'm not sure why Rust forces us to
compromise here.
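
For what it's worth, the rule only applies at function boundaries; local bindings are still inferred (current syntax, trivial example of my own):

```rust
// Function arguments and return types must be annotated...
fn double(x: i32) -> i32 {
    // ...but local bindings are inferred as usual.
    let y = x * 2;
    y
}

fn main() {
    assert_eq!(double(21), 42);
}
```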

~~~
pestaa
Fully typed function signatures form a kind of contract you can program
against.

Haskell and other extremely strongly typed languages can infer the types of
function parameters, yet the community still agrees it is good practice to
annotate your work.

~~~
MichaelGG
Would Haskell be better off if the compiler enforced this community agreement,
instead of letting users decide?

Also, the type annotations can be added on later. While you work and play with
ideas, leave everything unannotated. After it's cemented and perhaps
refactored a bit, add the "contract". In Rust, even while working things out,
the user has to figure out and jot down the types.

~~~
burntsushi
> Would Haskell be better off if the compiler enforced this community
> agreement, instead of letting users decide?

Unequivocally, yes. My logic is that, while writing the function may be
slightly quicker and more convenient if you can leave off the type, _reading_
that same code is made at least an order of magnitude easier if the type
annotation is sitting there in the code.

Actually, it gets better than that. Writing down the type of a function before
writing the function often helps you _write_ the function.

Protip: Use `ghc` with `-fwarn-missing-signatures -Werror`, or even better,
`-Wall -Werror`. :-)

~~~
dbpatterson
The thing is, with current tooling, the current design means that I can write
a function, type a key combination, and have the type signature printed in a
buffer (where I can then copy it into place). And until I do that, there is
warning highlighting on the function.

Now in some ways this may seem silly - are you really going to understand code
without understanding types? But especially for people new to the language,
_or_ when you're dealing with new libraries (if you've ever written a little
wrapper around a function from a complicated library, you know what I mean),
it's nice to choose whether you want to work from values or from types (where
undefined is your friend).

Which isn't to say that I think Rust's decision is bad, just that having
flexibility makes this kind of tooling easier (and I'm assuming here that in
all cases, the end result will be fully annotated top-level functions). And
part of Rust's choice was probably to make type checking easier, which is an
important thing (especially given how sophisticated the borrow checker is).

~~~
burntsushi
Interesting point. Getting the type annotation inferred for you there is
definitely useful. I've used it a few times myself.

The `undefined` trick is also immensely useful. I use it a lot when starting a
new module. Rust also has a notion of bottom, indicated by `!`, which will
unify with all types. I frequently use this in Rust in a similar way that I
use `undefined` in Haskell. (In Rust, you would speak `fail!()` or
`unreachable!()` or `unimplemented!()` or define-your-own.)
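
A small illustration in today's syntax (`fail!()` has since been renamed, but `unimplemented!()` still works as described):

```rust
// `unimplemented!()` has type `!`, which unifies with any expected type,
// so it can stand in for a missing branch much like Haskell's `undefined`.
fn classify(n: i32) -> &'static str {
    if n >= 0 {
        "non-negative"
    } else {
        // Stubbed out while sketching; panics if ever reached.
        unimplemented!()
    }
}

fn main() {
    assert_eq!(classify(5), "non-negative");
}
```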

> (if you've ever written a little wrapper around a function from a
> complicated library, you know what I mean)

Yes, absolutely. I haven't really run into this problem with Rust yet though.
Types are generally pretty simple. If and when Rust gets higher-kinded types,
that would assuredly change.

~~~
fanf2
I thought ! in Rust indicates a macro.

~~~
burntsushi
An identifier followed by a `!` does, yes. But `!` is also used to mark
diverging functions:

    fn diverging() -> ! {
        unreachable!()
    }

The two `!` in that code are completely orthogonal things. See the manual on
diverging functions: [http://doc.rust-lang.org/rust.html#diverging-
functions](http://doc.rust-lang.org/rust.html#diverging-functions)

------
hadoukenio
Still sitting on the fence as to which language I should pick up on next - the
only contenders are C++11 and Rust.

How does Rust compare with C++11 as a language? C++11 seems to (in some ways)
have caught up with what Rust has to offer (compared to older C++ versions),
e.g. smart pointers, concurrency, and regexes in the standard library.

~~~
ranran876
I'd definitely pick C++11 unless you _need_ to use Rust.

Rust is inherently memory safe; however, in practical terms this isn't
important for most applications. If you are writing security-critical
applications, Rust will provide you with some very important guarantees (i.e.
there are certain mistakes which are inherently not possible in the language).
C++ doesn't really guarantee anything, and if you're an idiot you can shoot
yourself in the face. However, in practical terms, memory management in C++11
is very straightforward, and C++11-compliant code (i.e. using the STL and not
writing it like C) is very safe and clean. You're not mucking with raw
pointers anymore.

The main issue I see is that Rust is still in early development. It may or may
not get "big" in the coming years. And library support is... lacking.

In contrast, C++ has the STL and Boost and every library under the sun. I
haven't worked with a lot of other languages extensively, but I've never seen
anything as clean, robust, and thorough as the STL and Boost. C++ will remain
relevant for a long, long time. If Rust takes off in a big way, you'll be well
positioned to jump ship.

~~~
hadoukenio
Wow, that was a great comment and exactly the type of info I was after.

I think (coming from a dynamic language world) the memory safety is what
pulls me towards Rust. But from what you say and what I've read elsewhere,
that was old-style C++ and not C++11.

Thanks!

~~~
Shamanmuni
Read what the other commenters are pointing out. Maybe the situation is better
in C++ now than it was before, but that doesn't mean you can't shoot yourself
in the foot, especially as a beginner. Rust was built with safety in mind from
the start; there are errors you can make in C++ that the Rust compiler simply
won't let you make.

My advice is: if you're learning it for work, then go with C++. Even if Rust
succeeds, it will take some years for it to become mainstream, and as pointed
out, C++'s library support is great.

If you're learning it for fun or for the sake of learning something new, then
Rust is a very nice and promising language, bringing things from functional
languages that C++ lacks and offering very interesting tooling around it.

Whatever you choose, after you feel confident with one, go and learn the
other, as it will probably give you a better perspective on the strengths
and/or weaknesses of both.

------
rubiquity
> _Green threading: We are removing support for green threading from the
> standard library and moving it out into an external package._

I only ever looked at Rust from a 500-foot view while toying with it at a
hackathon, but I had no clue it had so many different threading models. This
seems like a step in the right direction, indeed. If Task is going to be your
unit of concurrent execution, as much transparency around it as possible is a
good thing.

~~~
masklinn
It had just two: libgreen backed tasks with "green threads", libnative backed
them with OS threads.

~~~
rubiquity
Does that mean every task will be an OS thread? Or are tasks handled by a
runtime specific scheduler and will use a pool of threads for IO?

~~~
masklinn
> Does that mean every task will be an OS thread?

By default, yes.

~~~
rubiquity
Hmmmm, I hope there are other alternatives in the future (besides libuv). It
would be nice to see Rust have a lightweight unit of execution similar to
goroutines or Erlang processes.

------
forrestthewoods
When will it be pleasant to use on windows?

------
alkonaut
Hopefully the tooling will take off once the language stabilizes. Using a
"newish" language is a total exercise in frustration when you are a spoiled
kid who expects an IDE to come with your language configured, and a nice big
play button for running your first program.

For a language to take off, it badly needs a very good (ideally "official")
development experience, such as a custom Eclipse implementation or a very good
IntelliJ plugin. When a dev experience comes with batteries included, it
lowers the threshold substantially from "use whatever text editor you like and
compile on the command line, here is a readme".

------
toolslive
How difficult is it to start from a huge C++ codebase and start adding new
features in Rust? How bad is such an idea? I know there is interoperability,
but those are toy examples, does anybody have real life experiences?

------
sriku
This is all good. No higher kinded types for v1.0?

~~~
steveklabnik
Nope. We're pretty sure they'll be backwards compatible, and there are too
many other outstanding issues (eg, syntax) and too little time. Lots of us
want them though!

~~~
spott
There isn't an RFC for them, at least that I could find. Have they just not
gotten that far in terms of thought?

~~~
TheHydroImpulse
I have an RFC sitting here (and a few blog posts talking about it), but
considering it'll be post 1.0 when such a thing is even entertained, I haven't
submitted it. There are more pressing and more appropriate things to work on
first.

~~~
bjz_
Awesome - looking forward to seeing that. :)

------
doe88
What I think is an important feature of this language is the ease with which
it can interact with other languages. Especially the possibility for Rust code
to be called from foreign languages such as C very easily.

I'm looking forward for even better support of iOS with the support of arm64,
I think it is really important to offer an alternative.

BTW, is there an RFC on _dynamically sized types_? I can't find any; I'm
looking to learn how it works.

~~~
steveklabnik
> Especially the possibility for Rust code to be called from foreign languages
> such as C very easily.

The second production deployment of Rust is a Ruby gem, written in C, that
calls out to Rust. It's used in skylight.io, if you're curious.

> BTW is there an RFC on dynamically sized types?

IIRC, DST was before the RFC process even existed, it's just taken forever to
implement. The Duke Nukem Forever of Rust. :)
[http://smallcultfollowing.com/babysteps/blog/2014/01/05/dst-...](http://smallcultfollowing.com/babysteps/blog/2014/01/05/dst-
take-5/) is what you want to read, IIRC.

~~~
wycats
> The second production deployment of Rust is a Ruby gem, written in C, that
> calls out to Rust. It's used in skylight.io, if you're curious.

Yep! I'm one of the authors of that project. The fact that Rust provides
automatic memory cleanup and the attendant safety without runtime overhead
(even ARC has non-trivial runtime overhead) was a huge win for us, as was the
transparent FFI.

We were looking for a way to write fast code that was embeddable in a Ruby C
extension with minimal runtime overhead and without a GC (two GCs in a single
process is madness). We also wanted some guarantees that we wouldn't
accidentally SEGV the Rails apps we were embedded in. Even last December, Rust
was a clear winner for us.

We've been shipping Rust code to thousands of customers for many months, and
even given the language's instability, it's worked really well for us.

~~~
djur
Are there any open source libraries spun off from the Skylight agent? It would
be nice to see some examples of production-quality Rust code.

~~~
wycats
A few:

* [https://github.com/carllerche/hamcrest-rust](https://github.com/carllerche/hamcrest-rust) \- a testing library (badly in need of more fleshing out)

* [https://github.com/carllerche/nix-rust](https://github.com/carllerche/nix-rust) \- bindings of Linux/OSX-specific APIs to Rust

* [https://github.com/carllerche/curl-rust](https://github.com/carllerche/curl-rust) \- a binding of libcurl to Rust

* [https://github.com/carllerche/pidfile-rust](https://github.com/carllerche/pidfile-rust) \- a library for using a pidfile for mutual exclusion across processes

* [https://github.com/carllerche/mio](https://github.com/carllerche/mio) \- a low-level IO library that attempts to implement an epoll-like interface across multiple platforms

------
austinz
Congratulations to the Rust team! Can't wait to start learning the language
and building stuff using it.

I'm looking to learn about how Rust's refcounting memory management works (and
how it differs from how, e.g. Objective-C or Swift's runtime-based reference
counting works), mostly for personal edification. Can anyone point me to any
good resources?

~~~
steveklabnik
It's worth noting that you don't reach for reference counting by default in
Rust: you reach for references, then boxes, THEN Arc.

You can find documentation on these types here: [http://doc.rust-
lang.org/guide-pointers.html](http://doc.rust-lang.org/guide-pointers.html)

~~~
eddyb
Well, you wouldn't go for Arc (atomic RC) unless you need to share data across
tasks (threads); within the same task you can use Rc.

This is different from shared_ptr in C++ which is always atomic and LLVM has
to try hard (in clang, no idea what GCC does) to eliminate some redundant ref-
counts.

Oh and since Rc is affine, you only ref-count when you call .clone() on it or
you let it fall out of scope.

Most of the time you can pass a reference to the contents, or, if you need to
sometimes clone it, a reference to the Rc handle, avoiding ref-counting costs.
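
A sketch of that point in current syntax (`Rc::strong_count` is used here only to make the counting visible):

```rust
use std::rc::Rc;

fn len_of(v: &Rc<Vec<i32>>) -> usize {
    // Borrowing the Rc handle: no refcount traffic at all.
    v.len()
}

fn main() {
    let shared = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&shared), 1);

    let alias = shared.clone(); // the only place a count is bumped
    assert_eq!(Rc::strong_count(&shared), 2);

    assert_eq!(len_of(&shared), 3); // borrows are free: still 2
    assert_eq!(Rc::strong_count(&shared), 2);

    drop(alias); // the count drops only when a handle dies
    assert_eq!(Rc::strong_count(&shared), 1);
}
```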

~~~
steveklabnik
That's true, thanks.

------
AlyssaRowan
It'd be wonderful if they kept the ability to define that a certain destructor
_does_ zero memory.

Sometimes, you need that.

~~~
gpm
Zeroing out memory isn't sufficient, as discussed here:
[http://www.daemonology.net/blog/2014-09-06-zeroing-
buffers-i...](http://www.daemonology.net/blog/2014-09-06-zeroing-buffers-is-
insufficient.html)

That said, this was discussed on reddit[0], it sounds like there is a way to
guarantee that you did zero out memory (but not necessarily copies of that
memory, as discussed in the link above), and because Rust is intended to be
memory safe, it's not as much of an issue if you don't/can't.

[http://www.reddit.com/r/rust/comments/2fnb82/zeroing_buffers...](http://www.reddit.com/r/rust/comments/2fnb82/zeroing_buffers_is_insufficient_why_c_is_insecure/)

~~~
AlyssaRowan
Not in C. Rust is, as you say, intended to be memory safe.

That means it has the hope of getting it _right_.

~~~
someone13
FWIW, you can implement the 'Drop' trait to provide a custom destructor, and
then use, e.g. 'volatile_set_memory'[0] to zero out the memory of the object.
This isn't subject to the same problems as C, AFAIK.

[0] [http://doc.rust-
lang.org/std/intrinsics/fn.volatile_set_memo...](http://doc.rust-
lang.org/std/intrinsics/fn.volatile_set_memory.html)
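
Since `volatile_set_memory` is an unstable intrinsic, here is a hedged sketch of the same idea using the stable `std::ptr::write_volatile` (subject to the caveat above that moves and copies may still leave stale data behind):

```rust
use std::ptr;

struct Secret {
    key: [u8; 4],
}

impl Drop for Secret {
    fn drop(&mut self) {
        // Volatile writes keep the compiler from optimising the
        // zeroing away as a dead store.
        for b in self.key.iter_mut() {
            unsafe { ptr::write_volatile(b, 0) };
        }
    }
}

fn main() {
    let s = Secret { key: [1, 2, 3, 4] };
    assert_eq!(s.key[0], 1);
    // The destructor zeroes `key` when `s` goes out of scope here.
}
```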

~~~
strcat
You would have to force the sensitive content to be dynamically allocated. All
types in Rust can be moved via a shallow memcpy and that will leave around
dead shallow copies. For example, `Vec<T>` will leave around dead versions of
the values when it needs to do a reallocation that's not in-place.

------
kvark
I'm excited for the release too. Know many people who hesitate to touch Rust,
even if interested, due to the fact language is still in active development.

On minor concern though, I don't see how "where clauses" are simplifying the
language. Looks like something that could be added after the release.

~~~
steveklabnik
> On minor concern though, I don't see how "where clauses" are simplifying the
> language. Looks like something that could be added after the release.

Where clasues simplify Rust code, they don't simplify the language itself.
They're also important for associated items. For more:
[https://github.com/aturon/rfcs/blob/associated-
items/active/...](https://github.com/aturon/rfcs/blob/associated-
items/active/0000-associated-items.md)

------
3289
It seems that 1.0 is going to be a solid release. But the post-1.0 Rust is
going to be even more exciting once they have added inheritance and subtyping
which enable true polymorphic reuse!

~~~
strcat
Object inheritance is only useful in rare edge cases, so your statement
doesn't make much sense. Traits have default methods, support inheritance, and
can be used as bounds on generics (static dispatch, concrete type preserved)
or as objects (dynamic dispatch / type erasure).

What makes you think object inheritance is such a sure thing anyway? I don't
expect either object inheritance or optional garbage collection to
materialize, ever. In fact, I'd be pretty sad if the language was degraded
with that extra complexity - I think it would be a worse situation than the
mess that is C++. There would no longer be a shared idiomatic Rust.
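
To illustrate what traits already cover (current syntax; the `Shape` example is mine, not from the thread):

```rust
trait Shape {
    fn area(&self) -> f64;
    // Default method: behaviour reuse without object inheritance.
    fn describe(&self) -> String {
        format!("area = {}", self.area())
    }
}

struct Square(f64);
impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

// As a generic bound: static dispatch, concrete type preserved.
fn area_static<S: Shape>(s: &S) -> f64 { s.area() }

// As a trait object: dynamic dispatch, type erased.
fn area_dynamic(s: &dyn Shape) -> f64 { s.area() }

fn main() {
    let sq = Square(3.0);
    assert_eq!(area_static(&sq), 9.0);
    assert_eq!(area_dynamic(&sq), 9.0);
    assert_eq!(sq.describe(), "area = 9");
}
```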

~~~
wtetzner
I think optional garbage collection might materialize, but I would imagine it
would end up being a third-party library.

------
robin_reala
Adopting the channels system is interesting. Are there any other languages
that have a scheduled release pattern like this?

~~~
howeman
Go sort of does this, though on longer timescales. Developers are at "tip".
Actual releases are every 6 months, and before the release there is always at
least one release candidate, and sometimes an actual beta.

------
MoOmer
> We are removing support for green threading from the standard library and
> moving it out into an external package. This allows for a closer match
> between the Rust model and the underlying operating system, which makes for
> more efficient programs.

That's an interesting move in comparison to Go, which multiplexes coroutines
onto threads.

~~~
pcwalton
It's more systems-like. We don't pay any overhead for calling into C code, and
M:N scheduling doesn't really provide advantages over 1:1 scheduling when you
don't have a GC (and even when you do, the differences are fairly negligible
on modern Linux).

~~~
ngrilly
With 1:1 scheduling, how do you limit stack size to something reasonable (a
few kB per thread), which is necessary when you need to launch tens of
thousands of threads?

~~~
dbaupp
If you know you only need a small amount of stack size, you can set the stack
size to be small via the task builder: [http://doc.rust-
lang.org/master/std/task/struct.TaskBuilder....](http://doc.rust-
lang.org/master/std/task/struct.TaskBuilder.html#method.stack_size)
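
That API has since evolved: in modern Rust the same knob lives on `std::thread::Builder`. A sketch:

```rust
use std::thread;

fn main() {
    // Each thread can request its own stack size: 64 KiB here
    // instead of the multi-megabyte platform default.
    let handle = thread::Builder::new()
        .stack_size(64 * 1024)
        .spawn(|| 2 + 2)
        .expect("failed to spawn thread");

    assert_eq!(handle.join().unwrap(), 4);
}
```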

~~~
ngrilly
Thanks for the link. But what if I don't know the stack size in advance? I
guess the model with a stack that starts small and grows on demand is only
possible with M:N scheduling, not with 1:1 native scheduling?

------
cdnsteve
Can Rust be used to power HTTP endpoints like a REST API? Or is it designed
more for system-daemon type stuff? I guess I don't fully understand the
marketing of it; then again, I haven't ever written anything in C or C++
either.

~~~
tmzt
There are already a number of libraries and frameworks that implement an HTTP
server in Rust, though things seem to be coalescing around Teepee.

[http://chrismorgan.info/blog/introducing-
teepee.html#main](http://chrismorgan.info/blog/introducing-teepee.html#main)

There are a few REST API libraries as well.

------
theavocado
When I first looked at Rust, I recall being very confused about the
distinctions between crate, package, module, and library. It seemed like an
area which could use some simplification.

~~~
steveklabnik
"Package" and "library" are two words that mean the same thing. They're more
generic terms for "crate," which is Rust-specific. Modules are ways of
splitting up your code inside a crate: one crate has many modules, and each
module belongs to one crate.
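
A minimal illustration (the `geometry`/`units` names are made up):

```rust
// One crate, many modules: `mod` splits code up inside a single crate.
mod geometry {
    pub mod units {
        pub fn mm_to_cm(mm: f64) -> f64 { mm / 10.0 }
    }
}

fn main() {
    // Paths mirror the module tree within the crate.
    assert_eq!(geometry::units::mm_to_cm(25.0), 2.5);
}
```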

~~~
theavocado
Thanks, Steve! Have not yet had a chance to go through the new guide, but
looking forward to it!

~~~
steveklabnik
Any time. There'll be a full guide to the module system when I get some
time...

------
Chirono
Wait, they've got rid of unique pointers? When did that happen? They were
there a couple of months ago. That was one of my favourite language
features...

~~~
andrewaylett
My understanding is that they're still there, just as part of the standard
library rather than as a language feature.

~~~
steveklabnik
That's correct. What used to be ~T is now Box<T>. We say 'boxes' instead of
'unique pointers' now.
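
In today's syntax, that looks like:

```rust
fn main() {
    // What was once `~T` is now the library type `Box<T>`:
    // a uniquely owned, heap-allocated value.
    let boxed: Box<i32> = Box::new(41);

    // Ownership moves; after this line `boxed` can no longer be used,
    // which the compiler enforces statically -- the "unique" part.
    let moved = boxed;
    assert_eq!(*moved + 1, 42);
}
```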

------
eCa
I haven't looked at Rust, but from the outside it seems that releasing a
stable version of a language every six weeks is very aggressive?

~~~
steveklabnik
Continuous deployment is common in the web world. It's true that it's
aggressive, but we think it's going to have significant benefits.

~~~
pbsd
People tend to expect more stability from a systems language than a web one.
Can I trust that my Rust 1.0 code will work, unchanged, 20 years from now? If
not, the language is likely to remain in the enthusiast realm.

~~~
steveklabnik
To be clear, Rust is following SemVer, and the six-week releases are 1.x
versions. So they should be backwards compatible. There's no current timeline
for a 2.x.

------
egonschiele
Could someone lay out the advantages of Rust over C/C++/Dart/Go/other
languages that cater to a similar space?

~~~
whyever
AFAIK Rust is the only language that offers memory safety without garbage
collection.

~~~
_pmf_
C++11's std::unique_ptr and std::shared_ptr are also nice features. Does Rust
provide more guarantees?

~~~
dbaupp
Rust provides memory-safe versions of them; e.g. in C++, dereferencing a
unique_ptr after moving out of it is undefined behaviour, while in Rust using
a value after moving it is a compile-time error. Rust also allows 100% safe
references into the memory owned by those types (no possibility of
accidentally returning a reference into memory that has been freed).

Lastly, Rust's type system can actually express 'this value must be kept local
to a single thread', meaning there are two shared_ptr equivalents:

\- Arc (Atomic Reference Counting), which uses atomic instructions like
shared_ptr

\- Rc, which uses non-atomic instructions, and so has much less overhead.

Rust also has move-by-default semantics, so there's no extraneous reference
counting due to implicit copying. (Which is particularly bad with the atomic
instructions of shared_ptr.)
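
A sketch of that difference (current syntax; `consume` is a made-up function):

```rust
use std::sync::Arc;

fn consume(v: Arc<Vec<i32>>) -> usize {
    v.len()
}

fn main() {
    let data = Arc::new(vec![1, 2, 3]);

    // Passing by value *moves* the handle: no refcount bump, unlike
    // passing a shared_ptr by value in C++, which copies and bumps
    // the count atomically.
    let n = consume(data);
    assert_eq!(n, 3);
    // `data` is gone now; reusing it would be a compile error, so any
    // refcounting only ever happens via an explicit `.clone()`.
}
```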

------
illumen
Does it use the GPU, memory compression, code rewriting, automatic
vectorisation, multiple cores, or any other performance techniques from the
last 10 years?

~~~
dbaupp
Rust is a low-level language, so all of those user-space techniques can be
implemented in it, and the main (and only) compiler, rustc, uses an
industrial-strength optimiser (LLVM) which has support for automatic
vectorisation.

Furthermore, the type system is designed to be very good for high-performance
concurrency.

See
[http://blog.theincredibleholk.org/blog/2012/12/05/compiling-...](http://blog.theincredibleholk.org/blog/2012/12/05/compiling-
rust-for-gpus/) for an example of using Rust on a GPU, and I can only imagine
that it has become easier since then.

