I'm very excited at the progress in this release. In particular I'm thrilled at the ongoing focus on ergonomics and overall developer experience, what with the push for quality documentation (brand-new tutorials, code examples for all library functions, and API stability markers) and the rapid maturation of Cargo. Also, lifetime elision and `where` clauses have done wonders in reducing the horror that has historically plagued Rust function signatures.
There's also been tons of progress toward the last features that are blocking the 1.0 release (http://blog.rust-lang.org/2014/09/15/Rust-1.0.html). The long-awaited work on dynamically-sized types is now partially complete, and the only two remaining big hurdles that I can think of are closure reform and trait coherence reform. Onward to stability!
It has come a long way since I first started using it (0.6 era). Congratulations on the release! I look forward to a future world where I can use it (via Servo) in Firefox, as well as in my Constraint Solver engine, which I hope to reveal for the 2015 Minizinc Challenge.
MiniZinc! There's a blast from the past... I wrote most of the first few versions of the specification and much of the initial implementation. Good to hear it's still being useful.
Does the RFC 230 (https://github.com/rust-lang/rfcs/pull/230) removal of green threads give me 1:1 threading out of the box, or just that std::io uses native threads and green threads are now only available as an additional outside dependency? Kinda confused, given I don't follow Rust all that much.
Really odd that they went with the M:N model - if the OS is fast at creating/destroying threads, 1:1 is the best model for the vast majority of apps.
The existing libgreen will be moved out to a Cargo library. You just won't be able to seamlessly switch between the two like you could before; you'll have to explicitly use the green threading API instead.
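For concreteness, a hedged sketch in today's std API, where a spawn maps 1:1 onto an OS thread:

    use std::thread;

    fn main() {
        // Each spawn here maps directly to a native OS thread (1:1).
        let handle = thread::spawn(|| {
            println!("hello from a native thread");
        });
        handle.join().unwrap();
    }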
How will this work if you depend on a lib that uses traditional threading but want to use green threads yourself? That actually comes up a lot at my work.
You'd have to use one that uses green threads internally.
Basically, in the effort to make the abstraction Just Work between 1:1 and N:M threading, we made green threads so heavyweight that they barely even qualified as green anymore. It's only by removing that abstraction that we could feasibly make them lightweight again.
The current implementation that's being phased out was designed to abstract away the threading model such that libraries could make use of concurrency while the programs that use those libraries could use any threading model they choose, and it would all Just Work. However, in practice this caused too many compromises in the implementation of both the native thread runtime and the green thread runtime, erasing the benefits of both.
Sorry! We're sensitive to your use case, but we just couldn't make it work.
No, and we actually use a "tasklet" library in Servo for the really fine-grained parallelism (per-CSS-block) where any allocation on task spawn would be far too expensive and work stealing is essential.
Could you point me to this? I did a cursory look through the Servo codebase. It just occurred to me that since Servo is pretty much the reason for the existence of Rust, it is probably the best codebase to learn Rust from by reading.
I have had a problem learning Rust by example: the codebases I would like to read don't usually work with the latest Rust releases.
I don't really _do_ C++, but if I could talk to Rust from C++ maybe I could start hacking on their codebases.
Part of Rust's ownership system is 'lifetimes.' Without getting into the details, they're an annotation you occasionally add to code in certain situations. Like these two:
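For illustration (the thread's original snippet isn't preserved here, and the names are made up), explicit annotations look something like this in modern syntax:

    // The 'a parameter spells out that the returned &str borrows
    // from the input string.
    fn substr<'a>(s: &'a str, until: usize) -> &'a str {
        &s[..until]
    }

    fn first_word<'a>(s: &'a str) -> &'a str {
        s.split(' ').next().unwrap_or("")
    }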
We used to have a single inference rule for when you didn't need to write the annotation. With RFC 39[1], we added two more. These two extra inference rules removed 87% of the existing annotations, giving you this instead:
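(Again illustrative rather than the original snippet:)

    // Same functions with the lifetimes elided: with a single input
    // reference, the compiler assumes the output borrows from it.
    fn substr(s: &str, until: usize) -> &str {
        &s[..until]
    }

    fn first_word(s: &str) -> &str {
        s.split(' ').next().unwrap_or("")
    }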
oh ok, thanks. i knew what lifetimes were, but was confused by the 'elision.' Apparently I didn't connect 'elision' with 'elide.' So, new vocab word for today :)
I actually haven't had to since elision was implemented, so I can't give you a good rule of thumb from experience. Here's an overview of how the rules applied to the standard library at the time they were implemented, which is where the percentage came from: https://gist.github.com/aturon/da49a6d00099fdb0e861
Basically, after working with lifetimes for so long we realized that a great many functions that take lifetime parameters follow a similar and straightforward pattern. As of this release, we now allow you to omit lifetime parameters entirely for functions that follow this pattern, which is likely the vast majority of functions in your code that previously required such annotations.
But to clarify, functions which take 2 or more reference inputs won't benefit from this, since the compiler can't determine which lifetime parameter to use, right?
Edit: Never mind, I RTFM. Turns out it can't benefit.
What if you're looking up the input in a temporary cache of some kind, and returning a reference to the cached value? The lifetime of the input and output would not be related in that scenario, but the output would not necessarily be 'static (maybe you build a cache, run some functions, tear it all down, then build a new cache with different values and do it all again).
You generally wouldn't use a reference there; if you're creating a new independent object, you'd just return it by value. This is for cases where your return value is a reference into your argument, for example:
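A hedged sketch of the reference-into-argument case, using the cache scenario above (names are made up):

    use std::collections::HashMap;

    // Two reference inputs, so elision doesn't apply: we annotate that
    // the returned &str borrows from the cache, not from the key.
    fn lookup<'a>(cache: &'a HashMap<String, String>, key: &str) -> Option<&'a str> {
        cache.get(key).map(|v| v.as_str())
    }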
From someone who sucks at C (learned a bit here and there a decade ago and never used it) and recently started looking into 'systems' languages again:
- The community was very friendly, although my questions probably were more on the 'RTFM' side of things
- The Rust compiler is amazing. The error messages it provides are clear and helpful, often including potential spelling corrections and/or repeating the part of the code that causes trouble, closing with a suggestion for a fix.
It's not quite magic, but really impressive, and it helped me a lot to get started. Being a Windows dev by day I do bemoan the fact that this platform is a bit .. behind, but I'm actively pursuing a side project based on Rust (on Windows) atm.
> The Rust compiler is amazing. The error messages it provides are clear and helpful
I'm sure that Clang inspired the great error messages - it sets a very high bar for new compilers (and old - GCC is still catching up).
> Being a Windows dev by day I do bemoan the fact that this platform is a bit .. behind
I'm sure that once 1.0 is released there will be a greater effort to make the Windows experience more seamless. This will be extremely important if we want to convince more game developers to give Rust a go. Have you seen https://github.com/PistonDevelopers/VisualRust/?
That said, IDE support isn't exactly what I was referring to.
- you need to install MinGW separately, and there are lots of reports about conflicts with other software that might bundle a different version of MinGW
- installing cargo is weird (so far .. I failed and don't use it)
And even with my first baby steps (i.e. doing basically nothing of value) I discovered some limitations, one being that you cannot create a DLL that exports an unmangled entry point for stdcall (other calling conventions don't have that issue and follow #[no_mangle] or #[export_name="foo"]).
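For reference, a minimal sketch of the attribute in question, in modern syntax and built as a cdylib; the stdcall variant is the one reportedly broken at the time:

    // Exported with an unmangled name; works for the C convention.
    #[no_mangle]
    pub extern "C" fn entry_point(x: i32) -> i32 {
        x + 1
    }

    // The stdcall equivalent is what the comment above found broken:
    #[no_mangle]
    pub extern "stdcall" fn entry_point_std(x: i32) -> i32 {
        x + 1
    }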
BUT that's really not a huge deal for me. Just a bumpy road at times.
I think it will soon be time to look at performance. Performance should be close to or better than C++ on average. We know about "the root of all evil". But if there is going to be any large uptake from those currently using C++ (and I imagine a lot of those developers are concerned about performance, memory usage, etc.), Rust simply has to compete on performance as well.
Go in a way started down that path, with the whole "systems language" re-interpretation and all. But it never quite became the systems language for systems work; it can't quite eat C's and C++'s lunch, figuratively speaking.
Rust might be able to do it. But regardless of concurrency, type safety, and other features, if it cannot be fast enough it will have a hard time.
> I think it will soon be time to look at performance. Performance should be close to or better than C++ on average.
The language has been designed from the ground up to compete on this level. Also note that Rust potentially has far greater room for compile-time optimizations than C and C++, because the compiler knows far more about lifetimes and ownership. This can be seen in one of the last items in the release notes:
> Stack usage has been optimized with LLVM lifetime annotations.
We already do pay attention to performance. In general, Rust should already be in the same league as C or C++. Please open issues if you find something that isn't.
And even more important than current benchmark numbers is that the language is explicitly designed to be optimizable, and takes the idea of "zero-cost abstractions" seriously. I can count on two fingers the number of places where we impose dynamic checks by default in the name of safety, and one of those can be unsafely turned off and the other can be safely disabled with proper code structuring.
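Bounds-checked indexing is the classic example here; whether it's one of the two cases meant above is my assumption. A sketch of all three variants:

    fn sum(v: &[u64]) -> u64 {
        let mut total = 0;
        for i in 0..v.len() {
            total += v[i]; // dynamic bounds check on each index
        }
        total
    }

    fn sum_iter(v: &[u64]) -> u64 {
        // Restructured as an iterator: no per-element bounds check needed.
        v.iter().sum()
    }

    fn sum_unchecked(v: &[u64]) -> u64 {
        let mut total = 0;
        for i in 0..v.len() {
            // Or the check can be turned off explicitly, at the cost of `unsafe`:
            total += unsafe { *v.get_unchecked(i) };
        }
        total
    }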
(Theoretically we can also leverage Rust's ownership rules to automatically generate aliasing information. When implemented this should be an across-the-board speed boost for all Rust code in existence, in a way that C and C++ compilers can only dream of.)
The underlying compiler backend would use the same alias optimization pass for both Rust and C. Rust's advantages here are that all Rust code is automatically eligible for such optimization without the programmer having to do any work at all, and whereas a programmer can get `restrict` usage wrong (or alternately fail to have the confidence that a given pointer is truly unaliasable), the Rust compiler has perfect information.
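A hedged illustration of why: a `&mut` cannot coexist with any other reference to the same memory, so the compiler can assume non-aliasing where C would need `restrict`:

    // The borrow checker guarantees `dst` and `src` don't overlap,
    // which is what C's `restrict` merely promises.
    fn accumulate(dst: &mut [f32], src: &[f32]) {
        for (d, s) in dst.iter_mut().zip(src.iter()) {
            *d += *s;
        }
    }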
To be fair, there are also optimizations that C can do that Rust can't, often resulting from C's undefined behavior.
Rust's type system enforces a great deal more invariants in terms of lifetimes, ownership, and mutability than C, which enforces virtually nothing. This should therefore give the compiler far more room to make optimizations without the fear of changing the semantics of the program. So in the future, safe, idiomatic Rust code should be able to approach the performance of highly optimized C code without having to resort to unsafe code (note of course that C is implicitly unsafe).
I noticed that in the last round of the language shootout benchmarks, Rust was significantly behind C++ (and I also noticed that this was being addressed on the Rust reddit).
I'd like to see more benchmarking done between Rust and C++, where for each language there is one solution that is idiomatic or safe and another that optimizes for performance.
1. We haven't put in the time for the benchmarks game, because we're too busy working on the language.
2. IIRC, the C++ version uses GMP, and we don't, because licensing. You can install a package if you need GMP, but that doesn't work for the benchmarks game.
> I'd like to see more benchmarking done between Rust and C++, where for each language there is one solution that is idiomatic or safe and another that optimizes for performance.
Much of Rust's safety comes with no runtime overhead, but some of it does.
I thought it wouldn't be more than two times slower. The basic design was supposed to allow the speed of C with the added benefit of safety. But it's much worse:
Most of the 'problems' there are Rust programs not micro-optimised to the ridiculous extent of the C ones, e.g. the C ones almost all use SIMD intrinsics or features like OpenMP.
Rust is a new, experimental language and the support for SIMD is still in development, and we still don't have much in the way of data-parallel libraries. The type system has been expressly adjusted to allow for safe data-parallel libraries though, so they will appear.
Also, a few other points:
- Rust allows for opting in to unsafe code (with rules and overhead similar to C), which can be used for performance optimisations. Last time I checked, none of the Rust benchmark programs used any `unsafe`. (On the other hand, all of the C programs are effectively using `unsafe` code all the time.)
- C is using highly optimised libraries; Rust has zero-overhead FFI and so can easily call into them, but does not for those benchmarks (e.g. the pidigits one is just a benchmark of "are you using GMP?"; if the Rust were written to use it, there is no reason it would be slower than C)
- The worst offender (reverse-complement) is apparently written without any thought to performance, e.g. it does formatted IO for each and every character it emits, rather than writing to an in-memory buffer and dumping that in one go like the C (see the buffered-IO sketch after this list)
In summary, those benchmarks do not reflect the fundamental speeds of the languages, other than the suboptimal SIMD support in Rust.
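A sketch of the buffering difference described above (illustrative, not the actual benchmark code):

    use std::io::{self, Write, BufWriter};

    // Per-character formatted IO, as in the slow benchmark entry:
    fn emit_slow(data: &[u8]) {
        for &b in data {
            print!("{}", b as char); // one formatted write per byte
        }
    }

    // Buffer in memory, flush once:
    fn emit_fast(data: &[u8]) -> io::Result<()> {
        let stdout = io::stdout();
        let mut out = BufWriter::new(stdout.lock());
        out.write_all(data)?;
        out.flush()
    }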
All it takes is for someone to write a wrapper around GMP providing those parts of the API, and it will be less verbose and not require `unsafe` everywhere.
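A hedged sketch of what such a binding might look like; the struct layout is assumed from gmp.h on a typical 64-bit build, and the wrapper names are made up:

    use std::os::raw::{c_int, c_ulong};

    // Layout of __mpz_struct, assumed from gmp.h (mp_limb_t taken
    // to be an unsigned long here).
    #[repr(C)]
    struct Mpz {
        alloc: c_int,
        size: c_int,
        d: *mut c_ulong,
    }

    #[link(name = "gmp")]
    extern "C" {
        fn __gmpz_init(x: *mut Mpz);                              // mpz_init
        fn __gmpz_set_ui(x: *mut Mpz, v: c_ulong);                // mpz_set_ui
        fn __gmpz_mul_ui(r: *mut Mpz, a: *const Mpz, v: c_ulong); // mpz_mul_ui
        fn __gmpz_get_ui(x: *const Mpz) -> c_ulong;               // mpz_get_ui
        fn __gmpz_clear(x: *mut Mpz);                             // mpz_clear
    }

    // A thin safe wrapper confines `unsafe` to one module.
    struct BigInt(Mpz);

    impl BigInt {
        fn from_u64(v: u64) -> BigInt {
            unsafe {
                let mut z = Mpz { alloc: 0, size: 0, d: std::ptr::null_mut() };
                __gmpz_init(&mut z);
                __gmpz_set_ui(&mut z, v as c_ulong);
                BigInt(z)
            }
        }

        fn mul_u64(&mut self, v: u64) {
            // GMP allows the result and operand to be the same object.
            let p: *mut Mpz = &mut self.0;
            unsafe { __gmpz_mul_ui(p, p, v as c_ulong) }
        }

        fn to_u64(&self) -> u64 {
            unsafe { __gmpz_get_ui(&self.0) as u64 }
        }
    }

    impl Drop for BigInt {
        fn drop(&mut self) {
            unsafe { __gmpz_clear(&mut self.0) }
        }
    }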
Timing:
    $ /usr/bin/gcc -pipe -Wall -O3 -fomit-frame-pointer -march=native pidigits.c -o pidigits.gcc_run -lgmp
    $ time ./pidigits.gcc_run 10000 > /dev/null

    real    0m0.980s
    user    0m0.976s
    sys     0m0.000s

    $ rustc --opt-level=3 pidigits.rs -o pidigits.rust
    $ time ./pidigits.rust 10000 > /dev/null

    real    0m0.987s
    user    0m0.984s
    sys     0m0.000s
Bravo! It's still a good and important demonstration of what the language offers. GMP is LGPL, so you should add real safe wrappers (to avoid doing everything unsafe) and allow dynamic linking.
You see, I'm certainly not the only one who wouldn't know all this unless there are good benchmark examples (or have the luck of being able to ask and receive the answer from you). I believe Go couldn't do that once; I don't know the current state.
Thank you. But a lot of people wouldn't get that far. Therefore please do promote good examples of fast code that does something real, like these benchmarks.
Yes, I do prefer one good benchmark over a ton of stale manuals. Well-written benchmark code should show the best possible aspect of the language: how the code looks when it has to compete on speed too. If the code is demonstrably cleaner and almost as fast as C, then it can be a winner. If it's more than two times slower, I don't care, unless it has some fantastic library or framework which solves something much more easily than other languages.
Fast and nice-looking when solving something like a real problem. Benchmarks can demonstrate that; manuals and references typically can't.
Improve the benchmarks and submit them to the shootout. Use the benchmarks during development too, to remain focused on performance. It's not helpful when somebody in a small circle knows that a benchmark could be implemented better if there aren't any reference examples.
I believe Servo is your reference, but you definitely need somewhat wider horizons than that. The small benchmarks are a good representation of the rest of the world, and good advertisement once they are fast. Please don't ignore them.
They are not being ignored. Better versions of those benchmarks exist in-tree: https://github.com/rust-lang/rust/tree/master/src/test/bench (the shootout-....rs). The only reason they're not on the website is licensing problems (which have recently been resolved) and the fact that the benchmarks game is using the 0.11 release (while the in-tree benchmarks are kept up-to-date with the master branch and so don't necessarily compile with 0.11); this new release represents a good chance to push them up.
Those small benchmarks are not good representations of the rest of the world; they are contrived and limited problems, with some fairly arbitrary rules about which languages/implementations are valid to include, e.g. PyPy is not allowed, and Java gets JIT warm-up time etc.
The site could be renamed "The Computer Language Implementation Benchmarks Game", since it's not testing the language speed (best approximated by the fastest known implementation), just certain implementations, some of which are designed with priorities above speed in mind (e.g. CPython).
> Java does not get JIT warm-up time! Please stop making up misinformation!
Oh, sorry! I must've been misremembering something someone told me. (Although, how does `Cold` differ from `Usual` there? It's not clear from the text what the difference is.)
> best approximated by the fastest known implementation
Best not to become so confused: " Measurement is highly specific -- the time taken for this benchmark task, by this toy program, with this programming language implementation, with these options, on this computer, with these workloads."
(Seems like a better way to test would be to allow anything, with three sets of numbers: one for "time from source to as far as you can go without input" (i.e. compilation time, loading the source into RAM, that sort of thing), one for "time from input to end of first run", and one for "time for the nth run", with n high enough that the timing settles, i.e. after any JITters are done, that sort of thing.)
These release notes are a very good idea [1]. In a perfect world they would be even more detailed, a bit like the "What's New in Python x.x" notes.
Maybe by giving the gist of an RFC through an example or a high-level description before linking to it, because some RFCs are a bit overwhelming I think, more theoretical than illustrated with examples.
Actually, I've heard from at least one person that the code that they wrote for 0.11 didn't break when upgrading to 0.12! A milestone in Rust history!!
As we approach 1.0, these last few snapshot releases will become more and more valuable since the reduced frequency of breakage means that they will be more representative of the final language. The fact that people feel compelled to develop against nightlies is a bug, not a feature! :)
I recently received a pull request on a library that I hadn't updated in about 6 months, and was astonished how few changes it needed! Very exciting times.
The TL;DR is this: not having a default type makes you consider which size to choose. This is generally good. But there are two good cases in which having a default helps a lot: examples and tests. Both of these are small, and don't really care about the details.
In real code, inference means you can often get away with not annotating numbers, because the type is inferred. But in these two cases, there often isn't anything to drive the inference.
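A sketch of the difference (`takes_u8` is a made-up name; note that Rust did eventually gain an i32 fallback):

    fn takes_u8(_x: u8) {}

    fn main() {
        let a = 5;      // inferred as u8 from the call below
        takes_u8(a);

        let b = 5;      // nothing pins this down; at the time you had to
        println!("{}", b); // write `5i` (today's Rust falls back to i32)
    }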
Long ago, there was fallback, then it was removed, and now we're considering putting it back. This is a great example of Rust's practicality and the scientific/empirical mindset behind building the language.
Thanks! I support having the most convenient defaults, not only for integers but for other stuff too. Is there a default now for 3.33? Is there a default for "string in quotes"? If there is, integers should have a default too.
I believe that everybody can invent as much red tape as they want; the hard thing is making it concise and effective for everyday use, not for the writers of the compiler.
Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?
> Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?
There were (I think it was somewhere on GitHub). One of the things `let mut` facilitates is:
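The original example isn't preserved; one likely illustration is that `mut` is part of the binding pattern, so mutability is per-binding:

    fn main() {
        // `mut` attaches to individual bindings within a pattern:
        let (mut count, limit) = (0, 10);
        while count < limit {
            count += 1; // `count` is mutable; `limit` stays immutable
        }
        println!("{}", count);
    }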
> Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?
I was interested in the answer to this too. I found these which helped me understand:
The thing is, for floats there's a default that makes sense. For strings, there's a default that makes sense.
Integers, on the other hand, are harder. You _probably_ want i32, even on 64 bit machines, but then an i32 can't index an array, so for integers used that way, you want a machine-sized integer.
Unless you need the extra precision, using a float makes more sense than a double.
But with integers, it's different. It's not so much that an i32 _can't_ index an array, but that it _might not be able to_. On a 64 bit machine, you could have a very large array that's bigger than an i32 in size.
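Concretely (a small sketch in modern syntax, where indexing takes a machine-sized usize):

    fn main() {
        let v = vec![10, 20, 30];
        let i: i32 = 1;
        // let x = v[i];        // error: slices are indexed by usize
        let x = v[i as usize];  // explicit widening to the machine size
        println!("{}", x);
    }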
> Unless you need the extra precision, using a float makes more sense than a double.
No. Floats are OK for big storage, but for a lot of uses you want to do all the intermediate calculations in doubles and only convert the results back to floats when storing them. See
(at the moment top of the front page) where people have problems with Intel using 66 bits of mantissa internally for an input of 53 bits (a double!). You don't want to have partial results as floats unless you're doing graphics and simple calculations where you need far fewer bits. You'd never want to calculate a bridge construction with floats, even if the needed parts would at the end be specified with only 4 digits.
That's what I intended as part of 'if you need the precision.' You're absolutely right that even if the beginning and end are floats, you may want to do the intermediate calculations as a double.
Therefore it's often important to have fp constants with as many bits as possible. But there are exceptions again, e.g. when you use them to initialize float arrays.
Regarding indexes for arrays, there are typical uses that should be recognized: if I write a[1] I'd like to be able to address element 1. If I write a[0xffffffffffff] it's also clear it's more than 32 bits.
Right. There's tons of reasonable choices here. Which is why Rust currently has no defaults, and is why you need to write 5i today.
This whole topic is very much still under discussion. While many want defaults, what they default to is still a question. And there's a sizable group that doesn't want any default at all.
There is little performance advantage to using a 32-bit float. Their precision is low enough that you can't do anything useful without risking the correctness of your program.
They are, like shorts, an artefact of the past. Unlike shorts, their limitations are not within the intuition of the average programmer, which leads to widespread abuse and bugs.
Floats (and shorts) have a significant performance advantage when they're appropriate. For example, you can fit twice as many floats into cache compared to doubles, and good cache behaviour is really important these days; it's one of the reasons languages like C/C++/Rust with good control over memory layout can be very fast. Also, you can operate on twice as many floats with vectorised instructions; again, SIMD is important these days, especially for things like games (where the precision of a float is fine).
This is too brutal, you have to consider the application domain. For physics simulations I would agree with you. For signal processing or graphics applications float is usually perfectly fine (and even overkill for many DSP applications).
People have mentioned the cache-efficiency gain with floats, and that is already relevant to Rust. If in the future Rust could be used for GPGPU, the difference between float and double often becomes even more dramatic. You definitely want to keep float support for this.
Rust doesn't have a default for float types; it forces you to be explicit. `let x = 3.33;` without any other way to infer it will cause an error, forcing you to write either `3.33f32` or `3.33f64`.
The whole point of 1.0 is that they won't experiment like this anymore. They're using the freedom to break things now with the promise to lock it down later. I'm sure there will still be some experimentation, but the core language should be stable.
One thing rust has going for it is a very clear vision (safety, control, speed), which is based on fundamental concepts (e.g. lifetimes). Consequently, it has less of a need for a BDFL to declare what is or is not rust-ific.
The default seems to be "inferred". That is, 1 + 1i will work, but 1 + 1 is ambiguous. I've just noticed this in use, and don't know what this is officially called. I don't think this will actually be a real annoyance in practice.
It's just type inference. Rust doesn't do automatic integer coercion on arithmetic operations, so once it knows the type of any of the numbers it can deduce the types of all the rest.
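For instance (a modern-syntax sketch):

    fn main() {
        let a = 1u8;
        let b = a + 1; // the bare `1` is deduced to be u8 from `a`
        println!("{}", b);
    }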
Doesn't seem so - looks like we'll have to install cargo separately if we use Homebrew, which is kind of unfortunate. Guess I'll just go back to using rustup until brew installs cargo as well as rust.
I looked into adding one to homebrew-cask [0], but it isn't quite as easy as the Rust nightly, because it doesn't use a pkg. It shouldn't be too hard, though, to do it from that tar.