Rust 0.12.0 released (mail.mozilla.org)
281 points by bilalhusain on Oct 9, 2014 | 110 comments


I'm very excited at the progress in this release. In particular I'm thrilled at the ongoing focus on ergonomics and overall developer experience, what with the push for quality documentation (brand-new tutorials, code examples for all library functions, and API stability markers) and the rapid maturation of Cargo. Also, lifetime elision and `where` clauses have done wonders in reducing the horror that has historically plagued Rust function signatures.

There's also been tons of progress toward the last features that are blocking the 1.0 release (http://blog.rust-lang.org/2014/09/15/Rust-1.0.html). The long-awaited work on dynamically-sized types is now partially complete, and the only two remaining big hurdles that I can think of are closure reform and trait coherence reform. Onward to stability!


It has come a long way since I first started using it (0.6 era). Congratulations on the release! I look forward to a future world where I can use it (via Servo) in Firefox, as well as in my Constraint Solver engine, which I hope to reveal for the 2015 Minizinc Challenge.


MiniZinc! There's a blast from the past... I wrote most of the first few versions of the specification and much of the initial implementation. Good to hear it's still being useful.


Does the RFC 230 (https://github.com/rust-lang/rfcs/pull/230) removal of green threads give me 1:1 threading out of the box, or does it just mean that std::io uses native threads and green threads are now only available as an additional outside dependency? Kinda confused, given I don't follow Rust all that much.

Really odd that they went with the M:N model - if the OS is fast at creating/destroying threads, 1:1 is the best model for the vast majority of apps.


The default is 1:1 threads. If you use `spawn` in Rust, you get a native thread.
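In today's Rust this lives at `std::thread::spawn` (the 0.12-era free function `spawn` was later moved there); a minimal sketch of 1:1 spawning:

```rust
use std::thread;

fn main() {
    // Under the 1:1 model, each spawn creates a real OS thread.
    let handle = thread::spawn(|| 2 + 2);
    // join() waits for the thread and yields its return value.
    assert_eq!(handle.join().unwrap(), 4);
}
```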


Oh, it is? I must be reading some old docs and mailing list discussions. That's great to hear. Thanks for the clarification!


It's changed a number of times. First it was M:N, then 1:1 was added, then 1:1 was made the default, now M:N is getting removed.


Does anything preclude M:N threading in a library? It is really useful in quite a number of scenarios.


The existing libgreen will be moved out to a Cargo library. You just won't be able to seamlessly switch between the two like you could before; you'll have to explicitly use the green threading API instead.


How will this work if you depend on a lib that uses traditional threading but want to use green threads yourself? That actually comes up a lot at my work.


You'd have to use one that uses green threads internally.

Basically, in the effort to make the abstraction Just Work between 1:1 and N:M threading, we made green threads so heavyweight that they barely even qualified as green anymore. It's only by removing that abstraction that we could feasibly make them lightweight again.


The current implementation that's being phased out was designed to abstract away the threading model such that libraries could make use of concurrency while the programs that use those libraries could use any threading model they choose, and it would all Just Work. However, in practice this caused too many compromises in the implementation of both the native thread runtime and the green thread runtime, erasing the benefits of both.

Sorry! We're sensitive to your use case, but we just couldn't make it work.


No, and we actually use a "tasklet" library in Servo for the really fine-grained parallelism (per-CSS-block) where any allocation on task spawn would be far too expensive and work stealing is essential.


Could you point me to this? I did a cursory look through the Servo codebase. It just occurred to me that since Servo is pretty much the reason for Rust's existence, it is probably the best codebase to read in order to learn Rust.

I have had a problem learning Rust by example, the codebases I would like to read don't usually work with the latest Rust releases.

I don't really _do_ C++, but if I could talk to Rust from C++ maybe I could start hacking on their codebases.

Edit: I found http://doc.rust-lang.org/green/index.html - is this what you are referring to?


I believe the goal is for there to be an officially-supported green threading library, yes.


excuse my ignorance, but what exactly does 'lifetime elision' mean?


Part of Rust's ownership system is 'lifetimes.' Without getting into the details, they're an annotation you occasionally add to code in certain situations. Like these two:

    fn get_mut<'a>(&'a mut self) -> &'a mut T;
    fn print<'a>(s: &'a str);
The 'a parts are the lifetime annotations.

We used to have a single inference rule for when you didn't need to write the annotation. With RFC 39[1], we added two more. These two extra inference rules removed 87% of the existing annotations, giving you this instead:

    fn get_mut(&mut self) -> &mut T;
    fn print(s: &str);
This is a pretty huge readability win.

1: https://github.com/rust-lang/rfcs/blob/master/active/0039-li...


oh ok, thanks. i knew what lifetimes were, but was confused by the 'elision.' Apparently I didn't connect 'elision' with 'elide.' So, new vocab word for today :)


In what kinds of cases do you still need to write explicit lifetimes?


I actually haven't had to since elision was implemented, so I can't give you a good rule of thumb from experience. Here's an overview of how the rules applied to the standard library at the time they were implemented, which is where the percentage came from: https://gist.github.com/aturon/da49a6d00099fdb0e861


The most common case I have run into is storing a reference inside another data structure.
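For instance (a made-up `Wrapper` type): elision only covers function signatures, so a type that stores a reference still needs its lifetime written out:

```rust
// A struct that stores a reference always needs an explicit
// lifetime parameter; elision only applies to fn signatures.
struct Wrapper<'a> {
    inner: &'a str,
}

fn main() {
    let s = String::from("hello");
    let w = Wrapper { inner: &s };
    assert_eq!(w.inner, "hello");
}
```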


Basically, after working with lifetimes for so long we realized that a great many functions that take lifetime parameters follow a similar and straightforward pattern. As of this release, we now allow you to omit lifetime parameters entirely for functions that follow this pattern, which is likely the vast majority of functions in your code that previously required such annotations.

The detailed release notes have a more, er, detailed explanation: https://github.com/rust-lang/rust/wiki/Doc-detailed-release-...


This is a great change.

But to clarify, functions which take 2 or more reference inputs won't benefit from this, since the compiler can't determine which lifetime parameter to use, right?

Edit: Never mind, I RTFM. Turns out it can't benefit.
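For example, with two reference parameters and no `&self`, the compiler can't tell which input the output borrows from, so the lifetime must still be spelled out (a hypothetical `longer`):

```rust
// Two reference inputs: the compiler can't guess which one the
// output borrows from, so the lifetime must be explicit.
fn longer<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    assert_eq!(longer("hello", "hi"), "hello");
}
```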


In cases where a function's input and output have the same lifetime, the lifetime is now inferred. So instead of this:

    fn f<'a>(x: &'a SomeType) -> &'a OtherType
you can now write this:

    fn f(x: &SomeType) -> &OtherType
Same meaning, just more concise syntax.


Uhm, what if I didn't want `&OtherType` to have the same lifetime? Would I then write:

    fn f<'a>(x: &'a SomeType) -> &'b OtherType

?


That would be illegal, because it only makes sense to return a reference when that reference is somehow linked to the lifetime of an input parameter.

(Or when it has the `'static` lifetime, which means the referent is stored in static memory and is thus alive for the whole program.)


What if you're looking up the input in a temporary cache of some kind, and returning a reference to the cached value? The lifetime of the input and output would not be related in that scenario, but the output would not necessarily be 'static (maybe you build a cache, run some functions, tear it all down, then build a new cache with different values and do it all again).


The lifetime would be matched to the lifetime of the cache, which has to be referred to somehow.
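A sketch of that (hypothetical `lookup`): the returned reference borrows from the cache parameter, so its lifetime is tied to the cache rather than to the key:

```rust
use std::collections::HashMap;

// The output borrows from `cache`, not from `key`, so its
// lifetime is tied to how long the cache itself lives.
fn lookup<'c>(cache: &'c HashMap<String, String>, key: &str) -> Option<&'c str> {
    cache.get(key).map(|v| v.as_str())
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert("k".to_string(), "v".to_string());
    assert_eq!(lookup(&cache, "k"), Some("v"));
}
```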


Ah cool, I still have a lot to learn about rust :)


You wouldn't use a reference generally, if you're thinking of creating a new independent object you'd just return that. This is for cases when your return object is a reference into your argument, for example
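For example, something like this hypothetical `first_word`, where the returned slice points into the argument:

```rust
// The return value is a slice *into* `s`, so it shares `s`'s
// lifetime; with elision the signature needs no annotations.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
}
```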


Yes, though this particular code would not compile because one of the lifetimes is not declared.


ah, got it. thanks!


Is there like a what's new in Firefox but for Rust?


Yes. Click on the link and scroll down.


Noticeably absent is a really good HTTP library.


Chris Morgan is working on teepee[0] though, so it's coming.

[0] https://github.com/teepee/teepee


There is also hyper which is used by Servo: https://github.com/hyperium/hyper


From someone that sucks at C (learned a bit here and there a decade ago and never used it) and recently started looking into 'system' languages again:

- The community was very friendly, although my questions probably were more on the 'RTFM' side of things

- The rust compiler is amazing. The error messages it provides are clear and helpful, often including potential spelling corrections and/or repeating the part of the code that causes trouble, closing with a suggestion to fix that.

It's not quite magic, but really impressive and helped me a lot to get started. Being a Windows dev by day I do bemoan the fact that this platform is a bit .. behind, but I'm actively pursuing a sideproject based on rust (on Windows) atm.


> The rust compiler is amazing. The error messages it provides are clear and helpful

I'm sure that Clang inspired the great error messages - it sets a very high bar for new compilers (and old - GCC is still catching up).

> Being a Windows dev by day I do bemoan the fact that this platform is a bit .. behind

I'm sure that once 1.0 is released there will be a greater effort to make the Windows experience more seamless. This will be extremely important if we want to convince more game developers to give Rust a go. Have you seen https://github.com/PistonDevelopers/VisualRust/?


Nope, haven't seen VisualRust before. Thanks.

That said, IDE support isn't exactly what I was referring to.

- you need to install mingw separately and there are lots of reports about conflicts with other software that might bundle a different version of mingw

- installing cargo is weird (so far .. I failed and don't use it)

And even with my first baby-steps (i.e. doing basically nothing of value) I discovered some limitations, one being that you cannot create a DLL that exports an unmangled entry point for stdcall (other calling conventions don't have that issue and follow #[no_mangle] or #[export_name="foo"]).

BUT that's really not a huge deal for me. Just a bumpy road at times.


All this is good stuff.

I think it will soon be time to look at performance. Performance should be close to or better than C++ on average. We know about "the root of all evil". But if there is going to be any large uptake from those using C++ currently (and I imagine a lot of those developers are concerned about performance, memory usage, etc.), Rust simply has to compete on performance as well.

Go in a way started that path. The whole "systems language" re-interpretation and all. But it never quite became the systems systems language. It can't quite eat C's and C++'s lunch figuratively speaking.

Rust might be able to do it. But regardless of concurrency, type safety and other features, if it cannot be fast enough it will have a hard time.


> I think soon should be a time to look at performance. Performance should be close to or better than C++ on average.

The language has been designed from the ground up to compete on this level. Also note that Rust potentially has far greater room for compile-time optimizations than C and C++, because the compiler knows far more about lifetimes and ownership. This can be seen in one of the last items in the release notes:

> Stack usage has been optimized with LLVM lifetime annotations.


We already do pay attention to performance. In general, Rust should already be in the same league as C or C++. Please open issues if you find something that isn't.


And even more important than current benchmark numbers is that the language is explicitly designed to be optimizable, and takes the idea of "zero-cost abstractions" seriously. I can count on two fingers the number of places where we impose dynamic checks by default in the name of safety, and one of those can be unsafely turned off and the other can be safely disabled with proper code structuring.

(Theoretically we can also leverage Rust's ownership rules to automatically generate aliasing information. When implemented this should be an across-the-board speed boost for all Rust code in existence, in a way that C and C++ compilers can only dream of.)
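One of the default dynamic checks mentioned above, array bounds checking, can be sketched like this (with `get_unchecked` as the unsafe opt-out):

```rust
fn main() {
    let v = vec![10, 20, 30];
    // Indexing is bounds-checked by default; v[3] would panic.
    assert_eq!(v[2], 30);
    // In unsafe code the check can be skipped when the index is
    // already known to be in range.
    let x = unsafe { *v.get_unchecked(2) };
    assert_eq!(x, 30);
}
```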


Would there also be a performance boost compared to C code which properly uses 'restrict'?


The underlying compiler backend would use the same alias optimization pass for both Rust and C. Rust's advantages here are that all Rust code is automatically eligible for such optimization without the programmer having to do any work at all, and whereas a programmer can get `restrict` usage wrong (or alternately fail to have the confidence that a given pointer is truly unaliasable), the Rust compiler has perfect information.

To be fair, there are also optimizations that C can do that Rust can't, often resulting from C's undefined behavior.


Thanks for the information! Looking very much forward to rust 1.0.


Rust's type system enforces a great deal more invariants in terms of lifetimes, ownership and mutability than C, which enforces virtually nothing. This should therefore give the compiler far more room to make optimizations without the fear of changing the semantics of the program. So in the future safe, idiomatic Rust code should be able to approach the performance of highly optimized C code, without having to resort to unsafe code (note of course that C is implicitly unsafe).


I noticed in the last round of the language shootout benchmarks that Rust was significantly behind C++ (and I also noticed that this was being addressed on the Rust reddit).

I'd like to see more benchmarking done between rust and C++ where there is a solution for each language that is idiomatic or safe while another solution optimizes for performance.


There's a few reasons for this:

1. We haven't put in the time for the benchmarks game, because we're too busy working on the language.

2. IIRC, the C++ version uses GMP, and we don't, because of licensing. You can install a package if you need GMP, but that doesn't work for the benchmarks game.

> I'd like to see more benchmarking done between rust and C++ where there is a solution for each language that is idiomatic or safe while another solution optimizes for performance.

Much of Rust's safety comes with no runtime overhead, but some of it does.


> You can install a package if you need GMP, but that doesn't work for the benchmarks game.

Why doesn't that work for the benchmarks game?

> Much of Rust's safety comes with no runtime overhead, but some of it does.

I guess he'd like to find out how much overhead.


> but that doesn't work for the benchmarks game.

fyi

http://benchmarksgame.alioth.debian.org/u64/program.php?test...


I thought it wouldn't be more than two times slower. The basic design was supposed to allow the speed of C with the added benefit of safety. But it's much worse:

http://benchmarksgame.alioth.debian.org/u64/benchmark.php?te...

I agree with you; unless the language creators address this, the language has a much greater chance of remaining unused.


Most of the 'problems' there are Rust programs not micro-optimised to the ridiculous extent of the C ones, e.g. the C ones almost all use SIMD intrinsics or features like OpenMP.

Rust is a new, experimental language and the support for SIMD is still in development, and we still don't have much in the way of data-parallel libraries. The type system has been expressly adjusted to allow for safe data-parallel libraries though, so they will appear.

Also, a few other points:

- Rust allows for opting in to unsafe code (with rules and overhead similar to C), which can be used for performance optimisations. Last time I checked, none of the Rust benchmark programs used any unsafe code. (On the other hand, all of the C programs are effectively using `unsafe` code all the time.)

- C is using highly optimised libraries; Rust has zero-overhead FFI, and so can easily call into them, but does not for those benchmarks (e.g. the pidigits one is just a benchmark of "are you using GMP?"; if the Rust program were written to use it, there is no reason it would be slower than C)

- The worst offender (reverse-complement) is apparently written without any thought of performance, e.g. it is doing formatted IO for each and every character it emits, rather than writing to an in-memory buffer and dumping that in one go like the C.

In summary, those benchmarks do not reflect the fundamental speeds of the languages, other than the suboptimal SIMD support in Rust.


In fact: I translated the C pidigits program to Rust: https://gist.github.com/huonw/4b326e9a73a40df664cd

All it takes is someone to write a wrapper around gmp providing those parts of the API and it will be less verbose and not require the `unsafe` everywhere.

Timing:

  $ /usr/bin/gcc -pipe -Wall -O3 -fomit-frame-pointer -march=native  pidigits.c -o pidigits.gcc_run -lgmp
  $ time ./pidigits.gcc_run 10000 > /dev/null

  real    0m0.980s
  user    0m0.976s
  sys     0m0.000s

  $ rustc --opt-level=3 pidigits.rs -o pidigits.rust
  $ time ./pidigits.rust 10000 > /dev/null

  real    0m0.987s
  user    0m0.984s
  sys     0m0.000s


Bravo! It's still a good and important demonstration of what the language offers. GMP is LGPL, so you should add real safe wrappers (to avoid doing everything unsafely) and allow dynamic linking.

Can Rust link dynamically?


Yes, it can link dynamically:

  $ ldd pidigits.rust
        linux-vdso.so.1 (0x00007fff8b9e7000)
        libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10 (0x00007f5c233be000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5c231ba000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5c22f9c000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f5c22d86000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5c229de000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f5c2372b000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5c226dc000)


You see, I'm certainly not the only one who wouldn't know all this unless there are good benchmark examples (or the luck of being able to ask and receive the answer from you). I believe Go couldn't do that once; I don't know the current state.

Thank you. But a lot of people wouldn't get so far. Therefore please do promote good examples of the fast code that do something real, like these benchmarks.


Linkage and FFI are described (to some degree, at least) in the FFI guide. http://doc.rust-lang.org/nightly/guide-ffi.html

Do you prefer reading benchmarks over the textual documentation? (genuine question)


Yes, I do prefer one good benchmark over a ton of stale manuals. Well-written benchmark code shows the best possible aspect of the language: how the code looks when it has to compete on speed too. If the code is demonstrably cleaner and almost as fast as C, then it can be a winner. If it's more than 2 times slower, I don't care, unless it has some fantastic libraries or a framework which solves something much more easily than other languages.

Fast and looking nice when solving something like a real problem. Benchmarks can demonstrate that. Manuals and references typically not.


> Fast and looking nice when solving something like a real problem. Benchmarks can demonstrate that. Manuals and references typically not.

They certainly can. The actual "specification" documents probably not, but guides and tutorials (like Rust offers) can and (some of them) do.


Improve the benchmarks and submit them to the shootout. Use the benchmarks during development too, to remain focused on performance. It's not helpful when somebody in the small circle knows that the benchmark can be implemented better if there aren't any reference examples.

I believe Servo is your reference, but you definitely need to have a bit wider horizons than that. The small benchmarks are a good representation of the rest of the world. And good advertisement, once they are fast. Please don't ignore them.


They are not being ignored. Better versions of those benchmarks exist in-tree: https://github.com/rust-lang/rust/tree/master/src/test/bench (the shootout-....rs). The only reason they're not on the website is licensing problems (which have recently been resolved) and the fact that the benchmarks game is using the 0.11 release (while the in-tree benchmarks are kept up-to-date with the master branch and so don't necessarily compile with 0.11); this new release represents a good chance to push them up.

Those small benchmarks are not good representations of the rest of the world; they are contrived and limited problems, with some fairly arbitrary rules about which languages/implementations are valid to include, e.g. PyPy is not allowed, and Java gets JIT warm-up time etc.


> The only reason they're not on the website is…

… that no one has contributed them.

> … with some fairly arbitrary rules about which languages/implementations are valid to include, e.g. PyPy is not allowed …

Hundreds of programming language implementations are not included! It would take more time than I choose to donate. Been there; done that.

http://benchmarksgame.alioth.debian.org/play.html#languagex

> … and Java gets JIT warm-up time etc.

Java does not get JIT warm-up time! Please stop making up misinformation!

http://benchmarksgame.alioth.debian.org/play.html#measure

http://benchmarksgame.alioth.debian.org/play.html#java


The site could be renamed to be "The Computer Language Implementation Benchmarks Game", since it's not testing the language speed (best approximated by the fastest known implementation), just certain implementations, some of which are designed with priorities above speed in mind (e.g. cPython).

> Java does not get JIT warm-up time! Please stop making up misinformation!

Oh, sorry! I must've been misremembering something someone told me. (Although, how does `Cold` differ from `Usual` there? It's not clear from the text what the difference is.)


> best approximated by the fastest known implementation

Best not to become so confused: " Measurement is highly specific -- the time taken for this benchmark task, by this toy program, with this programming language implementation, with these options, on this computer, with these workloads."

> `Usual`

http://benchmarksgame.alioth.debian.org/play.html#measure

> `Cold`

http://benchmarksgame.alioth.debian.org/play.html#java

"Here are some additional (Intel® Q6600® quad-core) elapsed time measurements, taken after the Java programs started and before they exited:

In the first case (Cold), we simply started and measured the program 66 times; and then discarded the first measurement leaving 65 data points."


Yow.

I didn't know that.

(Seems like a better way to test would be to allow anything, with three sets of numbers. One for "time from source to as far as you can go without input" (i.e. compilation time, loading the source into RAM, that sort of thing) "time from input to end of first run", one for "time for nth run", with n being high enough that the timing settles. So after any JITters are done, that sort of thing.)


You "didn't know that" because it isn't true.


These release notes [1] are a very good idea. In a perfect world they would be even more detailed, a bit like the "What's New in Python x.x" notes.

Maybe by giving the gist of an RFC through an example or a high-level description before linking to it, because some RFCs are a bit overwhelming, I think, more theoretical than illustrated with examples.

[1] https://github.com/rust-lang/rust/wiki/Doc-detailed-release-...


Because it's easy to miss, note that there is a longer list of summarized changes below the links in the OP, also reproduced in this file: https://github.com/rust-lang/rust/blob/master/RELEASES.md


...and as usual no one will actually use it ;)

(Because of the nightlies, I mean -- the language itself even I use!)

But really, is there any reason to have these "releases" that almost no ones uses, or is it just to have a concrete changelog every once in a while?


Actually, I've heard from at least one person that the code that they wrote for 0.11 didn't break when upgrading to 0.12! A milestone in Rust history!!

As we approach 1.0, these last few snapshot releases will become more and more valuable since the reduced frequency of breakage means that they will be more representative of the final language. The fact that people feel compelled to develop against nightlies is a bug, not a feature! :)


I recently received a pull request on a library that I hadn't updated in about 6 months, and was astonished how few changes it needed! Very exciting times.


Reading the new Guide:

http://doc.rust-lang.org/guide.html

"The first thing we'll learn about are 'variable bindings.' They look like this:

    let x = 5i;
"

And so on: "let (x, y) = (1i, 2i); let x = 5i; x = 10i; "

I had to double-check whether the default (when the i is not written) is something other than int. It is confusing.


Currently, there is no default. This is probably changing back. See https://github.com/rust-lang/rust/issues/15526 and the associated issues.

The TL;DR is this: not having a default type makes you consider which size to choose. This is generally good. But there are two good cases in which having a default helps a lot: examples and tests. Both of these are small, and don't really care about the details.

In real code, inference means you can often get away with not annotating numbers, because the type is inferred. But in these two cases, there often isn't anything to do the inference.

Long ago there was fallback, then it was removed, and now we're considering putting it back. This is a great example of Rust's practicality and the scientific/empirical mindset behind building the language.
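For the record, the fallback did eventually return: in today's Rust an unconstrained integer literal defaults to `i32`, while suffixes (the modern spelling of 0.12's `5i`) pin the type explicitly:

```rust
fn main() {
    let a = 5i64; // explicit suffix: 64-bit integer
    let b = 5;    // no constraint: falls back to i32
    assert_eq!(std::mem::size_of_val(&a), 8);
    assert_eq!(std::mem::size_of_val(&b), 4);
}
```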


Thanks! I support having the most convenient defaults. Not only for integers, but for other stuff too. Is there a default now for 3.33? Is there a default for "string in quotes"? If there is, integers should have a default too.

I believe that everybody can invent as much red tape as they want; the hard thing is making it concise and effective for everyday use, not for the writers of the compiler.

Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?


> Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?

There were (I think it was somewhere on GitHub). One of the things `let mut` facilitates is:

    fn main() {
        let (i, mut j) = (10i, 10i);
        j += 1;
        println!("{}", (i, j));
    }
I'm sure there are other applications for it too (such as in pattern matching). It just fits better into Rust.


> Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?

I was interested in the answer to this too. I found these which helped me understand:

https://github.com/rust-lang/rust/issues/1273

https://github.com/rust-lang/rust/issues/2643


The thing is, for floats there's a default that makes sense. For strings, there's a default that makes sense.

Integers, on the other hand, are harder. You _probably_ want i32, even on 64 bit machines, but then an i32 can't index an array, so for integers used that way, you want a machine-sized integer.
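This is why, in today's Rust, indexing takes the machine-sized `usize`; an `i32` index has to be cast first:

```rust
fn main() {
    let v = vec![10, 20, 30];
    let i: i32 = 2;
    // Indexing requires usize, so an i32 must be cast explicitly.
    assert_eq!(v[i as usize], 30);
}
```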


> for floats there's a default that makes sense

Really? In C I use both doubles and floats all the time.

> i32 can't index an array

Why? Is it because Rust would use less than 32 bits for indexes on some platforms? And how should array indexes be written then?

You really have a hard job when such basic things change.


Unless you need the extra precision, using a float makes more sense than a double.

But with integers, it's different. It's not so much that an i32 _can't_ index an array, but that it _might not be able to_. On a 64 bit machine, you could have a very large array that's bigger than an i32 in size.


> Unless you need the extra precision, using a float makes more sense than a double.

No. Floats are OK for bulk storage, but for a lot of uses you want to do all the intermediate calculations in doubles and convert back to floats only when storing the results. See

https://news.ycombinator.com/item?id=8434128

(at the moment top of the front page) where people have a problem with Intel using a 66-bit mantissa internally for input of 53 bits (a double!). You don't want to have partial results as floats unless you're doing graphics or simple calculations where you need far fewer bits. You'd never want to calculate a bridge construction with floats, even if the needed parts would in the end be written with only 4 digits.


That's what I intended as part of 'if you need the precision.' You're absolutely right that even if the beginning and end are floats, you may want to do the intermediate calculations as a double.


Therefore it's often important to have fp constants with as many bits as possible. But there are exceptions again: unless you use them to initialize float arrays etc.

Regarding indexes for arrays, there are typical uses that should be recognized: if I write a[ 1 ] I'd like to be able to address the element 1. If I write a[ 0xffffffffffff ] it's also clear it's more than 32 bits.


Right. There's tons of reasonable choices here. Which is why Rust currently has no defaults, and is why you need to write 5i today.

This whole topic is very much still under discussion. While many want defaults, what they default to is still a question. And there's a sizable group that doesn't want any default at all.


Floats are like 16-bit shorts. Useless.

There is little performance advantage to using a 32-bit float. Their precision is low enough that you can't do anything useful without risking the correctness of your program.

They are, like shorts, an artefact of the past. Unlike shorts, their limitations are not within the intuition of the average programmer, which leads to widespread abuse and bugs.

Kill them before it's too late.


Floats (and shorts) have a significant performance advantage when they're appropriate. For example, you can fit 2x as many floats into cache compared to doubles, and good cache behaviour is really important these days; it's one of the reasons languages like C/C++/Rust with good control over memory layout can be very fast. Also, you can operate on 2x as many floats with vectorised instructions; again, SIMD is important these days, especially for things like games (where the precision of a float is fine).


This is too brutal, you have to consider the application domain. For physics simulations I would agree with you. For signal processing or graphics applications float is usually perfectly fine (and even overkill for many DSP applications).

People mentioned the cache efficiency gain with floats, and that is already relevant to rust. If in the future maybe rust could be used for GPGPU, then the difference between float and double often becomes even more dramatic. You definitely want to keep float support for this.


Packets for networked multiplayer games are another place where half-precision floats are acceptable: http://www.gamasutra.com/blogs/MarkMennell/20140929/226628/M...

(Though it must be said that Rust doesn't actually have a 16-bit float type at all.)


You are wrong. Ever played 60 fps games on smaller devices? They would not be possible without floats.


Rust doesn't have a default for float types, it forces you to be explicit. `let x = 3.33;` without any other way to infer it will cause an error, forcing you to either write `3.33f32` or `3.33f64`.


> This is a great example of Rust's practicality and a scientific/empiric mindset to building the language.

Rust sometimes seems like the antithesis to BDFL. I quite like that they experiment, but I'm wondering how this will work out post 1.0.


The whole point of 1.0 is that they won't experiment like this anymore. They're using the freedom to break things now with the promise to lock it down later. I'm sure there will still be some experimentation, but the core language should be stable.


One thing rust has going for it is a very clear vision (safety, control, speed), which is based on fundamental concepts (e.g. lifetimes). Consequently, it has less of a need for a BDFL to declare what is or is not rust-ific.


The default seems to be "inferred". That is, 1 + 1i will work, but 1 + 1 is ambiguous. I've just noticed this in use, and don't know what this is officially called. I don't think this will actually be a real annoyance in practice.


It's just type inference. Rust doesn't do automatic integer coercion on arithmetic operations, so once it knows the type of any of the numbers it can deduce the types of all the rest.
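A small sketch of that deduction: once one operand's type is known, the unsuffixed literal takes the same type:

```rust
fn main() {
    let a = 1u8;
    // `1` here has no suffix, but adding it to a u8 forces it
    // (and the result) to be u8 as well.
    let b = a + 1;
    assert_eq!(std::mem::size_of_val(&b), 1);
}
```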


Quick note: as of 09-Oct-2014 at 18:20 EDT, the page http://www.rust-lang.org/install.html shows the release date of 0.12 as July 2, 2014.



How long until this hits Homebrew?


> (will merge when the bot finishes and goes green)

https://github.com/Homebrew/homebrew/pull/33060


Is there a Cargo build on Homebrew?


Doesn't seem so - looks like we'll have to install cargo separately if we use homebrew, which is kind of unfortunate. Guess I'll just go back to using rustup until brew installs cargo as well as rust.


I went ahead and added this to homebrew cask! Here's my PR[0]. Hopefully it will be merged soon.

[0]: https://github.com/caskroom/homebrew-cask/pull/6591


I looked into adding one to homebrew-cask[0], but it isn't quite as easy as the rust nightly, because it doesn't use a pkg. It shouldn't be too hard though, to do it from that tar.

[0]: https://github.com/caskroom/homebrew-cask


Unsure, it's been a long time since I've actually owned a Mac. It doesn't look like it, though.


Dunno, but they already have pre-built installers for nightly.


Likely the 1.0 release. Most Linux distros are holding off shipping rust until the 1.0 release.


Nope, homebrew has included Rust for a while: https://github.com/Homebrew/homebrew/blob/master/Library/For...


Homebrew has formulas for Rust. I installed Rust 0.11 yesterday :)


0.12.0 is up now.



