
Rust 1.25 released - Manishearth
https://blog.rust-lang.org/2018/03/29/Rust-1.25.html
======
kibwen
The big thing in this one is the LLVM upgrade, which has been a _long_ time
coming. :) We actually skipped LLVM 5 entirely due to some issues with
emscripten. It'll be great to finally be able to experiment with LLD.

Looking forward, the next two releases, 1.26 and 1.27, are going to have some
major features that we've been working on for a long, long time, such as "impl
trait" (which should allow for improved performance, simpler syntax for the
typical use of generics, and greatly improved error messages for extensive use
of generics), more ergonomic matching when dealing with references, 128-bit
integers, stable SIMD library support, and others. This whole year is going to
see the continued stabilization of the ergonomics initiatives from last year,
so stay tuned for lots of welcome tweaks to make Rust easier to write and
read. :)
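
For anyone unfamiliar with the "impl trait" feature mentioned above, here's a minimal sketch of what it enables (the function and names are my own illustration, not from the release notes): a function can return "some type implementing a trait" without spelling out the full generic type.

```rust
// `impl Trait` in return position: the caller only knows the return
// type implements Iterator<Item = u32>; the concrete closure-bearing
// adapter type never has to be written out.
fn evens(limit: u32) -> impl Iterator<Item = u32> {
    (0..limit).filter(|n| n % 2 == 0)
}

fn main() {
    let v: Vec<u32> = evens(10).collect();
    assert_eq!(v, vec![0, 2, 4, 6, 8]);
    println!("{:?}", v);
}
```

Before this, returning such an iterator required boxing (`Box<Iterator<Item = u32>>`) or naming an unnameable type, which is where the simpler syntax and better error messages come from.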

(I should also mention that this release happened live from the Rust All-Hands
happening in Berlin, which has been a massively productive time so far; keep
an eye out for the eventual workweek recap which will hopefully enumerate the
tons of improvements (e.g. reducing Rust binary sizes by 10%) and future
initiatives (e.g. forming an official Verification Team) that have happened
this week.)

~~~
simias
>stable SIMD library support

Oh, that's a huge one for me and I didn't even know it was in the pipeline,
very happy to hear that. Now I'll have to complain about lack of stable inline
ASM support instead!

~~~
steveklabnik
You can check it out in nightly here: [https://doc.rust-lang.org/nightly/std/arch/](https://doc.rust-lang.org/nightly/std/arch/)

> Now I'll have to complain about lack of stable inline ASM support instead!

This one is much, much harder.

~~~
simias
>This one is much, much harder.

I'm curious why that is; intuitively I'd have assumed that SIMD support would
be trickier to stabilize (since you need a generic interface over a wide range
of implementations) while inline ASM is pretty much "just copy-paste this bit
of assembly in the middle of this function" which I assume is already
something LLVM can do easily since it needs that for C/C++. Since nobody
expects inline ASM to be portable you don't have to worry about making one
single generic interface to it.

What's the catch?

~~~
burntsushi
w.r.t. SIMD, there are two prongs to it:

* Platform dependent functions that match an upstream vendor's API. (Think __m128i and friends.)

* Platform independent APIs that provide portable vector APIs.

The former is necessary because the latter cannot ever hope to cover all use
cases supported by CPUs. The latter is necessary because it can hit many use
cases, and finding the optimal implementation for every operation for every
permutation of platform (and every permutation of CPU feature on each
platform) is a non-trivial task.

So yeah, that is indeed a ton of work. However, LLVM does a lot of the heavy
lifting for us, particularly with respect to the implementation of the
portable API. "All" we have to do is define an API we're willing to stabilize
in `std`. (Which is no easy task.)

I'm less in touch with the inline ASM feature, but like most things, I expect
it is much more complex than you're suggesting. You can read more about it on
the tracking issue for inline ASM[1]. Remember, if we're stabilizing
something, then we need to make sure it continues to work. If LLVM doesn't
uphold that guarantee, then we must provide our own interface that is stable.
"Just expose LLVM's internals" is not really an acceptable answer,
and it was in fact a large part of the SIMD effort (building APIs we can
stabilize on top of what LLVM supports).

I'd also like to note that the stabilization of the former piece above
(platform dependent vendor intrinsics) eats into _many_ of the use cases for
inline ASM. Not all of course, but many. :-)

[1] - [https://github.com/rust-lang/rust/issues/29722](https://github.com/rust-lang/rust/issues/29722)
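
To make the first prong concrete, here's a minimal sketch of what the vendor-intrinsic layer under `std::arch` looks like in practice (the function name and shape are my own illustration; SSE2 is assumed since it's part of the x86_64 baseline, with a scalar fallback for other targets):

```rust
// Add four i32 lanes at once using the SSE2 vendor intrinsics exposed
// under std::arch, matching Intel's __m128i API.
#[cfg(target_arch = "x86_64")]
fn add4(a: [i32; 4], b: [i32; 4]) -> [i32; 4] {
    use std::arch::x86_64::*;
    // SSE2 is part of the x86_64 baseline, so no runtime feature
    // detection is needed here; other feature levels would use
    // is_x86_feature_detected! first.
    unsafe {
        let va = _mm_loadu_si128(a.as_ptr() as *const __m128i);
        let vb = _mm_loadu_si128(b.as_ptr() as *const __m128i);
        let sum = _mm_add_epi32(va, vb);
        let mut out = [0i32; 4];
        _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, sum);
        out
    }
}

// Scalar fallback for non-x86_64 targets.
#[cfg(not(target_arch = "x86_64"))]
fn add4(a: [i32; 4], b: [i32; 4]) -> [i32; 4] {
    [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
}

fn main() {
    assert_eq!(add4([1, 2, 3, 4], [10, 20, 30, 40]), [11, 22, 33, 44]);
    println!("{:?}", add4([1, 2, 3, 4], [10, 20, 30, 40]));
}
```

The second prong (the portable vector API) would let you write the same lane-wise addition once and have it lower to the best instruction on each target.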

~~~
steveklabnik
Yes, the fact that we basically just do what LLVM does, and that that's not
stable, is a huge blocker. There's also the broader question of "DSL" vs the
way that LLVM does it...

------
alex_duf
Novice question: it says one step closer to supporting AVR; am I correct to
assume the Arduino is part of that family?

Because if so, I have an amazing excuse to finally learn Rust.

~~~
dancek
If you want to get something done with microcontrollers, ARM Cortex-M tends to
be much more powerful and is already great fun to write Rust for. It's a bit
of work to get started (certainly a lot compared to Arduino, which abstracts
away even things that shouldn't be), but the code can be much more readable
and portable than the corresponding C.

For example, get this $2 board
[http://wiki.stm32duino.com/index.php?title=Blue_Pill](http://wiki.stm32duino.com/index.php?title=Blue_Pill)
and use this library (see the examples!)
[https://github.com/japaric/stm32f103xx-hal](https://github.com/japaric/stm32f103xx-hal)

There will probably eventually be AVR implementations of
[https://github.com/japaric/embedded-hal](https://github.com/japaric/embedded-hal),
so you could run mostly the same code on an Arduino board.
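
The portability here comes from writing drivers against traits rather than concrete peripherals. A rough, self-contained sketch of the idea, using a stand-in trait modeled loosely on embedded-hal's `OutputPin` (not the real crate, so it can run on a host machine):

```rust
// Stand-in trait, modeled loosely on embedded-hal's OutputPin;
// the real trait lives in the embedded-hal crate.
trait OutputPin {
    fn set_high(&mut self);
    fn set_low(&mut self);
}

// A driver written against the trait works on any chip whose HAL
// implements it: STM32, AVR, or a mock for host-side tests.
struct Blinker<P: OutputPin> {
    pin: P,
}

impl<P: OutputPin> Blinker<P> {
    fn toggle_n(&mut self, n: usize) {
        for i in 0..n {
            if i % 2 == 0 { self.pin.set_high() } else { self.pin.set_low() }
        }
    }
}

// Mock pin that records transitions, usable in a host-side test.
struct MockPin { states: Vec<bool> }

impl OutputPin for MockPin {
    fn set_high(&mut self) { self.states.push(true) }
    fn set_low(&mut self) { self.states.push(false) }
}

fn main() {
    let mut b = Blinker { pin: MockPin { states: vec![] } };
    b.toggle_n(4);
    assert_eq!(b.pin.states, vec![true, false, true, false]);
    println!("{:?}", b.pin.states);
}
```

Swapping `MockPin` for a real HAL's pin type is the only change needed to run the same driver on hardware, which is what makes the Rust code more portable than the corresponding C.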

------
galangalalgol
This LLVM should have NewGVN, which allows better auto-vectorization. I'm
curious to see if panic=abort allows some of the nbody implementations at the
Benchmarks Game to autovectorize. We might not have to wait on SIMD to beat
C++. The fastest nbody entry is Fortran, and it has no intrinsics, just
autovectorization.

------
bluejekyll
> cargo doc, ... . It’s getting a huge speed boost in this release, as now, it
> uses cargo check, rather than a full cargo build

This is great. I often run cargo doc after adding new library dependencies or
even for my own new interfaces to be able to click through the code. This is a
wonderful tool, especially while IDEs are still coming up to speed.

Are IDEs using cargo doc? or are they reimplementing the markdown processing?

~~~
steveklabnik
My understanding is that they use their own markdown processing, but I'm not
actually sure. The RLS presents the raw text as far as I know; I'm not sure
how the IntelliJ stuff works.

------
jitans
Instead of improving "use", they had to remove it and enable only namespace
renaming.

------
hn_person
Does async/await have a slated release version yet? Also, is there a central
resource to track progress on that topic?

~~~
steveklabnik
There's sorta two possibilities here; the procedural macro based version and
some sort of native keyword version. We don't have a target release number
yet, but sometime in the first three quarters of the year is a vague sort of
target.

The networking WG just started literally today, and will be shepherding the
work. I'm not yet sure where the best tracking issue is; if I find out soon
I'll leave another comment.

~~~
OtterCoder
For whatever my two cents are worth, I would choose a macro over a keyword
every day. Keyword clutter can be incredibly stifling once a language has any
history at all.

------
borplk
Does the LLVM version jump improve performance or doesn't really matter in
that area?

~~~
steveklabnik
As always, it depends on the exact code being compiled. I'm sure some stuff
got faster, and some got slower.

------
Mononokay
Link is dead.

~~~
Manishearth
It is now!

(we had some issues with GH pages)

~~~
kibwen
Or rather, it _isn't_ now. :)

~~~
andrewshadura
The link is dead, long live the link!

------
Grollicus
> Nested import groups

man, they work really hard to keep their language ugly & confusing

~~~
chrismorgan
Nested import groups will often be uglier, but there are definitely some cases
where they lead to _much_ prettier code with no loss of obviousness, and
nesting was a logical extension (in a “why don’t we have this?” kind of way,
in my opinion) of import grouping, making that feature more internally
consistent.
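
For reference, a small self-contained example of the nested import groups being discussed (the particular imports are arbitrary, chosen just to show the syntax):

```rust
// Nested import groups (new in Rust 1.25): braces can now appear at
// any level of a `use` tree, collapsing related imports into one
// statement instead of three.
use std::{
    cmp::min,
    collections::{BTreeMap, btree_map::Entry},
};

fn main() {
    let mut counts: BTreeMap<&str, u32> = BTreeMap::new();
    for &word in ["a", "b", "a"].iter() {
        match counts.entry(word) {
            Entry::Occupied(mut e) => *e.get_mut() += 1,
            Entry::Vacant(e) => { e.insert(1); }
        }
    }
    assert_eq!(min(counts["a"], counts["b"]), 1);
    println!("{:?}", counts);
}
```

Whether the one-statement form is prettier than three separate `use` lines is exactly the matter of taste being debated here.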

