Rust 1.56.0 and Rust 2021 (rust-lang.org)
389 points by steveklabnik 3 months ago | 104 comments

I'm eagerly awaiting the Cargo feature "named profiles"[1]. It's already merged, but not yet announced to be planned for any specific Cargo version. It will allow users to create custom profiles with different build parameters from the standard ones, so you can e.g. create a "profiling" profile which is based on the "release" profile, so that it has all the nice optimizations, but with debug information included, so that it works well with cargo-flamegraph:

  [profile.profiling]
  inherits = "release"
  debug = true
[1]: <https://github.com/rust-lang/cargo/pull/9943>

And then there's the less fireworky, but still appreciated, Iterator::map_while[2], which is going to be in Rust 1.57.

[2]: <https://github.com/rust-lang/rust/pull/89086>
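For reference, a quick sketch of what map_while does: it yields mapped items until the closure first returns None, then stops for good (compiles on 1.57+).

```rust
fn main() {
    let inputs = ["1", "2", "three", "4"];
    // map_while stops at the first element the closure maps to None,
    // so "4" is never reached once "three" fails to parse.
    let parsed: Vec<i32> = inputs
        .iter()
        .map_while(|s| s.parse::<i32>().ok())
        .collect();
    assert_eq!(parsed, vec![1, 2]);
}
```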

Unless something is special about this feature, new features generally aren't "planned" for specific versions. Since this landed two weeks ago, it should land in the beta version of Cargo once that branches for this release (I didn't check whether it has yet), which would make it stable in the next release of Rust, 1.57.0.

BTW, this article explains how Rust releases work:


It's a release train, similar to that used by Chromium and Firefox.

Edition releases (e.g. Rust 2021) are reserved for breaking changes only, and to retain Rust's stability promise, are opt-in.

Thanks for the clarification. I just went by what's specified in the sidebar on GitHub in the "Milestone" section.

Any time. Cargo doesn't use the Milestone feature of GitHub at all, so that's why that's empty :)

I love the concept of editions. I haven't seen anything like it in other languages. Having the ability to have one project made up of crates of many different editions is brilliant. It makes breaking language changes that would otherwise be a nonstarter become minor and easy to manage. Love it

While there's nothing exactly like it yet, we did consider the ways that many similar systems work. There's a lot of similarities, even though I do think Editions end up being meaningfully distinct.

As an example, here's a comment of mine on the original RFC (which was called "epochs"): https://github.com/rust-lang/rfcs/pull/2052#issuecomment-315... (the part with "Some language development comparisons")

> I think the main problem here is semver.

Maybe it’s ironic that semver (machine-readable versioning) ended up being a liability due to interpretation by people (i.e. we can’t release a 2.0 because it would “send the wrong message”).

While semver was intended as machine-readable versioning, and still mostly is, the fact that semver doesn't define what an "API" is means that it's still defined by humans at the end of the day.

I think I'm in agreement :)

Of course there's always Hyrum's Law[1], which can be summed up as

> All observable behaviors of your system will be depended on by somebody.

Even minor bugfixes will at some point invariably break somebody who depended upon the broken behavior. So semver really boils down to a human assessment of whether any potential breakage is incidental or intended.

[1]: https://www.hyrumslaw.com/

Hyrum's Law reminds me of Frank Borman's "A superior pilot uses his superior judgement to avoid situations which require the use of his superior skill". On the user side Hyrum's law is a situation where you're going to need superior skill to fix it. You should instead have used superior judgement to avoid this situation altogether.

As a library author I don't feel there's any value in considering Hyrum's Law. I can't help those poor fools, for all I know they're relying on me not updating the documentation to warn them they shouldn't rely on undocumented behaviour I'm about to change... Rust provides a pretty clear line in the sand on API changes we can use to choose semver policy. If your program has a proc macro to copy-paste sections of my source code into yours so you can access my non-public functions, I can't see that from where I am and you're lucky it ever worked, I am under no obligation to ensure it magically stays working in my next bugfix release.

And I agree with you on the solution: version the language (or whatever other more conceptual thing) by year and the program by semver. It’s the only thing that makes sense given the constraints.

> I haven't seen anything like it in other languages

Is that different from compiling one lib with --std=c++11 and another with --std=c++17?

Aside from the header issue, it doesn't allow for or support backwards-incompatible changes.

Because editions are opt-in (and library-level source metadata) the language itself can be modified in non-backwards-compatible ways.

So for instance a C++ with editions could make ctors `explicit` by default, or it could entirely change the automatic member generation (by removing it for instance). As long as the ABI and API remain compatible, that's fine.

It’s probably worth noting that the backwards-compatibility-breaking changes, I think, only apply to the language and not the standard library. I’m unable to look up the details right now, but I believe there’s a trait function defined in stdlib which is deprecated and superseded, but can’t be removed even as part of an edition, or it would break older code.

That is correct, editions are mostly about language-level syntactic changes, the APIs have to be compatible between editions. The only place where that isn't the case is the standard prelude (the "builtins" you don't have to import).

The ability to replace the prelude does mean that you can imagine if somebody invents a much better thing than, say, Rust's iterators Rust 2050 can have a prelude that brings in std::better::Iterator instead of std::iter::Iterator and then most coders will end up using the new better iterators, just because that's the kind you get out of the box.

Anybody who literally refers to std::iter::Iterator gets the old ones of course as does any library code from prior editions, but the documentation could lead those few people in the right direction. And presumably std::better::Iterator politely implements std::iter::IntoIterator because why not.

I would be interested to understand if they're allowed to replace the macros. The standard macros aren't actually from the prelude, but instead if you aren't no_std you get all the standard macros anyway. Are they allowed to change those in a future edition? Or not?

Personally I'd kinda like it if preludes were decoupled from editions in some way. Like, fine to have a default per-edition but there are times I would really like to have a prelude without all the `impl<T> X for T`s included.

You can kinda do this now with `#[no_implicit_prelude]` I think but it has somewhat odd semantics. It applies to all submodules, unlike most attributes, and then if you define your own prelude you need to use it in every submodule because they won't all have their own.
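A minimal sketch of those semantics (module and path names here are made up). In the 2018 and later editions, absolute `::std` paths still resolve inside a `#[no_implicit_prelude]` module, but every prelude item has to be imported by hand:

```rust
// #[no_implicit_prelude] removes the std and macro preludes for this
// module and, unusually for an attribute, for all of its submodules too.
#[no_implicit_prelude]
mod bare {
    // Even Vec has to be imported explicitly via an absolute path.
    use ::std::vec::Vec;

    pub fn empty() -> Vec<u8> {
        Vec::new()
    }

    pub mod inner {
        // The submodule also lost the prelude, so it needs its own imports.
        use ::std::string::String;

        pub fn greeting() -> String {
            let mut s = String::new();
            s.push_str("hi");
            s
        }
    }
}

fn main() {
    assert_eq!(bare::empty().len(), 0);
    assert_eq!(bare::inner::greeting(), "hi");
}
```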

If it's gonna have global effect I think it'd be better if it was:

    mod my_prelude {
        use std::whatever::*;
    }

and then my_prelude would be included in all submodules by default.

There was a proposal for that, but it wasn't accepted https://github.com/rust-lang/rfcs/pull/890

Hm. Not really fond of the way that RFC was closed tbh. I feel like far more dubious suggestions sit open for ages and the main argument against this one is that it had narrow use cases but was otherwise uncontroversial. Like, there's a fine line between a feature request is quiet because no one really wants it vs. because it's just kind of an easy small win that's not even worth bikeshedding.

Meanwhile, the thing that is in the language is.. weirder and arguably worse.

If you feel like it, maybe open a thread on internals.rust-lang.org? This issue can still be debated..

Also, it was closed in 2015; it was a different time back then. I don't think that RFCs from 2015 should be kept open just because. Arguably, Rust shouldn't have 88 RFCs open (as of now), either.

Seems like it ought to now be possible to rename std::iter::Iterator to std::iter::OldIterator, add std::iter::NewIterator, and then vary which iterator std::iter::Iterator points to based on edition.

Maybe that would be considered too confusing though.

Yes, in that in C++ it would break if one library used a modern language feature as part of its public header files.

Surely the difference there is that Rust doesn't have header files.

Not necessarily, you could imagine a system where the header file specifies its edition.

A header file can check #if __STDC_VERSION__ or __cplusplus versions to make some code conditionally available for, say, C11 or C++11.

  #if __STDC_VERSION__ >= 201112L
    // C11 feature
  #endif

  #if __cplusplus >= 201103L
    // C++11 feature
  #endif

But don't header files generally get concatenated together due to the way "dumb" textual includes work?

Can't you compile libraries to object files independently with whatever version of c++ they require and then link them together?

Yes. If the only value the library brings to you is that it generates a particular object file, you can divorce that object file from your chosen language version entirely.

If you have libraries that you don't care if they're C or Pascal or a Lisp implemented in raw machine code, then this works just fine and you needn't care about Rust's editions feature. Rust will also cheerfully consume these libraries although of course everything about them is by definition Unsafe in Rust terms.

But most people want their C++ libraries to deliver a bit more than "Here is some machine code, and here are some symbol names that map to the machine code or to raw binary data". Like maybe they want to be able to implement an Interface the library describes, or they want to use a Concept the library names. You can't do those things using language-independent object files.

Rust library A, from edition 2021 can implement a Trait from library B (edition 2018) on its thin wrapper of a type from library C (edition 2015) and then you can consume the resulting type, with its trait implementation, from your Rust 2018 program.
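That works because the edition is just per-crate metadata in Cargo.toml; each crate in the dependency graph declares its own (names here are illustrative):

```toml
[package]
name = "library-a"
version = "0.1.0"
edition = "2021"     # applies only to this crate's own source

[dependencies]
library-b = "1.0"    # may itself be edition 2018 or 2015; linking is unaffected
```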

The problem is a C++ library's headers must be compiled with the settings, context, and flags of every downstream thing that depends on them, rather than separately.

Which isn't such a big problem if your C++ library exposes a minimal C-like API from its headers, with most of the meat of the library hidden away in source files, but might be a very big problem if your C++ library is a miserable little pile of ~~secrets~~ templates, a la boost.

You'd still need the header that describes the public interface of such a separately compiled object file.

Also, this doesn't work with templates, and modern idiomatic C++ tends to be template-heavy.

Yes, because C versions are frozen in time, and editions aren't. Today Rust added brand new features to Rust 2015 and Rust 2018, and will continue to expand them forever (every new feature lands in all editions whenever possible).

Rust editions are closer to source code parsing modes. More like enabling trigraphs in C or "use strict" in JS.

Additionally, textual header inclusion in C makes mixing versions tricky. Rust has properly isolated crates, and tracks edition per AST node (so that even cross-crate macros work correctly with mixed editions).

C compilers usually backport features to older standards as well (I know, they are breaking the standard). One example is C++-style comments, which are in the standard only from C99, but basically every compiler supports them even in ANSI/C90 mode.

By the way, the difference is that Rust is not a standard, thus it is easier to evolve (the process is much shorter). On the other hand, the fact that a language changes slowly is a good thing in a way: it means that you don't have to keep changing the way you do things and updating older projects.

That's important to everyone who has to maintain code for decades. And every serious software project (not hobby stuff) really does stay in production for decades. I don't use Rust, or even C++, for that reason.

> On the other hand, the fact that a language changes slowly is a good thing in a way: it means that you don't have to keep changing the way you do things and updating older projects.

Isn’t the point of Rust editions that this is also true for Rust? Don’t want to update to a new edition? Then… don’t. The old ones are maintained.

> By the way, the difference is that Rust is not a standard, thus it is easier to evolve (the process is much shorter).

Another thing is ABI.

C and C++ are ABI-stable, which means that many historic mistakes (intmax_t, std::regex, polymorphic allocators) are impossible to fix.

Rust only promises source compatibility, not ABI compatibility, so it has a lot more freedom to tweak its design.


If the application is using C++17, wouldn't the headers of the C++11 lib then be compiled as C++17, potentially breaking things?

PS: The other way around is more obviously broken, with the C++17 lib headers getting compiled using C++11.

Or you make use of the preprocessor or if constexpr, and then have the specific code for each language version.

It's certainly true that if C++ library maintainers are up for the ever-growing maintenance burden, they can all individually deliver the same promise Rust gets out of the box.

This is in practice what the maintainers of the three standard libraries have to do, perhaps one or more of them will offer their opinion about that experience?

The maintainers would have to do it for every new language version that breaks them, though. The edition system keeps old stuff working without any effort.

I don't want to start this discussion thread yet again, but I am a firm believer that edition system only appears to work right now because:

1 - Rust is still quite young and doesn't have 30 years of accumulated editions

2 - There is still only one major Rust compiler

3 - Editions are designed only to work when compiling the whole project, including 3rd party dependencies, from source code within a single build

4 - So far the editions don't have semantic breaking changes across editions, where behaviour changes across the edition border

5 - There is no plan to ever have editions work across ABIs

So "The edition system keeps old stuff working without any effort." might not be true when Rust achieves an adoption scale similar to C and C++, in about 20 years, with several accumulated editions, and a couple of compilers in use.

I might be proven wrong, but that is how I see it today.

> 1 - Rust is still quite young and doesn't have 30 years of accumulated editions

Rust 1.0 was six years ago. When do we start the clock on six years of C++ evolving while also having "old stuff working without any effort" ?

C++ 98 to C++ 03 was five years, and that introduced almost no features at all. C++ 11 was eight years later, but it's notoriously incompatible with prior versions of C++. C++ 14 was only three years later; C++ 17 has breaking changes, as does C++ 20...

I think C++ 20 should have taken Epochs (yes even at considerable cost to other new features like Concepts) for this reason. I think ten years from now even if Rust has found it can't achieve everything it wants to via editions, this feature will be generally considered to be a good idea, like generics or string literals. Something you need a specific rationale for not including in your general purpose language.

I might be proven wrong too about how far this can go, but I feel like the Rust 2018 and 2021 editions already prove the value of the idea.

For starters epochs could never work in ecosystems that value binary libraries, regardless of possible ABI issues, which is also a reason why Apple went to such an effort to define an ABI alongside language versions.

Let's say you have a noexcept function compiled in C++20 that calls a throw() function, compiled in C++03, which actually ends up throwing, linked together.

What is the runtime supposed to do now?

To which semantics does it follow now?

Call std::unexpected() as it is supposed to do pre-C++14 or call std::terminate() as it should do in C++20?

What about the user defined handler that was configured for such scenario? Which of them gets called, or do both get called, in which order?

Maybe I am making it more complex than it is, but I see several scenarios, from my point of view, where multiple compilers, binary libraries and semantic changes come into play, and editions turn into just another way to define language versions, because they don't cover all possible use cases for how a language might evolve.

Anyway, history will tell how things work out in the end.

> For starters epochs could never work in ecosystems that value binary libraries

Not solving everybody's problem is not the same thing as not solving anybody's problem.

You present subtle considerations which (setting aside the fact that they ask what Epochs should have done in C++ 20, despite Epochs not existing in C++ 20) might have taken up committee time if Epochs had advanced, and which now can only be answered in the vaguest way: the committee should definitely have decided on a coherent strategy for resolving such problems.

It is true that for any conceivable change, Hyrum's Law applies, and so it would apply to Epochs just as it does for Rust's editions. Mara's "competition" for writing Rust that gives different results when re-formatted provides examples of the sort of stuff Hyrum's Law invariably breaks. Nobody should be writing non-toy programs like that and in Rust it seems like nobody is. It's a sad fact that too often C++ programs are written in a very fragile way and many of them can and do break with the least provocation.

You can't magically rewrite all those programs, but you can make fragile techniques like SFINAE unattractive to propagate into new programs, and I argue Epochs would have allowed C++ to begin the much harder part of Stroustrup's ever-evolving quest to ship a good programming language - not adding yet more kitchen sinks but removing parts of the language that in hindsight were a bad idea and revisiting old design decisions in the light of what has been learned.

That is the thing with ISO/ECMA languages: it has to work in every possible scenario where there is a certified compiler; if not, then as you know, we go down the path of nasal demons.

For example, the Epoch semantics in xlCC would have to behave as in VC++, to avoid too many nasty surprises in cross platform code that is in production for decades.

First, it seems weird to suggest that "nasal demons" are a particular fear of WG21 at this point. As a reminder, when C++ got an Optional type, std::optional, the committee insisted that it have Undefined Behaviour if you for any reason don't check whether it is empty before using the contents.

We're not talking about 1998 here. This is in 2017. By this point plenty of people have experience already using languages with Optional types that do what you actually want here, but the C++ committee decided no, C++ is the footgun language, it's what we're known for.

But OK, in an alternate universe where the committee doesn't introduce footguns on purpose to prove their Real Programmer credibility, and where we do get Epochs, what should happen for fraught situations so as to deliver consistency?

The committee should decide on a rule. I know that's often portrayed as too difficult for such problems, but it won't get easier in subsequent versions. That's why I think it would have made sense to delay long awaited work like Concepts if that was the only way to land Epochs. Concepts is already too late for the main act and being a little later barely makes a difference. Whereas Epochs gets harder to do every version, so the sooner the better.

You might well be right, but if the scheme lasts 20 years (and makes it easier to maintain old code during that period), I'd say it's a very good run by PL standards!

I don't know what guarantees your typical C++ compiler gives you that those can link together?

Regardless, at the very least, you would need to write the headers to be interoperable.

> I haven't seen anything like it in other languages

Perl can do it not only at module level, but at block level within a single source file.

Lots of languages have opt-in features/extensions/pragmas, which are more granular but somewhat similar. Like Python's __future__, Haskell's #LANGUAGE, and Rust's #![feature].

Adding 20 lines of #![feature(...)] to every project would get old quick.

Although if they hit Rust version 2.XX.Y this might add some (probably brief) confusion.

I remember reading somewhere that Rust will never have a v2, but now I can't find a source.

There are currently no plans, but I don’t think it’s a hard never. I’d probably be willing to bet it’ll be at least 10 years though.

ES(year) for JavaScript

A better JavaScript example would be strict mode, which is opt-in with "use strict" and changes quite a bit of the semantics of the code while being completely interoperable with non-strict JS code.

Yeah, and the "use" statements for this came from Perl which is probably the originator of this pattern here.

Newer versions of JavaScript do not break compatibility with older versions, whereas different editions of Rust break compatibility in various ways while still allowing interoperability between libraries written in different editions. That makes the two approaches very different.

Solidity (a PL in which Ethereum's so called "smart contracts"[1] are developed) has version pragmas [2] at the top of each file.

That's necessary b/c there are lots of breaking changes between language/compiler versions[3].


[1] they are more like DB triggers than contracts

[2] https://docs.soliditylang.org/en/develop/layout-of-source-fi...

[3] https://docs.soliditylang.org/en/develop/050-breaking-change...

From one of your links:

> It just instructs the compiler to check whether its version matches the one required by the pragma. If it does not match, the compiler issues an error.

This is very different from Rust where your Rust 1.56.0 compiler will cheerfully compile Rust 2015, Rust 2018 and Rust 2021 code, into the same program even. Rust editions are not about the compiler version, they're about the language and every Rust compiler will compile every language edition it knows about.

Version pragmas themselves aren't new -- even Perl has them[1]. What makes Rust's implementation nice (and somewhat unique) is its commitment to backwards compatibility while allowing codebases to incrementally move towards a new edition.

[1]: https://perldoc.perl.org/functions/use

I'm so excited to finally have disjoint captures in closures; many papercuts on that one when I worked heavily with Amethyst a while back as there were a few closure-based APIs.

Thanks to all the contributors for getting us to the 2021 edition!

That's the stuff Rust needs more of!

In 2017 or so when Rust editions were invented, the notion was that editions were primarily a "rallying point" (a way to make Rust's continuous release process feel more like Java's or C++'s), and only secondarily an (opt-in) change to the language itself.

See for example the summary at the top of https://rust-lang.github.io/rfcs/2052-epochs.html

It seems to me that that aspect has now been dropped: TFA simply says "Editions are a mechanism for opt-in changes that may otherwise pose backwards compatibility risk."

I'm not sure this change of direction has been officially announced anywhere, though.

(I think this is a good change: last time I saw cases of people asking questions like "How do I do foo in Rust 2018", and getting a mix of answers like "Nothing in Rust 2018 affects foo" and "Since Rust 1.20 you've been able to use std::foo::bar to do that".)

It created confusion, because:

• lumping marketing of cool features with an announcement of a few incompatible changes was easily misinterpreted as all new features requiring a new incompatible edition (while in fact almost all marketed features were already available in the old edition).

• celebration of features developed in recent years under one big event sounded like all these features were brand new and released at once.

For people who didn't follow Rust development, the announcement sounded like Rust suddenly made a lot of incompatible changes.

The 2018 edition was widely considered to be overly pressured (contrary to the usual Rust approach of shipping it when it’s ready), which led to burnt-out compiler devs and a bunch of people with unmet expectations. I think the decision to decouple big-bang features from editions was made in response to that.

It's hard to tell how much of the trouble with 2018 was due to the concept of having a major release with several things being updated together, and how much was due to the mistake of setting too early a deadline (by pre-announcing that it would be "Rust 2018" rather than "Rust 2019").

2021 was explicitly a "nothing new" edition, at least.

Interesting. It mentions stuff like the library and documentation.

When IntoIterator for arrays first stabilized (with the hack hiding into_iter() for backwards compatibility) I considered making all the documentation changes to use natural arrays in standard library examples, which of course would now be the obvious way to write it whereas previously it was ugly.

I didn't do it (and now I have a job keeping me too busy) and I haven't gone back to look at examples to see if all/ most / some were updated to use arrays in the now natural way.

It was discussed as part of the 2021 Edition RFC.

Really excited for the reserved prefixes. Lots of possibilities there. I'm really rooting for f-strings to streamline format! calls.

Finally a nice way to init hash maps and all other kinds of collections! The former was really unattractive to Rust newbies!

Love the 2021 edition changes and also the fact that they are so minor!

Woot woot, transmute is const now

Curious what the applications are. Collections? Would you write a lot of "unsafe" const code, given the chance?

One thing that's come up a couple of times is... well, what transmute is, that is, two types that have the same representation, and you want to cast between them. For example, the "try Rust out in the Linux kernel" code has a function that turns a byte slice into a &CStr via a transmute. Those kinds of conversions can be useful inside of const fns.

Hi Steve,

I came across this post which presents an example using Rust's transmute and its incorrect behavior when the alignment is different. https://andrewkelley.me/post/unsafe-zig-safer-than-unsafe-ru...

Is this issue still present? I tried to figure it out the other day using latest rust's nightly but the llvm ir output has so much going on that I didn't _really_ understand what was going on

Per transmute's docs @ https://doc.rust-lang.org/std/mem/fn.transmute.html:

> Because transmute is a by-value operation, alignment of the transmuted values themselves is not a concern. As with any other function, the compiler already ensures both T and U are properly aligned. However, when transmuting values that point elsewhere (such as pointers, references, boxes…), the caller has to ensure proper alignment of the pointed-to values.

The post you've linked transmutes a reference, and as the caller fails to ensure proper alignment, it risks invoking undefined behavior - I would consider the code buggy. The easiest way to prove it's broken would be to create a reference that's more likely to be unaligned (&array[1] instead of &array[0]?) and run the code on a less misalignment-tolerant platform (ARM?).

Here are some 100% sound alternatives using bytemuck (no unsafe required!) and core::ptr::{read,write}_unaligned (unsafe required):
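For instance, a hedged sketch of the read_unaligned route (the struct and its field layout here are illustrative, and little-endian byte order is assumed):

```rust
use core::ptr;

// repr(C) gives Foo a defined layout: a at offset 0, b at offset 4.
#[derive(Debug, PartialEq)]
#[repr(C)]
struct Foo {
    a: i32,
    b: u8,
}

fn main() {
    // Eight raw bytes, e.g. received over the wire (little-endian assumed).
    let bytes: [u8; 8] = [1, 0, 0, 0, 42, 0, 0, 0];
    // read_unaligned copies size_of::<Foo>() bytes out without requiring
    // the source pointer to be aligned for Foo; the caller still has to
    // guarantee the bytes form a valid Foo, which holds for i32 + u8.
    let foo: Foo = unsafe { ptr::read_unaligned(bytes.as_ptr().cast::<Foo>()) };
    assert_eq!(foo, Foo { a: 1, b: 42 });
}
```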


I am unsure about the specific details about IR that's emitted (though at first glance it seems we do emit the alignment attribute now, I think?), but transmute is, as the documentation says, "incredibly" unsafe:

> Because transmute is a by-value operation, alignment of the transmuted values themselves is not a concern. As with any other function, the compiler already ensures both T and U are properly aligned. However, when transmuting values that point elsewhere (such as pointers, references, boxes…), the caller has to ensure proper alignment of the pointed-to values.

This code is doing the latter incorrectly, and therefore invokes UB, as far as I can tell. To be honest, I don't use transmute very often and so I don't have every single last corner case about it memorized.

It's not so much an "issue" as it is "Here's an API that's extremely sharp in Rust, and a similar, but less sharp API in Zig."

That discussion is a bit strange. In that example Foo is implicitly #[repr(Rust)] meaning it has an undefined layout. In particular, a,b is unordered, and even if you don't care which order you get, there is the question of padding (a reasonable compiler will tightly pack a,b but that's not required).

For this reason, we never reach the question of alignment because we know neither i32, u8, nor any other type has the same layout as Foo (undefined layout).

It is certainly true that unsafe Rust is veeeeery unsafe, as perhaps evidenced by me being the first person to point out the repr issue. On the other hand this scheme has a lot of advantages for writing safe Rust.

I will shamefully admit to using transmute in very sinful ways, such as having enums that are repr(u16) or something like that at multiple numeric ranges such as:

    #[repr(u16)]
    enum Foo { A = 0, B }

    #[repr(u16)]
    enum Bar { A = 3200, B }

    struct FooOrBar(u16);

And then proceeding to violate all of the unwritten rules of Rust by wrangling casts across these types like a goddamn wizard

I'm not proud of this...

This sounds like a great example of where you might want a union (https://doc.rust-lang.org/1.56.0/reference/items/unions.html)
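Roughly what that could look like (a sketch with invented names; note that reading from a union is still unsafe, so the tag-range check stays on the caller):

```rust
#![allow(dead_code)] // silence unused-variant lints in this sketch

#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(u16)]
enum Foo { A = 0, B }

#[derive(Clone, Copy, Debug, PartialEq)]
#[repr(u16)]
enum Bar { A = 3200, B }

// The two discriminant ranges share one u16-sized storage; a union
// makes the "either Foo or Bar" intent explicit instead of raw casts.
#[repr(C)]
union FooOrBar {
    foo: Foo,
    bar: Bar,
    raw: u16,
}

fn main() {
    let v = FooOrBar { bar: Bar::B };
    // Reading any union field is unsafe: the caller asserts which
    // interpretation is actually stored.
    let raw = unsafe { v.raw };
    assert_eq!(raw, 3201);
    // Since 3201 is not a valid Foo discriminant, reading v.foo here
    // would be undefined behavior.
}
```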

Ooh I forgot the language had those!

I still feel like this trick has its place though; as an example, take a look at where I stole this trick from, by matklad [0], and tell me what you think :)

[0] https://github.com/rust-analyzer/rowan/blob/d2c7843858da9d9e...

I think that this code is acceptable as long as you're careful. And I wish Rust had a `#[derive(TryFrom)]` sort of thing for deriving conversions from the declared primitive type to the enum.

Happy that the const subset of the language grows

It's useful for handling data that came from outside Rust.

If you have a byte array that you've received over a network connection that represents an array of floats, or you need to convert a 32bit RGBA pixel buffer that you got from some clang ffi binding to a byte pixel buffer without having to split/copy to a new vector.
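A copy-based sketch of the float case, done safely without transmute (assuming the wire format is little-endian):

```rust
use std::convert::TryInto;

fn main() {
    // Eight bytes received from outside Rust, encoding two f32 values
    // in little-endian order: 1.0 and 2.0.
    let bytes: [u8; 8] = [0, 0, 128, 63, 0, 0, 0, 64];
    let floats: Vec<f32> = bytes
        .chunks_exact(4)
        .map(|chunk| f32::from_le_bytes(chunk.try_into().unwrap()))
        .collect();
    assert_eq!(floats, vec![1.0, 2.0]);
}
```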

I can't think of an example now, but it has come up before when working on collections and other things.

I'm confused about the bullet point "IntoIterator for arrays". I thought this was added already in Rust 1.53.0. Did something else happen?

See https://blog.rust-lang.org/2021/06/17/Rust-1.53.0.html#whats...

There's an explanation at the bottom of the section; this landed as a hack in previous editions, but works via the normal mechanisms in Rust 2021.

"Since this special case for .into_iter() is only required to avoid breaking existing code, it is removed in the new edition, Rust 2021, which will be released later this year."

As a user, you're right that there's not really an external-facing change here.

To follow Steve's comment:

> Until Rust 1.53, only references to arrays implement IntoIterator. This means you can iterate over &[1, 2, 3] and &mut [1, 2, 3], but not over [1, 2, 3] directly.

...now, you can also iterate over [1, 2, 3] etc.


Right, but the point is that you can do that as of 1.53.0, you didn't have to wait until this release and the edition to do it.

You know all this, but while some obvious things work in 1.53.0 one important thing causes scary warnings because it is shadowed by the back compat hack.

"for x in myArray" works fine, just like "for x in myVector", but whereas "myVector.into_iter().foo()" does what you expect, "myArray.into_iter().foo()" is actually giving foo an iterator over the references just as it would have in 2017, and now produces a warning about this into the bargain.

In Rust 2021 myArray.into_iter() does what a modern Rust programmer expects it to do, provide an iterator over myArray itself.

The warning does explain how you can get that iterator in 1.53.0 of course, but you need to write some ugly syntax whereas in Rust 2021 the obvious syntax just does what you expect as if arrays had always been IntoIterator.
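Concretely (the fully qualified call being the edition-independent spelling the warning points to; this compiles on the 2021 edition):

```rust
fn main() {
    let arr = [1, 2, 3];

    // For-loops iterate the array by value in every edition since 1.53:
    let mut sum = 0;
    for x in arr {
        sum += x;
    }
    assert_eq!(sum, 6);

    // In editions before 2021, arr.into_iter() resolves to
    // (&arr).into_iter() and yields &i32; this spelling yields
    // i32 by value in every edition:
    let doubled: Vec<i32> = IntoIterator::into_iter(arr).map(|x| x * 2).collect();
    assert_eq!(doubled, vec![2, 4, 6]);
}
```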

Finally here! Congratulations to everyone.

It's not written in the announcement, but I am under the impression that Rust 1.56 compiles a bit faster. Is that right?

Here's some compile benchmarks from the LLVM update which is the main reason for compile time changes:


Some things compile decently faster (~ 10%), some things compile a little faster or slower (~ +/- 3%), some things have a bigger perf hit but not as many as had a bigger perf gain (~ -10%).

So it's faster on average but the data is muddy enough that you probably wouldn't stick it front and center on your release notes.

Does anyone know what change was responsible for the big improvement from a few days ago?


It's less of a universal improvement than the pass manager, but for most of the benchmarks that it does impact it seems just as large, or larger.

From the graph, it points out this commit. And from the description, I think it's exactly that commit and not just one close to it. I heard it was a big improvement.


> Enable new pass manager with LLVM 13

> https://github.com/rust-lang/rust/pull/88243

I'm talking about the change on the 17th of October, only a couple of days ago, not the one from a month ago which I know (and mentioned) is the new pass manager. The change doesn't show up in all of the graphs, but for the cases where it does show, it's a similar size improvement to the new pass manager.

Maybe it was enabling PGO rather than any code change, I've heard it mentioned that happened recently.

I believe this is mostly due to the switch to LLVM 13[1].

[1]: https://twitter.com/ryan_levick/status/1443202538099073027

I believe the new pass manager isn’t due to be enabled by default until the next version (1.57)

Yeah the significant improvements from 13 will require that last I heard.

That seems to have been a mixed bag. But they also enabled PGO (or was it LTO?) and that was mentioned to be a bigger improvement.

PGO requires a runtime profile, so I doubt they've enabled that by default :-)

Rust has had LTO for quite a while, and it's normally a source of longer compilation times rather than shorter ones (since LTO in LLVM-world involves mashing all of the bitcode together and (re-)running a lot of expensive analyses to further optimize across translation unit boundaries).

OTOH they've been making continuous improvements to the incremental compilation mode since 1.51/2, so that's probably among the sources of improvements here.

They're referring to the use of PGO when building the compiler itself: https://github.com/rust-lang/rust/pull/88069

TIL. That's fantastic!

Wonderful, but still disappointed that easy strict-type-wrapping is not a thing yet.

There have been a few RFC trying to work on deriving a type from another, and while I agree it's much more complex than it sounds, I also find it's a huge missing point.

If I have to reimplement a wrapper myself for all Traits, I most likely won't bother, leading to less typesafety, leading to more bugs :(
