Rust 1.43 (rust-lang.org)
259 points by steveklabnik 44 days ago | 104 comments



Hey folks, this is the first release after I wrote https://words.steveklabnik.com/how-often-does-rust-change. We haven't changed any real policy here, but the bit saying

> This release is fairly minor. There are no new major features. We have some new stabilized APIs, some compiler performance improvements, and a small macro-related feature. See the detailed release notes to learn about other changes not covered by this post.

is an attempt by me to maybe address this. We've historically said similar-ish things, but I'm trying to be a bit more blunt about the magnitude of changes. Any feedback on this would be useful!


I think the key thing people worry about with regard to "churn" is "OMG, do I have to update my code again?". Explicitly calling out that this release doesn't really have any changes like that is good. I also think that's why ecosystem churn is in fact a big deal: people seem to feel pressured to keep up. Meanwhile, I'm still not really sure what the consensus is, if any, on which error handling library we're supposed to use... Anyway, it might also be worth calling out somewhere that no, you don't have to follow the ecosystem and the hip idioms super closely if you just want to get some work done (and to the extent that's not true, making it so).


Just to be clear, you never HAVE to update your code. Rust 1.0 code still works just fine.

The consensus on error handling is that the limitations of the std Error trait have now been sufficiently fixed that it's usable without anything more. But you can use a crate (like `thiserror`) to reduce boilerplate if you really want to.
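
A rough sketch of the thiserror version (the error type and variants here are made up):

    use thiserror::Error;

    #[derive(Error, Debug)]
    enum FetchError {
        // #[error(...)] generates the Display impl, #[from] the From conversion
        #[error("I/O failure")]
        Io(#[from] std::io::Error),
        #[error("bad response: {0}")]
        BadResponse(String),
    }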


> Just to be clear, you never HAVE to update your code. Rust 1.0 code still works just fine.

You have to update your dependencies though because every now and then, changes lead to breaking builds. I have a codebase from 2015, last touched in 2016, and many dependencies failed to compile, some because of changes in the language (those changes are only done to fix bugs but they still cause churn to users), others because the openssl wrapper doesn't support newer natively installed versions of openssl. It took me roughly an hour to fix all the issues and it's only a few hundred lines.

Admittedly this is much, much less effort than keeping up with whatever the currently hyped error handling library is. And C++ projects often need effort to work on newer compilers as well, because they invoke UB and a change in the optimizer leads to segfaults. Seen it with my own eyes.


Most (all?) of the "dang, we need to break something because we made some stupid design mistake wrt. soundness" kind of breaking changes should be done by now, I think.

So I think that when someone builds a code base from 2020 in 2024, this shouldn't be a problem.

Also, sadly, Rust has not yet completely escaped from the C/C++ UB optimization hell ;-). There are some bugs caused by the interaction of Rust and LLVM's UB-based optimizations. Like, there was a problem with some float casts being UB (when they should not have been).

But we are getting there.


>Like there was a problem with some float casts being UB (but should not have been).

And one of those is just about to get fixed, it looks like =]

https://github.com/rust-lang/rust/pull/71269


I've already tested my lewton project with this change, and there isn't the tiniest bit of a slowdown.


Tbh I found OpenSSL to be its own world of pain. And that was from using it in C. ;)


For someone new to Rust: what past changes in the language broke the dependencies you used? Were they using the nightly branch?


I ran into two issues, one in rustc-serialize broken by https://github.com/rust-lang/rust/issues/35203 (fixable by cargo update -p rustc-serialize) and one in url broken by the move to NLL on the 2015 edition. The second is an important bugfix in the language, but it still broke my build :). Not sure how I fixed it but it's now fixed as well. Additional changes were required due to openssl.

My own code has compiled just fine except for a ton of warnings (most about the try! -> ? change) and the needed API updates for dependencies (2 changed lines).
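
(For anyone newer to Rust, that warning just asks you to switch from the old macro to the ? operator; a 2015-edition sketch with a made-up file name:)

    use std::fs::File;
    use std::io;

    // old style (what the deprecation warnings point at):
    fn open_old() -> io::Result<File> {
        let f = try!(File::open("foo.txt"));
        Ok(f)
    }

    // what the warning asks you to write instead:
    fn open_new() -> io::Result<File> {
        let f = File::open("foo.txt")?;
        Ok(f)
    }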

Most of the effort went into actually updating as little as possible. If you update everything, you fix all issues resolvable by updates, but you also have to adjust to every API change :).


e: this is wrong

Dependencies can set lints to deny. When lints change, they might not compile anymore.

(I think denying lints is a bad idea, just like -Werror)


Doesn't cargo's behavior of capping lints prevent this?


Yes, you're right. I thought I had seen this in dependencies, but I must have been compiling something directly.


Also, isn't the point of tools like rustup that you don't need to make sure your code is constantly up-to-date with respect to the compiler? Like, you can keep using rustc 1.0 on code written for Rust 1.0, right?


Thanks :)

The current error handling consensus seems to be "anyhow for applications, either no crate or thiserror for libraries." Obviously this is not uniform, but the discussion + download counts point to this, imho.


To clarify for others: "anyhow" and "thiserror" are not placeholder names in that sentence. They refer to:

https://github.com/dtolnay/anyhow https://github.com/dtolnay/thiserror
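
For a sense of the split: anyhow gives an application a single catch-all error type, roughly like this (the file name is just an example):

    use anyhow::{Context, Result};

    fn main() -> Result<()> {
        // anyhow::Result<T> can hold any error type; .context() adds a message
        let config = std::fs::read_to_string("config.toml")
            .context("failed to read config.toml")?;
        println!("{} bytes of config", config.len());
        Ok(())
    }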


Maybe we need another number like 1.42.2? I have no idea if this can be relevant for languages.


We already have minor releases, but they are exclusively for critical bugfixes. Everything else either rides the train (up to 12 weeks) or gets backported to beta (up to 6 weeks before hitting stable).


Thank you. I wonder how long this will hold though? I would not like to see it get to a point where the language is fine but people aren't really sure what else to do and just make changes for the hell of it.

In other words I think there needs to be a long term standardisation plan to keep the language stable, but for now this is a good step.


To expand on Steve's point, code written in 1.0 style should continue to work indefinitely, modulo soundness fixes, lints and plain old bugs.

You should only need to update at your own pace or because your dependencies force you to, and they shouldn't do so often if they care about supporting as many users as possible.

Big changes, like async/await sugar, impl Trait, or the ? operator, are uncommon, and they either make your code that much nicer to read or allow new constructs that were previously impossible.


> I would not like to see it get to a point where the language is fine but people aren't really sure what else to do and just make changes for the hell of it.

We never make changes for the hell of it.

But also, all changes are purely additive at this point. The language is stable. It has been for years.


In that case I think keeping to very small additive changes is good. Thanks for clearing that up.


If you are like me and stick with Rust for about a year or so, you will have a hard time going back to low-level languages such as C. When you have your errors caught at compile time - it's simply beautiful! Forget about explicit memory freeing, and don't worry about memory management; it is really predictable. Code will always have the same precise meaning across all platforms. Hell yeah!


On the bright side, I definitely write safer C code now after wrapping my head around Rust. I'd much rather have the compiler tracking lifetimes than my own imagination though.


I finally "understood" C++ after writing a bit of Rust and studying the ownership principle.


It took me a little while to get my system down, but once I'd figured out how to build and call into Rust libraries using CMake and the C ABI, I do my best to write as little raw C++ as I can.

Using the Rust test framework, limiting direct memory management to the small number of unsafe functions I need to pass raw data/pointers across that FFI boundary, being able to count on the compiler to check if I'm doing something dumb, and being able to easily add other dependencies into that Rust .so/.a file that Cargo produces have all been huge productivity boosts compared to pulling out valgrind/gdb whenever I run into a segfault because I've done something stupid.


After years of watching Rust excitedly, I finally have an opportunity to use it this year, and I'm loving it. Like you say, going back to C feels weird and cumbersome (ugh, I have to free this in _how_ many places?).


same here, I will never write C or C++ again


Minor or not, this:

> You can now use associated constants on floats and integers directly, rather than having to import the module. That is, you can now write u32::MAX or f32::NAN with no use std::u32; or use std::f32;.

is neat! Awesome! Thanks!
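
Concretely, the change is just this (a tiny sketch):

    fn main() {
        // before 1.43: the constants lived in modules
        let _old = (std::u32::MAX, std::f32::NAN);
        // since 1.43: associated constants on the primitive types themselves
        let _new = (u32::MAX, f32::NAN);
    }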


That's my RFC! Yes, it's great to be able to address such a curious (albeit minor) historical wart from Rust 1.0. :) https://github.com/rust-lang/rfcs/blob/master/text/2700-asso...


I always check these announcements out wondering when Rocket will compile on stable.

https://github.com/SergioBenitez/Rocket/issues/19

Anyone have any background on what's holding back proc_macro_hygiene and why that's even a requirement of Rocket?



Very insightful, thank you. Is it not possible for Rocket to rewrite some of those if it's going to take a while before they're ready in stable Rust? Another thought I had is... does Rust support conditional compilation? I haven't done enough Rust to know, but maybe they could do that for stable instead...


We do, and that’s part of what I meant in the comment by “dropped”; the first one can’t be worked around, but the others could, in theory.

That said I’m not an expert on Rocket and don’t know what Sergio’s plan is exactly.


Fair enough. Out of curiosity, are there other frameworks you have used or find interesting?


If I were to make a service today, I would probably start with Actix, or maybe Warp. I don't think that this problem has really been solved yet. I've taken a few cracks at it myself over the years and am still not 100% happy, I dunno.

The last time I wrote something I just used raw hyper. I didn't need complex routing though; if I did, that would have been an absolute pain. It was also a read-only service so I didn't need auth or a lot of other stuff that's needed in every real app.


I agree. The more I do web work and look into web frameworks, the more it feels like the ones I like the most take advantage of some form of syntactic sugar in their respective languages, such as annotations in C#, metaprogramming in Kotlin/Ktor, decorators in Python, and so on.


The added cargo env variable pointing to the path of generated binaries is a nice quality of life fix. I think that was one of the first annoyances I ran into with cargo and binaries. It was minor but was like a little splinter that stood out from cargo's otherwise nicely integrated build and test capabilities.
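
In an integration test it looks roughly like this (the `mytool` binary name is made up):

    // tests/cli.rs, assuming a [[bin]] target named "mytool" in the same package
    use std::process::Command;

    #[test]
    fn help_runs() {
        // Cargo sets CARGO_BIN_EXE_<name> when compiling integration tests (new in 1.43)
        let exe = env!("CARGO_BIN_EXE_mytool");
        let output = Command::new(exe).arg("--help").output().unwrap();
        assert!(output.status.success());
    }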


I've been using https://docs.rs/assert_cmd/ for this. (Helps spawn the binary and assert output.)
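
Something like this, if I remember the API right (binary name made up):

    use assert_cmd::Command;

    #[test]
    fn help_runs() {
        // cargo_bin() locates the binary Cargo built for this package
        Command::cargo_bin("mytool")
            .unwrap()
            .arg("--help")
            .assert()
            .success();
    }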


Thanks for the call out! I'm excited by the new env var and need to consider its impact on `assert_cmd`.


The real problem though is that cargo doesn't play nice as part of a system where cargo doesn't own everything.

Cargo needs a lot more ways to be interrogated about the artifacts it produces and control over how they are produced.


Exactly this. It would be really great if, for example, when building Rust in Nix, Nix could own each of the artifacts, in order to share artifacts between tools that use the same revisions of dependencies, as well as to avoid rebuilding artifacts unnecessarily on every change to a derivation.


How does Nix handle areas with extensive package ecosystems? Python, Node et al? It seems like every distro/super packager has different compromises for this question.


I haven't investigated Python, but for Node it uses a tool node2nix that fetches the metadata for the entire dependency tree and constructs a Nix expression for each one, such that each expression represents the source for that specific dependency. It then takes the top-level package and makes a call to `nodeEnv.buildNodePackage`, which contains the source for that top-level package and declares the entire recursive dependency tree. buildNodePackage itself rebuilds the entire node_modules folder from the declared dependencies and reconstructs the lockfile. And then it asks npm to rebuild the package.

The end result is this bypasses npm's downloading and managing of node_modules, does it all in a fully declarative fashion, and then hands control over to npm to build the package. Also, each dependency's source is cached independently by Nix, so if multiple packages use the exact same dependency, it doesn't have to be redownloaded (unlike Rust where the entire dependency tree is vendored separately per package).


> (unlike Rust where the entire dependency tree is vendored separately per package).

To be clear, Cargo downloads the source globally. The compiled output is per-project, however.


Nix uses `cargo vendor` to literally vendor the source of all dependencies as a separate derivation. Maybe cargo keeps a global cache of the source as well when doing `cargo vendor`, I don't know, but Nix does construct a derivation that contains just the vendored dependency sources and then provides that as input when building the full crate.


I'm curious why there's no similar variable pointing to a built library (e.g. a cdylib).


What's the use case?


I feel like I've been pushing my use case plenty already, so please take this as an answer to your question rather than one more push ;-)

I'd like to be able to create plugin-style cdylibs that get loaded into a host program. For instance, PostgreSQL extensions are shared libraries (typically written in C) that need to be loaded into a running instance of postgres for integration testing. I am trying to make it reasonable to write such extensions in rust (https://github.com/jeff-davis/postgres-extension.rs).

The environment variable to find the shared library would solve one problem, but there are still a couple more right behind it:

* Running "cargo test" tries to build a binary that links to the library to run the unit tests, but this will fail because there are unresolved symbols in the .so file (symbols that call APIs in the host program, in this case PostgreSQL functions like palloc()). In other words, the test binary is useless because it can't load the plugin.

* Linux seems fine creating a .so with unresolved symbols, but Mac and Windows need special linker flags. I can document that the linker flags are needed by consumers of my crate, but it would be nice if there were a better/supported way to create plugin-style shared libraries. I tried filing a PR (https://github.com/rust-lang/rust/pull/66204), but it was determined that it needed to go through the RFC process.

The simplest way to see the problem is to look at this crate: https://github.com/jeff-davis/unresolved
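
Schematically, the plugin side looks something like this (a sketch; palloc stands in for any host-provided symbol):

    // src/lib.rs of a crate built with crate-type = ["cdylib"]
    use std::os::raw::c_void;

    extern "C" {
        // Provided by the host process (postgres) only at load time,
        // so it stays unresolved in the produced .so/.dylib/.dll.
        fn palloc(size: usize) -> *mut c_void;
    }

    #[no_mangle]
    pub extern "C" fn my_extension_hook() {
        unsafe {
            let _chunk = palloc(64);
        }
    }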

My problems don't end there, unfortunately. For my crate to really work, I also need to solve the problem of setjmp/longjmp at the edges between C and Rust (https://github.com/rust-lang/rfcs/issues/2625).


It's all good; I was asking because I legit didn't know, and couldn't think of anything off the top of my head. I don't work on Cargo, so I'm not as in-tune with its development.

So, I am still not 100% sure that I understand, mostly because each package can only have one library, so this would only work if you wanted one plugin. I guess maybe if they were all in a workspace?


Typically one postgres extension would have only one plugin library. So, the idea is that one crate would produce one plugin. I guess it's possible to have more than one plugin per extension, but it never occurred to me and I haven't seen that before.

Having the location of the plugin might also allow "cargo install" to work.


From a non-Rust person's point of view, I assume a macro is something like a super-pre-processor that helps keep the code concise. In the example below, what portion is written ahead of time (by the person making the macro) and what portion is the macro usage?

  macro_rules! mac_trait {
      ($i:item) => {
          trait T { $i }
      }
  }
  
  mac_trait! {
      fn foo() {}
  }
Is this making it easier to implement a trait? interface?


In general, as a Rust programmer you'll end up consuming orders of magnitude more macros than you'll ever write; from what I've seen, macros are more commonly written in libraries to make their use smoother, and my experience in other macro-heavy languages was similar (Clojure, SBCL).

But as others said, the point is boilerplate reduction. You can write a macro that'll automate that process away. A lot of examples are about automatically implementing certain behaviors for structs, such as equality, but you'll also see extensive macro use in ORM libraries that can generate a lot of runtime code off of a smaller table definition macro.

Anything with an explanation point is a macro invocation, by the way.


“Anything with an explanation point is a macro invocation, by the way.”

Can you explain what this means, exactly?


This is a function invocation:

    foo(bar)
This is a macro invocation:

    foo!(bar)
It's done this way because macros can have much more complex things inside the ()s than functions can. This helps both humans and computers parse such things.


Not OP, but I think they mean the exclamation mark (!)


TIL that “exclamation point” is region specific. Exclamation mark sounds subtly wrong to my ear.


I thought the confusion was that you wrote "explanation point" instead of "exclamation point".


foo(x, y, z) is a function. foo!(x, y, z) is a macro.


    println!("something")

that's a macro


Exclamation mark. e.g.

    println!("hello, world");
or

    todo!();
are macro invocations.


Macros:

#[Foo] struct Bar {}

baz!()

cat!{}

blah![]


exclamation point?


TIL that “exclamation point” is region specific. Other people call it an exclamation mark.


Top level says “explanation point” and apparently didn’t ever reread after getting odd sounding questions.


Uhh, I think you’re referring to me.

I didn’t bother to clarify, since 5 people jumped in to explain, and I felt it was redundant.


The first block (macro_rules!) is the declaration, written by the macro author; the second block (mac_trait!) is the usage, written by the user. Note that macro_rules! is itself a macro.

`trait T {}` and `fn foo() {}` are used in the final code, which would be:

    trait T { fn foo(){} }


Thanks. I am trying to understand where the savings for the user is (as in less code to type). I guess it is somehow related to T being a generic and if they didn't use this macro, they would have had to repeat this all over?


In the example given, the macro is almost useless. The benefit comes when you want the expanded code to contain more things, like methods that your consumer would otherwise have to write themselves, or sister methods and types.

Macros are only about convenience of use and reduction of boilerplate. A good example is the vec![1,2] macro call, which expands to

    {
        let mut x = Vec::new();
        x.push(1);
        x.push(2);
        x
    }
The reason the blog post uses that example is that it shows the `item` type of macro argument (functions, structs, enums, traits and impls) now being usable inside trait, impl, and extern bodies, expanded correctly without any extra work by the macro writer.


Just nitpicking, but a better explanation of the expansion of `vec![1,2]` would use `Vec::with_capacity(2)` instead of `Vec::new()`, that is, it already allocates the Vec with the correct capacity (the real expansion uses in-place construction of a boxed slice, but the effect is similar).
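
(For the curious, I believe the real expansion is roughly equivalent to this on stable:)

    fn main() {
        // roughly what vec![1, 2] boils down to: build a boxed slice and convert it in place
        let v = <[_]>::into_vec(Box::new([1, 2]));
        assert_eq!(v, vec![1, 2]);
    }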


Here's a real-world example from the code I was working on yesterday: I have several database tables where the records have numeric IDs, and instead of having the ID type be just an int everywhere, I want them to be proper types to avoid mixups. Instead of

  fn set_document_owner(document_id: i64, user_id: i64)
where I could easily provide the arguments in the wrong order, I want

  fn set_document_owner(document_id: DocumentID, user_id: UserID)
where this cannot happen. Each ID type is just a one-element tuple containing an i64, which would just be

  pub struct DocumentID(pub i64);
  pub struct UserID(pub i64);
  //... and so on ...
and so on. But in order for them to behave as expected (e.g. with the == operator or when reading/writing IDs in the database), I need to write a bunch of boilerplate. This is the macro that I have now:

  use std::fmt;
  use serde::{Deserialize, Serialize};
  use rusqlite::types::{FromSql, FromSqlResult, ToSql, ToSqlOutput, ValueRef};

  macro_rules! make_id_type {
      ( $name:ident ) => {
          ///A type-safe primary key.
          #[derive(Default, Clone, Copy, Hash, PartialEq, Eq, Deserialize, Serialize)]
          pub struct $name(pub i64);

          impl fmt::Display for $name {
              fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
                  self.0.fmt(f)
              } 
          } 

          impl ToSql for $name {
              fn to_sql(&self) -> rusqlite::Result<ToSqlOutput> {
                  self.0.to_sql()
              } 
          } 

          impl FromSql for $name {
              fn column_result(value: ValueRef) -> FromSqlResult<Self> {
                  Ok($name(i64::column_result(value)?))
              } 
          } 
      };
  }

  make_id_type!(DocumentID);
  make_id_type!(UserID);
  //... and so on ...
This is pretty much the most basic use case for macros. You can do some more interesting stuff than just take a name and insert it into some template, because macros can actually pattern-match the phrases given to them in the macro invocation. A good example, also from the RDBMS domain, would be https://docs.diesel.rs/diesel/macro.table.html (link goes to the documentation of the macro).


In this specific case the macro isn’t very useful since you can easily write `trait T { ... }` yourself. They tend to be more useful when there is a lot of similar boilerplate code that only differ in a few ways. A macro can cut down most of that repetition. They can also be used to provide pseudo “literals” like the vec! macro does for the Vec data type.


One thing I use it for all the time is data-driven tests, e.g. https://github.com/jcdickinson/racemus/blob/master/racemus-b...


One common usage is to use macros to repeat some definition for every integer type. In this case T is not a generic, but the name of the trait.


It's a bit like C's:

    #define MAC_TRAIT(i) trait T { i }

    MAC_TRAIT( fn foo() {} )

Except that in Rust macro arguments are typed and can dictate their own microsyntax. Macro output is assembled in the syntax tree, not as text find'n'replace, which also avoids many surprises.


Rust macros are their own little thing; they help reduce code repetition by writing code for you.

You don't need them to write Rust applications, but they can, for example, help with writing tests for 3 different backends without having to write the same testing function 3 times.

In this situation, it's different from having a type parameter, because macros won't make you change the function definition. They will just "copy and paste" the code block for you.
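
A sketch of the multi-backend test idea, with made-up backend types:

    // Hypothetical backends; the macro stamps out one #[test] per backend.
    struct SqliteBackend;
    struct PostgresBackend;
    impl SqliteBackend { fn ping(&self) -> bool { true } }
    impl PostgresBackend { fn ping(&self) -> bool { true } }

    macro_rules! backend_tests {
        ($name:ident, $backend:expr) => {
            #[test]
            fn $name() {
                assert!($backend.ping());
            }
        };
    }

    backend_tests!(ping_sqlite, SqliteBackend);
    backend_tests!(ping_postgres, PostgresBackend);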


Rust macros are real, structural macros, like in most macro-supporting languages. So they are not like the preprocessor-style macros in C and relatives.

A good mental model is to think of this kind of macro as a compiler callback that defines a syntax tree fragment in its body, to be evaluated when the macro is called.


The first block defines a macro; the second block is the macro usage.

The goal of macros is to deduplicate code, either inside a function or to implement traits (e.g. the Debug trait, which allows printing a structure/enumeration/... recursively; this would be very boring to implement yourself every time).


The look of this makes me want to go back to Lisp.


This macro system is very powerful, if kludgy, but it's not the final version of the system. Macros 2.0 has been in the works for a while now and will eventually have a much nicer syntax for macro definitions. When Rust 1.0 was gearing up for release the team (rightly imo) decided that launching then with known-imperfect-but-usable macros was a worthwhile trade-off.


Rust's macros are unpleasant to write and debug but very nice to use. The language is also much more expressive than C so I don't find myself needing to (ab)use macros quite as often.

Of course lisp macros are on a different level but the languages are massively different so it's not necessarily a super fair comparison.


One issue with macros is that they are currently breaking most IDE integrations.


It was designed and implemented by some huge Racket fans, incidentally...


I wonder if recency bias combined with the unusually large volume of major changes in 2018 was responsible for Preston Carpenter's "I can't keep up with Rust" post (https://news.ycombinator.com/item?id=22818150)


I think it depends a lot on what part of the language you’re focused on. I’ve been using rust heavily for 3 years and have never felt like things were changing quickly. But I have no interest in the whole futures/async side of things which is where a lot of churn seems to be happening.


I would say there is an argument to be made on both sides. On one hand rust is only 10 years old with much less corporate backing than Go, but on the other hand you have to be very careful with even small changes if you want the language to be stable in the long run.

I can see why people are worried, and it's valid, but I also trust the devs are making correct choices


> On one hand rust is only 10 years old

Periodization is hard; it's been five years since Rust 1.0. The project is much older, but the language as we know it today has only been around half that time.


> rust [has] much less corporate backing than Go

I don't know if that statement is really justified. Sure, Google is a much bigger and more influential company than Mozilla. But when taking the average over the entire company, all-of-Google supports Go much less strongly than all-of-Mozilla supports Rust, and I think that pretty much cancels out the difference in company size, so I would consider both languages to have roughly equal corporate backing.

I think the actual difference is that Go has a much larger immediate audience than Rust. Go appeals primarily to developers of web service backends, who are coming from Ruby/Python/Perl/PHP and are looking for something faster and statically typed. Rust OTOH initially mostly appealed to systems developers looking for a better C/C++, although that audience has been steadily expanding since then.


Rust is also supported by more than Mozilla these days.


All those new features sound so ... unexciting.

I like that!


The primitive type inference is helpful, especially given the number of `val as i32` conversions that one might encounter when doing arithmetic across different crates. It does feel "magical", though, so I'm not sure how I feel about it.

Do the primitive type inferences also work when comparing i32 vs &i32, or even better, &i32 vs &u64?


I believe that this only works for literal type inference, i.e.:

    let a: f32 = 0.0;
    let b: f64 = 0.0;
    
    let this_works: f32 = &0.0 + 0.0;
    let this_doesnt: f32 = &a + b;
And that's a good thing IMO; implicit promotion rules in C are one of my least favourite features of the language. They make it so easy to write seemingly innocuous code that's completely broken.


What is &0.0 + 0? Adding a reference to a float to a float - is it being automatically dereferenced?


Yes, it is being automatically de-referenced.


Yes, it is only for literals.


This seems like an odd choice... do you know the rationale? (or have a link to an issue/RFC I could go read)


I am finding it a bit tough to explain concisely, maybe go check out https://github.com/rust-lang/rust/pull/68129/ and it might help?


It's becoming a recurring pattern in newer languages that number literals are their own magical type (usually backed by a bigint or bigfloat) that only coerce into a concrete type when they first come into contact, so to say, with a typed variable or expression.


Though note that the implementation in Rust is much less magical than that of, say, Go. Every numeric literal expression has a concrete type. When that type can be inferred from context (using the ordinary type inference rules), it is. When it can't be inferred from context, the compiler forces integer literals to i32 and float literals to f64; if that causes additional type errors to occur, then Rust requires the user to insert type annotations where necessary rather than trying to be smart about it (this can all be observed in the error messages in the OP). IOW, code using the numeric literal fallback types only compiles when it makes no semantic difference what the type is (modulo `unsafe`, as ever).

This mostly happens in small code examples and test cases/doctests, e.g. `fn main() { dbg!(42); }` only compiles because of the existence of this fallback.
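
E.g., a quick sketch of the fallback:

    fn main() {
        let a = 42;     // nothing else constrains it, so it falls back to i32
        let b = 1.5;    // falls back to f64
        let c: u8 = 42; // here context fixes the type; no fallback involved
        println!("{} {} {}", a, b, c);
    }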


Yes, all combos of &T and T on both sides of the operator do. Sorry for not being more explicit about that in the post!


The one feature I need on stable is inline assembly. It's something that should not be sidelined to the nightly compiler, and its absence makes Rust a harder sell for the embedded world.

Yes, I know that I can just use .S files for my assembly, but sometimes I just need to access a CSR.
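
For the record, what I want to be able to write on stable is roughly this (the new RFC's syntax, nightly-only at the time, reading a RISC-V CSR; details may still shift):

    #![feature(asm)] // nightly-only feature gate

    fn read_mstatus() -> usize {
        let value: usize;
        unsafe {
            // csrr copies a control/status register into a general-purpose register
            asm!("csrr {0}, mstatus", out(reg) value);
        }
        value
    }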


It’s closer than it’s ever been! There’s been a lot of renewed interest, with an RFC and an implementation. We’ll see!


Considering all the excitement here, I want to learn Rust. What is a good resource to start with if I've never programmed before?


The official book is fairly beginner-friendly, but some background computer knowledge is unavoidable if you want to progress with programming.

https://doc.rust-lang.org/stable/book/title-page.html



