Last night I was writing a blog post about how to use const generics. I included instructions on installing Rust Beta, and setting your directory to use the Beta compiler instead of Stable. I started to write a disclaimer like "By the way, this will be stabilized in a future version of Rust, probably on..." and realized that const generics were being stabilized in less than 9 hours! So I removed all that stuff.
If you're interested in some concrete examples of using const generics to write faster code, my post on them is here: https://blog.adamchalmers.com/grids-2/
For all the pain points C++ has, template metaprogramming was always so satisfying when you got it to work. Satisfying in an "I feel really clever for solving this so generally and with no runtime overhead" way, while being completely aware that another person reading the code and having to maintain it might order a hit on me over the dark web.
Vector<T, N> and Matrix<T, N> are a fancy way to do small linear algebra types - like 2/3/4-element vectors and 3x3/4x4 matrices. You implement simple operations like add, multiply, and dot product with loops, the compiler unrolls them, and then you add specific overloads for things like the cross product. It saves you reimplementing everything for every N. It's the simplest case of this kind of metaprogramming, and it feels good once you discover this kind of generalisation.
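That same pattern ports directly to Rust with const generics now. A quick sketch (type and method names are mine, not from any particular library):

```rust
// A small linear-algebra type generic over its dimension.
struct Vector<const N: usize>([f64; N]);

impl<const N: usize> Vector<N> {
    // The loop bound N is a compile-time constant, so the optimizer
    // can fully unroll this for small N.
    fn dot(&self, other: &Self) -> f64 {
        let mut sum = 0.0;
        for i in 0..N {
            sum += self.0[i] * other.0[i];
        }
        sum
    }
}

fn main() {
    let a = Vector([1.0, 2.0, 3.0]); // N = 3 inferred from the literal
    let b = Vector([4.0, 5.0, 6.0]);
    println!("{}", a.dot(&b)); // prints 32
}
```

One `impl` covers every dimension; dimension-specific operations like a 3D cross product can still be added as an `impl Vector<3>` block.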
> This is gonna require some const generics stuff that isn't actually stabilized yet. This means, for the first time in my life, I'm going to have to use... Nightly Rust.
Rust 1.51 stabilized a lot of const generics use-cases, but there are still some advanced parts which are only on Nightly. For example, doing math with const generics, like an array [String; WIDTH * HEIGHT], still requires nightly.
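To make the stable/nightly boundary concrete, here's a sketch (the `Grid` type is made up for illustration): plain const parameters work on stable 1.51, but arithmetic on them in types does not.

```rust
// Stable in 1.51: types and impls generic over array lengths.
struct Grid<const W: usize, const H: usize> {
    rows: [[u8; W]; H], // nested arrays sidestep the need for W * H
}

// Still nightly-only: arithmetic on const parameters in a type, e.g.
//     cells: [u8; W * H]   // error on stable 1.51

fn main() {
    let g = Grid::<4, 3> { rows: [[0u8; 4]; 3] };
    println!("{}", g.rows.len() * g.rows[0].len()); // prints 12
}
```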
I think the way that Rust releases keep hitting the front page of HN helps create an illusion that the language is changing faster than it really is. There's a sense in which a front-page-worthy release is expected to be a big one, and if that happens all the time, there must be a lot of big changes, right? That said, I'd say it's worth it in the case of stabilizing const generics.
aye - it opens the door to native development for a large number of devs who have only worked in modern GC'd languages. The cross-compilation targets of Rust are highly appealing to those who:
1. work on performance-sensitive code,
2. want to run one stack across frontend/backend/native apps,
3. don't want to learn multiple languages for multiple activities.
How come? Wasm is only at ~92% browser adoption right now (per https://caniuse.com/?search=wasm), but certain classes of apps can easily take that 8% hit for all the benefits over the status quo languages that Rust brings.
You get painless sharing of domain types between client and server via serde; you still get native async/await support; you get more powerful data modeling features than even Typescript can provide (although Typescript's union types are definitely much more ergonomic). You get a standard library that's not a joke. You don't have to wait until proposal-temporal gets ratified and implemented in order to have sane date/time handling (recently hit Stage 3 btw, woo!).
Honestly, only ScalaJS can match all of those benefits, as far as I can tell after examining a dozen languages targeting the web.
So yeah, I'm super curious to hear what problems you encountered with Rust as a web frontend language.
> although Typescript's union types are definitely much more ergonomic
meh. Doing tricky stuff with them is far easier since TS sits on top of a dynamic language, but the fact that enum variants are each a function makes constructing them really easy:
    let array = [1, 2, 3];
    let some_array: Vec<Option<i32>> = array.iter().copied().map(Some).collect();
Right, but that could just as easily be blog posts, rather than release announcements particularly, and reduce the appearance of constant change. I only mention it because people have wondered why so many think the language is changing a lot, when in practice it has slowed down quite a bit. Maybe part of the solution is not posting releases on HN.
I think that your comment is spot on, and I also think that this release is front-page worthy. It has the biggest stabilized feature since async/await which happened in 1.39.0.
This couldn't come soon enough! I had to fork a particular dependency because Cargo was pulling the wrong dependencies. It drove me a bit crazy before I figured out which dependencies were causing the issue.
> For a long time, even the standard library methods for arrays were limited to arrays of length at most 32 due to this problem. This restriction was finally lifted in Rust 1.47 - a change that was made possible by const generics.
I haven't been following every single update, so it's good to know that this constraint is no longer present (and has been gone for a bit).
I'm a bit of a noob, but could const generics allow for a `.windows::<const WIN_SIZE: usize>()` function that returns fixed-length arrays instead of slices?
Yeah, that's fine. I'm making a static site generator and I want each post to have optional links to the next and previous posts, so something like:
    let pages = posts.windows::<3>().map(|[prev, this, next]| Page {
        item: this,
        prev: Some(to_url(prev)),
        next: Some(to_url(next)),
    });
    // then attach first and last pages
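The method being asked about can be sketched on stable Rust today; if I remember right, nightly's unstable `array_windows` is roughly the std version of this idea. The helper name here is mine, and it assumes N >= 1:

```rust
use std::convert::TryInto;

// Windows that yield fixed-size array references instead of slices,
// so callers can destructure them with an irrefutable pattern.
fn windows_fixed<T, const N: usize>(s: &[T]) -> impl Iterator<Item = &[T; N]> + '_ {
    // len - (N - 1) windows; saturating_sub handles slices shorter than N.
    (0..s.len().saturating_sub(N - 1)).map(move |i| s[i..i + N].try_into().unwrap())
}

fn main() {
    let posts = ["a", "b", "c", "d"];
    for [prev, this, next] in windows_fixed::<_, 3>(&posts) {
        println!("{} <- {} -> {}", prev, this, next);
    }
}
```

The `try_into().unwrap()` converts `&[T]` to `&[T; N]`; the length always matches, so the unwrap never fires.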
That restriction was removed from the standard library earlier because std is built on nightly (which has had const generics for a while). Now that const generics are stable, crates can take advantage of this too. Previously they were using macros to generate the 32 array implementations.
I was bitten by this yesterday. I had an array of length 32, but the implementations only went up to length 31 (including 0, so 32 in total). Found that quite amusing. One day away, one array element away.
32 seems so normal that it made me question why the default was 33, to include 0-32, but whatever heh. Glad to see this!
> It was removed from that standard library because std is built on nightly
IIRC, std and the compiler are built with the previous stable version of the compiler, but with a special flag that allows using unstable features for bootstrapping.
What are you quoting? I can't find that in the article linked here and I don't understand what it's saying. Surely Rust didn't cap arrays at length 32?
> If you'd like to learn more about const generics you can also check out the "Const Generics MVP Hits Beta" blog post for more information about the feature and its current restrictions.
The article links to more details in that post [1], that's what the citation was from, sorry.
I thought it was conspicuous that they had '32' in the example because I remembered it as a limitation before.
No, arrays were not limited in length but IIRC generic types weren't able to make an array size generic across arbitrary counts.
Nit: Copy has always been a builtin auto trait so it has always been implemented. You are right for the other traits.
(also, as other commenters point out, std built-in traits like Clone etc. have already supported larger array sizes because the standard library can use unstable features; but before const generics were available even as an unstable feature, that restriction very much did exist, and now crates in the ecosystem can use them too).
std::array::IntoIter[1] is great. I used to maintain a crate[2] to do this. Of course doing so requires unsafe so I am happy to move that danger into std::
I think I understand what const generics are. But I fail to see a use case for it, other than using them in array literals and abstractions over arrays.
In the graphics programming I've done as a hobby, if I have a struct that contains a buffer for an image (say 1024px x 1024px = a buffer array of length 1048576), and have functions that operate on that buffer, etc, I either have to:
1) Create a Vec (separate allocation)
2) Make 1024 or 1048576 a global constant that every relevant struct and function references directly (whole program must use the same size at once; forget about exposing it as a library)
With const generics I can make all that code generic over the buffer resolution, meaning I can trivially use multiple different versions within the same build, without doing extra allocations, and I could even expose that code as a library for others to use at whatever resolution they wish
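A minimal sketch of that pattern (the names are mine; and note that on stable 1.51 the pixel count has to be a single const parameter, since `WIDTH * HEIGHT` in an array length still needs nightly):

```rust
// A pixel buffer whose length is a const generic parameter: no heap
// allocation of its own, and different sizes coexist in one build.
struct ImageBuf<const N: usize> {
    pixels: [u8; N],
}

impl<const N: usize> ImageBuf<N> {
    fn new() -> Self {
        ImageBuf { pixels: [0; N] }
    }

    // Works for any N; the loop bound is known at compile time.
    fn invert(&mut self) {
        for p in self.pixels.iter_mut() {
            *p = 255 - *p;
        }
    }
}

fn main() {
    let mut small: ImageBuf<16> = ImageBuf::new(); // e.g. a 4x4 image
    let mut large: ImageBuf<64> = ImageBuf::new(); // e.g. an 8x8 image
    small.invert();
    large.invert();
    println!("{} {}", small.pixels[0], large.pixels[0]); // prints 255 255
}
```

For genuinely huge buffers you'd still want `Box<ImageBuf<N>>` so the struct lives on the heap, but the struct itself stays flat.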
This is, literally, the exact same reason I use const generics. I was so excited when I started using them on Rust beta and could shave off some allocations!
I'm allocating a megabyte inline. I can put the containing struct on the heap if I need to (I may be doing that; can't remember), but I don't want that to be the concern of the struct itself.
For example: what if I decide at some point to create a Vec of this struct? I don't want each of those elements then also putting their internal buffers into separate allocations on the heap, when the Vec itself is already on the heap. I want the struct to be as "flat" as possible, and to only allocate for the sake of the stack as a final step when the use-case demands it
In emulators I often have to implement various FIFOs with a fixed word size (8 bits, 16bits, 32bits etc...) and depth (16, 32 or 64 elements etc...).
I could very easily make the word size generic, but not the depth. So I either had to make the depth dynamic (pretty expensive if you have hundreds of thousands of FIFO operations per second) or hack around it somehow.
Now I can just make the depth a generic parameter.
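A sketch of such a FIFO, generic over both word type and depth (all names are mine, not from an actual emulator):

```rust
// A fixed-capacity ring buffer: depth is a const parameter, so the
// backing array is inline and its size is known at compile time.
struct Fifo<T: Copy + Default, const DEPTH: usize> {
    buf: [T; DEPTH],
    head: usize,
    len: usize,
}

impl<T: Copy + Default, const DEPTH: usize> Fifo<T, DEPTH> {
    fn new() -> Self {
        Fifo { buf: [T::default(); DEPTH], head: 0, len: 0 }
    }

    // Returns false when full instead of overwriting.
    fn push(&mut self, v: T) -> bool {
        if self.len == DEPTH {
            return false;
        }
        self.buf[(self.head + self.len) % DEPTH] = v;
        self.len += 1;
        true
    }

    fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        let v = self.buf[self.head];
        self.head = (self.head + 1) % DEPTH;
        self.len -= 1;
        Some(v)
    }
}

fn main() {
    // 16-bit words, depth 16 -- both fixed at compile time.
    let mut fifo: Fifo<u16, 16> = Fifo::new();
    fifo.push(0xBEEF);
    println!("{:#06x}", fifo.pop().unwrap()); // prints 0xbeef
}
```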
You can also abuse these types of generics to force the compiler to generate specialized versions of some methods in order to get better performance (at the cost of binary size) without having to manually create a bunch of variants of the same code.
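One illustrative way to do that (a made-up example, not from the release notes) is a const `bool` parameter: the branch is resolved at monomorphization time, so each specialized copy of the function contains only its own path.

```rust
// Two specialized copies get compiled: one with logging, one where the
// `if VERBOSE` branch is eliminated entirely.
fn sum_all<const VERBOSE: bool>(xs: &[i32]) -> i32 {
    let mut total = 0;
    for x in xs {
        if VERBOSE {
            println!("adding {}", x);
        }
        total += x;
    }
    total
}

fn main() {
    println!("{}", sum_all::<false>(&[1, 2, 3])); // prints 6
}
```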
Not OP, but my use case is serializing an array. Implementations were provided by the library for serializing arrays of `T` for lengths 0 through 31. I had an array of `[T; 32]`, so I couldn't serialize my array.
With const generics, the library provided implementations for all `[T; N]`, so my code worked out of the box with no headache.
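The shape of that fix looks like this (`Describe` is a hypothetical trait standing in for the library's serialization trait): one generic impl replaces 32 macro-generated ones.

```rust
trait Describe {
    fn describe(&self) -> String;
}

// A single impl now covers [T; N] for every N, instead of a macro
// stamping out impls for lengths 0..=31.
impl<T, const N: usize> Describe for [T; N] {
    fn describe(&self) -> String {
        format!("array of {} elements", N)
    }
}

fn main() {
    println!("{}", [0u8; 100].describe()); // prints "array of 100 elements"
}
```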
Well, array literals are very convenient in some areas.
Matrix types can be cleaned up across projects, one widespread area being 3d graphics. Previously, you either had to create a separate type for each matrix size, or use a backing vector (of vectors); the first solution is ugly (because of redundancy), the second is potentially underperformant (in cases where one wants to avoid dynamic allocations as much as possible).
Wow, const generics were three years in the making. Longer even if you count the work leading up to the RFC. Rust's pace of change can seem very slow at times but I'm glad they take the time to do it right.
Enormous strides have been made in compile-time evaluation since then, and in other interesting areas like MIR optimizations (optimizations that happen in rustc rather than in the LLVM part).
x86/x86-64 intrinsics are stable (since Rust 1.28, i.e. many releases back), but inline asm is not. You could maybe try something simple and representative to get your own idea.
Creating unaligned references is UB, so it's illegal to take a reference to a packed struct field. However, even if you just wanted to create a raw pointer to that field, you would've had to do `&packed.f2 as *const u16`, which is technically UB since you do create a temporary reference before casting it to a pointer. I'm sure one way to solve that is to just allow that as a special case; say that `&<expr> as *const _` semantically doesn't actually create a reference. But, the way the lang team went is to have a separate macro that transforms `ptr::addr_of!(<expr>)` to something like C's `&<expr>` operator, so that it creates a raw pointer directly instead of a reference.
Making the existing syntax - `&<expr> as *const _` like you said - just work would be far and away the best solution. Otherwise we have a 5-year transition period before all the code out there that touches this gets the memo.
Good question. I think the answer is no, and that `let value = *ptr;` is UB if the pointer is misaligned. The right way to go is `let value = ptr.read_unaligned();`
But the docs will say this:
> Accessing unaligned fields directly with e.g. packed.unaligned is safe however.
It's not a stupid question! The short answer is: the compiler doesn't care. An array of type [T; N] (where T is the type of the elements and N is the length of the array) can be allocated on the stack or in the heap (using Box<T>, as stated in sibling comments). If you access it by value, it will be copied as-is, so be careful with large arrays! You mostly access them behind a reference, but in that case, the type of the reference will be &'a [T; N] where 'a is the lifetime of the referent value. The type system by itself doesn't actually care whether the value lives in heap or stack.
The size of the arrays will still be determined at compile time to a specific value. This only allows code to be written that is generic in terms of the size of the array; it still requires some other place in the codebase to supply an actual size for it to become part of the executable.
As with any other Rust value, it depends on how you create it. If you just do `let x = ...;` in your function then it'll be on the stack. If you do `let x = Box::new(...);` then it'll be on the heap.
To refine what another comment has said: the array will be local (not across a pointer) to whatever struct or block it's contained in. So if it's contained in a heap-allocated struct (a Box or a Vec for example), it will be on the heap (within that struct, not behind another pointer). If it isn't contained within a heap-allocated struct, it will be on the stack (or if it's a const, it'll be in a static location).
This is why arrays in Rust must always have a statically-known size: because they themselves do not create heap-allocations, so the compiler must know how much space to set aside for them. And that fact is why const generics are such a big deal: they allow a certain kind of variability in that static size for different invocations of the same piece of code, so long as the size can still be determined for each usage at compile time.
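A minimal illustration of the placement rules described above:

```rust
fn main() {
    // The same type [u64; 4] lives wherever its container lives.
    let on_stack: [u64; 4] = [1, 2, 3, 4];             // inline in this stack frame
    let boxed: Box<[u64; 4]> = Box::new([5, 6, 7, 8]); // the array itself is on the heap
    let in_vec: Vec<[u64; 4]> = vec![[9, 10, 11, 12]]; // inline inside Vec's heap buffer

    println!("{}", on_stack[0] + boxed[0] + in_vec[0][0]); // prints 15
}
```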
Interesting to see the Wake trait introduced. It appears to make it possible to create a Waker implementation without requiring unsafe code, at the cost of requiring the use of Arc.
The way the trait is used by the standard library is amusing. A trait bound is used to ensure a valid implementation is provided, but then the trait wrapping is thrown away and the inner value is only kept around as a raw pointer. That pointer is then cast back to the trait when a method needs to be invoked on it. I guess this works because the methods use Arc<Self> as self. The trait implementation basically carries its own boxed data with itself, so whoever "owns" the trait implementation doesn't have to worry about managing a dynamically sized value. Has anyone seen that pattern used anywhere else?
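For reference, the Arc-based pattern in use looks roughly like this (the `Flag` type is mine; `std::task::Wake` and the `From<Arc<W>>` conversion are what 1.51 stabilized):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

// A Waker implementation with zero unsafe code.
struct Flag(AtomicBool);

impl Wake for Flag {
    // Note the Arc<Self> receiver: the impl carries its own allocation.
    fn wake(self: Arc<Self>) {
        self.0.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let flag = Arc::new(Flag(AtomicBool::new(false)));
    // From<Arc<W>> builds the RawWaker vtable behind the scenes.
    let waker = Waker::from(flag.clone());
    waker.wake();
    println!("{}", flag.0.load(Ordering::SeqCst)); // prints true
}
```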