
Rust 1.45 - pietroalbini
https://blog.rust-lang.org/2020/07/16/Rust-1.45.0.html
======
sitkack
> Rust 1.45.0 adds the ability to invoke procedural macros in three new
> places!

Rust 1.45 will be the Rocket Release. It unblocks Rocket running on stable as
tracked here
[https://github.com/SergioBenitez/Rocket/issues/19](https://github.com/SergioBenitez/Rocket/issues/19)

This is so excellent, and I love seeing long-term, multiyear goals get
completed. It isn't just this release, but all the releases in between. The
Rust team and community are amazing.

~~~
luhn
For those out of the loop like me, Rocket is a web framework for Rust that
apparently was using a lot of experimental features.
[https://rocket.rs/](https://rocket.rs/)

Or maybe it's an explosive weapon crafted with metal pipe and gunpowder.
[https://rust.fandom.com/wiki/Rocket](https://rust.fandom.com/wiki/Rocket)

~~~
moksly
Hadn’t heard of it, but it looks a lot like Flask. Good stuff, maybe I should
consider looking into Rust after all.

------
angrygoat
This rather niche fixing of unsafe behaviour is excellent: [https://blog.rust-lang.org/2020/07/16/Rust-1.45.0.html#fixing-unsoundness-in-casts](https://blog.rust-lang.org/2020/07/16/Rust-1.45.0.html#fixing-unsoundness-in-casts)

I spent a few years as a scientific programmer and this is exactly the sort of
thing that just bites you on the behind in C/C++/Fortran: the undefined
behaviour can manifest as noise in your output, or as really hard-to-track-down,
intermittent problems. It's a big win to get rid of it.

~~~
davrosthedalek
I'm not sure I understand this. Does it not produce a run time error? Why not?

This looks very dangerous, because it essentially does the "nearest to right"
thing. Say, you cast 256 to a u8, it's then saturated to 255. That's almost
right, and a result might be wrong only by 0.5%. Much harder to detect than if
it is set to 0.

~~~
pwdisswordfish2
> I'm not sure I understand this. Does it not produce a run time error? Why
> not?

It’s not supposed to. Type casting with ‘as’ is supposed to be lightweight and
always succeed; there is no room in the type system to return an error. In
case lossless casting is not possible, some value still has to be returned.
Until now, this was outright UB — meaning the compiler is not even obligated
to keep it consistent from one build to another. Saturating, while still not
optimal, is at least deterministic.

> This looks very dangerous, because it essentially does the "nearest to
> right" thing.

That’s why the intention is to introduce more robust approximate conversion
functions and eventually, probably, deprecate ‘as’ casts altogether. There have
been a number of discussions about this; the current disagreements seem to be
about how to handle the various possible rounding modes.
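
To make the concrete behavior clear: as of 1.45, float-to-int `as` casts saturate at the target type's bounds, and NaN maps to zero. A minimal sketch:

```rust
fn main() {
    // As of Rust 1.45, float-to-int `as` casts saturate instead of being UB.
    assert_eq!(300.0_f32 as u8, 255); // above u8::MAX: clamped to 255
    assert_eq!((-1.0_f32) as u8, 0);  // below u8::MIN: clamped to 0
    assert_eq!(f32::NAN as u8, 0);    // NaN becomes 0
    println!("all casts saturated as expected");
}
```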

~~~
chowells
> meaning the compiler is not even obligated to keep it consistent from one
> build to another.

Way worse than that. The compiler wasn't obligated to act like anything at
all. It would be totally legal to compile it so that the first time the value
was accessed you got 0, the next time you got 1 - within the same program
execution, with no mutation of the value. _That_ is the sort of thing that is
observed behavior of UB in the worst cases, and why it's so terrible to just
pretend that UB is innocuous.

~~~
Sharlin
_Way_ worse than that, even. UB poisons every state of the program that
_eventually_ results in UB. For example, the optimizer is well within its
rights to remove as dead code any branch that, if taken, would provably lead
to UB at some arbitrary future point of execution.

~~~
TomMarius
That could literally produce no output program?

~~~
Spivak
Yep! Here's a dumb example:

    
    
        main()
          x = get_from_some_external_data_source()
          if x:
            print("Hello World")
            trigger_ub()
    

You might expect this code to always print if x is true but the optimizer can
look at this and say "welp, if x is true then it would trigger ub, therefore
it must be false, and since x must always be false we can just remove that
entire branch."

~~~
kbenson
My favorite example along these lines (in C) is "Cap'n'Proto remote vuln:
pointer overflow check optimized away by compiler"[1] which was covered here a
few years back and shows all of these "theoretical" compiler behaviors coming
to a head in a real bug which is thoroughly explained.

1:
[https://news.ycombinator.com/item?id=14163111](https://news.ycombinator.com/item?id=14163111)

------
fullstop
I keep seeing more and more news about Rust, and figure that perhaps it is
time that I learn something new.

99% of my development work these days is C with the target being Linux/ARM
with a small-ish memory model. Think 64 or 128MB of DDR. Does this fit within
Rust's world?

I've noticed that stripped binary sizes for a simple "Hello, World!" example
are significantly larger with Rust. Is this just the way things are and the
"cost of protection"? For reference, using rustc version 1.41.0, the stripped
binary was 199KiB and the same thing in C (gcc 9.3) was 15KiB.

~~~
steveklabnik
The smallest Rust binary ever produced was 145 bytes.
[https://github.com/tormol/tiny-rust-executable](https://github.com/tormol/tiny-rust-executable)

That is a bit extreme but it demonstrates the lower bound.

There's a lot of things you can do to drop sizes, depending on the specifics
of what you're doing and the tradeoffs you want to make.

Architecture support is where stuff gets tougher than size, to be honest. ARM
stuff is well supported though, and is only going to get better in the future.
The sort of default "get started" board is the STM32F4 discovery, which has 1
meg of flash and 192k of RAM. Seems like you're well above that.
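
For reference, a few commonly used `Cargo.toml` release-profile settings for shrinking binaries (a sketch; exact savings vary by project, and on older toolchains you still run `strip` on the binary manually):

```toml
[profile.release]
opt-level = "z"   # optimize for size rather than speed
lto = true        # link-time optimization across crates
codegen-units = 1 # better optimization at the cost of compile time
panic = "abort"   # drop the unwinding machinery
```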

~~~
hellofunk
> ARM stuff is well supported though

FYI Rust (and Go) currently don’t work on the new Apple ARM macs.

[https://news.ycombinator.com/item?id=23856806](https://news.ycombinator.com/item?id=23856806)

~~~
steveklabnik
That is not _entirely_ correct for Rust. I know because I helped a friend of
mine debug the porting process. (Apple did not see fit to give me access to
the hardware, alas.)

[https://github.com/rust-lang/rust/issues/73908](https://github.com/rust-lang/rust/issues/73908) has the full details.

(Also, not to be super pedantic, but this (among other things) is partially
why I said "and is only going to get better in the future," that is, the
support is generally good (I know, I am literally doing some of that in
another tab right now) but not flawless.)

~~~
hellofunk
Very interesting, thanks for the link!

~~~
steveklabnik
You're welcome! And I'm sorry you're downvoted, I think that's a little overly
harsh. Had I not known, that would have been helpful.

------
baseballdork
> array[i] will check to make sure that array has at least i elements.

At least i+1 elements, right? Or am I getting caught up by one of the three
hardest problems again?

~~~
dagmx
Edit: brain not woken up yet. It’s because of zero indexing.

———-

Original question: Out of curiosity, why the +1?

~~~
tryptophan
Arrays start at 0. example_array[5] is actually the 6th number in the array.

~~~
dagmx
Ah yeah that makes sense. I was thinking the `i` check already accounted for
the zero indexing.

------
trait
[https://github.com/SergioBenitez/Rocket](https://github.com/SergioBenitez/Rocket)
on stable rust finally!

~~~
ldng
IMHO, it's a shame so much time has been spent (~3 years?) on async at the
cost of basic features like multipart and CORS.

But I understand it could be more fun for the devs :-)

~~~
wtetzner
I think it makes sense to get your foundation and ergonomics correct before
piling on features. Otherwise you end up building features that may need to be
completely restructured/redone later.

~~~
ReactiveJelly
I recently switched from old sync versions of hyper and postgres to the new
async versions. [1]

It wasn't hard, but yeah it was not fun either.

I can only imagine it's worse if you're actually writing the libraries and not
just a CRUD app like I am

[1] Apparently the postgres crate was a wrapper around tokio_postgres all
along and I didn't notice. So to remove a dependency I switched to using
tokio_postgres directly

------
p4bl0
If anyone in the Paris area happens to be interested in teaching Rust at
university next year during the first semester, please get in touch :).

~~~
harikb
Is this undergrad? Genuinely curious, how will you get someone to understand
what ownership helps avoid without them having experienced the pain on the
other side?

I guess with younger and younger kids learning programming these days, maybe
they can handle more? I am not sure if my son would understand all of the
intricacies in his first semester.

~~~
p4bl0
Yes it is a course aimed at undergrad students, in their second year at
university.

Of course they won't be able to grasp everything that Rust has to offer, but
that is true of any language. I think Rust will expose them to many
theoretical and practical CS concepts that they will be glad to have at least
heard of during their studies.

In our degree, the first year students learn to program with Python, Racket
(or OCaml, depending on which teacher they get), C, Prolog, Bash, … Each of
these languages has way more to offer than what they can grasp. But each of
them offers a different approach to programming and helps the students to
actually learn to _program_ (rather than learning to _write Java code_, for
example).

The course in question is actually called "Advanced programming". I want to
experiment with a Rust course in second year as a kind of followup to both the
functional programming course (the Racket/OCaml one) and the imperative
programming course (the C one) that they have during the first year. If it
really doesn't work, we'll change to something else or simply swap back to it
being "Advanced C programming", for instance. But first, let's try to make the
Rust experiment work. I really think it can benefit our students!

~~~
harikb
sorry, when you said first semester, I assumed you meant first semester of the
4 yr course, as in intro classes. But you probably meant first semester of
this year. Either way, it is good to experiment. Thank you for doing that.

------
snalty
I'm building an embedded project that currently runs a python script for
automatic brightness. It takes a brightness value from a sensor over I2C,
applies a function to get an appropriate LCD brightness value and then sends
that to the display driver over a serial port. Would this be an appropriate
project to write in Rust to learn the basics of this language?

~~~
pas
Yes. We used Rust to drive a few things via GPIO and USB-RS232 a few years ago
on a Raspberry Pi, it was a pretty pleasant experience.

Maybe take a look at this I2C lib: [https://github.com/rust-embedded/rust-i2cdev](https://github.com/rust-embedded/rust-i2cdev)

------
cuddlybacon
As someone who hasn't used Rust, I am curious about why Rust has macros.

I use C++ at work, which admittedly isn't the language I use most, and macros
are used quite a bit in the code base. I find they just make the code harder
to read, reason about, debug, and sometimes even write. I don't see them
really living up to their claimed value.

Is there something different about Rust's macros that make them better?

~~~
JeromeLon
There is almost no intersection between the kind of things that can be done
with the C++ macro system and the kind of things that can be done with the
Rust macro system. They are not related. You can see them as another feature
that is not available from C++.
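
To make the contrast concrete, here's a minimal declarative macro (a sketch; `max_of!` is just an illustrative name). Unlike C/C++ preprocessor macros, which do textual substitution, it operates on parsed syntax fragments (`expr`), is hygienic, and can recurse:

```rust
// A declarative macro that expands to the maximum of any number of expressions.
macro_rules! max_of {
    ($x:expr) => { $x };
    ($x:expr, $($rest:expr),+) => {{
        let a = $x;
        let b = max_of!($($rest),+); // recurse over the remaining expressions
        if a > b { a } else { b }
    }};
}

fn main() {
    assert_eq!(max_of!(3), 3);
    assert_eq!(max_of!(1, 7, 4), 7);
    println!("macro expanded correctly");
}
```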

~~~
zozbot234
You can get quite close to the use case for Rust macros by considering C++
template-based metaprogramming. Of course the biggest difference is that Rust
macros have been designed from first principles, not as a clunky afterthought.

------
adamnemecek
If you have been on the fence about learning Rust, I encourage you to dive in.
It is very productive.

~~~
rgrs
How is the build system? Is it stable and easy?

~~~
computerphage
In my experience, Cargo is much easier and more reliable than the systems I've
used across Java/C++/Scala/Python/Go/JavaScript. It is such a pleasure to use.

------
rockmeamedee
The post says

    
    
      The new API to cast in an unsafe manner is:
    
      let x: f32 = 1.0;
      let y: u8 = unsafe { x.to_int_unchecked() };
    
      But as always, you should only use this method as a last resort. Just like with array access, the compiler can often optimize the checks away, making the safe and unsafe versions equivalent when the compiler can prove it.
    

I believe for array access you can elide the bounds checking with an assert
like

    
    
      assert!(arr.len() <= 255);
      let mut sum = 0;
      for i in 0..255 {
          sum += arr[i]; // this access doesn't emit bounds checks in the compiled code
      }
    

I'm guessing it would work like this with casts?

    
    
      assert!(x <= 255.0 && x >= 0.0);
      let y: u8 = x as u8; // no check

~~~
laszlokorte
It should be `assert!(arr.len() >= 255)` (greater instead of less than), right?

~~~
wtetzner
If you want to omit a bounds check, the compiler needs to know that the length
of the array covers the upper bound of the loop, right?

~~~
nybble41
If the array length is known to be _strictly_ less than 255 then there is
definitely an out-of-bounds access inside the loop, but since this is a panic
rather than undefined behavior it could matter how many loop iterations are
executed before the out of bounds access occurs, so the check can't be
omitted.

If the array size is definitely greater than _or equal to_ 255 then all the
array accesses in the loop will be in bounds and no further bounds check is
required.

~~~
wtetzner
Oh, right.

------
pavehawk2007
Rust has come a LOOOOOONG way. I'm really impressed with what they've
accomplished in just a short time.

------
devit
Can we please deprecate the "as" operator?

Something so lossy and ill-conceived should not be a two-letter operator.

~~~
95th
Then what do you propose for replacement? C++ style casts `(int) x` ?

~~~
devit
into()/try_into() and methods designed for each of the other cases (e.g.
truncate(), saturating_to_int(), approx_to_float(), etc.)

------
person_of_color
Any algo-trading backtest frameworks in Rust?

~~~
mas3god
All you need for algotrading is to query an api. Rust would be a poor choice
for that anyways, like using a semi truck to carry your bike around.

~~~
lucasmullens
Performance is super critical in high frequency trading, so Rust sounds like
reasonable choice. Having your code run a millisecond faster means beating out
a competitor with the same algorithm as you, getting you a better price.

~~~
estebank
Be aware that Rust gives you the tools to be fast, it is not necessarily fast
by default, although a lot of constructs it guides you towards usually help
with that. You still need to profile your code to see what you need to
optimize, whereas other languages with fewer knobs _will_ perform
optimizations that you otherwise need to manually annotate in your code in
Rust. I prefer this approach, but it _can_ be surprising to people used to the
alternative.

~~~
dilap
Do you have examples of this? I'd be curious to know if so. (I've played w/
Rust a little bit -- I implemented a Boggle board scorer + high-scoring board
generator; Rust outperformed my C++ code! I was impressed.)

~~~
estebank
One example of the _choice_ you have is how you can deal with generic data
types:

    
    
      fn foo<T: Trait>(_: T){}
      fn foo(_: impl Trait) {}
      fn foo(_: &Trait) {}
    

These three different fn definitions have two different behaviors between
them, and they affect both the speed of the code and the speed of compilation,
depending on how they are called.

The first one is what the language calls generics: they are always
monomorphized, which means that if you have three calls to `foo` with
different types (that implement Trait) the compiler will expand three
different functions with different types (code expansion).

The second one is a separate syntax-level feature (impl Trait) which was
mainly added to introduce a _new_ feature, static opaque types, where the
_function_ determines what the underlying return type will be, but the
caller can only interact with it using the trait's API.

[Aside] This is useful for cases like the following:

    
    
      fn it() -> impl Iterator<Item = i32> {
          vec![1, 2, 3].into_iter()
      }
    

where you would otherwise have to specify the specific type:

    
    
      fn it() -> std::vec::IntoIter<i32> {
          vec![1, 2, 3].into_iter()
      }
    

This example doesn't seem like much, but if you want to add a `map()` call to
this you start to see the benefit:

    
    
      fn it() -> impl Iterator<Item = i32> {
          vec![1, 2, 3].into_iter().map(|x| x * x)
      }
    
      fn it() -> std::iter::Map<std::vec::IntoIter<i32>, fn(i32) -> i32> {
          vec![1, 2, 3].into_iter().map(|x| x * x)
      }
    

The more types you nest the more the benefits come into play. [end of aside]

Now, with that out of the way: the type of an impl Trait in argument position is
decided by the caller (not the function), so they are implemented internally
exactly the same as type generics. The only difference is arguably nicer
syntax in the definition and not being able to specify a type using the
turbofish. For all intents and purposes, those two are the same feature.

The third function is different, it uses a virtual table, with everything that
implies: there's type erasure, there's only a single function in the expanded
code (which makes compilation faster because the compiler doesn't need to do
work), calling this function _can_ be slower because the final executable has
to perform some pointer chasing to call methods, instead of directly knowing
where to call them.

All of this to say: whether you use `fn foo<T: Trait>(_: T)` or `fn foo(_: &Trait)`
affects both compile time and execution time, so you have to be aware of the
distinction. This means that if you're _not_ aware, you might end up with slower
code than you would with a compiler (like Swift, for example) that relies on
heuristics to decide between static and dynamic dispatch, but it also means that
your code's performance characteristics won't change all of a sudden because
you modified a tangentially related part of the code and suddenly crossed some
threshold.
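
A minimal sketch of that monomorphization-vs-vtable distinction (the trait and types here are made up for illustration):

```rust
trait Speak {
    fn speak(&self) -> &'static str;
}

struct Dog;
impl Speak for Dog {
    fn speak(&self) -> &'static str { "woof" }
}

// Monomorphized: the compiler emits a separate copy per concrete type.
fn greet_static<T: Speak>(s: &T) -> &'static str { s.speak() }

// Dynamic dispatch: one copy; calls go through a vtable at runtime.
fn greet_dyn(s: &dyn Speak) -> &'static str { s.speak() }

fn main() {
    let d = Dog;
    assert_eq!(greet_static(&d), "woof");
    assert_eq!(greet_dyn(&d), "woof");
}
```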

Another example can be `.clone()`: is it slow? The answer is always "it
depends". You might be cloning an `Arc`, which is cheap, you could be cloning
a 10MB string, which is slow. But because we train ourselves to see clone as
slow, we might be worried or annoyed by `Arc`. We could make it `Copy`, but if
we did _that_ then you'd have _less_ control over where the `Arc` gets copied,
which would make it harder to keep track of where the refcount gets incremented. The
language also doesn't automatically implement `Copy` for small structs, even
though it could, which would make it easier to learn that part of the language
(you don't learn to add derives early on), at the cost of baffling behavior
(you might add a field and suddenly your struct isn't considered "small"
anymore).

Yet another example, you also have access to `Cow<'_, str>`, which lets you
deal with both static and heap allocated strings in the same way in your code,
but it pollutes your code, where the naïve thing to do would be to use
`String` everywhere.
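
A small sketch of that `Cow` pattern (the `sanitize` helper is hypothetical): borrow when no change is needed, allocate only when it is:

```rust
use std::borrow::Cow;

// Hypothetical helper: returns a borrow when the input needs no change,
// and allocates a new String only when it does.
fn sanitize(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_"))
    } else {
        Cow::Borrowed(input) // no allocation in the common case
    }
}

fn main() {
    assert!(matches!(sanitize("plain"), Cow::Borrowed(_)));
    assert_eq!(sanitize("a b"), "a_b");
}
```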

My personal wish is for Rust to remain explicit as much as possible, but use
lints to emit suggestions for the cases where a more "magic" language would
change the emitted code. That way the code documents its behavior with fewer
surprises.

~~~
dilap
Thanks!

The monomorphization-vs-dynamic dispatch thing feels natural from a C++
perspective, as it completely mirrors the choice of achieving 'polymorphism'
via templates or virtual methods (though of course the Rust syntax is way
nicer!, using traits for both, whereas in C++ you have either a class
definition or...nothing, just ungodly compile errors ("compile-time dynamic
typing")).

That's interesting re Swift. It seems similar in a way to using heuristics to
decide whether to inline a function or not.

I _think_ C# does monomorphization for value types ("struct") and vtables for
reference types ("class"), though I wouldn't bet on it...

> fn it() -> impl Iterator<Item = i32> {

One of the things that impressed me w/ rust was being able to write really
concise code using ".map()" and friends and finding that it all ended up
running just as fast as raw loops.

(The thing that has _most_ impressed me about rust was the crossbeam crate +
type system + derive stuff, which let me parallelize board search in an
incredibly easy fashion. I found it much nicer to work w/ than Go channels,
which is supposedly one of Go's big tricks!)

------
cjhanks
Doesn't this mean that a conditional branch is added to all existing code
which performs casting?

~~~
oconnor663
I think in the specific case of casting a float to an int, more instructions
will be added, but it doesn't have to be a branch. Here it looks like rustc
emits a conditional move:
[https://godbolt.org/z/1cfqof](https://godbolt.org/z/1cfqof)

------
kgraves
this is extremely exciting, a truly wonderful release.

well done rust team

------
FartyMcFarter
> Just like with array access, the compiler can often optimize the checks
> away, making the safe and unsafe versions equivalent when the compiler can
> prove it.

Can it "often" solve the halting problem as well?

The hope that this kind of optimization will happen sounds a bit fanciful for
any non-trivial part of a program.

~~~
steveklabnik
You would be surprised, at least with array access stuff. And, if it doesn't,
you can often help it understand with a bit of work. Sometimes an assert
before a loop or re-slicing something can take a check in the body of a loop
and move it out to a single one.

I ported a small C function to Rust recently that involved some looping, and
all of the bounds checking was completely eliminated, even once I took the
line-by-line port and turned it into a slightly higher level one with slices
and iterators instead of pointer + length.
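
The re-slicing trick mentioned above can be sketched like this (the function and its bound of 8 are illustrative):

```rust
// Re-slicing up front gives the optimizer one bounds check to reason about,
// so the per-iteration checks inside the loop can be elided.
fn sum_first_8(arr: &[u32]) -> u32 {
    let head = &arr[..8]; // single bounds check (panics if arr.len() < 8)
    let mut sum = 0;
    for i in 0..8 {
        sum += head[i]; // provably in bounds: no check in optimized code
    }
    sum
}

fn main() {
    let data = [1u32; 10];
    assert_eq!(sum_first_8(&data), 8);
}
```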

