Announcing Rust 1.28 (rust-lang.org)
319 points by steveklabnik 9 months ago | 88 comments



I've been studying Rust on and off for the past 3 months. There are a few things I find annoying; they are discussed here and there in the issue tracker, but it's not clear to me what the priorities are for landing them:

* Partial borrow. I can't borrow a field of a structure and then reference another field. I can't count how many times in a few weeks I've been forced to rethink and split my structures just to work around this, making my code harder to read and write for no reason other than this limit of the borrow checker.

* The Rc/RefCell/Box dance is very verbose when you just want to share a reference to some long-lived object from other long-lived objects. Worse than verbose, it also incurs runtime overhead because of the RefCell.

* No simple way to declare non-const global objects, like something as easy as a global HashMap of constant strings to constant integers. I'm a little spoiled by Go's excellent support for init functions (both compiler-generated and manually written), but this is something I really miss in Rust. lazy_static! is a workaround which is harder to use and forces the code to be thread-safe even if no threading support is necessary (that is, I can't use my non-thread-safe classes in a lazy_static).

* In generic code, there is no simple way to represent integer types. You need to use the "num" trait, which breaks the "as" operator, causing big pain and less readable code (as it's normal to convert between integer types).

* No integer generic parameters. We have to use the "typenum" crate which ends up causing less readable code and generates C++-like error messages.

* PhantomData is awkward, it absolutely sounds like something the compiler should handle, not the programmer.

* "Wrapping<u32> + 1" doesn't compile, but is there a good reason why it does not? I really wish Rust added wuNN/wiNN native wrapping types, or evolved in a way that lets me write the same code with "u32" and "Wrapping<u32>" (another example: "as" doesn't work). Right now, the solution seems half-baked: it uses generics to avoid adding a native type, but the code that users must write is more bloated compared to native types.


As someone who absolutely loves Rust, I think this is a really excellent critique of the language for a very specific reason: many of the issues listed here are the result of intentional tradeoffs or hard problems that are a consequence of the language's design. I personally think Rust makes the right call on many of these decisions, but it's very worthwhile to have a sober look at the cost.

Rc/RefCell/Box are verbose because they deal with shared ownership, and Rust forces you to solve problems around shared ownership that other languages don't. This is usually a good thing, but in times where you do require this type of architecture, there's a verbosity cost. Arguably the real tradeoff here is a loss of brevity for a gain in correctness.

Non-const global objects have definitely been a pain point for me. `lazy_static!` has always felt like a hack (although a community member released a possibly-better alternative just today[1]). I personally think HashMap literals might make much of the pain here go away.

`Wrapping<u32> + u32` is a result of Rust avoiding doing implicit coercion, which has advantages but also comes at the cost you've enumerated here.

[1]: https://www.reddit.com/r/rust/comments/9406rl/once_cell_a_la...


> Rc/RefCell/Box are verbose because they deal with shared ownership, and Rust forces you to solve problems around shared ownership that other languages don't.

Would there be space for a mode inside Rust, call it a DSL if you will, where Rust behaves like those other languages? I mean that in this mode it would use the heap and copying by default, so as not to alert the borrow checker?


It could work, but there hasn't been a ton of demand for it. Yes, it might help at the very beginning, but people do learn and become productive without it.

It'd also be a worry if libraries started doing stuff like this; it'd be harder to use from "normal" Rust.


It can be hard to get a hold on so many features, I hear you. Some of this is Rust, some of this is your perspective vs. Rust's perspective. Here's what I have for you:

* Partial borrow.

The status of this is unclear. It does work in many cases. It's complicated. I would push back a little on the "no good reason" though, but this is gonna be long so I'll leave it at that.

* Rc/RefCell/Box

Yes, this is painful, but that's because shared ownership is painful. Rust is pushing you away from a certain kind of architecture. Many people think this is a good thing...

* No simple way to declare non-const global objects

That is lazy_static. But

> forces the code to be thread-safe even if no threading support is necessary (that is, I can't use my non-thread-safe classes in a lazy_static).

This is perceived as a good thing; if your code changes later to use threads, you're not hosed.

If you want something that's not for multi-threading, then don't use something that can be used from multiple threads! TLS is probably what you actually wanted here.

* In generic code, there is no simple way to represent integer types.

We went through three versions of these traits, and were not happy with any of them. It's a tough problem.

* No integer generic parameters

This is coming; the design work has been accepted, and now it's on to implementation. It's expected in nightly by the end of the year and probably stable early next year.

* PhantomData is awkward, it absolutely sounds like something the compiler should handle, not the programmer.

The compiler could try to infer variance, but that means that if your code changed, you'd have a breaking change. In general, around interface boundaries, Rust requires you to state what you want.

It's even less awkward than what came before it, for what it's worth.

* "Wrapping<u32> + 1" doesn't compile

Rust is pickier about numbers than many languages; 1u32 + 1u8 doesn't compile either. Many people find this valuable; you don't have to memorize complex promotion rules, and you always get the semantics you asked for.


>> "Wrapping<u32> + 1" doesn't compile

> Rust is pickier about numbers than many languages; 1u32 + 1u8 doesn't compile either. Many people find this valuable; you don't have to memorize complex promotion rules, and you always get the semantics you asked for.

Although I'm happy with Rust avoiding automatic promotion across the board, `Wrapping<u32> + {integer}` could be handled more gracefully, in the same way that `u32 + {integer}` is handled.


Yes; to be clear, what I'm saying here is "we tend to be conservative with these kinds of things at first". We could and may change this in the future, but once you do it, you can't go back. It's better to be conservative, feel where the pain leads you, and improve, than to speculate on things and end up adding bad stuff.


Re: non-thread-safe items in lazy_static, you can also wrap them in a mutex within just the lazy_static so the struct only uses synchronization when it actually needs to be thread-safe.


That removes the Sync requirement, but still requires Send, so doesn't completely resolve the requirement for thread-safety (e.g. Rc can't be stored).


> * Partial borrow.

> The status of this is unclear. It does work in many cases. It's complicated. I would push back a little on the "no good reason" though, but this is gonna be long so I'll leave it at that.

Not all partial borrows are safe, but a lot of them are. Most programmers coming from languages without a borrow checker know how to write safe code; they do introduce bugs sometimes -- that's true -- but most of the time the code I write is safe. It's just that there's no way to tell Rust that it is, and the workaround has a global impact on the affected code (it forces splitting structures, which greatly increases the cognitive load of reading and writing the code).

> * Rc/RefCell/Box

> Yes, this is painful, but that's because shared ownership is painful.

But the point is that I do NOT want shared ownership. What I want is a single owner of a long-lived object, and several long-lived references to it in other objects. When I say "long-lived", I mean that the liveness of those objects is not visible through the call stack, so I can't use borrowed references. The only way to model this today is with shared ownership, which carries more machinery than I actually need.

The same problem is apparent when trying to implement the callback pattern (which I forgot to list in my OP). Callbacks are basically impossible to do in Rust, and I realize I have used callbacks everywhere in my many years of programming in different languages, because I feel helpless in Rust when structuring code without being able to register and use callbacks.

> This is perceived as a good thing; if you code changes later to use threads, you're not hosed.

This does not sound very Rustacean :) If Rust is not opinionated and gives me full power with zero-cost overhead, why does it force me to pay for thread-safety (overhead) so that I can use threads later (opinion)?

Notice also that code which is not thread-safe doesn't prevent me from using parallelism. It just prevents me from moving objects across threads. But I can surely leverage the power of multi-core with threads, as long as the thread-unsafe objects are created and used within a single thread. So I'm not sure why I should make them thread-safe... just because I want to use them in a non-const global.

> If you want something that's not for multi-threading, then don't use something that can be used with multiple threading! TLS is probably what you actually wanted here.

What I want is actually much simpler: let me initialize globals with runtime code that the compiler schedules to run before main. C++ does that very well; Go does that optimally. Either approach is good enough.

PS: TLS is potentially overhead again: indirect reference on non-PIE binaries.

> Rust is pickier about numbers than many languages; 1u32 + 1u8 doesn't compile either.

I love that "1u32 + 1u8" doesn't compile; I hate integer promotion. What I'm saying is that "1u32 + {integer}" compiles, while "Wrapping<u32> + {integer}" doesn't. I think it's been a bad tradeoff to force people to use generics for wrapping integers when generics are still not good enough to implement a type that behaves like a native integer in common operations like "+ {integer}" or "as <type>".


> But the point is that I do NOT want shared ownership.

Yeah, to me it sounds like you exactly want shared ownership. What you're describing is what shared ownership is.

> Callbacks are basically impossible to do in Rust,

I wouldn't go that far; yes, sometimes you need to do that dance for them, but not super often. It feels like maybe we've had wildly divergent experiences here :)

> why does it force me to pay for thread-safeness (overhead) so that I can use threads later (opinion)?

Because you're using a feature that can be accessed through multiple threads, even if you're not doing it now. If you use the version that can't, then Rust doesn't force you into it.

> let me initial globals with runtime code

This, in many cases, will be solved with `const fn`. But that runs at compile time, not before main. Life-before-main has a ton of issues and is a huge pain.


Since nobody mentioned it by name, the TLS replacement they're suggesting for lazy_static is `thread_local!` and it's part of libstd.


> Partial borrow

Yeah, that's the annoying part of Rust :(

> I can't use my non-thread-safe classes in a lazy_static

Well, you can, if you wrap them in Fragile/Sticky from https://github.com/mitsuhiko/rust-fragile :)

> PhantomData is awkward, it absolutely sounds like something the compiler should handle, not the programmer

This is quite common in Haskell (called Proxy there), for different reasons (not for lifetimes since there are none — but for passing type-level stuff around calls), but because of that, it feels completely natural to me.


Partial borrow works for me. Yes, if it didn't work it would be very annoying.


It works when you access fields directly. Where I've seen people get frustrated is that calling a method on `&self` or `&mut self` is always a full borrow, even if it only really accesses and returns something partial. The borrow checker doesn't look to see what that method really uses in its implementation, just that the signature would allow full access.


That's... understandable, perhaps.

It would be difficult to implement something better, yes, but I'm also not sure it'd be better. In OO thinking, the question of which fields a method accesses should be an implementation detail, which shouldn't affect how the code calling it can look.

If the borrow checker were smarter, then a change to a class's implementation could break the code using it.


Yeah, Rust usually eschews the sort of non-local reasoning that would require. It would be harder to implement, and harder for the programmer to think about too, especially with changes from afar as you say.

Also, auto-`Deref` can make this even more surprising! If you have some `foo: Rc<Foo>`, then `&foo.bar` borrows the whole struct via `Rc::deref(&self)`. You can sometimes get around this by taking a full dereferenced borrow yourself -- `let foo: &Foo = &*foo;` -- and then work with partial borrows from that. Same goes for `DerefMut`, or really more so since `&mut` is exclusive.


Did delegation ever make it into Rust? That would let you do a more comprehensible kind of partial borrowing: if you have

    impl TR for S {
        delegate * to self.f;
    }
then calling TR methods should only require borrowing/owning f rather than the whole of self.


There aren't any new delegation features since 1.0 unfortunately. That said, if it worked this way, delegation would become part of a type's public API, such that changing the delegation details internally could break callers.


I have to say, the Rust compiler's commitment to nice error messages makes it a pleasure to work with. It is also cool to hear about the GlobalAllocator trait stabilization. I'll have to check out what the embedded-Rust space has been doing in this area!


Yes, it's nice to see improved error messages. I've read that many people loathed what the borrow checker threw at them.


I'd like to mention that along those lines specifically, a lot of people have talked about NLL in terms of "the borrow checker accepts more of my code", but it also produces way better errors. See the "Diagnostics" section of http://smallcultfollowing.com/babysteps/blog/2018/06/15/mir-... for an example that I ran into in real code. (It's niko's blog but I supplied him with this example)


I think pouring more time and money into the error messages will pay out in the long run.

Messages like that make coding low-level stuff fun again.


A summary of what is new:

- Global allocators

- Improved error message for formatting: provides a specific reason why it is invalid

- Library stabilizations: NonZero number types

- Changes to Cargo: the src directory cannot be modified (by build scripts of published crates)


Global allocators are nice, but what would be really helpful in some performance-intensive applications is being able to specify an allocator per collection.


That work relies on this work; that's coming!


Global allocators are the first step toward per-instance allocator selection.


After two versions where performance has been unchanged I once again see a little bit of performance improvement with this version in my pet photo camera raw decoding benchmark. 1.28 is ~1% faster than 1.27 and nightly seems to be ~1% faster than 1.28 again. Don't know if there have been LLVM updates or if work on the rust compiler itself has helped. In total, since version 1.17 I'm measuring almost a 25% improvement in CPU intensive code. I need to make a more comprehensive benchmark easy to run by everyone (many different kinds of files, fully automatic running, etc).


Due to an upstream LLVM bug, we had to remove some information about aliasing; that bug was fixed and this was re-enabled in this release. That may be what you’re seeing.


The NonZero type reminds me of something else I'd like to see in Rust: ranged integers. By which I mean integers that can only represent a specified range of numbers.

This is not so much for optimization but for type safety (i.e. avoiding bugs).


We'd like that too; it's likely to fall out of const generics, which are looking likely to hit stable sometime early next year.


With const generics, ranged integers would be implemented with user-specified checks.

How viable would it be to implement an optimization step which seeks to minimize the number of these checks?


I'm not entirely sure what you mean, exactly. As far as I know, all of the usual optimization passes would already be working to minimize them.


Rust could emit LLVM range metadata for ranged integers. IIRC, it already does so for enum discriminants.


That seems quite good!


That's probably down the line from const generics, as it has the additional constraint of defining the workings of those (API, over/underflow behaviour), how they'd interact with regular integral types (in terms of conversions back and forth), literals, …


That would be awesome, especially if it works similarly to Ada.


Good times! And to those that like me enjoy a good dead trees-based book to learn a language, the Rust Book (from NoStarch publishing) arrived yesterday at my door :)

https://nostarch.com/Rust


I've been reading the O'Reilly book but I'll look at TRPL next. BTW, I'm reading this on Safari: Steve, do you know how publishers/authors get compensated for Safari books? Are there just normal royalties or is it somehow negotiated at a bulk rate?

EDIT: Steve answers a semi-related question in [1]

[1] https://www.reddit.com/r/rust/comments/8u0ggw/trpl_is_releas...


I don't know how far in you are, but I would suggest reading TRPL first. I read the Programming Rust book first, but I found TRPL a much better introduction. I wish I had done them in the other order. Maybe read TRPL first, and then go back to PR.


I completely agree, and I did the same thing. I own dead-tree variants of both, and TRPL's approach immensely helped with grokking the language, whereas the Programming Rust book feels more like a reference book for deep dives into concepts.


Yeah, I don't happen to know, as I'm not being paid for this book :) None of the other books I've worked on have been on Safari, or at least, if they are, I've never seen a breakdown of the (not large, mind you) payments that described that.


Thanks for not using Safari — I cannot afford it ;)


I wonder, why is it NonZeroU8, NonZeroU16, etc instead of something like NonZero<u8>, NonZero<u16>, etc?

In theory, in the future, when Rust supports constants as template arguments, one might even add a const argument, the value that should be used as "null". E.g. if my u8 never reaches 255, I could use something like Never<u8, 255>.


We used to have NonZero<T>, but changed it https://github.com/rust-lang/rfcs/blob/b728bf68703f681ef0e42...

The real motivation is

> With NonNull covering pointers, the remaining use cases for NonZero are integers.

It's unclear how extensible we truly want this to be, and so we decided to go with the concrete types. In the worst case, we'd eventually re-introduce some sort of NonZero<T>, and deprecate these types. This was deemed worth it to let people use this important feature on stable, rather than waiting and seeing if we ever decide to do otherwise.


> I wonder, why is it NonZeroU8, NonZeroU16, etc instead of something like NonZero<u8>, NonZero<u16>, etc?

The RFC covers this. https://github.com/rust-lang/rfcs/blob/master/text/2307-conc... Also see the RFC discussion at https://github.com/rust-lang/rfcs/pull/2307

Basically the generic NonZero already existed, but it's hard to express safely when used with arbitrary user types.

> In theory, in the future, when Rust supports constants as template arguments, one might even add a const argument, the value that should be used as "null". E.g. if my u8 never reaches 255, I could use something like Never<u8, 255>.

This requires const generics, so it could happen when they're implemented. (This is also mentioned in the RFC discussion.)


Does "const generics" essentially mean singleton types?


There are so many "slightly different but mostly the same" things in this space. In Rust, this is const generics: https://github.com/rust-lang/rfcs/blob/master/text/2000-cons...

  struct RectangularArray<T, const WIDTH: usize, const HEIGHT: usize> {
      array: [[T; WIDTH]; HEIGHT],
  }

  const X: usize = 7;

  fn main() {
      let x: RectangularArray<i32, 2, 4>;
      let y: RectangularArray<i32, X, {2 * 2}>;
  }
I haven't read enough about singleton types to know the exact details about their similarities, so I'm not sure.


Is this what is known as pi or dependent types?


Yes, but limited to being applied at compile-time, with constant arguments.

Fully general dependent types would allow passing in runtime values, which is significantly harder to type-check and execute.


I believe that this initial version is not quite fully pi types; we did have an RFC for those but it was deemed too large to start off with.


Can anyone briefly explain what NonZero even is... or rather, its application? From the sounds of it, it's literally just a u8 (or whatever) that can't be zero... but what use case does that have?


Let's take an enum, like Option:

    enum Option<T> {
        Some(T),
        None,
    }
Under the hood, this is a "tagged union"; Rust has to have enough space for the T, but also for a tag, to say which variant is the current one.

If T is a NonZero type, then we know that zero is never a valid value. That means that instead, Rust can use the zero value to mean the None case, and any other value to be the Some case, completely eliminating the extra tag, and reducing the size of the Option.

This is a good example of a "zero-cost abstraction"; an Option in this case has absolutely no extra associated overhead at runtime.

Note that these optimizations aren't specialized for Option; they apply to anything that has this kind of shape.


I've occasionally thought that if I were to design my own language, it would be nice to try using INT_MIN as the invalid value that marks None. I've never actually used the extra negative value you get from two's complement, so giving it up in exchange for a smaller Option<T> seems like an excellent trade. Though, I wonder if that might have some complications for stuff like overflow checking.

I suppose you could always wrap NonZero with an accessor that adds 1 if negative to achieve the same result, though that might be too much overhead to be worth it.


Is NonZero* special cased by the language? If for instance, I had a preexisting binary interface where 0 through (2^n - 2) is valid, but all bits set is the invalid case, can I explain that to the compiler as well?


Yes, it is. The case you're talking about is pertinent to the above discussion on why it's NonZero* vs NonZero<T>; you cannot currently inform the compiler in your case.


Excellent explanation. Have you considered writing some sort of book? :P


Heh, thank you. There's too many books that I want to write...


That's actually really cool! Thanks for the explanation


No problem!


Whilst a great response, I think the practical implication of being able to ensure you never divide by zero did not shine through :)


It's also useful for divisors.


Now that users can switch out the global allocator, the next step may be to change the default allocator for Rust programs from jemalloc (which, note, is only the default on some platforms) to the system allocator. This has been blocked on stable global allocator support in order to make sure that users who find that jemalloc is superior for their program have a recourse to avoid performance regressions; ideally switching to jemalloc (or tcmalloc, etc.) should be well-facilitated by Crates.io.

As for the rationale behind moving away from jemalloc as a default, the tracking issue is here: https://github.com/rust-lang/rust/issues/36963 (TL;DR: lower maintenance burden, easier cross-compilation, smaller binaries, better system integration (including packaging), and the ability to use Valgrind).


In case of FFI, if C & Rust are using the same allocator (say the system allocator), does it mean that memory allocated by C can be freed by Rust and vice-versa?


This is not safe in general, even within a single language. For instance, on Windows, it is essentially required that a pointer is freed by code from the dynamic library (even if they're both written in, say, C) that allocated it, because (IIRC) they could be using different versions of the platform allocator.


Yes.


Many types in the standard library do not consider it sound behavior to have their allocations freed by functions other than their own destructors:

https://doc.rust-lang.org/std/boxed/struct.Box.html#method.i...

    After calling this function, the caller is responsible for the memory 
    previously managed by the Box. In particular, the caller should properly 
    destroy T and release the memory. The proper way to do so is to convert 
    the NonNull<T> pointer into a raw pointer and back into a Box with the 
    Box::from_raw function.
I would guess that this might become a fun latent footgun in crates.io code -- everyone writes their unsafe code in a way that Just Works under the default allocator, but then things break down in confusing ways when enabling alternative allocators, which should be safe to do even when using external crates.


That's already true in some cases; for example, I've seen FFI code with a use-after-free bug that, because of how the allocator works differently across platforms, "worked" on OS X but segfaulted on Linux.


Will it also shave off fifty-odd KB from executable size?


It depends; for example, on Windows, we already do not use jemalloc, so you won't see any change there. I don't have a Linux box handy to find the exact size of the binary there, but it will be smaller, yes.


I'm pretty happy about the GlobalAllocator stabilization. Now you can run Massif and Heaptrack on rust code without switching to unstable!


I want to expand my knowledge but don't know whether to invest in learning C++ or Rust. Both look like they have interesting concepts, but Rust is new and not yet widely used, and I hear C++ is easy to break in confusing ways. (I have cut Go out of the picture because copying and pasting code is the first thing you learn not to do in school, but all the Go leaders say you have to do this since they don't want to implement generics, so I don't trust their judgment on language design.)


I'm not an expert in either language, so take this with a grain of salt, but I've been using Rust pretty regularly for a range of hobby projects since just before Rust 1.0 released, and recently spent the last few months messing around with C++, trying GUI development with Qt. I found Rust much easier to get a grasp of than C++, the main differences being documentation and that Rust felt like it had a more consistent design. By consistent design, I mean that C++ constantly felt like a mix of C with newer features tacked on, but Rust just felt like Rust, and there wasn't any syntax that felt out of place.

I think the documentation issues I had are mainly because of the age of C++; it kind of reminded me of JavaScript, where you can ask the same question and get 5 different, but valid, answers. Overall, Rust required a lot of learning up front, but the official documentation is excellent. C++ felt like following a trail of breadcrumbs, where each resource gave me a piece of the answer I was looking for, increasing the amount of time it took to figure anything out.


You might need to re-think Go. The people who designed it know far more about language design than the people who taught you in school. And they still made choices that the people teaching you in school think are wrong. Instead of thinking "How could they be so stupid?", you should think "Hmm... I wonder why? What do the Go designers see that those other people don't?"

The lack of generics in Go is a sore spot. It's quite possible to solve. The Go designers are not solving it, because they want other things more than they want generics, and they aren't willing to give up those other things in order to get generics.

For any given project, that could still be the wrong choice. For that project, then, reject Go. But don't totally reject the language because you don't trust the Go leaders' judgment on language design. It is more reasonable to mistrust your own.


If you're trying to expand your knowledge, then waffling over which thing to learn is just procrastination.


No. By that logic, you could as well open a random page on Wikipedia and start learning new stuff. Choosing to study a certain technology in depth carries with it a high opportunity cost in the form of your time.


They are almost equivalent choices in terms of return on investment. C++ is used almost everywhere. Rust will allow you to write programs with fewer bugs and similar performance to C++.

The opportunity cost of figuring out which one is better can often be higher than just trying out both choices. [1]

[1] https://xkcd.com/1445/


> They are almost equivalent choices in terms of return of investment.

Someone who asks whether they should learn C++ or Rust probably doesn't know this – otherwise, they wouldn't ask.


Well... to learn a language moderately well takes weeks. To learn it really well takes months. To learn it at an expert level takes years. It might be worth a day or two of thought before that kind of investment. Eventually, sure, it becomes procrastination. But just randomly choosing isn't really the optimal approach either.


I will tell you what works for me. Don't try to make one language fit all cases; try to use the best tech for each scenario.

In the past you had to learn a lot of languages and platforms, and you still need to in some cases. But for my particular needs, I've managed to narrow it down to mostly 2 languages: C++ and Swift. But that's for me; for others it will be different things.

But don't try to find the perfect language... don't be a hostage to any tech, suffering from Stockholm syndrome with a given piece of tech.

This weird trend of one-tech-fits-all started back in the nineties with Java, with things coming straight out of religion, like 'evangelism', and calling people infidels for trying different things.

Maybe what works for me will work for you too. I bet you can narrow down to 2 languages to do almost anything you want.

About C++ vs. Rust: fear not. Modern C++ can basically 'annotate lifetimes' in method contracts by using smart pointers and move when you need to. I prefer the C++ approach, have no problems with leaking, lifetimes or whatsoever, and deal with very large codebases programmed by large teams.

But of course, if I were to create a program to control a nuclear reactor or a submarine, I would consider using Rust or Ada. But personally I find Rust programming more stressful than C++, with (almost) nothing to gain. And in the language-design aspects where Rust is superior to C++, I prefer to use Swift if I can, which I consider 'a better Rust'.

By the way, in a lot of cases where you might consider using Rust or C++, you can use the joyful Swift: a powerful language with less of a burden.

TL;DR: where some people will try to use Rust for everything, I prefer to use a Swift/C++ combo, and given that you can mix both without much problem in a single program, I hardly need or miss anything else.


> About C++ vs. Rust: fear not. Modern C++ can basically 'annotate lifetimes' in method contracts by using smart pointers and move when you need to. I prefer the C++ approach, have no problems with leaking, lifetimes or whatsoever, and deal with very large codebases programmed by large teams.

Is there a book (or other type of resource) that approaches C++ from the "modern" direction, i.e. with a focus on correctness?


I don't think any embedded systems engineers are programming in Rust for nuclear reactors or subs, and I don't think they are considering it either. No amount of safety constraints in Rust makes up for the fact that the Rust team is sometimes willing to forgo backwards compatibility to keep the language clean. Not to mention those are usually legacy codebases from before Rust even existed, and no one is convincing the DoD or the DoE to "rewrite in Rust".


> the fact that the Rust team is sometimes willing to forgo backwards compatibility to keep the language clean

What cases are you thinking of here?


Well I suppose backwards compat might have been the wrong phrase here. I'm thinking more that it's a language whose v1.0 release was only 3 years ago before which it was "difficult to maintain projects written in Rust, since the language and standard library were constantly changing" [0]. That's not to say that it isn't a stable language now (and I actually quite like it) but with only 3 years of stability to show, "nuclear reactors" and "submarines" are strange things to list as applications for the language.

[0] (https://blog.rust-lang.org/2015/05/15/Rust-1.0.html)


> but all the Go leaders say you have to do this since they don't want to implement generics

They never said that.


Why not both?

https://github.com/nrc/r4cppp

https://www.reddit.com/r/rust/comments/71w6ht/is_there_a_c_f...

C++ sucks big time when it comes to "management" - there's no official compiler, no official package manager, no package registry, no easy dependency handling. It's easy to start with sane C++, and then "oh, I need a library for X", and you've accidentally wandered into a forsaken circle of hell. [ https://i.imgur.com/a4CVG.jpg ]


A strategy that has worked well for me was to only use Qt and libraries based on Qt with C++. Qt is pretty comprehensive for a lot of usecases, and has a rich ecosystem of third-party libraries that follow the same conventions as the Qt APIs.


They actually have pretty solid reasons for delaying support for generics.

Check this out https://www.youtube.com/watch?v=sX8r6zATHGU



