Announcing Rust 1.36.0 (rust-lang.org)
353 points by mark-simulacrum 18 days ago | 180 comments



std::future is stable. Async/await in about 12 weeks. I've been writing loads of code trying out the new futures, and especially the ability to use &self in an async context is a huge benefit.

Beware though: you need an executor that can drive the new futures, and watch out for certain libraries using `tokio::spawn`, which will cause panics.

Some executors for the new futures:

https://docs.rs/futures-preview/0.3.0-alpha.17/futures/execu...

https://github.com/withoutboats/juliex

And a web server to try out async/await on nightly:

https://github.com/rustasync/tide

The compatibility layer from 0.1 to 0.3 and back is in futures-util-preview if compiled with the `compat` feature flag.

https://docs.rs/futures-util-preview/0.3.0-alpha.17/futures_...


For anyone wanting to understand how to implement a basic engine for std::future from scratch, I mashed some code until it worked: https://gist.github.com/jkarneges/cb1ee686ef97bb05ebe04b5fc6...

It's based mostly on this article, which predates std::future: https://www.viget.com/articles/understanding-futures-in-rust...


Thank you. We really need more examples... and hopefully a full guide to futures/async. As someone currently on the outskirts of the Rust community, it's really hard to 'peer inside' what's happening and how to use it in practical applications. It's a matter of time, but it's just so exciting xD


There is an “async book” in the works, by the working group. You’re 100% right that good docs will be important here!


Here's another example of how to mix futures 0.1 and async/await together.

https://github.com/pimeys/blocking_test/


> watch out certain libraries using `tokio::spawn` , which will cause panics

care to expand on that?


tokio::spawn needs to be run from the context of tokio's executor. If you use some other executor, such as juliex, it will panic.
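
A minimal sketch of both cases, assuming a current Tokio release with the full feature set rather than the 0.1 version discussed in this thread, so treat the details as an assumption:

    // Panics: tokio::spawn requires an active Tokio runtime on the current thread.
    fn spawn_without_runtime() {
        tokio::spawn(async {
            println!("never runs"); // we never get here; spawn panics first
        });
    }

    // Works: the same call made from inside a running Tokio runtime.
    fn spawn_with_runtime() {
        let rt = tokio::runtime::Runtime::new().unwrap();
        rt.block_on(async {
            tokio::spawn(async { println!("runs"); }).await.unwrap();
        });
    }

    fn main() {
        spawn_with_runtime();
        // spawn_without_runtime(); // uncomment to see the panic
    }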

This will hopefully be solved by the runtime crate, and the crates using it will use a more generic version.


Maybe I'm the only one, but I have a very hard time grasping all the functionality/concepts offered by Rust.

I really like the safety guarantees offered at compile time, and really do think that we should move away from C-like languages if we ever want to control the tsunami of security flaws, but I can't stop wondering if Rust isn't (perhaps needlessly) complicating things and scaring off (non-C++) programmers.


I mean, C++ programmers are exactly the correct audience for a language like this - there are many other options for memory safety if you can live with a garbage collector[1]. But writing safe C++ is much, much, much more complex than writing okay-ish C++. The way I see it, Rust has basically taken a lot of the best practices required to write sane C++ (e.g. RAII) and formalized them in a way where the compiler can enforce them. That means in order to write ANY Rust code at all, you have to adopt a lot of best practices all at once. That's not very beginner friendly, and will probably lead to cognitive overload in most - you certainly don't get the same freedom you get in other languages where you can implement something in dozens of ways, because most of those ways won't compile here. So I'm not saying they shouldn't keep working on the ergonomics and learnability of the language, but I think a lot of these complexities are essential to the task of writing sane programs while dealing with raw memory, and the fact that they have been named, formalized and checked by the compiler is entirely a good thing - and if that means the programmer has to know about them, then that's ok.

[1] Sidenote - I find it really fascinating how Rust can also use the stronger static checks to prevent things like race conditions in a way few (/no?) other languages can.


> But writing safe C++ is much, much, much more complex than writing okay-ish C++. The way I see it, Rust has basically taken a lot of the best practices required to write sane C++ (e.g. RAII) and formalized them in a way where the compiler can enforce them.

A concrete example that I've run into recently when trying to write C++ code. I figured that, for safety reasons, I needed to make my type be move-only. I then had to spend about two hours trying to figure out why the program was blowing up. The reason was that I was reusing the variable after moving from it, and the compiler never gave any warning (even on -Wall -Werror) telling me that what I was doing was wrong. In Rust, the same situation would be a compiler error.


Yep. As much as people extol lifetimes, my personal opinion is that Rust's aliasing rules are its true golden goose. C/C++'s lax approach to aliasing causes a whole host of issues that Rust is able to avoid by being more strict.


Using a moved-from object in C++ doesn't produce any warnings because it isn't an invalid operation. The standard library types make very limited guarantees about the state of moved from objects (generally just that it remains valid to assign to them and that the object's invariants still hold), but even then it's valid to reuse them as long as you first do something that ensures they're in a known state.


I would also argue that even after an object has been moved out of, that object should still be a valid object. So even if somebody makes a call to it, it shouldn't 'blow up', but do add asserts to indicate you're now talking to an object that is no longer initialized to a useful state. E.g. if moving a class that wraps a file descriptor, then calling write() would do nothing, but a debug build could also assert() that the file descriptor is not valid.

That way you get runtime stability if you screw up, but no weird side-effects.


Rust and C++ have rather different concepts of moving. A moved-from Rust object is entirely dead, cannot be used, and will not be dropped. A C++ moved-from object is alive as far as the language is concerned, and the destructor will still run. The move operation and the destructor need to cooperate to avoid crashing. This often adds overhead.
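
A minimal illustration of the Rust half of that contrast: after a move the original binding is statically dead, so there is no moved-from state to manage at all.

    fn main() {
        let s = String::from("hello");
        let t = s; // ownership moves to `t`; no destructor will ever run for `s`
        // println!("{}", s); // error[E0382]: borrow of moved value: `s`
        println!("{}", t);
    }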


C++: null pointers were a mistake, so we're introducing null objects too.


clang-tidy should catch this and warn about it at compile time.

The two hours seems on the high-end, if someone's able to e.g. use ASan and the program is crashing reproducibly.


The thing is, many of Rust's features could probably be enforced with a static analysis tool, which a large majority unfortunately ignores.

So you either have a C++ shop where everyone is on board regarding security, with the caveat of third party dependencies, or no one cares and writes something along the lines of C with a C++ compiler, without any kind of static analysis.

Relying on external tooling means it usually gets ignored if it is not enforced. After all, C's first version of lint goes back to 1979.

Sadly, JetBrains' latest survey results prove exactly that.

So having safety as integral part of the language semantics matters a lot. Defaults matter.


> The thing is, many of Rust's features could probably be enforced with a static analysis tool, which a large majority unfortunately ignores.

But it definitely can't be? There are plenty of open source projects (Chromium, Firefox) that develop and leverage state-of-the-art static analysis tools and best practices. It's very clearly not enough, and the costs (build/test time) are really significant.


Our day-to-day software development practices still fall short of what design by contract, MISRA, AUTOSAR, DO-178B and similar offer in terms of delivered quality.

Only with further increase in lawsuits and returned faulty software, like in other commercial areas, will companies start paying attention to QA budgets.


MISRA and DO-178B deliver more on the illusion of quality than anything else. They are desperate attempts to tame software complexity. But they don't fundamentally solve anything.


How come? They aren't perfect, but they seem to at least make Ada/Pascal out of C.


I would agree with the parent. It's been a while since I did my last MISRA project, but I know that it doesn't even prevent basic memory safety issues or leaks. It's more a set of coding guidelines that prevent some kinds of errors than a robust tool that will reliably detect them.

Static analyzers work better, but often have a terrible signal-to-noise ratio. I think Rust can on average prevent more errors than all of those things out of the box, which is impressive.

The downside is obviously the increased complexity, and that it sometimes feels like one is forced to work around the limitations of the "static analysis tool". Which likely comes from the fact that the borrow checker is a kind of analysis tool whose annotations are directly included in the language.


Thanks, my experience is just from reading papers about it, so it is nice to have feedback from actual uses of it.

Regarding Rust having a kind of analysis tool directly built into the language, I fully agree; that is what is so nice about safer systems languages, and what I liked in Algol/Wirth languages.


Please elaborate with examples.

Since most new cars are Internet connected and have whole hosts of complex safety features dependent on software correctness, I sure do hope you are wrong about this.


MISRA and similar standards are incredibly limiting. For example MISRA forbids dynamic allocation.

They not only make writing software a lot more difficult and expensive, they also restrict the kind of software you can write.


Do you think QA budgets would actually help here? At least the way I know it, QA and development are separated, and no matter how many QA people you hire, many developers brush off QA until later.

Considering Rust shows you can enforce so many things in the compiler, to me it's clear that a better compiler/language is a better way to address this problem than QA people.

Also the built in testing with cargo test makes TDD so much more attractive.


They don't, but others do. Sound static analysis tools like TrustInSoft (https://trust-in-soft.com/) guarantee no undefined behavior (array overflow, use-after-free etc.).


That's basically a reduced form of program verification, and requires a lot of developer help. You end up programming in a language that looks like C but isn't. It is not simply a matter of throwing a pile of C++ code at the tool and fixing a few errors it reports.


That depends on what you mean by "a lot". The effort required is significantly less than a rewrite in a safe language. If we're talking about properties that safe languages can verify, i.e. simple, local ones (like memory safety), verifying those in a sound static analysis tool is not hard either.


Rust is not a collection of C++ best practices, formalized. It has its own "personality", borrowing ideas from several other languages and unfortunately these other ideas happen to be alien to most C and C++ programmers.

There was a funny discussion on the Rust subreddit, where even some language contributors have started having doubts about that complexity. One of them was trapped in his own programming-language-theory ivory tower, the other was trying to convince him that they are losing developers if they keep adding stuff to the language.

That discussion was a clear hint that the Rust developers don't have C and C++ programmers in mind when designing Rust. They have their own ideas about how a modern systems programming language should look, and they're pursuing that. Perfectly fine, but we need to correct the misconception that C or C++ programmers will rush en masse to learn Rust.


Rust developers have actually been very cautious about integrating the 'usual' sorts of PL-theory driven features in Rust. The PL theory of Rust-like languages is still in its infancy in many ways, but it has already usefully informed the design of library features like e.g. Pin<>, as well as eased the understanding of seemingly ad-hoc language features like so-called 'internal' mutability, which - as it turns out - can be described via a remarkably simple theoretical basis.


> prevent things like race conditions

Rust can prevent all data races but not all (any?) race conditions. Related question: can you use the type system to catch a subset of race conditions?

https://stackoverflow.com/questions/49023664/could-software-...


Also, most C++ code is just... ugly. It can really assault the senses, looking at some of the template & indirection abstractions, etc. Rust just looks better, from what I've seen.


While this is very subjective, I tend to agree. All codebases I've read and worked on in C++ were eyesores in all but the simplest places.

That being said, I think Rust macros are much worse compared to C++, if you ignore templates.

Don't get me wrong, I really like Rust. I just think that its macros make for some of the most unreadable code I've ever seen.


This seems kind of inevitable with macros. They help the macro creator write code with new, "better" abstractions, but readers of the code have to learn the semantics of the macro before they can understand what's happening. The built-in mechanisms of a language are familiar to those new to a program's code, but macros usually hide something significant (otherwise why have a macro).

I've written fancy macros for assembly language programming to support my own looping, iterating, argument passing etc. but I noticed that the other programmers on the team weren't interested in using them.

On the other hand, I'm so grateful to John Wiegley for his use-package macro for emacs lisp.


> readers of the code have to learn the semantics of the macro before they can understand what's happening.

They always have the alternative of reading the expanded code, which is very similar to what the author of the macro could have written by hand instead of the macro.


I had the same feeling with macros initially, but once you've written a few of them it's OK, and they provide many more guarantees than the pure text preprocessing of C/C++.


Rust is not a pleasant language to read tbh: https://github.com/SergioBenitez/Rocket/blob/v0.4/core/http/...

Clean C++ 14/17 is less cluttered.


That file looks quite readable (ignoring the comments - which again look very readable in color-syntax-highlighted documentation).


I think it's the numerous single character lifetime annotations that they are referring to. I agree that this particular bit of code is somewhat hard to parse (as a rust amateur), but that might be alleviated quite a bit with some less terse naming.


Your comment prompted me to look at Rust's release history. In the whole of the history, I can spot maybe a handful of changes that made the language non-trivially harder for a beginner. Everything else either makes the language easier (by making things more predictable), or is a net neutral. Most of the language-level changes are about enabling current features to work in more scenarios, such that coming across new features tends to feel more like "oh I hadn't realised I could do this" than "what does syntax thing mean?"


This

>I have a very hard time grasping all the functionality/concepts

is (partially) because this

> if we ever want to control the tsunami of security flaws

Most people focus on how the borrow checker works "against" you, but that is not even the hardest part. Performance and how to manage memory are more "painful" in Rust,

BECAUSE NOBODY KNOWS HOW TO WRITE FAST & SAFE CODE.

Not ALL of the time, anyway. Without the extra help of the compiler your assumptions can go wrong in invisible ways...

Rust WANTS you to:

- See what is costly

- See what is unsafe or not

- See what owns what

- See what is on the heap or the stack

The borrow checker is just a part of it.

From https://this-week-in-rust.org/blog/2019/07/02/this-week-in-r...

    Python and Go pick up your trash for you. 
    C lets you litter everywhere, but throws a fit when it steps on your banana peel. 
    Rust slaps you and demands that you clean up after yourself.

    – Nicholas Hahn


> Python and Go pick up your trash for you.

> C lets you litter everywhere, but throws a fit when it steps on your banana peel.

> Rust slaps you and demands that you clean up after yourself.

> – Nicholas Hahn

This is brilliant and will save me time explaining language differences. Thanks for sharing.


Python and Go pick up your trash for you, but sometimes they get in your way while they do so. If you generate a lot of trash, you might find yourself stopped quite often.

C lets you litter everywhere, but if you or anyone else steps on your trash it will tackle you to the ground. Usually. Sometimes it ignores the first 10 times and does it on the eleventh.

Rust snatches up your trash as soon as you're done with it, but if it can't reason well about when you'll be done using it, it will make you fill out a form explaining how you plan to use it. It will also slap you silly if you try to deviate from that plan.


I don't think you're alone in being put off by the complexity of the language. I think once you get over the hump things start to click. The mindset that has helped me is to say, "Ok, this syntax might sometimes look familiar, but this language is COMPLETELY DIFFERENT from anything else I know, so I can't use my usual metaphors/analogies and need to use a clean slate". Then you can look at things like Ownership, Traits, and Pattern Matching and see how the whole language is built up around a few key ideas (with a lot of subtle variations) and then it might start to click.

You need to give it a lot of time. Some of the ideas are really not familiar. I don't think Rust presents some of the ideas perfectly, but I can imagine that in 20 years there might be a whole slew of languages that borrowed ideas from rust and maybe make them appear more idiomatic.


> this language is COMPLETELY DIFFERENT from anything else I know

Rust itself borrows a lot from functional programming while also topping the story with lesser-known things like lifetimes, so no wonder it feels alien to a lot of people. In fact, the following:

> I can imagine that in 20 years there might be a whole slew of languages that borrowed ideas from rust

is actually already happening, except it's FP that's inspiring contemporary language designers (including Rust team).

To me personally, even limited familiarity with Haskell probably helped a lot back when I started tinkering with Rust; it all felt more familiar to me than it would to the average C or Python dev.


If someone can do systems programming with FreePascal/Delphi, Modula-3, .NET Native, D, Swift, Ada, ... they will be almost at home with Rust.

The biggest hurdle is dealing with the borrow checker when writing GUI code (hello Rc<RefCell<T>>), but for other kind of applications it is quite ok.

It also says a lot that Ada, C++ and Swift are adopting the same ideas regarding the borrow checker, even if the implementations have some constraints given backwards compatibility.


In what way do you think C++ is borrowing the idea of a borrow-checker? Smart pointers pre-date Rust.


Visual Studio and clang are introducing a borrow checker in their static analysis tools. If you leave it on as part of the build you get a similar experience (note: similar, they aren't bulletproof due to language semantics).

There are a couple of conference talks about them.

Naturally smart pointers predate Rust, I used them back when Windows 3.1 was considered recent, alongside OWL.

However, they aren't the same thing: they introduce runtime overhead and don't prevent use-after-free or use-after-move.


One thing about static analysis tools that seems to be easily forgotten in discussions about Rust vs other languages is that those tools are trivially ignorable. If your CI pipeline doesn't let you get a binary for a given milestone because the code doesn't pass static analysis, unit/integration tests etc., people will just disable it for myriad reasons, not the least of which will be "management wanted it done by yesterday and it compiles, so it's probably ok", "it's probably false positives again [yeah, right...]", "this is the test that fails from time to time, it's probably nothing". A compilation error is a much better protection from the social pitfalls of programming in a corporate environment. So yup, you have all those wonderful tools at your disposal in C/C++, which I guarantee you will ignore or be forced to ignore at your peril.


Using the same logic, you could argue that people will just use unsafe and shared pointers everywhere if they have a deadline and they can't get their code to compile.

This is an organizational problem, not a language / tool problem.


Even when abusing unsafe, it's hard to get away with as much laziness (especially sneaky, dangerous, indirected laziness) as you can get away with by default/accidentally in other languages when you disable their linting/analysis tools.


Actually I tend to refer to it quite often.

However one needs to see the full picture, not only language grammar and semantics.

If I want to create a GUI application today, I will definitely use a mix of .NET, Java, and C++ for the low level performance bits, because Rust is lacking in that area, in spite of being a safer language.

So, if C++ takes a lesson or two from Rust, and helps developers like myself keep productive while improving the security of the whole stack, then so much the better.

And if Rust continues to improve, maybe one day Android Studio, Xcode and VS will provide an end-to-end mixed-language experience, and OS frameworks, for Rust just like they do for C++ nowadays.


That makes more sense, though it's not actually part of the language. Smart pointers were the only thing close that came to mind for that reason.

I’ve used the Clang experimental lifetime analyzer on Godbolt, and I welcome improved tooling.


Smart pointers are a different feature than the borrow checker.

I believe your parent is referring to the Core Guidelines and the Guideline Support Library.


Lifetime analyser from VS and clang.


Do you have a link? I thought they were related, but maybe I’m behind the times!


EuroLLVM 2019 on YouTube has a talk on clang current implementation state.

Apple also demoed their XCode integration at WWDC 2019, on the talk about Objective-C, C and C++ support.


When is the Xcode integration coming?


I remembered it incorrectly, sorry about that. There are several improvements regarding type safety, but not for lifetimes: C++ use-after-move, std::string vs char*, and a couple of others.

https://developer.apple.com/videos/play/wwdc2019/409/

However here is the clang talk I was referring to.

https://www.youtube.com/watch?v=VynWyOIb6Bk


>complicating things and scaring off (non-C++) programmers.

On the contrary, Rust allows us (non-C++ programmers) to use a systems language without fear of breaking something. I'm a Rust developer with a Ruby background and loving the language more and more.


Did you have background with languages other than Ruby before starting with Rust? Languages like C++, D, Haskell, OCaml or maybe C#?


Have you read the Rust book? I came to Rust from JavaScript, having never written more than "hello world" in C++ and found it pretty approachable.


Rust makes you think about certain things that you should be thinking about anyway when coding in C/C++ but sometimes forget. IMHO this makes it easier for someone who's never programmed in C/C++ to learn Rust than for a veteran in either of these - unless that person adhered religiously to certain rules of handling pointers.


Rust is a lot simpler than C++. Broadly speaking, the only 'complications' it has over C are those that are actually needed to support its patterns of memory-safety-guarantees-through-RAII. The only real alternative is GC languages, and those have their own sorts of very real complications that do absolutely scare off performance-oriented folks.

The biggest problem with Rust right now is actually its novelty and lack of maturity, that makes using it at this time a lot more problematic than it should be. But Java and Python were once "new and unproven" languages, too.


While I agree in general, I just wanted to give a heads up that not all GC-enabled languages are made alike, and a couple of them do allow for C-style low level programming.

Usually the performance folks that get scared are the ones that put all of them into the same basket.


There is certainly much more accumulated cruft in C++, but you can write your C++ program ignoring most of it. You will have to deal with some of Rust's harder parts rather soon.


You can write a C++ program ignoring most of it, but when you go to work on someone else's C++ program, or import someone else's C++ code (i.e. most real-world programming) you will have to deal with it.

And when you're not working alone you will spend a lot of energy discussing which C++ subset you will use, and enforcing that.


C++ has accumulated complexity over decades, and the committee is working on making things easier for beginners. Usually it works, minefields like string_view aside.

Rust has accumulated its complexity over four years, and it's already comparable to C++. The thing that worries me the most about Rust is what the language will look like in another 10 years.


> Rust has accumulated its complexity over four years, and it's already comparable to C++.

It’s not. C++ constructors alone rival the entirety of Rust, and grow in complexity with every release.

You’re just so used to the unfathomable complexity of C++ that you don’t realise it exists anymore.


Something which is explainable in one cppreference wiki page is not as complex as an entire language, which is described in a 550+ page book.


C++ is so complex no-one can really grasp how complex the language actually is.

C++ is not slowing down. C++ is on the verge of deprecating STL-style iterators in favour of Ranges, and modules and concepts are imminent. Template metaprogramming is being superseded by constexpr. Of course STL iterators, header files, and template metaprogramming are still going to be around; people will just need to learn all of it if they want to work on a variety of C++ projects.


STL-style iterators are not deprecated, just like LINQ and streams have not deprecated iterators on .NET and Java respectively.


Your post is kind of vague. Rust was hard to grasp, and then I learned it, and now it isn't.

I don't find anything about the language to be particularly more complex than, say, Python or C++.


As much as I like the language I actually don't think it's for everyone.

If your program is running on the server reading from a DB and producing simple JSON (like I assume most of HN's audience), rust is probably not what you want. There's plenty of more pragmatic approaches. At least I think it's not the right language for my employer's department (and it pains me to say that)

If what you want is to write code that runs on bare metal then consider Rust


> If what you want is to write code that runs on bare metal then consider Rust

The closest I got to bare metal, i.e. code that works without an OS, was when I developed stuff for “small” MCUs, like Intel MCS51 and Motorola COP8. Rust supports none of them: https://forge.rust-lang.org/platform-support.html

I’ve developed for Nintendo Wii, nominally there’s an OS but it’s very “thin” one, mostly statically linked libraries provided by Nintendo. Rust can’t compile for that platform either, it only supports PowerPC Linux.

I’m currently working on low-level software working on bare Linux kernel. Rust apparently supports ARM Linux, but C libraries are literally everywhere, both kernel APIs and user mode: drm, kms, gles, udev, freetype, low-level kernel stuff like tons of ioctl calls for SPI and USB I/O, wpa_supplicant, and more. That’s too much native C stuff to integrate together, using a foreign language causes too much friction.

I can think of bare metal software for which Rust is good. If I would work on x86 bare metal hypervisor, I would look at Rust very closely. Platform support is good, not much libraries are needed, and the project is extremely security sensitive so using Rust will probably pay off in the long run. But I don’t think that’s a rule, looks like an exception to me.


Rust FFI is very good. Federico Mena-Quintero's blog has a detailed description of mixing C and Rust code while he rewrote librsvg in Rust. https://people.gnome.org/~federico/blog/librsvg-is-almost-ru...


It’s still foreign. Might be good enough for isolated libraries, but IMO not good enough for the level of integration with OS required by any complex software. Just too much work.

There’s nix::sys::ioctl in the nix crate, but there are also issues with ergonomics, e.g. https://stackoverflow.com/q/51898034/126995 These variable-length structures are used a lot in practice, not just for HID; SPI and USB bulk protocols use similar things. They’re a pain to consume from any language except C and company (C++, Obj-C). C# also has very good FFI, but variable-length C structures at API boundaries still require manual marshalling.

There are third-party bindings for drm, https://github.com/rusty-desktop/libdrm-rs, but apparently that project is not maintained, and I’m not sure it works on ARM. It contains more than 3000 lines of code, which will require support. The equivalent C headers, xf86drm.h and xf86drmMode.h, are not small either (800 and 500 lines, respectively), but the important difference is that the C headers are already supported by the Linux kernel, so I don’t have to.


I guess this shows my ignorance of low level computing! I used "bare metal" wrong even though Rust can do it.

I wanted to point at a lower abstraction level than a typical corporate application, but higher than bare metal.

Something just above the OS, like any command line.


While it would be nice to have a simple, C-like language with memory safety but without GC, it may not be possible. Some of the complexity of Rust is there to help deal with life in borrow-checker land.

There are some middle ground options though, like Zig, which is a nice simple C-like language with less undefined behavior and no nulls. So it's safer, but it doesn't offer full memory safety.


There is a lot of unexplored wiggle room in the design of borrow checking that might get closer to what you want.

For example, Rust puts &T and &mut T at the forefront, which leads to a slightly alien way of handling aliasing- it's all or nothing. This makes some things feel way harder than they are in C, but helps out the optimizer (every pointer is now restrict/noalias).

A different language could emphasize (the equivalent of) &Cell<T>, which allows shared mutability but restricts certain "shape changing" mutations. Most of those C patterns would feel easy again, with a bit less of Rust's non-safety-essential guarantees.
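
For reference, here is roughly what that looks like with today's Cell: several shared references may alias and mutate the same value, but only by replacing it wholesale (a sketch of current Rust, not of the hypothetical language above).

    use std::cell::Cell;

    fn main() {
        let counter = Cell::new(0u32);
        let a = &counter;
        let b = &counter; // aliasing shared references are fine for Cell
        a.set(a.get() + 1);
        b.set(b.get() + 1);
        assert_eq!(counter.get(), 2); // both aliases mutated the same value
    }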


Cell<T> (1) is not safe to reference across threads, and (2) can only mutate via the equivalent of a memcpy. It can be useful in many ways, but there is a real sense in which &T and &mut T (which would probably be called &uniq T, if Rust devs cared about theoretical cleanness over reusing short keywords!) are truly fundamental.


Point 2 is only a limitation of the current standard library, not of the language-level model. It has even been relaxed recently, so you can go from a &Cell<[T]> to a &[Cell<T>]: https://github.com/rust-lang/rust/pull/61620
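
A small sketch of that relaxation; Cell::from_mut and as_slice_of_cells were stabilized right around this time, so this assumes a recent compiler.

    use std::cell::Cell;

    fn main() {
        let mut data = [1u8, 2, 3];
        let cells: &Cell<[u8]> = Cell::from_mut(&mut data[..]);
        let cells: &[Cell<u8>] = cells.as_slice_of_cells();
        cells[0].set(10); // shared mutation of one element, no &mut needed
        assert_eq!(data[0], 10);
    }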

The same could be done for struct fields if the type system knew about it, and the whole thing could just use normal syntax.

Sharing between threads still needs &T or &mut T (or an owned value), but that's not usually involved in the painful cases.


It is definitely possible to have Rust's compile time lifetime analysis while having a less complicated language that mostly deals memory automatically: https://aardappel.github.io/lobster/memory_management.html


What are the concepts you have trouble with?


Try the Rust track: https://exercism.io/tracks/rust


I can’t help but find it a bit like an academic project; a proof of concept. I suppose the complexity of it is warranted to help both you and the compiler recognize unsafe code and get better error messages, so maybe the question is not whether Rust needs to be like this, but whether I as a developer need Rust.


MaybeUninit<T> is very welcome considering mem::uninitialized turned out to be such a mistake. I only tried using it once which dissuaded me from trying again, which was probably for the best.
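
For anyone who hasn't tried it yet, a minimal sketch of the new API as stabilized in 1.36:

    use std::mem::MaybeUninit;

    fn main() {
        let mut slot = MaybeUninit::<u32>::uninit();
        // Write through the raw pointer; no reference to uninitialized memory is created.
        unsafe { slot.as_mut_ptr().write(42) };
        // Safety: the value was fully initialized above.
        let value = unsafe { slot.assume_init() };
        assert_eq!(value, 42);
    }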

I'm still looking forward to const generics and a more usable const fn. In a way it's a shame Rust doesn't have a purely constant function in the interim. But a hybrid function will be more versatile once it allows some form of looping.

The last thing on my wishlist is extern types (aka opaque types, aka void *). The current workaround, using a pointer to a [i8; 0] type, relies on LLVM's particular handling of such pointers and always looks weird in Rust.


I might be revealing myself as a novice Rust programmer here, but can't you use `ptr: *const ()` as an opaque pointer type?

It's how I interface with C and C++ callback functions.


You can, but there’s reasons a real external type feature is useful: https://github.com/rust-lang/rfcs/blob/master/text/1861-exte...


According to the nomicon[0], using a zero sized array in a repr(C) struct is the best practice for type safety. I also seem to remember someone saying the zero sized array is analogous to how LLVM bitcode represents void* in C.

[0] https://doc.rust-lang.org/nomicon/ffi.html#representing-opaq...
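
That pattern looks roughly like the sketch below (the foo_new/foo_free externs are hypothetical C functions, just to show the shape):

    // An opaque handle to a C type we never inspect from Rust.
    #[repr(C)]
    pub struct Foo {
        _private: [u8; 0], // zero-sized field; the type cannot be constructed outside this module
    }

    extern "C" {
        pub fn foo_new() -> *mut Foo;
        pub fn foo_free(foo: *mut Foo);
    }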


We've been slowly rewriting our core instrumentation code at FullStory to take advantage of the new futures and async/await.

I'd love to blog on this at some point but I think that the real big win here was being able to use ? to early exit in async code.
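
A small, self-contained sketch of what that buys you (hypothetical functions; async/await still needs a nightly toolchain as of this release, plus the futures 0.3 executor):

    #[derive(Debug)]
    struct ParseError;

    // Stand-in for real async work, e.g. a network fetch.
    async fn fetch_number(raw: &str) -> Result<u32, ParseError> {
        raw.trim().parse().map_err(|_| ParseError)
    }

    async fn double(raw: &str) -> Result<u32, ParseError> {
        let n = fetch_number(raw).await?; // early exit on Err, no .map_err chains or combinators
        Ok(n * 2)
    }

    fn main() {
        let result = futures::executor::block_on(double("21"));
        assert_eq!(result.unwrap(), 42);
    }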

I'm excited to see what the future brings here - we're still pretty new with async/await and building our own internal patterns.


Did you try to use an executor that can drive the new futures? The ability to use &self in an async context is so much nicer than playing around with Arc with things that really don't need one.

Also very happy to not be forced to write .map_err ever again.


We're in a tricky spot with our async code because we need to interop with Android JNI, Objective-C threads, and some diagnostics code that uses tokio/websockets. For the first pass there was a lot of "let's get this working with a modern executor and make it perfect later".

When we get some spare bandwidth we'll definitely see if we can get some extra productivity out of using &self. So much of our existing futures code is either self-less or uses some macro code to generate glue to allow us to use Arc-typed self - this is to allow a bunch of async core code to interop with these async platform drivers.

Been on a crash course getting better at architecting Rust programs for nine months. Luckily the Rust ecosystem and toolchain are getting even more amazing each time around, so we can justify some work to refactor and try new approaches.


Do you have experience doing async ffi to other runtimes from Rust? Such as passing down futures from Rust to JavaScript or Java so you can use them from their context. I'd love to read a blog post about that subject...


I can definitely talk about how we did it - maybe see if we can get something up on our blog.

Our current approach du jour uses callback handles in combination with channels to let the FFI code trigger a real Rust future's completion. This has worked reasonably well, but I'm sure we'll experiment with a few other patterns.
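
In case it helps anyone, here's a rough sketch of that general pattern (all names hypothetical, and it leans on the futures 0.3 oneshot channel plus the once_cell crate; not the actual FullStory code):

    use std::collections::HashMap;
    use std::sync::Mutex;

    use futures::channel::oneshot;
    use once_cell::sync::Lazy;

    // Completion senders keyed by the u64 handle we hand to the FFI layer.
    static PENDING: Lazy<Mutex<HashMap<u64, oneshot::Sender<i32>>>> =
        Lazy::new(|| Mutex::new(HashMap::new()));

    // Rust side: register a handle, get a future that resolves on completion.
    fn wait_for_ffi(handle: u64) -> oneshot::Receiver<i32> {
        let (tx, rx) = oneshot::channel();
        PENDING.lock().unwrap().insert(handle, tx);
        rx // `.await` this inside an async fn
    }

    // Called back from C / JNI / Objective-C with the handle and a result code.
    #[no_mangle]
    pub extern "C" fn complete_request(handle: u64, result: i32) {
        if let Some(tx) = PENDING.lock().unwrap().remove(&handle) {
            let _ = tx.send(result); // wakes the awaiting Rust future
        }
    }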

We don't specifically interface with Java Futures (no particular reason other than it hasn't seemed necessary to add that complexity), but that would be a pretty cool library to build on top of the existing Rust jni crate.

One thing I'd like to pass by the Rust community is our internal "teleporter" that allows you to borrow an object mutably on one thread and then "teleport" an immutable ref to that object to any other thread using only a u64 handle (with obviously huge unsafe flags). This has been very handy for some of our async ffi work.

I'm hoping to get a few more Rustaceans onboard (aggressively hiring!) over the next few months so we can focus more deeply on some of these interesting problems.


If your Java usage is constrained to Android there is a JNI workaround that many NDK folks tend to use.

Instead of doing JNI calls, send Android messages between NDK and Framework threads.

There is the setup of a message Handler on both sides, but long term the messages are more productive than JNI boilerplate.


Interesting - do you mean using the Android handler/message infrastructure? I hadn't considered that at all. Do you have any references?


Yes.

One example would be SDL, although they use a mix of JNI and messages (search for SDLCommandHandler).

http://hg.libsdl.org/SDL/file/abb47c384db3/android-project/a...

http://hg.libsdl.org/SDL/file/abb47c384db3/src/core/android/...

EDIT: Sorry forgot about the C side (counterpart is Android_JNI_SendMessage).


> One thing I'd like to pass by the Rust community is our internal "teleporter" that allows you to borrow an object mutably on one thread and then "teleport" an immutable ref to that object to any other thread using only a u64 handle (with obviously huge unsafe flags). This has been very handy for some of our async ffi work.

Would love to see a blog post (or better yet, library!) for this - sounds interesting!


> In Rust 1.36.0, the HashMap<K, V> implementation has been replaced with the one in the hashbrown crate which is based on the SwissTable design. While the interface is the same, the HashMap<K, V> implementation is now faster on average and has lower memory overhead. Note that unlike the hashbrown crate, the implementation in std still defaults to the SipHash 1-3 hashing algorithm.

The wording here confuses me. They say they took the implementation from hashbrown, but then finish by saying that the implementation is different. What am I missing?


The HashMap implementation and the hasher used in it are not the same thing. You can use the std hash map with any hasher implementing Hasher, and I think you can do so with hashbrown, too. The difference is which hasher is used by default if you don't explicitly specify one.

Hasher => Takes the key and turns it into a hash (in this case a 64bit hash).

HashMap => Takes (key, value) pairs + a hasher and then does "magic" to get a fast lookup based on key+hasher.

The "magic" part is what changes. (which include thinks like which datastructures are used to store keys/values, how deletion is handled, how hash collisions are handled, how the given hash is used to lookup keys, etc.).


The hash table implementation (the data structure) is changed, but the hash function (the one which generates the hashes) is kept the same.


Maybe I misunderstood the speed complaints about HashMap in Rust. I thought it was the hash function that was the slow bit? What is the anticipated improvements from using SwissTable?


We do choose a hash function not designed for speed by default, but that doesn’t mean that the implementation of the table can’t be improved. This is effectively the third re-write of it.


I think the bottleneck depends on the size of your table and the size of your keys. A large table with small keys will bottleneck on memory IO, but a small table with large keys will bottleneck on hashing the keys. But I definitely haven't benchmarked any of this.


The hashtable implementation is taken from hashbrown but the default hash is still the same and not taken from hashbrown.


If someone inexperienced in systems programming chose Rust as their first systems language, would there be difficulty adapting to other languages like C++? It seems like I'm caught in this back and forth between "C++ isn't pretty but it makes you money" and "Rust is so nice but where are the jobs".


Are you trying to become a systems programmer? Then you need to know C and C++, period. And not just C and C++, but also the surrounding ecosystem like CMake, Make, Autoconf, GoogleTest, Catch2, Python, Qt5, Boost, etc. Unless you are fortunate enough to work at a place doing completely new development, the programs you will work on are written in C/C++. Knowing a language isn't just knowing the syntax, but also the libraries, scripting, and build tools.

If this is for fun / education then learn Rust. It's conceptually nicer and doesn't have legacy cruft from decades of industrial use.


On the contrary, I'd say that learning Rust is a fabulous stepping stone to "modern" C++ (much of which served as the philosophical foundation for Rust in the first place). And once you get good enough at Rust that you've internalized the rules regarding memory ownership, you'll be able to instinctively apply those same rules successfully in C++, where the compiler proves fewer things for you.


I appreciate what you're saying, and that there's some truth to that, but I think there's at least two components to good allocation management in C++ (or C, or other memory-unsafe language)...

The first component is conventions and idioms for managing allocations, and Rust will force you into (and support) some good (but nontrivial) ones.

The second component is self-discipline. Look at the long history of vulnerabilities in C and C++ code that are due to carelessness -- of an oops that a programmer made when they knew better.

If what's being considered is Rust as a stepping stone to C++, how much does Rust help with the first component, and is Rust even counterproductive for the second component?

Regarding counterproductive for the second component, you might've seen a conventional practice of grinding the Rust Clippy until the code compiles. I don't know how that affects the development of self-discipline (e.g., maybe some people try to make a practice of being Clippy-free on every compile attempt?), but it seems a reasonable and interesting question to ask.

(I'm not dissing Rust for this. I mostly like Rust, and would be happy to be working in/on it.)


I'd say that concerns regarding self-discipline are overblown in this case. Experienced Rust programmers aren't simply typing blindly into their editor and hoping that their code will compile. When writing Rust one comes to learn the code that the compiler likes, and strives to write code that is free of compiler errors in the first place. This is itself an expression of self-discipline, except that the discipline comes in the form of compiler errors rather than runtime errors. There's less of a penalty for making an error in Rust than in C++, but that's going to be true regardless of whether one's background is "I already know Rust" or "I don't already know Rust", which is what the parent commenter appears to be concerned about.


> When writing Rust one comes to learn the code that the compiler likes, and strives to write code that is free of compiler errors in the first place.

I suggested that possibility, but is it generally true, or something personally true for you, or are you advocating that it would be good if people did that?


When I started learning Rust and low-level programming (I'm coming from Ruby), spamming searches for errors, fixing code, and waiting for `cargo build` to turn green was the strategy I used. As I gained more experience with Rust I became more and more aware of the ownership/lifetime of everything I'm using; the compiler errors appear less and less, and most of the time it's a typo or a missing mut now, not lifetime issues anymore.

So yes, if you work enough with the borrow checker, your brain will form another logical one, and that one you can use for writing C/C++ code. I'm much more confident now in learning/writing C/C++ than before I learned Rust, because I feel like I can form a Rust-like design (tree-based, clear ownership/lifetime of objects) and express that in C/C++ syntax.

I definitely recommend using Rust as a stepping stone to learning production-grade C/C++.


I personally think it’s more nuanced than either of these. I do blindly type stuff in, and lean on the compiler to help me. At the same time, as I got more proficient, my initial code is closer to correct and produces far fewer errors.


Adapting won't be too hard. I've heard a few people say learning Rust made them better C++ coders, since the habits which make it easier to write Rust that passes the borrow checker also make for more robust C++ code (in effect, the same rules the Rust compiler enforces also apply to C++ code, but the compiler cannot help you nearly as much).


The main difficulty is that the compiler no longer checks your work.

There are of course, fairly significant differences in idioms, and things like that, but that’s true for every language switch.


The alloc crate stabilization should provide serde with improved options. Currently they maintain two json modules, one that does heap allocation and one that does not.

https://serde.rs/no-std.html

https://blog.rust-lang.org/2019/07/04/Rust-1.36.0.html#the-a...
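
A tiny sketch of what the stabilized alloc crate enables - heap types like String in a #![no_std] library (the `join` helper here is made up for illustration):

    #![no_std]
    extern crate alloc;

    use alloc::string::String;

    /// Joins words without pulling in std - only the alloc crate.
    pub fn join(words: &[&str]) -> String {
        let mut out = String::new();
        for w in words {
            out.push_str(w);
        }
        out
    }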


Wouldn’t the non-allocating module remain useful and the allocating one just get lowered to depend on alloc rather than std?


Future being stabilized to me is confusing. You still need `tokio` or a runtime to spawn them into an executor in order to do anything with them, right?

So you have a standard trait from the language officially, that is useless without a third party library?


Sort of yes and sort of no. What you need is to call poll at the appropriate time. Doing so does not, strictly speaking, require external libraries. That said, you probably don’t want to write that code yourself; the naive implementation will be extremely inefficient. This is where external libraries come in.
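
To make "naive and inefficient" concrete, here's a sketch of about the smallest thing that can drive a Future to completion: a busy-polling block_on with a no-op Waker. Real executors park the thread and rely on the Waker to know when polling is worthwhile.

    use std::future::Future;
    use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

    fn noop_raw_waker() -> RawWaker {
        unsafe fn clone(_: *const ()) -> RawWaker {
            noop_raw_waker()
        }
        unsafe fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }

    fn block_on<F: Future>(fut: F) -> F::Output {
        let mut fut = Box::pin(fut);
        let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
        let mut cx = Context::from_waker(&waker);
        loop {
            match fut.as_mut().poll(&mut cx) {
                Poll::Ready(val) => return val,
                Poll::Pending => std::thread::yield_now(), // spin; a real executor would park
            }
        }
    }

    fn main() {
        // `async` blocks need nightly as of 1.36; on a later stable compiler this runs as-is.
        assert_eq!(block_on(async { 41 + 1 }), 42);
    }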

The reason they’re external is, depending on what you want to do, you’ll want an executor with different characteristics. An embedded executor has very different needs than a network IO executor than a GUI event loop. By stabilizing the trait, we can ensure library compatibility: everyone agrees on the same interface.

Given that we’ve invested so much in making it easy to add libraries to your project, including a single one wouldn’t be appropriate.


I'm not sure what this is worth, but I'd personally feel better and take Rust more seriously if they had an implementation as good as Tokio's available as part of the standard library instead of it being split across a bunch of third party libraries.

Are there talks to make that a reality in the next 18 months?

Is `async / .await` going to be just syntactic sugar around `Future` or is it going to necessitate an executor lives in the standard library?


I don't think it's going to happen. The "Rust way" is to simply live with the fact that it's not "batteries included". Cargo means that Rust comes with "batteries reliably delivered" kind of service, and I think that has been more beneficial for the community in the long run.

async / .await are going to turn functions into Futures, and they by themselves don't necessitate an executor any more than the Future trait itself.


Only if those libraries are upheld to a specific quality level and available in every single platform that Rust is able to target.

Here Java, .NET and future C++ are a clear winner, given that they are part of the standard library.


Async await is already syntactic sugar for futures, and does not require any specific executor.

There’s no plans to add an executor to std for all the reasons I’ve said.


Which makes Rust stand out versus what Java and .NET already do, and what C++ will do (even if they are only fully done by C++23).

Standard library executors are guaranteed to be available across all platforms supported by the compiler, with a validated level of quality for production loads.

A random implementation from the Internet, not so much.


There are no real advantages to putting Tokio in the standard library. Cargo makes it trivial to manage the dependency.

There are some real disadvantages to putting Tokio in the standard library, for example tying Tokio releases to the standard library release cycle and making it difficult/impossible for people to use non-latest versions of Tokio.


Look at all of the flack the JavaScript ecosystem gets for having everything as a "dependency", and in this thread 3 people are cheering that Rust makes the executor required to do anything with Futures a third party library. Wild.


In this case the Future trait puts the "standard" in "standard library" by providing an agreed-upon interface by which third-party crates can interact without conflicts. There's a high bar for adding new modules to the stdlib, and fundamental building blocks like Future are far, far easier to accept than, say, the entire Tokio stack.

Note as well that Tokio isn't the only library that can be used here, there's plenty of experimentation in this space.


In addition to what the others said there's another point: async fn's implement Future, otherwise you wouldn't be able to use them. So it's essential for Future to be in the standard library so that async fn can work.
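
A sketch of what that means in practice (async/await itself is still nightly-only at this release, but the shape is stable):

    use std::future::Future;

    // An async fn...
    async fn add_one(x: u32) -> u32 {
        x + 1
    }

    // ...has essentially this signature: a plain fn returning `impl Future`.
    fn add_one_desugared(x: u32) -> impl Future<Output = u32> {
        async move { x + 1 }
    }

    fn main() {
        // Nothing runs until an executor polls these; the types are the point here.
        let _a = add_one(1);
        let _b = add_one_desugared(1);
    }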


I think that allows library authors to implement against the trait, but then the user of that library can choose which concrete implementation they want to use in their app, so they don't have to have multiple and convert between them.


Is there any way to financially support the project? I have only been able to find a couple of Patreon pages of people working on specific libraries I haven't been using personally.


I don’t think there exists a way to support the Rust project directly, but there are some indirect ways: supporting Mozilla, which employs many of the core developers, is one. Another one, announced just this week, is to support Rust Analyzer, which is a project to create a next-level IDE-compatible Rust compiler: https://opencollective.com/rust-analyzer/expenses/new


Agree on supporting Rust Analyzer. The project recently published an update[1] on the status and future plans.

[1] https://ferrous-systems.com/blog/rust-analyzer-status-openco...


The project is sponsored by Mozilla so I guess the best way to support it is to use Firefox...


Here's Mozilla's donation page: https://donate.mozilla.org


Note that donations go to Foundation rather than Corp though. It's not clear how much - if anything - goes to dev projects like Rust.


Mozilla Corporation is fully owned by Mozilla Foundation.

https://en.wikipedia.org/wiki/Mozilla_Corporation

Corporation is separated from Foundation for legal and tax reasons, otherwise, it is the same org.

(But employees of Mozilla get their salary, and it's not possible to give money directly to Mozilla Rust developers.)


Yes I know, but I thought Foundation does a lot of less-techie things outside of Corp?


It is currently not possible to donate directly to Rust development. Those Patreons are the closest thing.


If anyone's particularly wealthy, they could personally sponsor a contract developer to spend time working on a desired feature. There's plenty of work to be done that's limited by the amount of labor available.


Writing modern safe C++ isn't really the hassle everyone makes it out to be. Besides smart pointers, Clang's sanitizers go a long way. I did try to pitch Rust at my corp, but the aforementioned safety checks are considered enough against the overhead of learning a new language, and I agree. Personally I don't like the Rc and Box syntax that's required to get even the simplest homebrew version of a linked list going; C++'s metaprogramming hacks are rivaling that.

I wish the stigma against "unsafe" C++ was a bit more rational. People who use it aren't the kind fresh out of bootcamps, and they mostly realise the gains and risks. But maybe I'm skewed by my job, which uses C++ and takes any risks seriously.


Seriously, what is this fascination with linked lists?

In comparison to array-based lists they're:

- less memory-efficient,

- lacking random access,

- worse for cache locality (so they can be up to orders of magnitude slower), and

- more complex.

They are nice to learn some principles in the context of an Intro to FP course but apart from that, meh.


The linked list is just a lowest-common-denominator example. I think he and others (and me when I bring it up) mean mutually linking data structures.

Almost any kind of data structure in Rust is extremely painful to do efficiently. You either go the unsafe route or you drown in a sea of boxes and cells.

On Reddit recently somebody made the ludicrous claim that you shouldn't have to write your own data structures in Rust - the Rust standard library should have everything you need.


On the real, large projects I have worked on for years (Firefox, rr, Pernosco) in C++ and Rust I have spent negligible time writing container data structures. Of course I create data structures, but almost always by combining hashtables, arrays and smart pointers and occasionally something more exotic from a library.

It's unfortunate that a lot of programming teaching has people implement data structures from scratch. It gives the false impression that that's what programming is largely about.


Maybe the project should have implemented more from scratch instead of cobbling together some Frankenstein data structure (and Firefox wouldn't be such a massive memory hog with poor performance)?

I guess it really depends on your job, skill level, and mentality. While I do use a lot of off-the-shelf pieces, their relationships don't always fit neatly, and shoehorning them can cause performance issues. (I'm not going to pay for a double indirection when I can avoid it entirely.)

But then again, I think this cookie-cutter approach to software is poor craftsmanship and often results in bloated, slow code that is way larger than it needs to be. I want to write something better than everybody else, not just make the same paint-by-numbers piece everybody else does.


I have a PhD in computer science from CMU, I have published many academic papers, and I was a distinguished engineer at Mozilla. The issue isn't skill level.

Randomly lashing out at Firefox is silly, especially at this time when it's getting so much praise for performance compared to Chrome. Firefox does indeed contain some complex, micro-optimized data structures for its core data (e.g. the CSS fragment tree and the DOM). It's just that it also contains a lot more code besides.

You wouldn't use an off-the-shelf hashtable to implement the mapping from a DOM node to its attributes. You should use an off-the-shelf hashtable to track, say, the set of images a document is currently loading. Like any kind of optimization, you optimize your data structures where it matters and you write simple, maintainable code everywhere else.


Slow down there, turbo. Nobody said anything about your skill (although a PhD doesn't necessarily mean a talented developer - some of the worst code I've seen comes from CS PhDs, where some only understand the highest polynomial in big-O but forget the other factors). And nobody cares a cent about you getting whatever award from Moz.

Nobody said anything about optimizing in inappropriate areas (honestly, where did you get that from?). This entire thread started because somebody didn't understand why people often use linked lists as an example of something difficult in Rust.

> Of course I create data structures, but almost always by combining hashtables, arrays and smart pointers and occasionally something more exotic from a library.

But that does scream "I don't really do a lot of performance oriented work". That you can somehow cobble together an apple out of a banana and a cat, probably using a metric ton of boxes and refcounts (that are just used to get around the borrow checker), doesn't surprise me if you are willing to make the readability and performance sacrifices.


> Maybe the project should have implemented more from scratch instead of cobbling together some Frankenstein data structure (and Firefox wouldn't be such a massive memory hog with poor performance)?

Sorry, but what is that supposed to mean? Have you looked at Chromium's (or any other modern browser's) memory usage? Firefox is modest compared to it, and always has been. Maybe it's not due to the browser engineers' low skill level, but due to the enormous complexity of the modern web? It's a separate operating system on top of your operating system.


Sadly it is not a stigma,

https://www.jetbrains.com/lp/devecosystem-2019/cpp/

34% don't use any kind of unit testing.

35% don't use any kind of static analysis tooling.

36% don't use any kind of guidelines.


I wouldn’t rely on the JetBrains survey to tell the whole story for C++. TONS of places use Visual Studio for C++ and would never even know about a JetBrains survey.


Naturally it isn't representative of the whole industry, but it does show a trend.

I can also post an ISO C++ one with similar results.

Or the video from Herb Sutter's talk at CppCon, where only 1% of the audience confirmed using any form of static analysers.

As an anecdote, many enterprise places that use VC++ are still using versions like 2008 or 2010 and writing code as if MFC/ATL had just been released.

The same kind of shops that are running Red Hat Enterprise Linux 5, some pre-8 Java version, and such.


Those might be even worse.

I think my experience correlates with the survey. Most lower level code that I have seen used neither unit tests nor any good structuring. At least in close-to-hardware projects that seems to be more the rule than the exception. I think this is due to many contributors there not having a pure software-engineering background. They often have not worked in other higher level stacks and therefore are not familiar with the practices.


In my opinion, the advantages of Rust over C++ are not so much the borrow checker, but all the other features. In particular the error handling. I understand the reasoning for implementing exceptions in C++, but I really don't like their implicit nature. Algebraic data types are really easy to use, and with the '?' operator using them is very clean.
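
For anyone unfamiliar, a small sketch of that style: errors are ordinary values, and `?` propagates them with no invisible control flow (the config-file path is made up).

    use std::fs;

    fn read_port(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
        let text = fs::read_to_string(path)?; // io::Error returned early, converted via From
        let port = text.trim().parse()?;      // ParseIntError handled the same way
        Ok(port)
    }

    fn main() {
        match read_port("config/port.txt") {
            Ok(port) => println!("listening on {}", port),
            Err(e) => eprintln!("could not read port: {}", e),
        }
    }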

Having proper metaprogramming is also really great. Sure, you can definitely go overboard, but a few things are only possible with proper metaprogramming, like quickly printing the value of a struct or enum for debugging, and easy serialization/deserialization (like serde does). It's just a huge boon for doing introspection.
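
And a sketch of the derive side of that (assumes the serde and serde_json crates with serde's "derive" feature enabled):

    use serde::{Deserialize, Serialize};

    #[derive(Debug, Serialize, Deserialize)]
    struct Point {
        x: i32,
        y: i32,
    }

    fn main() -> Result<(), serde_json::Error> {
        let p = Point { x: 1, y: 2 };
        println!("{:?}", p);                   // Debug printing for free
        let json = serde_json::to_string(&p)?; // serialization for free
        let back: Point = serde_json::from_str(&json)?;
        println!("{:?} -> {} -> {:?}", p, json, back);
        Ok(())
    }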

But it's not just the particular features that are important, it's the fact that best practices are integrated into the language. There are standard solutions for most things: error handling, unit tests, build system, package management, formatting style, etc. Sure, if you have a long-running C++ project, you're gonna have answers for all that, but the consistency matters when you want to integrate libraries.

I think if you're going to use Rust, you should try to play to its strengths rather than retrofitting existing C++ idioms onto it. There are both real advantages and very real costs to doing this, and you certainly shouldn't just switch an existing C++ codebase to Rust.


Can anyone recommend an http/rest api framework for Rust? I'd love to look at using it, last checked a couple of years ago and I don't remember finding anything that looked particularly stable / production ready.


Actix web and rocket are the two most popular.

HTTP stuff benefits a lot from asynchronicity, and so there’s been a lot of churn over the past few years as this story shakes out. We’re almost there though!


I can second the suggestion for actix-web, it is a joy to use and I believe it is the only HTTP framework that has reached version 1.0 :)

Have a look at the sort of things you can do with it https://github.com/actix/examples


Iron could be worth a look: http://ironframework.io

Haven’t used it extensively, but it’s pretty feature-rich and looks reasonably well-maintained.


Yeah, Go or Python both have good ones. Don't torture yourself - this isn't rust's wheelhouse.


Would you please not break the site guidelines by posting flamebait? Your comment would be fine without the last bit.

https://news.ycombinator.com/newsguidelines.html


I don't feel quick one-off rest servers where performance is dominated by HTTPS/SSL (or even just play HTTP) are anywhere near rust's sweet spot. You pay heavy penalty for rust, and doesn't seem to be to be an area where it is appropriate.

Stop censoring opinions you don't like.


I'm not concerned about your opinions. To the dim extent that I'm aware of them, I probably agree with you. But if you continue to violate the HN guidelines we're going to have to ban you. I don't want to do that, because it's clear that you know a lot and also have good things to say. But at a certain point it's just not worth the damage it causes to the community. If you value being here, you need to abide by the contract, the same as other users do.


Depends on what you need; on benchmarks, like Techempower, actix is absolutely killing it.


Just that page seems to be down right now.


It's working for me.


Strange. I'm just going to read the markdown instead, https://github.com/rust-lang/blog.rust-lang.org/commit/d7214...



Can someone point to an explanation of how to speak/read Rust? Exactly how would you say "use std::fs::File"? How does one pronounce ::? It seems simple enough, but things get much more complex by the third or fourth chapter of the Book.


I would just read the words, "use" "standard" "fs" "file" and wouldn't pronounce the "::" at all.


so the " :: ' has no verbal meaning, it's just a way of linking traits, libraries, functions,or crates? Not easy to explain


It's for dereferencing namespaces. "a::b::c" is analogous to a file system path /a/b/c.
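
A tiny sketch, if it helps: `::` just separates path segments, so you can read straight through the identifiers.

    mod graphics {
        pub mod shapes {
            pub fn circle() -> &'static str {
                "circle"
            }
        }
    }

    fn main() {
        // read it as "graphics, shapes, circle" - the :: itself isn't voiced
        println!("{}", graphics::shapes::circle());
    }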


I found what I was looking for in the appendix

B-3 Path-related syntax: `::path` — "Path relative to the crate root (i.e., an explicitly absolute path)"


std is pronounced "stud" with a very, very short "u".


I never pronounce separators like ::, ., ->... And braces, brackets, parentheses and angle brackets I don't pronounce either. Programming languages are made not to be sung, but to be written and read. Read silently, I mean. While I understand what I'm reading, I don't bother to pronounce it in a way that would be digestible to an accidental listener.

If such a strategy stands between you and understanding, I'd suggest you use silence gaps of different lengths. Like 0.5s for a space and 0.2s for ::.


You don't pronounce "->" as "returns", even in your thoughts?


I used -> above in a C++ sense, but I tried it while watching myself with Rust, and it seems that I do not pronounce ->. I pronounce just the identifiers, keeping track of the context (and therefore the semantics of the identifiers) by other means.


Does Rust have its own linear algebra, image processing, computer vision libraries in pure rust?

I would love to see how such libraries are built from scratch in a low level language. I feel like I would learn a lot as well.


Oooh! Maybe this version can be compiled deterministically!


Is there something specific you’re thinking of here? We do generally try to keep things reproducible, though sometimes things slip in. There’s a tracking issue for this, IIRC.


This has been possible at times in past versions of Rust, though I don't think there's currently any automatic regression test to ensure it so the ability comes and goes. Here's the tracking issue: https://github.com/rust-lang/rust/issues/34902



