Be careful, though, to use an executor that can drive the new futures, and watch out for libraries that call `tokio::spawn`, which will panic.
Some executors for the new futures:
And a web server to try out async/await on nightly:
A compatibility layer from 0.1 to 0.3 and back is in `futures-util-preview` when compiled with the `compat` feature flag.
It's based mostly on this article, which predates std::future: https://www.viget.com/articles/understanding-futures-in-rust...
care to expand on that?
This is hopefully solved by the Runtime crate; crates using it will work against a more generic interface.
I really like the safety guarantees offered at compile time, and really do think that we should move away from C-like languages if we ever want to control the tsunami of security flaws, but I can't stop wondering if Rust isn't (perhaps needlessly) complicating things and scaring off (non-C++) programmers.
 Sidenote - I find it really fascinating how Rust can also use the stronger static checks to prevent things like race conditions in a way few (/no?) other languages can.
A concrete example that I've run into recently when trying to write C++ code. I figured that, for safety reasons, I needed to make my type be move-only. I then had to spend about two hours trying to figure out why the program was blowing up. The reason was that I was reusing the variable after moving from it, and the compiler never gave any warning (even on -Wall -Werror) telling me that what I was doing was wrong. In Rust, the same situation would be a compiler error.
That way you get runtime stability if you screw up, but no weird side-effects.
The two hours seem on the high end, if someone's able to, e.g., use ASan and the program crashes reproducibly.
So you either have a C++ shop where everyone is on board regarding security (with the caveat of third-party dependencies), or no one cares and people write something along the lines of C with a C++ compiler, without any kind of static analysis.
Relying on external tooling means it usually gets ignored if it is not enforced. After all, the first version of C's lint dates back to 1979.
Sadly, JetBrains' latest survey results show exactly that.
So having safety as integral part of the language semantics matters a lot. Defaults matter.
But it definitely can't be? There are plenty of open-source projects (Chromium, Firefox) that develop and leverage state-of-the-art static analysis tools and best practices. It's very clearly not enough, and the costs (build/test time) are really significant.
Only with further increase in lawsuits and returned faulty software, like in other commercial areas, will companies start paying attention to QA budgets.
Static analyzers work better, but often have a terrible signal-to-noise ratio. I think Rust can on average prevent more errors than all of those things out of the box, which is impressive.
The downside is obviously the increased complexity, and that it sometimes feels like one is forced to work around the limitations of the "static analysis tool". Which likely comes from the fact that the borrow checker is a kind of analysis tool whose annotations are included directly in the language.
Regarding Rust having a kind of analysis tool built directly into the language: fully agree, that is what is so nice about safer systems languages, and what I liked in the Algol/Wirth languages.
Since most new cars are Internet connected and have whole hosts of complex safety features dependent on software correctness, I sure do hope you are wrong about this.
They not only make writing software a lot more difficult and expensive, they also restrict the kind of software you can write.
Considering how many things Rust shows can be enforced in the compiler, it's clear to me that a better compiler/language is a better way to address this problem than QA people.
Also the built in testing with cargo test makes TDD so much more attractive.
There was a funny discussion on the Rust subreddit where even some language contributors started having doubts about that complexity. One of them seemed trapped in his own programming-language-theory ivory tower; the other was trying to convince him that they are losing developers if they keep adding stuff to the language.
That discussion was a clear hint that the Rust developers don't have C and C++ programmers in mind when designing Rust. They have their own ideas about how a modern systems programming language should look, and they're pursuing them. Perfectly fine, but we need to correct the misconception that C or C++ programmers will rush en masse to learn Rust.
Rust can prevent all data races but not all (any?) race conditions. Related question: can you use the type system to catch a subset of race conditions?
That being said, I think Rust macros are much worse than C++'s, if you ignore templates.
Don't get me wrong, I really like Rust. I just think that its macros make for some of the most unreadable code I've ever seen.
I've written fancy macros for assembly language programming to support my own looping, iterating, argument passing etc. but I noticed that the other programmers on the team weren't interested in using them.
On the other hand, I'm so grateful to John Wiegley for his use-package macro for emacs lisp.
They always have the alternative of reading the expanded code, which is very similar to what the author of the macro could have written by hand instead of the macro.
Clean C++ 14/17 is less cluttered.
> I have a very hard time grasping all the functionality/concepts

is (partially) because of this:
> if we ever want to control the tsunami of security flaws
Most focus on how the borrow checker works "against" you, but that is not even the hardest part. Performance and how to manage memory are more "painful" in Rust.
BECAUSE NOBODY KNOWS HOW TO WRITE FAST & SAFE CODE.
Not ALL the time. Without the compiler's extra help, your assumptions can go wrong in invisible ways...
- See what is costly
- See what is unsafe or not
- See who owns what
- See what is on the heap or the stack
The borrow checker is just a part of it.
Python and Go pick up your trash for you.
C lets you litter everywhere, but throws a fit when it steps on your banana peel.
Rust slaps you and demands that you clean up after yourself.
– Nicholas Hahn
> C lets you litter everywhere, but throws a fit when it steps on your banana peel.
> Rust slaps you and demands that you clean up after yourself.
> – Nicholas Hahn
This is brilliant and will save me time explaining language differences. Thanks for sharing.
C lets you litter everywhere, but if you or anyone else steps on your trash it will tackle you to the ground. Usually. Sometimes it ignores the first 10 times and does it on the eleventh.
Rust snatches up your trash as soon as you're done with it, but if it can't reason well about when you'll be done using it, it will make you fill out a form explaining how you plan to use it. It will also slap you silly if you try to deviate from that plan.
You need to give it a lot of time. Some of the ideas are really not familiar. I don't think Rust presents some of the ideas perfectly, but I can imagine that in 20 years there might be a whole slew of languages that borrowed ideas from rust and maybe make them appear more idiomatic.
Rust itself borrows a lot from functional programming while topping it off with lesser-known things like lifetimes, so no wonder it feels alien to a lot of people. In fact, the following:
> I can imagine that in 20 years there might be a whole slew of languages that borrowed ideas from rust
is actually already happening, except it's FP that's inspiring contemporary language designers (including Rust team).
To me personally, even limited familiarity with Haskell probably helped a lot back when I started tinkering with Rust; it all felt more familiar to me than it would to the average C or Python dev.
They will be almost at home with Rust.
The biggest hurdle is dealing with the borrow checker when writing GUI code (hello `Rc<RefCell<T>>`), but for other kinds of applications it is quite OK.
Also, it says a lot that Ada, C++, and Swift are adopting the same ideas regarding the borrow checker, even if the implementations have some constraints given backwards compatibility.
There are a couple of conference talks about them.
Naturally smart pointers predate Rust, I used them back when Windows 3.1 was considered recent, alongside OWL.
However they aren't the same thing, introduce runtime overhead and don't prevent use-after-free, or use-after-move.
This is an organizational problem, not a language / tool problem.
However one needs to see the full picture, not only language grammar and semantics.
If I want to create a GUI application today, I will definitely use a mix of .NET or Java, with C++ for the low-level performance bits, because Rust is lacking in that area, in spite of being a safer language.
So, if C++ takes a lesson or two from Rust, and helps developers like myself stay productive while improving the security of the whole stack, then so much the better.
And if Rust continues to improve, maybe one day Android Studio, XCode, VS, will provide an end-to-end mixed language experience, and OS frameworks, for Rust just like they do for C++ nowadays.
I’ve used the Clang experimental lifetime analyzer on Godbolt, and I welcome improved tooling.
I believe your parent is referring to the Core Guidelines and the Guideline Support Library.
Apple also demoed their XCode integration at WWDC 2019, on the talk about Objective-C, C and C++ support.
However here is the clang talk I was referring to.
On the contrary, Rust allows us (non-C++ programmers) to use a systems language without fear of breaking something. I'm a Rust developer with a Ruby background, and I'm loving the language more and more.
The biggest problem with Rust right now is actually its novelty and lack of maturity, which makes using it at this time a lot more problematic than it should be. But Java and Python were once "new and unproven" languages, too.
Usually the performance folks that get scared are the ones that put all of them into the same basket.
And when you're not working alone you will spend a lot of energy discussing which C++ subset you will use, and enforcing that.
Rust has accumulated its complexity over four years, and it's already comparable to C++. The thing that worries me the most about Rust is what the language will look like in another 10 years.
It’s not. C++ constructors alone rival the entirety of Rust, and grow in complexity with every release.
You’re just so used to the unfathomable complexity of C++ that you don’t realise it exists anymore.
C++ is not slowing down. C++ is on the verge of deprecating STL-style iterators in favour of Ranges, and modules and concepts are imminent. Template metaprogramming is being superseded by constexpr. Of course STL iterators, header files, and template metaprogramming are still going to be around; people will just need to learn all of it if they want to work on a variety of C++ projects.
I don't find anything about the language to be particularly more complex than, say, Python or C++.
If your program is running on a server, reading from a DB and producing simple JSON (like, I assume, most of HN's audience), Rust is probably not what you want. There are plenty of more pragmatic approaches. At least I think it's not the right language for my employer's department (and it pains me to say that).
If what you want is to write code that runs on bare metal, then consider Rust.
The closest I got to bare metal, i.e. code that runs without an OS, was when I developed stuff for “small” MCUs, like the Intel MCS-51 and Motorola COP8. Rust supports neither: https://forge.rust-lang.org/platform-support.html
I’ve developed for the Nintendo Wii; nominally there’s an OS, but it’s a very “thin” one, mostly statically linked libraries provided by Nintendo. Rust can’t compile for that platform either; it only supports PowerPC Linux.
I’m currently working on low-level software running on a bare Linux kernel. Rust apparently supports ARM Linux, but C libraries are literally everywhere, both kernel APIs and user mode: drm, kms, gles, udev, freetype, low-level kernel stuff like tons of ioctl calls for SPI and USB I/O, wpa_supplicant, and more. That’s too much native C stuff to integrate; using a foreign language causes too much friction.
I can think of bare-metal software for which Rust is good. If I were working on an x86 bare-metal hypervisor, I would look at Rust very closely. Platform support is good, not many libraries are needed, and the project is extremely security-sensitive, so using Rust would probably pay off in the long run. But I don’t think that’s the rule; it looks like an exception to me.
There’s `nix::sys::ioctl` in the third-party `nix` crate (not the stdlib), but there are also issues with ergonomics, e.g. https://stackoverflow.com/q/51898034/126995 These variable-length structures are used a lot in practice, not just for HID; SPI and USB bulk protocols use similar things. They’re a pain to consume from any language except C and company (C++, Objective-C). C# also has very good FFI, but variable-length C structures at API boundaries still require manual marshalling.
There are third-party bindings for drm, https://github.com/rusty-desktop/libdrm-rs, but apparently that project is not maintained, and I'm not sure it works on ARM. It contains more than 3000 lines of code that would need support. The equivalent C headers, xf86drm.h and xf86drmMode.h, are not small either (800 and 500 lines, respectively), but the important difference is that the C headers are already supported by the Linux kernel, so I don’t have to.
I wanted to point at a lower abstraction level than a typical corporate application, but higher than bare metal.
Something just above the OS, like any command line.
There are some middle-ground options though, like Zig, which is a nice, simple C-like language with less undefined behavior and no nulls; safer, but not offering memory safety.
For example, Rust puts &T and &mut T at the forefront, which leads to a slightly alien way of handling aliasing: it's all or nothing. This makes some things feel way harder than they are in C, but helps out the optimizer (every pointer is now restrict/noalias).
A different language could emphasize (the equivalent of) &Cell<T>, which allows shared mutability but restricts certain "shape changing" mutations. Most of those C patterns would feel easy again, with a bit less of Rust's non-safety-essential guarantees.
The same could be done for struct fields if the type system knew about it, and the whole thing could just use normal syntax.
Sharing between threads still needs &T or &mut T (or an owned value), but that's not usually involved in the painful cases.
I'm still looking forward to const generics and a more usable const fn. In a way it's a shame Rust doesn't have a purely constant function in the interim. But a hybrid function will be more versatile once it allows some form of looping.
The last thing on my wishlist is extern types (aka opaque types, aka void *). The current workaround, using a pointer to a [i8; 0] type, relies on LLVM's particular handling of such pointers and always looks weird in Rust.
It's how I interface with C and C++ callback functions.
I'd love to blog about this at some point, but I think the real big win here was being able to use `?` to exit early in async code.
I'm excited to see what the future brings here - we're still pretty new with async/await and building our own internal patterns.
Also very happy to never be forced to write `.map_err` again.
When we get some spare bandwidth we'll definitely see if we can get some extra productivity out of using &self. So much of our existing futures code is either self-less or uses some macro code to generate glue to allow us to use Arc-typed self - this is to allow a bunch of async core code to interop with these async platform drivers.
Been on a crash course getting better at architecting Rust programs for nine months. Luckily the Rust ecosystem and toolchain is getting even more amazing each time around so we can justify some work to refactor and try new approaches.
Our current approach du jour uses callback handles in combination with channels to let the FFI code trigger a real Rust future's completion. This has worked reasonably well, but I'm sure we'll experiment with a few other patterns.
We don't specifically interface with Java Futures (no particular reason other than it hasn't seemed necessary to add that complexity), but that would be a pretty cool library to build on top of the existing Rust jni crate.
One thing I'd like to pass by the Rust community is our internal "teleporter" that allows you to borrow an object mutably on one thread and then "teleport" an immutable ref to that object to any other thread using only a u64 handle (with obviously huge unsafe flags). This has been very handy for some of our async ffi work.
I'm hoping to get a few more Rustaceans onboard (aggressively hiring!) over the next few months so we can focus more deeply on some of these interesting problems.
Instead of doing JNI calls, send Android messages between NDK and Framework threads.
There is the setup of MessageHandler on both sides, but long term they are more productive than JNI boilerplate.
One example would be SDL, although they use a mix of JNI and messages (search for SDLCommandHandler).
EDIT: Sorry forgot about the C side (counterpart is Android_JNI_SendMessage).
Would love to see a blog post (or better yet, library!) for this - sounds interesting!
The wording here confuses me. They say they took the implementation from hashbrown, but then finish by saying that the implementation is different. What am I missing?
Hasher => Takes the key and turns it into a hash (in this case a 64bit hash).
HashMap => Takes (key, value) pairs + a hasher and then does "magic" to get a fast lookup based on key+hasher.
The "magic" part is what changes. (which include thinks like which datastructures are used to store keys/values, how deletion is handled, how hash collisions are handled, how the given hash is used to lookup keys, etc.).
If this is for fun / education then learn Rust. It's conceptually nicer and doesn't have legacy cruft from decades of industrial use.
The first component is conventions and idioms for managing allocations, and Rust will force you into (and support) some good (but nontrivial) ones.
The second component is self-discipline. Look at the long history of vulnerabilities in C and C++ code that are due to carelessness -- of an oops that a programmer made when they knew better.
If what's being considered is Rust as a stepping stone to C++, how much does Rust help with the first component, and is Rust even counterproductive for the second component?
Regarding counterproductive for the second component, you might've seen a conventional practice of grinding the Rust Clippy until the code compiles. I don't know how that affects the development of self-discipline (e.g., maybe some people try to make a practice of being Clippy-free on every compile attempt?), but it seems a reasonable and interesting question to ask.
(I'm not dissing Rust for this. I mostly like Rust, and would be happy to be working in/on it.)
I suggested that possibility, but is it generally true, or something personally true for you, or are you advocating that it would be good if people did that?
So yes, if you work enough with the borrow checker, your brain will form another, logical one, and that one you can use when writing C/C++ code. I am much more confident now in learning/writing C/C++ than before I learned Rust, because I feel like I can form a Rust-like design (tree-based, with clear ownership/lifetimes for objects) and put that down using C/C++ syntax.
Definitely recommend using Rust as a stepping stone to learning production-grade C/C++.
There are of course, fairly significant differences in idioms, and things like that, but that’s true for every language switch.
So you have a standard trait, official to the language, that is useless without a third-party library?
The reason they’re external is, depending on what you want to do, you’ll want an executor with different characteristics. An embedded executor has very different needs than a network IO executor than a GUI event loop. By stabilizing the trait, we can ensure library compatibility: everyone agrees on the same interface.
Given that we’ve invested so much in making it easy to add libraries to your project, including a single one wouldn’t be appropriate.
Are there talks to make that a reality in the next 18 months?
Is `async / .await` going to be just syntactic sugar around `Future` or is it going to necessitate an executor lives in the standard library?
async / .await are going to turn functions into Futures, and they by themselves don't necessitate an executor any more than the Future trait itself.
Here Java, .NET, and future C++ are clear winners, given that their executors are part of the standard library.
There are no plans to add an executor to std, for all the reasons I’ve said.
Standard library executors are guaranteed to be available across all platforms supported by the compiler, with a validated level of quality for production loads.
Random implementation from Internet not so much.
There are some real disadvantages to putting Tokio in the standard library, for example tying Tokio releases to the standard library release cycle and making it difficult/impossible for people to use non-latest versions of Tokio.
Note as well that Tokio isn't the only library that can be used here, there's plenty of experimentation in this space.
The Corporation is separate from the Foundation for legal and tax reasons; otherwise, it is the same org.
(But employees of Mozilla get their salary, and it's not possible to give money directly to Mozilla Rust developers.)
I wish the stigma against "unsafe" C++ was a bit more rational. People who use it aren't the kind fresh out of bootcamps and mostly realise the gains and risks. But maybe I'm skewed by my job which uses C++ and takes any risks seriously.
In comparison to array-based lists they're:
- less memory-efficient,
- do not allow random-access,
- worse for cache locality (so can be up to orders of magnitude slower) and
- more complex.
They are nice to learn some principles in the context of an Intro to FP course but apart from that, meh.
Almost any kind of data structure in Rust is extremely painful to do efficiently. You either go the unsafe route or you drown in a sea of boxes and cells.
On Reddit recently somebody made the ludicrous claim that you shouldn't have to write your own data structures in Rust; the Rust standard library should have everything you need.
It's unfortunate that a lot of programming education has people implement data structures from scratch. It gives the false impression that that's what programming is largely about.
I guess it really depends on your job, skill level, and mentality. While I do use a lot of off-the-shelf pieces, their relationships don't always fit neatly, and shoehorning them can cause performance issues. (I'm not going to pay for a double indirection when I can avoid it entirely.)
But then again, I think this cookie-cutter approach to software is poor craftsmanship and often results in bloated, slow code that is way larger than it needs to be. I want to write something better than everybody else, not just make the same paint-by-numbers piece everybody else does.
Randomly lashing out at Firefox is silly, especially at this time when it's getting so much praise for performance compared to Chrome. Firefox does indeed contain some complex, micro-optimized data structures for its core data (e.g. the CSS fragment tree and the DOM). It's just that it also contains a lot more code besides.
You wouldn't use an off-the-shelf hashtable to implement the mapping from a DOM node to its attributes. You should use an off-the-shelf hashtable to track, say, the set of images a document is currently loading. Like any kind of optimization, you optimize your data structures where it matters and you write simple, maintainable code everywhere else.
I never said anything about optimizing in inappropriate areas (honestly, where did you get that from?). This entire thread started because somebody didn't understand why people often use linked lists as an example of something difficult in Rust.
> Of course I create data structures, but almost always by combining hashtables, arrays and smart pointers and occasionally something more exotic from a library.
But that does scream "I don't really do a lot of performance-oriented work." That you can somehow cobble together an apple out of a banana and a cat, probably using a metric ton of boxes and refcounts (that are just used to get around the borrow checker), doesn't surprise me if you are willing to make the readability and performance sacrifices.
Sorry, but what is that supposed to mean? Have you looked at Chromium's (or any other modern browser's) memory usage? Firefox is modest compared to it, and always has been. Maybe it's not due to the browser engineers' low skill level, but due to the enormous complexity of the modern web? It's a separate operating system on top of your operating system.
34% don't use any kind of unit testing.
35% don't use any kind of static analysis tooling.
36% don't use any kind of guidelines.
I can also post an ISO C++ one with similar results.
Or the video from Herb Sutter's talk at CppCon, where only 1% of the audience confirmed using any form of static analysers.
As an anecdote, many enterprise shops that use VC++ are still on versions like 2008 or 2010, writing code as if MFC/ATL had just been released.
The same kind of shops that are running Red Hat Enterprise Linux 5, some pre-8 Java version, and such.
I think my experience correlates with the study. Most lower-level code that I have seen used neither unit tests nor any good structuring. At least in close-to-hardware projects that seems to be more the rule than the exception, probably because many contributors there don't have a pure software-engineering background; they often haven't worked in higher-level stacks and so aren't familiar with those practices.
Having proper metaprogramming is also really great. Sure, you can definitely go overboard, but a few things are only possible with proper metaprogramming, like quickly printing the value of a struct or enum for debugging, and easy serialization/deserialization (like serde does). It's just a huge boon for introspection.
But it's not just the particular features that are important, it's the fact that best practices are integrated into the language. There are standard solutions for most things: error handling, unit tests, build system, package management, formatting style, etc. Sure, if you have a long-running C++ project, you're gonna have answers for all that, but the consistency matters when you want to integrate libraries.
I think if you're going to use Rust, you should try to speak to its strengths rather than retrofitting existing C++ idioms onto it. There are both real advantages and very real costs to doing this, and you certainly shouldn't just switch an existing C++ codebase to rust.
HTTP stuff benefits a lot from asynchronicity, and so there’s been a lot of churn over the past few years as this story shakes out. We’re almost there though!
Have a look at the sort of things you can do with it https://github.com/actix/examples
Haven’t used it extensively, but it’s pretty feature-rich and looks reasonably well-maintained.
Stop censoring opinions you don't like.
B-3 Path-related syntax
::path Path relative to the crate root (i.e., an explicitly absolute path)
If such a strategy stands between you and understanding, I'd suggest using silence gaps of different lengths, like 0.5s for a space and 0.2s for `::`.
I would love to see how such libraries are built from scratch in a low level language.
I feel like I would learn a lot as well.