Rust 1.37.0 (rust-lang.org)
393 points by pietroalbini 37 days ago | 183 comments

Rust has evolved from something safe and simple (once you grasp the different types of pointers) to something way less accessible.

Rust source code is now full of annotations, and very hard to read and maintain.

Traits have evolved from a nice feature to something overused everywhere.

Basic crates use so many generics, impl Trait and abstractions over abstractions over abstractions that it's really difficult to follow what concrete types you need to satisfy these.

When one of the oldest and still-unanswered issues in the Hyper HTTP library is "how do I read the body?", there is a conceptual problem.

And things got even more complicated with Futures. async/await don't solve much, as besides textbook examples, these are unusable without deep knowledge of how everything works internally. Pin/Unpin makes things even more complicated.

I'm an early Rust adopter and advocate. However, I wouldn't consider it any longer for new projects.

My productivity in Rust has become very low compared to other languages such as Go or even C, that let me easily express what I want to do.

With Rust, 90% of my time is spent trying to understand how to express what I want to do. Such as what types I have, what 3rd party crates expect and how to convert them.

> something way less accessible

This is subjective, but there are at least a couple of big counterexamples. Non-lexical lifetimes are a huge benefit for newcomers. I've gone through my old projects and just deleted all the nested blocks that I used to need to satisfy the borrow checker. Everything just works now. Also, being able to match on `&T` or `&mut T` without putting the `ref` and `ref mut` keywords in each non-`Copy` binding is another pretty big win. That's an entire keyword that we no longer need to teach to newcomers.
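As a small illustration of the match-ergonomics point, a sketch (my own example, not from the thread) of matching through a reference without `ref`:

```rust
// Match ergonomics (Rust 2018): matching on a reference no longer
// requires `ref` in each non-Copy binding.
fn describe(opt: &Option<String>) -> String {
    match opt {
        // `s` is automatically bound as `&String`; before match
        // ergonomics this arm had to be written `Some(ref s)`.
        Some(s) => format!("got {}", s),
        None => String::from("nothing"),
    }
}

fn main() {
    let v = Some(String::from("hello"));
    println!("{}", describe(&v));    // got hello
    println!("{}", describe(&None)); // nothing
}
```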

> Basic crates use so many generics, impl Trait and abstractions over abstractions over abstractions that it's really difficult to follow what concrete types you need to satisfy these.

Again this is subjective, but not following concrete types is the entire point of `impl Trait`. The type of `foo.iter().map(...).filter(...).step(...).take(...)` is a monster. Life is better when we don't have to type it. There are also new capabilities that come along with it: APIs don't have to commit to returning a certain kind of unboxed iterator, and unboxed closures can now be returned.
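For instance (a hypothetical function with a shortened iterator chain), the concrete return type below would be a nested `Filter<Map<...>>` monster if it had to be spelled out, and boxing it would cost an allocation:

```rust
// With `impl Trait` the caller sees "some iterator of i32" without the
// function committing to (or spelling out) the concrete adapter type.
fn evens_doubled(data: &[i32]) -> impl Iterator<Item = i32> + '_ {
    data.iter().map(|x| x * 2).filter(|x| x % 4 == 0)
}

fn main() {
    let v = vec![1, 2, 3, 4];
    let out: Vec<i32> = evens_doubled(&v).collect();
    println!("{:?}", out); // [4, 8]
}
```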

> async/await don't solve much, as besides textbook examples

See the real code examples here: https://docs.rs/dtolnay/0.0.3/dtolnay/macro._01__await_a_min...

Is there a way to ask Rust to tell me the type of an expression like `foo.iter().map(...).filter(...).step(...).take(...)`? I often feel very much in the dark about Rust types and some kind of command to show me Rust's interpretation of the type of an expression in context might help a lot.

The hack that I've seen people use the most is to write a line like this:

    let () = foo;
Then rustc will print out an error that includes the type of `foo`.

Scott Meyers also suggests the same sort of thing in Effective Modern C++ when not using an IDE.

Personally, I use the technique a lot for debugging template errors in the language.


template<typename T> struct TD;

Instantiate this as `TD<decltype(foo)>` to get the full name of the type.

Rust’s approach does seem more concise.

Note that you don't get the full name of the type with a bad assignment in Rust; you only get the part of the type that was deduced before type checking failed. A generic type might have undeduced blanks, e.g. `Vec<_>`. A side-effect of type inference.

Nice, thanks!

Just try assigning the expression to a variable of definitely the wrong type, and the compiler will print out the full type in the error message, e.g.

let x: () = foo.iter().map(...).filter(...).step(...).take(...);

In the old days in C++ we'd try to assign it to int and look at the compiler error message. (That literally was the instruction for how to figure out the type to use in one of Boost's libraries in pre-C++11 times ...)

In VSCode, you can assign it to a variable and then mouse over to see what type it infers.

This usually works, but type inference in RLS (or Rust-Analyzer) breaks often enough that it's not a complete solution.

IntelliJ shows the inferred types by default. However it Gasversorgung it’s limitations with complex Rust types.

> However it Gasversorgung it’s limitations

I guess this is some autocorrect error? What did you mean to say?

Oh wow, I just saw that now. It was certainly autocorrect on phone. Should only have been "has".

If you are curious: Gasversorgung would be "gas supply".

    let _ = 5 + foo.iter().bar().quux().asdf();
The compiler will now tell you that <your type> cannot be added to an `i32`.

    const foo = await callApi(url)
Why do we need anything more complex than this?

Translating that to Rust:

    let foo = call_api(url).await;
    println!("{}", foo.bar);
Syntactic differences aside, what's complex about it?

None of what you're describing really tracks for me, having written many thousands of lines of Rust and interfaced with many third-party crates.

"async/await doesn't solve much", like your whole post, confuses me.

I think right now a lot of rust libraries are "hyper generic" because they're intended to be lower level and to be built on top of. Like hyper, as an example. I've never found it particularly hard to work with generic libraries though.

As a beginner, I think it's largely a matter of documentation.

In Go, you can very easily see all the methods that a type implements. However, you don't see which interfaces it implements because it's implicit.

In Rust, the methods are often hidden away in trait implementations, so it's harder to see what you can actually do.

It seems like this might be fixed if the doc tool generated an index to all the methods, as a reference? Or maybe there already is an easier way that I overlooked.

cargo doc does do that already, not sure what you mean.

IDEs should also be autocompleting trait impls.

My experience is that it will only autocomplete traits that you have imported. Which sucks because what if I don’t know what traits my type uses? It’s crazy

IntelliJ will suggest unimported trait methods and will automatically import the required trait if it is selected.

For example, on std::string::String, the + operator is hidden away in the Add trait implementation. (In this case it's reasonably obvious anyway.)
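A small sketch of both points from this subthread: a trait method that won't resolve without its `use`, and String's `+`, which lives in an `Add` impl but needs no import when used as an operator:

```rust
// Methods that come from a trait are only callable when the trait is in
// scope; without this import, `write_all` on a Vec<u8> fails to compile
// with "no method named `write_all` found".
use std::io::Write;

fn main() {
    let mut buf: Vec<u8> = Vec::new();
    buf.write_all(b"hello").unwrap(); // provided by the io::Write impl for Vec<u8>
    assert_eq!(&buf[..], b"hello");

    // Operator syntax is the exception: String's `+` lives in its Add impl,
    // but using the `+` symbol doesn't require importing std::ops::Add.
    // Calling it by name, `.add("bar")`, would.
    let s = String::from("foo") + "bar";
    assert_eq!(s, "foobar");
}
```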

Ah, interesting. I haven't really noticed that sort of thing myself.

Async+Await+Futures isn't needed at all, performs worse than a basic epoll loop, is more confusing, and less able to handle complex flow. But they add a lot of complexity. The futures paradigm makes the hard stuff almost impossible to do efficiently.

It took C++ years to blow its complexity budget. Rust: "Hold my beer."

> Async+Await+Futures isn't needed at all

Define "needed"? Code with async/await is radically simpler compared to code without it - this is not just basic sugar but really, really important due to Rust's borrow semantics. There is code I have written that required Rc/RefCell ceremony to get around the borrow checker that will be completely handled by async/await.

You can read more about this here:


> performs worse than a basic epoll loop

There are open issues, and afaik all of these are solvable problems with known/theorized solutions.

> is more confusing, and less able to handle complex flow

Sorry, that's absolutely just not the case. Even if "confusing" is subjective, you cannot tell me that writing complex loops/control flow is easier in futures than in async/await. Just look at the examples here:


There are programmers who mostly work on code that does a lot of work with the CPU, and needs to be multi core. For these applications, async/await has little to offer.

People who are puzzled by async/await often fall into this camp.

Why would they not just use threads?

Async and await solve a different problem to epoll (and do so in a cross-platform manner). A “basic epoll loop” won’t help you for file IO on Linux, for example.

I get the feeling that Rust is going to become, like C++ and Haskell, a language that you have to "live in". That is, if you take a break from the language for a few years, you won't understand the code that people are writing when you come back. Contrast that to, say, Go or Python, where you can come back to the language after five or ten years without any major difficulty.

This only applies now because fundamental aspects of a competent modern language are still being added - it was only recently that you could have a const fn at all, async is only coming at the end of the year, there are still no HKTs, etc.

I'd be willing to bet Rust 2021 will be a great place to start, because 95% of what the language will become will already be in place by then. From that point there will certainly be additions to the language in the forms of libraries and features, but the mechanisms of how you write code in general will be solidified.

Until then, yeah, there's a lot of churn along the way to having a "mostly complete" language, especially when the point of the language is to basically be a kitchen sink of everything.

I really think that could go either way. Haskell keeps getting new bells and whistles with no sign of any slow down. Once you get into sophisticated type systems, it's never "done". There's always going to be (semantically) safe code that doesn't type check in safe Rust, and the temptation to extend the type system to rectify that.

But Haskell is intentionally used as a laboratory to experiment with new ideas. That is not Rust's primary goal.

An interesting example is the addition of `{-# LANGUAGE LinearTypes #-}` to GHC. The primary interest for this "experiment" is not from academia but from industry (tweag IO). Another example is `QuantifiedConstraints` which was quickly caught up by practical applications as well. Haskell is not just for academia, it is very much intended for practical use after all.

It is intended by its creators as a research language. That doesn't mean it isn't suitable for practical use, it just means that you should expect it to continue to gain features and grow in complexity.

It's not C++'s goal either, AFAIK, but major new features keep getting added to the language anyway.

I've already felt the slowdown in changes. I did have a (little) break from Rust and the only things that have changed are ergonomic improvements and the async stuff. This is a predictable outcome given the roadmap for this year was one of stability for the language.

When async/await stabilises, I think it's going to be less about the language changing and more about the ecosystem. I think there is going to be a lot of development on top of async over the next few years and we're going to learn a lot about the right way to structure programs and deliver useful abstractions. This is going to be particularly true in the embedded space I care the most about.

This is the hidden downside of having a good package manager - knowing how to use the ecosystem is as much of a skill in real world programming as knowing how to use the language. To be clear though - that cost is entirely worth it.

It could be. Rust has been stable Real Soon Now for quite some years.

"Stable" is not a binary condition, and Rust has been moving continuously in a 'more stable' direction.

I'm not sure I agree with that. Async/await is a major new feature and is just landing now.

Python is over 20 years old and got async/await only 4 years ago. I think most people would still consider Python a fairly stable language.

Go, for sure.

Python, knowing it since version 1.6, I doubt it very much.

Python 3.x seems to be a milestone, and relatively speaking, Python is very readable.

Python is very powerful.

Reflection, meta-classes, decorators, comprehensions, slots, operator overloading, multiple inheritance and plenty of other features, some of them with semantics that changed across minor versions.

Python, the full language, is at the same capability level as C++.

Many just don't realize it, because it is kind of targeted to teach programming and then people move on.

I'd say that's something Python does well. Many people don't realize because they don't need those features. If you can do with simple code and patterns, go with that. Rust on the other hand, hits you with all the complexity from the start (or a lot of it). I'm not saying this is necessarily bad, or that having a lot of "hidden complexity" is always good, the use-cases are different, but in many cases it's kinda good to not need to know about the things you don't need to use.

And the level of mutation one can inflict upon the runtime is magnificent!



Right, you can write your own debugger for Python in Python (assuming CPython).

Yeah, and the cool thing about being able to write your own debugger, you can have domain specific debuggers or debuggers that put the application in the debug state and phone home giving the eng a chance to capture the error and handle it.

Lua has a similarly powerful debug hook mechanism. Being able to debug the system from within itself is an amazingly powerful feature.

Same theoretical capability; inferior performance and safety in general. Its versatility in the stack is, in my mind, its defining characteristic - unlike C++, it's a great scripting language that's easy to get started with- maybe the best. Like C++, it's a capable low-level, close-to-the-metal language. But I doubt it's the best choice for those cases. (We know what Linus said about C++ devs working on the kernel ... what would he say about python devs??)

Sidebar: I wasn't aware it was a popular educational tool. I was taught strongly typed languages, and coming into Python felt like writing pseudocode. Too easy, as the Aussies would say!

You are mixing languages with implementations.

If CPython devs cared about performance as much as the Common Lisp, Dylan, Scheme, Smalltalk, JavaScript, and Julia implementers do, it surely wouldn't be as slow.

There are PyPy, TrufflePython and OpenJ9 Python, they all suffer from the stigma of not being CPython.

Is it that readable when the code is making heavy use of advanced features like decorators and magic functions, or the code is deep into the inheritance hierarchy?

This comment is pretty ironic because Python changed so much in the past few years.

Wow, hard disagree for me! I feel like Rust is the easiest language to maintain that I have ever used. I'm never afraid of changing things around since the compiler will always catch my mistakes and it is impossible (without unsafe) to introduce undefined behaviour. Whenever I'm using Rust my confidence levels are pretty much 100%. Having said that I don't know how the situation is for web dev since I'm a game dev, do you think that's the reason we have such different experiences?

I'm using Rust full time for a lot of web dev. Using web frameworks and HTTP clients hasn't been too painful, but async web dev gets hairy because of the rules and syntax governing futures 0.1. Database dev using rust-postgres and extending it using its ToSql/FromSql traits has been very straightforward. Reading through the underlying source of these web frameworks / clients does involve medium/high difficulty, to varying degrees. Excessive use of generics ought to be curtailed, and it is, but that seems to happen further down the pipeline, as refactoring for readability plays a more important role. I can empathize with the OP, who has struggled to understand code written by others. It's not an issue just for Rust, though. Excessive use of object inheritance does no one any favors -- pick any language that features it and you'll find inheritance hell. Go also seems to have its own category of readability issues that people aren't being forthright about.

This is precisely why I've lost interest in Rust. It feels limiting in the same way that fat frameworks do - it just becomes so much harder to express what I _really_ want to do sometimes, because I'm always working around the limits and requirements of the framework/abstractions which a lot of the time won't map very well to my solution.

What do you find is the solution to this problem? How do you remain productive? Genuinely curious since I am merely a Rust dilettante

Generally choose something "boring" instead, whether it be libraries or language choice.

> My productivity in Rust has become very low compared to other languages such as Go or even C, that let me easily express what I want to do.

As a C/C++ programmer who has been following Rust from a distance for some time, with an intention to learn it later, this is somewhat concerning to hear. I've been meaning to start learning it for a while, however, my current workload doesn't provide enough spare time to start absorbing it.

In addition, it seems to be somewhat hard to find detailed information on how to do low-level things that would be very easy in C. Just as an example, one of our simulations relies on being able to craft raw UDP packets in order to "impersonate" remote basestations in our lab environment.

While this is trivial to do in C or C++, I haven't been able to find a good reference on how to do that with Rust. The lack of accessible documentation means that I'm unlikely to consider writing some of these more security sensitive parts of our system in Rust, even though there may be net benefits to doing so.

That's just one take, and I very much do not agree with it personally. My productivity has only gone up as the tooling and language have improved.

One thing that I am working on figuring out right now is the proliferation of dependencies. But that's a "nice" problem to have.

> Just as an example, one of our simulations relies on being able to craft raw UDP packets in order to "impersonate" remote basestations in our lab environment.

I would assume you would do it in Rust just like you would in C. Where's the hang up?

> I would assume you would do it in Rust just like you would in C. Where's the hang up?

Why assume? If you know it can be done, why wouldn't you provide a pointer to an example? I've done my Google searching and can't find one. There are endless examples of this for C online, I'm just stating that with a cursory search I couldn't find one for Rust.

The hang up is clearly at my end, I'm currently time poor, so now is not a good time for me to be learning a new language where I can't find good examples for what I'd use it for.

Admittedly I need to start at "Hello World" with Rust anyway before I get as far as crafting raw UDP packets. The point I'm getting at is if I didn't know how to program in either C or Rust, I'd pick C for this problem right now because there are clear examples. That's all.

If you don't know how to do it because you don't know Rust, then fine, that's reasonable. (Albeit a strange criticism to lodge from my perspective.)

Otherwise, I would "assume" that you need to use libc bindings over ffi to use whatever udp APIs you would normally use from C. This is how Rust's standard library implements the higher level udp APIs for example: https://doc.rust-lang.org/std/net/struct.UdpSocket.html

But I used a question mark also because this is not something I've done before, so I can't share an example. I wasn't being adversarial. Even my suggestion above could be wrong. That is, I don't understand the essential difficulty here. Perhaps someone else can chime in.

Sorry, not to be rude, but in this case it sounds like you don't understand the problem domain. Perhaps I should have just said "UDP spoofing".

There is a huge difference between creating a generic UDP endpoint like the code you have shown, and crafting a raw UDP socket, where the source IP address is "spoofed", and UDP checksums generated to ensure that the packet looks like a valid UDP packet so that the intermediate network infrastructure like routers, switches and firewalls will pass the forged packet rather than discarding it.

The latter requires quite a bit of extra code. A good example is here:


and last time I looked (month or so ago?) I could not find an equivalent example for Rust. It may be out there but I was unable to locate it. I'm not asking you to do my homework for me, just stating that learning what I need to know in Rust to do the above would take more time than I have available right now. That's not to say I won't figure it out in the future when I have more time.
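For what it's worth, the byte-assembly half of that task (headers plus checksums) can be sketched in dependency-free Rust. This is an illustrative sketch only - the addresses and ports are made up, and actually injecting the result still requires a SOCK_RAW socket (e.g. via the libc crate, with root/CAP_NET_RAW), which std does not expose:

```rust
// Internet checksum (RFC 1071): one's-complement sum of 16-bit words.
fn checksum(data: &[u8]) -> u16 {
    let mut sum: u32 = 0;
    for chunk in data.chunks(2) {
        let word = if chunk.len() == 2 {
            u16::from_be_bytes([chunk[0], chunk[1]]) as u32
        } else {
            (chunk[0] as u32) << 8 // odd trailing byte is zero-padded
        };
        sum += word;
    }
    while sum >> 16 != 0 {
        sum = (sum & 0xffff) + (sum >> 16); // fold carries back in
    }
    !(sum as u16)
}

// Assemble a raw IPv4+UDP packet with an arbitrary ("spoofed") source address.
fn build_udp_packet(src: [u8; 4], dst: [u8; 4],
                    sport: u16, dport: u16, payload: &[u8]) -> Vec<u8> {
    let udp_len = (8 + payload.len()) as u16;
    let total_len = 20 + udp_len;

    // IPv4 header (20 bytes, no options).
    let mut ip: Vec<u8> = vec![0x45, 0]; // version 4, IHL 5; DSCP/ECN
    ip.extend_from_slice(&total_len.to_be_bytes());
    ip.extend_from_slice(&[0, 0, 0, 0]); // identification, flags/fragment
    ip.push(64);                         // TTL
    ip.push(17);                         // protocol = UDP
    ip.extend_from_slice(&[0, 0]);       // header checksum, filled below
    ip.extend_from_slice(&src);          // spoofed source address
    ip.extend_from_slice(&dst);
    let ip_csum = checksum(&ip);
    ip[10..12].copy_from_slice(&ip_csum.to_be_bytes());

    // UDP header + payload.
    let mut udp: Vec<u8> = Vec::new();
    udp.extend_from_slice(&sport.to_be_bytes());
    udp.extend_from_slice(&dport.to_be_bytes());
    udp.extend_from_slice(&udp_len.to_be_bytes());
    udp.extend_from_slice(&[0, 0]);      // UDP checksum, filled below
    udp.extend_from_slice(payload);

    // The UDP checksum covers a pseudo-header containing the (spoofed)
    // addresses, which is why intermediate gear accepts the forged packet.
    // (Per RFC 768 a computed checksum of 0 must be sent as 0xFFFF;
    // that corner case is omitted here for brevity.)
    let mut pseudo: Vec<u8> = Vec::new();
    pseudo.extend_from_slice(&src);
    pseudo.extend_from_slice(&dst);
    pseudo.extend_from_slice(&[0, 17]);
    pseudo.extend_from_slice(&udp_len.to_be_bytes());
    pseudo.extend_from_slice(&udp);
    let udp_csum = checksum(&pseudo);
    udp[6..8].copy_from_slice(&udp_csum.to_be_bytes());

    ip.extend_from_slice(&udp);
    ip
}

fn main() {
    let pkt = build_udp_packet([10, 0, 0, 1], [192, 168, 1, 50], 9999, 2152, b"payload");
    // A correct IPv4 header checksums to zero when verified over itself.
    assert_eq!(checksum(&pkt[..20]), 0);
    println!("built {}-byte packet", pkt.len());
}
```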

> If you don't know how to do it because you don't know Rust, then fine, that's reasonable. (Albeit a strange criticism to lodge from my perspective.)

It's not a criticism of the language; what I'm asking may be very possible, I don't know yet. I just know that finding C examples is trivial, Rust examples not so much, so when I had a need for this kind of code recently, I turned to the language where I know it can be done and I could easily find examples.

The specific task is so obscure, that only a language as widespread as C would have an example online for it. So people are confused when you name this as a criteria for selecting a language, since it’s a criteria that most likely only C can satisfy.

Then someone jumped to the conclusion that you were actually saying that Rust doesn’t have enough low level kernel APIs because that would make sense.

> The specific task is so obscure, that only a language as widespread as C would have an example online for it.

Well you could do it in Python I suppose - here's someone mentioning a solution using scapy, although it likely wouldn't work too well for my particular case due to Python performance.


> So people are confused when you name this as a criteria for selecting a language, since it’s a criteria that most likely only C can satisfy

Rust is intended to be a replacement for C and C++, no? Eventually it will need to be able to do things like this if it is to be effective in that role. It's OK in my mind if it is not mature enough to do these kind of tasks yet, or if it can but I can't find a good reference yet. I wanted to use it for this task, but I selected C instead since it was easy to do that way and I didn't have a lot of time. Not holding it against Rust, it's a much younger language.

> Then someone jumped to the conclusion that you were actually saying that Rust doesn’t have enough low level kernel APIs because that would make sense.

I am almost completely certain I don't understand this sentence the way it is phrased.

In case you want to give OCaml a try one day, I've done plenty of stuff like that in the past when I needed a (much) faster scapy. UDP example here:


I'm not really a career programmer or language enthusiast, rather, I'm more of a utilitarian/imperative coder - I write programs to plug gaps in existing infrastructure as quickly, as securely, and as future-proofed as possible, even if it means dumbing the code down somewhat.

So I think my head would explode if I tried to learn a FP language without being able to have months off work to study. In addition to that, my team would kill me, as it's hard enough to find good C/C++ programmers in my city, let alone something like OCaml. Given that we run emergency services infrastructure, unfortunately we have to use more popular languages to ensure that the business doesn't end up with a lot of code written in languages they can't support.

Regardless, thank you very much for the example. It's nice to see that this can be done in other languages even if I can't understand it. Unfortunately too many years of C/C++ have made it hard for me to grok FP code - the C example I posted up before is much more readable to me.

> So I think my head would explode if I tried to learn a FP language without being able to have months off work to study.

What do you find so hard about OCaml, or FP in general, that you haven't had to deal with in your day-to-day imperative programming?

> Unfortunately too many years of C/C++

You mean you find OCaml makes your head explode after years of C and C++?

> What do you find so hard about OCaml, or FP in general, that you haven't had to deal with in your day-to-day imperative programming?

I guess I still find FP very abstract as a concept. I'm starting to like the notion of immutability as a design goal. But then I'm the kind of sicko who enjoys assembly language to some degree, where practically every instruction has a side effect, whether it be assigning a value to a register, or setting a bit in a flag register, or doing I/O whether port mapped or memory mapped.

> You mean you find OCaml makes your head explode after years of C and C++?

Not specifically OCaml, but most FP languages in general. I find the OCaml code posted up a couple of levels to be very hard to understand. But I'm very used to the POSIX interface and honestly programming against much else tends to get me out of my comfort zone a little too much. Shell, Perl and Python I can manage as well because I've been doing them so long, but I don't excel at those, particularly the latter two.

I think the paper referenced in this (https://news.ycombinator.com/item?id=15179188) HN article ("Some were meant for C") captures my mindset very well. Some of us are just a bit broken as programmers.

I also enjoy the ubiquity of C compilers - knowing that I can program in it from anything from a microcontroller to a multicore CPU (even being able to use the same compiler version accross many platforms) is a huge boost.

> I guess I still find FP very abstract as a concept. I'm starting to like the notion of immutability as a design goal.

I'm asking about OCaml or SML rather than about abstract matters like denotational semantics, System F, or pure lambda calculus.

There are pretty much the same abstract concepts behind imperative programming languages as well: operational semantics, Turing machines, which, I argue, are even more complex.

And if you write OOP, you have to deal with open recursion, covariant/invariant/contravariant subtyping relations, late binding, polymorphic self types, etc. There is a book called "Theory of Objects" by Luca Cardelli (who is guilty of both SML and C++, BTW) if you are interested.

You just don't bother yourself with this theory, and neither should you when programming FP. You don't bother with number theory and arithmetic (complex topics) when doing simple addition in your code, right?

> But I'm very used to the POSIX interface and honestly programming against much else tends to get me out of my comfort zone a little too much

Then maybe your starting point should rather be this [1].

(Although this book is rather about system unix programming in OCaml than about language itself. For learning the language its manual should fit pretty well [2], finding a sweet spot between a narrated book and a standard. Other good and more in-depth introductions are [3] and [4]).

There is also a unikernel called MirageOS which includes a TCP stack and other things, and gives a good taste of what system programming in OCaml looks like.



> But then I'm the kind of sicko who enjoys assembly language to some degree, where practically every instruction has a side effect, whether it be assigning a value to a register, or setting a bit in a flag register, or doing I/O whether port mapped or memory mapped.

FP doesn't disallow side effects; usual FP practice just encourages dealing with them explicitly. There are low-level FP languages like ATS [5] (though I'd discourage you from looking at it since it's unnecessarily complicated and could demotivate you), F* [6][7], and even Coq to some degree [8], which let you deal with effects like allocations and state just fine (and prove invariants).

[1] https://ocaml.github.io/ocamlunix/

[2] https://caml.inria.fr/pub/docs/manual-ocaml/

[3] http://dev.realworldocaml.org/

[4] http://ocaml-book.com/

[5] https://www.youtube.com/watch?v=zt0OQb1DBko

[6] https://fstarlang.github.io/lowstar/html/Introduction.html

[7] https://fstarlang.github.io/lowstar/html/LowStar.html#memory...

[8] https://www.microsoft.com/en-us/research/publication/coq-wor...

Thank you very much for all the time taken to collate all those links. I hope one day I will have the headspace, time and patience to absorb all that.

Thanks for this, those do look very helpful.

Yeah, I mean, as I mentioned, I hadn't done it before. I feel like I've put an appropriate amount of uncertainty in my comments, but I still feel like you're punishing me for it.

In any case, I don't see anything particularly interesting in that C code. It's just shuffling data around and using some C functions, all of which is pretty easy to do in Rust. With that said, I do agree that translating that code would be non-trivial for a Rust beginner and would require getting pretty comfy in the language first. You'd likely also have to redefine any structs that you need in Rust (or use a tool like `bindgen` to generate bindings to, e.g., netinet/udp.h for you). But still, as an experienced Rust programmer, I think that C example would be enough for me to tackle this particular problem. So this might just be a case of being unfamiliar with the language, which is totally fair. There are going to be far less resources for Rust when compared to C.

Thanks for the advice regarding bindgen etc.; I will look into that when I have more time. Generating the compatible structures (if Rust doesn't have them already) is one part of the problem; the other is really just the sendto() and socket() calls, which as I mentioned are basically syscall wrappers for the most part. As per my other comment, if I was writing this program in Rust, I'd prefer to use Rust all the way down rather than calling into C. Otherwise it's a Rust wrapper around C facilities. It should be possible to do it all in native Rust code as far as I am concerned.

As per my other message, I apologize if I have offended you, it certainly wasn't my intention, I was just trying to clarify the use of the raw IP API, as opposed to the higher level solution which wouldn't work for our simulator.

> It should be possible to do it all in native Rust code as far as I am concerned.

Yeah, it would be, but Rust's standard library doesn't provide any way to invoke syscalls. You either need to go through libc's syscall wrapper, or write Assembly. I used the former to get access to faster directory traversal: https://github.com/BurntSushi/walkdir/blob/ec33af7f9b25afefc... (The rest of that code in that module may be interesting to you, since the API for getdents is pretty interesting and very C-like. Buttoning it up behind a safe API that can never be misused took a bit of thinking.)

Besides, Rust's standard library, on Linux at least, currently requires libc anyway. So you're already going to be linking to it.

My apologies - I don't mean to come off as adversarial either. And as I clarified in my other message, I should have just said UDP spoofing, so I probably contributed to the confusion.

> I would "assume" that you need to use libc bindings over ffi

I wonder about whether that is even necessary. socket() and sendto() are basically C library wrappers around system calls for the most part. Surely the Rust standard library already has equivalent wrappers, or will someday. There's no real reason I can think of that I'd need to FFI into C unless there is some limitation of the current Rust standard library.
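For the ordinary (non-spoofed) case, those wrappers do already exist: `std::net::UdpSocket` sits on top of socket()/bind()/sendto()/recvfrom(); it's only SOCK_RAW that std doesn't expose. A quick loopback round-trip as a sketch (the helper function name is my own):

```rust
use std::net::UdpSocket;

// Round-trip a datagram over loopback using only std's wrappers around
// the C socket calls; no FFI required.
fn udp_roundtrip(msg: &[u8]) -> std::io::Result<Vec<u8>> {
    let receiver = UdpSocket::bind("127.0.0.1:0")?; // socket() + bind()
    let sender = UdpSocket::bind("127.0.0.1:0")?;
    sender.send_to(msg, receiver.local_addr()?)?;   // sendto() under the hood
    let mut buf = [0u8; 1500];
    let (n, _from) = receiver.recv_from(&mut buf)?; // recvfrom()
    Ok(buf[..n].to_vec())
}

fn main() {
    let echoed = udp_roundtrip(b"ping").unwrap();
    assert_eq!(echoed, b"ping");
    println!("round-trip ok");
}
```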

No worries.

You can use libc::syscall to make syscalls in the normal way, and that's technically ffi. Otherwise, to do raw syscalls, AFAIK you need to write some Assembly to do it. Running an assembler and linking it into a Rust program is pretty straight-forward. It would be nicer if one could use inline assembly (and let the compiler handle it for you), but that hasn't been stabilized yet.

The topic of whether to build a Rust standard library without libc is definitely one that has gotten some attention, and some folks have made some progress on that front, but AFAIK there is no serious ongoing project or effort that attempts this. Some folks definitely desire it though, at least for Linux. (In other platforms, like macOS and Windows, raw syscalls aren't a stable interface, so you kind of need to use the system libraries.)

I am not a Rust expert, but I've written kernel modules in it so I've had to deal with interacting with the foreign function interface and didn't have the benefit of any of its standard library.

The basic answer on this is you can specify that structures in rust have the same representation as a comparable C structure. If this isn't good enough for your use case (and it isn't when representing some wire format), then there are additional keywords to more precisely define the packing.

Of course, it's also possible to just have a binary blob of the appropriate size and write the appropriate values at the appropriate offset inside of it.
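As a sketch of those layout attributes (the struct names here are made up for the example): `#[repr(C)]` gives C-compatible field ordering, and adding `packed` removes the padding, which matters for wire formats.

```rust
// Illustrative sketch of the layout attributes described above.

#[repr(C)]
struct Header {
    kind: u8,   // on most targets, 3 bytes of padding follow
    length: u32,
}

#[repr(C, packed)]
struct PackedHeader {
    kind: u8,   // no padding: fields are laid out back to back
    length: u32,
}
```

On a typical 64-bit target `Header` occupies 8 bytes (alignment padding included) while `PackedHeader` occupies 5.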

Generally I've been able to do whatever I need to do in Rust. I also wouldn't discount wrapping some C code at the very lowest layers. This isn't very different than using C code in a C++ project. Yes, it weakens type safety, but at the end of the day we have to get shit done, no?

To the higher level point of whether or not it's worth learning rust: I don't know. How much time do you spend implementing and how much time do you spend fixing? Rust increases the former and decreases the latter.

That's true. But you have to understand that Rust doesn't complicate things: things are complicated. The difference between Rust and, say, Python is that Python hides a lot of these complexities from you. This can have two side effects: performance costs and unintended consequences (bugs, code that's hard to refactor, etc.).

Rust instead puts all of these complexities in front of you: Deal with it now.

So you have a choice: Deal with it now. Or deal with it later. And Rust has picked the former.

There definitely is a problem with some libraries using so many abstractions (traits) you have no idea how to use the thing.

I however do find writing rust to be quite enjoyable.

But I have so far entirely avoided the futures ecosystem and consider it a very good decision. One needs to have a very critical eye on what libraries to use.

I can share a similar experience. When I started to use Java, I happily dug into collections and such. The code was quite easy to read and follow. With Java 8 I tried to read and follow their streams implementation. That was very hard. They implemented streams for semi-automatic parallelization, which I never used and won't ever use, and that made otherwise simple concepts unreadable and unbearable. Maybe the Rust developers tried to cover too big a spectrum of problems at once.

This is nonsense. Rust hasn't changed much since 1.0, but has matured a lot, and numerous small usability fixes add up, making it much nicer and productive than it was.

Rust is simpler to use, and simpler to learn than it ever was.

It has removed the need for common pointless annotations (like `ref` in patterns, or `extern crate`). The ambiguity of static vs dynamic dispatch in traits has been clarified with the `dyn` and `impl` keywords.
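A minimal sketch of that `dyn`/`impl` distinction:

```rust
use std::fmt::Display;

// Sketch of the dyn/impl clarification: `dyn` makes the dynamic
// dispatch (vtable call) explicit, while `impl` marks a concrete
// type chosen at compile time (monomorphized).
fn via_vtable(x: &dyn Display) -> String {
    x.to_string()
}

fn monomorphized(x: impl Display) -> String {
    x.to_string()
}
```

Before these keywords, a bare `Trait` in a type position could mean either, which is what made the old syntax ambiguous.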

The const generics feature is going to obsolete the worst-of-the-worst abuses of the type system trying to emulate that feature, and remove gotchas around arrays.
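A sketch of the kind of code const generics enables (the feature was still unstable when this thread was written; it stabilized later, in Rust 1.51):

```rust
// One function generic over the array length N, instead of a trait
// impl per size or macro-generated copies for each array length.
fn sum<const N: usize>(arr: [i32; N]) -> i32 {
    arr.iter().sum()
}
```

Before this, the standard library could only implement traits for arrays up to length 32, one impl per size, which is the kind of gotcha the parent refers to.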

Async/await enables use of borrowed types in async code (freeing users from having to know about the `Pin`/`Unpin` implementation detail). It allows regular error handling, instead of requiring users to keep track of error types at each step in a chain of futures, and normal control flow instead of arcane wrangling with `Either` and boxing.
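A sketch of that ergonomic difference (async/await stabilized after this release, in Rust 1.39): the body borrows a slice and uses an ordinary loop and `?`, things that were painful to express with hand-chained futures.

```rust
// Hedged sketch: ordinary control flow and `?` error handling inside
// an async fn; the compiler generates the state machine. Run it under
// any executor (e.g. futures::executor::block_on).
async fn double_all(xs: &[u32]) -> Result<Vec<u32>, String> {
    let mut out = Vec::with_capacity(xs.len());
    for &x in xs {
        // `?` converts the &str error into String via From, as usual.
        out.push(x.checked_mul(2).ok_or("overflow")?);
    }
    Ok(out)
}
```

The equivalent futures-0.1 code would have been a chain of combinators with an explicit error type threaded through each step.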

You listed a bunch of problems, and at the same time complained that Rust has been solving them!

"Go is the result of C programmers designing a new programming language, and Rust is the result of C++ programmers designing a new programming language"


You are entitled to your own opinions but not your own facts. Rust was never simple. :)

I would argue it was, and still is, simpler than modern C++.

Isn't everything simpler than modern C++?

Haskell would beg to differ.

Arguably, all dynamically typed languages are also more complex because there are huge classes of programs that are incorrect which simply would not even compile as valid programs in C++.

If complexity is based on how likely a syntactically correct program is also logically correct, then type systems remove a lot of complexity.

> Arguably, all dynamically typed languages are also more complex because there are huge classes of programs that are incorrect which simply would not even compile as valid programs in C++.

There are no programs (including ones that produce runtime failures) in a dynamically-typed language that cannot be duplicated in C++.

There are certainly dynamically-typed programs that will fail at runtime where the natural way to express the same idea in C++ would produce a compile-time error. Equally, there are dynamically-typed programs that operate correctly but are harder to express in C++: you either have to do type gymnastics to convince the compiler of their correctness, or evade the compiler by building an interpreter for a dynamically-typed language inside C++. This is a difference in where the complexity of each language lies, not in the overall complexity.

> If complexity is based on how likely a syntactically correct program is also logically correct, then type systems remove a lot of complexity.

Types are not really at the syntax level. I would measure the complexity of a language by the complexity of its grammar and the size of the standard library/amount of common idioms needed to write most programs in a reasonable way.

It definitely is. But that is a rather low bar to clear, given that modern C++ is by far the least simple programming language in wide use.

Yes, but rust can be seen as an alternative to C++. They are closest in terms of features/power/performance.

> When one of the oldest and still unanswered issue in the Hyper HTTP library is "how do I read the body?", there is a conceptual problem.

Is https://github.com/hyperium/hyper/issues/1137 the issue you are referencing?

I think it is. I answered it, along with others. But hyper is changing a lot so the question is asked again.

Strangely, the OP says async/await won’t help with things but in this case it would be a big help!

I'm working on a language with similar goals, but with the focus on simplicity:


A relatively stable 0.2 release will be out this month.

I'm curious about the example code at https://vlang.io/compare (apologies if you answered this before):

- Does the compiler use escape analysis to determine that the `cursor` variable needs to be heap allocated? If not, how does it know that the worker threads won't outlive the stack frame that owns `cursor`? If so, when does the allocation get freed? And does implicit heap allocation conflict with the goal of being as fast as C?

- How does the compiler know that the `lock` block is the only code that touches `cursor`? Does it still work if the `lock` block is in a subroutine? Does that require whole-program analysis? If two different `lock` blocks touch the same variable, are they implicitly a single `lock`?

The two languages I'd be most interested in seeing added to that comparison are Nim and Zig. Those definitely have most of my mindshare on "new low-level/efficient languages simpler than Rust".

Are you trying to do data lifetime analysis in V? I don't see that listed and to me it's pretty much Rust's key feature to enabling a high degree of memory leak and multithreading safety.

Sounds very interesting. I guess you should compare compile speeds with D and Nim, since they are also interested in fast build times.

It looks nice, but I'd add to the comparison table that Erlang has both hot code reloading and no global state.

In case anyone else is curious, async/await isn't part of this release but should be in the next (1.38):


Seems like async/await is going to slip into Rust 1.39 instead: https://github.com/rust-lang/rust/pull/63209#issuecomment-52...

Sadly it missed the boat as another user pointed out. The earliest it could go in is 1.39, but as of now the stabilization PR still hasn’t landed.

Genuine question: given a language which has a real notion of parallelism/concurrency and can run real threads on multiple cores, what is the appeal for async/await?

Modern fast IO is built on top of facilities like Linux's epoll(), which don't block an entire thread. However, that means that when an IO operation is finished, we no longer have the original callstack around to keep track of what operations were supposed to happen next. Instead we need some sort of state machine that's able to pause and resume every time it does IO. But writing state machines by hand is much less convenient than using the callstack, because you lose all the nice language features that you'd normally use to compose things, like `if` and `for` and `?`. So async/await is all about writing normal-looking code with the usual conveniences that can instead be compiled into a state machine that lives off the callstack.
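A sketch of what such a state machine looks like when written by hand — this is roughly the shape an async fn desugars to (illustrative code, not from any library):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Hand-written sketch of a Future with one "pause point": the first
// poll returns Pending (as if waiting on epoll readiness) and records
// its progress; the second poll completes. The state lives in the
// struct, not on the callstack.
struct TwoStep {
    polled_once: bool,
}

impl Future for TwoStep {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.polled_once {
            Poll::Ready("done")
        } else {
            self.polled_once = true;      // advance the state machine
            cx.waker().wake_by_ref();     // ask the executor to poll again
            Poll::Pending
        }
    }
}
```

With async/await you write straight-line code with `.await` in the middle, and the compiler generates an enum-of-states like this for you.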

And now you have 32 cores just waiting for slow HTTP clients. Apache with the prefork MPM does exactly that.

See http://www.kegel.com/c10k.html for an in-depth, if old, discussion.

Why would cores be waiting? A thread blocked on synchronous I/O will in general not be scheduled on a core. This is true on all OSs that I’m aware of.

Not sure if Apache had some spin-wait loops or something, but if so then that was a bug in Apache, not a fundamental characteristic of doing synchronous I/O in threads.

You're still paying some costs: there might be scalability issues in the scheduler data structures (how long does it take to schedule one thread out of a million blocked ones?), you need memory to store their stacks (hope you're on 64-bit), when they become runnable, you have a lot of context switches (which cost even more now after the Spectre fixes). Probably others that don't come to mind right now.

As for Apache, it spins a number of processes up to a point. When there aren't any free ones, the clients don't get any data.

Yes, I pointed all of this out in another reply:

> For one thing, threads take up memory and address space, and create work for the scheduler.

It is the right answer. "Cores are waiting" is not the right answer.

Async is about servicing thousands, or tens of thousands, of clients at once. Since everyone is convinced that their program will have tens of thousands of clients, they clamour for async.

This is about servicing tens of thousands of connections. Threads are OK in the low thousands, but servers are now in the realm of handling millions of concurrent connections.


Creating a thread for every connection consumes massive amount of memory and is a known anti-pattern.

How does this contradict my comment?

Part of the problem is that context switches are really slow, so for software that needs maximum performance people will go to great lengths to avoid them.

> what is the appeal for async/await?

Time for a mini blog post contained in a HN comment!

(NOTE: A lot these descriptions are simplified)

Back in the old days we had Apache and its ilk, who approached handling multiple clients by spawning 1 thread per client. The model was simple and effective ... until you had thousands of clients, which resulted in overloading the OS with too many threads.

So along came nginx and its ilk. Instead of a thread per client, they used epoll and a state machine per client. This allowed Nginx to handle a massive number of concurrent connections since the state machine was much smaller than a full thread's stack, and nginx could implement its own scheduler instead of the OS's thread scheduler. But it's a more complex system, because you have to manually engineer those state machines. For a web server serving static content or routing connections that's not a big deal. For the backend to a modern web application? Not so much.

Eventually the web was no longer static content with PHP/Java backends; it was responsive, dynamic, explosive. And with those new requirements we needed ways to build complex web servers that could handle thousands or more clients at once. Apache's model wouldn't work; too much wasted memory and the OS still struggled with large numbers of threads. nginx's model also wouldn't work; it required too much engineering.

A lot of ideas began floating around. Around this time NodeJS showed up and exploded in popularity, partially because it made building these backends easier. No threads to worry about; no custom state machines. Just nests of callbacks! It was crude ... but it kind of worked. Callback hell was ... hell, but less challenging than custom epoll-based state machines. And most importantly, it was lightweight compared to the threading model.

So we've been evolving from that middle ground. Javascript added Promises, which simplified callback hell. And then eventually Javascript added async/await.

Ultimately, though, those two evolutions are just different ways of expressing the same underlying thing: custom state machines. Ah! See, whether you write a callback hell, a Promise tree, or an async function in Javascript, it all compiles to a kind of state machine. A blob of state that we can store and transport around in our underlying concurrency framework, and ratchet forward when asynchronous events are delivered by the OS.

So really, async/await is just the epoll model pioneered by nginx, but instead of having to write the state machines by hand, we can express them as regular-looking code. And in fact, behind the scenes, implementations of async/await, whether in Javascript or Rust, are driven by epoll (or a similar mechanism, such as kqueue or IOCP).

And empirically we know that epoll based models are just more efficient in terms of CPU and memory. Trying to use the OS's threading model hasn't worked out; you need a whole stack for every concurrent operation you're trying to perform, and the OS's scheduler isn't designed for the kinds of workloads we'd offload on it.

I guess the long and short of it is that threads are great, but our OSes' handling of threads just isn't good enough. Engineers have decided that putting in the effort of writing state machines, whether from scratch or with modern conveniences like async/await, is worth the cost.

The epoll model is a lot older than nginx: squid and thttpd date from 1996, 8 years before nginx. But back in the 1990s select() and poll() had limitations, which is what the c10k problem was all about. (See link elsewhere in this thread.) What nginx brought was much better ergonomics, lots of features, and really good implementation.

Reducing memory pressure on systems that need to handle lots of concurrent connections.

For one thing, threads take up memory and address space, and create work for the scheduler.

Interesting how much two of the big cloud providers are assisting the Rust project. Is GCP absent due to Chrome/Firefox?

> AWS has provided hosting for release artifacts (compilers, libraries, tools, and source code), serving those artifacts to users through CloudFront, preventing regressions with Crater on EC2, and managing other Rust-related infrastructure hosted on AWS.

> Microsoft Azure has sponsored builders for Rust’s CI infrastructure, notably the extremely resource intensive rust-lang/rust repository.

I did a chunk of the work on the AWS side, and it's not due to Chrome/Firefox—Google is using Rust for Fuchsia, after all.

At least on AWS' side, it was some individuals—me included—who sent emails to the right people internally & pushed on tickets to be prioritized. There was surprisingly little pushback.

Google doesn't use Rust for Fuchsia. Rust is used in Fuchsia :) Most of the system is C++ still.

Are you in Seattle? Do you go to the Rust meetups?

Nope, I'm in Boston, but I did attend the July Rust meetup at Microsoft. You?

I will at some point. Meetings in Redmond are difficult but this should also allow the NWC++ users group to more easily cross paths with Rust.


We just never started using GCP; we've been using AWS for a long time, but we recently switched from Travis to Azure. Google employs a bunch of Rust programmers, and there's no connection between Firefox and Rust project decision making. (I'm typing this comment from Chrome right now)

I guess this is just a matter of prioritization. Google's internal infrastructure is tightly integrated, though, which might make it harder to introduce new external technologies like Rust.

Glad we’re finally able to talk about the Amazon and Microsoft support; both companies have been good to the project.

I don’t personally use cargo vendor, but seeing it upstreamed is also a big plus, IMHO.

I'm happy to have helped on the AWS side!

Why was it a secret at all? Seems like nothing but good PR all around.

Not really a secret, given that we're an open source project. At the same time, we didn't want to talk about the Azure stuff until we were actually moved over, and the process took a while.

It wasn't really a secret. They ran into all sorts of issues with Travis, and people from Azure offered to run the Rust CI instead. The initial discussions were public.

Unfortunately, the build times still seem to be close to four hours, which limits the amount of changes that can be merged.

We aggressively use rollups to merge PRs into rust-lang/rust to mitigate the effect of the 4 hour build times. But it would sure be nice to bring it down to less. It would certainly make my life as the maintainer of the bors queue easier. ;)

It's probably a testament to the scope of the build + test that makes it four hours. If it were faster, would the scope tend to increase in order to hit more tiers/more tests?

I'd rather my commit sit in a test queue for several hours than push and cross my fingers like LLVM does it.

Faster builds would probably reduce the queue latency and let us land more PRs. We'd probably also allow more toolchains to be tested and whatnot.

As one of the larger production users of Rust, it has been great to see how quickly the language and the ecosystem are growing.

Small things like Option::xor() or default cargo run are signs more and more people are using it "for real".

> Option::xor()

Just curious; what is your practical use case for that method?

Not the person you asked, but taking mutually exclusive options (such as CLI flags)

What are you building with Rust? Curious about its prod usage!

Not the person you were replying to, but we're using Rust to build Materialize: https://materialize.io

We're using Rust to build better software for doctors... jobs@commure.com

how big is your install base? We've been using Rust since 2017 and have about 5,000 instances of our software installed at about 20 customers

I really appreciate the clarity and communication style of these Rust release notes.

As an enthusiastic Rust user who is perhaps not as academic/intellectual as most of the Rust community, I often find Rust-related reading material quite daunting. But never these. They're simple and they're great.

Thank you! I tried hard to make this blog post easy to read so it's nice that it feels that way. :)

Profile-guided optimization seems huge. Does anyone have any numbers relating to the performance increase we can expect?

The effect is very dependent on program structure and actual code running, but for a suitable application it's reasonable to expect anything from 5-15%, and sometimes much more (see e.g. Firefox reporting 18% here: https://glandium.org/blog/?p=3888 )

I just gave it a whirl on one of my own programs, which runs the distribution of votes for Australian senate elections. In my particular case I didn't see a speedup (but it's a relatively simple program: perhaps the branching was already predictable enough.)

It wasn't hard to try out, the docs are here (including using it via cargo): https://doc.rust-lang.org/rustc/profile-guided-optimization....

Rust platform size has recently increased by 50%:


which puts it at over double the size of Go. Worse, there seems to be no impetus to fix this. Also notable: Rust still doesn't have a map literal:


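For context, in the absence of a map literal the usual idiom is to collect from an array of pairs (or insert entries one by one); this is a sketch, with made-up data:

```rust
use std::collections::HashMap;

// Sketch of the workaround for the missing map literal: build the
// HashMap by collecting from a slice of key/value pairs.
fn color_codes() -> HashMap<&'static str, u32> {
    [("red", 0xff0000), ("green", 0x00ff00), ("blue", 0x0000ff)]
        .iter()
        .cloned()
        .collect()
}
```

Third-party crates like `maplit` provide a `hashmap!` macro for those who want literal-like syntax.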
I'm confused, why would you care about the rust compiler size? I can't imagine a situation where this metric is useful.

Why would you not? Surely you would agree there's a point where it starts to matter. What if the platform was 1 GB, 10 GB? Eventually everyone would care. So the point is it's crossed the threshold where I care about it. Maybe it hasn't for you, or maybe you just haven't thought about how big is too big.

I would say the size of the download doesn’t matter for most people which is why there is little activity there.

People who care about size have probably self-selected out of Rust.

Considering the Rust compiler performs much more useful checks than the Go compiler, has generics, etc. I don't see how that's really a fair comparison.

How does it compare to GCC or Javac or V8? It seems weird that the most used programming languages aren't on that list.

I almost got to make my first contribution to Rust, by correcting a small semantic error in the (pre)release notes, but alas, I wasn’t at a real computer at the time and someone else was faster :)

Well there is still time and lots to do. Let's help the project in any capacity we can.

I am a coder, but I'm not well versed enough in Rust to contribute code; I'm going to investigate other avenues where I might be helpful (writing documentation, translation into the languages I speak, etc.).

I managed to get a small doc fix in a release a while ago and was pretty stoked about that. It was nice of them to put me in the contributor list for the release even though it was such a small change. (If I recall, removing superfluous `mut` parameters on `BufRead` or something)

I’ve tried spending time learning rust, but keep getting stuck since I don’t have a good project to work on.

What are some good open source projects built in rust which wouldn’t be too difficult to contribute to?

Depending on your proficiency:

- You can contribute to the compiler itself: https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Ais...

- You can contribute to the low-level graphics abstraction library: https://github.com/gfx-rs/gfx/labels/contributor-friendly

- More generally, you can take a look at This Week in Rust (and its previous iterations, to see where help was requested): https://this-week-in-rust.org/

The call for participation in This Week in Rust lists a lot of different projects.

I also like that the compiler issues are tagged by difficulty and whether a mentor is available, fantastic.

Thank you!

Advent of Code¹ with its puzzles increasing in difficulty is always a good opportunity to learn a new language. It starts again December 1st.

1: https://adventofcode.com

I've heard about this before but never looked very closely.

Looks like a lot of fun :)

Newb Question:

Why didn't Rust go with an easier to read syntax? Why did they stick to old C syntax?

Edit/ Sorry, didn't mean to offend anyone.

For most programmers, a C-ish syntax is going to be most familiar. In general, I'd define the "universal C-ish syntax" as:

* Ending statements with ;

* {} for braces, lowercase keywords for control flow

* [] and 0-based indexing for arrays

* Infix operator notation, with mathematical operator precedence

* &, |, ^, ~ for bitwise operators

* . is used for member access

But they didn't copy all of C's syntax:

* There's no ->, since dereference is usually automatic.

* No ?:, if/else can be used as expressions instead

* No ~, since ! on an integer variable does bitwise negation instead.

* Pointer, array, and function types don't have the weird syntax they do in C... you say [ * mut fn()->i32; 4] instead of int ( * ())[4]

* Types follow variables instead of precede them (x: i32 instead of int x)

* No C-style casts (x as i32 instead of (int)x)
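A few of those divergences in one runnable sketch:

```rust
// Sketch of several of the listed divergences from C syntax.
fn divergences() -> (u8, i32, u8) {
    let x: i32 = 300;                        // type follows the name
    let truncated = x as u8;                 // `as` instead of a C-style cast
    let sign = if x >= 0 { 1 } else { -1 };  // if/else as an expression, no ?:
    let flipped = !0u8;                      // `!` is bitwise NOT on integers
    (truncated, sign, flipped)
}
```

Note that `300 as u8` truncates to 44 (300 mod 256) rather than being a compile error, much like the C cast it replaces.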

I was going to make a snarky comment to the effect that the Rust syntax for an array of pointers to functions isn't obviously less weird than the C syntax -- I assume the basic thing there is [<type>; <size>] -- but then I noticed that the C syntax above was very wrong, and the correct syntax is weirder :-).

So what it should actually be is int ( * [4])(). The principle is that "declaration mimics use" and that a type is written like a declaration with the variable name omitted. So, for an array of pointers to functions mapping no-args to int:

If f is such a function, you would call it as f() and get an int, so the declaration would look like int f() and the type would be int () except that functions, as opposed to function pointers, aren't first-class objects in C and don't have a type.

If pf is a pointer to such a function, you would need ( * pf) instead of f, so the declaration would be int ( * pf)() and the type would be int ( * )().

If apf is an array of those, then you would need to replace pf with apf[something], so the declaration would be int ( * apf[4])() and the type would be int ( * [4])().

Which is, indeed, a bit weird.

(Note: Spaces around all the asterisks because that seems to be the least-bad way of making HN not interpret them as markup. It's quite impressive how damaging HN's markup feature manages to be given how little it's capable of.)

Good lord, you're right. I constructed the C syntax off the top of my head, and I knew about the declaration-mimics-use, so I started constructing it as if I would use it... I just somehow mixed up the [4] and the ().

C's function pointer and array syntax does make sense when you understand it, but even for experts, it can be difficult to get it right on the first go.

> There's no ->,

Well, there is, but it's more like ML’s -> than C’s ->.

“Easy to read” is relative to your audience. For our initial target audience, systems programmers, C style syntax is what’s easier for them to read.

We did diverge where we felt it was appropriate.

Which syntax do you prefer? Lisp or python or php styles? That is pretty much your answer: It's subjective.

Off the top of my head, I'd imagine that familiarity for the huge pool of C/C++/Java/C# programmers was probably a huge push for the chosen syntax.

> Why did they stick to old C syntax?

Did they? Rust always seems to me to be a pretty free mix of C, Ruby-ish, and ML syntax (and an amazingly well chosen mix, because that description makes me want to run away from it, but the reality is pretty nice.)

I'm not really going to complain about syntax, but if I were... the only thing I don't like is the Ruby vertical bars around closure formal arguments. ES6 smooth parens and double arrows seem so much more fitting to a function to me. (I mostly write Python). I guess double arrows are used in match and single arrow in function return types, but if there had been a way to use arrows for closures I'd have voted for that.

While there is some objective, measurable aspect to 'easier to read' it is mostly dominated by what someone is used to.

The only C style syntax that's used is block grouping (curly braces), and statement separation (semicolon). Unless you go for using whitespace to perform those functions, you're making fairly arbitrary choices about which symbol to use, and those two are familiar.

Most other syntax is really quite different:

* variable definition has a completely different order to it with (optional) type ascription

* function definition has some aspects of C++ (<> for types in generic functions) but much else is different - e.g. position of return type, where clauses, the 'fn' keyword, type ascription

I don't think I'd describe Rust's syntax as C-like. What gives you that impression? The only resemblance I see is curly braces. Rust doesn't even have a C-style for loop.

If anything, Rust's syntax can be confusing because the order of names and types is reversed compared to C. This is a trend that a lot of recent languages are returning to because it can simplify the parser.

Also backtick lifetime parameters, but that's a feature that no other mainstream language has, so it was bound to be a little confusing regardless of the syntax.

To be precise: lifetimes are denoted by an apostrophe. (Backtick can be used in comments for Markdown formatting.)

I would certainly describe Rust's syntax as being in the family of C-like syntaxes. Function calls use function named followed by positional args in smooth parens, curly braces for blocks, & and * for pointers/references, if/continue/break/while, semicolon termination, square bracket indexing of array/vec, etc etc. It's very obviously a descendent of the C syntax family.

Of course it has several critical extensions related to impl, generics, lifetimes, references, modules etc, partly influenced by C++ (e.g. generics syntax), Ruby (closure) and Python (self). It depends on your perspective, but if the scope is all programming languages, then Rust is syntactically very close to C.

Because audience. Sadly C/C++ dominate and that spread elsewhere.

However, "hard to read" also comes from the many concepts that Rust has; certainly the combination of traits/lifetimes makes some stuff very obtuse.

Not really into Rust (yet); just wondering why each cargo install takes forever (does it rebuild everything related from source?).

In any case, I feel Rust still compiles too slowly; if it had faster build times I might spend some time playing with it.

It only has to do that once, though; after the first build everything is cached. When you're working on code you don't even need to fully compile your own code every time.

Congrats! Like many I was looking forward to async/await in this release but I'm happy they've taken some extra time to work through any existing issues before releasing it.

With Rust can you cross compile for another platform?


- https://github.com/japaric/rust-cross (more involved)

- https://github.com/rust-embedded/cross (straightforward)

(Edited for formatting)

Or just `cargo build --target` if your distro has cross-compiler + library packages already, instead of needing to use cross's two-year-old Docker images.
