Microsoft: Rust Is the Industry’s ‘Best Chance’ at Safe Systems Programming (thenewstack.io)
523 points by adamnemecek 57 days ago | hide | past | favorite | 440 comments



A lot of 'anti-rust' sentiment here.

Personally, I think the article is somewhat correct. Well-vetted and tested C (or C++) is great, but even the people who are looking for these issues get it wrong sometimes.

Rust isn't perfect, and I think the negative reaction is people thinking that evangelists consider it perfect. But I certainly wouldn't call it perfect, and I'm rather fond of Rust.

That said, I still think it's the "best chance". That doesn't mean it's the only possible way to make headway toward safer systems, just that it has the best odds.

Maybe you can make D memory safe, but when I was getting into D development I couldn't even get a compiler working.

Maybe Ada is your favourite, but I don't know anyone writing Ada and integrating it with a C codebase.

--

Frankly, it comes down to the fact that Rust was designed specifically for replacing components of a large C++ codebase; that is why it will be adopted. If you can write a Linux driver in Rust and you don't have as many memory safety issues or concurrency problems (and there's no overhead), then that's reasonable.

Even if Rust never gets supported by the kernel's buildsystem officially or if the code can not be upstreamed.


"Anti-rust sentiment"?

I'm a Rust user, a fan of the language, and I believe it's a great step forward for systems programming.

However, it's kind of ridiculous how on any thread about Rust, anyone who brings up criticism, no matter how valid, gets downvoted into oblivion. Just look at what happened to the comments critical of Rust on this post.

The Rust community really needs to take care not to become an echo chamber.


I'm one of the leads of the Rust language and community, and we absolutely welcome constructive criticism in many different forms. That's part of how we improve the language. I regularly upvote comments that point out things Rust needs to improve, as well as linking them from appropriate github issues or Zulip discussions.

The whole "RESF" thing is not in any way welcomed by the Rust community and governance folks. It's not funny, it's harmful, and anyone posting it is doing a disservice to the language they purport to support.

To anyone reading this: please don't downvote constructive criticism of Rust. Save your downvotes for the comments that wish Rust wasn't as welcoming to everyone. ;)


constructive criticism in many different forms

The thing about constructive criticism, when it comes to something like programming language design, is that its usefulness varies inversely with the importance of its aim.

Core features of the language are highly important (obviously) but criticizing them is not very useful since they're highly unlikely to change. Thus, only the most peripheral and superficial parts (as well as new additions) are subject to valid, constructive criticism.

People who have criticism for (or are otherwise skeptical of) the core parts of Rust are not being constructive, then, and so these fights can break out.


I've seen (and participated in) useful, productive discussions about core aspects of Rust; occasionally such things can even be improved. It takes a lot of care to have a productive discussion on such things, but it can absolutely happen.


Constructive doesn't mean actionable. Not sure where that assumption comes from. It may be that only future languages can reap the rewards of a discussion on core features that are unlikely to change in Rust, but that's still potentially valid constructive criticism.


What does RESF stand for? I know of "Rust Evangelism Task Force" and I am trying to match against "Rewrite Everything ..." or similar, but I am lost...


I had to Google to find out... Rust Evangelism Strike Force https://twitter.com/rustevangelism


Same thing, "Rust Evangelism Strike Force".

I'm fairly certain it's a meme.


Rust as a language is great and at this point I'd much prefer to use it over C or C++ any day of the week if the application and context permit. However, I don't think I'd ever want to participate in the Rust community or really even have any contact with it. Everything I've seen of Rust's community has been deeply toxic and hostile to anyone who falls short of worshipping the language and aggressively promoting and defending it at every chance.


I'm really confused by your comment, because the experience I have had (and that surveys of my coworkers reflect) in the Rust community has been uniformly amazing: 1) the technical discussions are always respectful, 2) the in-person conferences have clear codes of conduct and the speaker panels represent the best of the engineering and human community, and 3) the community actively encourages newbie questions and goes out of its way to be supportive and non-judgemental when replying.

The only counterexample I can think of is the way the actix maintainer's use of unsafe was handled (brigading bands of Reddit evangelists flaming him for his own flaming). However, I chalk that up to an anomaly caused by Reddit's toxic community, not something representative of the Rust community.


I wonder which community you’re talking about. My experience is rather positive. The infamous Rust evangelization task force is just a bunch of fanbois to put in your killfile.


I’d be curious what your exposure to the Rust community has been and where?

Within the Rust reddit or forums, they specifically dissuade blind evangelism and there are often posts critical about the language that get quite a lot of traction.

Outside the main rust community, there are definitely zealots but they are usually not accepted by the larger community, and are usually called out.

I’m not saying it’s perfect, nor that there aren’t zealots who go overboard. However it’s probably one of the least toxic of the programming communities I frequent (swift, c++, Python, c# , go etc)


I think this is a combination of two things, and is true of many new programming platforms (languages, etc.).

1. The tech hasn't made full contact with reality yet. For instance, there isn't a diverse set of large projects written in Rust yet, so it's a little easier to imagine that it's perfect.

2. We should acknowledge that technology choices have a big impact on one's career. Sure, we are all software engineers that could work in a lot of environments; but we are simply more productive in the ones where we have some experience/depth. If someone is learning rust today, in many respects that's a bet, and they (subconsciously or consciously) have a vested interest in rust's success. More experienced people will usually have a more balanced risk portfolio, spread between proven tech and a few up-and-coming technologies. But for people earlier in their career, it might be a pretty major bet for them.


They’re also super weirdly political. As a not-especially-political guy I’m super turned off by all the “activism” I see coming out of the community, even where it’s totally irrelevant. I remember seeing a whole thread about how the rust mascot crab is canonically non-gender-binary or something. Wtf?


If discussing the gender of the crab is politics, then surely assuming the crab is one gender or another is also politics. The idea that you can avoid politics in anything is a lie told by people whose beliefs conform to mainstream society. What you're really asking for is everyone to agree with the mainstream view.

It's kinda like the old military policy of "don't ask don't tell" - the policy wasn't actually don't ask don't tell - the actual policy was "pretend to be straight". Guys didn't get in trouble for talking about their girlfriend.

Beyond that, as someone that doesn't care about politics, I just skip the thread. There are dozens I don't care to read about. It's just another one. No big deal.


> If discussing the gender of the crab is politics, then surely assuming the crab is one gender or another is also politics.

A) sane and professional communities just don’t talk about irrelevances like this at all, which is my point

B) if some fringe political belief rejects some standard bread-and-butter assumption, it’s not really useful to describe running with the default assumption as “political”

“If abolishing private property is politics, then surely locking your door is politics.”


If communities want to go down that road, fine, but then they need to get out of this weird limbo where some views are allowed and some aren't. That means that someone must be just as free to say, "I believe the rust crab should have a normal gender because non-gender-binary is not a thing." without getting banned or cancelled. Right now it's reverted to a very don't-ask-don't-tell-like environment where one can either speak in support of such things or not say anything at all, effectively pretending to agree just as in your military example servicemen had to pretend to be straight.


In my experience many people (unfortunately) generally feel free to disparage non-gender-binary concepts on internet communities


Maybe in other places on the internet, sure, but I was referring mostly to the code-of-conduct-gated development communities that pretend not to discuss politics while allowing specific viewpoints above others.


I can see a difference between light-hearted speculation about an imaginary crab's gender and saying that non-gender-binary people don't exist. Can you think of any reason why one of these might be allowed, while the other might not be?


I guess I can't, because they're both political statements. Politics is either allowed or not. I assume, however, you have some reason in mind, by how you phrased the question.


Interesting - well, honestly, not sure how anything wouldn't be political if discussing a crab is politics.

Not that that's necessarily wrong - I'm definitely sympathetic to the argument that the personal is political. But that argument would justify more technical communities taking political stances, not fewer.


Discussing a crab is not politics, but attributing to it a non-existent gender is intentionally contrary to most people's social beliefs (especially outside of the SV bubble) and is thus political. Calling it a normal boy or girl isn't.

If, however, some community members wish to do that, they open to others the option to speak their minds on the topic, including stating that such an action ought not to be taken because no such gender exists. Contrast this with how most "codes of conduct" are written: they would permit community members to speak in favor of such a decision but not against it, as doing so would be "hateful", "bigoted", or "exclusionary".


Don't think of it as a community being political. It's not like people pick a programming language and then suddenly one day decide to be political, in part due to their programming-language choices.

Think of it instead as "people who are political" looking for a community (not necessarily because of that community's politics, but often for orthogonal reasons, e.g. a language being relatively new and so not yet used by anything they dislike) and then those people being political about their programming wherever they land. If some percentage of programmers are like that, then just statistically, some language is going to end up with a lot of them, whether its maintainers wish to cultivate the community image as being such or not.


My first reaction was: why is a non-gender-binary crab mascot considered "political"? I'm assuming this came up incidentally in the context of its name and pronouns?

By this implied definition of political, more or less any human social interaction is political in the broader view.


It’s one example out of many where rust users bring a political topic (and yes, post-gender queer theory stuff is political) into a discussion where there’s no reason to do so except to leverage the discussion as a platform for boosting your political belief. It’s very unprofessional, very counterproductive, and a huge turn off for people who don’t have the same fringe politics.


You're right that it's primarily a social issue, but people will always try to use the state to enforce their social beliefs, so they become political. See New York City's ordinance that can cause those who won't use a made-up "pronoun" to be fined [0], undoubtedly a use of force by the state to enforce a certain set of beliefs.

[0]: https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...


I have to say my experience with the community has been really good. Whether it be the reddit channels, discord channels or conversations with people putting rust related stuff on youtube.

Maybe you had a particularly bad experience after someone mistook your behaviour for trolling (not that you would have been trolling). I just haven't witnessed any bad behaviour in the community yet, either towards myself or others.

If anything, I have appreciated how much support the informed members are giving to relative newbies like myself. Not only in advice, but in listening without ridicule if the speaker is not that knowledgeable on the topic of conversation.

@jasondclinton, I agree with you regarding the bad experience the actix project owner had over unsafe code use. It was sad to see that happen to an open source contributor (or anyone).


That is how the Mozilla community works and the kind of people they attract.


This was my opinion too. It might be a generalisation, yet it is the impression I, and it would seem others, have!


Then don't come near the Clojure community either, that's the most cultist one I have encountered.


That has a lot to do with Rich Hickey and Cognitect encouraging cultishness.


Would you care to elaborate? I'm very fond of the language but have found people to be generally pleasant. The only criticism I have is that it's a pretty small community, so finding fellow Clojurists tends to be hard.


(Not the person you responded to.) My direct interactions with the Clojure community have been very pleasant and helpful, and I have zero complaints there. But when searching around for common questions related to Clojure, I often run into answers along the lines of "Clojure doesn't have X because our language is so superior we don't need it", which does sound cultish.

For example, I was looking for GraphQL ClojureScript libs a few days ago, and found a thread where the first answers were essentially:

- "If you can’t find a framework that does it, chances are it’s so easy to do in Clojure/ClojureScript that nobody bothers to put an abstraction there"

- "Datomic has been doing what GraphQL offers plus much more since like 2013"

Both answers are a bit cultish. The first suggests a lack of enlightenment. The second suggests that salvation can be found by embracing the cult even further and locking yourself into a proprietary database invented by Clojure's author and used almost exclusively by Clojurists.

(Edit: This thread was from 2017. Such answers may have become more rare since then.)


The second was probably me. Sorry about that. My words reflect me as an individual who is sometimes socially challenged, not the wider Clojure community.


Ouch—I didn't link to the thread in my previous comment specifically because I didn't want to point fingers at people who were trying to help back in 2017, even if I disagree with their answers, but here you are anyway! For what it's worth, I'd rather have more people in the Clojure community than fewer, even if I don't always share the same viewpoints.


At least they don't bring politics in it.


I think the downvoting is pretty ubiquitous across HN nowadays. This will also happen with any criticism of Apple, for example. Most of HN is an echo chamber, as evidenced by the often off-topic posts and discussions that reach the front page. I still come here for insight, but I rarely have in-depth discussions here because HN and the crowd simply no longer cater to that kind of discourse for the most part. This is probably due to its increased popularity.


where do you go instead?


I just started having the conversations with my peers and colleagues instead. The net benefit of getting upvoted here is minimal, and so are the social/networking aspects. In-person convos really can be much more rewarding and fruitful because you can focus them, whereas on HN some person who is offended can come along and derail and kill the conversation. I'm honestly more jaded about online interaction than most, though.


If you ever need a penpal, hit me up.



> anyone who brings up criticism, no matter how valid, gets downvoted into oblivion.

When I made this comment there was nothing I could call "valid" criticism.

There are valid criticisms, though: lack of dynamic linking, and immaturity (many crates only building on the nightly channel, with no easy way to tell whether there is a version buildable on stable), are the large weaknesses I can think of.

I know these and I'm not a full time rust programmer.


Dynamic linking is quite possible. You have to use the C ABI however when interfacing with code that's not part of the build.
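As a minimal sketch of that approach (the function name is hypothetical), a Rust crate built as a `cdylib` can export C-ABI symbols that other code links against dynamically:

```rust
// Hypothetical exported function. In Cargo.toml this crate would set
// crate-type = ["cdylib"] so the output is a shared library.

// #[no_mangle] keeps the symbol name predictable for the dynamic linker,
// and extern "C" pins the calling convention to the platform's C ABI.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    // Wrapping add keeps behavior defined regardless of build profile.
    a.wrapping_add(b)
}
```

A C caller would then declare `uint32_t add_u32(uint32_t, uint32_t);` and link against the resulting shared object as usual.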


We do this _all the time_ as we write almost all of our software in Rust, but lots of it is for use with/in embedded systems developed in C.

It's true that you can mate up to C by exporting the C ABI everywhere. This isn't a free lunch though. There are annoying and/or weird side-effects you sometimes have to deal with when leaning heavily into doing this.


> This isn't a free lunch though. There are annoying and/or weird side-effects you sometimes have to deal with when leaning heavily into doing this.

as a person not super-experienced in rust, im curious what kind of side-effects? binary size bloat?


That works in theory but often not in practice. The problem is that many C libraries keep their API backwards compatible but not their ABI. For example, a function might become a macro or the size of a struct might change.

It is possible to keep the ABI backwards compatible by practicing good C hygiene, but many library authors don't know how. This is especially true on Windows, a platform most C library authors are not intimately familiar with.
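To sketch that hazard in Rust terms (all names here are hypothetical): a hand-written FFI binding freezes the C struct's layout on the Rust side, so an "API-compatible" change on the C side can silently break it:

```rust
// Hypothetical hand-written binding to a C library's config struct.
// The layout is frozen here: if the C library later adds a field in an
// "API-compatible" update without bumping its SONAME, this definition
// silently mismatches the new ABI at runtime.
#[repr(C)]
pub struct FooConfig {
    pub flags: u32,
    pub timeout_ms: u32,
}

// The binding would then declare something like:
// extern "C" { fn foo_init(cfg: *const FooConfig) -> i32; }
```

Tools like bindgen regenerate such definitions from headers at build time, which converts the silent runtime mismatch into a recompile, much like the Linux package-archive situation described below.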


Dynamic linking requires a backwards-compatible ABI essentially by definition. If the ABI changes, that involves a bumped SONAME and version number hence binary objects should never end up being linked in incompatible ways.


Few library authors understand that. In recent years libmq, libpcre, and libssh have all introduced ABI breaking changes without bumping the SONAME. Unless you have very detailed knowledge of how dynamic linking works on all supported platforms it is very hard to know what constitutes an ABI break.

The reason it mostly works on Linux is because header files are available. For example, if a function in library A that program B depends on is changed to a macro then all that is required is for B to be recompiled. Which B almost always is because most users get their software from package archives that are recompiled when dependencies are updated.

On Windows there is no such thing as an SONAME, and Microsoft has essentially given up on ABI compatibility. If you write C or C++ in Visual Studio you must ship it with the language runtime DLLs for that version of Visual Studio. Microsoft doesn't guarantee ABI compatibility with any other version of the runtime.


It looks like the Windows equivalent is called a "side-by-side manifest". It has to be opted into from both the shared object (.DLL on Windows) side and the loading application, though. On Linux, it just works: it is idiomatic to include the major version in the SONAME.


libc usually does pretty well, FWIW.


> The Rust community really needs to take care not to become an echo chamber.

Exactly. The language has proven to be stable; however, a valid concern is the unsoundness risk a project takes on when it begins to depend on lots of third-party crates.

The moment one imports a crate or third party dependency into their project, unless the dependency explicitly specifies the lack of unsafe{}, all bets are off on the 'safety guarantees', especially when 'that crate' is used in multiple projects.
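One partial mitigation, for what it's worth: a crate can make its "no unsafe" claim machine-checkable with a crate-level lint that the compiler then enforces:

```rust
// Placed at the top of lib.rs, this makes any `unsafe` block or function
// anywhere in the crate a hard compile error, so downstream users get a
// compiler-enforced guarantee rather than a README promise.
#![forbid(unsafe_code)]

pub fn double(x: i32) -> i32 {
    x * 2
}
```

This only covers the crate's own source, of course; its transitive dependencies still have to be audited separately (e.g. with cargo-geiger).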


"All bets are off" is an insane way to characterize this. If a project is 99.99% safe code and has one unsafe dependency, that is a massive difference from a project that is 100% unsafe. The surface area of the project that may be vulnerable is going to be relatively tiny.

As for an echo chamber, yeah the Rust community downvotes way too much. I find myself upvoting just to counter it.

But the downvoted comments here are legitimately bad. Someone being pedantic about the 70%, and being wrong, and another person calling this propaganda.


Rust doesn't check for numerical over- and underflows unless you run in debug mode, which almost no one does because it is about two times slower than release mode. I.e., Rust doesn't live up to the safety standards many programmers coming from dynamic languages are used to, where over- and underflows are checked.

Many people in the Rust community appear to be unaware of this, which is actually a big deal. Many of the bugs with the most catastrophic consequences are caused by over- and underflows. Consider, for example, a gas pedal indicator kept as an unsigned integer that underflows, causing the indicator to jump to its maximum value.

If safety was my primary concern, I wouldn't choose C++ but I wouldn't choose Rust either.


You can pass "-C overflow-checks=on" to rustc to generate an optimized build with over/underflow checks enabled, even in a release build. Also, note that over/underflow behavior is defined to always be two's complement wrapping when overflow checks are disabled.
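Beyond compiler flags, the integer types also expose explicit methods that pin down overflow behavior regardless of build profile; a small illustration:

```rust
fn main() {
    let x: u8 = 255;
    // These behave identically in debug and release builds:
    assert_eq!(x.checked_add(1), None);   // overflow reported as None
    assert_eq!(x.wrapping_add(1), 0);     // two's complement wrap
    assert_eq!(x.saturating_add(1), 255); // clamp at the type's maximum
    println!("explicit overflow handling is profile-independent");
}
```

Using these in safety-relevant arithmetic (like the gas pedal example above) removes the dependence on whether checks were compiled in.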


> Rust doesn't check for numerical over and underflows unless you run in debug mode which almost no one does because it is about two times slower than release mode.

Totally, I wish Rust could express over/underflows better!

> Rust doesn't live up to the safety standards many programmers coming from dynamic languages are used to where over and underflows are checked.


Yeah, that's possible. Though it won't lead to unsafety, it is definitely a gotcha.

> Many people in the Rust community appear to be unaware of this. Which is actually a big deal.

Maybe, not sure.

> If safety was my primary concern, I wouldn't choose C++ but I wouldn't choose Rust either.

Really depends on the goals and constraints, of course. I'd be really concerned about using rust where an integer overflow could kill someone. I'd be really happy to use rust where a program crashing is totally acceptable, but exploitability isn't.


Rust is memory safe, not infallible. Catching more runtime errors is something Rust can improve on; Erlang is one example of a strategy.


> "All bets are off" is an insane way to characterize this.

Well, it depends. If the unsafe{} code is also unsound, then "all bets are off" is quite correct because you can't assume memory safety or lack of UB, even in your own code. In practice, you can use cargo-geiger to warn about crates that internally rely on unsafe code, and cargo-audit will warn about known security issues (including but not limited to unsoundness and memory-unsafety).


> then "all bets are off" is quite correct because you can't assume memory safety or lack of UB, even in your own code.

The problem with "All bets are off" is it's not meaningful. What does that mean? It's so vague.

Lots of unsafe code is safe. Lots of UB is not exploitable. I can link to a vulnerable crate and not be vulnerable.

Can there be an exploitable bug in rust code? Yeah, sure. But saying "all bets are off" gives the impression of "why even bother if you link to unsafe code", which is absurd. Yeah, we should strive for more safe code, audit unsafe code, be very cautious about using it, but "all bets are off" come on.


> Lots of unsafe code is safe.

This seems like a weird statement to make to me when defending Rust simultaneously. The whole selling point of Rust is that it's supposed to let you write provably safe code. When "unsafe" is used, what you're writing can no longer be proven as safe by the compiler. Maybe you've convinced yourself it's actually safe via some other argument, but people have been convincing themselves that their code is memory safe for decades and they're often wrong. More insidiously, relying on anything in Rust that uses "unsafe" makes it seem like the compiler is proving your own code is safe, but that is no longer true.

With that said, I don't really disagree with what I think your main point is. But seeing unsafe code in Rust is similar to seeing unsafePerformIO in Haskell libraries. Both of these are much more common than the languages' respective evangelists would have you believe, and at some point it actually does start to call into question whether the languages are successful at their basic design goals. (Spoiler: I believe that they are successful more than they are not, and even in the face of the proliferation of "unsafe" code in both languages, they're still worth using.)


> The whole selling point of Rust is that it's supposed to let you write provably safe code.

I disagree. A significant purpose of rust is to encapsulate unsafe code, and provide safe abstractions. Lots of unsafe code is safe - most of it, even, I'd bet.


> Lots of unsafe code is safe

I mean this with the greatest possible respect, but you replied by simply restating verbatim the quote that I originally responded to, so I'll just point you to my previous post.


`unsafe` the keyword vs unsafe the concept. Lots of `unsafe` code is safe when used through a safe wrapper. Verifying that is the duty of the programmer. The compiler can't prove it, but quite often the programmer can.

IMO it's good practice to provide a comment with such proof for every `unsafe` block.

So `unsafe` does not imply unsound, but unsound does imply `unsafe`.
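A small illustration of that practice (the function is hypothetical): the safe wrapper establishes the invariant, and the SAFETY comment records why the `unsafe` block cannot misbehave:

```rust
/// Returns the first element, or 0 for an empty slice.
/// The public API is safe: no input can trigger undefined behavior.
pub fn first_or_zero(v: &[u32]) -> u32 {
    if v.is_empty() {
        0
    } else {
        // SAFETY: `v` is non-empty (checked above), so index 0 is in bounds.
        unsafe { *v.get_unchecked(0) }
    }
}
```

The `unsafe` keyword here marks exactly the span an auditor has to read; everything outside it is still checked by the compiler.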


This is what I was trying to say. Unsafe doesn't imply exploitability, at all.


Sorry if I misunderstood your post then.


The compiler has very limited "proof" capabilities even in principle, especially wrt. unsafe{} code. From a principled POV it would be nice to be able to provide a formal, compiler-checked proof that some block of code that has to rely on "unsafe" operations doesn't actually violate the language safety guarantees and can be wrapped safely. But this will have to depend on a formal model of "unsafe" itself which is still lacking. And it wouldn't even help for things like FFI, which involves calls into outside code for which the semantics are not formally known.


Well, there are two interpretations of "all bets are off". One is the runtime one: if you have a bug in unsafe code, then it is possible that all the other code in your program is safe but you're still totally out of luck, because it can invalidate every guarantee. However, from an auditing perspective, the only code you need to look at for memory safety issues is that particular snippet, so you're in a much better position from that perspective.


"All bets are off" is not an engineering way to look at the problem imho. You alter your risk tolerance with the benefit gained. It's just trade-offs and the ability to do that is good.

I'm not going to argue you should use rust or not use rust but I think the state space is not binary and people reading will benefit from that.


This is true, but it's also something the Rust community discusses all the time, along with what they can do about it. It seems like a huge leap to the conclusion that the Rust community is an echo chamber because some middlebrow dismissals got downvoted on Hacker News.


Even legitimate criticisms get downvoted or countered with dishonest marketing speech like 'borrow checker doesn't matter'.


IMO crates are the biggest danger to Rust. It may very easily turn into the next NPM.


I think an even more problematic corollary is the desire to put nothing in the standard library, which causes the proliferation of crates that are the "standard" way of doing things but don't really have the guarantees/benefits that come with being part of "the" standard library.


Yeah, I agree with this.

I see it kind of like Go's lack of generics, in a strange way. With Go, no one was explicitly "anti-generics", it's just the core team couldn't find a design they liked, and as time went on it became sort of a meme. "Go is opposed to generics." "No, here's that blog post by Russ Cox about finding a way to implement them." Etc, etc.

Similarly, the Rust core team was afraid to settle on designs for certain libraries, pointing to cautionary examples like Python's various standard-library HTTP modules, and at least "for now" shunting a lot of functionality out into the ecosystem. When you have limited manpower, this makes sense, but I think a lot of people have since rationalized that decision as philosophically the right thing to do, not just a way of prioritizing development work.

In other words, I feel like it's a case of a smart tactical decision growing into an identity.


The saddest example I’ve come across so far is the lack of strftime, or any way to even print a clock time / date time in std. Even JavaScript has that. In a language supposedly way past 1.0, one has to rely on 0.x third-party libraries to do even the most basic things.

Disclosure: only dabbled in Rust a bit, currently maintain two crates of some traction.
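For context, this is roughly the extent of what std offers for time: seconds since the epoch, with no calendar or strftime-style formatting (hence the reliance on the 0.x crates mentioned above):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    // std can give you a raw count of seconds since the Unix epoch...
    let secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before 1970")
        .as_secs();
    // ...but formatting it as a human-readable date requires a
    // third-party crate (e.g. chrono or time, both pre-1.0 at the
    // time of this thread).
    println!("unix time: {}", secs);
}
```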


The rand and regex crates are two other examples. I would also prefer to have bindgen in the standard library.

I’d love to use Rust in the OS I work on, but integrating our build system with Cargo and dealing with external crates is a huge obstacle.


> I would also prefer to have bindgen in the standard library.

That's a challenging one. I'd like to have native support for C integration as well, but we'll have to balance that with the stability of the corresponding clang interfaces, and the requirement to ship extra libraries that the Rust toolchain doesn't currently ship.

Working on it, though.

> The rand and regex crates are two other examples.

rand I'd agree with, though it would need paring down to not have as many dependencies. For regex, there are multiple tradeoffs to be made; for instance, the most popular regex crate is fast and does most of what people want but doesn't have backreferences or lookahead/lookbehind assertions.


Question: is bindgen std quality yet? It seems there are still largely unsolved problems, e.g. https://github.com/rust-lang/rust-bindgen/issues/1549 which is causing hundreds of warnings in a -sys crate of mine and for which I have no recourse.


> the requirement to ship extra libraries that the Rust toolchain doesn't currently ship.

Ah, that's something I did not consider...


I think the issue there is that versions are arbitrary, and being able to tell when a library is "1.0" is an undecidable problem. There are lots of 0.x crates which are stable and feel feature-complete but aren't specifically versioned as 1.0.

I would actually prefer it if the Rust std did not include functionality that's not essential; for example, sockets don't seem essential, and the standard library does not implement them in a particularly complete or well-thought-out manner either, IMHO.


It would indeed be nice to have some sort of middle ground, where a selection of outstanding-quality crates can be officially "endorsed" by the Rust development team, but the endorsement can be withdrawn freely if something better comes along. C++ has something like this via Boost, and Haskell has their Haskell Platform, so it's not a new idea.


Actually, I am looking for more than this because I think Rust actually has a number of libraries that I think function fairly similarly to Boost. What I really want is the language committing to a library that is available without a third-party dependency, basically under a "linking exception" to be extremely close to "use however you want", supported by the people developing the language, designed alongside the language to remain ergonomic, with support that is tied to the language, … Personally, I consider this to be a very valuable feature from a programming language. Having a "pretty good" solution available "for free" is immensely valuable compared to having to go look online for every little thing, evaluate the landscape, audit it, ensure the licensing matches, consider whether it will remain supported, and try to work around its idiosyncrasies.


The Rust teams do maintain some crates outside of the standard library.


I think the proposal is specifically to bless a set of crates (that may include those developed by Rust teams, but probably also others such as regex).

Being in std implies some kind of quality standard, but it also implies no backwards incompatible changes. A set of libraries blessed by a trusted group could give you the quality standard without constraining backwards compatibility.


Regex is one of those crates.


No, they don’t. If the Rust team maintains a crate officially and they will always do so (for reasonable values of "always"), then it is part of the standard library. Otherwise, it isn’t.

It is as simple as that. When serious, important projects depend on a standard library, it is because they want those guarantees. Having some members of Rust work on some crates does not give anyone those guarantees.

Which is why all those crates must be in std, or declare that the standard library is a set of crates (including core, std and others).


I don’t see how that follows. Rust could easily have a (small) standard library that is guaranteed to be present everywhere and any of a number of optional libraries that are maintained by the same team that builds the compiler and the standard library.

That’s what Java spent years getting to (https://www.oracle.com/corporate/features/understanding-java...), and I think that’s the way to go for any language. It prevents the standard library from bloating, yet provides quality libraries that can be trusted. It also allows some implementations to be standard without providing all the bells and whistles. That can be useful on small platforms, and allows for ‘heavier’ optional libraries (for example an SVG renderer, PDF parser, etc.)


... I don’t see how that possibly follows. Not everything the project does is the standard library. The compiler, rustup, all kinds of tools.


> [crates may turn into the next NPM]

Can't cargo refer to arbitrary git repos or even just plain directories? I don't see how it's any different from C or C++ in that case.
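It can; all three forms are valid in a `Cargo.toml` (the `my-lib`/`helper` names and the URL below are made up for illustration; `serde` is a real crates.io crate):

```toml
[dependencies]
# A published crate from crates.io (the npm-like path)
serde = "1.0"

# An arbitrary git repository, optionally pinned to a branch/tag/rev
my-lib = { git = "https://example.com/my-lib.git", branch = "main" }

# A plain local directory, no registry involved
helper = { path = "../helper" }
```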


It can, but then so can ruby's Bundler and elixir's Hex. But neither of them have the same issues as npm. I think npm's issues arise from a culture of making lots and lots of tiny dependencies, probably spawned by JS's anemic standard library.

At work, we have a number of Phoenix web apps with JS frontends. Looking at one randomly, there's... checking... 48 elixir dependencies and... checking... 911!! JS ones. I have to spend literally hours every week keeping those stupid JS dependencies up to date across our apps. The churn is enormous.

I think the fear is that as long as rust keeps its standard library small, a culture may develop to have more and more in the crate ecosystem, which proves to be a huge maintenance burden, as those are freer to update in breaking ways, and cross-depend on each other.


I think NPM's obsession with small libraries comes from the lack of a decent optimizing compiler that can remove/not-include unused code, combined with a deployment scenario (frontend) where code size matters.


It can, yes.


Except npm has an order of magnitude more users.


What part of the Rust community are we talking about here? The parts I've interacted with (mostly a local meetup and the Rust Discord) have been great and don't feel like an echo chamber.

Do you just mean Reddit and Hacker News posters? Because I find calling that part of the community pretty absurd; that's just lowest-common-denominator internet yelling.


What would you have happen instead?

IMHO, on an ideal platform for debate, the audience's experience of the debate should itself resemble the statistical distribution of opinions held by the debaters.

I.e., if 80% of people think X while 20% of people think Y, then comments saying Y should not make up 90% of the debate by any metric (number, word-count, above-the-fold representation, etc.) If Y did take a majority share by any metric, that'd be a misrepresentation of the debate, that is likely to cause the audience to come away with a misunderstanding of the issue.

I don't know how close we are to an "ideal platform for debate" on HN (probably not very)—but I feel like a certain degree of people reflexively downvoting contrarian opinions (no matter the subject) probably brings us closer to, not further from, that point. Contrarian opinions should not be presented as if they were orthodox opinions. They should be given some representation—I'm not advocating for censorship—but they should only be highlighted in any algorithmic way, to the exact degree that they're recognized as something some appreciable group of people really believes.


I think this is wrongheaded, for the simple reason that whether an idea is contrarian or not is irrelevant to its merit as an idea. Misrepresentation is only possible through sophistry, not through volume, or even placement. Therefore, it makes sense to downvote only things that are 'sophist', in the sense that they contain no kernel of communication, but rather, just obscure or muddy - not things that are contrarian. Indeed, to do so is in itself an attempt to cloud and thwart communication, as you're manipulating placement and representation to make arguments less or more present, independent of their actual content.


This presumes that people have infinite time and motivation both to debate, and to read the ensuing infinite debate.

Rhetoric works to win debates because debates are time-limited (and the audience's attention is both time- and attention-limited), and so you can win just by e.g. preventing your opponent from ever getting a word in edge-wise, or forcing them to answer for trivial side-issues and therefore never have time to say what they really want to say; etc.

What I'm railing against here is the https://en.wikipedia.org/wiki/False_balance: the rhetorical rearrangement of non-equal evidence into a debate presented to the audience as if each side had equal support behind it.

And, make no mistake, that sort of rhetorical rearrangement is what discussion forums with comment voting are doing when the community rearranges the comments with upvotes/downvotes: they're curating which comments appear first (and thus which comments appear at all, to those without interest in reading the whole page.) They're changing the impression of aggregate weight-of-evidence someone gets by skimming the debate.

But my claim is that this is not a bad thing, when it is done in the goal of presenting the debate accurately. If one side of a debate has little evidence, but more voices, those voices should be rhetorically damped down to compensate, until the force of their combined voice matches the volume of the evidence on their side. That is the proper goal of editorial curation in e.g. news journalism: to let authoritative primary sources speak loudly, secondary sources quietly, and kooks and rumormongers not-at-all. And it should, IMHO, be the proper goal of a community's editorial curation of a comments section, as well.


I agree that rhetorical damping is necessary - I just think that it should be done on how much a given post contributes to a discussion, not by how much it contributes to the majority opinion's side of a discussion.

I think Carl Schmitt, for instance, should have been put in prison. His ideas are horrible. I still think they are worth reading.


I think this will get better as Rust becomes more mainstream. At the moment, I would guess that most developers who use rust do so by choice. By contrast, many developers who use C++, Java, JS, or even Go (see docker/kubernetes ecosystem) have little say in the matter. It's not that people don't like these languages, it's that they've been successful enough that some people, who aren't crazy about them, are writing code using them.


It seems to me some people are genuinely excited too much about it and some people are trying to show off that a 'hard' language is easy for them.


> The Rust community really needs to take care not to become an echo chamber.

This is a natural consequence of the restrictive rules imposed on the rust community.


> the restrictive rules imposed on the rust community.

Can you be specific?


Not to speak for GP, but perhaps they mean https://www.rust-lang.org/policies/code-of-conduct ?

> Respect that people have differences of opinion and that every design or implementation choice carries a trade-off and numerous costs. There is seldom a right answer.

> Please keep unstructured critique to a minimum. If you have solid ideas you want to experiment with, make a fork and see how it works.

I am not "anti-CoC" but I do see how the particular wording of this one could be interpreted to silence pretty much any technical discussion that someone doesn't like.


I think many in the rust community would feel the opposite. Technical conversations, especially tough ones, are far easier when you come in with some enforced civility. More people can contribute their differing technical opinions, not fewer.


I work with autistic people.

Not as a doctor, but as software engineer, I teach programming to autistic people that work in the field and are hired because they are autistic, not despite it.

This particular company looks for them, trains them and put them to work on software projects, just like any other software house would do.

One thing they are often bad at is "enforced civility", not because they are uncivil people, but because their thought process is different from ours, and forcing them to adhere to some rigid form of presenting opinions that has no other use than enforcing a rule for its own sake bores them in the best of cases, makes them angry in others, and makes them uncomfortable in general.

You shouldn't decide how people in a community interact with the community, you should value their contributions and just that.

Civility can be enforced of course, but post-facto, after further investigation.

Doing it preemptively in the COC sounds bad to me.

But I could be wrong.

BTW I have interacted with the Rust community and have been downvoted every time I've said something along the lines of "maybe it's not the silver bullet".

So my experience says less people can contribute.


I don't understand why an autistic person would be incapable of expressing a technical issue in a way that isn't sexist/racist, etc, which is basically 99% of compliance with a CoC.

Downvoting has nothing to do with the CoC.


Autistics are not neurotypical

The Rust COC is so vague that anything can be flagged as "non welcome"

Nobody said racist or sexist and I don't understand where it's coming from

If they don't want sexism or racism, they should write sexism and racism

An autistic person will react differently from a neurotypical one when confronted about manners, in a way that the COC forbids

Communities should be goal based, tribes are based on rules and enforcing them


> Nobody said racist or sexist and I don't understand were it's coming from

> We are committed to providing a friendly, safe and welcoming environment for all, regardless of level of experience, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, nationality, or other similar characteristic.

"Non welcome" doesn't appear in the CoC. The most vague part, I would say, is:

> Please be kind and courteous. There’s no need to be mean or rude.

The bar is extremely low here. No one is saying "you have to say please and thank you or you're banned"; it's more like "don't personally insult people to an extreme". That could be more explicit, since some people may have a harder time interpreting it - seems like valuable feedback.

> An autistic person will react differently from neurotipical when confronted about manners, in a way that the COC forbids

I am quite sure that many members of the rust community are far from neurotypical and they do fine. Autistic people are capable of technical conversations in which they are not attacking people.


> on any thread about Rust, anyone who brings up criticism, no matter how valid, gets downvoted into oblivion. Just look at what happened to the comments critical of Rust on this post.

That's just not true.


> Well vetted and tested C (or C++) is great

The review process and standards maintenance must be very strict.

Otherwise, as someone said, "A C compiler is a mechanism to turn coffee into security advisories".


There are actually lots of Rust supporters on HN (and wherever else you'll find people who actually know what Rust is). This extra hype made people sensitive for possibly two reasons: too much coverage (re-write everything in Rust), and the fear of realizing that Rust is going to take over some day.

Yes, I'm a Rust moon-boy ;)


> extra hype made people more sensitive..

Sure, because you people talk about it literally everywhere.

> other one is the fear of realizing that Rust is going to take over some day.

It is not. It is a good systems language, but not so much for application domains where other languages get shit done with half the cognitive load. Just because the Rust community likes to promote Rust for everything, and engages in dishonest marketing claiming the borrow checker is not a problem, doesn't make Rust the optimal language for application domains where high guarantees and low resource usage aren't mandatory.


Or we've been around long enough to hear the same bullshit (a technical term) enough times that we are allergic to how different it is this time.

I've yet to hear why Rust is better than Ada, and I caught the tail end of that madness.


I know very little about Ada, but googling it leads me to understand that freeing memory in Ada is considered unsafe. If that's correct, safely freeing memory (generally in a destructor, much like C++) might be one advantage of Rust.
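A minimal sketch of what I mean (standard Rust; the deallocation is emitted automatically at end of scope, so there's no separate free call to get wrong):

```rust
fn main() {
    {
        let b = Box::new([0u8; 64]); // heap allocation, owned by `b`
        println!("len = {}", b.len());
    } // `b` goes out of scope: the compiler emits the deallocation here,
      // much like a C++ destructor running; nothing to forget or double-free
    println!("after scope");
}
```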


I think a good insight of Rust and other static languages is that they move a lot of testing into the language itself. I once saw an extension of the testing pyramid that added 'static analysis', and I think that's a good way to think about it.

With more static analysis baked into the language itself, though, comes the trade-off of more restrictions and longer compile times.


If I might ask, when was the last time you gave D a shot? Available documentation has improved recently, though I think there's a big need for improved tooling docs; in my case, it took forever for me to figure out how packages work with `dub`.

However, if you can get past the startup barrier, D is a real pleasure to work with.


> Even if Rust never gets supported by the kernel's buildsystem officially or if the code can not be upstreamed.

That is not how the (Linux) kernel likes to work…


What do you mean? I have Rust working in Kbuild; I haven't published the patches. For this year's Linux Plumbers Conference, I've submitted a proposal to discuss the barriers to adoption within the kernel proper. Some very senior kernel devs are supportive.


You have it working, but if you're writing drivers and stuff the kernel might change out from under it and you'll have to continuously keep it up-to-date if it's not in-tree, right?


Yes; but what I'm talking about (Kbuild) is in-tree support for Rust. Bindings are regenerated when the kernel is built.

As far as "how the Linux kernel likes to work" either it's in tree, or it doesn't matter. (Not my opinion per se, IDC, but look at upstream's collective treatment of ZFS and other out of tree drivers in general; they remove APIs they know external modules are using). I do generally find out of tree drivers to be a PITA to build outside of the kernel, and sometimes find curious things in their build scripts. Updating NVIDIA drivers seems to be...not great.


> A lot of 'anti-rust' sentiment here.

"here" meaning where? Can you point to it, because there's not "a lot" on this thread.


Rust makes sense for replacing C++ user space programs. I don’t see how it’s relevant to the Linux kernel at this point.


Due to its strong and simple C bindings, there's no reason new kernel modules can't default to being written in Rust, and allow Rust to spread from the "outside in" to the heart of the kernel. I don't advocate wide-scale rewriting of the kernel, but I do think default-to-rust for new kernel components and modules is rational and improves the system.
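For anyone who hasn't seen it, the C-bindings side really is this direct; a minimal sketch calling libc's `abs` from Rust (a real C function, linked in by default on common platforms):

```rust
// Declaring a C function so Rust can call it directly. `abs` comes from
// libc, which Rust programs already link against on common platforms.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    // The call is `unsafe` because the compiler cannot verify that the
    // C side upholds Rust's invariants; the declaration itself costs nothing.
    let x = unsafe { abs(-3) };
    println!("{}", x); // prints 3
}
```

The same mechanism works in reverse (`#[no_mangle] pub extern "C" fn`), which is what lets Rust code export kernel-style C entry points.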


A mixed-language codebase will always be more complex than a single language one. So it's not just a matter of Rust integrating well-enough with the rest of the kernel, but it has to offer enough improvement for the cost to be worthwhile.

So to evaluate whether it's worth introducing rust, you'd ideally want to find a self-contained part of the kernel that is responsible for many CVEs and you'd want to show a substantial reduction of CVEs with the rewrite of just that component. Not impossible, but also not so easy.


The kernel is required to be compilable with very old versions of GCC. So unless all users/companies agree on lifting that restriction, there is not much to do. rustc is not stable nor old enough.


So it's currently GCC 4.6+, with patches queued to bump that to 4.8+. Also, you can use Clang. So I'm not sure what rustc being used for some parts of the kernel has to do with that restriction.


Via LWN, looks like the GCC 4.8+ patches got merged into the kernel's 5.8 merge window recently, with a footnote suggesting this be bumped to GCC 4.9+.

https://lwn.net/Articles/822527/

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


What about a Rust to C translator? It would produce "safe" C right?


You would also need to comprehensively annotate the Linux C headers in order to provide the information that Rust expects (e.g. for each ptr instance whether it's a "raw" pointer, a shared/exclusive reference or a piece of "owned" data, whether it can be NULL etc. etc.). This might actually get accepted upstream though. Having the kernel-build itself depend on Rust could be far more problematic, given e.g. the variety of hardware that Linux must support.


I would think such a thing would be more like "compiling" rust to C. As in the C code would have no means to maintain most of the safety guarantees, rather they would be verified at compile time, and the produced C would be meant to be used unchanged. So as long as nobody ever edits the C manually, it would still presumably be safe.


That's what I really meant. Write it in Rust but then output C instead of ASM. The parent is right though. A component/driver would need to use the C headers from the Rust side.


Not only that, but Rust depends on LLVM.


Last time I looked at this comment it was flagged [dead]; I'm glad someone was able to remedy that as I was unable to upvote.

I think this is a good criticism, but might not be true forever and is certainly something to keep in mind.

As I said in the GP, Rust will not be upstreamed into the kernel- this is one of the reasons why.


If a comment is dead that you think is worthwhile, you can follow the permalink to it and vouch for it. I imagine that feature is gated behind some point threshold, but my guess is whatever that threshold is you're past it.


Oh, nice, thanks for that. :)


It doesn't have to, but yes, it currently pretty much requires LLVM.


https://lwn.net/Articles/797828/

Perhaps the way to do it is to rewrite common bits that get CVEs, just so it doesn't happen again.


The output of the translator would be right, but messy, and someone would have to go back and clean it up. Doing that work would almost certainly introduce bugs. How many, and whether the safety gained outweighs them, is the question.


There is a lot of work being done on a translator now [0]. However, the goal is to translate C to equivalent Rust and then allow developers to intervene to migrate to safe Rust.

[0]: https://c2rust.com

EDIT: Oops, read the parent comment backwards.


That's C to Rust, not Rust to C. (mrustc provides something like the latter, but it doesn't actually support full Rust.)


We just saw news that Firefox is reusing Chrome's regex C++ code, so from my point of view Firefox is barely pushing for Rust components, yet Rust fans expect the kernel to push harder. IMO, when Firefox and all its components/dependencies are Rustified, then I think we can ask the kernel to attempt the same.


One component choosing to use a mature implementation in another browser instead of a rewrite from scratch in rust means very little.

We can look at actual statistics instead of just one random anecdote. For instance, according to [1], there are 2 million lines of Rust code in Firefox, 6 million lines of C++, and 3 million lines of C. Very roughly speaking, that's 20% of the low-level-language lines being in Rust.

Firefox only started requiring rust to be built about 3 years ago, this is a pretty breakneck pace of replacing C/C++ with rust.

[1] https://www.openhub.net/p/firefox/analyses/latest/languages_...


I personally would also check whether the libraries Firefox uses are Rust. I mean, browsers these days are huge; you have image, audio, video, WebGL, PDF and a ton more features, so you need all the dependencies to also be memory safe, especially the ones that handle untrusted input.


Firefox's JS engine is in C++ even though Rust has been available for years now; they've said more than once that they have no plans to port Spidermonkey to Rust.

The most recent information I've found is from a Spidermonkey Dev here on HN who said they might even port the Regex engine to Rust

That didn't happen either

https://news.ycombinator.com/item?id=18988024

Something tells me that there's more than meets the eye.


It seems unlikely to me that there is any great conspiracy at play here... they found a good solution to a problem, and they implemented it. Spidermonkey is a highly optimized, carefully examined code base; it's not easy to replace wholesale, nor is it particularly low-hanging fruit to be replaced with rust.

Mozilla is working on replacing the wasm compiler, and eventually the compiler for "ion MIR" (an intermediate language that JS is compiled to), with cranelift, a rust implementation of a compiler backend. See [1] for pretty pictures of this.

It's been possible to use this backend to some extent since 2018 [2]. The best place I was able to find in the last 5 minutes to describe the current state of the integration is the doc comment at the top of this file [3].

In other words it looks like spidermonkey is not not being ported, it's just not being entirely ported yet.

[1] https://github.com/bytecodealliance/wasmtime/blob/master/cra...

[2] https://old.reddit.com/r/rust/comments/9mvnrk/in_firefox_nig...

[3] https://github.com/mozilla/gecko-dev/blob/master/js/src/wasm...

Edit: Another component of spidermonkey that is being rewritten in rust is the JS parser, that project is called smooshmonkey. You can read about it in the spidermonkey news letters [4] [5] [6], and there's an nice comment describing it at the top of this reddit thread [7]

[4] https://mozilla-spidermonkey.github.io/blog/2020/01/10/newsl...

[5] https://mozilla-spidermonkey.github.io/blog/2020/03/12/newsl...

[6] https://mozilla-spidermonkey.github.io/blog/2020/05/11/newsl...

[7] https://old.reddit.com/r/rust/comments/h0ddpi/smooshmonkey_n...


If Firefox wrote a new regex module and didn't use Rust, that would contradict the poster's advice, but their using an existing mature bit of code doesn't seem to prove much.


What Firefox did makes sense, but it does not look good for the idea that you can just rewrite in Rust function by function. A regex engine seems to me a thing that is easy to TDD, so I would expect that if rewriting in Rust were that easy, Firefox would have rewritten the most dangerous parts in Rust already (I assume regex is used with untrusted input, but I am clueless so I could be wrong). IMO the giant army of Rust fans should put their energy into working on Firefox, or donate to have Firefox rewritten before 2030, because the story that is heavily promoted is that you can rewrite shit in Rust super easily, function by function, etc., but the reality is different.


Firefox didn't choose the C++ regex implementation because rewriting pieces of code with well defined interfaces is hard. It was for a few reasons: 1) at the time they made the decision, there was no existing rust regex library that matched the way that JavaScript worked, 2) they didn't want to write their own/rewrite existing code because regex libraries are complicated beasts, 3) the C++ code is mature and well tested, so the primary benefit of a rust implementation would be more speed (because it's easier to safely reason about shared data), and they didn't think that optimizing the regex engine was a priority.


It makes sense; I am still a bit disappointed (so it is a subjective thing). Hopefully the RIR crowd gets more rational, reads your explanation, and sees that it is not so easy.


> A regex engine seems to me a thing that is easy to TDD

Why go to all that effort when the thing you need is already available? I don't think there are that many people that argue that everything, even the stuff that works fine and doesn't have issues, should be re-written just because rust exists.


C code can be rewritten function-by-function, even programmatically via c2rust. C++ is a bit messier and even the FFI interoperability story wrt. Rust is not quite complete.


Systems programming in any language would benefit immensely from better hardware accelerated bounds checking.

While Intel/AMD have put incredible effort into hyperthreading, virtualization, predictive loading, etc, page faults have more or less stayed the same since the NX bit was introduced.

The world would be a lot safer if we had hardware features like cacheline faults, poison words, bounded MOV instructions, hardware ASLR, and auto-encrypted cachelines.

Per-process hardware encrypted memory would also mitigate spectre and friends as reading another process's memory would become meaningless.


> Systems programming in any language would benefit immensely from better hardware accelerated bounds checking.

[Mostly discussed deeper in the thread already but felt it was worth bringing up directly as a reply to this]

This is an active area of development though still plenty of work to do. The Cambridge University Computer Lab have been doing research in this area in the form of CHERI: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/ this gives you hardware capabilities which are effectively pointers with bounds given out by the OS. Want to access something? You need the appropriate capability to do so.

ARM announced MTE last year, which whilst far less capable than CHERI begins to give you something (though it's targeted for bug hunting effectively rather than providing any actual security/safety properties): https://community.arm.com/developer/ip-products/processors/b...

ARM are also building a hardware CHERI implementation, Morello: https://developer.arm.com/architectures/cpu-architecture/a-p.... There's already CHERI implementations running BSD on FPGAs, this will take it to the next level with real silicon using modern ARM processors (maybe you could run android on it for instance?).

I think Intel have been looking along similar lines but I'm far less in touch with what they're up to so I don't have links.


> Systems programming in any language would benefit immensely from better hardware accelerated bounds checking. ... The world would be a lot safer if we had hardware features like cacheline faults, poison words, bounded MOV instructions, hardware ASLR, and auto-encrypted cachelines.

This isn't really true. The main performance cost of these safety checks (and largely of others such as overflow checking) is inability to optimize because the compiler needs to preserve partial results/states in case a fault occurs. The checks themselves are trivial.


Partial results like, say, a hardware exception firing instead of forcing the compiler to reason about it?

I want efence on steroids.


To the maximum extent, bounds checking should be elided via greater compiler knowledge of what exactly is happening. This would leave arbitrary bounds checks limited to user input, in which case the vast majority of the performance penalty goes away right? "Bounds" are a higher-order programming language concept that I suggest may not have a place in hardware.


Compilers already do this. It would still be nice to have cheaper checks; compilers are not omniscient.


Yeah, for sure.

I think I was having trouble envisioning what exactly a "cheaper" check could look like in hardware. Bounds checking is basically read length, subtract your index, and a conditional branch (potentially with a hint that it would succeed).
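To make that concrete, a sketch in Rust (hypothetical function name; the shape is the same in any language). The check is just a compare plus branch, and here the optimizer can usually hoist it out entirely because the loop condition already proves `i < xs.len()`:

```rust
// Hypothetical helper, just to show where the checks sit.
fn sum_checked(xs: &[i32]) -> i32 {
    let mut total = 0;
    let mut i = 0;
    // Each `xs[i]` is conceptually: compare i against xs.len(), branch to
    // a panic on failure, otherwise load. Since the loop condition already
    // guarantees i < xs.len(), the optimizer can usually elide the check.
    while i < xs.len() {
        total += xs[i];
        i += 1;
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4];
    println!("{}", sum_checked(&xs)); // prints 10
    // A failed check takes the branch instead of touching memory:
    assert!(xs.get(9).is_none());
}
```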

To do this properly in hardware I suppose you'd need a list of memory regions that are "live" and default the rest to "dead", though how many do you support? What does updating the list look like? Page tables are pretty slow to update, and those don't change too often. Array tables would be pretty gnarly, and impose a further penalty on context switching as they'd have to be thread-local and app-local.

I wonder if this is a case akin to spinlocks. Sure, I'd love a lock that doesn't busy-wait, but there's not really a cleaner solution -- in hardware or otherwise.

Maybe I'm just not seeing something obvious, though!


You might find an almost-practical (not shipping yet, but should very soon) example helpful: https://community.arm.com/developer/ip-products/processors/b...


That's brilliant, and I retract my statement completely. Thanks for sharing!


In addition to ARM MTE, see also Cheri: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/


> The world would be a lot safer if we had hardware features like cacheline faults, poison words, bounded MOV instructions, hardware ASLR, and auto-encrypted cachelines.

Sure, the world would be a lot safer if we used microkernels, too, but the tech world has been obsessed with performance over all other characteristics for decades.


My first full time professional job (1976) involved writing assembly language for the TI 960, a process control minicomputer with a 64K data address space and a different 64K instruction space.

At first, I just thought it was odd to make the hardware more complex by having two address spaces. However, this prevented a common cause of difficult to find bugs in asm programming, and I came to appreciate that hardware could make programming safer.

Hardware architecture, programming languages, and compilers advanced rapidly, but safety always seemed to take a backseat and was left up to the programmers. I’m glad to see developments like Rust and I look forward to using it for a real project soon.


You have it on CHERI, ARM and SPARC; it was Intel that screwed up.

So make use of Solaris SPARC, iOS or Android 11 (ARM MTE is a requirement).


You mean that Intel should invest in something like MPX?


MPX is dead, and was so expensive that 100% software bounds checking was cheaper...


That was my point. :-) It's not clear that hardware bounds checking acceleration is actually a meaningful win.


Sure it is; Intel screwed up.

Solaris SPARC, iOS and now Android 11, all make use of some kind of hardware validation in memory accesses.


MTE today, Morello as the experiment for fully safe C tomorrow: https://developer.arm.com/architectures/cpu-architecture/a-p...


Any help is welcomed. Microsoft is also having a go with Checked C.

The main problem is forcing developers to actually use them, I guess that is why Google has decided to make MTE a requirement for Android 11 on ARM devices.


When people talk about Rust's advantage in safety, is it just the compile-time checks about Ownership? Could not the C compiler be given the same thing, with a flag like "strict mode"?

Sorry if my question is really stupid. My experience has been with scripting languages, not systems programming. But I read the chapter in the Rust Book about Ownership, and I think I get it. In a sense, is not the compiler simply inserting function calls to free() in the right places (and warning you if it can't)? Couldn't this be added to the C compiler? In fact I'm not sure what the difference is from a linter.

The article has the phrase "C/C++ Can’t Be Fixed", so it anticipates my question. But its answer is that programmers either won't remember or won't bother to set their C to "strict mode", run the linter, or whatever, because "static analysis comes with too much overhead: It needs to be wired into the build system. . . . If it’s not on by default it won’t help." But if this is the only argument, it seems weak. I agree that it is more effort, but isn't it much more effort to switch out your entire language and ecosystem?


> is it just the compile-time checks about Ownership?

Sometimes. There's a lot more to safety than memory safety, but that's the easiest to define.

Other aspects might be safety against null values, safety against poor error handling, etc. Those are much harder to define.

> Could not the C compiler be given the same thing, with a flag like "strict mode"?

Not really, no. There's a lot of research into proving C code safe, and automation of it, and it's not really feasible for general C codebases. There are tons of static analysis tools that try to get part of the way there, but they can be slow, miss things, and have many false positives.

It would require, at minimum, some seriously ugly annotations. There are papers that demonstrate such annotation based approaches with C. One that I read ended up looking quite a lot like Rust, at which point, you wonder why you wouldn't just write rust.

Not a stupid question at all, by the way.

> Couldn't this be added to the C compiler?

Rust is able to insert those frees because it has lots of information about memory usage at compile time. C doesn't, therefore C can't insert those frees in the general case.

Other issues, cultural ones that is, like "people just disable it" are fair but not the biggest issue imo. But to further emphasize that point there are many runtime mitigation techniques (ASLR, stack cookies), and they get disabled all the time because, and this is the extra sad part, they legitimately find bugs and crash a program that otherwise may have appeared to work, and the fastest "fix" is disabling it.

Re: Effort, totally, yes. But the effort isn't really split. There are tons of people working on safer C/C++ through tools like fuzzing, static analysis, and runtime mitigations.


> you wonder why you wouldn't just write rust

Because it is another language.

If the researchers behind the borrow checker had implemented it on a safe variation of C/C++, then it would have been much easier to push for it in many companies. Something like the Python 2/3 split is better than another completely different language.

Don’t get me wrong, Rust brings improvement in other areas too, which is fine, but when we are talking about many-million-LOCs, you don’t care that much.

I am still waiting for someone to bring lifetime annotations to C and C++. Those I could use right away. Rust, not so much.


But adding rust-like lifetime annotations to a many-million-LOC code base would result in totally bricking the code with 99.99999% certainty.

I tried to port an (Earley) parser from C to Rust. It wasn't even a port, but more of a rewrite with similar function structure. The first part worked well, even though I had to resort to an arena (a special type of allocator that can guarantee lifetimes at the expense of freeing memory later than strictly necessary). Then I tried to get in shared parse trees. It was a nightmare. It could simply not be done with the same efficiency as in C. I had to resort to using array indices instead of pointers, or use ref counting where it wasn't needed. This had quite some impact on the calling functions.

And that was a few hundred lines of plain, non-tricky C.


If your C/C++ is sane, then your code was already written with lifetimes in mind and enforcing them is not a big deal.

There will be places, no doubt, that you cannot do it, but if you can easily cover 80% of the code, that is way better than nothing and then you can work on redesigning the 20% (perhaps even rewriting it in another language since it has to be changed, for instance to Rust). Same argument as Rust uses for unsafe blocks.

There have been specialized annotations for things like mutexes for a while in several projects which help static analyzers prove things too, so it is not new.


Doesn't unique_ptr in C++ provide that?


unique_ptr allows for use-after-move, so it's not perfect.


The issue is that C/C++ Can’t Be Fixed backwards compatibly. You would need to add lifetime annotations: existing code wouldn't compile. You could make a Rust-style language that is closer to C, but it wouldn't be any easier to use than Rust.

And at that point why not just use Rust. It doesn't require you to switch out the entire ecosystem; you can happily link Rust, C, and C++ together in one binary. In fact, as an end-user it's sometimes easier to use a popular C library from Rust, because someone will have wired it into Cargo (Rust's build system) and it will "just work": you don't have to mess around with working out what kind of build system the library is using and integrating it with whatever you're using.


> And at that point why not just use Rust

See my sibling comment. Using Rust implies many more changes than just moving code to an incompatible-but-close version of C and C++ that adds lifetime annotations and unsafe blocks. Compare it with the Python 2/3 split vs. rewriting all your Python 2 code to, say, Ruby.

Then there are the out-of-band issues too, like using another build system, a single compiler based on LLVM, etc.


Given the degree of changes required in some codebases, I think some teams would rather treat it as a rewrite job.

Even moving large enterprise codebases from python 2 to python 3 is taking so long they had to push that deadline back years. Given the distance you'd have to cross to push an older C++ codebase to something with similar guarantees as Rust, I don't know if any company would voluntarily cross that gap.


> adds lifetime annotations and unsafe blocks

Worth noting that the vast majority of lifetime annotations in Rust are “elided”, i.e. inferred by the compiler.

I’d say that a successful “safe C” compiler would accept any existing C code that never invokes undefined behaviour, without a runtime performance penalty.


> I’d say that a successful “safe C” compiler would accept any existing C code that never invokes undefined behaviour, without a runtime performance penalty.

That's unfortunately what's not possible. The C source code doesn't contain enough information to prove safety (a lot of real world C code depends on invariants that happen to be true at runtime, but aren't provably true at compile time)


Yes, it probably is impossible. On the other hand, C programmers are asked to write code which doesn't invoke undefined behaviour otherwise their code can break in unpredictable ways, so either we just don't have sufficiently good checking algorithms yet or it is impossible for them too.


What you described is just a destructor. C++ already has those. If you think that destructors are all there is to Rust then you are missing the entire point.

Rust has an affine type system [0] and this is something you either have or you don't. There is no way to retrofit it without rewriting all C code in existence.

When you look at linear types it becomes pretty obvious how restrictive they are but this restriction is also what allows them to make guarantees about memory safety. With linear types every variable must be used exactly once. This means that no matter through how many functions you pass a variable eventually you have to call the destructor to destroy the variable.

Rust doesn't have linear types because they are too restrictive. Instead it has affine types, where variables are used at most once, and if they are not used the destructor is called automatically. However, even that is too restrictive. The definition of "use" in Rust is a borrow, and the affine type system rules only apply to mutable borrows. You can have an arbitrary number of read-only borrows. The end result is a language where it's not possible, or much harder, to implement data structures that do not follow these rules, such as doubly linked lists. You will have to use completely different data structures in a variant of C that has an affine type system. This will probably result in an affine-only ecosystem that is completely disjoint from the rest of C. The Linux kernel will definitely never be rewritten in affine C.

[0] https://en.wikipedia.org/wiki/Substructural_type_system


"strict mode" is not a subset - you need stronger type information than what C provides to get Rust level checking - if you wanted to add that to C and also make it expressive enough to be usable you'd eventually end up with something that's closer to Rust than C.


This exists, and is called static analysis. There are tools that can analyze code paths and build a theoretical model of lifetimes throughout a program.

Mind you, I don't know the formal theory enough to say whether you can build a static analysis tool that is actually capable of doing what Rust's lifetimes accomplish in a provably complete manner. Maybe someone else can chime in here.

However, there are other disadvantages to this approach. Not having it in the language means everyone working on the program has to be using the exact same static analysis tools, and they have to be using it on the entire program, including libraries. Rust ensures that the lifetimes are enforced consistently, since there's only one possible engine enforcing them.

There are some very impressive code analysis tools used in the industry (with C and C++, in particular), but far as I know, they are all commercial and fairly expensive. As far as I know, open source tools like Valgrind are not powerful enough.


Even if such a tool exists it would be along the lines of a tool that prevents your C code from containing Turing-complete constructs so that it can be formally proven. It will complain about perfectly valid code. Are you willing to use a tool with false positives? I've seen many C programmers that are highly confident in their abilities. If someone told them to use a tool that marks their obviously correct code as "potentially wrong", they will just stop using that tool the same way they refused to use Rust.


That is exactly what is expected from developers that need to certify their code via MISRA, AUTOSAR and other safety-critical certification processes.


Rust is definitely great for systems programming, and the speed is fantastic too.

Take a look at some of the system utility tools on crates.io, such as ripgrep (like grep), bat (like cat), fd-find (like find), lolcate-rs (like locate), procs (like ps).


All of those are MBs in size, compared to KBs for their C counterparts. Something needs to budge in Rust land for small utilities to be viable.


That's kind of unfair since operating systems ship with large libraries that C can dynamically link to, but rust has no such advantage.


Rust binaries effectively statically link a lot of code via monomorphizing generics. I may well be wrong but I don't think you could get Rust binaries down to C sizes just by separating all the dynamically linkable parts.


That is Rust's own fault for not making its standard library ABI stable beyond a single release (and making releases so frequent). If they kept the ABI stable for just a year and gave library authors a way to do the same, distros might choose to dynamically link the stdlib, serde, clap, and other commonly used crates.



> All of those are MBs in size, compared to KBs for their C counterparts.

What difference does that make in the real world?


Sometimes a binary optimized for size is faster than one optimized for speed. (Due to cache residency)

Embedded systems frequently have flash size limitations in the megabytes or even kilobytes.


An order of magnitude difference... my ARM boards have 16 MB SPI flash. No way I am fitting more than a single rust binary on there.


No way are you fitting a normal operating system on there.

You can certainly compile rust to fit on there without any trouble though, you just have to try... the same way you have to try to compile any language in that sort of environment really. You can get to about 30kb while still keeping the standard library [1], you can get something with a fully featured web server in a few hundred kb [2].

[1] https://github.com/johnthagen/min-sized-rust

[2] https://jamesmunns.com/blog/tinyrocket/


Rust has partial-but-improving embedded support. I don't think you'd ever run any of these cmdline tools on a 16MB embedded system.

And that's fine. You wouldn't run most C cmdline tools on an embedded system either.


Wouldn't this be a no_std situation then? I do agree with the other poster that binary sizes only matter in specific circumstances though.


> Rust excels in creating correct programs. Correctness means, roughly, that a program is checked by the compiler for unsafe operations, resulting in fewer runtime errors.

This is a fairly low bar for correctness, and not really one I would really agree with :/


It's an extremely low bar, but one that hasn't been met by some popular languages.


I find that monadic error handling and null safety is as much of a game changer. It gets discussed less, and is obviously not unique to rust, but coming from Java I see what I’d guesstimate to be an order of magnitude difference in unhandled error cases.

The same logic can be applied in Java or C++ or probably even C, but it also comes down to the wide spread and idiomatic approaches of the language. The rust eco system is not necessarily coherent around the details of errors, but error handling is generally rigorous. The benefits of that can’t be overstated. The compiler is a huge help in getting it right in a way that Java checked exceptions never was.

I’m sure that there are other languages that are showing similar wins, though. I haven’t written much Swift, but I imagine that the experience might be similar?


Calling just “memory safety” correctness is a low bar and misses all sorts of other correctnesses (type safety, progress, matching-the-spec, etc.). However, it is a fundamental property: without memory safety, no other correctness can be guaranteed.


So is it a property more important for inexperienced programmers?


You think only inexperienced programmers cause memory errors?

I am no fan of rust and actually hate its evangelism. But this is ridiculous.




Launching the missiles is memory-safe!

Deleting the production database is a thornier philosophical question, but Rust's position is that it is.


Yeah, there's a couple of issues, one of which you mentioned :) Others are the fact that "runtime errors" are not necessarily bad; the goal is to prevent exploitable runtime errors. Crashing when you detect an out-of-bounds access is a bug and a runtime error, but one that is safe from a security perspective and as such something that Rust will do readily when necessary.


The bar here is C.


Whenever I think about Rust vs C I think of Curl, where they've been working on a C program for decades that is used by an incredible amount of software, and they're still fixing memory leaks in it as recently as November 2019: https://curl.haxx.se/changes.html#7_67_0


Rust considers memory leaks safe. However, I do find it easier to avoid memory leaks in Rust than in C, due to its stronger scope-based dropping.


You can leak memory in Rust too.


I think that bar is met by some term which means something synonymous with "this is better than C". When dealing with "correctness" I generally would like the program to do what I intended.


I agree, and sincerely, I'm a Rust programmer and "memory safety" is definitely not why I use Rust. There are lots of memory safe languages but I feel more productive and get more satisfaction writing Rust than, say, Python, Lua or C++.

In Rust I feel like almost every time the program works on the first time it compiles successfully, even without writing tests. Probably a combination of very well designed API, builtin RAII, very explicit syntax, compiler strictness and clear docs. I also cannot forget to handle errors (even if I just decide to propagate or crash).

This is what "correctness" means to me, but it isn't exactly enforced by the compiler itself.


The bar is set by the worst programmer not the worst language.


The language works for the programmer. Those with a higher focus on safety (like rust) make the worst programmer better.


Combined with the large number of vulnerabilities in mature, well-designed programs implemented by skilled programmers. The mythical programmer that can use C/C++ without bugs doesn't exist.


The mythical programmer that can use Rust without bugs doesn’t exist either. I deal with higher level languages than rust and we’re capable of writing just as dangerous stuff through implementation errors.


While bugs like logic errors clearly can exist in all languages, those that do things like memory management/enforce memory safety for the programmer eliminate whole classes of bugs.

No language/tool/etc will stop all programmer mistakes, but they can definitely stop some.


The fact that Microsoft keeps saying it wants to move away from C/C++ is great IMHO.

> Microsoft c++ continues to be written and will continue to be written for a while

I interpret this as saying that they cannot instantaneously convert all their existing code to Rust, and it seems fair enough. However, it seems a little odd to start new projects in C/C++ when you are publicly saying it's the wrong thing to do.

In fact, just a little over a month ago they announced a new QUIC implementation written in C/C++ (MsQuic)

Is this to be expected since Microsoft is such a big company? Or maybe that lib is not important and is not going to be used in, say, Edge?

Moreover, isn't networking code the worst kind of code to write in C/C++ (I am thinking of heartbleed for example)? I would really love some explanation.


The more I learn about Microsoft the more this joke makes sense:

https://en.wikipedia.org/wiki/Manu_Cornet#/media/File:%22Org...


Which is why they are the main drivers for C++ Core Guidelines, and have contributed the initial effort of a C++ lifetime analyses to clang, while making it available in all Visual Studio editions, including Community.

They are quite clear that a constrained version of C++ still gets to play in a secure world.

And while Azure Sphere still gets to be written in C (due to Linux kernel), it has a security layer (Pollonium) that takes care of bad code.

Finally, since they failed to move people into UWP programming model, sandboxing has been coming into Win32 as well, to the point that on Windows 10X, everything is sandboxed, including Win32 apps.


What role (if any) does GCC play in Rust? I understand that Rust relies on LLVM. If the industry continues to move more and more to Rust then will GCC begin to lose relevance? Or will the GCC developers add a Rust ~~backend~~ frontend? Has there been any movement toward that anyway?

There are many gaps in my understanding of Rust with regard to LLVM and GCC.


> There are many gaps in my understanding of Rust with regard to LLVM and GCC.

The rust compiler generates some intermediate code (LLVM IR) that's fed into LLVM, which optimizes and turns it into machine code. You can check out what this looks like in the rust playground. Getting GCC to compile rust would be a matter of getting the rust compiler to generate GCC IR.

Cranelift (built with Rust) is an alternative to LLVM with its own IR. The Cranelift backend for rustc should help future projects that want to do something similar, I guess.


"will GCC begin to lose relevance"

Systems programming is not web where they change the framework and 'script flavour every six months.

Systems programs live for decades - and hence their critical infrastructure like GCC will as well. C will be with us for at least 3 more decades. Longer if the Linux kernel still uses it.



Wouldn’t you need to add a RUST frontend?


ah right


I’m not sure about GCC, but a significant amount of work has gone into a cranelift based backend for rust (https://github.com/bjorn3/rustc_codegen_cranelift).


There was a Rust frontend for GCC[0]. It will need a lot of work, though.

[0]: https://gcc.gnu.org/wiki/RustFrontEnd


> then will GCC begin to lose relevance?

That has already begun. For example, all the popular browsers (Chrome and all the derivatives, FF, Safari) moved away from GCC.


Just starting to get into Rust; got a nice REST boilerplate that uses sqlx (compile-time verification of SQL) and some embedded stuff on a Pi working with Rust (PWM, serial)

My question is where to find jobs with Rust. I’m really enjoying it, but my resume is heavy Python/React atm



Whilst it is true that it is Amazon’s _own_ first SDK, one already exists - Rusoto - https://github.com/rusoto/rusoto (disclaimer: I made some contributions earlier in the year to help out with the transition to being async-await-friendly).

I’d be (negatively) surprised if AWS started from scratch instead of from Rusoto. That was the approach taken for the original Go SDK, which was adopted from Coda Hale’s work.


I have used Rusoto to build a small tool to list all ECS container images of all clusters of different AWS accounts. It helped me at my company to have a quick overview of application versions deployed in different environments (1 account per environment).

I migrated it recently to async/await and the code was much easier to read.

Could have done it in Python's boto3 or any other SDK. Rust does not give a performance benefit since it is mostly waiting for AWS answers but that was an opportunity for me to just write some Rust code.


I recall many/most of the contributors to Rusoto being AWS employees.


Where are you living? Are you OK with working for a big company?

Edit: for people looking for rust work in the Bay Area, you can message me. Email is username @gmail


I’m in Austin/Houston area. But for the right opportunity I wouldn’t mind Bay Area for awhile.


You might not have to relocate, send me an email if interested.


Cloudflare uses Rust and has job openings in Austin.


[flagged]


In what sense?

Just because I don’t work there anymore doesn’t mean that I won’t recommend others give it a shot if they think they might enjoy it.


Must have missed something? I thought you recently joined Cloudflare.


I was there for a bit over a year, but left a few weeks ago.


Start a business and do your projects in Rust. Most clients just care about the outcome. That's what I do, and I have a lot of fun trying new stuff with every new project.


Those clients may have trouble taking those projects in house or bringing them to another 3rd party developer!

As an advisor on these kind of projects I usually advise clients to be very conservative about the tech used by 3rd party developers or agencies..


the strategy of course is to serve them as long as possible ;-)

No, seriously, you are right. However, that has never been a problem; weird language choices mostly also attract pretty capable developers.


Currently doing that; not a big fan of 40 hours for someone else.


Of course you should try to keep the IP and resell if possible. There is no proper endgame in services. Still better than being a slave employee.


That's true if you use 'Safe Rust': then the safety guarantees hold. However, the risks increase as soon as you start using unsafe{} or import a crate that also has unsafe{} code, which is why I like using cargo-geiger on my projects to see the unsafe{} uses in my code and dependencies [0]. Use unsafe{} at your own risk. [1]

[0] https://github.com/rust-secure-code/cargo-geiger

[1] https://arxiv.org/abs/2003.03296


The difference is that rust is safe by default. When doing something routine/trivial, you are typically operating with safe code.


Unsafe is not a problem in my experience. The same way a kernel might have some part written in asm (for freebsd it's like 1% I believe) your rust code will have 1% of Unsafe code. Might be more depending on what you are doing. However the same way few bugs happen in the asm code of a kernel, your unsafe will not have many bugs.


Actually, a lot of bugs happen in the assembly part of the kernel. Processors are hard ;)


Do you have a source for that claim?


A couple of examples off the top of my head: all the recent speculative execution stuff, OpenBSD's buggy SMAP implementation, some recent bugs in ARM's PAN.


You specifically responded to a claim that there were relatively few assembler bugs due to it being a relatively small body of code. So in that context, I would like you to show that the handful you've highlighted is "a lot" against the backdrop of bugs in non-assembly code.

My semi-informed (but not quantified) opinion is that you are broadly wrong and that there are not a lot of assembler bugs relative to C/C++ bugs.


OK, that's how many, 10? How many are not asm bugs? It's not many considering the total number of bugs.


Relative to what?


Relative to zero.


No, it's relative to the total number of bugs.


In the context I was replying to, it's not. My response is to the claim that writing code in assembly has "few" bugs.


No, it's pretty clear from the full context of the comment you are replying to that it is expressing a relative measure[1]:

> Unsafe is not a problem in my experience. The same way a kernel might have some part written in asm (for freebsd it's like 1% I believe) your rust code will have 1% of Unsafe code. Might be more depending on what you are doing. However the same way few bugs happen in the asm code of a kernel, your unsafe will not have many bugs.

[1]: https://news.ycombinator.com/item?id=23510862


Right, comparatively.


Proven asm code like in TAOCP is safer than any other language. For example you know exactly what the FPU will do. Good luck with any compiler.

I know that is not what people mean with "unsafe", but it is getting tiresome.


TAOCP was written in different times, namely one where microcode was not a widespread thing. As it stands, your asm "proof" relies on having no µcode update. CPUs are no longer easy to understand like 8086s.

