Hacker News
Microsoft seeks Rust developers to rewrite core C# code (theregister.com)
207 points by yau8edq12i 81 days ago | 248 comments



I love writing Rust, but I was really surprised by how difficult it was to find a job _actually_ writing Rust.

I'm happy to see the increased activity in the space, but searching for a job in Rust is probably still 10x harder than C or C++.

It worked out in the end, and I'm happy to be getting paid to be writing Rust every day, but I hope that the market for Rust jobs continues to grow -- ideally even faster than it has been.


Agree, I am still looking for a Rust position, although I have more than 3 years of experience with it.

And yes, it is very hard.

It seems Rust is in a chicken/egg position: employers avoid it because "there are few developers", developers avoid it because "there are few jobs".

Until recently the majority of Rust jobs were blockchain stuff. Now I see a rise in network infrastructure and security.


I know some teams that are hiring Rust devs. Please feel free to ping me if you would like to know more.


I sent you an email from diego.moita69 at gmail.

Thanks for reaching out!


Seems like Rust needs work to become easier to learn. It's hard for a business to take a risk on using it (outside of a use case that absolutely needs a memory-safe language) if the talent pool is shallow.


I don't care that it's hard to learn. What turns me off rust is that if I ever make a change that requires a lifetime annotation, I now have to go back through all my code potentially adding lifetime annotations to everything it touches.

I was making a little toy compiler in rust and basically gave up when I wanted to make a change that would have amounted to string -> string_view in C++, because in rust I now need to tell the compiler that the String behind my &String is going to outlive the struct. I understand why it's good, but I'd rather just let C++ blow my feet off. Maybe once I nail down all the data structures I'll rewrite it in rust.
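A minimal sketch of the situation being described (hypothetical `Token` type, not from any real compiler): the borrowing version, roughly analogous to C++'s string_view, drags a lifetime parameter onto every struct that stores it, which is the spreading effect being complained about.

```rust
/// A borrowing view of a token, roughly C++'s string_view.
/// The <'a> parameter ties it to the source text it points into,
/// and propagates to anything that stores a Token.
struct Token<'a> {
    text: &'a str,
}

/// Slice out the first whitespace-separated word of `src`.
fn first_token(src: &str) -> Token<'_> {
    let end = src.find(' ').unwrap_or(src.len());
    Token { text: &src[..end] }
}

fn main() {
    let source = String::from("let x = 1;");
    let tok = first_token(&source); // borrow must not outlive `source`
    assert_eq!(tok.text, "let");
    println!("{}", tok.text);
}
```

The owned alternative (`text: String`) avoids the annotation entirely at the cost of an allocation, which is often the pragmatic choice while the data structures are still in flux.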


This effect of <'a> spreading everywhere is very frequently a novice mistake of putting temporary scope-bound loans (AKA references) in structs that aren't temporary views themselves.

People used to C or C++ perspective tend to reflexively (over)use references to avoid copying, but references in Rust are to avoid owning, and not-copying is handled in different ways.

BTW, apart from edgiest of edge cases, &String is a useless type in Rust, because String already stores data "by reference" and is never implicitly copied. For loans it's generally better to deref to &str.
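A small illustration of that point (hypothetical `shout` function): a `&str` parameter accepts both borrowed Strings and string literals via deref coercion, whereas `&String` would only accept the former, with an extra indirection.

```rust
/// Taking &str instead of &String: works for any string-like input.
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let owned = String::from("hello");
    // A &String coerces to &str automatically (deref coercion).
    assert_eq!(shout(&owned), "HELLO");
    // A literal is already a &str; a &String parameter couldn't take it.
    assert_eq!(shout("world"), "WORLD");
}
```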


I feel like this could be a great in-depth blog post... unless such a blog post already exists? It would greatly benefit novices and those who haven't yet reached the stage where this mentality kicks in.


There's a pretty good YouTube video I saw pulling apart all the types of strings in rust.

I still find it confusing... I've been wanting to do a few things that would mean dealing with streams that could be cp437 or utf8, and I haven't been quite sure how I want to handle it.



I'm thinking I want to convert to/from utf8 at the edges, namely web/terminal, keep the internal interfaces and relays in utf8, then convert back to cp437 for door game interaction.

Basically a BBS door running service, with some extra niceties internally.


I am a novice when it comes to rust, but as I recall, the instant you put anything that isn't a scalar type into a struct, you need lifetimes. Is that what the person you are replying to is frustrated with?

How would not copying be handled?

Sorry for the basic questions. Your comment tickles something in my brain that suggests I need to know.


Very rarely do you need to pass around a lifetime-bound struct or enum that isn't _also_ meant to be temporary. (A database pool hands out lifetime-bound connections, for example, but the pool still owns the connection.) When I do, I generally eat the cost and put it behind an Rc/Arc, where cloning is cheap.

It is 'borrowing' because you don't own it, not because you don't want to clone/copy it. Sometimes it is cheaper to borrow than it is to clone/copy; sometimes it is not.
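A sketch of the Rc/Arc approach mentioned above (hypothetical `Config` type): cloning the Arc copies a pointer and bumps a refcount rather than copying the data, and nothing that stores it needs a lifetime parameter.

```rust
use std::sync::Arc;

struct Config {
    name: String,
}

fn main() {
    let cfg = Arc::new(Config { name: String::from("prod") });

    // Cheap shared ownership: this clones the Arc handle,
    // not the underlying Config.
    let for_worker = Arc::clone(&cfg);

    assert_eq!(for_worker.name, "prod");
    assert_eq!(Arc::strong_count(&cfg), 2); // two owners, one Config
}
```

`Rc` is the single-threaded equivalent with the same shape and slightly less overhead.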


> and not-copying is handled in different ways

Can you give a few examples?


Isn't that why Rc<> and Arc<> exist in Rust? So that you can deal with situations where you don't know the object lifecycle at compile time?


If you're fine with dropping safety you can just make a wrapper around a raw pointer. I wouldn't do it in public library but if it's fine for your cpp code then it's fine for your rust code. You can always drop to the "blow my feet off" level.
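A sketch of such a wrapper (hypothetical `RawRef` type, safety left entirely to the caller, exactly as in C++):

```rust
/// A wrapper around a raw pointer that opts out of the borrow
/// checker -- and out of its guarantees.
struct RawRef<T> {
    ptr: *const T,
}

impl<T> RawRef<T> {
    fn new(value: &T) -> Self {
        RawRef { ptr: value as *const T }
    }

    /// Safety: the caller must guarantee the referent is still alive.
    unsafe fn get(&self) -> &T {
        &*self.ptr
    }
}

fn main() {
    let x = 42;
    let r = RawRef::new(&x);
    // Fine here because `x` outlives the access...
    assert_eq!(unsafe { *r.get() }, 42);
    // ...but nothing stops the pointer from dangling, C++-style.
}
```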


This is more or less the reason I don't like adding "const" to member functions in C++, or setting any method or field to anything but public.

Working with the code base gets so annoying if you suddenly want to change something you didn't plan for.

Such changes can have a domino effect through the code.


The last few versions let a lot more lifetimes be elided, so you can drop them.


Do you have a direct link? I'd love to read more.


Sadly not. But between 1.5x and 1.7x a lot of things changed, especially around traits, and there is clippy, which actually helps. So our functions barely need lifetimes anymore.
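For what it's worth, the basic elision rules date back to Rust 1.0; later releases (non-lexical lifetimes in the 2018 edition, `impl Trait`) mostly reduced how often you have to fight the checker. These two signatures have always been equivalent:

```rust
// Explicit lifetime annotation...
fn trim_explicit<'a>(s: &'a str) -> &'a str {
    s.trim()
}

// ...and the elided form: the compiler fills in the same lifetime,
// since there is exactly one input reference to tie the output to.
fn trim_elided(s: &str) -> &str {
    s.trim()
}

fn main() {
    assert_eq!(trim_explicit("  hi  "), "hi");
    assert_eq!(trim_elided("  hi  "), "hi");
}
```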


I feel like this has been this way for at least two years, too?


Well, maybe we started with 1.5x or so, and `impl Trait` especially made a lot of things easier. We barely have lifetimes in our rust code.


Honestly. Rust is not all that “difficult”. There’s just SO MUCH that you MUST know to be effective.

You need to know all of the rustisms. Plus you need to have a general knowledge of memory. You need to understand how memory is being abstracted and how to work within those constraints so as not to be slow. You need to know all of the monads, what they mean, why each exists, what the nuance is, and why you might use one over another in a specific situation.

It’s not difficult so much as it just takes so much time and challenging-yourself practice to try to memorize everything.

A “being effective at rust” book would probably dwarf a “being effective at C++” book in pages assuming similar prose. And we all know how much of a beast C++ is.


> A “being effective at rust” book would probably dwarf a “being effective at C++” book in pages assuming similar prose. And we all know how much of a beast C++ is.

I've written C++ for close to a decade before switching to Rust, and this is categorically untrue.

> Plus you need to have a general knowledge of memory. You need to understand how memory is being abstracted and how to work within those constraints so as not to be slow.

If anything, this is more true of C++.

> You need to know all of the monads, what they mean, why each exists, what the nuance is, and why you might use one over another in a specific situation

You can use a lot of this stuff before needing to know its full history. You don't need to know the history behind 0, null, pointers, numbers and addresses, or how hardware works, to use Option, for example. You also don't need to know the (inconsistent and arcane) history of errno and magic numbers to use Result. Etc, etc.
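A small illustration of that point (hypothetical `parse_port` function): using Option and Result day to day takes a couple of combinators, no history lesson required.

```rust
/// Parse a TCP port number; errors come back as a Result,
/// not as errno or a magic sentinel value.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not a port").is_err());

    // Option interoperates directly: .ok() drops the error detail,
    // .unwrap_or() supplies a default.
    let maybe: Option<u16> = parse_port("443").ok();
    assert_eq!(maybe.unwrap_or(80), 443);
}
```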

This vastly over-exaggerates Rust's complexity.

Yes, it borrows concepts from functional programming, but a lot of these concepts are simpler than stuff you deal with in imperative programming.

I think the issue here is that people don't tend to like changing their way of thinking, even if it's for the better. Also, Rust has ample in common with imperative/traditional languages, so it's not a huge leap into some alien mental model, as you make it sound.

Further, C++ makes you think way more than Rust; in Rust the compiler does a lot of the thinking for you. Not to mention, the type system is easier to reason about than C++'s old (and mentally inefficient) template system.


"Dwarf"? "Dwarf"?! I may be biased (because I am very effective at Rust and terrible at C++) but I can't believe this seems probable. std::optional, move semantics, boost, template templates, SFINAE(!!) C++ has all the expressivity of Rust and more, with far less regulation to assist the programmer in using it effectively.


you don't have to know _any_ of what you just listed in order to be effective with C++.

the closest thing to required would be move semantics.


Depends on what you're trying to do. The same can be said of Rust. You can write massive web services serializing data from RDBMS databases quite a bit more easily in Rust than in C++, as an example.


> Depends on what you're trying to do.

No it doesn't.

People were effective with C++ before std::optional existed and before compiler-assisted move semantics existed, and SFINAE is not something anyone has ever needed to understand in order to use something like the STL or any other template-based library, such as Boost.

You don't know what you're talking about, this isn't up for debate.


>It’s not difficult so much as it just takes so much time and challenging-yourself

What's the difference?


It's not that hard for relatively easy work... I've written a few basic web services, as an example, using a few of the more popular web frameworks for Rust, it hasn't been much harder than say C# or Node.

Now some of the more complex things, have been far more complex to do by hand.. Shared data, channels, Arc, etc. Those have been a bit more cumbersome and still not sure I've done the right thing at times.


Rust is very easy to learn. Just read what the compiler says. The compiler is there to help you, it has a holistic view of what's going on, and it knows what it needs for your code to be fixed.


Learning Rust to the point of being productive is 100x easier than C or C++. It's not even funny.


That hasn’t been my experience with it. I found rust significantly harder to learn than C; and this is after I already knew C. (At least - learn rust to the point of feeling productive, and like I wasn’t fighting the borrow checker constantly).

Arguably becoming an expert in C means you need to understand all the nuances of undefined behaviour. And that’s a much harder process. But it can happen slowly. Rust preloads all the pain. You essentially can’t write rust at all until you understand rust references, lifetimes (implicit and explicit) and the borrowchecker. The payoff is huge, but I found climbing that mountain to be no joke.


> You essentially can’t write rust at all until you understand rust references, lifetimes (implicit and explicit) and the borrowchecker.

You can if you're willing to use stuff like .clone() and the interior mutability types. In Rust, you can tell when code has been written to be a bit sloppy because it has that kind of boilerplate. And the compiler checks are a huge help when it comes to refactoring the code and making it cleaner and better-performing.
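A sketch of that "sloppy but compiles" style (illustrative only): clone to sidestep a borrow entirely, and reach for Rc<RefCell<...>> to defer borrow checks to runtime.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Clone instead of borrowing: wasteful, but no lifetimes to satisfy.
    let name = String::from("borrowck");
    let copy = name.clone();
    assert_eq!(copy, name);

    // Interior mutability: shared ownership via Rc, mutation via
    // RefCell, with the aliasing rules enforced at runtime instead
    // of compile time.
    let counter = Rc::new(RefCell::new(0));
    let alias = Rc::clone(&counter);
    *alias.borrow_mut() += 1;
    assert_eq!(*counter.borrow(), 1);
}
```

Exactly the kind of boilerplate the comment above calls "a bit sloppy": it works, and it is a reasonable starting point before learning to structure code around borrows.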


Eh. If you don’t know how references work in rust, you’ll struggle to use most of the standard library or any 3rd party crates. And if you can’t pass mutable references to functions or iterators there’s a lot of programs that will be hard to write at all. Performance will be pretty rubbish too.

You might be able to write some simple programs, but I wouldn’t say you know rust yet, or could really be productive with the language.


> I found rust significantly harder to learn than C

language itself maybe, but you also need to learn ecosystem, libs, build systems, testing.

Benchmark could be: how fast you can learn and bootstrap some type of app of your choice: high performance DB or torrent server, etc.

> You essentially can’t write rust at all until you understand rust references, lifetimes (implicit and explicit) and the borrowchecker

Why is that? You can write rust the same way you write C, but with nicer syntax, a standard lib and build system, and more potential for future expansion.


> but you also need to learn ecosystem, libs, build systems, testing.

That’s a really good point that I hadn’t thought of. If we include header files, compiling and linking, makefiles and CMake and the mess of dealing with 3rd party libraries in C - well, yeah. All that stuff is probably worse than learning the rust borrow checker. I think I’ve forgotten how horrific that mountain is for beginners because I learned most of it decades ago, while the pain of learning rust is still fresh.

> why is that? You can write rust the same way you write C

Because I use pointers everywhere in my C code. Rust’s borrow checker simply won’t compile my code if I transliterate it directly from C. There are software patterns which work well in rust with the borrow checker in mind - but they take time to learn and get used to. Until you do, you simply aren’t productive.


> Because I use pointers everywhere in my C code. Rust’s borrow checker simply won’t compile my code if I transliterate it directly from C.

The borrow checker is more relevant to C++ references, I think.

For pointers you can use Arc or Rc and don't worry about borrow checker.

Disclaimer, I am not a rust or c coder, so my opinion has a high chance to be wrong..


If you want to learn rust, there really isn’t a way to avoid learning lifetimes and the borrow checker. It’s a core part of how the language works. You can probably get partway there with Rc and by cloning stuff everywhere but you’ll still run into borrowck problems. Rc doesn’t turn off the borrow checker. (Nor does unsafe). But then you won’t have the tools to solve your problems. And you also won’t know how to use most of the standard library or any crates. That’s no way to write rust.

> Disclaimer, I am not a rust or c coder, so my opinion has a high chance to be wrong..

I’ll take a step back. I hope this is helpful to someone and not patronising.

Go and similar languages are memory safe because no matter what go code you write, go runs your program with a GC that makes sure your variables are freed correctly only when it’s safe to do so. Rust’s safety is very different. It comes from checking at compile time that your program is written in a way that obeys a bunch of complex rules. For that to work, you have to write your code very carefully. If you mess up, the compiler doesn’t fix it for you. It just refuses to compile. So you have to write your code with those invariants in mind, or your program won’t build at all.

C code generally doesn’t obey any of rust’s rules - for obvious reasons. If you were to just blithely translate C to rust code, the rust compiler will refuse to compile your program because the rules aren’t being followed. To write rust code that the compiler accepts, you need to first learn the borrow checker’s way of seeing the world. Then you need to restructure your code to work in accordance to rust’s rules. And sometimes that’s a super obscure and tricky problem.

Weirdly, I think it’d be much easier to go the other way. Any compiling rust program could be translated into a memory safe C program if we had a compiler that did that. (I think). And lots of C programmers say learning rust made them better at C - because rust’s rules genuinely teach you to be more disciplined with memory and show you some clear rules for making C code that’s much less error prone.

Not learning the borrow checker is like not learning how objects work in Python. You could write some simple programs. But you’re going to have a bad time, especially if you try to do anything nontrivial, use library code or read anyone else’s programs.


Yeah, that's what I meant by productive. Not like learn the language but like build stuff.


Given the number of CVEs that can be directly attributed to C and C++ you only need to be half-joking to argue that approximately nobody has managed to learn the languages sufficiently to write production code in them.


As uncomfortable as this is to say, the existence of latent security vulnerabilities doesn’t seem to stop people from shipping a lot of apparently working code in C and C++.

And I don't think a lot of those CVEs come from people misunderstanding the languages. As I understand it, they mostly come from honest software bugs. C++ makes it easy for honest mistakes by experts to become security nightmares. This is a controversial opinion, but I think the fault is in C and C++ themselves, not in the programmers, who would otherwise need to work even harder so as to never make mistakes.


Which is why some people push for no longer using C and C++ as much as possible. It's just not feasible to expect the average programmer to avoid all the security pitfalls. Of course, whether Rust is the best option isn't a given.


There aren’t a lot of memory safe options if you want the kind of performance rust offers. Rust is fine, but it’s far from perfect. It’s hard to learn, complex, and the macro and async systems are honestly a bit of a mess. I’m really looking forward to whatever comes after rust. I’m hoping some bright sparks out there manage to make a language with rust’s memory safety but that cleans up some of rust’s rough edges.


To be fair, how many applications really need Rust's level of performance? I mean operating system code and drivers... But, many apps are getting the job done in electron with a browser engine.

I like rust, I like the sensibilities. Maybe my experience is too limited to higher level problems though. There's plenty of room for the likes of Go, Java and C# is all I'm getting at.


Yeah I think I broadly agree with you. Not many applications need rust's performance. Go, Java, C#, nodejs, python, etc are all great languages.

But the software which does need or want "native performance" is incredibly important software. I'd use rust for things like databases, operating systems, web browsers, high performance web servers, network services (samba, ssh), and other "systems" software like that.

Most of the lines of code ever written are probably application code. If you're making a company website, python or C# or whatever is totally fine. But a remarkable proportion of the lines of code your computer actually runs were written in C or C++. Your web server might be in python, but if you use nginx as a proxy - well, that's C. Use postgres? C. Running your software on linux? C. Testing with Chrome? C++. What is python itself written in? C again. You get the picture.

Whether or not rust is a big deal is largely a matter of focus.


So you're saying there are no CVEs in php, java, go, js, and python?


In their defence, there are entire classes of security bugs that memory safe languages protect you from. A large percentage of the security bugs in Chrome, OpenSSL, iMessage and lots of other apps wouldn’t have happened in any of the languages you listed. Or, almost certainly, in Rust.


And how many of such bugs could be prevented with compiler warnings and flags?


I suspect fewer than you think. At least in FAANG code.

Google chrome has had a fair few CVEs over the years despite having a lot of the worlds best security researchers working full time looking for security vulnerabilities.

I think someone has tried reading the compiler warnings.

There's an old story about John Carmack running a new static analysis tool on the Quake 3 source code. The code worked well, but the tool apparently found a huge list of issues, including a massive pile of real bugs. Then he tried another static analysis tool and it found more real bugs. And so on.

The story is well worth a read. This is a great takeaway:

> This seems to imply that if you have a large enough codebase, any class of error that is syntactically legal probably exists there.

C++ is really hard to do “right”, at any scale in a real team.

http://www.sevangelatos.com/john-carmack-on-static-code-anal...


If you're going to start writing C++ like Rust, with testing, or follow certain guidelines, then a lot of them. But if you're going to constrain how you do things anyway, why not use a language that also gives you higher-level functionality along the way?


I agree.

My point is that I just feel like the "but someone please think of the ̶c̶h̶i̶l̶d̶r̶e̶n̶ memory safety" argument is overblown. There are ways to eliminate the majority of those issues in cpp as well, but people simply don't care.

If you want to use Rust because it's just the better language - go for it, I do as well. But let's actually use that as the argument, instead of hiding behind superficial ones.


> There are ways to eliminate majority of those issues in cpp as well, but people simply don't care.

If it's that easy, why do Google, Apple, Microsoft and basically everyone else keep shipping memory-safety bugs? Are they all just idiots? Do you think they just don't care about security? Carmack found that C++ static analysis tools turned up mountains of latent bugs in the Quake source code - despite the game running great.

Personally I find it a very arrogant statement to claim that memory safety in C++ is easy. If google finds it hard, despite throwing millions each year into the problem, I'm of a mind to believe them.


>Do you think they just don't care about security?

Yes, people don't care nearly as much as we like to pretend in online debates


I partially agree with you. I think most security compromises are due to misconfigured mongodb databases, bad passwords and unpatched software staying unpatched for months or years. Things like that. Lots of B tier engineering companies get done by this stuff every year because they’re sloppy.

But memory bugs in C++ seem genuinely hard even if you do care about the problem. Google and Apple have never (as far as I know) had customer data stolen by some trivial misconfiguration problem. They pay out a lot of money in bug bounties. Google recruits some of the world’s smartest people to look for security problems in their products. And I’m sure they pay a bomb for access to proprietary C++ static analysis tools. And yet, they apparently still can’t consistently write memory safe C++ code. So yeah, I think that writing bug free c++ at scale is hard even if you do care about it.


Apple? The same company that forked the JVM and then was taking months to fix vulnerabilities for which exploits were readily available on the internet, and that had been fixed immediately on linux and windows?

The same company that has had a stream of no click 0days in imessage, because they parse the messages outside of a sandbox, and patch the issue but not the larger issue of the no-sandbox?

Yeah they don't care about security at all. It's mostly just a thing their marketing department talks about. I'm sure their R&D budget for it is quite limited given their size.


Fun fact: working at a large company doesn't make you smarter.

In fact, all the laid-off people probably feel less smart than average for accepting work somewhere that treated them like that.


As someone who speaks legalese and knows C and C++ well enough to be productive (if nothing else by having known them since 1993 and 1994 respectively), and who has already written a couple of interesting things in Rust: that 100x easier is a bit too much.

Especially if we are talking about anything related to graphics programming or asynchronous programming.


Objectively false


We acquired a product last year, where the entire back-end was written in Rust.

Unfortunately, Rust developers were hard to come by, and we didn't have any internally who could maintain the Rust code at such scale.

The entire back-end ended up being re-written.


Why not just learn rust? It really is not that exotic to learn. I doubt it is faster to rewrite it.


Yeah, this is just baffling. A team can be so averse to learning new tools, good ones too, that they would rather dump their time into rewriting. Instead of getting paid to level up their skills, they'd rather block forward movement of the company's goals to maintain the status quo.


I'm a bit surprised at this, too.


Trash build times and slow iteration cycle as a result


Build times are pretty good in rust compared to most languages. Where do you get that they're trash?


What happened to the developers who wrote the back-end in the first place?


maybe the borrow checker determined their lifetime was up


>The entire back-end ended up being re-written.

In Go, I presume?


I honestly wish I could have been in that boat because I crave the opportunity to use rust on the job.


Same here... Job hunting right now and would love to bridge the gap to primarily using Rust


Nooo :'(


The trick is to introduce Rust wherever you're at, assuming it's a good fit for the task. This may be harder to do if you don't have enough influence.


Yes, I did this at my startup. Fast forward a few years, and now the company has more Rust code than Python, and the majority of the company's IP is in Rust.

I suggest beginning with small, one-off things that don't have much impact. People, even developers, tend to shy away from things that aren't familiar. By introducing Rust in a small, low-risk way, it helps people get familiar with it. They get to build familiarity with building Rust projects, navigating the project structure, and reading docs. I submit pull requests that get people to read Rust code, even if it's just to say "looks good". Their familiarity builds slowly over time, meaning they'll be less triggered by seeing Rust in a larger, more impactful project down the road.

How do you boil a software developer? Slowly.

If they give Rust a chance and your team has a champion to guide them, they'll see its merits. I think a lot of people come to Rust for the performance, but that's not why they stay.


I am in the process of oxidizing some stuff at work with Rust. I too am starting from small pieces, things that I can incorporate and call from Python directly. I'm also relying a bit on codegen to blend the two languages and slowly remove all the Python code.


What domain are you working in where Rust is the replacement for Python?


Performance, portability, reduced memory use.. even containerization which can benefit from all of the above.


Those aren't really domains. Chances are if portability was a concern to you, you didn't start your project in Python.


Quick way to be annoying and unlikeable at work.

The times I've seen rust introduced, and tried it myself, it introduced a whole bunch of related tech debt, like hacks to make builds work.


When I see people on my team (myself included) shoot themselves in the foot over and over with C++, sometimes causing very costly production bugs, and yet I know of a tool that could have prevented nearly all of that pain at compile time, I am going to risk being disliked in order to help lift everyone up. Most people that I mentor on Rust have become completely sold on it after some initial resistance. I don't like change for change's sake, or rewriting things in a new language unless there is some really big payoff.


I think that argument makes sense in a vacuum. Considering nothing else, yes, 100%, Rust's compiler prevents more bugs than C++ would. But I don't think bugs are the only way to waste time or break things. In a large company or complex project you're never talking about one small code base. There are inter-dependencies within the code and outside it. Rewriting all of anything significant is a huge undertaking. In reality you're introducing a new language in addition to cpp/java/go. It's another support task. It's another thing to context-switch between when you're working on "the legacy" repo and the new stuff. You need to be skilled in both languages.

It's like any team that starts writing automation and support projects in python when their main language is something else. You get trash python scripts and programs. It's another language where technically the domain is right, but the skill is lacking.


Yes, there's definitely a cost of introducing Rust to an existing project. But there are a ton of projects that could also benefit greatly from it, such that it'd be a net benefit even in the short term, and a huge benefit in the long term. I'm not advocating for people to blindly add it to their project.


If it's a good fit then it's a good fit. Of course you'll need to ensure the CI is set up properly, just like everything else.

If it's not a good fit, then you're just forcing it, and yeah...


On the reverse side of things, trying to find a C++ dev is usually harder than finding people who want to write Rust.


I suppose C++ devs are on the correct side of the demand curve, unlike myself. :D

I was _really_ tempted to take a C++ job at some point but I held out.


It's been pretty great lately having C++ as one of my primary languages, wrt job opportunities.


Yup, we were hiring C++ devs recently and it was quite common for candidates to mention Rust. Did they not read the job description??? Immediate red flag.


I've been a C++ dev for most of my career. Not looking to change jobs at the moment, and I like C++ a lot. We still occasionally start new projects in C++ where I work. I've tinkered with rust in my spare time, but never introduced it at work. I think the borrow checker is a fantastic tool, but Rust has just never been the right tool for the domain we're in.

But I'm a bit confused by your statement. There's a lot of overlap in the domains that C++ and Rust serve. Isn't it a good thing for job candidates to have an interest in learning about other approaches to their work? To understand what makes C++ better or worse than Rust in different circumstances?

Why would a mere mention of Rust be a red flag?


It’s a red flag because you’re going to get people bring in random languages just because, then they fuck off to another job and you’re left supporting code that nobody knows.


> bring in random languages

Rust is hardly a random choice. You can debate its merits for some domains, but replacing C++ in most of the places that C++ is used is exactly what Rust was built for and is widely recognized as doing well at.

> just because

If your engineers are pitching a technology with no more basis than "just because", I can understand dismissing them out of hand. But ask yourself if that's actually what happened, or if they gave specific reasons justifying their proposal and you dismissed them out of hand anyway.

Even if you disagree with the reasons, or agree with them but conclude they are outweighed by other arguments, either of those is totally fine, but dismissing them as "just because" is not an effective way to make technical decisions as a team.


They’re not “your engineers”. This is an interview for a position listing the job requirements.

You go ahead and interview however you like, but generally, giving your interviewer the feeling that you plan to shake up their tech stack because you feel like it, isn't going to go well.


What is this job where the position is so high up that it can make unilateral decisions on the tech stack but also needs to be punished if they know about Rust? Would you hire a team leader that didn't know what Rust was?


People _mentioning_ Rust means they will bring a random language into your stack if they are hired?

This red flag cultural trend is blinding people to nuanced thought.


n=1 but I did bring Rust to my C++ job. I needed a web server, and I didn't want to put in an order with the web server team, and writing a web server in C++ looked like a recipe for pain.


Really people write a web server from scratch? What industry? Don’t tell me it’s some CRUD


I learned a lot from it


What pain? Just include mongoose.h and .c in your project. It's in C, but C is C++.


I did try both Civet and Mongoose. We didn't have a good build / dependency system for C++ at the time, just CMake, so when I considered all the little libraries I'd have to learn and pull in like HTML templating, SQL, etc., while making sure my code was thread-safe, it looked like too much work.


    docker run --name docker-nginx -p 80:80 nginx


Yes I suppose I could have written it as a Lua plugin for Nginx. Or CGI.


A bit of greenfield NIH is damn fun though sometimes :-)


I feel like you’re giving rust a pass here.

It would be a red flag if you were interviewing for react and decided to bring up vue or svelte or angular or whatever else as well.

It’s not like it’s only this C++/Rust type deal that is being picked on. Although I would suggest that rust fans tend to be particularly ardent and loud at the current moment, so interviewers may be far more turned off of you as a person just for bringing it up. Interviewers probably felt this way about people bringing up Python 20 years ago (Python devs of yore were WAY WAY worse about their “HAVE YOU HEARD THE WORD?!” than rust devs are today), blockchain 7 years ago, etc.

Anyway. As an interviewee, I’d probably try to avoid being the one to bring up alternative technologies.


>It would be a red flag if you were interviewing for react and decided to bring up vue or svelte or angular or whatever else as well.

...why?

Seriously, why on earth? I don't follow this train of thought at all; if they demonstrate proficiency within the scope of the position, why does it matter if they also happen to know other technologies?

"Oh, Alice? Yeah, she was a great candidate, unfortunately she also had experience in Vue, so there's nothing we could do. We decided to hire Bob, who has 3 years less experience with React, but fortunately that's the only stack he's ever heard of."

If anything, it's a sign the person is interested in learning, most great devs I've met were not proficient only with a single technology. This sounds completely alien to me.


Nice strawman bro.

Nobody is saying to not expand your knowledge. You’re assuming it of this because it’s literally your only argument, but it’s an unfortunately shitty one, as most logical fallacies tend to be.

Nobody said “don’t have wide experience” but you. What I did say was “I’d probably avoid being an ardent fanboy toward a tech stack irrelevant to the interview”. And that “it’s most often best to leave irrelevant digressions to the interviewer”.

Again, you go ahead and give out all the shit tier interview advice you like. For people that actually want jobs, probably try to stick to what’s relevant.


You said if someone brought up something like Svelte when interviewing for a React job it would be a red flag. That just sounds silly to people that know how naturally someone could connect Svelte to a discussion about React.

Could I imagine a scenario where it's a non-sequitur? Sure. Not really "don't hire this person" worthy, though.


I'm sorry but I'm having a hard time understanding why a person bringing up a tool in conversation during an interview is seen as such a clear and strong signal containing so much information about their professional performance.

But nowadays it seems like one has to 'turn on' an interviewer and avoid a minefield of forbidden words to get a job. If a workplace is to become a toxic cult-like environment, policed against such thoughtcrimes as being curious and interested in technology for its own sake, I think one would be fortunate to be passed over after an interview for such a place.


You should value wide knowledge. Not be scared of it.


Are you not a developer? Being proficient in one means that it will take very little time to transfer to the other. C++ and Rust have a great deal of overlap in that way.


It only works one way though (at least in this case). If you know C++ you can quickly become proficient in Rust.

But not the other way around.


If you know rust, you can carry the same ideas to C++. My C and C++ skills greatly improved as I got better with rust. The compiler forces you to learn proper memory management and that carries over.

Smart pointers? Just Box, Rc, Arc and RefCell.

Move semantics? It's just another name for ownership.

Sure, the OOP stuff is different, but that in and of itself shouldn't hinder you.


If it's an experienced C++ job and you don't know the difference between rvalue/lvalue/etc, you're gonna have a tough time.


Are you sure? Many C++ programmers who have learned rust report that it has also made them better C++ programmers.


Yeah, C++ has a steep learning curve compared to Rust.


It's not about proficiency, it's about not getting bogged down in programming language bikeshedding.


Sigh. People don't even know what bikeshed means any more.

https://en.wiktionary.org/wiki/bikeshedding

If you think the differences between C++ and Rust are "unimportant but easy-to-grasp issues" then you may not know enough about either/both to contribute to the discussion.

We're talking about differences so impactful and well-recognized that everyone from Google[1] to the NSA[2] is advocating for using Rust to reduce the unsafety compared to C++. Are you saying all of them are bogged down in bikeshedding?

[1] https://security.googleblog.com/2022/12/memory-safe-language...

[2] https://www.nsa.gov/Press-Room/News-Highlights/Article/Artic...


One thing is that a large fraction of C/C++ devs are hired by big companies. And those tend to have a large amount of legacy code and in-house tooling. Adding one more language to such a big ecosystem can be a significantly bigger investment than kicking off a new project from scratch, so they need a good transition story to convince stakeholders and get actual funding (= more headcount/jobs).

I think Rust has reached the level where many of us agree that it's probably a good idea to move on, but not many of us have a good idea of how to do that. Hopefully some teams are eager to try, so we will see some success stories soon, and then more and more teams will explore the possibility.


It's equally difficult to try and hire a team of folks who know rust. There is also a lot of legacy code which doesn't cleanly interop with rust (cxx and the like are not sufficiently easy to bridge the gap). It would be much easier to convince your existing team to start using rust where it makes sense than it would be to switch jobs in search of one which primarily uses that. Over time both of these problems will diminish, but that'll likely take another decade.


There are tons of Rust jobs in blockchain.


I’m interested. Can you tell me what Rust developers build or work on in this space? Where do you start networking and building the demonstrable skills to get into blockchain?


I don't know Rust, but I've worked in blockchain for 6 years. Recruiters are always reaching out to me and most list Rust as a language.

In Ethereum (the domain I work in) there are lots of developer tools being written in Rust (Foundry, Build Bear). There is a new Eth node/client being developed in Rust, as well as a "wallet" browser extension. In Bitcoin I know Hiro, a Bitcoin L2, has lots of Rust, but I don't know which parts.


Yes, that's the issue.


What's the issue? Elaborate.


I upvote almost any post "in Rust" here (yeah, that's me) but this story went a bit too far. It was just one job posting, and so many sites act like MS is going to ditch C#.


Totally agree. The .NET team itself is probably costing above 100 million every year. Microsoft is not ditching .NET, they are heavily investing in it.

The office division is implementing some code in Rust while the same division just posted a success story about migrating some low-latency server code to .NET Core.

Microsoft is a big company and like any other they use JS, TS, Rust, C++, Python, Go and so on. The only thing they do not use is Java ... And I am not 100% confident there.

So let us congratulate Rust for entering the office domain and stop writing swan songs about .NET.


They use a ton of Java, they even have their own distro. In fact, they answer this question on the landing page (https://www.microsoft.com/openjdk):

> Java at Microsoft spans from Azure to Minecraft, across SQL Server to Visual Studio Code, LinkedIn and beyond! We use more Java than one can imagine.


Counting Minecraft as a somewhat important use of Java at Microsoft is pretty funny when they bought the company that introduced it and then rewrote it from scratch in C++ for W10.


The C++ ports to mobile and consoles predate Microsoft's acquisition by a number of years.


That’s funny because I downvote almost any “in Rust” post here!

C# — and garbage collected languages in general — aren’t going anywhere. With the shift to Arm, languages compiled for a runtime interpreter will get even more important for Microsoft.


I'm a Rust convert, but that needs to be put into context: I do stuff in cryptography, often on embedded. Prior to Rust I did C and C++ for most of my work. There's also Ada... and it has SPARK, but Ada has never had the level of exposure that Rust has achieved.

But there's all manner of "business systems", e.g. Java/RHEL shops, ".net shops", where GC pauses don't matter, the code is something other than CPU/memory bound anyway, the GC/JIT are mature, and a human writing C++ likely can't do better and will probably do worse. I've seen this with a write-heavy log-ingesting system where the developers used smart pointers everywhere - which internally use atomic reference counting, which implies sync barriers for otherwise entirely unrelated data. I agree, these kinds of places are unlikely to move to Rust - the code writes faster in anything else and runs well. There's also Go, which has a GC but compiles natively, GraalVM, which is AOT-compiled Java, etc. I definitely agree GC'd languages are not going anywhere.

I'm not sure I agree with C# becoming more important because of ARM, though. I don't think it'll change much - I think shops already invested in certain stacks will mostly stick to them, and for all Rust's popularity, it doesn't make sense to implement your CRM in it really. It does, however, make sense to reimplement the C++ parts of the .net runtime and miscellaneous other C and C++ parts of Windows that can benefit from the borrow checker, because it vastly reduces spatial memory safety bugs. There's a massive cost to this, but it's no longer just about on-prem patching but Azure.


> With the shift to Arm, languages compiled for a runtime interpreter will get even more important for Microsoft.

How's that? Apple and its app ecosystem just ship universal binaries, so native, heavily optimized x86_64 and aarch64 code is always ready to go with no added fuss for the user.

You could certainly argue this makes already-large binaries even larger, but users care much more about CPU & RAM efficiency than on-disk binary size. Especially on ARM machines that many people buy for battery efficiency in the first place.

On Linux you don't even bother with universal binaries because you just ship the entire package repository already built for the target CPU. It's only more fuss for the user if they need to pick specific packages from outside their distro.

Has Microsoft still not managed to make something similar work on Windows?


May every one of us attain sufficient popularity as to attract people who blanketly hate you without even bothering to supply reason or argument. May you be so popular & doing such good things that people feel the need to downvote everything they see about it on sight.

Actually, personally I wish we could identify such people & ignore their votes. Actually, personally I wish we could take persistent downvoters & fire them into the sun. I don't see or understand why anyone would let contempt so deeply into their soul. The world is not zero-sum and I have little patience for those who seem to go out of their way to deny & reject others.


Given your username I suspect you're quite a bit older. I've mentored several grey beards on Rust and they weren't at all happy about it, at first. After a few months though, they were some of its strongest advocates. It's hard to not love Rust when you realize just how much pain it saves you from.


What pain does it save most folks from that are using GC languages (as GP alluded to)? Genuinely curious.


Weirdly enough, some of the worst memory leaks I’ve debugged in my career were in GCed languages. Because any object can reference anything, retaining references can hide anywhere in the code. (Including via closures and other exotic things you forget about). And because in GC languages we don’t often have a destructor (or anything like it), it’s very common for people to forget to clean up resources. (Eg filesystem handles, sockets, wasm object references, and so on).

One weird thing about rust is that network sockets and file handles are automatically closed when the handle drops out of scope. The borrow checker takes care of that. If you do the same thing in JavaScript (open a socket then do nothing), the program won’t even quit on its own and you’ll have no idea why.
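That drop-at-end-of-scope behavior is easy to see with a toy type. A sketch (the `Resource` type is made up and stands in for a real socket or file handle; real types like std::net::TcpStream close themselves the same way, via the Drop trait):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Toy stand-in for a socket or file handle.
struct Resource {
    name: &'static str,
    log: Rc<RefCell<Vec<String>>>,
}

impl Drop for Resource {
    // Runs deterministically when the value goes out of scope.
    fn drop(&mut self) {
        self.log.borrow_mut().push(format!("closed {}", self.name));
    }
}

fn drop_order_demo() -> Vec<String> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _conn = Resource { name: "conn", log: Rc::clone(&log) };
        log.borrow_mut().push("using conn".to_string());
    } // _conn dropped right here; "closed conn" is logged before "after scope"
    log.borrow_mut().push("after scope".to_string());
    Rc::try_unwrap(log).unwrap().into_inner()
}
```

The cleanup happens at a predictable point in the source, not whenever a collector gets around to it.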


> And because in GC languages we don’t often have a destructor (or anything like it), it’s very common for people to forget to clean up resources. (Eg filesystem handles, sockets, wasm object references, and so on).

> One weird thing about rust is that network sockets and file handles are automatically closed when the handle drops out of scope. The borrow checker takes care of that. If you do the same thing in JavaScript (open a socket then do nothing), the program won’t even quit on its own and you’ll have no idea why.

That’s not an inherent problem of GC. You can have linear types or destructors in a GC language.


The first sentence you quoted acknowledged that it's a tendency rather than a fact of GC.


In IE6 it was really easy to leak memory with long-lived apps. Any HTML object could have a JS event attached and vice versa for object references in JS. Since it was across the COM boundary, if you removed an element from the DOM it would stick around if referenced in JS, and vice versa for inline JS on* attributes...

I worked on an application with ExtJS that the workers had to close and reopen at least once a day. Our unofficial instructions were portable Firefox.


You are spreading a common misconception. The borrow checker takes care of exactly nothing when it comes to cleaning up resources. The borrow checker is "just" type checking, like a built-in static analysis. But it doesn't affect code generation at all! There is an alternative implementation of the Rust compiler (`mrustc`) which doesn't implement the borrow checker at all! But if your code passes the `rustc` borrow checker, then `mrustc` should compile it correctly (* if you don't use too many new features, afaik it is a bit behind the official compiler).

I think the borrow checker has an unfair reputation of being complicated and magical. Its rules aren't always obvious to a beginner, but at the end of the day it's just type checking / static analysis. People often imagine it also does things at runtime, or affects code semantics in some way. It does not. And it's only scary because it's so stern :)


If I understand correctly, what you are describing would often be done with a using scope block in C#.

EX:

    using (var handle = new SomeKindOfHandle())
    {
        // handle is released (Dispose is called) when the scope ends
    }


* Ownership is very clear. In most GC languages ownership of objects is shared between everyone that can access it. That encourages bugs.

* You can trivially make objects immutable. That feature is tacked on to most GC languages (e.g. Object.freeze) and rarely used, if it's even available. Again mutability of everything encourages bugs.

* You can easily copy values.

* You can use RAII to deterministically and automatically clean up resources, and guard things (e.g. using mutexes).

These features are all in C++ too but Rust lets you have all that and memory safety (and it's better designed than C++).

A classic bug that you wouldn't see in Rust might be passing a list into a function and the function mutating it when you weren't expecting it to. Basically impossible in Rust.


Oh man, GC is amazingly nice but when it fails sure does it fail. It also typically fails when your biggest task is reducing overall resource spend, or getting that extra 9 after 3+ 9s of latency reduced.

On multiple teams of FAANG engineers, I've seen the GC be a big part of some meltdown and none of us knew a good way around it. Tuning only goes so far. Otherwise, you just have to give it more memory and CPUs (horizontally or vertically).

Almost always the answer was, this might need to be reworked in C++ entirely or JNI. In some cases it even got funded and was a tremendous success (like 10x fewer resources). However, then you’re dealing with C++ and all of its issues.

I’m looking forward to more value types in Java but managing the heap will always be a bottleneck eventually.


Nothing like a game server engine in C# that would freeze for 15-30 seconds now and then for garbage collection...


Or like one locking the cache every time a COM counter has to be updated, with a major freeze when a domino effect of objects reaching zero happens, when using badly coded DirectX.


GC _is_ the pain. Lots of use-cases where precise memory management is critical, or real-time guarantees are needed that are difficult with GC.


Hence why real-time GCs exist in the first place.


“The garbage comes from somewhere” -David Fowler


Not all GC languages are created equal. The best of the GC languages in my opinion is Haskell. So if a project is well suited to the limitations of a GC language, like Haskell, then the benefits of Rust may be limited to things like its nicer tooling and more polished ecosystem, etc..., while also having a few downsides. But if we make the same comparison to Go or Python, the list of advantages gets a lot bigger.


Depends on the language and platform/Runtime...

For C#, a directory of files rather than a single executable. For that and Java, a massive runtime, generally slow cold starts, high initial memory use, reduced relative effectiveness on smaller platforms (RPi and similar).

For JavaScript and Python, slow interop, in addition to the negatives above.

It's also possible to leak memory like a sieve in any language.


- Native interop in Java is surprisingly slow and very clunky (C# interop with C/C++ can be as cheap as direct calls nowadays)

- .NET had been able to produce single-file (and self-contained when needed) executables way before NativeAOT was introduced

- RPi is a fairly large platform compared to e.g. Arduino (for which Rust should be awesome) and can be well-served by NativeAOT (today). I might still use Rust for that one in the long run but using C# wouldn't be an issue both in terms of resource usage and OS since it's just an arm64 musl flavour of Linux usually with 1GB of RAM or more which is a lot for .NET (for context ~256MiB containers with ASP.NET Core image is a popular target for back-ends)


Java has also had the ability to do AOT since around 2000, the difference being that, unlike NGEN, it was only available in commercial compilers like Excelsior JET, Aonix, and many others.

PTC, Aicas and microEJ are still around, placing Java in hardware that .NET will never be in, even with Meadow.

Panama main goal is to replace JNI performance warts, while Java / ART has had mechanisms to do fast interop (@FastNative and @CriticalNative).


Panama appears to solve the abysmal UX of JNI but puts the final performance nowhere near that of .NET (without even measuring direct P/Invokes, which are literal direct C calls if you statically link an AOT binary).

Do you have references that say otherwise? (I know you don't, it has JNI level of performance)

p.s.: you have to be insane to use Java today on those listed platforms, which are much better served by C and now Rust. RPi is as small as it gets and on that I'd rather never touch any Java tooling because liking it is literal Stockholm syndrome - it sets as low a bar as it gets and almost anything else is better.

p.p.s: for anyone's interested, here's the JEP for Panama: https://openjdk.org/jeps/424 do give it a read, then look at C# spans and interop API and ask yourself whether you want to look again at Panama without gouging your eyes out.


Apparently US and French military are insane enough.

I do agree Panama UX could be much better.


On the interop issue I was referring to JavaScript and Python. As others mentioned, Java is pretty bad there too.

On the single executable, my understanding is it acted kind of like a self-extracting archive that extracted and ran... meaning slower start times and the need for write access to the file system... not to mention the framework still needed to be installed. Only more recently did a true single executable become possible.


What? No. Self-contained means the runtime is packaged in the binary, and then trimmed. The self-extract feature is for when you want to package third-party native libraries (think TensorFlow) inside your binary instead of as separate DLLs; it is a separate thing.

You just give the users an exe or unix binary and it runs.


As a gray beard... I'd rather work in rust moving forward for a lot of things.


Just like Jesus!


Haha, that's some high praise!


> C# — and garbage collected languages in general — aren’t going anywhere.

So what? What does that have to do with rust? There’s no fight between C# and rust where only one language will survive. Both languages are great, and both languages probably have a bright future. C# depends on a lot of low level C/C++ code to function and run fast. I can imagine a future where rust enables c# to get better.

I love rust, but there’s also lots of programs and teams for which c# is a better language choice. So your comment is really weird to me - there’s no reason loving c# should impact your relationship with rust at all.


That’s not technically accurate. While both the JIT/ILC and GC are implemented in C++, the former might as well have been written in any other language, which would impact the time to JIT/AOT compile the code (startup time) but not the final product. Pretty much all performance-sensitive code is written in C#.


> Pretty much all performance-sensitive code is written in C#.

I think you’re exaggerating things a lot here. The JIT/ILC and the GC are hugely important, complex, core pieces of C#. The performance of the entire .NET ecosystem is rooted in the well written (and well performing) runtime and GC. And there’s not a lot of languages those pieces could be written in. That’s the sort of niche in which rust shines, enabling C# itself to be a great language for applications.

What part of my comment is “technically inaccurate”? I stand by it.


Let's break down what "runtime" aka "VM" in this case means:

- Metadata representation: type system, reflection

- JIT compiler

- PAL: low-level platform-specific code and interaction with kernel APIs that are mostly consumed by the JIT and GC themselves

- Garbage Collector

- Special features: string interning, threadlocals, ThreadPool, assembly (un)loading and reflection emit, etc.

Only some of the above is written in C++, mostly JIT, GC, and parts of type system facilities.

This contrasted with CoreLib (like C stdlib) which includes code for everything else:

- Primitives (string, int, long, etc., they are partially or fully special-cased by compiler for layout purposes, good example to see how it works - ZeroSharp)

- What you usually see in standard library like APIs for working with strings, file and network IO, math, etc.

CoreLib is written in pure C#.

My issue is with "C++ making C# run fast", which is not the case - the compiler does, and what it achieves could be done in most other languages. That choice would only affect the time to JIT the code, i.e. the startup time, not the performance of the compiled C# code.

To give a better example, some features have been historically written in C++ but later on were rewritten in C# for either NativeAOT, which does extra compilation work (also written in C#) or both NativeAOT and regular CLR (you could think of it as the vast majority of CLR being shared but JIT and AOT being two slightly non-overlapping spheres which implement certain runtime features differently, with NativeAOT using C# for what was in C++ previously).

Examples of these are threadlocals/threadstatics and string interning, where rewriting threadlocal storage in C# enabled further optimization in the compiler to fully elide any interop calls to C++ land and completely inline reading of these values, massively improving performance for this case, while rewriting string interning in C# too improved its performance and usability and allowed to remove the legacy cruft that existed to make it work on NativeAOT (still old C++ version for regular CLR but it's a rarely used feature anyway).

Another example: all string searching and manipulation code, and all memcpy/memmove routines, are written in C# too and always get optimal performance, because C# has rich SIMD and intrinsics APIs which reach optimal HW utilization without ever having to touch C++ - it already gets compiled to comparable asm.

Or ThreadPool, it used to be written in C++ and then has received a rewrite in C# to enable further evolution and performance improvements (like blocked thread detection, or allowing for small methods to be inlined which is not an option with C++).

The overarching trend throughout all versions of .NET after going OSS has been moving more and more C++ code to C#.


Interpreters such as wasm :P


Below is some background from a Microsoft employee (https://www.reddit.com/r/dotnet/comments/1aezqmg/comment/ko8...):

Makes perfect sense. Rust is a replacement for ultra performance-sensitive parts that might have been written in C/C++ before, but not a replacement for C#.

---------------------------------------------------------------------------------------------------------

Hey there!

I work at Microsoft. I can shed a bit of light here.

We use .NET for TONS of things. Absolutely tons of different products and services. I'm on the Office 365 side of the business, currently managing Deployment for all of the hundreds of services that roll out across the world... And we use .NET extensively.

I'm starting my new position managing a team that does routing next week. They have some things that are extremely performance critical. As others have pointed out, we're talking about supporting services and traffic literally across the planet. When it comes to optimizing, they will find ways to squeeze out what they can.

There are languages like C and C++ that get used for some extreme use cases like I mentioned. Reducing as much overhead as possible in certain situations even leads to .NET apps with unmanaged pieces included with them.

There's been a lot of hype around Rust, and for good reason. But it's a system language. It's not like Microsoft is about to go rewrite millions and millions of lines of code and toss out C# (for anyone getting nervous ). They're just being pragmatic and using an effective tool for the job.

Hope that offers some clarity.

---------------------------------------------------------------------------------------------------------


What's the point of rewriting from C# to Rust, as C# is performant enough and already has memory safety? What will offset the huge rewrite cost?


C# doesn't have ownership/lifetimes and the kind of safety that comes with them.

Also, at the scale of Microsoft’s core global services, when you are paying for the processing, fast enough (or “CPU efficient enough”) isn't the same as with common apps. Even small efficiency gains are going to yield sufficient savings to be worth a fair amount of developer time.


> C# doesn't have ownership/lifetimes and the kind of safety that comes with them.

That's not really true.

Like rust, C# is memory safe, although it comes at it in a very different way.

"safe" rust does inherently prevent a certain kind of race condition in multi-threaded code that C# does not, though that's more of a nice incremental improvement, not a fundamental one -- i.e. it doesn't make your multi-threaded code thread-safe, but it does prevent one type of thread safety violation.
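Concretely, the shape safe Rust pushes you toward looks something like this (a minimal sketch; the function name is made up). Sharing a plain mutable counter between threads is rejected by the compiler, so you end up with Arc<Mutex<..>> or atomics, and the data race is gone by construction:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// A bare `&mut i64` shared across threads won't compile (two exclusive
// borrows), and Rc/RefCell aren't Send. The compiler forces the Mutex.
fn parallel_count(threads: usize, per_thread: usize) -> i64 {
    let counter = Arc::new(Mutex::new(0i64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *counter.lock().unwrap() += 1; // guard dropped each iteration
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}
```

Deadlocks and higher-level ordering bugs are still on you, which is the "not anywhere close to thread-safe" caveat.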


Rust gives you clear semantics on when it drops a variable, so dropping is used not only for memory safety but also to close files, clean up threads, release synchronization primitives, and whatever else you want (implementable via the Drop trait).

Does C# have any kind of GC behaviour guarantees that are similar?


Not really. There's the IDisposable interface[1] you can implement, and some language features around making it easy to use[2], however there's nothing preventing you from not using those language features and forgetting to call Dispose.

In case you forget, the GC will call Dispose, but it's entirely up to the GC when that happens, so personally I wouldn't say it's similar.

[1]: https://learn.microsoft.com/en-us/dotnet/fundamentals/runtim...

[2]: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...


>In case you forget the GC will call Dispose but it's entirely up to the GC when that happens, so personally I wouldn't say it's similar.

This actually has to be implemented in the class's finalizer; it's not done automatically. The finalizer is called by the GC, which, if the designer chooses, can call Dispose. This is usually done in well-designed classes that use unmanaged resources.

Dispose and using give you the determinism if you want it and the finalizer gives you the backstop to prevent leaks if it makes sense.


Good point, I forgot about that little, but crucial, detail.

So it's flexible, but there are footguns about, especially if you're wrapping a handle to something external.


Doesn't the compiler complain if you don't use using or try? I forget


It does. So does every IDE or linter.


There are warnings given by analyzers in some cases, but it's not exhaustive


C# is IDisposable [1] for that kind of stuff and has the "using" block [2] to make it hard to get wrong.

1. https://learn.microsoft.com/en-us/dotnet/api/system.idisposa...

2. https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...


That's great, but not relevant to the topic of memory safety in C#, which is all I was addressing.

I think you are making a point in the broader discussion of C# vs. rust, but that doesn't fit here. (And, personally, I don't care about that so have nothing to say on the topic.)


He wasn't talking about memory safety. Rust's lifetime/ownership features were originally created to provide memory safety, but it turns out they actually provide more than that and are very good at preventing ordinary logic bugs too.

I don't know if anyone has really investigated why this is the case but it definitely is.
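One plausible mechanism: ownership lets you encode "use exactly once" invariants in types, turning a whole class of logic bugs into compile errors. A sketch (the types and functions are invented for illustration):

```rust
// A one-shot value: checking out consumes the order by value, so
// "checked out twice" is a compile error rather than a runtime bug.
struct Order {
    items: Vec<&'static str>,
}

struct Receipt {
    total_items: usize,
}

// Takes `Order` by value: after this call the caller no longer owns it,
// and any further use of the moved-from order is rejected by the compiler.
fn checkout(order: Order) -> Receipt {
    Receipt { total_items: order.items.len() }
}

fn demo_checkout() -> usize {
    let order = Order { items: vec!["book", "pen"] };
    let receipt = checkout(order);
    // checkout(order);   // would not compile: `order` was moved
    receipt.total_items
}
```

The same trick underlies builder APIs and typestate patterns where invalid sequences of calls simply don't typecheck.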


> C# doesn't have ownership/lifetimes and the kind of safety that comes with them.

I believe .net Core has supported several forms of escape analysis for quite a while.


> C# doesn't have ownership/lifetimes and the kind of safety that comes with them.

Could you elaborate? C# has a garbage collector for tracking resources.


I think the parent post is wrong. This rewrite probably has more to do with performance, "fearless concurrency", and/or developer happiness. C# is safe.


thread safe?


It isn't, but neither is rust.

Safe rust is safe from data races, but that's just one kind of potential thread safety issue -- one of the simpler kinds. It's not nothing, but not anywhere close to thread-safe.


C# has a garbage collector for specifically tracking memory, but lifetimes are more broadly useful.

For example, Rust lifetimes (this is also the case in C++ afaik) can be used to suitably scope the lifetimes of mutexes, to have temporary folders which are deleted when they go out of scope, to require that a connection pool is destroyed _after_ the last connection inside it is returned, etc, etc.

Mostly, garbage collected languages do a bad job of cleaning up objects which refer to resources held elsewhere. Java had persistent issues with direct ByteBuffers (which were wrappers around malloc (but not free!)). Locks are easily held too long. File handles are easily left open. And depending on your GC settings, that file descriptor that's holding a 10GB file around may not get cleaned up for hours.

Refcounted languages can be somewhat better, but they don't avoid the bug, they just mitigate the effects.
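A minimal sketch of that scoped cleanup (the `Resource` type here is a made-up stand-in, not a real API):

```rust
use std::cell::RefCell;

// Stand-in for a resource (file handle, lock guard, temp dir, pooled
// connection, ...). Its cleanup runs deterministically at scope exit.
struct Resource<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<&'static str>>,
}

impl Drop for Resource<'_> {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn cleanup_order() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    {
        let _file = Resource { name: "file handle", log: &log };
        let _lock = Resource { name: "lock guard", log: &log };
        // ... resources in use here ...
    } // both dropped here, in reverse declaration order, no GC involved
    log.into_inner()
}

fn main() {
    assert_eq!(cleanup_order(), ["lock guard", "file handle"]);
    println!("{:?}", cleanup_order());
}
```

The point is that the cleanup moment is tied to scope, not to whenever a collector happens to run.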


I think GP is talking about concurrency issues that arise even in single-threaded code when simultaneous mutation from multiple sources is permitted.


Almost all of the services I've seen at Microsoft were built in C#


How have you established that there is a “huge rewrite cost”?

Despite other comments here, I see nothing in the post that says “we are moving away from C# and rewriting everything in Rust!”

Microsoft is using Rust in Windows as well. They have given examples of how. In that case, they are not “rewriting Windows” but rather targeting specific components that benefit from the characteristics of Rust. Those components are just part of the larger application that is still primarily written in C.

C# interoperates easily with C and therefore with Rust; they work well together. So it seems likely that MS is following the same plan and rewriting specific components of the larger system that would benefit from Rust.
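A rough sketch of what that boundary looks like (the names `add_checked` and `mylib` are invented for illustration, not from the article):

```rust
// `extern "C"` gives the function the C calling convention; in a real
// build you'd compile this as a cdylib and add #[no_mangle] so the
// symbol survives, after which the C# side is one attribute away:
//   [DllImport("mylib")] static extern int add_checked(int a, int b);
pub extern "C" fn add_checked(a: i32, b: i32) -> i32 {
    a.saturating_add(b) // clamp instead of wrapping on overflow
}

fn main() {
    // Exercised from Rust here just to show the function's behavior.
    assert_eq!(add_checked(2, 3), 5);
    assert_eq!(add_checked(i32::MAX, 1), i32::MAX);
    println!("ok");
}
```

Because both sides speak the plain C ABI, components can be migrated one function or module at a time rather than in a big-bang rewrite.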


> C# is performant enough

until it isn't. At Microsoft's scale, I imagine performance is actually a major concern for certain pieces of applications.


I suspect this is the .NET runtime + libraries, not applications written in C#.


No, they're looking at migrating some core Office 365 services to Rust. This is the job posting: https://jobs.careers.microsoft.com/global/en/job/1633482/Pri...

At the scale of O365 it totally makes sense to look at a high performance non-GC language for your core systems that see the most traffic (I say this as a .Net dev).


Not sure Microsoft's problem is the language. Their frontend JavaScript code is bloated and slow. It makes Office 365 feel slow, and there are so many bugs in the Azure management tools. I assume the backend code has the same low quality.


I imagine the goal here is to optimize MS' compute and hosting costs. The core pieces of O365 probably handle many billions of requests per day. At that scale even small optimizations can be worth millions a year in cost savings.


Too bad you have to be in the office 50% of the time for that role.


That surprises me since I'd expect that you could get really far with hot path optimisation.


Sounds like a skill issue to me. Their code sucks and they want to blame the language.


Considering the Microsoft consumer software that burns cycles to do trivial things (Teams, OneDrive, pick your favorite example), I do not think language performance should be a compelling argument.


There's a difference between resources on client machines and resources on thousands of servers in server farms.


Right. Microsoft pays for electricity of their server software. Hundreds of millions of consumers have to pay for the electrical usage brought on by the stack.


Who says they are? Probably depends and varies by the application.

One example I could think of is if it's currently a SaaS on Azure, but it needs to be portable enough for a version to run in a low end Arm IoT device.

Or if it's deployed to thousands of servers and taking excessive memory or CPU resources for a monitoring agent. Or GC freezes affecting other adjacent systems.

There are plenty of reasons to move from C# to Rust.


I'm not totally sold on rust as a language but I have to admit the tooling and ecosystem is really nice. I'm noticing that I'm increasingly using more stuff built with it.

I'm not sold on Zig either, for many of the same reasons. I prefer my low-level languages to be smaller, like C. I think that might be true for higher-level languages too. I just don't like having to digest a lot of documentation on hundreds of different features and the concepts behind them.


The tooling/ecosystem looks probably great for developers actively creating with it, but for developers ‘consuming’ it (downloading some foss from github and changing a couple of lines) it leaves a lot to be desired. Not being able to ‘apt install’ a working toolchain on Debian for example - it’s already too outdated after just a year


That's pretty true of a lot of development tools on Debian.


Can they start by not needing a multiple gigabyte download and admin rights to get the rust compiler to work on Windows?


Does C# not require a multiple gigabyte download and admin rights?

FWIW I've gotten Rust working with the GCC build tools on Windows without admin rights, though it's admittedly somewhat janky.


No, C# doesn't need multiple gigabytes and admin rights. Dotnet tooling != Visual Studio


Is that needed because the compiler accesses some higher privileged APIs?


The MSVC-based compiler doesn't come with a linker, so you need the MSVC build tools to get it. One of the ways to get the build tools is to install Visual Studio, which is fairly large.


You don't need the whole Visual Studio installation, only MSVC build tools and Windows SDK. See here: https://rust-lang.github.io/rustup/installation/windows-msvc...
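Alternatively, rustup can install the GNU-flavored toolchain, which bundles a MinGW linker so the MSVC build tools aren't needed at all (commands below assume a 64-bit x86 host):

```shell
# Install and select the GNU toolchain; no MSVC build tools
# (or admin rights for their installer) required.
rustup toolchain install stable-x86_64-pc-windows-gnu
rustup default stable-x86_64-pc-windows-gnu
rustc --version   # should report a ...-pc-windows-gnu toolchain
```

The trade-off is that GNU-built binaries don't link against the MSVC runtime, which occasionally matters for interop with MSVC-built native libraries.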


Those are still huge though, for some reason.


Why would a compiler need any privileges? It's like a text editor


In Windows, you get bonus points if your program needs administrative rights


I don't know. That's why I'm asking


Are there any rust books that have the focus of “ok you are productive in rust, this is what you should know / do to write proper rust”?

Edit: ok found https://github.com/sger/RustBooks

Any specific recommendations?


It sounds like you want Rust for Rustaceans: https://nostarch.com/rust-rustaceans


Thank you


I find this very strange, Microsoft has a lot of big internal high-performance services written in C#.

You have to be intentional about some things - largely making sure objects are very short-lived or very long-lived, to avoid long GC pauses. But .NET performs much better than it did 10-15 years ago and I can't think of a fundamental reason why you'd rewrite in Rust.


The Rust folks are working on experimental codegen for the CLR https://fractalfir.github.io/generated_html/rustc_codegen_cl... leveraging the existing CIL/CLR support for "unsafe" languages. Once that work is complete, you should be able to rewrite C# code to Rust on a very fine-grained basis (a single function at a time or thereabouts), just like you can when porting C to Rust. Of course, you'll also be able to remove the CIL/CLR dependency altogether if you're left with 100% Rust code, and compile to a binary just like in ordinary C/C++.


Might not be all about performance but about security as well.


I imagine it comes down to the handling of threads vs anything else. The industry is rapidly adopting a more-cores strategy since we are hitting some IPC limitations, and in the server space more cores is better.


Multi-core scaling (in particular within GC) is one of the strongest points of .NET (for example, Go used to have poor scaling on many-core systems (has this changed in 2024?) while in .NET the throughput would continue scaling linearly).


I really doubt that this is true, might be your bubble. Have you dabbled in Erlang/Elixir? Your standards would increase substantially if you did.


Wouldn't that be trying out something that is a strict downgrade? (bytecode interpreter based VM with weak JIT, GC is likely much weaker too)

It really comes down to performing as much work on a core-local basis, and .NET SRV GC already does a lot to avoid inter-core synchronization cost (per-core heaps; I'd expect JVM GCs do a similar, maybe better, job), and so does various thread-safe code in the standard library.


To answer your question:

To see how far behind everything is in terms of parallelism and concurrency. It's not even funny how primitive 99.9% of everything out there is in this area.


Maybe compared to C++ or even Go (yes, Go is very rudimentary with examples expecting you to synchronize goroutines by hand), but unlikely compared to C#. While both parallelism and concurrency are not as central to it as to Erlang, it is a much more approachable language and achieving either or both is trivial:

    // Concurrency
    using var http = new HttpClient();

    var req1 = http.GetStringAsync("https://example.org/");
    var req2 = http.GetStringAsync("https://news.ycombinator.com/");

    Console.WriteLine(string.Join('\n', await req1, await req2));


    // Parallelism
    var user = Environment.GetFolderPath(
        Environment.SpecialFolder.UserProfile);

    var hashes = Directory
        .EnumerateFiles(Path.Combine(user, "Downloads"))
        .AsParallel()
        .Select(path =>
        {
            using var file = File.OpenRead(path);
            return Convert.ToHexString(SHA256.HashData(file));
        })
        .ToArray();

    Console.WriteLine(string.Join('\n', hashes));
For distributed computing, there are the Orleans and Akka.NET frameworks, which allow achieving it at scale, and a garden variety of other, simpler frameworks for job scheduling.


OK, but syntax by itself is not saying much. Ruby's ractors can look similar but the underlying implementation is subpar.

Does C# have actual green threads or actors? Proper work-stealing schedulers, low-latency guarantees etc.?

If so, that would be cool, Erlang's BEAM VM desperately needs competition.

But I doubt it. From what I am seeing regularly on HN, people prefer to degrade or belittle / minimize the value of the BEAM VM than to admit that the runtime of their favorite language is still not good enough. Cognitive dissonance is getting in the way it seems. Shame.


"Does this have green threads?" is a common fallacy (along with mentioning function coloring) and arguably a much worse API than Task/Future-based concurrency primitives, because it does not give idiomatic control over yielding/dispatching and hides an important characteristic of the executed code, specifically, calls being a promise to produce a result in the future (e.g. the example above with http requests is much clunkier without them).

Some languages intentionally opt for a much more limited way of doing it like Go with goroutines and channels, some have to retrofit an async alternative to benefit the existing decades of code like Java, and some languages eventually got it right like F#, C#, in a way, Python and JS/TS, and, to the dismay of many (and pain I sympathize with), Rust.

Also, in C#, the way to go about it is to just pick the abstraction you think fits the problem best, be it manual threading, async/await Tasks (or anything custom that integrates with it), channels or, as you mentioned, actors for which there are Akka.NET and Orleans. I believe F# is even more flexible in this area.

As for the implementation details of BEAM VM - just look at e.g. the times in 1BRC challenge for BEAM-based submissions - the rift between compiled languages and the former is immense and likely unclose-able, and this trend persists in any (micro)benchmark for BEAM I look at - the overhead of most trivial operations is just too damn high! It was a groundbreaking technology at its inception but today - likely not anymore.

Not sure what* makes you quick to assume that the people working on .NET, JVM and other platforms which care about performance in (massively) parallel domains don't know what they are doing but work-stealing schedulers are yesteryesterday news and "everyone" does them (.NET, Tokio, Go from what I know), both in the form of threadpool implementations and in the form of bespoke parallel abstractions (both examples in my previous comment have these). Same applies to latency-minimization techniques, priority scheduling (worst case you can always have a dedicated worker thread that runs at higher prio, or a scheduler with a pool of those) and more.

(* If you have been burned by Ruby - I did hear about it being subpar in all kinds of ways and observed being very slow but that should rather be an exception than the rule among other languages)


> "Does this have Green threads?" is a common fallacy (along with mentioning function coloring) and arguably much worse API than Task/Future-based concurrency primitives...

IMO both are really just handy abstractions. The salient question and point is: can we have 10,000+ perceptibly parallel tasks? The BEAM VM does it. And yes, we're talking about tasks that are CPU-intensive as well.

> and some languages eventually got it right like F#, C#, in a way, Python and JS/TS

If you say so. I haven't seen any proof of it for my 20+ years of programming. C# is quite the nice language but you are stretching the compliments towards it by ascribing to it that "it got parallelism right". It absolutely did not, as didn't 99% of all languages and runtimes out there. To this day. Having an API for good parallelism doesn't mean your runtime is prepared for it. That's why Orleans and Akka still cannot do what the BEAM VM can do, and likely never will.

And Python and JS? You are just inserting a joke hoping I won't notice here, right? Right? I literally made dozens of thousands of bucks rewriting Python programs where people thought they were oh-so-clever with asyncio et. al., to Golang and to Rust. Easily 200x the throughput, and 99.9% of all parallel bugs disappeared overnight (well, after we launched, I mean). There were a grand total of 7 other bugs remaining we uncovered in the first month in production. I still keep contact with those old colleagues. The app ran unhindered for ~11 months before they finally hired a new team to keep developing it after they let go 10 out of 11 contractors after the project was mostly done (and I was in that group).

> Also, in C#, the way to go about it is to just pick the abstraction you think fits the problem best

Sure, do that, but in the end, and again, what matters most are the building blocks below -- do they enable the true lag-resistant parallelism that doesn't require a lot of fiddling?

> this trend persists in any (micro)benchmark for BEAM I look at - the overhead of most trivial operations is just too damn high! It was a groundbreaking technology at its inception but today - likely not anymore.

What's ground-breaking in terms of academic research matters not to industry, at least 90% of the time.

I work with Elixir (and Golang, and Rust) professionally every day. Even today Phoenix is the most stable web stack I've ever encountered (and one of the very fastest throughout all dynamic languages and no, don't quote TechEmpower; they scarcely know what they're doing, though happily they became more open to PRs gradually which helped their later iterations a lot).

Bombard the BEAM VM, DDoS it, all responses get slightly slower and slower as the load mounts up, but it has to be at its breaking point until you start seeing actually failing request-response pairs. Not to mention parts of your app's supervision tree can fail and they just get restarted without bringing the entire app down (like the database pool).

Try doing that with Ruby. Or Java. Exception after exception, and the server OS process has to be restarted if somebody missed a catch clause (lol).

In truth, Golang and Rust fare quite well too, but that's partly by the virtue of them being able to absorb much more hammering (they are between 20x to 1000x faster than Elixir depending on framework) and not strictly because of their runtimes. They do have good runtimes though. Not as fault-tolerant and lag-minimizing like the BEAM VM, but they are not far.

> Not sure what makes you quick to assume that the people working on .NET, JVM and other platforms which care about performance in (massively) parallel domains don't know what they are doing

1. Sunk cost fallacy;

2. Stockholm syndrome;

3. Risk aversion;

4. Sinclair's law: "It Is Difficult to Get a Man to Understand Something When His Salary Depends Upon His Not Understanding It". You likely command a good salary with C# and have made a good career with it. Of course you will not want to think it's suboptimal.

Shall I go on?

And I am not saying that "they don't know what they are doing". I am saying there is too much inertia that nobody is ever going to make a revolutionary change -- everyone is too afraid and are just swallowing the problems and are becoming experts at avoiding them for as long as possible. Also financial stakeholders will never allow such revolutionary changes, but that's a very different topic.

So yeah, not the same thing.

> work-stealing schedulers are yesteryesterday news

1. I didn't suggest they are revolutionary. I suggested they are a good pattern that's proven, and you also noticed that.

2. What is "yesteryesterday's news" matters not. The ideas of the BEAM VM are quite old and they serve perfectly many businesses today. Are you suggesting trendiness > merit? I hope not.

---

Overall, I will vehemently disagree that you just have to pick a language and it will all work out if you try hard enough. Absolutely not. This forced and imagined equality between languages and runtimes does NOT exist. Some of them are objectively better than others for jobs X and Y and I am tired of people pretending otherwise.


I appreciate the long reply (no sarcasm) and the compliment regarding the salary haha (it makes for a nice aspirational goal, to have good total comp). I probably can't respond to the whole post but feel like we can come to an understanding.

You mentioned 10_000 perceptibly parallel tasks so I threw together a small example - what if we had 10_000_000 concurrently executed tasks instead? This takes 1.2-1.5GB of RAM to run but it can give you a good showcase that .NET ThreadPool can take a lot of punishment and this is far from the worst you could see in one enterprise codebase or another:

    var tenMillionTasks = Enumerable
        .Range(0, 10_000_000)
        .Select(async i =>
        {
            // Force the yield - this way the methods will continue the execution
            // in the form of scheduled continuations (work items), so that we pay
            // for context switching too, similar to a more realistic scenario.
            await Task.Yield();
            for (var j = 10_000; j >= 0; j--)
            {
                // This is an okay proxy for doing some work since the
                // JIT is fairly conservative and won't auto-vectorize this
                // because its computation budget is prioritized on more impactful
                // optimizations like inlining, CSE, devirtualization, etc.
            // (this is completely compensated by the portable SIMD API)
                i++;
            }

            return i;
        });

    // Scales linearly with the number of CPU cores
    var results = await Task.WhenAll(tenMillionTasks);
    var average = results.Average();

    Console.WriteLine($"Done! The avg is {average}");
You can try it yourself if you're interested. To do so you can get an SDK from https://dot.net/download, execute 'dotnet new console', paste the code into Program.cs and then execute 'dotnet run -c Release'. This is more stressing to the runtime and likely fairer than another comparison posted here on HN a year or so ago, which evaluated various runtimes by having each task simply wait for a period of time. Nonetheless, you can replicate that too by adding 'var delay = Task.Delay(TimeSpan.FromSeconds(5))' before the tasks variable and replacing the lambda passed to .Select with 'async _ => await delay' to see how small the task overhead is.


I appreciate you putting this together. :)

If I am wrong, I'd be happy. It would be about time the big software stacks took proper parallelism to heart.


.NET supports asynchronous code very well, so that hardly seems like a likely reason for rewriting in Rust.


> .NET supports asynchronous code very well

Work-stealing task schedulers?


Yes, and overall task-based code has been a basis for many APIs since .NET Framework 4.5 (2012); earlier async patterns existed before too. It is not in a frozen state either, as each release sees improvements to asynchronous code execution (overall improving the ThreadPool, reducing the size of state machine boxes, experimenting with alternate underlying implementations like the green threads experiment, the learnings of which have been carried over to the runtime task handling experiment which will massively reduce async overhead when it eventually finds its way into the mainline runtime).


There is a Reddit post where a dotnet developer on the MSFT team details why they are moving certain processes to Rust. TLDR, it is all about performance at scale.


Does anybody have a link?



I will try to find it today if I have a chance. I think it was on cscareerquestions or dotnet.


Here is a link to a user on reddit claiming to work at Microsoft going into more detail, https://www.reddit.com/r/dotnet/comments/1aezqmg/comment/ko8...


Does anybody know more about the "Substrate App Platform group"?

As far as I understand it, Microsoft Exchange and ESENT actually power a lot of Office 365, e.g. the compliance tools, search, and Teams chats. Next to it is another pillar: SharePoint, which is also exposed as OneDrive and is based on SQL Server.

Is Substrate part of Exchange, or has it been?


I can understand the move from unmanaged C to Rust for security reasons, but C Sharp to Rust ? Am I missing something here ?


Reduced memory and CPU utilization at scale most likely.


Performance reasons.


Expect MS's own clone of Rust, like C# (for Java) or TypeScript (extended JS), in the future


I think they already have this internally. That said, I believe they already have board seats and are large contributors to the rust foundation.


Just hire Yehuda Katz


Why is MS so hyped on Rust? It’s good but doesn’t it have its caveats? It seems they’ve become Rust zealots.


> It’s good but doesn’t it have its caveats?

Compared to C#, it's a low level language, where you have to be concerned with a lot of details.

But then, I'd bet this is for replacing bad C# that does low-level tasks - code that nobody sane would write in C#, but MS pushed for it anyway because they were hyping the language.


Office 365 was introduced in 2010. So when it was created Microsoft had two reasonable options:

1. Accept the performance penalty from using a safe garbage collected language (such as their own C#), which is slower; this may make the service less attractive and more resource hungry, which means more expensive to run at scale.

2. Accept the correctness penalty from C++ which delivers good performance but will cause an endless stream of bugs, some very hard to diagnose.

Rust makes this much easier: you aren't paying the correctness penalty (your Rust will probably have the same number of correctness bugs as your C#, or fewer), and yet you get the good performance you'd have wanted from C++.

For a 2010 product Rust was not an option, but presumably at some point in the last say five years, Microsoft did the maths and worked out that unless they're killing Office 365 soon or somehow C++ magically becomes safe with no work, a transition to Rust is cost effective.


MS has lots of battle scars from the security headaches they went through in the 2000s/2010s.


In 2007 they were .NET zealots and now .NET is old guard


There's plenty of ongoing development and support for .Net; it's much less likely a matter of age than of handling more requests per server.


Hi @dang, regarding HN karma, what happened to the karma points if a user posted the same article just twenty hours earlier? (◔_◔) [1]

[1] https://news.ycombinator.com/item?id=39232275



