It's interesting that someone at Microsoft is talking about Rust, but the article is very flawed. For example,
> C#, a programming language developed by Microsoft, also features some memory access improvements but is not as advanced as Rust. [..] Besides [Rust] being superior to C# in regards to better memory protections
That's false, isn't it? C# is a memory safe language, period. It relies on GC for that.
In fact C# has arguably better memory safety than Rust because you can do things safely in C# that you would be forced to use 'unsafe' in Rust for. (Granted, then you have overhead from GC, but that's not what the author is talking about.)
> Rust is also more popular with developers these days and might be easier to recruit for.
The author has misinterpreted what "most loved" means in the quoted survey: it means that among Rust developers, it gets a very high rating. That says nothing about how big that group is, nor how popular it is in the general population of developers.
The Rust community is growing but still very small - it's an emerging language. Almost everyone that uses it decided to use it because they like it. (That doesn't diminish the accomplishment - there are other emerging languages that are not as loved by their users.)
It's quite possible that what the author intends by “memory protections” is broader than what is captured by “memory safety”; e.g., the borrow checker is, in a sense, a way of protecting memory from unexpected changes in parallel tasks.
I think that's too generous to the author given the other mistakes in the article, but sure, if you expand the meaning beyond regular memory safety, then it becomes a mixed picture: Rust protects from some race conditions, but other types of parallel code must use 'unsafe'. And as mentioned before some data structures can be done safely in C# but not in Rust.
It is definitely false for the author to say Rust is "superior to C# in regards to better memory protections", under any definition of "memory protections".
Is the author's "wording" in the article informed by extensive coding experience in Rust and C#, or any other language?
Or did someone prod a journalist with little familiarity with programming theory to write positive "spin" on Rust being explored by Microsoft? Is it even news for the world's largest software company (by market cap) to explore the use of emerging languages?
That's the part you cited before going on to imply the author might be referring to Rust's borrow checker. Nothing in the author's "wording" implies knowledge of Rust's memory protection techniques.
> That's false, isn't it? C# is a memory safe language, period. It relies on GC for that.
The original post's whole sentence there is basically nonsense. They're different enough that it's hard to call either one more advanced, or to say either has better memory protection at all.
Using just plain C#, it has better memory protection, since the GC and runtime make it impossible to leak, double-free, or access out-of-bounds. But there's a higher cost for that, and the system for managing non-memory resources is not as reliable: you can leak file handles, network handles, etc. more easily. I'm not up on the latest in Rust, but I think it's possible to leak memory if you are sufficiently clever and doing really weird stuff. But the standard ownership model is great at making it really hard to leak or mismanage any resource, not just memory.
It's hard to call either one more advanced either. .NET has a massive std lib and C# has been getting some cool new features. It's still a runtime language though, with the limitations that come from that. Rust's lifetime and ownership system is pretty advanced, but wouldn't make sense for a runtime language.
> the GC and runtime make it impossible to leak, double-free, or access out-of-bounds
This is false.
Neither C# nor Rust protects against memory leaks. There are actually some gotchas in C# that can cause memory leaks - for example, you have to be very careful about events. Memory leaks are not memory unsafe, though.
On the topic of actual memory safety - C# has unsafe blocks just like Rust does. Safe Rust is just as safe from memory safety problems as safe C#, even more so, because C# doesn't require unsafe for FFI, where all bets are off. And unsafe C# is just as unsafe as unsafe Rust can be.
The only gotcha that can cause memory leaks in C# is an unintended GC root, i.e. a reference reachable from a stack, from a static field, or from a pinned handle. Events/delegates are not special in any way; they are simply a reference footgun because it's non-obvious that they contain a reference to the object that contains the handler.
`mem::forget()` only prevents destructors from running, which leaks data managed by the destructor as a side effect, but it gives no guarantee that the memory of the object passed to it stays allocated (e.g. if you call it on a stack-allocated value, the stack slot is reclaimed anyway, so nothing is leaked).
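For reference, a minimal sketch of what that looks like in practice (safe Rust, std only; forgetting a `Box` leaks its heap allocation, forgetting a plain integer leaks nothing):

```rust
use std::mem;

fn main() {
    // `mem::forget` takes ownership and skips the destructor, so the
    // heap allocation owned by this Box is never freed - a leak, in safe code.
    let boxed = Box::new([0u8; 1024]);
    mem::forget(boxed);

    // A plain stack value owns no destructor-managed resources, so
    // "forgetting" it leaks nothing; its stack slot is reclaimed anyway.
    let x = 42u64;
    mem::forget(x);
}
```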
> That's false, isn't it? C# is a memory safe language, period. It relies on GC for that.
True. I think they are conflating memory-safety and thread-safety here. Rust definitely offers better thread-safety properties. This is somewhat nice, since in my experience thread-safety issues are the more commonly undetected bugs - which are later on tricky to find and might require big refactoring efforts to fix. They are also often the source of the memory-unsafety issues.
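For concreteness, a minimal sketch of the kind of thread-safety Rust enforces at compile time (shared mutable state has to go through something like `Arc<Mutex<..>>`; handing a bare `&mut` to two threads simply doesn't compile):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared, mutable, cross-thread state: the types force synchronization.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // The lock must be taken before the data can be touched.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```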
Imho Go currently has the best blend of both properties for a GCed language - since the race detector does a good job of detecting thread-safety issues. I wish for Java and C# to get the same.
Yes, it's quite possible to cause tearing in C#. It does not cause undefined behavior, but it can, for example, break invariants (supposedly enforced by privacy) in large enough structs.
See this example where I break encapsulation in a struct by abusing a race condition.
> That's false, isn't it? C# is a memory safe language, period. It relies on GC for that.
Ah, that is also false. A GC makes a language safer, but it does not protect against race conditions, so it is not entirely memory safe.
Of course, many languages that are referred to as memory safe have this loophole, but that may also be because there previously wasn't a concurrency-safe language with suitable protection in this area.
I think people refer to memory safe in general as "not being able to overwrite memory that is not currently owned by a referenced object in the program" - which certainly applies to all GCed languages.
It's definitely true that this does not imply thread safety, or the ability to prevent messing up the consistency of valid objects.
> I think people refer to memory safe in general as "not being able to overwrite memory that is not currently owned by a referenced object in the program" - which certainly applies to all GCed languages.
This is absolutely false. Race conditions on native structures can very easily invalidate any memory safety guarantees provided by the language, causing issues such as buffer overflows.
In other words, the presence of any memory unsafe operation leads to a total and immediate loss of all memory safety.
The former has a detailed explanation; the latter has a code example that shows more clearly what the technique can be used to do (cast a struct A to an unrelated struct B with a different length, thus allowing out-of-bounds reads/writes).
If on the other hand you were just saying that these languages are just called memory safe, even though they aren't, then yes. That's what I said.
You could make a coherent argument that Rust has better memory protections than C#. For instance, data races are prevented, and that is a memory protection. Arguably, replacing null with Option is a form of better memory protection as well.
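A minimal sketch of the `Option` point - absence becomes a type, so the compiler forces the "no value" case to be handled before the value can be used (`find_user` is a hypothetical function for illustration):

```rust
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn main() {
    // There is no way to forget the null check: you have to match
    // (or use a combinator) before you can touch the name.
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```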
OK, perhaps we should say a doubly-linked list where each node contains the data and two pointers. While this is safe, it's also fairly inefficient, storing an extra reference count with every node, dynamically updated, just to keep the borrow checker happy.
Nope, intrusive is different. Because this uses an Rc (reference-counted pointer) for the forward and backward pointers, these counts are both set to 2, as (almost) everything in a doubly-linked list has two things pointing to it: the thing after and the thing before.
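A minimal sketch of the node shape being discussed (one common safe formulation uses a counted `Rc` in one direction and a `Weak` back-pointer so the counts can actually reach zero; illustrative only):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,   // strong: keeps the next node alive
    prev: Option<Weak<RefCell<Node>>>, // weak: back-pointer, breaks the cycle
}

fn main() {
    let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
    let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

    // Every link update adjusts a reference count at runtime.
    first.borrow_mut().next = Some(Rc::clone(&second));
    second.borrow_mut().prev = Some(Rc::downgrade(&first));

    assert_eq!(first.borrow().next.as_ref().unwrap().borrow().value, 2);
}
```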
That's not true and a misunderstanding of what `unsafe` does. It has nothing to do with borrow checking (which is what trips people up when writing a double linked list). Unsafe is primarily used for FFI and lets you dereference raw pointers (among a few other related uses).
It is in general an escape hatch for being able to write code that isn't possible in safe Rust. Limitations of the borrow checker (e.g. for backpointers in circular data structures) definitely fall into that category. Even without FFI Rust would not be practical without unsafe.
You would use raw pointers instead of the normal ones the type system (including the borrow checker) cares more about, so ultimately, yes, it is about subverting the type system.
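A minimal sketch of that escape hatch - creating a raw pointer is safe and invisible to the borrow checker; only dereferencing it needs `unsafe`:

```rust
fn main() {
    let mut x = 10u32;

    // Raw pointers can be created freely; the borrow checker ignores them.
    let p = &mut x as *mut u32;

    // Dereferencing requires an explicit `unsafe` block, where upholding
    // the aliasing and validity rules becomes the programmer's job.
    unsafe {
        *p += 1;
    }

    assert_eq!(x, 11);
}
```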
This is getting semantically thorny, but I would not interpret memory protections as memory safety. The borrow checker just makes it easier to establish bounds on data flow, for which there are a number of benefits beyond preventing use-after-free and unintended aliasing.
I think by "memory access improvements" they are referring to the new zero-cost memory abstractions (called Span and Memory), not to memory safety. This makes the most sense in context, because these abstractions were inspired by similar zero-cost abstractions in Rust.
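For context, arguably the closest Rust counterpart is the borrowed slice - a pointer-plus-length view into existing memory, with no allocation or copying:

```rust
// A slice borrows a sub-range in place; nothing is allocated or copied.
fn checksum(bytes: &[u8]) -> u32 {
    bytes.iter().map(|&b| b as u32).sum()
}

fn main() {
    let buf = vec![1u8, 2, 3, 4, 5, 6];
    let head = &buf[..3]; // zero-cost view of the first three bytes
    println!("{}", checksum(head));
}
```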
Why would you decide someone is maliciously trolling the internet by writing an article on Rust, versus the far, far more likely situation of the author and/or target readership being unfamiliar with the domain, leading to poor-quality content being churned out to hit a deadline?
Critical thinking is recognizing when people throw out statements without presenting any evidence. If you fill your article with accusations and no evidence, then you're trolling.
"70% of all Microsoft patches are for memory-related bugs".
This says a lot. I'm sure Microsoft has very smart and competent engineers, yet they still get all these bugs. It's because manual memory management is really, really hard.
I'm baffled to keep hearing some people saying that manual memory management is okay when the evidence is this huge.
Perhaps for very simple programs it's okay to do manual memory management, but for complex ones, like most programs nowadays, it seems close to impossible for humans to write safe programs.
You don't want to have to rely on people being competent all the time. You want people to 'fall into the pit of success' instead of opting into several safety mechanisms that'll get you something that's 'pretty good'.
Besides, if a language guarantees that all pointers are not null, initialized, not dangling, etc you have a solid foundation in which to build large systems.
Also - why are people even mentioning C#? Rust is a language for writing SQL servers, drivers, browsers, and other foundational components where a garbage collector wouldn't make sense.
Just about all of my stuff deals with HTTP, which is farther up the stack from Rust's sweet spot.
Yes, you can duct-tape Rust-style memory safety features onto C# or C++, but I'd imagine using them is kind of like introducing asynchrony into a C# codebase - it wants to affect everything, and if you don't let it, you'll always be contending with the sync/async border. Better to make it a requirement and reap the rewards from the knock-on effects.
Because Microsoft has written several OSes in C# variants, with AOT compilation to native code.
They then took some of those learnings into Windows 8.x, UWP .NET Native and C# 7.x and 8.0 low level memory primitives.
A tracing GC, coupled with value types and manual memory management in unsafe code makes plenty of sense and has been done multiple times since Xerox PARC days.
The only thing missing is a company having the guts to push it no matter what. A bit like Google is doing with ChromeOS and Android.
>The only thing missing is a company having the guts to push it no matter what. A bit like Google is doing with ChromeOS and Android.
This. A lot of the so-called new ideas and hype aren't actually new at all. They didn't succeed simply because they never got the investment or drive to reach sustainable critical mass. And every few years or decades the wheel gets reinvented by a new generation of developers.
Especially in the tech industry, where you are considered old if you are over 40, but in reality that is the age at which you finally see all the ideas getting recycled by a newer generation, and you can finally say meh, no.
There’s nothing intrinsic to Rust that makes it better suited for “foundational components” over anywhere else in the stack. Just because it’s capable of being used lower down doesn’t mean it’s a better or worse language higher up the stack. It’s probably got all the APIs or great crates you need for your work.
It’s not a language “for” anything per se, it’s ergonomic anywhere up the stack.
> There’s nothing intrinsic to rust that makes it better suited for “foundational components” over anywhere else in the stack.
Actually, there is: not having a GC means you can better reason about stack/heap memory usage. That matters for performance-critical code and for things like embedded systems where you might not even use malloc() in C.
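A minimal sketch of what "reasoning about stack/heap usage" buys you in Rust - allocation is visible in the types, and deallocation happens at a point you can see:

```rust
fn main() {
    let on_stack = [0u8; 64];          // fixed-size array, lives on the stack
    let on_heap = Box::new([0u8; 64]); // explicit, visible heap allocation

    println!("{} {}", on_stack.len(), on_heap.len());
} // `on_heap` is freed deterministically right here, with no GC involved
```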
I think I may not have phrased it well, I agree with you, it’s better than many other languages for low level software development. What I’m saying is there’s nothing that makes it worse for high-level software development.
Depends on how your mind works really... I know a lot of devs that cannot break out of C# class and enterprise app thinking. I happen to like JS for a lot of workflows. It really depends on mindset.
Been learning Rust for a few months now and really enjoying it so far... waiting on the new async stuff to firm up a little before continuing.
Because explicit memory management has a higher cognitive overhead than implicit memory management, a GC-based language is more suitable higher up the stack.
For that matter, I wouldn't be too surprised to see MS publish crates that give nicer interfaces to Windows internals, or even access to Linux, SQL, Azure, or other platforms.
Because there are plenty of higher-level systems interfaces that could be just as well written in C++, C#, or Rust. Just because Rust works in low-level problem spaces doesn't mean it doesn't work higher up. Also, MS could very well create some of the applications in question with C# as well.
This is also a “10x and I’m sorry” issue. The unicorns write a prototype which is then handed to lesser mortals for maintenance. Will the mortals do as well as the initial author that had complete control? No. I was just thinking it’s basically denialism at this point, yet I still run into engineers who think memory management is no biggie. Everyone has a blind spot, and this is one often found in the most talented.
That comment isn't scoffing at people who write code with memory bugs. That comment is scoffing at people who think writing memory-safe code (in languages which allow programmers to easily write memory-unsafe code) is easy.
People need to "humble down" and see that if people keep falling into the same traps even with training, it's not as simple as "people suck".
The parent isn't saying "people suck" or even that "people who write memory bugs suck".
They're saying that 'programmer machismo' sucks.
Saying, "we don't need better tools, we just need better people, and I'm one of those better people," sucks.
Closely related is saying, "manual memory management is hard, so I guess most programmers should use better tools. But I'm a really advanced 10x 'leet programmer, (in spite of how much of my code has wound up in CVE's), and I'm clearly capable of writing code using manual memory management because of my 'leet skills."
It's one of the peculiarities of the field that we're completely ok with practitioners using dangerous tools and building techniques when better tools and techniques are available.
This seems self contradictory. If people keep falling into the same traps even with training, how is "this is a human limitation" not a reasonable conclusion?
> This says a lot. I'm sure Microsoft has very smart and competent engineers, yet they still get all these bugs.
But let's say they were very smart and did not get all those bugs. Even then, why would someone claim the compiler double-checking is not useful?
I have a pretty good grasp of JavaScript and can pretty much always predict what type something is going to be. Of course I make mistakes, but even if I didn't: using TypeScript is still great, because I don't have to manually keep track of all those types. Sure, I could do it, but it's more useful to spend my cognitive effort and energy on other things.
I don’t understand how SAL and RAII couldn’t help with this. In three years I never had memory issues reported against my C++ code. The closest thing that happened to me was corruption while prototyping a Zip utility that had new features I was inventing. There I had to build a variable-sized input bitstream writer and had corruption on some ISAs due to differences in the interpretation of <<.
If you don't have memory issues in your code, then either it's so small that everyone participating in its development can actually keep it in a perfect state, or no one has searched hard enough yet. After a certain complexity, bugs just appear; you don't have to do anything, they are just there.
I’m not saying I never wrote bugs, but tools like SAL or RAII types that manage sharing help. By the time I checked in, they were either squashed or not found until after I left Microsoft.
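The same RAII idea, sketched in Rust terms (the `Drop` impl plays the role of the C++ destructor; `TempFile` is a made-up type for illustration):

```rust
struct TempFile {
    path: String,
}

impl Drop for TempFile {
    // Runs deterministically when the owner goes out of scope,
    // including on early return or panic unwinding.
    fn drop(&mut self) {
        let _ = std::fs::remove_file(&self.path); // best-effort cleanup
    }
}

fn main() -> std::io::Result<()> {
    let tmp = TempFile { path: "scratch.bin".into() };
    std::fs::write(&tmp.path, b"work in progress")?;
    // ... use the file ...
    Ok(())
} // `tmp` is dropped here: the file is removed without an explicit call
```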
I built a domain agnostic virtualization and streaming platform that generalized the “Click2Run” features of Office and could apply retroactively to any traditional app that didn’t include kernel code (e.g. worked on Adobe Creative Suite)
I also worked on the early phases of the sandbox that runs WinRT for JS, which also required paying my dues and fixing bugs or implementing new features (e.g. the fetch API) for the Trident engine in IE.
It's hard, but you're trading memory safety for performance. Rust cheerleaders will tell you that it's possible to avoid that trade-off, but the proof is in the pudding. It's become almost impossible to see a new OS project succeed in the current landscape, so maybe an entire AAA game written in Rust would be an interesting challenge.
Also, being memory safe doesn't get you much (or rather very very little IMHO), because complex code still contains logic bugs, UI bugs, performance bugs, etc, etc. Personally, I want the program to crash rather than keep running only to find out it was silently messing up data or otherwise leaving things in an inconsistent state.
> Its become almost impossible to see a new OS project succeed in the current landscape,
Fuchsia contains a considerable and ever-growing amount of Rust. And if you consider VM infra in the "OS" category, you have Firecracker from Amazon and crosvm from Google as well.
> maybe an entire AAA game written in Rust will be an interesting challenge.
We'll see how it goes; there is one studio that has made AAA-level games in the past that is using Rust now, and a new studio that is full of ex-EA folks.
> Personally, I want the program to crash rather than keep running
The alternative here isn't keeping running, it's "does not compile in the first place". That's even sooner than your crash, and guaranteed, as opposed to "happens to hit the right code path."
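And for the cases the compiler can't rule out statically, safe Rust still gives you the crash instead of the silent corruption - e.g. indexing is always bounds-checked:

```rust
fn main() {
    let v = vec![1, 2, 3];
    let i = 7;
    // This panics immediately with an index-out-of-bounds error
    // rather than silently reading adjacent memory.
    let x = v[i];
    println!("{x}");
}
```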
>The alternative here isn't keeping running, it's "does not compile in the first place". That's even sooner than your crash, and guaranteed, as opposed to "happens to hit the right code path."
Different people mean different things when they talk about correctness. I think it means accurately capturing the intent of the programmer, i.e. the logic/algorithm/code flow or whatever, and eliminating the ambiguity. How does Rust help here?
Sure, if you’re talking about that, then Rust has more mainstream tools: a very strong static type system, a built-in test framework, that kind of thing. You can help guard against logic bugs by using these tools more effectively, but they do require proficiency to use.
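For instance, a minimal sketch of the built-in test support (no external framework; `parse_port` is a hypothetical example function):

```rust
fn parse_port(s: &str) -> Option<u16> {
    s.parse().ok() // fails for non-numbers and values outside 0..=65535
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rejects_out_of_range_ports() {
        assert_eq!(parse_port("70000"), None); // doesn't fit in a u16
        assert_eq!(parse_port("8080"), Some(8080));
    }
}

fn main() {
    assert_eq!(parse_port("8080"), Some(8080));
}
```

Run the tests with `cargo test`.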
Right, but such tools are available across multiple programming languages, and the onus is on Test/QA to ensure that. Let's say that Rust eliminates memory crashing bugs by restricting certain freedoms afforded by C or C++. The problem of coordinating between multiple threads - who gets to do what, when, and with which data - and of accurately capturing the intent of the programmer still exists. Now if you view the world through Rust-colored glasses, then any invalid state is the result of a programmer error because they used Rust incorrectly.
I'm saying that it's this same kind of programmer error (not being able to specify things without ambiguity, or lacking a way to capture intent accurately) that was the cause of the memory bug in C too. It's just that with C you will hard crash, whereas in Rust you won't. You didn't crash, but you modified a data structure you shouldn't have, and your program will still end up in an inconsistent state.
In much the same way, the kernel doesn't care what happens in userspace - which thread is doing what with memory, or which userspace resources are being corrupted, etc. The kernel can trivially kill userspace threads at will and continue to remain rock solid as ever. So yes, while the separation of kernel mode and user mode helped the user escape from poorly coded apps, it didn't really help reduce the actual bugs as much.
Hopefully that made as much sense as it did in my head! :)
Just in this thread, stats about memory bugs being 70% of CVEs are presented, so why do you believe "being memory safe doesn't get you much"? I am not a Rust fanboy and would rather use a GCed language for most work. But I don't get this stubbornness towards progress and better tools. If some tool can make your life easier in many ways, use it for your own profit!
Well, I'm not opposed to it, as much as I have yet to "see the light". I think that the original non-trivial problem of deciding which thread or which subroutine gets to change what piece of memory at what time is still there. The original problem was a logic bug, but it manifests as a memory bug in C.
As an aside, I've recently been working with PLCs and I've come to respect a lot of the design elements of things like the Siemens GRAPH language which makes it easy to capture my intent for sequential control stuff.
The showcase project for Rust is a web browser (after all, Rust started out at Mozilla), which is certainly one of the larger undertakings one can start nowadays.
Well, I personally wouldn't put any web browser in the performance-applications category. They are important applications, no doubt, and increasing their memory safety is always a plus. But there is something "soft realtime", something definitive, about fitting the rendering of millions of textured polygons, plus AI, physics, data streaming, audio, and gameplay code, into a total budget of 16ms per frame for hours and hours.
On my machine at current gig it crashes every other day making me lose my containers. It doesn't overheat the machine like some others reported though.
According to the same benchmarks, that test is a few versions older. Compare it to these and they are neck and neck (in theory). Consider that most benchmarking is subjective... it depends on so many factors: is this Clear Linux or Ubuntu? Are the chipsets the same? The numbers are different here for "Speedometer", but between both tests there isn't enough information even to say the tests are comparable rather than just different. https://www.phoronix.com/scan.php?page=article&item=firefox-...
It's not. You may as well be comparing C++ to Swift. Chrome has had a lot more time and resources put into it anyway while Quantum has only seen release for two years.
That said, you're better off comparing CVE counts for Firefox vs Chrome. Then again, someone could craft a malformed legacy font file that has to be parsed by some dusty Win32 code and BOOM!
I would argue that it actually might not matter too much:
1. In a constrained context (embedded/kernel) you want to preallocate most things anyway, since even normal malloc/free isn't deterministic (see the sketch after this list). Once you preallocate objects, it doesn't matter whether they come from a long-lived GCed heap or from somewhere else.
2. Kernels like Linux rely on the fact that some small allocations never fail. Plugging in a GC and asserting that it won't fail would have the same effect (with some impact on latencies).
What might be most important is that the language supports explicit stack allocations, which avoids the need for any kind of heap when necessary. That is a given for languages like C# and Go.
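A minimal sketch of point 1, in Rust for concreteness - a fixed-capacity pool (itself stack-allocated) set up front, so the steady state never touches the heap (illustrative only):

```rust
struct Pool {
    slots: [Option<u32>; 8], // preallocated up front; capacity is fixed
}

impl Pool {
    fn new() -> Self {
        Pool { slots: [None; 8] }
    }

    // Hand out a free slot, or fail deterministically instead of allocating.
    fn acquire(&mut self, v: u32) -> Option<usize> {
        let idx = self.slots.iter().position(|s| s.is_none())?;
        self.slots[idx] = Some(v);
        Some(idx)
    }

    fn release(&mut self, idx: usize) {
        self.slots[idx] = None;
    }
}

fn main() {
    let mut pool = Pool::new();
    let slot = pool.acquire(42).expect("pool exhausted");
    pool.release(slot);
}
```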
Could you please expand on this. When garbage collection works in applications and webservers that handle large number of requests, why wouldn't it work in OS kernel?
GC can create noticeable pauses in the whole windows UX because windows scales vertically in a machine not horizontally in a DC. And per another comment I made, if your OOM handler needs to JIT it can be catastrophic.
Maybe it would? Maybe somebody could create a co-processor just for that.
Sadly, CPUs don't actually have specific extensions to support a good way of doing garbage collection, but I guess that would be pretty helpful.
The historic data related to memory access is really damning: 70% of CVEs are memory-access related, and that hasn't changed significantly over the years charted, 2004 to 2018.
What's annoying in this article is that it doesn't come right out and say that MS is considering offering a full Windows SDK with a Rust API. The closest I see the article come to mentioning this is: "Exploring the use of a memory-safe language such as Rust would provide an alternative to creating safer Microsoft apps.
But Thomas also argues that third-party developers should also be looking into memory-safe languages as well. He cites reasons such as the time and effort developers put into learning how to debug the memory-related security flaws that crop up in their C++ apps."
It would be interesting to know if a Rust native API is something being actively worked on at MS.
BTW, MS does ship Rust in a product today: the VSCode editor/IDE uses ripgrep for search, which the article fails to mention as well.
> What's annoying in this article is that it doesn't come right out and say that MS is considering offering a full Windows SDK with a Rust API. The closest I see the article come to mentioning this is: "Exploring the use of a memory-safe language such as Rust would provide an alternative to creating safer Microsoft apps.
Creating a fully Rustified Windows API interface would be a massive task. The WinAPI is vast, and parts of it are ancient; it's practically a living history of Windows. And that's before you get to the more modern COM and UWP stuff.
We (primarily Patrick Reisert, I'm trying to contribute a bit) have been working on WinRT support for Rust. It is a work in progress but is already usable now, with some missing capabilities such as UI support. https://github.com/contextfree/winrt-rust
I wonder if this applies to Windows at all (and in which parts). Windows Longhorn (using this name for the two years of effectively scrapped work on Vista) was written in C# but abandoned. The best reason I’ve heard is that the GC wasn’t suitable for an OS, especially in edge cases. For example, the C++ code I wrote at Microsoft had to gracefully handle an out-of-memory situation after any alloc (including setting the STL to not throw exceptions). An out-of-memory exception in C# can be catastrophic if the handler has to JIT in low-level OS code.
I was fascinated with Midori as a Microsoft employee but nearly 10 years later I’ve resigned that it will probably be forever a research toy and not be subject to the same scrutiny as Windows.
I think that'd be the sweet spot. Do lower level portions in something like Rust or C, but the environment can be in Go, D, etc. Although a D OS could potentially be all D. Was this not what the Inferno OS did anyway? They used C for the core, and then made Inferno for other parts.
"Besides being superior to C# in regards to better memory protections, Rust is also more popular with developers these days and might be easier to recruit for"
Citation needed. I'm quite sure the pool of C#/.NET devs right now is quite a bit larger than the pool of Rust devs, to be honest, if anything for the sheer amount of headstart that C# has got over Rust.
There might also be the aspect that Rust programmers ask fewer questions because, for example, C# might be taught in a university whereas Rust programmers mostly come to the language on their own.
Would that not skew questions toward Rust, where people rely on online resources to learn the language and cannot first ask questions ~~during office hours~~ on piazza?
I hang out every once in a while in the Rust Discord (official and community) and there's always a bunch of questions, probably close to the amount of what SO shows. I haven't dipped into it myself, but from what I hear the Rust book (available for free on their site) and documentation are really good. So maybe that, in combination with other tutorials, is a reason why SO isn't so active. It's a similar deal with Elixir: Elixir SO isn't too active because almost everything is asked on the Elixir forums. So if you go with SO as your sole popularity meter, you're not always going to come away with accurate conclusions.
If MS is serious about this, I’d like to see them make Rust a first class option for Windows kernel development. There are a lot of drivers that really should be “rewritten in Rust”.
Eventually they should get there, but I don't think that the Rust ecosystem is ready for that.
Experimenting works the best when MS doesn't yet commit to external Rust API, but writes new code in Rust (especially CVE bug fixes) and transforms gradually.
If this goes anything like their "exploring integrating Python into Excel", in 18 months we'll have heard no more about it and it will be presumed abandoned.
In both cases, it was teams/users within Microsoft and not Microsoft itself, was it not?
Dunno, I'm glad there are teams looking at things outside of Microsoft-developed technologies. NodeJS receiving kudos for being so good at serving requests led to articles about how the same could be done in ASP.NET (with some rearchitecting of the ready-bake application code), which apparently gave way to ASP.NET Core. I'd say if all of that is true, we're all better for it (ASP.NET developers, anyway).
I'll definitely agree with that. Broadly, I like the direction Microsoft has been headed in, I guess I'm just disappointed that there has been no more information (even them saying they've shelved the idea) after the huge outpouring of support.
This is interesting coming from MS! I've used a lot of VM/GC languages and IMO C# is the best of them all. All the things people complain the most about, like type erasure in Java, or lack of generics in Go, are done right in C#.
I used it for years and don't have any major complaints.
Hearing that the company with the "best" language (all subjective) is interested in Rust is exciting.
This language should be called `Visual Difficult`, since it seems to offload memory management to the human while the compiler aggressively critiques them.
It is safe though, as in foolproof.
Based on this article it looks like it's possible for MS to truly embrace Rust.
This would also be an opportunity for them to extend the language by offering, for example, a dialect that is less focused on functional programming idioms and the stranger notations, like lifetimes.
There are many people complaining about the above, and they would probably vote for the MS dialect, which could result in extinguishing current Rust and de facto replacing it with a more beginner-friendly version.
I don't think you can reasonably have a Rust dialect without lifetimes. Ownership, borrowing and lifetimes is the very core of Rust. The other Rust submission on the frontpage right now explains this: https://boats.gitlab.io/blog/post/notes-on-a-smaller-rust/
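A minimal example of why: even a trivial borrowing API leans on a lifetime relationship (elided here) so the compiler can prove the returned reference doesn't outlive its source:

```rust
// The elided signature is fn first_word<'a>(s: &'a str) -> &'a str:
// the result borrows from, and must not outlive, the input.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let text = String::from("hello world");
    let w = first_word(&text);
    println!("{w}");
    // If `text` were dropped while `w` was still in use,
    // the program would not compile.
}
```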
I thought Microsoft had already been using Rust and contributing to its ecosystem indirectly via its developers, with projects such as Actix and actix-web, although it might just be that the author happens to work for MS while building them in his spare time.
Interesting, I would have expected doubling down on C# and .NET, maybe some compile-to-native. Obviously not for low-level work, but the stats are not clear on whether it's general subsystems and applications or low-level code either.
Microsoft uses a lot of different techs. Isn’t a lot of their cross-platform and web code for Office 365 built with React?
That doesn’t mean they aren’t moving forward with .NET or C#, but the main thing they want to sell to developers in 2019 is Azure, where it used to be enterprise licenses for VS and Windows Server. And they don’t need C# for that.
Also, they seem to be wanting Rust as a replacement for C and C++, so it’s probably mostly unrelated to C#.