As a Rust novice maintaining a cross-platform library, I've found panics/unwinding to be one of the most annoying things to figure out when building for the various platforms.
I'm using no_std and panic=abort to keep the library small (around 200 KB after stripping), but it's really not a well-supported setup.
Depending on the platform, I either get an error that the _Unwind_Resume symbol is not defined (even though it supposedly isn't needed), or, if I define it myself, a duplicate symbol error on other platforms. I ended up linking against gcc_eh on Linux just to avoid the issue.
I just find it strange that, out of all the possible issues with using no_std, unwinding is the one causing the most trouble.
I have had the same issue writing Rust libraries for things like performant (Gb/s) network log analyzers. I agree this is one of the weak points of the language, in my relatively amateur Rust experience. I've also hit the occasional cargo dependency bug, but it's still better than the alternatives.
I know little about it and I'm not sure that it works. It might.
I do know that for regular std-using programs, std has its own panic strategy that it was compiled with, so even if you set your own code's panic strategy to abort, it still ends up pulling in unwind-related code from libstd. The solution to that is to also recompile libstd with the same parameters as your own code, which is what `-Z build-std` does.
I don't know if the same thing applies even to libcore, i.e. for no_std programs. FWIW, coincidentally I spent yesterday making a no_std panic=abort program for a custom freestanding target, and the only missing symbols it needed from me were memcpy and memcmp, even though the code does do panicking things like slice indexing. That program requires `-Z build-std=core` since it's for a custom freestanding target.
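In case it helps, a rough sketch of the shape that program had (names are placeholders; built on nightly with something like `cargo build -Z build-std=core --target my-target.json` and `panic = "abort"` in the profile):

    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    // Still required in a no_std binary even with panic = "abort" in the
    // profile; with abort it simply never unwinds.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // Hypothetical entry point for a freestanding target; the real symbol
    // name depends on the target and linker setup.
    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {}
    }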
> The project aims to provide a way to easily use Rust libraries in .NET. It comes with a Rust/.NET interop layer, which allows you to easily interact with .NET code from Rust
Are there some standout Rust libraries out there? Very curious about the motivation and use cases.
I'm sure .NET has all the libraries you'll need; absolutely needing one from a different language, whatever it is, is a pretty niche case (e.g. perhaps the extremely good Rust `regex` library is faster, but it would hardly be worth the extra complexity in the general case).
I think it's more common that you have some in-house library in a different language (Rust in this case) and you want to reuse it without rewriting it.
Correct, granted it's more complicated than "clients". At least Node.js (Neon), Python (PyO3), and Ruby (rb-sys/magnus) have nice supported bridge wrappers. The .NET-to-Rust interfacing in the Temporal .NET SDK required pure C FFI and P/Invoke and being careful about GC and lifetimes during interop. Can see the bridge at https://github.com/temporalio/sdk-dotnet/tree/main/src/Tempo....
With regard to panics, I can say .NET is very nice about wrapping Rust panics in `SEHException` (though of course we strive to be completely panic-free).
I honestly can't think of any drop-in libraries that would give you much that .NET (wow, that name sucks to type on mobile) doesn't already give you, though if you're struggling with a particularly slow implementation of something, the Rust version is likely faster. I think the more useful case for Rust here is rewriting the heavier or more error-prone parts of your own app logic in it.
Rust is a step sideways if anything. Yeah, you don't have manual memory management headaches in .NET, but you also don't have Rust's fairly strong compile-time guarantees about memory sharing and thread safety.
Which enables stuff like rayon where you can basically blindly replace map with parallel map and if it compiles, it _should_ be safe to run.
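A tiny sketch of what that looks like (the canonical rayon-style example; assumes the rayon crate is a dependency):

    use rayon::prelude::*;

    fn sum_of_squares(input: &[i64]) -> i64 {
        // The sequential version is input.iter().map(|&x| x * x).sum();
        // swapping iter() for par_iter() only compiles if the closure and
        // data are Send + Sync, which is what makes the swap "blind".
        input.par_iter().map(|&x| x * x).sum()
    }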
(I'm not super familiar with the .NET ecosystem, so it's quite possible there's equivalent tooling for enforced thread safety. I haven't heard much about it though, if so.)
FWIW .NET has many alternatives to popular Rust packages. The features Rayon provides come out of the box - it's your Parallel.For/Each/Async and arr.AsParallel().Select(...), etc. Their cost is going to be different, but I consistently find that the TPL provides pretty fool-proof underlying heuristics to ensure even and optimal load distribution. Rayon is likely going to be cheaper on fine-grained work items, but for coarse-grained ones there will be little to no difference.
I think the main advantages of Rust are its heavier reliance on static dispatch when e.g. writing iterator expressions (the underlying type system as of .NET 8 makes them equally possible; a guest language could choose to emit ref struct closures that reference values on the stack, but C# can never make such a change because it would be massively breaking; an honorable mention goes to .NET monomorphizing struct generics the exact same way Rust does), fearless concurrency, access to a large set of tools that already serve C/C++, confidence that LLVM will be much more robust in the face of complex code, and of course deterministic memory reclamation, which gives it its signature low memory footprint. Rust is systems-programming-first, while C# is a strong systems-programming second.
Other than that, C# has good support for features that let you write allocation-free or manual-memory-management-reliant code. It also has a direct counterpart to Rust's slice - Span<T> - which transparently interoperates with both managed and unmanaged memory.
Unfortunately there is no short answer to this. But the main gist is that improving this to take advantage of all the underlying type system and compiler features would require a new API for LINQ, improvements to generic signature inference in C# (and possibly Rust-like associated types support), and a similar new API to replace regular delegates (used by lambdas, anonymous functions, etc.) with "value delegates" dispatched via a generic argument on the methods accepting them, possibly with the 'allows ref struct' restriction - a new feature that states a T may be a ref struct and is not allowed to be boxed, as it can contain stack references or references to a scope that would be violated by moving it to the heap.
There have been many projects to improve this, like https://github.com/dubiousconst282/DistIL and community libraries that reimplement LINQ with structs and full monomorphization. But the nature of most projects written in C# means their developers usually aren't interested in, or don't need, zero-cost-like abstractions, which limits adoption. For C# itself it would mean evolving and willingly accepting a complete copy of the existing LINQ API with new semantics, which is considered - and I agree - a bad tradeoff, when the simpler cases can eventually be handled through compiler improvements, especially now that escape analysis is back on the menu.
Which is why, in order to "properly" provide a Rust-like cost model of abstractions as a first-class citizen, only a new language targeting .NET would be able to do so. Alternatively, F# has more leeway in what it compiles its inferred types to, but it's a small team, and as a language F# has different priorities as far as I know.
Yeah it was specifically the (presumed) lack of Rust's "fearless" concurrency that I was referring to... i.e. we can ram this data through a parallel map, but is it actually safe to do?
(And of course the flip side of Rust here is that you need to be able to figure out how to represent your code and data to make it happy, which provides new and interesting limitations to how you can write stuff without delving into "unsafe" territory... something something TANSTAAFL)
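To make the "is it actually safe?" point concrete, a small illustrative sketch (rayon again; the data and names are made up):

    use rayon::prelude::*;
    use std::sync::Arc;

    fn main() {
        let data = Arc::new(vec![1, 2, 3]);

        // Fine: Arc<Vec<i32>> is Send + Sync, so it may be shared with
        // rayon's worker threads.
        (0..4).into_par_iter().for_each(|_| {
            let _len = data.len();
        });

        // Rejected at compile time if uncommented: Rc is neither Send nor
        // Sync, so the closure is not allowed to cross thread boundaries.
        // let rc_data = std::rc::Rc::new(vec![1, 2, 3]);
        // (0..4).into_par_iter().for_each(|_| { let _len = rc_data.len(); });
    }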
> we can ram this data through a parallel map, but is it actually safe to do?
Most of the time it is, sort of. As in, accessing types that are not meant for concurrent access may lead to logic bugs, but the chance of this violating memory safety is almost nonexistent with the standard library and slim with community ones (for example, such a library may use a native dependency which itself is not thread-safe; it's usually clear whether this is the case, but the risk exists).
The common scenarios are well-known - use Interlocked.Add instead of +=, ConcurrentDictionary<K, V> instead of a plain one, etc. .AsParallel() itself already is able to collect the data in parallel - you just use .ToArray and call it a day.
Other than that, most APIs that are expected to be used concurrently are thread-safe - off the top of my head: HttpClient, Socket, JsonSerializerOptions, Channel<T> and its Reader/Writer can be shared by many threads (unless you specify single reader/writer on construction to reduce synchronization). Task<T> can be awaited multiple times, by multiple threads too. A lot of C# code already assumes concurrent execution, and existing concurrency primitives usually reduce the need for explicit synchronization. Worst case, someone just slaps lock (instance) { ... } on it and gets on with their life.
This is to say, Rust provides watertight guarantees in a way C# simply cannot, and when you are writing low-level code you are on your own in C#, whereas Rust has your back. But in other situations, C# is generally not known to suffer from race conditions, and async/await usually lets data flow in a linear fashion through multi-tasking code, letting the underlying implementation do the synchronization for you.
As @neosunset says, we have a lot of good options, but I've not come across anything that strictly guarantees thread safety. In practice, issues are uncommon and easy to identify and fix.
Honestly I'm more interested in code contracts than Rust, as they allow you to make a set of statements about your system which can then be validated statically. I've had very good results using them (and am forever grateful to the colleague who introduced me to them)... and I am interested in Rust, having dabbled with it, though I'm yet to use it in a paying or production project.
So sideways, as you said - with C# you can usually rely on the GC for memory management, you get fast compile times and what I feel is a more flexible model, plus top-tier tooling and a huge, wide range of libraries. Rust can be much more efficient, has much better language-level properties wrt thread safety, and has a more mature story around native code.
I'd use Rust for systems-level code, or in places where I'd otherwise think of using C++. I keep having discussions with people who want to use Rust for everything - CLI tools, web services (which are mostly just pumping data around), business logic and the like. It's getting tiring. The trade-offs are bad. C#, Go and Java are all far better suited and cover pretty much all the niches.
Does Rust have anything to spot resource leaks (i.e. the infamous IDisposable in C#)?
What I think you missed in that statement is that the code is technically still safe Rust, so in theory it wouldn't just be unsafe .NET code; it would be safe Rust code that happens to run on .NET, with a speed-up since it's less managed. It's kind of genius, if I'm not misunderstanding.
If you go through the hassle of integrating this, you would be better off manually optimizing your C# code to reduce GC allocations, which you can do within the safe subset of C#. This is not targeted at increasing performance, since it seems the entirety of the Rust application gets converted to IL, incurring large performance hits across the board.
If you do not want JITing, you can already "pre-compile" dotnet code, and you will achieve near-native performance in either case—far better performance than running Rust as IL, as the author indicated.
In my opinion, the use case for this is minuscule at best.
> If you do not want JITing, you can already "pre-compile" dotnet code, and you will achieve near-native performance in either case—far better performance than running Rust as IL, as the author indicated.
AOT is still not "there yet" in some cases though? At least that was my understanding. It cannot do every single .NET project out there yet, but it can do some.
I guess another option would be to store the native code as an assembly manifest stream, and cast that to a delegate at runtime. That would achieve much the same thing as IJW without the proprietary bits.
Just specify the headers to generate bindings for, and then use the generated interop code. Not too different from importing them as it was in the past.
The concept of "verifiable code" is effectively dead - even C#'s own syntax is sometimes desugared into what one could consider "unsafe" code in terms of feature use. Unsafe code is safe when used correctly, it does not imply the reliance on undefined or implementation-defined behavior :)
For all intents and purposes, the non-unsafe subset of Rust is a safe language, and the distinction pretty much does not exist at the IL level.
If I read TFA correctly, it looks like they're doing codegen to IL (which is analogous to Java bytecode, but for .NET), which would probably mean your Rust code is subject to GC?
It seems like you would lose the performance benefits of Rust.
But .NET can do unsafe pointer operations, and it can pin objects so the GC won't move them (they call it "handles", iirc).
I find the relationship between multithreading and panics' default behavior confusing.
In a single-threaded program, a panic is not supposed to be "caught" and aborts everything. If the main thread panics, the program stops.
But in a multi-threaded program, a panic terminates only the thread it happened in, and the program is allowed to handle that case without termination.
I'm guessing that setting `panic=abort` changes this behavior, but I'm not sure.
Your assumptions about panic=abort are correct: it will simply terminate the entire process. The single-threaded and multithreaded behaviors are technically the same. A thread can observe the panic of another; so in a single-threaded app, which thread would observe a panic in the main thread? There is no other thread to do that.
What you could do is spawn a single new thread and treat it as though it were the main thread, with the original main thread observing any panics from it.
Edit: your mental model should be to avoid thinking about panics (beyond avoiding them in the first place). Panics are supposed to be extremely rare, and typically something that would be difficult to recover from. They are not exceptions; they are not designed to be caught. The difficulty in dealing with them is a feature that prevents anti-patterns. If something panics, you have a bug.
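For what it's worth, a minimal sketch of one thread "observing" another's panic under the default unwind strategy:

    use std::thread;

    fn main() {
        let handle = thread::spawn(|| {
            panic!("worker failed");
        });

        // Under panic = "unwind" the panic is confined to the spawned thread
        // and surfaces here as an Err; under panic = "abort" the whole
        // process is torn down before join() ever returns.
        match handle.join() {
            Ok(_) => println!("worker finished"),
            Err(_) => println!("worker panicked"),
        }
    }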
From this perspective, the weird part is that multi-threaded programs can recover from panics in some of the threads (as long as `panic=unwind`). I suppose it's so practically useful that people made an exception for it; without this feature people would have to resort to inter-process communication (for better or worse).
I think what's really special about the main thread is that Rust (and I believe in some cases the OS) forces the process to exit when the main thread completes. I think the difference in panic handling is mostly down to that. I think the docs for std::thread describe this distinction the most explicitly.[1]
Fundamentally panic recovery works the same way in all threads—for both the main thread and spawned threads the standard library implements panic handling by wrapping the user code in catch_unwind().[2][3] It's more or less possible to override the standard library's behavior for the main thread by wrapping all the code in your main() function in a catch_unwind() and then implementing whatever fallback behavior you want, like waiting for other threads to exit before terminating. In some cases something like this happens automatically, for instance if the main thread spawns other threads using std::thread::scope.[4]
Catching panics in Rust is meant pretty much only to avoid unwinding across an FFI boundary, since that would be undefined behavior. Pretty much every other use of `catch_unwind` is a mistake; it's not guaranteed to catch all panics (they can also just abort), so panics are rather different from exceptions. Panics are intended to be useless for control flow, unwinding exists to help debugging, and any panic indicates a bug in the invoking program.
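A sketch of that FFI pattern (the exported function is made up; the point is catch_unwind plus an error code):

    use std::panic;

    // Hypothetical entry point exposed to C/.NET callers. A panic must not
    // unwind out of an extern "C" function, so the body is wrapped in
    // catch_unwind and a panic is converted into an error code.
    #[no_mangle]
    pub extern "C" fn do_work(x: i32) -> i32 {
        match panic::catch_unwind(|| {
            // ... real work that might panic ...
            x.checked_mul(2).expect("overflow")
        }) {
            Ok(v) => v,
            Err(_) => -1,
        }
    }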
This is really neat; I always wanted to see other languages target .NET, much like the JVM was a popular platform to target. Considering .NET is fully MIT-licensed, I am surprised we don't see more languages targeting it.
That is why CLR used to mean Common Language Runtime, why there were so many languages at the launch event, and why the .NET SDK was bundled with computer magazines back in 2001.
Then came the Windows-only focus (for a while there was Rotor), and Microsoft being Microsoft, all that interest faded away and people focused on the JVM, even though MSIL was designed for all kinds of languages (well, the dynamic ones came later with the Iron languages and the DLR), including C and C++.
Nowadays CLR almost feels like it has turned into the C# Language Runtime, given the C#-centric decisions in modern .NET design.
While "C# Language Runtime" as a joke term certainly exists, most runtime improvements benefit all languages that target it, individual changes would have different impact on different languages but that's expected. It is likely further devirt and escape analysis work will have greater impact on F# for example.
> most runtime improvements benefit all languages that target it
And most CIL ABI additions to the CLR driven by C# totally break them, because the C# ecosystem adopts them immediately (recently even breaking changes in the shared framework!), and there's no modern equivalent to the Common Language Specification.
Plus, no library writer would care if there was a CLS 2.0 because every CLR language other than C# and F# is in maintenance mode or simply abandoned.
Before Rich made Clojure for the JVM, he wrote dotLisp[1] for the CLR. Not long after Clojure was JVM-hosted, it was also CLR-hosted[2]. One of my first experiences with ML was F#[3], an ML variant that targets the CLR. These all predate the MIT-licensed .NET, but prior to that there was Mono, which was also MIT-licensed.
Microsoft didn’t manage to have Visual Basic target .NET without turning its semantics into C# with different syntax, so this will be interesting to see.
Yes it did; the complaints were mostly from VB 6 folks complaining about VB.NET 1.0. Several VB features, like the Me object, being more dynamic again, and a REPL, were eventually added back for those who stuck around in VB.NET land.
Not that it matters much nowadays, given that its development has been put into freeze mode.
They use their own in-house implementation. There's an OSS alternative from Samsung https://github.com/Samsung/netcoredbg but I haven't heard of anyone using it.
In general, the debugger isn't intentionally made unavailable; rather, the "proprietary" one is just the original Visual Studio debugger extracted into a standalone package and adapted for cross-platform use.
Other than that, CoreCLR exposes a rich debugger API, and debugger implementations just integrate with it; there are no "private bits" that would make this task insurmountable. There just hasn't been much need to do so so far, and some Linux folks (as evidenced by skimming through other .NET submissions here) tend to have an irrational and unsubstantiated hatred towards .NET, which hinders otherwise positive community efforts.
The fact that it is proprietary intentionally makes it unavailable for use outside of Visual Studio and Xamarin Studio - this actually caused debugging to be unavailable in Rider for a while a few years ago before they built their own.
This is a strange statement. That said, the debugger also comes with the base C# extension, which is free (and, debugger aside, MIT) in VS Code on all platforms. Xamarin Studio and VS for Mac are deprecated.
Given the confidence of your reply, one would assume you'd know this? Unless it's the exact problem I outlined previously, in which case please consider sharing grievances about something you do use actively instead, rather than what you think are .NET's issues.
- The MS debugger was used in Rider - thus it was perfectly functional from a technical perspective.
- It was later discovered that the license was proprietary, allowed only for MS products. VS Code is one of those. The extension may legally not be used with VS Codium or other such telemetry-neutering builds.
- The debugger was removed, and debugging of Core CLR apps was unavailable while JetBrains found an alternative (which did not take very long).
As I alluded to, the fact that this worked and was blocked only by licensing makes it purely a construct of proprietary software licensing. It was well documented at the time:
As for daily driving: I was the first person outside of JetBrains to get my hands on Rider. The fact that I don't write C# _daily_ in 2024 does not mean I have no first-hand knowledge of what was happening in 2016-2018, or indeed today.
These events predate .NET Core 3.1, which is what I consider the baseline where "the new" .NET got good enough for businesses to migrate to. Before that there was a lot of uncertainty, breaking changes and culture shock, the echo of which is still felt to this day. Nonetheless, this has little bearing on the state of affairs in the last few versions, certainly since .NET 5, which, if I understand your first reply correctly, is the criticism in question.
Would you like to put it against Go for lacking package manager, Java for being stuck on version 8 or Rust for not having stable language server? /s
Or, to phrase it differently, "this is an issue" - "it was an issue in 2018" - "no, you don't get it, it's a valid criticism because nothing can ever be improved". You see how flawed this argument is?
I'm so tired of these low effort replies here that it's just sad, in technical conversations in other contexts I'd equally defend another language when someone blatantly misconstrues the facts. I don't have a horse in this race at this point, it's simply annoying to try to converse productively when the quality of replies is this low. I should probably spend time elsewhere.
It's not. The base C# extension for VS Code is free and MIT[0]; the closed component, i.e. the debugger, is free as well. There is an open-source alternative too, and what effectively all debugger implementations do is integrate with the debugger API provided by .NET itself, which any new tool can hook into. At the same time, Rider uses its own homegrown debugger that works even better and has a nice time-travel capability.
[0]: https://github.com/dotnet/vscode-csharp (this is _not_ DevKit, which is optional; the actual language support, like the Roslyn language server, is part of the base extension, and you really don't need DevKit, which has "extra VS-style accommodations" most of which can be achieved with different extensions, if that's what you want)
In any case, I assume none of this has any use to you and the reply is posted simply as bad faith engagement, as it continues to happen whenever a piece of software that uses .NET is mentioned, because usually very few people within community have/take issue with the current (rich) tooling options.
Note how many comments here and in similar submissions completely ignore the topic at hand and instead try to criticize the points that their authors assume are an issue with .NET itself.
jdb is part of OpenJDK, and doesn't try to implement any such restrictions. Neither does gdb, for that matter.
But there is also a cultural difference. .NET libraries (including the standard library) are notoriously poor at implementing useful .ToString() overrides, because it's all designed to assume that you will use a debugger.
For comparison, Scala and Rust have cultures that emphasize printf-friendliness, and I rarely have to reach for a debugger at all. The difference it makes for my sanity is immense (as someone who wasted years on the shitshow that is .NET).
I spent way too long trying to get netcoredbg to work, and couldn't get it to do much of anything. Maybe it's less of a shitshow now? Given that your original reply wasn't "yeah nobody uses the MS debugger anyway", I somehow doubt it.
> and the reply is posted simply as bad faith engagement
I mostly get annoyed when I see bad faith arguments that old problems are irrelevant because they're old, even if the problem has never actually been addressed.
> I spent way too long trying to get netcoredbg to work, and couldn't get it to do much of anything. Maybe it's less of a shitshow now? Given that your original reply wasn't "yeah nobody uses the MS debugger anyway", I somehow doubt it.
I was able to successfully debug simple async code with it after installing the vsix, disabling the official one and restarting VS Code without changing any other settings.
So, for the trivial case it works. Submitted issues do indicate further compatibility problems like not supporting "Debug.Write*" methods (just use a logger or Console.Write* I guess?) or instability when bridging this extension to something that isn't VS Code.
> For comparison, Scala and Rust have cultures that emphasize printf-friendliness, and I rarely have to reach for a debugger at all. The difference it makes for my sanity is immense (as someone who wasted years on the shitshow that is .NET).
This is the first time I've heard someone tout print-based debugging as an advantage. The approach F# takes with printfn "%A" might be more to your taste. Otherwise, DebuggerDisplay and DebuggerTypeProxy are there for a reason, and I don't understand the case for not using a debugger. But if you really want to, there are many ways to make the output pretty. A simple '.Print()' extension method that does an indented JsonSerializer.Serialize is already a start. Records also come with a default ToString implementation.
Historical precedent and anti-.NET bias - CLI/CIL is a much more powerful and flexible bytecode target than the JVM, but it is also not as well documented, with fewer guest languages and, as a result, less community know-how. With that said, it really is a breath of fresh air to see projects like this one, alongside ongoing work on F#, the main "other" language of .NET, and a couple of small toy-like languages like Draco. There are also IKVM and ClojureCLR/ClojureCLR.Next.
In the past I have worked on a Solidity -> Rust compiler, to enable Solidity on WASM VMs. My pain point was emulating C3 inheritance in Rust, which I was actually able to implement with a few macros. In Rust -> .NET, I'm interested in how he handles the `DerefMut` trait.
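(For context, DerefMut is the hook behind mutable auto-deref; a toy impl, purely illustrative, looks roughly like this, which is part of what makes lowering it to IL interesting:)

    use std::ops::{Deref, DerefMut};

    struct Wrapper(String);

    impl Deref for Wrapper {
        type Target = String;
        fn deref(&self) -> &String {
            &self.0
        }
    }

    impl DerefMut for Wrapper {
        // Hands out a mutable borrow into the wrapper so the compiler can
        // insert the &mut * coercion at call sites automatically.
        fn deref_mut(&mut self) -> &mut String {
            &mut self.0
        }
    }

    fn main() {
        let mut w = Wrapper(String::from("hi"));
        w.push_str("!"); // auto-deref through DerefMut
        println!("{}", *w);
    }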