Hacker News

I wrote a lot of Rust, but after some years it still feels unproductive. I do a lot of Zig now and I'm about ten times more productive with it. I can just concentrate on what I want to code, and I never have to wonder what tool or library to use.

I know Rust gives memory safety and how important that is, but the ergonomics are really bad. Every time I write some Rust I feel limited. I always have to search for libraries and for how to do things. I cannot just "type the code".

Also, the type system can get out of control; it can be very hard to know which methods you can call on a struct.

I still think Rust is a great tool, and that it solves tons of problems. But I do not think it is a good general-purpose language.



> but the ergonomics are really bad. Every time I write some Rust I feel limited.

> But I do not think it is a good general purpose language.

Remember that this is not a sentiment that's shared by everyone. I use Rust for tasks that need anything more complicated than a shell script. Even my window manager is controlled from a Rust program. I say this as someone who has been programming in Python for nearly two decades now. At this point, I'm about as fast in Rust as I am in Python.


I tried to get into rust for many years, I'm now in a C/CPP job (after Java/Python/Ruby and other gigs). What I've come to understand is that Rust's lifetime model is very difficult to work with whenever you have a cyclic reference. In C/CPP the same holds, but you deal with it through clever coding - or ignoring the problem and cleaning up memory later. Java, and other GC'd languages just work for these structures.

While the Rust devs believe such cyclic references are rare - I think this speaks mostly to the problem domain they are focused on. Relational models are everywhere in apps, they are often common in complex systems software like databases, and they are fairly rare in firmware/drivers/systems code.

There are a few patterns for dealing with cyclic references, but they all end up requiring either unsafe or a main "owner" object which you clean up occasionally (effectively arena allocation). Having now worked in C/CPP - the idea of having unsafe blocks sprinkled around the code doesn't bother me, and many C/CPP components have some form of arena allocation built in. I just wish Rust learning resources were more upfront about this.


> Relational models are everywhere in apps, they are often common in complex systems software like databases, and they are fairly rare in firmware/drivers/systems code.

It's not like you can't write relational models in safe Rust. The only forbidden thing is a reference pointing to arbitrary memory, which is typically worked around via indices, and that approach is often more performant anyway. In my opinion it is much rarer to find applications that need a truly arbitrary pointer that can't be hidden behind an abstraction.


> I just wish Rust learning resources would be more upfront about this.

While beginner resources don't dwell too much upon cyclic references, they don't consider unsafe blocks unusual either. All the material I've seen says that there are certain domains where Rust's compile-time safety model simply won't work. What Rust allows you to do instead is to limit the scope of unsafe blocks. However, the beginner material often won't give you much detail on how to analyze and decide on these compromises.

Anyway, compile-time safety checks (using the borrow checker) and manual safety checks (using unsafe) aren't the only ways to deal with safety. Cyclic references can be handled with runtime safety checks too - like Rc and Weak.


> Cyclic references can be handled with runtime safety checks too - like Rc and Weak.

Indeed. Starting out with code sprinkled with Rc, Weak, RefCell, etc is perfectly fine and performance will probably not be worse than in any other safe languages. And if you do this, Rust is pretty close to those languages in ease of use for what are otherwise complex topics in Rust.

A good reference for different approaches is Learn Rust With Entirely Too Many Linked Lists https://rust-unofficial.github.io/too-many-lists/


Also, take a look at GhostCell (https://plv.mpi-sws.org/rustbelt/ghostcell/ and https://www.youtube.com/watch?v=jIbubw86p0M). If anyone's used this in a project or production environment, I'd love to hear your firsthand experiences and insights.


Except in those other languages the compiler types .clone() for me.


Sometimes the compiler types clone for you even when you don’t actually want it to.


It is easier to tell it "don't do it this time" than to tell it every time.

It is no accident that while Val/Hylo, Chapel and Swift have taken inspiration from Rust, they have decided not to inflict affine types directly on language users, but rather to let the compiler do part of the work itself.


You still need a main owner in those patterns, that owner must be part of a DAG of owners - you cannot have cyclic ownership.


Oh I just put up a blog post about this on Monday :)

https://jacko.io/object_soup.html

Agreed that I wish more beginner Rust books had a section about this. The pattern is quite simple, but it's hard for beginners who get stuck to realize that they need it.


I would have needed this when I started learning Rust! All my early programs were object soups.


> While the Rust devs believe such cyclic references are rare -

They are.

I have not had to use cyclic references ever, except once doing experiments with fully connected graphs, that was very unusual

If you're doing a lot of cyclic references, Rust is not the right choice. Horses for courses.

But are you sure you're using the best algorithm?


Maybe for you it's unusual - in my previous work all the apps contained graphs, and I just joined a company where almost all the apps also contain graphs


I don't write Rust, but I never understood why graphs meant you need circular references.

Doesn't it just come down to the question of who owns the node?

If it's a tree, and parents are never removed before children, just make the child owned by the parent and keep a weak reference to the parent.

If it's a general graph, and vertices can exist or not exist regardless of edges, keep a list of them independent of the edges, and keep weak references in the edges.

If it's a graph where one or a few roots exist, and nodes exist as long as there's a path from a root node to them, that sounds like a classic use case for Rc<>.

Is there a common use case I'm missing?


Things get tricky when you have a valid triangular relationship amongst equal objects. This comes up far more often than you’d expect.


Can you give an example?


What's a frequently encountered case for such cyclic loops? Without details I'm drawn to trying to break the cycle, either by promoting the shared state to a container object for the set, or by breaking it out into its own object that multiple things can point at.


I think a game is a good example, or anything that's kind of like a game in that it's modeling a world that's changing over time. Objects come and go, and they "target" each other or "require" each other or whatever sort of relationships the program wants to express. Those relationships end up forming a graph that might contain cycles.

I just put up a blog post about this actually :) https://jacko.io/object_soup.html

> promoting the shared state to a container object for the set

Yeah I think that's a good way to describe these "ECS-ish" patterns.


a parent field.

a doubly linked list


Your parent said "frequently encountered" and while it's probably true that doubly linked lists may be "frequently encountered" in some people's code they're usually a bad idea and "don't use a list here" is often the right fix, not "conjure a way to make that safe in Rust".

It's very noticeable how often people who "need" a linked list actually only wanted a queue (thus Rust's VecDeque) or even a growable array (ie Vec).

Aria has a long list of excuses people offer for their linked lists, as well as her discussion of the time before Rust 1.0 when she sent lots of Rust's standard library collections to that farm up-state but wasn't able to send LinkedList.

https://rust-unofficial.github.io/too-many-lists/


Doubly-linked list is something you have almost no reason to ever write.

Parent field is something where you have a clear hierarchy (it's not really “cyclic”, so it's the perfect use-case for weak references).

When coming from a managed-memory language, this obviously requires some conceptual effort to understand why this is a problem at all and how to deal with it, but when compared to C or C++, the situation is much better in Rust.


Also, a parent field is something you should be able to infer, e.g., by keeping a stack of parents as you traverse down a search tree following the child pointers, instead of storing parent pointers in the tree nodes.


That's assuming you traverse the tree down from the root each time. Often you do, but there are cases where you don't -- e.g., if your goal is to determine the lowest common ancestor of two given nodes.


ASTs


An abstract syntax tree can't have cycles by definition.


Technically true, but sometimes you want parent pointers. You then have a more general graph in the underlying representation, but it still represents a tree structure.


The same shows up in the Postgres Query* structs for SQL. Copying memory between parser, planner, and execution would be too expensive for large queries, so instead you have an arena-allocated representation.


ASTs are one of the "nicely behaved" data structures. They are like an archetype of the abstract data types pervasive in functional programming languages.


An abstract syntax tree or a doubly-linked list both qualify, but they're also a lower-level implementation detail than I'd expect to frequently interact with in a reference-safety-focused language.

I've still been meaning to write something in / learn Rust's ways of thinking; is there not an intended replacement for these data structures? Or do they expect it all to go under Unsafe?


> is there not an intended replacement for these data structures? Or do they expect it all to go under Unsafe?

For linked-lists, there's one in std and the majority of people should never have to write their own as it's error prone and requires unsafe.

For the graph use case you can use ECS, arenas, ref counting or unsafe, but you're probably better off using/developing a dedicated crate that optimizes it and abstracts it away behind an easy-to-use (and safe) interface.


The one in std uses unsafe. My main concern with learning rust is that you can spend ages trying to learn “the right way” of doing things in rust, when the right way really is to use unsafe.


No, the right way is to use unsafe primitives that have been tested, audited or even formally proven (like the ones in std).

Sometimes such a primitive doesn't exist and you should use unsafe yourself, but then you're the one supposed to make sure that your code is in fact sound. If you keep unsafe for small portions of the code you can reason about and extensively test so Miri gives you a good level of confidence, then it's fine to use unsafe. But it's the more expensive option, not the default.


You usually solve this by using a traversal helper that keeps the stack and next/prev for you without storing them inside the AST explicitly.


To be a bit pedantic, I assume the language you are referring to as CPP is actually C++? CPP to me means the C (and C++) preprocessor.


Surprisingly, I am faster in Rust than any other language. Something about my prior experiences just made it click just the right way.

I don't want to program in anything else anymore. I don't want to deal with obscure C++ error messages, C footguns and lack of ergonomics, I don't want to deal with abstraction hell of Java, or the poor person's typing that python has.

I have been programming in Python for the past 6 years, I know all sorts of obscure details, and with rust, I just don't need to think about all of those issues.


> Surprisingly, I am faster in Rust than any other language.

Not really surprising, given that you have C and C++ background. That's what I was trying to highlight. Rust isn't a confusing or unproductive language as many project it to be - if you have the conceptual understanding of what happens on the hardware. Especially about stack frames and RAII. If you know those, the borrow checker complaints will immediately make sense and you will know how to resolve them.

Add rust-analyzer (Rust's language server) to it, you get real-time type annotations and a way to match types correctly in the first attempt. In my experience Rust also helps structure your program correctly and saves a ton of time in debugging. All in all, Rust is a fast way to write correct programs.


> Rust isn't a confusing or unproductive language as many project it to be - if you have the conceptual understanding of what happens on the hardware. Especially about stack frames and RAII. If you know those, the borrow checker complaints will immediately make sense and you will know how to resolve them.

I have a reasonable understanding of "what happens on the hardware" (been writing kernel code for years), know modern C++ (with RAII and stuff), and Rust is still a confusing and unproductive language for me.


I get the feeling that learning rust can be a "bang your head against it until you get an 'aha' moment" sort of affair, much like learning git.

Some people pick up rust quickly because it clicks into their brain early, some take longer or end up bouncing off.


I had university courses on computer architecture and assembly, even before I took up Python as a hobby. I did have a little C experience before that. My entire perspective on Rust type system from day 1 (back in 2013, before Rust 1.0) was based on the hardware (primarily stack frames) and problems I had with assembly and C. There was never a point where the borrow checker didn't make sense. This is why I insist that Rust isn't hard to understand if you learn the hardware on which it runs.

Back then, people were debating the design decisions that led to the borrow checker, in public for everyone to see (on Reddit and IRC). They were trying to avoid memory safety issues in Firefox and Servo. They were even a bit surprised to discover that the borrow checker solved many concurrency bugs as well.


I took a different route from Goku (other commenter): I used to write a lot of C and C++ in university, did everything there up until 2018 or so, then got a bit into Rust and things just clicked. My understanding of memory was just not good enough before, and my C skills skyrocketed as a consequence of learning proper memory management.

Then I got into Haskell, and functional programming, that made thinking about traits, immutability, and all functional aspects a breeze.

Then finally, I got into rust again, use it at work and personal projects. Somehow I managed to rewrite a project that took me 4 months in Python in about 4 days. It was faster, more robust, cleaner, and I could sleep at night.


I'd add that if you have some understanding of how memory ownership should be such that you don't end up with memory leaks, you are fine. The borrow checker just verifies that your mental model is correct, and removes some of the cognitive load from you.


> if you have some understanding of how memory ownership

What reading materials will help me level up my understanding of this?


My background is PHP, Python, and Go, and I have the same experience as GP


> At this point, I'm about as fast in Rust as I am in Python.

This is factually impossible.

For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that is not required when developing in GC'ed languages.

This is not even considering more local complexities, like data structures with cyclical references.


> This is factually impossible.

How do you outright deny something as subjective as my personal experience? Besides, I'm not the only one in this discussion who has expressed the same opinion.

> For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that is not required when developing in GC'ed languages.

While GC'ed languages allow you to skip a proper initial design stage, it's a stretch to claim that it's not required at all. In my experience using Python, while the initial stages are smooth, such design oversights come back and bite at a later stage - leading to a lot of debugging and refactoring. This is one aspect where Rust saves you time.

> This is not even considering more local complexities, like data structures with cyclical references.

I'm not going to dwell on cyclical references, since there's another thread that addresses it. They point out a way to make it as easy in Rust as it is in GC'ed languages.

Meanwhile, the upfront architecture and data structure design isn't as complicated as you make it out to be. Rust is mostly transparent about those - even compared to Python. How well do you understand how Python manages lists, dictionaries, or objects in general? I often find myself thinking about that a lot when programming in Python. While you need to think upfront about these things in Rust, there's actually less cognitive overhead as to what is happening behind the scenes.


This is possible if you are really slow in Python.


Or maybe they mean "fast to 1.0" rather than "fast to 0.1"?

They didn't specify.


That is also not shared by everyone. If you have written enough Rust to have internalized designing for the borrow checker, you don't have to spend much time in a design phase.

The only time I find I have to "fight the compiler" is when I write concurrent code, and you can sidestep a lot of issues by starting with immutable data and message passing through channels as a primitive. It's a style you have to get used to, but once you build up a mental library of patterns you can reasonably be as fast in Rust as you are in Python.


> For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that is not required when developing in GC'ed languages.

It's nearly the opposite. For larger programs in Python, you need an upfront design stage because the lack of static typing will allow you to organically accrete classes whose job overlap but interfaces differ.

Meanwhile, Rust will smack you over the head until your interfaces (traits) are well-organized, before the program grows enough for this to become a problem (or until you give up and start over).

How do I know? I'm stuck with some larger Python programs that became a mess of similar-but-not-interchangeable classes. RiiR, if I ever have the time.


> For larger programs in Python, you need an upfront design stage because the lack of static typing will allow you to organically accrete classes whose job overlap but interfaces differ.

You can also install pre-commit and mypy, and have static typing.


That's the entire point we're making. Rust's type system forces you to deal with the problem early on and saves time towards the end. It's not like that's impossible with Python with addons like mypy. But Rust's type system goes beyond just data types - lifetimes are also a part of the type system. I don't know how you can tack that on to Python.


> Rust's type system forces you to deal with the problem early on and saves time towards the end. It's not like that's impossible with Python with addons like mypy.

Definitely not - mypy's pretty good these days, and lots of people use it.

> But Rust's type system goes beyond just data types - lifetimes are also a part of the type system. I don't know how you can tack that on to Python.

Well, Python's objects are generally garbage collected rather than explicitly destroyed, so I don't think it'd make sense to have lifetimes? They don't seem like a correctness thing in the same way that types do.


Lifetime analysis matters a lot for way more than just garbage collection.

File handles, iterators, mutex guards, database transaction handles, session types, scoped threads, anything where ordering or mutual exclusivity matters.


I don't know about all of those, but Python's context managers and built in constructs handle most of those, I think?


Only in the most basic cases. If your handle has to be passed into another function or outlive the current scope of your function, the guardrails end.


Lifetimes and borrowing are very much a correctness thing and aren't just for tracking when memory is freed. While you won't have use-after-free issues in a GCed language, you will still have all the other problems of concurrent modification (data races) that they prevent. This is true even in single-threaded code, with problems like iterator invalidation.


Mutability is a big one for correctness. In Python, any function you pass any (non-primitive) object to might be mutated right out from under you and you have no idea it's happening. In Rust, you have to explicitly provide mutable references, or you need to hand over ownership, in which case you don't care if the callee mutates its argument, because you no longer have access to it.


RiiR: Rewrite it in Rust


> This is factually impossible.

Factually, it's not. It may be true that in a very, very idealized thought experiment - where someone has perfect knowledge, never makes mistakes, has no preferences, can type arbitrarily fast, etc. - Python needs fewer keystrokes or fewer keywords, and thus is faster. Who knows. But in reality none of those assumptions hold, and literally everything else plays a much bigger role in development speed anyway.


You know what slows me down in Python? The fact that you need to actually go down a code path to make sure you’ve spelled everything right.

But nothing that Rust does slows me down because I’m used to it.


Of course it's possible. You just need to write Python very slowly :)


Everyone has their breaking point. I start to write Python very slowly around after 10k lines or so. Can't remember where I put stuff...


> For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that is not required when developing in GC'ed languages.

Every language requires this (if you want robust code), most just let you skip it upfront ... but you pay dearly for doing so later.


I disagree; I have a similar observation. With modern editors and language servers giving immediate feedback, writing strongly typed languages doesn't differ much from writing Python.


I would flip this around.

If you are only writing small programs where perf doesn't matter, crashing doesn't matter, design doesn't matter - Python will be faster than Rust, because the only thing that matters is how you can write the code from 0 to "done". You can jump right in, the design stage is superfluous & unnecessary. There is nothing to optimize, the default is good enough.

If you are doing slightly more than that, Python and Rust become about even, and the more you need those things, the better Rust becomes.


> This is factually impossible.

No, it isn't. Both languages comprise much more than their memory management. Even if your premise is true, the conclusion does not follow.


I mean, it’s easily the same for me. I am way more productive in Rust because I know it very well, and with Python I’m debugging all kind of gotchas.


I have to second the OP: ownership isn’t that hard. I just get used to structuring a program in certain ways. Having written a lot of C++ helps because the things Rust won’t let you do are often unsafe or a source of leaks and bugs in C++.

Having an editor with rust-analyzer running is massively helpful too since ownership issues get highlighted very quickly. I can’t imagine dealing with any language larger than C without a smart editor. It can be done but why?

I still find async annoying though.

My biggest source of friction with Rust (other than async) is figuring out how to write code that is both very high performance and modular. I end up using a lot of generics and it gets verbose.


I think this is a very valuable comment, and the replies don't do it justice.

I strongly agree, from my own and my peers' experience, with the sentiment that the latency from zero to running code is just higher in Rust than in Python or Go. Obviously there are smart people around who can compensate a lot with experience.


Honestly I found myself coding very much the same way in Rust as I did in Python and Go, which were my go-to hobby languages before. But instead of "this lock guards these fields" comments, the type system handles it. Ownership as a concept is something you need to follow in any language, otherwise you get problems like iterator invalidation, so it really shouldn't require an up-front architectural planning phase. Even for cyclic graphs, the biggest choice is whether you allow yourself to use a bit of unsafe for ergonomics or not.

Having a robust type system actually makes refactors a lot easier, so I have less up-front planning with Rust. My personal projects tend to creep up in scope over time, especially since I'm almost always doing something new in a domain I've not worked in. Whenever I've decided to change or redo a core design decision in Python or Go, it has always been a massive pain and usually "ends" with me finding edge cases in runtime crashes days or weeks later. When I've changed my mind in Rust it has, generally, ended once I get it building, and a few times I've had simple crashes from not dropping RefCell Refs.


Couldn't one use Arc and similar boxed types to avoid thinking about memory until later?


Why not just use something like Nim at that point, and straight up ditch 90% of the complexity?


I was answering to this other user:

> This is factually impossible.

> For anything larger than (very) small programs, Rust requires an upfront design stage, due to ownership, that is not required when developing in GC'ed languages.

It seems that is not factually impossible.

Now, answering your question: It could be useful to use boxed types and later optimize it, so you get the benefits of rust (memory safety, zero cost abstractions) later, without getting the problems upfront when prototyping.


I've finally set my mind to properly learning a new language after Python, Haskell, and typescript. I'm looking into Rust especially because of how I've heard it interoperates with Python (and also because it's maybe being used in the Linux kernel? Is that correct?).


Rust is an excellent follow up to those languages. It's got many influences from Haskell, but is designed to solve for a very different task that's not yet in your repertoire so you'll learn a ton.

And yes the Python interop is excellent.


I'm sold, thank you. Yes, it felt like a great "missing quadrant" to my generalist skillset.


The Linux kernel has support for Rust userland drivers, and Rust interops with Python via PyO3.


Not sure what you mean by "userland" drivers here, but support for kernel modules written in rust is actively being developed. It's already being used for kernel drivers like the Asahi Linux GPU driver for M1 Macs.


I am referring to userspace / userland drivers.

https://www.kernel.org/doc/html/v4.18/driver-api/uio-howto.h...


But you can write userspace drivers in any language, as long as that language has basic file I/O and mmap() support. There's nothing special about using Rust for userspace drivers.


Isn't this false? Don't certain languages basically need, say, a libc that isn't necessarily available in kernel space?


What window manager is that, may I ask out of curiosity?


Sway. And river, sooner or later. A single Rust program is used for setting up services (using s6-rc on Gentoo), shutdown, reboot, idle inhibit, etc.


It's interesting that you bring up Python. I find Rust unpleasant to program in -- and I also find Python unpleasant to program in.

Now I'm wondering about demographics. Are people who love Python more likely to love Rust as well?


I agree. I feel far more productive in C and C++ than in Rust at this point.

Rust feels like it totally misses the sweet spot for me. It's way too pedantic about low-level stuff for writing higher-level applications, but way too complicated for embedded work or writing an OS. In the former case I would rather take C++, Java, Haskell, OCaml or even Go, and maybe sprinkle in some C; in the latter case, C in macroassembly mode is far more suitable.

I still have a feeling that Graydon Hoare's original vision (i.e. OCaml/SML with linear types, GC, stack allocations, green threads and CPS) would have been a much better language.


The problem with C and C++ is that it's 2023 and the CVE list is still loaded with basic memory errors. These come from everywhere too: small companies and open source all the way up to Apple, Microsoft, and Google.

We as a profession have proven that we can’t write unsafe code at scale and avoid these problems. You might be able to in hand whittled code you write but what happens when other people work on it, it gets refactored, someone pulls in a merge without looking too closely, etc., or even maybe you come back two years later to fix something and have forgotten the details.

Having the compiler detect almost all memory errors is necessary. Either that or the language has to avoid the problem entirely. Rust is in the former class unless you use unsafe, and the fact that it's called "unsafe" makes it trivial to search for. You can automatically flag commits containing unsafe for extra review, or even prohibit it.


I think nobody is disputing the need for static memory safety, just that the poor Rust ergonomics aren't a good tradeoff, especially for scenarios where C is useful. We need many more Rust alternatives that explore different directions; Rust is already too big and "established" for any radical changes in direction.


On that note, by packaging Modula-2 into a C-friendly syntax, Zig is, I agree, a relatively interesting option; however, not having a story for binary library distribution is a hindrance in many C-dominated industries.


> I think nobody is arguing the need for static memory safety, just that the poor Rust ergonomics aren't a good tradeoff

Unless the "poor ergonomics" and lack of shortcuts are explicitly what provides the static memory safety.


IMHO Rust's ergonomics problems aren't caused by the borrow checker itself, but have the same cause as similar problems in C++: mainly a "design by committee" approach to language design, and implementing features in the stdlib that should be language syntax sugar, which results in the stdlib being too entangled with the language and in noisy, hard-to-read code.

Apart from the static memory safety USP, Rust is repeating too many problems of C++ for my taste, and at a much faster pace.


I agree with this. The borrow checker itself isn't the problem. That's necessary to make you write correct and safe code anyway.

The problem is that there is too much syntax, too many symbols, and too many keywords. I just keep forgetting how to use impl and lifetimes and single quotes and whatnot. It makes it really tough to use as an occasional language. And if I can't do that, then how can I get confident enough to use it in my job?


> The problem is that there is too much syntax, too many symbols, and too many keywords. I just keep forgetting how to use impl and lifetimes and single quotes and whatnot.

This is exactly how I feel about Rust.

There are some good ideas in there, hiding behind a horrible language design. I am waiting for someone to provide a more developer friendly alternative with the simplicity of C or Go.


Maybe Lobster could be an option.

https://strlen.com/lobster/

https://aardappel.github.io/lobster/memory_management.html

It uses compile-time reference counting / lifetime analysis / borrow checking. Which is mostly inlined to the point that there is none of the sort in the compiled output and objects can even live on the stack. It basically looks like Python but is nothing like it underneath, and of course with no GIL. You can run it on JIT or compile it to C++.

There's also Koka, which runs the Perceus algorithm at compile time and looks like a much cleaner language than Rust. It also tracks the side effects of every function in its type, distinguishing pure from effectful computations.

https://github.com/koka-lang/koka


In that regard, strangely enough, I find that with constexpr code, templates, and concepts it is easier to achieve most compile-time programming while staying in C++ than it is to deal with Rust macros.


When the pendulum swings it often swings far before normalizing somewhere in the middle. I agree that Rust isn't the answer.

FWIW there is interest in adding bounds checking to C [1]. That discussion includes feedback from at least one member of the C standards committee.

[1] https://discourse.llvm.org/t/rfc-enforcing-bounds-safety-in-...


It is; however, those same companies aren't dialing the safety knob all the way up either, hence why Microsoft just recently published a set of secure coding guidelines for C and C++.

https://devblogs.microsoft.com/cppblog/build-reliable-and-se...


Does it have tooling to enforce those guidelines? If not, how is it better than someone saying "write correct code" and calling it a guideline?


Following a guideline that checks correctness for you is easier than following "write correct code".


Partially, in Visual Studio and GitHub.


Quite annoying to read in my native language, German. Do they use automatic translations? It's full of grammar errors and mistranslations.


These are two issues which are theoretically orthogonal, but in practice not so much. They are known as soundness and completeness. A good talk on the topic is [1].

Rust will reject a lot of sound programs, and that's a huge hit in practice. You run into the incompleteness wall with multiple mutable borrows, and closures stored in structures are a huge source of pain as well. The usual answer from the community is "use handles instead of pointers", but this gives you, surprise surprise, manual resource management akin to that in C, and the compiler won't help much.
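A small sketch of the incompleteness point (example values are mine, not from the comment): the naive version of this program is sound, since the two mutable borrows touch disjoint elements, yet the borrow checker rejects it; the accepted workaround, `split_at_mut`, is itself implemented with `unsafe` under the hood.

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4];

    // Rejected by the borrow checker even though the indices are disjoint:
    // let (a, b) = (&mut v[0], &mut v[2]);

    // The blessed workaround: split_at_mut hands out two non-overlapping
    // mutable slices (its implementation uses unsafe internally).
    let (left, right) = v.split_at_mut(2);
    left[0] += 10;   // touches v[0]
    right[0] += 100; // touches v[2]

    assert_eq!(v, vec![11, 2, 103, 4]);
}
```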

Of course this is all subjective, but for me the ergonomics of Rust are far too poor. It's a good step in the right direction, along with ATS, but I really hope we can do better than this.

[1] https://www.youtube.com/watch?v=iSmkqocn0oQ


Can you give us examples, please? I've been using Rust since version 1.0, and I like it a lot.


Cyclic data structures are impossible to represent in safe Rust because there is no clear "owner" in a cyclic data structure.


Cyclic structures can be flattened into a vector or an arena, or unsafe Rust can be used.
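As a sketch of the arena/index approach mentioned here (names and values are illustrative, not from the comment): nodes live in a single `Vec`, and "pointers" become indices into it, so the cycle is plain data with exactly one owner.

```rust
// A node "points" at another node via an index into the arena
// instead of a reference, so there is no ownership cycle.
struct Node {
    value: i32,
    next: Option<usize>,
}

fn main() {
    let mut arena: Vec<Node> = Vec::new();
    arena.push(Node { value: 1, next: None });    // index 0
    arena.push(Node { value: 2, next: Some(0) }); // index 1 -> 0
    arena[0].next = Some(1);                      // close the cycle: 0 -> 1 -> 0

    // Walk the cycle a few steps; the borrow checker is perfectly happy.
    let mut idx = 0;
    let mut sum = 0;
    for _ in 0..4 {
        sum += arena[idx].value;
        idx = arena[idx].next.unwrap();
    }
    assert_eq!(sum, 6); // 1 + 2 + 1 + 2
}
```

The tradeoff, as the grandparent comment notes, is that index management is a form of manual resource management: a stale index is the moral equivalent of a dangling pointer, just a memory-safe one.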


What are some examples of sound programs you want to write in Rust but are unable to write?


I maintain a very large C and C++ application and very rarely have any memory issues. Tools like Valgrind and Helgrind are excellent for finding and fixing problems, so switching to Rust has a very bad ROI.


I gave Rust a few chances, and always came out hating its complexity. I needed a systems programming language to develop a hobby OS[1], and Nim hit the sweet spot of being very ergonomic, optional GC, and great interop with C. I can drop down to assembly any time I want, or write a piece of C code to do something exotic, but the rest of the system is pure Nim. It's also quite fast.

[1] https://github.com/khaledh/axiom


Opposite experience for me. Writing Rust on embedded systems greatly improved my confidence and speed. When using C, a small mistake often leads to undefined behaviour and headaches. With Rust there's none of that; it's been a game changer for me.


I was using Rust on embedded; I moved to Zig. Very happy with it, as you can pass allocators around to use fixed buffers.


When you are developing hardware on an FPGA, a lot of hardware bugs look like they have locked up the CPU and strangely enough, a lot of undefined behavior looks exactly like a hardware lockup...


I am curious what kind of code you are writing? Is it very low level or very high?

>I know rust gives memory safety and how important that is, but the ergonomic is really bad. Every time I write some rust I feel limited. I always have to search libraries and how to do things. I cannot just "type the code".

You don't have to search libraries and figure out how to do things in Zig?


It's hard to describe, but in some languages, you spend a lot less time looking at reference docs and more time just naturally writing the solution. Lisp is a great example of that, if you get through the learning curve.


I suspect Zig libraries feel easier because they're doing easier things. I bet if you try to draw a triangle with the Vulkan API in Zig, you'll find yourself looking at the reference docs a lot.


Most of the time I can use my "general computer science baggage" to write the solution. At present, I do embedded and web business logic (wasm) where the UI is rendered by preact. For those two projects zig is working very well.


I agree with this general feeling, and it is hard to articulate.

Rust forces you to figure out ahead of time where each bit or byte is going to go and on which thread and using which mutation scheme. I’m happy to play the game, but it feels tedious for anything short of a parser or a microcontroller.

It messes with my process because I like to get something working before I determine the best API structure for it

I can get 90% of the performance with Swift and it flows much more easily, even though Rust’s type system is more powerful.


I’ve written plenty of Rust code in my life (easily more than 100 kLOC), and I’ve really never worried about putting which bit where.

You can just clone and be on your merry way; you don’t need to worry about perf-related things if you don’t want to.
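A minimal sketch of the "just clone" escape hatch (the function and names are mine, purely for illustration): paying for a copy sidesteps the ownership question entirely.

```rust
// Taking ownership of the String would normally consume the caller's value.
fn shout(s: String) -> String {
    s.to_uppercase()
}

fn main() {
    let name = String::from("rust");
    // Cloning hands the function its own copy, so `name` stays usable
    // afterwards: no lifetime annotations, no borrow gymnastics.
    let loud = shout(name.clone());
    assert_eq!(loud, "RUST");
    assert_eq!(name, "rust"); // still valid because we cloned
}
```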


Those two sentences feel in contradiction with one another. You don't need to worry about where the bits go, you just need to know to call a method to move the bits?

Swift makes every type implicitly copyable on the stack, including object pointers (through ARC), so you don’t have to clone. You can even pass functions as variables without any hoops.

I love lots of things about Rust, though, and will continue to use it for lots of things. Cross-platform things, for one!


I rather use compiled managed languages like Swift, D and C# instead, they provide enough low level coding knobs for C and C++ style coding, while being high level productive.

Would add Go to the list, but only when I really have to.

Nim and Crystal could be alternatives, but don't seem to have big enough communities, at least for what I do.

However I do agree with the conclusion, Rust is a great language for scenarios where no form of automatic memory management is allowed, kernels, specific kinds of drivers, GPGPU programming, as general purpose, there are more productive alternatives, equally safe.


C# is underrated by the HN crowd, I find. I quite like how mid sized firms (100-1000 employees) use it.


I used to be a .NET dev and don't agree. A couple of reasons:

1) Modern Java is almost as good as C#, with some things I can't give up in Java (static imports => succinct code, Groovy Spock => succinct tests)

2) Kotlin is better than C#

3) The JVM has a much, much bigger ecosystem (almost all the Apache projects are JVM oriented), and the default web framework (Spring Boot) needs much less code to type and is much more productive

4) The JVM has a wider variety of langs

For those reasons, IMHO, if you are a small-to-mid company (or startup) it's wiser to choose the JVM.


Kind of; maybe you need to do some low-level coding and don't want to wait for Valhalla, or make use of JNI and native libraries. GraalVM/OpenJ9 still aren't as integrated as .NET Native or Native AOT, e.g. for writing native shared libraries.

Also, Java lost the attention of the gaming industry; besides Android casual games and Minecraft, there is hardly anyone else paying attention to it.


To be fair, C# is ok for game dev, but not great. C# libraries are lagging heavily behind Java.

Want the fastest possible library? It's in C++ and not portable to Win/Mac. So good luck wrapping and porting it.

Want a decent implementation of an algo? It usually exists for Java but not for C#. Hope you like writing it from scratch.

Want a C# implementation of an algo that doesn't allocate to the Nth degree? Again, write it yourself.

But ok, maybe Unity has a good ecosystem... And they fucked it over a barrel.


It is far better received in the AAA game developer community than Java, and that is what matters.

I also like Java, but c'mon, no decent algorithms implemented in C#? That is approaching zealotry.


I didn't say there are no decent algorithms in C#, but for each performance-sensitive algorithm/data structure there is at least a C and a Java implementation (in my case, Roaring Bitmaps).

In C# the solution is half baked or archived or abuses allocation.

I think Unity has way more to do with C# adoption in game dev than innate C# qualities.


This is a classic case of goalpost moving. The reason so many algorithms are written in Java, especially on the academic side, is that most comp-sci curriculums often straight up don't allow anything except Java, Python, or sometimes C++. Having C# as an alternative there is a luxury. There are also more people using Java in general. However, that does not make it a better language for solving these tasks, nor is it any more suitable for writing high-performance implementations of advanced vectorized algorithms that push the hardware, which is what you actually want when you start caring about such scenarios, and which is where C# excels.


I'm not moving the goalpost. I explained my examples in another reply. Want to write an engine mostly from scratch in C# and you need libraries that are low on allocation and for niche data/algorithms that games need? You're going to have a bad time(TM).

Sure, you could use a YAML parser, but it allocates everyone and their mother. Can you find a Fluent localization library in C#? Sure, but it's outdated and archived. OK, but a basic RoaringBitmap implementation? The repo is archived and not fully complete.

Why C# is used in game dev is incidental. It has more to do with Unity and XNA/FNA than any concrete quality of the language, modulo value types (but then again, most C# libraries don't focus on avoiding allocation and are just as happy as Java to construct a complicated hierarchy of classes).


I think Java is only good for long-running servers.

Java doesn’t support C interop. For many desktop and embedded projects this is a showstopper, here’s an example https://github.com/Const-me/Vrmac/tree/master/VrmacVideo That C# code directly consumes V4L2 and ASIO Linux kernel APIs, and calls unmanaged user-mode DLLs like libfdk-aac.so and liba52-0.7.4.so.

Native stack and value types in C# reduce load on GC, and the number of complicated tricks required from JIT compiler. This in turn helps with startup performance. This is critical for command-line apps, and very desirable for desktop apps.

Another thing missing in Java is intrinsics support, both scalar like popcnt, bitscan, BMI, etc., and SIMD like SSE and AVX.


Projects Panama & Valhalla seems to solve all your complaints:

> Java doesn’t support C interop. For many desktop and embedded projects this is a showstopper, here’s an example https://github.com/Const-me/Vrmac/tree/master/VrmacVideo That C# code directly consumes V4L2 and ASIO Linux kernel APIs, and calls unmanaged user-mode DLLs like libfdk-aac.so and liba52-0.7.4.so.

Part of Panama: check out the "Foreign Function & Memory API" [0]. The official docs [1] say it is a preview in 21 but it got stabilized in Java 22 (isn't out yet).

> Another thing missing in Java is intrinsics support, both scalar like popcnt, bitscan, BMI, etc., and SIMD like SSE and AVX.

Also part of Panama: see the "Vector API" JEP [2].

> Native stack and value types in C# reduce load on GC, and the number of complicated tricks required from JIT compiler. This in turn helps with startup performance. This is critical for command-line apps, and very desirable for desktop apps.

This is part of Project Valhalla [3], they're adding value types and actual generics, among other things.

That said, most of these are not done / not in a stable LTS Java release yet. We'll see how much better it'll be compared to C# (if at all) once they land.

[0] https://openjdk.org/jeps/454

[1] https://docs.oracle.com/en/java/javase/21/core/foreign-funct...

[2] https://openjdk.org/jeps/460

[3] https://openjdk.org/projects/valhalla/


> Part of Panama

Most real-live C APIs are using function pointers and/or complicated data structures. Here’s couple real-life examples defined by Linux kernel developers who made V4L2 API: [0], [1] The first of them contains a union in C version, i.e. different structures are at the same memory addresses. Note C# delivers the level of usability similar to C or C++: we simply define structures, and access these fields. Not sure this is gonna be easy in Java even after all these proposals arrive.

For a managed runtime, unmanaged interop is a huge feature which affects all levels of the stack: type system in the language for value types, GC to be able to temporarily pin objects passed to native code (making copies is prohibitively slow for use cases like video processing), code generator to convert managed delegates to C function pointers and vice versa, error handling to automatically convert between exceptions and integer status codes at the API boundary, and more. Gonna be very hard to add into the existing language like Java.

> "Vector API" JEP

That API is not good. They don’t expose hardware instructions, instead they have invented some platform-agnostic API and implemented graceful degradation.

This means the applicability is likely to be limited to pure vertical operations processing FP32 or FP64 numbers. The rest of the SIMD instructions are too different between architectures. A simple example in C++ is [2], see [3] for the context. That example is trivial to port to modern C#, but impossible to port to Java even after the proposed changes. The key part of the implementation is psadbw instruction, which is very specific to SSE2/AVX2 and these vector APIs don’t have an equivalent. Apart from reduction, other problematic operations are shuffles, saturating integer math, and some memory access patterns (gathers in AVX2, transposed loads/stores on NEON).

> most of these are not done / not in a stable LTS Java release yet

BTW, SIMD intrinsics arrived to C# in 2019 (.NET Core 3.0 released in 2019), and unmanaged interop support is available since the very first 1.0 version.

[0] https://github.com/Const-me/Vrmac/blob/master/VrmacVideo/Lin...

[1] https://github.com/Const-me/Vrmac/blob/master/VrmacVideo/Lin...

[2] https://gist.github.com/Const-me/3ade77faad47f0fbb0538965ae7...

[3] https://news.ycombinator.com/item?id=36618344


Well maybe you should use C++ or Rust instead of Java or C# in that case?

My point is: if you are doing business (especially web) apps, use one of the JVM langs instead of C#, because the ecosystem is much bigger (and it has fresher langs as well, like Kotlin, if that's what you care about).


> use C++ or Rust instead of Java or C# in that case?

Despite having to spend extra time translating C API headers into C#, the productivity gains of the higher-level memory safe language were enormous.

Another example, I have shipped commercial embedded software running on ARM Linux, and based on .NET Core runtime. The major parts of the implementation were written in idiomatic memory-safe C#.

> doing business (especially web) apps

Well, these business web apps are precisely the long-running servers I have mentioned. Still, the software ecosystem is not limited to that class of problems, and due to different tradeoffs Java is not great for anything else.


Also, with JDK 21 you can use virtual threads. No need for async/await, which IMHO is a design mistake. Java copied Go here instead of C#.


Which makes C interop worse, just like Go


Java is surely keeping up, but I can't name a single Java feature that I miss in C# or that is implemented better in Java. I haven't used Java in a long time though; I just occasionally read about new Java features, and I've never said to myself "cool, I wish I had that in C#".

Static imports have also been available in C# for quite some time now (C# 6, released in 2015; since C# 10 you can even make an import global for the project).

I haven't used Kotlin, is there any killer feature compared to C#? (except more succinct code in certain cases?)


Kotlin is younger and made better choices by default, like immutable "val" as default option.

Also since it's Jetbrains - IDE integration is superior compared to anything C# can have (including Rider...)


Depending on how you look at it: better extension-everything support in Kotlin's case, and a way to do DUs (discriminated unions), which keeps being discussed for C#; people should just add F# to their codebase, but alas.


#1 => agree

#2 => don't know enough about Kotlin to comment

#3 => agree but quality > quantity

#4 => not terribly important to me (Clojure is cool but it's not "switch to the JVM" level cool)


I've always found those sort of firms with C#, in my experience, have the best architected code. Proper domain-driven design, onion architectures, clean testable code... Some have legacy issues where they might not have the most cutting edge CI/CD pipeline or high automated test coverage, but the code itself can be very nice. I've never really experienced that level of consistency with a different language/company size.


C# is a lovely language to work with.

The only issue I have is with the .NET ... that is, building self-contained binaries to distribute.

For comparison:

* Hello World win-x64 binary self-contained in .NET 7 is around 70 MB

* The same for Go results in 1.2 MB

Edit: Missed 'trimming' in .NET, which would result in a binary of size around 11 MB in win-x64


Usually that means you aren't using trimming, .NET speak for dead code removal during linking.

Also remember that the standard .NET runtime does a little bit more than Go's runtime, so it might happen that even with trimming, for basic applications, Go ends up having the upper hand on file size.

On the other hand, I have had Go static binaries grow up to 200 MB and require UPX to stay manageable, e.g. trivy.


You're right. But even with trimming I get around 10x the size of the Go binary


    dotnet publish -c release -p:PublishAot=true


Since I edited my comment, see my additional remarks regarding runtime capabilities, and the counterpoint of big Go binaries.

Also note that trimming only works properly if the libraries have taken the effort to be trimmable, as the linker errs on the safe side and won't trim unless it is certain the code is really dead and not called via reflection.


I was making Windows 98 apps with Delphi 4, and they were 350 KB large

And I was upset that they were so big. Sometimes I used UPX. Or I kicked out all Delphi GUI libraries, and created the GUI with the Win32 API calls directly. I got 50 KB Hello Worlds.


50 kB hello worlds? Uhm... that's still big.

15k May 3 2019 quickrun.exe*

Win32 GUI Application that spawns Window and ask for alias to run. Pure Win32 API, written in C (Mingw).

I literally LOLed at the 11 MB hello world of .NET, or the 1.2 MB one of Go.


Well, it is what I remembered

I do not have Windows 98 anymore. But I still have Delphi 4 installed under Wine, so I just tried it out.

Just showing a messagebox from windows gives 16k

Using the sysutils unit though, puts it at over 40k. And with classes, it becomes 57k. Not sure what they pull in. sysutils contains number/datetime parsing and formatting, and exception handling. classes has basic containers and object oriented file handling.


Ahh, Delphi. Then I suppose it's all right for it. Still, much better compared to Go or Java :D


If only Borland's management didn't decide to focus mostly on enterprise customers.


Let me give a real world example from my own experience.

I have built a Win32 desktop app with its core logic in Go; and then re-built from scratch using .NET (v7). The core logic involved a fairly complicated keyboard input processing based on bunch of config files.

- Final binary of .NET ~ 14 MB

- Final binary of Go ~ 2 MB


This is an unfair comparison of apples to oranges by building the binary with the wrong flags. .NET produces smaller binaries than Go with NativeAOT (despite including more features).


Yea, I missed trimming. But still NativeAOT results in 10x the size of the Go binary in Windows (win-x64)


Two aspects:

- There is no point in chasing smallest possible binary size if it trades off performance and features one wants to use in production scenario, or comes with other tradeoffs that sacrifice developer productivity. I don't see anyone complaining about the size of GraalVM native images. As long as binaries are reasonably sized, it's not an issue.

- dotnet publish -c release -p:PublishAot=true definitely produces smaller binaries than Go as of .NET 8 (and no, you cannot use the argument that it's not released yet - it's in RC.2 which is intended for evaluation for adopting .NET 8 scheduled for release next month)


That's awesome, honestly. Can't wait.


Idk the last time you tried but a hello world in C# using .Net 8 is smaller than a Go hello world, for what it’s worth.


Is it? Every time I see C# being mentioned here people agree how awesome it is. Not that I'm complaining, I love C#


Agreed. I feel C# is appropriately rated on HN and other programming forums. It has performant memory options that other GC languages lack, and great builtin packages to use. Overall, it is a good language.

My biggest issue with C#, though, is how badly exceptions are handled given that it is a statically typed language. I wish functions explicitly defined the exceptions they can throw, since a minor package bump could add an exception without your compiler warning you that it isn't handled. I much prefer Rust, Go, and Zig's error handling to C#'s, since those kinds of issues don't happen there.
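A small sketch of the contrast being drawn (the function name is illustrative, not from the comment): in Rust the error type is part of the signature, so a new failure path can't slip past the compiler the way an undeclared exception can.

```rust
use std::num::ParseIntError;

// The Result in the signature forces callers to acknowledge failure;
// there is no unchecked-exception path around it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not a port").is_err());
}
```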


> It has perfomant memory options that other GC languages lack, and great builtin packages to use.

As clarification for the audience, it isn't the only GC-enabled language with C- and C++-like capabilities; in fact there have been several examples since the early 1980s.

The adoption push for Java and scripting languages distorted the understanding of what was already available out there.


Well, I see it as lack of fanboyism which is interesting and almost unique to the Java/C# ecosystem. A lot C# experts(and I mean REAL, low level experts) seem to also have very high Java expertise..

And those that have Java expertise but not C# seem to defer to those that do; imagine that!

But it's still niche(around here and in the startup world) and gets lumped in with Java and together they are not "hip" or "agile" or whatever.


C# is fine, but it feels like a slightly better Java, just without the huge ecosystem of libraries.


Slightly... The difference in type erasure is pretty huge IMHO.

But what libraries are you lacking?


Type reification is planned, and so are value types.

And type erasure isn't as negative as you seem to make it.

Bunch of really obscure use cases - fluent localization, roaring bitmaps and so on.


> compiled managed languages like [...] C#

I've been out of the windows development game for a long time, so I haven't used C# since it strictly required a VM... what's pre-compiled C# development like nowadays? Are there major caveats? If you can emit plain old binaries in C# with no runtime dependencies, that would make it a truly compelling language IMO.

And as another question, what's the cross-platform (mainly Linux) support like in AOT-compiled C#? If it's just as good as in Windows and emits plain executables, I would probably consider it the best place to start for any new project. (Something tells me it's not...)


C# has supported AOT since forever; NGEN was present in .NET 1.0. Not many people used it, because it requires signing binaries and only supports dynamic linking, with a performance profile geared towards fast startup.

On Microsoft side the Singularity and Midori experiments used AOT.

They influenced the AOT toolchains for Windows 8 store apps with MDIL (Singularity/Bartok), and Windows 10 store apps with .NET Native (Midori/Project N).

Now there is Native AOT, which supports CLI apps and native libraries; .NET 8 extends that to the EF and ASP.NET frameworks. For GUI applications, maybe only fully in .NET 9.

Mono AOT has had support for ages, being used on iOS, Android, and Blazor.

Finally there is IL2CPP and Burst compiler from Unity.


In 8, NativeAOT also supports iOS (and even Android, reportedly?) for, I assume, the MAUI target, to do away with Mono. Documentation on this definitely needs work, and there are projects that made it work with WPF, Windows Forms, and Avalonia back in .NET 7. Arguably none of those were particularly user-friendly, but the generated COM interop project for 8 was done specifically to improve this on Windows as well.


This. I have exactly the same experience, I can't believe how much I was able to ship with Zig and the code mostly feels like "done".

You can always improve it, but there's no need to. With Rust I was never happy; even after 5 years I was still thinking about better abstractions, implementing more traits, making it more generic, etc.


Why is there no need to improve Zig code but there is for Rust code? You'd need the same abstractions in Zig as well, no?


No, usually you don't. Rust has closures, iterators, generics, different traits for operator overloading, smart pointers, etc.

Zig doesn't have any of that. It's a very interesting combination of low-level, predictable code with meta-programming, where you get some of that abstraction back.

i.e. Zig does not have generics, but your function can return a type, so a generic list is just a function which returns a newly created struct.


Could you expand on the generics point, please? That sounds interesting but I can't quite get my head around it.


Functions in Zig can be called both at runtime and at compile time. You can force an expression to be evaluated at comptime using a keyword, and sometimes comptime is implied (like when you define a top-level const).

If a function is called in comptime, it can also return types. So for example:

    // this is a function which accepts a type
    // if a parameter is a type, the arg must be marked comptime
    // a comptime arg does not by itself mean the function cannot be
    // called at runtime, but here it returns a type, so it is a
    // comptime-only function
    // (compare std.mem.eql(u8, a, b), which accepts a type but can be
    // called at runtime because it does not return a type)
    fn Wrapper(comptime T: type) type {
        return struct { value: T };
    }

    const F32Wrapper = Wrapper(f32);

    // @TypeOf(x.value) == f32
    var x: F32Wrapper = .{ .value = 1.0 };


Zig has `comptime` where you effectively generate Zig code at compile time by writing Zig code, no special generics syntax/semantics needed. It is very nice and powerful concept that covers lots of ground that in Rust would belong to the land of procedural macros.


Can't speak for why Zig doesn't have a problem but Rust is cursed by its success: it lowers the barrier for improvement enough to entice you to always improve it.


No. Rust forces you to spend endless hours doing mental gymnastics which shouldn't be needed in the first place (linked data-structures, owned arenas, or even just boxed slices are impossible in safe rust).

And you just keep refactoring/improving/desperately trying different ideas, because it never feels right.

It's ok if you don't agree but pls don't try to make my responses look like I like Rust, I don't and I'd happily get those years back if it was possible.


> it can be very hard to actually know what method you can call on a struct

The rust-analyzer language server can autocomplete the available methods for a value.


Depending on the autocompleter feels like asking ChatGPT to write the code, to me.


I disagree, there's a big difference: rust-analyzer is deterministic and 100% accurate while ChatGPT is non-deterministic and hallucinates.


Yep. I can't remember method names for the life of me, which is why my best experiences have been with Go and Java: The IDE (always Jetbrains) knows, via the type system, what methods I can call.


Then you underestimate the power of ChatGPT by a factor of a million.


Furthermore, you can use `cargo doc` to generate a documentation website that has everything you can do, or you can use docs.rs for this. Whoever wrote this didn't embrace the tooling and just gave up.


Wait, I am a bit confused. Does Zig have more/better libraries than Rust? I thought it's a pretty new language. The most limiting thing for me with Rust was the lack of libraries (vs. say Python or Node/JavaScript).


It doesn't. The ecosystem is very immature and even the official tooling is very unstable. It has a bunch of interesting design ideas but at this point it's more of an experimental language than a production ready one by most metrics. (And unless it finds some kind of corporate backing, this is unlikely to ever change).


We shipped a few web apps backed by zig. It is absolutely in production.


Just because you put it in production does not mean it's production ready.


It interops seamlessly with C libraries.


Depending on what seamlessly means, Rust can also interop with C libraries. I wrapped a bunch of them.


Truly seamless because the zig compiler is also a C compiler, so the type information and calling convention works across languages at a level above any other I've encountered.


It's also an unfinished language. I agree Zig is promising, but it's not confidence inspiring when the creator is still making videos debugging the compiler.


True, but I think the person you're responding to argued that Rust/C integration is also seamless. (In the general discussion I'd say they're right, as C-to-Rust integration isn't much of a problem and you can use C libraries relatively easily in Rust as well, but at the same time, when talking about Zig, I don't think it's fair to put them on the same ground.)


It is impossible to do C interop without a C compiler, by the way.


Seamlessly as in @cImport(@cInclude("raylib.h"))


Given that Zig is memory unsafe, it isn't a good general-purpose language either.

IMO a good general-purpose language is memory safe (no C, C++, Zig), easy to use (no Rust), strongly statically typed (no Perl, Python, Ruby), and "stable" (no Scala). Lots of choices remain: Java, Kotlin, Ada, D, OCaml...


Of these, my favourite is D.


c#


It somehow seems that Zig has most of the qualities that people like about C: clear, crisp, no huge stdlib, good old straightforward imperative semantics, and reasonably fast. But without lots of the cruft.


Unfortunately it lacks support for binary libraries, and there is not yet a story for use-after-free other than the tooling we are already using in C-derived languages to track it down.


You should try diving into num for like a month and see how you like it. It's different enough that you need to go past a certain kind of ledge to start liking it. Or at least that was my experience.

For me, it shares the most important benefits of Rust but with quite a lot more ergonomic coding model.


Nim on paper is great; it has many advantages over Rust in the general purpose "niche". Tragically, it's kind of stillborn. It's older than Rust, has orders of magnitude less mindshare, and has no companies with serious technical reputations backing it.


Yeah, you're not wrong about the mindshare problem. But it somehow at least in my mind differs from other "stillborn" older languages in that it keeps improving. The end result is that it still feels modern in the year 2023.


So any language that isn't sponsored by Google/MS/Amazon/Mozilla from day 1 should just die?


Woopsie. I meant s/num/nim/ of course.


Since you have previously said that you are using Zig to do embedded programming for medical devices, I assume that it is your main pain point. I largely agree that the current Rust embedded story is not exactly for existing embedded programmers (and I guess you are one of them). Rather it looks more like a solution for existing application programmers. I don't think it's an intrinsic limitation of Rust the programming language, rather a specific community at this point happens to prefer this way. Still it is a weakness in some sense and Zig will be a good alternative.


Yes, I think rust is very good for higher level programmers wanting to code embedded like a regular OS. There are many great projects around using rust on embedded.

But me, I prefer to manipulate registers directly, especially with "exotic" MCUs where you have to execute writes/reads in a specific number of CPU cycles. Rust makes that very hard.
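As a sketch of what register-level Rust looks like: `write_volatile`/`read_volatile` keep the optimizer from reordering or eliding the accesses, though cycle-exact timing still ends up in `asm!` or vendor delay loops. A local `u32` stands in for the register here so the example runs on a host; on real hardware it would be a fixed MMIO address (hypothetical):

```rust
use std::ptr::{read_volatile, write_volatile};

fn main() {
    // On hardware this would be something like `0x4002_0014 as *mut u32`
    // (hypothetical address); a local variable substitutes here.
    let mut reg: u32 = 0;
    let reg_ptr: *mut u32 = &mut reg;
    unsafe {
        // Volatile accesses are never merged, reordered, or removed.
        write_volatile(reg_ptr, 0b0000_0001);
        let v = read_volatile(reg_ptr);
        println!("{}", v);
    }
}
```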


By "wrote" do you mean just coding, or coding+debugging? Because other languages are easier to write but hard to make error-free, while Rust is hard to write but much easier to make bug-free.
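One small illustration of that trade-off (a sketch, not from the comment above): some of the up-front friction is the compiler refusing to build until the failure case is handled, which is exactly the bug that would otherwise surface at runtime elsewhere.

```rust
// Returns None for a string with no words; a match on Option must
// cover both arms, or the program simply won't compile.
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    match first_word("   ") {
        Some(w) => println!("first word: {}", w),
        None => println!("no words"),
    }
}
```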


Nim could be another option; it defaults to a C backend and has Python-like syntax.


I use Rust a lot, and have been really keen on getting into Zig.

Not sure if much has changed (it was a while back), but my biggest problem was with finding and using 3rd party libraries (mostly for the boring stuff like DB connectivity, JSON/YAML parsing, logging, etc.).

E.g. even now, if I search for "zig mysql library", the top hits are all people discussing it on Reddit rather than any actual library.


You cImport the C library most of the time.


Give Copilot a try; it completely shifts the coding experience in Rust. Especially this:

> I always have to search libraries and how to do things.

Once you get past the initial curve with a crutch like Copilot, you can be almost as productive (if not more, considering refactoring and testing) as in your first coding language.


How much of your negative Rust experience is due to async?

Having used Rust for a long time it’s definitely the biggest source of confusion and headaches.
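Part of the headache is that the language ships `Future` but no executor: even a trivial `async fn` needs an external runtime (tokio, async-std) or a hand-rolled one just to run. A minimal busy-polling `block_on` sketch in std-only Rust, assuming nothing beyond the standard library:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: fine for futures that are ready
// immediately, since we busy-poll instead of sleeping.
fn raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker { raw_waker() }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// Drive a future to completion by polling it in a loop.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // Safe here: `fut` lives on this stack frame and never moves again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

async fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    println!("{}", block_on(add(2, 3)));
}
```

That this much ceremony (raw wakers, pinning, unsafe) sits right under the surface is a fair chunk of why async Rust feels "time consuming" compared to async in GC'd languages.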


It is not that complicated, but it is very "time consuming" for reasons I cannot really explain.



