> Our philanthropy is generally private, but I'm making an exception since I think my public support of Zig has a chance to really help the project due to my background.
I can't quite put words to it, but this statement kind of struck me. There's just a certain basic decency behind it that deserves celebrating.
Same here... except maybe in the opposite direction?
Does it imply that their other philanthropy would not benefit from more public support? If so, what makes this case different?
Anyway, they can do whatever they want with their philanthropy in the end, but I found the phrasing odd.
A lot of people use philanthropy as a means to ego boost and raise their public image. This is a phenomenon as old as time, to the point where even the bible has tales about it. The (modern?) countermovement to that is to keep your donations secret, often to the point where it's one of the strings attached to the money, that you can't publicly disclose who made the donation. This is a way to support a cause just for the sake of supporting it. I read the above statement as them usually following this ethos, but making an exception this time around since they believe being public about it will bring more visibility to the cause.
> to the point where even the bible has tales about it.
The Bible tells you not to talk about your donations!
> This is a way to support a cause just for the sake of supporting it.
For many causes the money matters, but the publicity does not. In this case Zig gains both from being better funded, which makes people more likely to have enough confidence in its future to adopt it, and from the PR benefit (e.g. getting one more mention here).
On the other hand for something like a charity that helps the poor, we all know of the need already. Publicity does not help much - in fact I would be more likely to give to a small charity that does not get big donations than to one I know is getting big donations.
> The Bible tells you not to talk about your donations!
Not a Christian, but since no one else dug up the quote:
"Thus, when you give alms, sound no trumpet before you, as the hypocrites do in the synagogues and in the streets, that they may be praised by men. Truly, I say to you, they have received their reward. But when you give alms, do not let your left hand know what your right hand is doing, so that your alms may be in secret; and your Father who sees in secret will reward you." -- Matthew 6:2-4 (RSV)
But then you have:
"You are the light of the world. A city set on a hill cannot be hid. Nor do men light a lamp and put it under a bushel, but on a stand, and it gives light to all in the house. Let your light so shine before men, that they may see your good works and give glory to your Father who is in heaven." -- Matthew 5:14-16 (RSV)
So I guess according to scripture "it depends". I do believe Judaism and other religions have similar teachings for that matter.
There are two main reasons to make a lot of noise about philanthropy: to draw attention to the cause or to draw attention to themselves. You can often tell the difference, based on whether someone's primarily talking about how the cause is important, or whether they seem to be primarily aggrandizing themselves for supporting it.
That said, while the former is more obviously laudable, the latter does serve the purpose of raising the status of being charitable, which can lead to more people being charitable.
Perhaps his other philanthropic donations are not technical, and he believes that supporting them publicly would feel like bragging, which he personally doesn't like?
No, it is clearly that, as the main technical founder of HashiCorp, his endorsement means something here, whereas other causes are outside his area of expertise.
It is understood as a generally good thing for the average person to donate to a long standing public institution, like your local art museum or food bank. Hashimoto donating publicly to such a place wouldn't sway anyone's understanding around that.
Maybe other philanthropy they do is not part of the tech scene where they are most recognized. Maybe in those circles they are seen as just another rich couple.
Well, if you're giving money and attaching your name to it is worth something, it makes total sense. In this case, it could end up encouraging more contributions worth more than his.
Yeah, that's my point: the salient point here for me is the judgment call that this is one specific context where Mitchell attaching his name to the donation adds value.
Sorry if this is a silly question; I am a web developer, so I don't usually delve into systems or low-level programming except out of curiosity.
My understanding is that everyone is suggesting moving to memory-safe languages when possible; however, Zig does not seem to offer memory safety.
Since Zig is a new language, my guess is that the main use would be brand new projects, but shouldn't those be written in a memory-safe language?
It seems that the selling point of Zig is: more modern than C but simpler than Rust, so I understand the appeal, but isn't this undermined by the lack of memory safety?
> It seems that the selling point of Zig is: more modern than C but simpler than Rust, so I understand the appeal, but isn't this undermined by the lack of memory safety?
Memory safety is a useful concept, but it's not a panacea and it's not binary. If the end goal were safety, JS would have been fine. Safe Rust is guaranteed memory-safe, which is a huge improvement for systems programming, but not necessarily the be-all and end-all. There are always tradeoffs depending on the application. I personally think having safety be easily achievable is more important than having it guaranteed. The problem we've had with C and C++ is that it's been hard to achieve safety.
You can write very performant and very safe code in C/C++. Look at the gaming industry - or industry in general back when things had to be burnt to disc. The problem now is that the complexity of the languages has increased and the average proficiency of software developers has plummeted (due in part to that increase in complexity). Google introduced Go partly to try to solve this. Rust is another language that has memory safety as a core part of its design. Another reason it is probably better for writing safer programs is that it is a lot less complex than C++. It seems to be catching up in complexity, but thankfully memory safety is now so deeply rooted in the Rust community that even as complexity is introduced, the language will still benefit from its memory safety features and from developers who are used to this style of language.
Zig is also a good choice if you care about safety - it simplifies things (by having a defer statement), and its tooling is geared towards safety: it has multiple build modes that let you run your program in ways that catch memory safety issues during development. This is not enforced by the compiler, only at runtime in Debug/non-ReleaseFast builds, but it is still an improvement over C/C++.
I really don't think the gaming industry can be used as a shining example of very safe C++ code. If there's any software category for which people have historically low expectations wrt bugs, including outright crashes, it's video games. Even back in the era when it was all shipped burned to disc, and Internet was a luxury, there were games that were literally unplayable at release for many players; remember Daggerfall?
In the areas where Zig really shines, the equivalent code in Rust would probably have a lot of “unsafe” keywords which basically disables the memory safety features anyway.
I think it remains to be seen if Zig is less safe than Rust in practice. In either case you have to write a lot of tests if you actually want your program to be safe. Rust doesn’t magically eliminate every possible bug. And if you’re running a good amount of tests in debug mode in Zig you’ll probably catch most memory safety bugs.
Still, if I was making something like a web browser I would probably use Rust
> In the areas where Zig really shines, the equivalent code in Rust would probably have a lot of “unsafe” keywords which basically disables the memory safety features anyway.
This is a common misconception, but the `unsafe` keyword in Rust does not disable any of the features that enforce memory safety, rather it just unlocks the ability to perform a small number of new operations whose safety invariants must be manually upheld. Even codebases that have good reason to use `unsafe` in many places still extensively benefit from Rust's memory safety enforcement features.
> the `unsafe` keyword in Rust does not disable any of the features that enforce memory safety, rather it just unlocks the ability to perform a small number of new operations
If you view the locks on those operations as guard rails ensuring memory safety, GP's phrasing makes sense: The unsafe keyword disables them.
Huh, are you really trying to say unsafe Rust is safe? By this logic, C is safe, it also just has "safety invariants that must be manually upheld."
Unsafe Rust is even less safe than C because the rules that must be manually upheld are stricter. For example in C you can create an invalid pointer and it's fine as long as you don't access it. In Rust you can't even create an invalid reference or you have already invoked unchecked undefined behavior.
There's no common misconception here. I think you're misunderstanding the quoted comment due to being overly pedantic.
> Huh, are you really trying to say unsafe Rust is safe?
I'm unclear what part of my comment would lead someone to such an extreme conclusion. As mentioned, the `unsafe` keyword is used to unlock new operations and create new safety invariants that must be manually upheld. Naturally, failure to manually uphold those new invariants would lead to memory unsafety. But an `unsafe` block introduces no unsafety by itself. Which is to say, if you take a working Rust program with no unsafe blocks, and then wrap the body of `main` in an unsafe block, this is a no-op; it does nothing.
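A quick sketch of that no-op claim (the function name is mine, not from the thread):

```rust
// An `unsafe` block around already-safe code is a no-op (the compiler
// even warns `unused_unsafe`): nothing is disabled, nothing new happens.
#[allow(unused_unsafe)]
fn sum_wrapped(xs: &[i32]) -> i32 {
    // Same borrow checking, same bounds checking, same generated code
    // as without the `unsafe` block.
    unsafe { xs.iter().sum() }
}

fn main() {
    assert_eq!(sum_wrapped(&[1, 2, 3]), 6);
    println!("ok");
}
```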
> By this logic, C is safe, it also just has "safety invariants that must be manually upheld."
Certainly, this is true, and I'm not sure why anyone would think otherwise. The problem is not that it is theoretically impossible to write correct C; rather the problem is that it is empirically infeasible to do so at scale. By locking unsafe operations behind an unsafe block, Rust attempts to make it feasible to identify the areas of most concern in a codebase and focus attention on proving those areas correct manually.
> Unsafe Rust is even less safe than C because the rules that must be manually upheld are stricter.
Unfortunately this is another misconception, although it's understandable why one would think this. The rules for raw pointers in Rust are less strict than the rules for raw pointers in C, which is to say, manipulating raw pointers in Rust is safer than doing the same in C. The misconception here comes from the conflation of raw pointers with Rust's references, which do have more safety invariants to uphold, and for several years there were footguns to be found here due to language-level deficiencies WRT the inability to avoid creating temporary references when working with uninitialized or unaligned memory. The good news is that this was addressed with the addition of `std::ptr::addr_of!` in Rust 1.51.
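A sketch of the footgun that `std::ptr::addr_of!` addresses (the `Packed` struct and `read_b` are illustrative names of mine): on a packed struct, taking `&p.b` would create a misaligned reference, which is UB, but `addr_of!` produces a raw pointer without any intermediate reference.

```rust
use std::ptr::addr_of;

#[repr(C, packed)]
struct Packed {
    a: u8,
    b: u32, // misaligned: sits at offset 1
}

fn read_b(p: Packed) -> u32 {
    // No reference is ever created here, so no UB from misalignment.
    let raw = addr_of!(p.b);
    unsafe { raw.read_unaligned() }
}

fn main() {
    let p = Packed { a: 1, b: 42 };
    assert_eq!(read_b(p), 42);
    println!("ok");
}
```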
> For example in C you can create an invalid pointer and it's fine as long as you don't access it.
Unfortunately, this is incorrect, though it illustrates why raw pointer manipulation is more fraught in C than it is in Rust. In C, using pointer arithmetic to cause a pointer to point outside the bounds of an array (save for one element past the end) is undefined behavior, even if you never dereference that pointer. In contrast, this is not undefined behavior in Rust. As another example, comparing pointers from two different allocations with less-than/greater-than is undefined behavior in C, but this is not undefined behavior in Rust.
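To illustrate the Rust side (the function name is mine): `wrapping_add` lets you materialize a far-out-of-bounds pointer with no UB, as long as it is never dereferenced, whereas the analogous pointer arithmetic in C would already be undefined behavior.

```rust
// Compute an address offset from `p` without dereferencing anything.
// In Rust this is defined behavior even far outside the allocation.
fn offset_addr(p: *const u8, n: usize) -> usize {
    p.wrapping_add(n) as usize
}

fn main() {
    let arr = [1u8, 2, 3];
    let p = arr.as_ptr();
    // 1000 bytes past a 3-byte array: fine to create and inspect.
    assert_eq!(offset_addr(p, 1000), p as usize + 1000);
    println!("ok");
}
```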
> There's no common misconception here. I think you're misunderstanding the quoted comment due to being overly pedantic.
I have seen this misconception arise regularly for years. If this is not what the parent commenter intended, then I apologize for misreading it. At the same time, I don't regret clarifying Rust's semantics for the benefit of people who may be unfamiliar with them. Surely it benefits us all to learn from each other.
IMO "unchecked" is liable to cause the same sort of confusion; Rust is still performing all the usual checks, and we the programmer are just introducing new invariants that must be manually upheld. I've come around to the notion that the keyword for the block should be `promise` (though 10 years ago this might have caused confusion with JavaScript programmers), whereas the keyword for the function should remain as `unsafe`.
> code in an unsafe block is assumed by the compiler to be safe.
This is another instance of the same misconception. For every Rust operation that can exist outside of an `unsafe` block, Rust enforces memory safety even when that operation exists inside of an unsafe block. In other words, Rust does not assume that all code inside of an unsafe block is safe; e.g. you can neither disable the borrow checker nor disable bounds checking merely by wrapping code in an unsafe block.
What this means is that you still receive the benefits of Rust's normal safety guarantees even in the presence of unsafe blocks. Instead, what unsafe blocks do is allow you to invent your own safety invariants to layer on top of Rust's ordinary semantics (which is also what you're doing in C and Zig).
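A small sketch of that point (helper name mine): indexing stays bounds-checked even inside an `unsafe` block, and skipping the check requires an explicit call to `get_unchecked`.

```rust
use std::panic;

// Indexing stays bounds-checked even inside an `unsafe` block; the
// check is only skipped by explicitly calling `get_unchecked`.
fn checked_read(v: &[i32], idx: usize) -> Option<i32> {
    panic::catch_unwind(|| unsafe { v[idx] }).ok()
}

fn main() {
    let v = [1, 2, 3];
    assert_eq!(checked_read(&v, 0), Some(1));
    // Out of bounds still panics: the `unsafe` block did not disable it.
    assert_eq!(checked_read(&v, 10), None);
    // Opting out of the check has to be explicit:
    assert_eq!(unsafe { *v.get_unchecked(1) }, 2);
    println!("ok");
}
```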
Right, but specifically it's about being able to do certain things you can't otherwise, and that's it. Namely:
* Call unsafe functions
* do memory aliasing
* change the lifetime the compiler sees
That's about it. The syntax and rules are otherwise still Rust, and violating those rules (e.g. aliasing in a way not allowed by Rust) still results in UB. This can surprise even experienced Rust people, including within popular crates and the stdlib.
Not sure why Zig would be wholesale branded as being "memory unsafe". It has an extensive suite of tools and checks for memory safety that C does not have.
Safety is a spectrum - C is less safe than C++, which is less safe than Zig, which is less safe than Rust, which is less safe than Java, which is less safe than Python. Undefined behavior and memory corruption are still possible in all of them, it's just a question of how easy it is to make it happen.
Agreed. My personal experience is that Rust is safer than Python: a type error in interpreted Python code surfaces as a runtime error, but in Rust it's a compile error, so you don't have an "oopsie" in production.
Much harder to write Rust than Python, but definitely safer.
(Rust vs Java is much closer, but Java's nullable types by default and errors that are `throw`n not needing to be part of the signature of the function lead to runtime errors that Rust doesn't have, as well.)
I'm talking specifically about memory safety (when using unsafe/raw pointers). Being able to say "once I allocate this memory, the garbage collector will take care of keeping it alive up until it's no longer referenced anywhere" makes avoiding most memory safety errors relatively effortless, compared to ensuring correctness of lifetimes.
You can absolutely opt-out of lifetime management in Rust. It's not usually talked about because you sacrifice performance to do it and many in the Rust community want to explicitly push Rust in the niches that C and C++ currently occupy, so to be competitive the developer does have to worry about lifetimes.
But that has absolutely nothing to do with Rust's safety, and the fact that Rust refuses to compile if you don't provide a proper solution means it's at least as safe as Python and Java on the memory front (really, it is safer, as I have already stated). Just because it's more annoying to write doesn't affect its safety; they are orthogonal dimensions to measure a language by.
Most memory safety errors are from not being able to test things like whether you are really dropping references in all cases or whether your C++ additions are interacting with each other. C is not safe, but it is safer than C++. Rust is not going to stop all runaway memory possibilities, but it isn't going to hide them like a JS GC does.
If your goal is to ship something that kind of works to most users, then there are certainly complex solutions that will do that. If your goal is memory safety -- more like every device working as expected -- that's done with less bloat, not more.
Simply because Rust requires you to manage memory yourself. It provides conveniences like Drop to help you do this correctly, but it still makes things harder (when using unsafe) than having a garbage collector to just throw your allocations at.
Java and Python both have access to unsafe operations (via sun.misc.unsafe/ctypes) but Java is multithreaded, which requires extra care, whereas Python is not.
`drop` is an optimization. You never have to call it if you don't want to, Rust will automatically free memory for you when the variable goes out of scope.
Rust won't let you do the wrong thing here (except if you explicitly opt in with `unsafe`, which, as you note, is also possible in other languages). The Rust compiler, when you write normal Rust code, will prevent you from compiling code that uses memory incorrectly.
You can then solve the problem by figuring out how you're using the memory incorrectly, or you could just skip out on it by calling `.clone()` all over the place or wrapping your value in `Rc<T>` if it's for single-threaded code, or `Arc<Mutex<T>>` for multi-threaded code, and have it effectively garbage-collected for you.
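For instance, a tiny sketch of the `Rc` route (names mine): cloning bumps a reference count instead of fighting the borrow checker over a single owner.

```rust
use std::rc::Rc;

// Take shared ownership of `s` and report the current refcount.
fn share(s: Rc<String>) -> (Rc<String>, usize) {
    let copy = Rc::clone(&s); // cheap pointer copy + refcount bump
    let count = Rc::strong_count(&copy);
    (copy, count)
}

fn main() {
    let shared = Rc::new(String::from("hello"));
    let (copy, count) = share(Rc::clone(&shared));
    // original + the argument + the clone made inside `share`
    assert_eq!(count, 3);
    assert_eq!(*copy, "hello");
    println!("strong_count = {}", count);
}
```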
In any case, this is orthogonal to safety. Rust gives you better safety than Python and Java, but at the cost of a more complex language in order to also give you the option of high performance. If you just want safety and easy memory management, you could use one of the ML variants for that.
You don't really seem to be understanding the point I'm making, or perhaps don't understand what memory safety means. Or perhaps are assuming I'm a Rust newcomer.
> Rust won't let you do the wrong thing here (except if you explicitly opt in with `unsafe`
There is no "except if you" in this context. I'm talking about unsafe Rust, specifically. I'm not talking about safe Rust at all. Safe Rust is a very safe language, and equivalent in memory safety to safe Java and safe Python. So if that's your argument, you've missed the point entirely.
> In any case, this is orthogonal to safety.
No, it's not orthogonal - memory safety is exactly what I'm talking about. If you're talking about some other kind of safety, like null safety or something, you've again missed the point entirely.
> ... calling `.clone()` all over the place or wrapping your value in `Rc<T>` if it's for single-threaded code, or `Arc<Mutex<T>>` ...
This whole paragraph is assuming the use of safe abstractions. If you're arguing that safe abstractions are safe, then, well... I agree with you. But I'm talking about raw pointers, so you're missing the point here.
You're moving the goalposts. Your original post had zero mention of unsafe Rust. You have now latched onto this as somehow proving Rust is less safe than Python and Java despite also mentioning how Java also has unsafe APIs you can use, which nullifies even your moved goalposts.
Btw, Python also has unsafe APIs[1, 2, 3, 4] so this doesn't even differentiate these two languages from each other. Some of them are directly related to memory safety, and you don't even get an `unsafe` block to warn you to tread lightly while you're using them. Perhaps we should elevate Rust above Java and Python because of that?
No goalposts have been moved here. Rust is a programming language with both safe features and unsafe features. It is a totality.
And now you're linking me docs talking about things I already explicitly mentioned in my past comments.
You are so confidently ignoring my arguments, and so fundamentally misunderstanding basic concepts, that this discussion has really just become exhausting. I hope you have a nice day but I won't be replying further.
Yes, Rust is a language with safe and unsafe features. So are Java and Python (and you admitted that in your comments). So Rust is not any less safe than Java or Python by that logic, and the original point you've made in the first comment is incorrect.
Actually, Rust is safer because its unsafe features must be wrapped in the `unsafe` keyword, which is easy to search for; you can't say that about Java and Python.
I can't think of anything in either Java or Python that is memory-unsafe when it comes to the languages themselves.
You can do unsafe stuff using stdlib in either language, sure. But by this standard, literally any language with FFI is "not any less safe" than C. Which is very technically correct, but it's not a particularly useful definition.
Standard library is an inherent part of the language.
There is no difference for the end user whether the call to unsafe functionality is a language builtin or a standard library call. The end result is that all of those languages have large safe subsets, and you can opt in to unsafety to do advanced stuff. And there isn't anything in the safe subset of Java/Python that you would need `unsafe` for when translating it to Rust.
Again, by this standard, literally any language with FFI is "unsafe". This is not a useful definition in practice.
As far as translation of Java or Python to safe Rust, sure, if you avoid borrow checking through the usual tricks (using indices instead of pointers etc), you can certainly do so in safe Rust. In the same vein, you can translate any portable C code, no matter how unsafe, to Java or Python by mapping memory to a single large array and pointers to indices into that array (see also: wasm). But I don't think many people would accept this as a reasonable argument that Java and C are the same when it comes to memory safety.
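For what it's worth, a minimal sketch of the index trick in safe Rust (names mine): a linked structure stored as indices into a `Vec`, the standard borrow-checker-friendly encoding.

```rust
// A node in a singly-linked list, with the "next pointer" replaced by
// an index into the backing Vec (None marks the end of the list).
struct Node {
    value: i32,
    next: Option<usize>,
}

// Walk the list from `head` and sum the values -- all safe code.
fn sum_from(nodes: &[Node], head: usize) -> i32 {
    let mut cur = Some(head);
    let mut sum = 0;
    while let Some(i) = cur {
        sum += nodes[i].value;
        cur = nodes[i].next;
    }
    sum
}

fn main() {
    let nodes = vec![
        Node { value: 1, next: Some(1) },
        Node { value: 2, next: Some(2) },
        Node { value: 3, next: None },
    ];
    assert_eq!(sum_from(&nodes, 0), 6);
    println!("sum = 6");
}
```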
So you can see that the fact you can invoke unsafe code is not a good distinguishing factor. It is the other, safe part. Rust, Java and Python all have huge memory-safe subsets that are practical for general purpose programming - almost all of the features are available in those safe subsets. C and C++ do not - in order to make them memory safe you'd have to disallow most of the useful features, e.g. everything related to pointers/references and dynamic memory.
Rust is more a response to C++ than to C. Both C++ and Rust are big, complicated languages that are good for large projects that are performance-sensitive. Both have very strong static typing and can be verbose as a result.
C feels substantially different than Rust. It’s much smaller and less complicated. It’s technically statically typed, but also not in that it doesn’t really have robust non-primitive types. It’s a very flexible language and really good for problems where you really do have to read and write to random memory locations, rearrange registers, use raw function pointers, that sort of thing. Writing C to me feels a lot closer to Python sometimes than to Rust or C++. Writing algorithms can be easier because there is less to get in your way. In this way, there’s still a clear place for C. Projects that are small but need to be clever are maybe easier done in C than Rust. Rust is getting used more for big systems projects like VMs (firecracker), low level backends, and that sort of thing. But if I was going to write an interpreter I’d probably do it in C. Now, I’d do it in Zig.
I understand why people compare Zig to C, being a simple low-level language, but I think that comparison is misleading. C++ is both more expressive than C and safer (when using appropriate idioms). Like Rust, Zig is as expressive as C++, and like Rust, Zig is safer than C++; it's just not as safe as Rust. Comparing Zig to other languages is difficult. While each of its features may have appeared in some other language, their combination, and especially the lack of certain other features, is something completely novel and results in a language unlike any other; it's sui generis.
However, while Zig, unlike Rust, rejects C++'s attempt to hide some low-level details and make low-level code appear high-level on the page (i.e. it rejects a lot of implicitness), it is (at least on its intrinsic technical merits) suitable for the same domains C++ is suitable for. It's different in the approach it takes, but it's as different from C as it is from C++.
> but isn't this undermined by the lack of memory safety?
IMO, partially. But Zig isn't done, so we probably can't judge that yet.
Now, Zig does have good memory safety. It's not at the level of JavaScript or Rust, but it's not like C either.
Last I checked -- a while ago now -- use-after-free was a major issue in Zig. IMO, that has to be addressed or Zig really has no future.
Javascript really is a memory safe language. But its runtime and level of abstraction doesn't work for "systems programming".
For systems programming, I think you want (1) memory safety by default with escape hatches; and (2) a "low" level of abstraction -- basically one step above the virtual PDP-11 that compilers and CPUs have generally agreed on to target. That's to let the programmer think in terms of the execution model the CPU supports without dealing with all the details. And as a kind of addendum to (2), it needs to interop with C really well.
Rust has (1) nailed, I think. (2) is where it's weak. The low level is in there, but buried under piles of language feature complexity. Also, it disallows some perfectly safe memory management patterns, so you either need to reach for unsafe too often, or spend time contorting the code to suit the solution space (rather than spending time productively, on the problem space).
Zig is weak on (1). It has some good features, but also some big gaps. It's quite strong on (2) though.
My hope for Zig -- don't know if it will happen or not -- is that it provides memory safety by default, but in a significantly more flexible way than Rust, and maintains its excellent characteristics for (2).
A lot of people, especially die-hard C programmers, do not obsess over memory safety. They'll continue to start new projects in C; they're the ones being targeted by Zig.
> but isn't this undermined by the lack of memory safety?
Yes, in my opinion, but from Zig's success you can see some people are willing to trade safety for a simpler language. Different people have different values
Though to be fair, you can also use Zig in old C projects, moving things over incrementally. I don't know how many projects do that vs. greenfield projects, though.
Although it doesn't have the same level of compile-time guarantees, there are runtime checks to ensure memory safety if you use Debug or ReleaseSafe. You can do your development and testing in the default Debug mode and only use ReleaseFast or ReleaseSmall once you need the extra optimization and are confident in your test coverage.
> Although it doesn't have the same level of compile-time guarantees, there are runtime checks to ensure memory safety if you use Debug or ReleaseSafe.
Just wondering: if you don't care that much about performance for your application, is it okay to use the runtime-checked build mode in production?
Like say I have a really weird issue I can't seem to find locally, can I switch my production server to this different compilation mode temporarily to get better logs? Can I run my development environment with it on all the time?
Certain classes of programs should be built as ReleaseSafe rather than ReleaseFast to keep many of the runtime checks. It's perfectly reasonable to write a database and build as ReleaseSafe, but also make a game and build it as ReleaseFast.
You can definitely use ReleaseSafe, and you can also switch safety within the code: you can call `@setRuntimeSafety(false)` at the start of a scope to disable runtime safety for performance-critical sections.
Sure, an application built in Debug mode with a compiled language is going to be much faster than if you implemented it in an interpreted language. Given how much of the world runs on Python, PHP and Javascript, your zig application in debug mode is probably going to run just fine.
I haven't yet seen a language where full memory safety didn't come at an extraordinary cost [0], and Zig is memory-safe enough to satisfy most programs' demands [1], especially if you shift your coding model to working with lifetimes and groups of objects rather than creating a new thing whenever you feel like it (which, incidentally, makes your life much easier in Rust and most other languages too).
[0] In Rust, a smattering of those costs include:
- Explicit destruction (under the hood) of every object. It's slow.
- Many memory-safe programs won't type-check (impossible to avoid in any perfectly memory-safe language, but particularly annoying in Rust because even simple and common data structures get caught in the crossfire).
- Rust's "unsafe" is only a partial workaround. "Unsafe" is in some ways more dangerous than C because you don't _just_ have to guarantee memory safety; you have to guarantee every other thing the compiler normally automatically checks in safe mode, else your program has a chance of being optimized to something incorrect.
- Even in safe Rust, you still have a form of subtle data race possible, especially on ARM. The compiler forces a level of synchronization to writes which might overlap with reads, but it doesn't force you to pick the _right_ level, and it doesn't protect you from having to know fiddly details like seq_cst not necessarily meaning anything on some processors when other reads/writes use a different atomic ordering.
- Even in safe Rust, races like deadlocks and livelocks are possible.
- The constraints Rust places on your code tend to push people toward making leaky data structures. In every long-running Rust process I've seen of any complexity (small, biased sample -- take with a grain of salt), there were memory leaks which weren't trivial to root out.
- The language is extraordinarily complicated.
[1] Zig is memory-safe enough:
- "Defer" and "errdefer" cover 99% of use-cases. If you see an init without a corresponding deinit immediately afterward, that's (1) trivially lintable and (2) a sign that something much more interesting is going on (see the next point).
- In the remaining use-cases, the right thing to do is almost always to put everything into a container object with its own lifetime. Getting memory safety correct in those isn't always trivial, but runtime leak/overflow detection in "safe" compilation modes goes a long way, and the general pattern of working on a small number of slabs of data (much like how you would write a doubly-linked list in idiomatic Rust) makes it easy to not have to do anything more finicky than remembering to deallocate each of those slabs to ensure safety.
I agree with all of your points, and think Zig is perfectly workable. I think for big enterprise software, being written by teams from dozens to hundreds, that Rust probably is a better choice. It would certainly be faster than shipping more electron apps.
From my point of view, the main point of memory safety is not to avoid bugs (although it helps with that); it's that when you do have a memory management bug you don't risk remote code execution or leaking sections of memory to attackers (private keys and such).
That's assuming a certain type of program is the one you're writing (naturally everyone wants a browser that is bug free and un-exploitable). Not every program talks to the network, not every program handles untrusted data, and not every program has the same risk profile as a browser. Every program has the problem of bugs though, so focusing on making it easy to avoid and fix bugs is more valuable to a wider audience.
There’s a lot to criticize about Rust for sure, but I feel like some of the points here aren’t necessarily in good faith.
> Explicit destruction (under the hood) of every object. It's slow.
Care to actually support this with data? C++ is quite similar in this respect (Rust has a cleaner implementation of destruction) and generally outperforms any GC language, because in terms of speed stack deallocation >> RC >> GC. There are also a lot of good properties of deterministic destruction vs non-deterministic, but generally Rust's approach offers the best overall latency and throughput in real-world code. And of course trivial objects don't get any destruction at all due to compiler optimizations (they trivially live on the stack). And Zig isn't immune from this afaik: it's a trade-off you have to pick, and Zig should land close to Rust since it's also targeting systems programmers.
> - Many memory-safe programs won't type-check (impossible to avoid in any perfectly memory-safe language, but particularly annoying in Rust because even simple and common data structures get caught in the crossfire).
Actually most memory safe languages don’t have issues expressing data structures (eg Java). And rust has consistently improved its type checker to make more things ergonomic. And finally if you define rust as language + stdlib which is the most common experience those typical data structures are just there for you to use. So more of a theoretical problem than a real one for data structures specifically.
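For what it's worth, a quick illustration of that last point: the classically "hard to express in safe Rust" structures are already sitting in std, internal unsafe and all:

```rust
use std::collections::{HashMap, LinkedList, VecDeque};

fn main() {
    // A doubly-linked list: awkward to write yourself in safe Rust,
    // but shipped (and vetted) in the standard library.
    let mut list: LinkedList<i32> = LinkedList::new();
    list.push_back(1);
    list.push_front(0);
    assert_eq!(list.front(), Some(&0));

    // A ring-buffer deque, the usual pointer-free substitute.
    let mut deque: VecDeque<i32> = VecDeque::from([1, 2, 3]);
    assert_eq!(deque.pop_front(), Some(1));

    // And the everyday workhorse.
    let mut map = HashMap::new();
    map.insert("zig", 2016);
    assert_eq!(map.get("zig"), Some(&2016));
}
```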
> Even in safe Rust, you still have a form of subtle data race possible, especially on ARM.
I agree that for the language it’s weird that this is considered “safe”. Of course it’s not any less safe than any other language that exposes atomics so it’s weird to imply this as something uniquely negative to Rust.
> Even in safe Rust, races like deadlocks and livelocks are possible.
I’m not aware of any language that can defend against this, as it's classically an undecidable problem if I recall correctly. You can layer in your own deadlock and livelock detectors as relevant to you, but this is not uniquely positive or negative to Rust, so again it's weird to raise as a criticism of Rust.
> The constraints Rust places on your code tend to push people toward making leaky data structures. In every long-running Rust process I've seen of any complexity (small, biased sample -- take with a grain of salt), there were memory leaks which weren't trivial to root out.
I think you’re right to caution people to take this with a grain of salt. That hasn't been my experience, but of course we might be looking at different classes of code, so leaky patterns might be more idiomatic somewhere.
> In the remaining use-cases, the right thing to do is almost always to put everything into a container object with its own lifetime
You can of course do that in Rust by boxing everything and/or putting it into a container, which removes 99% of all lifetime complexity. There are performance costs to doing that, of course, which may be why it's not considered particularly idiomatic.
My overall point is that it feels like you've excessively dramatized the costs associated with writing in Rust to justify the argument that memory safety comes with excessive cost. The strongest argument is that certain “natural” ways to write things run into the borrow checker as implemented today. (The next-gen borrow checker, which I believe is coming next year, will accept even more valid code you'd encounter in practice, although certain data structures like doubly-linked lists will of course still require unsafe; those should be used rarely if ever.)
The issue with destructors being slow is actually a well-known problem with C++, particularly on process shutdown when huge object graphs often end up being recursively destructed for no practical benefit whatsoever (since all they do is release OS resources that are going to be released by the OS itself when process exits).
Comparing stack deallocation vs GC is kinda weird because it's not an either-or - many GC languages will happily let you stack-allocate just the same (e.g. `struct` in C#) for the same performance profile. It's when you can't stack-allocate that the difference between deterministic memory management and tracing GC becomes important.
Also, refcounting is not superior to GC in terms of speed, generally speaking, because GC (esp. compacting ones) can release multiple objects at once in the same manner as cleaning up the stack, with a single pointer op. Refcounting in a multithreaded environment additionally requires atomics, which aren't free, either. What refcounting gives you is predictability of deallocations, not raw speed. Which, to be fair, is often more important for perception of speed, as in e.g. UI where a sudden GC in the middle of a redraw would produce visible stutter.
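A small sketch of that split as Rust exposes it: Rc uses plain non-atomic counts for single-threaded use, Arc pays for the atomics, and in both cases the deallocation point is deterministic, exactly when the count hits zero:

```rust
use std::rc::Rc;
use std::sync::Arc;

fn main() {
    // Plain refcounting: count updates are non-atomic and cheap,
    // usable only within one thread.
    let a = Rc::new(42);
    let b = Rc::clone(&a); // bumps the count, no atomics involved
    assert_eq!(Rc::strong_count(&a), 2);
    drop(b);
    // Deallocation is deterministic: it happens at the drop that
    // takes the count to zero, never at an arbitrary GC pause.
    assert_eq!(Rc::strong_count(&a), 1);

    // Cross-thread refcounting pays for atomic increments/decrements.
    let c = Arc::new(42);
    let d = Arc::clone(&c);
    assert_eq!(Arc::strong_count(&c), 2);
    drop(d);
    assert_eq!(Arc::strong_count(&c), 1);
}
```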
> Also, refcounting is not superior to GC in terms of speed, generally speaking, because GC (esp. compacting ones) can release multiple objects at once in the same manner as cleaning up the stack, with a single pointer op. Refcounting in a multithreaded environment additionally requires atomics, which aren't free, either. What refcounting gives you is predictability of deallocations, not raw speed. Which, to be fair, is often more important for perception of speed, as in e.g. UI where a sudden GC in the middle of a redraw would produce visible stutter.
In practice, tail latencies are much harder to control in GC vs RC implementations which is what I was trying to communicate. This doesn’t matter just for UI applications but can also directly implicate how much load your server can service. Ref counting in a multithreaded environment can use atomics although biased ref counting is considered the state of the art to minimize that cost (ie RC on the owning thread, arc on shared threads).
As for releasing multiple objects at once, I've yet to see that bear out in practice as a real advantage. The cost of walking the graph tends to dominate, vs RC where you release precisely when unreferenced. And that's assuming you even use RC - oftentimes you RC at most the outermost layer and everything internally is direct ownership. And if you really do need that, use an arena allocator, which gives you that property without the need for a GC collection pause. There’s a reason there’s no systems language that uses GC.
> The issue with destructors being slow is actually a well-known problem with C++, particularly on process shutdown when huge object graphs often end up being recursively destructed for no practical benefit whatsoever (since all they do is release OS resources that are going to be released by the OS itself when process exits).
If you want fast shutdown, just call _Exit(0) to bypass destructors of objects with static, thread-local, and automatic storage duration. GC languages have a much worse problem of making it really easy to leak resources during the execution of a long-running program. I’ll take that over a slow shutdown anytime, especially since in practice, unless you’ve written really bad code, that “slow shutdown” remains negligible.
> There’s a reason there’s no systems language that uses GC.
There are a few systems languages that use GC, like Nim and D. Of course with the option to do manual memory management where necessary, and to allocate things on the stack whenever possible. Nim also gives the option of several different types of GCs and memory allocators, where each one can be more performant for different tasks. Maximum GC pause can also be configured, at the cost of temporarily using more memory than you should until the GC manages to catch up.
Of course, you can always manually craft arenas and such to be faster and avoid fragmentation, at the cost of much more effort.
Nim and D both offer multiple GC strategies within the language. Just as with C and Rust, while they can be used for systems programming, they can also be used for other things. If you’re doing systems level programming with them you’re probably not choosing any tracing GC option.
Nim and D are also bad examples, as I’m not aware of any meaningful systems-level programs that have been written in them; they have continuously failed to find a way to become mainstream. (Nim is mildly more successful in that it’s managed to break into the 50-100 range of most popular languages, but that’s already well into the tail of languages, to the point where you can’t even tell the difference between 50 and 100.)
I used to use Rust for work, and I use Zig in my new job. They're both fine. It was a good-faith smattering of examples, and it's pretty easy to keep pulling such examples out of a hat.
You seem to not like any of them much, so I'll just briefly address a few of your points:
> Of course it’s not any less safe than any other language that exposes atomics so it’s weird to imply this as something uniquely negative to Rust
That wasn't the implication. Off-the-cuff, when you ask your average rustacean what they think "no data races in safe Rust" means, do you honestly think they will tend to write code treating atomics with an appropriate level of respect as they would in another language?
> Actually most memory safe languages don’t have issues expressing data structures (eg Java)
That was sloppy writing on my part. I left the implicit "without runtime overhead" in my head instead of writing it down.
> Memory leaks
This first one isn't a leak per se, but it's about the same from an end-user perspective [0]. Here's a fun example of that language complexity I was talking about (async not being very composable with everything else) as an example of a true leak [1]. Actix was still only probably/mostly leak-free starting from v3 [2].
Rust makes it easy to avoid UAF errors, but the coding patterns it promotes to make that happen, especially when trying to write fast, predictably performant data structures, strongly encourage the formation of leaks -- can't have a UAF if you never free.
> Off-the-cuff, when you ask your average rustacean what they think "no data races in safe Rust" means, do you honestly think they will tend to write code treating atomics with an appropriate level of respect as they would in another language?
I agree, from what you would expect from Rust, atomics are a weird safety hole. But that’s just because the bar for Rust is higher but if we’re comparing across languages we must use a consistent bar.
> This first one isn't a leak per se, but it's about the same from an end-user perspective [0]
This kind of stuff pops up in every language (eg c++ vector and needing to call shrink_to_fit). Reusing allocations isn’t a unique problem to Rust and again, if you’re using the same bar across languages, they all have similar issues. I’m sure zig does too if you go looking for similar kinds of footguns, especially as more code starts using it.
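The Rust flavor of that same footgun is easy to show: `clear()` drops the elements but keeps the backing allocation, so a long-lived Vec can look like a leak to an end user until you explicitly ask for the memory back:

```rust
fn main() {
    let mut v: Vec<u64> = (0..1_000).collect();

    // clear() drops the elements but deliberately keeps the allocation
    // around for reuse - from the allocator's point of view, nothing
    // was freed.
    v.clear();
    assert!(v.capacity() >= 1_000);

    // Only shrink_to_fit() actually returns the memory.
    v.shrink_to_fit();
    assert!(v.capacity() < 1_000);
}
```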
> Rust makes it easy to avoid UAF errors, but the coding patterns it promotes to make that happen, especially when trying to write fast, predictably performant data structures, strongly encourage the formation of leaks -- can't have a UAF if you never free.
There are so many cutting-edge performant concurrent data structures available on crates.io that let you do cool stuff with respect to avoiding UAF and not leaking memory when you really need it. And other times you don’t need to worry about concurrency, and then the leak and UAF concerns go away too. And again, I feel like a higher bar is being used for Rust; it doesn’t feel like Zig or other languages really offer more ergonomic solutions.
You can build with runtime checks that help find all the issues. It's surprisingly effective, probably more effective than actually doing it in the type system.
> everyone is suggesting to move to memory safe languages when possible
Be careful not to believe your own hyperbole. Some people are loudly and persistently recommending other people to use memory safe languages. Rust may be quite popular lately but the opinions held by some subset of that community does not reflect the opinions of "everyone". It would be just as silly to say: "everyone is suggesting to move to OSS licenses".
> sholdn't [... new projects ...] be done in a memory safe language
Again, please be careful to understand where you are getting this "should". What happens exactly if you don't choose a memory safe language? Will the government put you in jail? Or will a small vocal community of language zealots criticize you?
Maybe you feel like you want to fit in with "real" programmers or something. And you have some impression that "real" programmers insist on memory safe languages. That isn't the case at all.
In my experience, making technical decisions (like what programming language to use) to avoid criticism is a really bad path.
Zig isn't a memory safe language, but it does have memory safety features. Theoretically it's safer than C but isn't as safe as Rust.
For example, you can't overflow buffers (slices have associated lengths that are automatically checked at runtime), pointers can't be null, integer overflows panic.
Not in ReleaseFast mode, where both signed and unsigned overflow have undefined behaviour.
And there's also the aliasing issue. If you have something like

    fn f(a: A, b: *A) void { b.* = something; }

which value does 'a' have when f is called as f(x, &x)?

(not sure about Zig's syntax).
That said I agree with your classification (safer than C but isn't as safe as Rust)
Zig doesn't provide any rationale for why it picked UB rather than wrapping. By default Rust's release builds make integer overflow wrap, so (1u8 + 255u8 == 0u8) rather than panicking, to avoid paying for the checks.
This is probably not what you wanted; your code has a bug. (If it was what you wanted, you should use the Wrapping type wrapper, which says what you meant, rather than just insisting this code be compiled with specific settings.) But you didn't have to pay for checks, and your program continues to have defined behaviour, like any normal bug.
It is very rare that you need the unchecked behaviour for performance. Rare enough that, although Wrapping and Saturating wrappers exist in Rust, even the basic operations for unchecked arithmetic are still nightly-only. Most often what people mean is a checked arithmetic operation, where they write code to handle the case where there would be overflow, not an unchecked operation. Rust even has caution notes to guide newbies who might write a manual check, pushing them towards the pit of success: instead of your manual check and then unsafe arithmetic, why not use this nice checked function, which, in fact, compiles to the same machine code.
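For reference, a sketch of those intent-revealing operations, all stable Rust:

```rust
fn main() {
    // Release builds wrap by default (overflow-checks off), but the
    // methods below always state the intended behaviour explicitly.
    assert_eq!(255u8.wrapping_add(1), 0);       // explicit two's-complement wrap
    assert_eq!(255u8.saturating_add(1), 255);   // clamp at the type's maximum
    assert_eq!(255u8.checked_add(1), None);     // overflow surfaced as None
    assert_eq!(254u8.checked_add(1), Some(255));
    // Wrapped value plus an overflow flag, for when you need both.
    assert_eq!(255u8.overflowing_add(1), (0, true));
}
```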
> By default Rust's release builds make integer overflow wrap, so (1u8 + 255u8 == 0u8) rather than panicking, to avoid paying for the checks.
I consider that to have been a mistake, and hopefully one we can change. Note that this is about defaults; you can build your own project in release mode with overflow panics. I wish the language had a mechanism to select the math overflow behavior in a more granular way that can be propagated to called functions (in effect, I want integer effects) instead of relying exclusively on the type system:
fn bar(a: i32, b: i32) -> i32 where i32 is Saturating {
a + b
}
fn foo(a: i32, b: i32) -> i32 where i32 is Wrapping {
// the `a + b` wraps on overflow, but the call to
// bar overrides the effect of the current function
// and will saturate instead.
a + b + bar(a, b)
}
With this, crates could give their callers control over math overflow behavior without having to add type parameters, with bounds on something like https://docs.rs/num-traits/0.2.19/num_traits/, to every API.
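For comparison, the purely type-based tools that exist today are the Wrapping and Saturating newtypes in std::num (Saturating needs Rust 1.74+), roughly:

```rust
use std::num::{Saturating, Wrapping};

fn main() {
    // Wrapping: overflow wraps around modularly, by declared intent.
    let w = Wrapping(u8::MAX) + Wrapping(1);
    assert_eq!(w.0, 0);

    // Saturating: overflow clamps at the numeric bounds instead.
    let s = Saturating(u8::MAX) + Saturating(1);
    assert_eq!(s.0, 255);
}
```

The effects idea above would let a caller pick this behaviour per call chain instead of threading these wrapper types through every signature.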
When you say it's a mistake (in your opinion) do you mean that you'd have picked panic in release builds by default? Or do you think Rust 1.0 without full blown effects was the mistake and so you'd actually want effects here and no smaller change is worthwhile ?
Personally I'm not as bothered about this as I was initially, whereas I'm at least as annoyed today by some 'as' casts as I was when I learned Rust -- if I could have your integer effects or abolish narrowing 'as' then I'd abolish narrowing 'as' in a heartbeat. Let people explicitly say what they meant, if I have a u16 and I try to put that in a u8, it will not fit, make me write the fallible conversion and say what happens when it fails. This strikes me as especially hazardous for inferred casts. foo as _ could do more or less anything, it is easily possible that it does something I hadn't considered and will regret, make me write what I meant and we'll avoid that.
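A quick sketch of the difference: narrowing 'as' silently truncates, while the fallible conversion makes you say what happens when the value doesn't fit:

```rust
fn main() {
    let big: u16 = 300;

    // `as` silently truncates: 300 mod 256 = 44, no warning, no error.
    assert_eq!(big as u8, 44);

    // TryFrom surfaces the misfit as a Result you must handle.
    assert!(u8::try_from(big).is_err());
    assert_eq!(u8::try_from(200u16).unwrap(), 200u8);
}
```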
> Zig doesn't provide any rationale for why it picked UB rather than wrapping
There's no need to provide a rationale because it's obvious, from a performance POV:
1) (a) UB on overflow > (b) wrapping on overflow
2) (b) wrapping on overflow > (c) trap on overflow
So when you create a language you have to pick a default behaviour; Zig allows either (a) or (c), via ReleaseFast and ReleaseSafe.
(1) is because this allows the compiler to do "better" optimisations, which unfortunately can create lots of pain for you if your code has a bug.
(2) is because these f.. CPU designers don't provide an 'add_trap_on_overflow' instruction so at the very least the overflow check instruction degrades the instruction cache utilisation.
Alas no, you've written a greater than sign but you'll find in reality it's often only the same. But you've significantly weakened the language, so you just made the language worse and you need to identify what you got for this price.
On the one hand, since you didn't promise wrapping, in some cases you'll astonish your programmers when they expected it and you don't provide it; on the other hand, since you can't always get better performance, you'll sometimes disappoint them by not going any faster despite not promising wrapping.
This might all be worth it if in the usual case you were much faster, but, in practice that's not what we see.
One can reasonably argue that the only reason why people expect wraparound is because it was the default in C, not because it actually makes sense. If the code actually depends on wraparound to produce the correct result, making that explicit in the operators, as Zig does, is surely a better choice, not the least because it gives people reading the code a clear indication that they should be paying attention to that. OTOH most code out there in the wild treats it more as a "never gonna happen" situation and doesn't deal with it at all, which isn't really made any worse with full-fledged UB.
Integer wrapping on overflow is not just a C thing, it happens at the hardware level as part of ALU instructions. It's actually kind of difficult to come up with a different behaviour that makes sense. Saturating arithmetic requires additional transistors.
It happens on hardware level for a single opcode, sure, but a 1:1 mapping between such an opcode and arithmetic operators in a high-level PL isn't a given, especially in presence of advanced optimizations.
In any case, PLs don't have to blindly follow what the hardware does as the default. Many early PLs did checked arithmetic by default. Conversely, many instruction sets from that era have specific opcodes to facilitate overflow checking.
The reason why we got it in C specifically is because of its "high-level PDP assembly" origins.
It’s worth pointing out that Zig also just straight up gives you wrapping and saturating adds with ‘+%’ and ‘+|’ operations and same for other arithmetic operations.
I'm relieved that they decided to remove this trap as it could really have been a nasty one (worse than integer overflow because you can just use ReleaseSafe)
It really isn't. It's undermined by all the metadata you need for its safety model, and that's part of the semantics. You cannot create an alternate frontend for Rust that gets rid of all the parts people hate.
The recent story about them switching to self hosting makes me feel like they are a particularly efficient project that will not waste donation income.
Are you referring to the Zig compiler switching to become self hosted [1], or the Zig website switching to becoming self hosted [2]? (I assume the latter.)
Probably referring to Zig migrating away from AWS to become more cost efficient [1]. This was also mentioned in the announcement regarding this donation from the Zig Software Foundation [2]
They better sign their packages and check for signatures upon download and installation. At their scale, foregoing S3 means they have to figure out how to deal with bit rot in packages themselves.
They're self hosting docs and Zig releases. There is no central package repository. The decentralization (similar to Golang) is one of the things I like about Zig.
"Honey, so there is this programming language I really and I wanna talk to you about it"
"Yes?..."
"Well, I want to do a donation since I really really like that language..."
"......... not again .........."
Definitely good news, but to put it in perspective, that's about 0.75x - 1x annual salary for an experienced developer that works on compilers. My *guess* is that Microsoft spends at least 10-20x that amount of money on TypeScript annually, and much more on C++/C# etc.
There are developers that earn that much, and the probability goes up at MS, Mozilla, etc., but I think a lot of qualified, experienced compiler developers exist who are earning less than 150k a year when you consider the global market and smaller compilers. But the overall cost of a developer is not just their salary.
Really though, I've found jobs similar to compiler developer for an ANSI-standard compiler at big tech to include a lot of hazard pay for how disagreeable the job actually is compared to one with more freedom.
My guess (and it's just that) is that a role on a newish, high-potential project like this would be quite something to land for the right person (i.e. they're already sold on Zig), and some reasonable funding allowing them to not have to work two jobs or only do nights/weekends is the enabler.
Honestly, I'm very excited about Zig. Lean and mean, and it's written by someone who isn't sitting in an ivory tower ignoring actual usability. It wasn't designed by a team of PhDs (like, e.g., Haskell), while it's clearly inspired by very useful ideas in e.g. Rust, Haskell, etc. I think it'll be very exciting to write code in Zig.
Similarly excited about Zig. While its memory safety guarantees may not be as comprehensive as Rust's, I hope we'll one day see Zig in the Linux Kernel. I suspect the old-time kernel C programmers may have an easier time with Zig than Rust.
Once Zig compiles Linux, there are basically three compilers that can do so: GCC, Clang (LLVM), and Zig (less and less LLVM).
I expect there to be good tools to port C code to Zig (much harder for C->Rust), especially with the advancements in LLMs lately. I would not be surprised if that resulted in a Zig Linux code base in the coming 5 years. Sure, the C-based kernel may be the hotbed of innovation for years to come.
I'm willing to cop the flak from the Zig brigade for this take, but I'll get excited for Zig again when I'm allowed to turn `zig fmt` off in e.g. vim or VS Code.
It leaves me with a sour taste in my mouth when stylistic preferences are made mandatory; it is disrespectful of the coders for whom the language is a tool. It can also be a canary for deeper issues around community engagement and openness to different viewpoints (a problem for Zig, e.g. [0]).
A business with code uniformity requirements is more than capable of running a linter, and for my weekend projects, I don't give one toss about anyone's stylistic preferences but my own. Either Zig is a language for grown-ups, or it isn't. And if I'm going to be forced to code a certain way, why not just use Rust and get free memory safety out of it too?
2. Personally I'm happy Zig has a BDFL. Even though that #16270 issue has some controversy, it's clear Zig has a consistent direction and goal. It's not design-by-committee and doesn't get stalled for years on the tiniest of issues while the community bikesheds for eternity.
Can I? How? There is no setting I can see in e.g. VS Code to disable format-on-save. (Not trying to sound snarky here, I'm legitimately open to advice on this.)
> it's clear Zig has a consistent direction and goal. It's not design-by-committee and doesn't get stalled for years on the tiniest of issues while the community bikesheds for eternity.
Anyone can have direction. 'There, towards the copse of stinging nettles!' The tricky part is figuring out a good direction, which sometimes requires pauses and stakeholder engagement. You know, that thing that the 'move fast and break things' crowd loves to deride as 'bikeshedding'. :)
In as far as Zig's direction appears to be 'we will rewrite LLVM, but better-er!', I do worry. I'd hate for Zig to end up like Elm.
editor.formatOnSave will control it for all files. You can also use editor.defaultFormatter to change which formatter is used. You can set these for all languages or for specific languages[1].
Regarding competing with LLVM: I'm happy to see others try. Cranelift is a nice example of finding a niche that LLVM isn't filling, and I'm glad people didn't prematurely give up simply because LLVM already exists. Zig's goal is definitely ambitious, and there are risks. But in principle I'm happy to see someone pursuing these lofty goals because that's what ultimately creates incremental progress in the industry. If Zig fails... well, I'd still be happy they at least tried.
> editor.formatOnSave will control it for all files. You can also use editor.defaultFormatter to change which formatter is used. You can set these for all languages or for specific languages[1].
Ah, I see. The Zig module silently overrides the user's editor.formatOnSave setting by default; this is what I was missing. I need to specifically override Zig's override:
"[zig]": {
"editor.formatOnSave": false
},
Thank you!
> Regarding competing with LLVM
In theory, I have no problem with this either, but in practice, this is a big gamble for Zig as a whole. A language lives and dies on perceptions, and currently Zig's killer feature is that it is an easy slot-in incremental replacement for existing C/C++ codebases. This plan intends to break that by default. (I realise AK has walked that back somewhat, it will remain an option, etc - but considering this whole thread is people telling me formatting must be strictly mandated, surely you'll grant the power of defaults and the risk in breaking them.)
Ultimately, I am open to being proven wrong, but I've seen some of the same patterns in Zig that have broken other newlangs. Killer features that go under-appreciated by the leadership, a focus on purity at the expense of practicality, 'trust the plan', etc. My fear would be that the hype tide will go out, as it always does, and Zig will be left without any obvious niche, somewhere mid-LLVM-rewrite. But hey, we'll see. I wish AK the best of luck with it.
The primary justification for "you can have any color as long as it's black" approach to coding style & formatting is precisely that the language is a tool, not an art project. Having a single standard well-defined style does wonders to prevent bikeshedding when teams adopt the tool, which is why this approach is increasingly popular (e.g. Black in Python).
And I don't think you can meaningfully compare this to constraints imposed by Rust, which aren't about where to place a curly brace etc, but about not being able to (easily) model some data structures and algorithms. You could argue that both represent a form of tax on your freedom as a developer, but even if so, it's an orders of magnitude difference.
> The primary justification for "you can have any color as long as it's black" approach to coding style & formatting is precisely that the language is a tool, not an art project. Having a single standard well-defined style does wonders to prevent bikeshedding when teams adopt the tool, which is why this approach is increasingly popular (e.g. Black in Python).
I have no problem with standards. There are times where standards are useful. The thing about standards is that they can also be ignored where appropriate. It's notable that no one is seriously pushing for Black to be made mandatory for Python, for extremely obvious and sensible reasons. Also, 'increasingly popular' is doing some heavy lifting in that argument. The vast, vast, vast (vast!) majority of Python code will never ever use Black, and that is fine.
A language, as we agree, is a tool, and it is a poor tool indeed that refuses to lend itself for use in creative ways.
> And I don't think you can meaningfully compare this to constraints imposed by Rust, which aren't about where to place a curly brace etc, but about not being able to (easily) model some data structures and algorithms. You could argue that both represent a form of tax on your freedom as a developer, but even if so, it's an orders of magnitude difference.
Very different - Rust's constraints actually serve a purpose beyond merely enforcing stylistic conformity. If I am to take on the added cognitive load of coding a certain way, I might as well actually get something out of it.
It's not like the Zig compiler will refuse to compile unformatted code, either.
And the vast majority of Python code was written before Black was a thing. However, once it appeared, it spread through the ecosystem very quickly. At this point I wouldn't be surprised if most people using Black don't even know that they do so simply because they write their Python in VSCode, which suggests Black (and will install it for you) if you try to do Format Document or enable format-on-save.
> It's not like the Zig compiler will refuse to compile unformatted code, either.
I mean... the use of tabs or LF+CR / CR line endings was a compiler error, last I checked. So, yes, it is exactly like that. And this was a deliberate choice to introduce friction for people who don't hew to the author's stylistic preferences.
> However, once it appeared, it spread through the ecosystem very quickly.
Uh huh, sure, a trendy hipster linter that appeared in 2019 is now so standard that Python code is nigh unthinkable without it. We marvel in the museums at what Python used to look like! There will definitely not be another trendy hipster linter in a couple of years with totally different opinions! :P
Coming from Python, and now mostly developing in Go, the uniform style in Go has really helped me familiarize myself with new codebases, especially in contrast with Python's variety of styles.
You're implying I'm not a grown up, but characterising a language created in 2016 - current version zero point something something - as immature is 'extreme' to you? C'mon now.
Zig is immature. That's not some conclusive judgment against its utility now and forever, it is simply a function of the amount of times the Earth has orbited the Sun since its creation.
Given how Zig is positioning itself vis-a-vis C, Rust, etc., it is somewhat baffling to me how little respect it seems to have for the opinions and capabilities of its end users.
People who want their hands held have better choices than Zig, and people who want the freedom to code how they wish... also have better choices than Zig. I think Zig is onto something, but hype cycles always fade, and if Zig hasn't matured by that point, there won't be any obvious niche for it to occupy.
You characterized Zig as "not a language for grown-ups" because it enforces a consistent syntax. I was expressing the opposite opinion. You seem to be objecting to a whole bunch of things that I didn't say. I'm just a guy who likes consistent code formatting and good faith arguments.
Formatting is a very weird thing to get hung up on, particularly when a language like Go is reaching mainstream (if it already isn't there). It's fine not to like the chosen formatting or the decisions made upstream but that comes with the territory. It just means that languages like Go, Zig, and others following this trend are probably not for you and that's okay.
> how little respect it seems to have for the opinions and capabilities of its end users
I think you misunderstand the objective here. As I understand it, it means that all code written in the language looks the same. This significantly improves readability and as we all (should) know, code is read much more than it is written. It's not about deducing the capabilities of the end users but more about reducing the cognitive load while having to read or write in the language itself.
If you still decide to write in one of these formatting-defined languages, it would probably be best to keep the repository and/or project private to avoid the barrage of "ran fmt on the code" pull requests sure to crop up. It would save all parties from a lot of frustration.
I have no problem with style guides or linters. I have a problem with a compiler that deliberately emits compiler errors on the use of \t, just as an example, because the author likes to throw Lego bricks under your feet if you refuse to obey his stylistic preferences.
I could explain where style guides can become a problem - usually in extremely low level code, emulation, legacy interop, etc. - and therefore need to be relaxed or ignored, but this would divert us onto a discussion of stylistic preferences, and that's not my chief concern here. My concern is the contempt for people's different needs and use cases, including edge cases, which is indicative of immaturity.
> If you still decide to write in one of these formatting-defined languages, it would probably be best to keep the repository and/or project private to avoid the barrage of "ran fmt on the code" pull requests sure to crop up. It would save all parties from a lot of frustration.
That's a rather patronising comment. I feel no frustration closing low effort PRs, and I'm honestly somewhat amused by this idea of living in terror of a 'barrage' of "I linted your code for you!"s.
I tried Nim, since it felt easier to use than C for low level stuff.
I don't really like the syntax, though. Python barely does Python-like syntax right.
Is Zig a good alternative? I vastly prefer higher-level languages like C#. Haxe has a special place in my heart, but it's not supported outside of a few game engines.
If you like higher level languages like C#, you are not going to like Zig, except the surface similarity in syntax.
Not liking the syntax is not enough of a reason to avoid a language. It takes a few days to get over unfamiliar syntax; concepts are much harder to learn.
What syntax are you looking for? If you want C syntax, D will be the closest (most valid C code is also valid D code). If you want Ruby syntax, there's Crystal. Zig feels more verbose to me. For example, there are no multiline comments and no operator overloading, which kind of got to me when I tried Zig. This is, of course, purely subjective. Some people like the Zig syntax.
True, but you mentioned Nim earlier, and this is a discussion of Zig, which hasn't even reached a stable release. D is an old, stable language that's still under heavy development. It's used by some companies and is able to support an in-person annual conference. I have no concerns about the code I write today working ten years from now.
There are more similarities at the lower level than you think. Once you start writing structs that use generics to specialize their allocator (as in, for really hands-on memory management), it starts looking similar, much like when you write portable SIMD code - and I should commend Zig for having an API for that similar to .NET's.
> I vastly prefer higher level languages like C#. Haxe has a special place in my heart, but it's not supported outside of a few game engines.
Where is C# not supported? It’s an incredibly versatile language. You even have a bunch of features to go “low-level” if needed (not as low-level as C, of course; you still have the CLR): Span<T>, ref returns, ref structs, function pointers, the unsafe keyword.
Would you be ok sharing more details how you combine both? I’ve been experimenting with this for a personal project, but integrating zig in a .NET solution has been fairly messy. Would love to read others experiences doing so successfully
I should be more precise: I don't use them in the same project. I'm a C# professional, but I use Zig for simple embedded programming projects and for some tools that don't require a complex runtime. If I had to glue them together, I would probably compile the Zig code to a WebAssembly component and use it in a C#-hosted environment.
Using wasm for this sounds like a difficult and wasteful approach.
Is there a reason you are avoiding a simple compilation of Zig part of codebase into a dynamically or statically linked library and calling into it with P/Invoke? You only need a `Target` and maybe `None Include=...` items in .csproj to instrument MSBuild to hook the building process of Zig into dotnet build and co.
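A minimal sketch of such a hook, assuming a hypothetical `src/native.zig` and that `zig` is on the PATH (the target name, source path, and library name are illustrative, not from the thread):

```xml
<!-- In an SDK-style .csproj: build the Zig library before the managed build.
     "BuildZigNative", src/native.zig, and libnative.so are hypothetical names. -->
<Target Name="BuildZigNative" BeforeTargets="Build">
  <Exec Command="zig build-lib src/native.zig -dynamic -O ReleaseFast" />
</Target>
<ItemGroup>
  <!-- Copy the produced library next to the managed binaries so P/Invoke can find it -->
  <None Include="libnative.so" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>
```

With something like this, a plain `dotnet build` rebuilds the native half too, which is the integration the comment describes.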
Yes, P/Invoke should work, especially when you are targeting a single platform. However, for multiple platforms, there might be some unexpected obstacles I am not aware of.
The answer to this is to map the RID to the argument passed to Zig. Or just build on a target platform as an alternative, if possible. WASM is not a solution and would not work properly. It is the last resort effort in language with inferior interop capabilities.
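A sketch of that RID-to-target mapping (the RIDs are standard .NET runtime identifiers; the Zig triples follow its arch-os-abi scheme, but verify exact spellings against `zig targets` - this table and the helper are illustrative, not from the thread):

```python
# Map .NET runtime identifiers (RIDs) to Zig cross-compilation targets.
# Illustrative subset; verify triples against the `zig targets` output.
RID_TO_ZIG_TARGET = {
    "linux-x64":   "x86_64-linux-gnu",
    "linux-arm64": "aarch64-linux-gnu",
    "win-x64":     "x86_64-windows-gnu",
    "osx-x64":     "x86_64-macos",
    "osx-arm64":   "aarch64-macos",
}

def zig_build_command(rid: str, source: str = "src/native.zig") -> list[str]:
    """Build the `zig build-lib` argv for a given .NET RID."""
    target = RID_TO_ZIG_TARGET[rid]
    return ["zig", "build-lib", source, "-dynamic", "-target", target]
```

Calling `zig_build_command("linux-x64")` then yields the argv for a glibc Linux build of the hypothetical `src/native.zig`, which a build script can run once per RID it ships.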
I'm just putting up a disclaimer that using WASM is the wrong kind of suggestion. It would likely not work the way you expect (you would have to use WASM for .NET too, which is experimental in many places and a huge performance killer), and no one does it. There are appropriate ways to target multiple platforms in a solution that splits logic between .NET and C/C++/Rust/Zig/Swift/etc., especially since Zig offers a nice cross-compilation toolchain. Mind you, the use case for this is accessing language-specific libraries; for performance, the solution really is writing faster C# instead.
It depends on the integration use case. For example, if I were writing a plugin system for my C# app, safety would take precedence over performance, and using WASM modules would make more sense. If I had some performance-critical code in Zig, then P/Invoke would be the way to go. However, in most cases, it's better to avoid P/Invoke, as C# is already a very performant language.
IMHO the advantage of zig isn’t performance but generating a minimal library that exports C headers, making it simple to integrate into any language. My use case is a custom document editor in zig, with a “bring your own renderer” approach. It integrates into a C# desktop app, as a base for something like a modern RichEditBox (just in spirit - not RTF-based, and with way more advanced features).
I want the editor to be usable in other GUI stacks, a C-compatible library is the only approach that makes sense here
Yeah, that’s what I personally do, but maintaining the P/Invoke interface during development adds a good amount of overhead as things change around. It’s also a lot of boilerplate to deal with, and you only notice at runtime that something is wrong. It doesn’t feel like a well-integrated solution.
I strongly caution against WASM suggestions in a sibling comment - I’m not even sure if the author has actually done any C# at all, given how ridiculous it is.
Awesome, thanks for the link, that’s really useful!
WASM is definitely a strange suggestion here, I didn’t take it seriously. I’m already using a C-compatible zig library approach. Some details of the use case here:
https://news.ycombinator.com/item?id=41729059
There was a port of the .NET nanoFramework for the RP2040.
But I think the more interesting thing is that if you remove all features in C# that require heap allocations, the resulting subset is basically C with namespaces and generics, which is still useful, and certainly possible to compile efficiently even for very constrained platforms.
C# is quite a bit more than just enhanced C even if you remove the GC-reliant features: generics and interface constraints enable a huge subset of the language, alongside the SIMD API, stack-allocated buffers, and all sorts of memory wrappable in Span<T> - which almost every buffer-accepting method in the standard library nowadays takes instead of plain arrays.
You can also manually allocate objects - there are multiple ways to go about it and even with the presence of GC, there is a "hidden" but supported API to register a NonGC heap that GC understands when scanning object references.
Though effective targeting is limited to a much smaller set of platforms. Mono can target as low as ARMv6, but CoreCLR/NativeAOT mainly work with x86, x86_64, ARM, and ARM64. For microcontrollers you are better off using Rust, in my opinion. But for anything bigger, .NET can be a surprisingly capable choice.
Heh, I've been procrastinating on making a blog and writing a blog post on "C# for systems programming". Would be nice to read and provide feedback if someone beats me to it.
It is a general purpose programming language that should be able to do anything you want, whether it works for you depends on what you want to do, the ecosystem might be lacking since it's a young language. Just try it out!
We have a local developer environment, and we quite often deal with Python, Go, or some brew package version not being correctly installed before starting the tools.
I use Zig on Mac, Windows and Linux (but most on Mac). It works without issues (also as a Clang compatible C/C++/ObjC compiler and linker replacement).
I would recommend managing the Zig installation through a tool like zvm (https://github.com/tristanisham/zvm). This lets you easily update to the latest dev version and switch between stable versions (similar to rustup or nvm).
The other install options work too, of course (install via brew - although in the past this was a bit brittle - or download and unpack prebuilt archives: https://ziglang.org/download/), but those options are not as convenient for switching between the stable and dev versions.
Do you mean using Zig to compile the native libraries for python, go, etc? Sure, the build scripts would need to be updated to use `zig cc`. Zig is already packaged for brew so they could add it as a build dependency.
But it is still pre 1.0 so expect new versions to break old-ish code. I'd say package managers should wait for the 1.0 release.
It sounds like something that could support a full time developer for some years.
I hope at some point it can be added to standard Linux distributions like Debian or Red Hat through their official apt/yum packages.
So far there is no Red Hat package, and there's integration with Ubuntu's snapcraft for some reason.
I realize there are dozens of distributions to support, but these are the two most foundational to my understanding, and it speaks to the immaturity and lack of system-level usage that the compiler is not released/vetted by OS distros.
The language and compiler are on 0.x, so packaging it in long-term-support distros can be more problematic than helpful at this point (source that compiled a year ago is likely broken now, and vice versa). Once it has reached 1.x, it won't take much to get it prepackaged everywhere.
Until that point it doesn't really make sense to pull it from the repos of distros that are packaging it right now anyways. E.g. it's packaged on Fedora but you most likely don't actually want to rely on that package for the moment.
For example, PipeWire in Debian Stable is at 0.3.65 and probably won't get major updates until the next Debian release. However, that's only part of the picture: Its early presence in Debian has paved the way for updates via the Backports suite, which is available to Stable users with a flick of a switch. With Backports enabled, Debian Stable has access to PipeWire 1.2.4, released upstream less than two weeks ago.
This could be done for Zig as well. (Assuming Zig meets the Debian Free Software Guidelines. I think its recent move to a WASM blob for bootstrapping might complicate inclusion in Debian.)
> For example, PipeWire in Debian Stable is at 0.3.65
Pipewire did/does not follow strict semver, so you can't compare the two just because both versions start with "0":
- Pipewire had offered ~3 years of API and ABI compatibility on what it called "0.3.x"; zig intentionally plans to break compatibility multiple times a year as it increments 0.x.
- Pipewire already planned to keep API/ABI compatibility between 0.x and 1.x; zig explicitly does not - they label the releases 0.x to signify that these will break by 1.x.
- Pipewire called 0.3.x stable for use despite rapid development; zig is also under rapid development but says the opposite in regard to stability guarantees.
I.e. the issue is less "how do you get a newer version on stable Debian?" and more "it's an explicitly unstable package with no version compatibility guarantees, and distros don't like packaging that in their stable repositories".
The easy answer to "be prepared" is, more or less, to keep it in the dev repositories, like Alpine does, where there is no guarantee of stability or interoperability across updates. When zig declares itself stable (which, in its versioning scheme, will be 1.0), it can be added to stable without much work.
What's more, there is precedent with regards to Debian: Go is backported to Stable via the golang metapackage. The main package for Zig can follow the same structure and also be a metapackage with versioned dependencies.
> I think its recent move to a WASM blob for bootstrapping might complicate inclusion in Debian.
As I understand, it would mean that Zig ends up in the non-free component rather than main or contrib.
Whether or not some distributions have jumped the gun, it remains a good reason. No harm for the many entries in that list that are really just delivery mechanisms or development versions themselves, though.
For the ones that are green they tend to be development versions (e.g. Alpine Edge, ALT Sisyphus, LiGurOS develop), rolling releases where there aren't necessarily those kind of package stability/interop guarantees in the first place (e.g. Arch, Manjaro, OpenSuse Tumbleweed), or not actually distros at all just alternative download mechanisms (e.g. Chocolatey, Chromebrew, Homebrew, Scoop). There are very few (such as Fedora 40) that just happen to be "broken clock" status for the moment because they are very fresh spins.
For the ones that are red, they are the examples of why you don't want to rely on the built-in package at the moment. Even for the ones that are green "for the moment" (such as the Fedora 40 example), it's often still considered better to use the latest "master" copy of zig (depending on what you're doing with it) than the last milestone release even then.
That list says it's already in Fedora and openSUSE. The fact that they both have older versions is actually good, since it means they've been packaging it for a while. Once that list no longer marks outdated versions as red, zig will be ready for the long-term-supported distributions.
I wish them well, but I was hoping, given their general ethos, that they'd forgo the inferred typing. I spend way too much time in my current work debugging people messing up types. I'm sure they'll give me excellent reasons for it, but `var x = 1; var y = 2.0; var z = x * y; # z is a float right?` stuff always seems to sacrifice a lot of explicit, readable code for little gain.
Your comment made me chuckle. It was an interesting inclusion, considering she appears to be in an industry not usually associated with Zig enthusiasts (entertainment). I'd have thought they individually had enough financial independence that a solidarity statement wasn't required, but it's neat to share that your partner is excited about the things you're excited about.
Well, his wife is Amy Okuda who, though an actor, is very technical and, for lack of a better term, "geeky" in her private life. https://twitter.com/amyokuda
My donor-advised fund has given a lot of money to open source. I highly recommend that people who can benefit from donations of assets create a donor-advised fund, so they can get tax benefits and then direct the money exactly where they see fit.
As a tech founder who’s still deeply involved, I wonder if Mark Zuckerberg has thoughts on Zig. I recall him mentioning in a recent interview that he would rewrite Facebook in Python instead of PHP. It's impressive to hear that from the CEO of a multibillion-dollar company.