Almost half the comments here are just to remind everyone that C can/will never be replaced. Is that really the most insight we can offer? Is HN so pedantic now that we can't analyze an interesting language idea, but just nitpick at its existence?
1. Make a language and compiler that fills the same niche as C but does it significantly better, taking into account all the lessons we've learned in the last 44 years.
2. Make a build tool that competes with autotools, cmake, make, etc. Similar to Python build.py, you'll have a build.zig, import a library and use it. The library does everything configure/make does except better.
3. Package management on par with NPM and Cargo. Minimal overhead to publishing reusable code. Although I don't plan to have a central repository, rather the ability to put a URL or source control URL with a hash.
4. Zig can import and export .h files, and you can target .o files that are compatible with the rest of your C codebase.
5. Now there is minimal cost to replace bits and pieces of your codebase with Zig.
All of those points sound fantastic and zig is one of the most interesting things I've seen on HN in a while.
Still I can't help but ask one question: Your points 2 and 3 (build tooling and package management) seem like they would have the biggest, immediate impact for most current c/c++ users.
Did you ever consider separating the build tooling and dependency management effort from the core language?
I, for one, would be extremely interested in sane build tooling and dependency management for c/c++ (and what you describe sounds great). I would give it a try in no time. However, I'd be much more hesitant to consider a new language for a serious project. Even if the language is tip top, I would still be concerned about whether it is likely to be around and still relevant in X years. I think build tooling has less of a chicken-and-egg problem to adoption because you could always switch to another tool later. And once somebody uses your build system, you're already halfway to using the full thing.
Plenty of us know it can be done, and is an extremely worthwhile goal; keep up the fantastic work!
Zig may be the language I've been waiting for, so I'll make it the next language I learn, although I realise it's not even reached a first release.
2, 3: Or just use make and OS package managers. Don't make users learn yet another tool.
The niche of C is Unix, which is here to stay. C is a part of Unix: its interfaces are defined in terms of C, and a plethora of libraries are written in it. If a random library X is implemented in C, it is easily usable in many languages via FFIs. Programs written in C are easily portable to most popular platforms, and C knowledge is transferable from supercomputers to tiny embedded devices. C translates nicely to assembly, and it maps easily to the mechanics of computing. These are the areas where languages like Zig need to penetrate.
> Or just use make and use OS package managers. Don't make the users learn yet another tool.
OS package managers fulfill a different task than development package managers that are bundled with programming languages.
The OS package manager provides a consistent set of libraries that is required to run the applications that ship in the OS.
A development package manager is used to bundle all your dependencies and pin them to specific versions so the build is reliable and reproducible. This sometimes involves things like being able to have several versions of a library in your dependency graph. Or sticking to a specific older version of a library you need (because their release schedule might not align with yours).
A development package manager might also take care of the compiler/interpreter version(s) of your programming language, for languages that develop rapidly. Heck, I'd even want this for "slow moving" languages like C, where I sometimes end up requiring a recent-ish compiler version for some extensions/intrinsics I might be using.
Apart from some fundamental libraries (say, OpenSSL, Xlib, libc), you should not rely on your OS package manager if you wish to reliably and reproducibly be able to build your software. The rest you should probably link statically or bundle with your app binaries depending on how you distribute your software. If your software ends up being packaged for a distro, it's up to the package maintainers to ensure that it'll build and run with the other libs in the OS.
I do agree that it is a nuisance to have a package manager for every language, but so far no-one has stepped up to try to make a unified package manager that works with many languages.
Points 1, 4 and 5 are exactly the same in C++ too. What makes you think you can do better than the guys who are driving C++ development? (Honest non-rhetorical question.)
Points 2 and 3 don't really belong in a programming language.
To the contrary, I believe that process improvements around a programming language will be more important than bare language features going forward.
If I were to build a language, a headline feature would be a package repository that enforces best practices (e.g. it should outright refuse to publish a minor version upgrade that breaks the ABI).
So what happens if you have a project that uses more than one programming language? (And what project nowadays doesn't? E.g., Python + C++ + Javascript + HTML + CSS is as simple as it gets for a larger-scale project today.)
Any 'process improvement' scheme will necessarily need to be language-agnostic to achieve popularity.
> Almost half the comments here are just to remind everyone that C can/will never be replaced.
As far as I can tell, this view is entirely consistent with the language author's own; he aims to leverage existing C codebases and gradually switch parts over where possible. That's embracing C rather than shunning it.
Most of which come from people who have never used anything else and think C was some kind of wonderful invention, when it was really two guys at AT&T who just wanted an easy way to implement a compiler, ignoring what everyone else had done before.
Had UNIX been a commercial OS instead of having its source code available for free, C would just be another footnote in the history of systems programming languages.
Interesting language. It somewhat reminds me of "ooc"[0], though I believe ooc is ref-counted/garbage-collected (I think; it's been a while since I've touched it). Being able to easily use C libraries from a relatively low-level language that has nicer high-level constructs is still a beautiful idea, in my opinion.
The language I'm using these days for that exact feature set though is Nim[1] -- being able to use {.compile.} pragmas and bring in header/source files, along with the great C type support is wonderful; but again, garbage collector (albeit one that can be tuned and/or turned off). Zig seems to be targeting the "true C replacement" niche, which I'm going to have to keep an eye on!
There are C library binding generators for Rust which do the boring work for you. It's not perfect, but rust-bindgen was pretty good last time I tried it (~15 months ago): https://github.com/Yamakaky/rust-bindgen
When a language has built-in support for easy binding to C, that's a positive. When other languages such as Rust offer that feature through external tools, that's not as good, since I need to evaluate which of those external tools to use (when there is more than one), and it is something else to download/install/learn and another step to add to the build process. So it looks to me like Zig handles this better than Rust does (of course, Rust has clear advantages over Zig in other areas: much more mature / widely used, and more advanced memory management.)
I think there exists, or work has been done on, a tool to auto generate that data for you. So not quite as simple, but possibly more fine grained control.
Rust seems very complex and C is more about simplicity (in the sense that K&R is a short read). Other languages with the same spirit of simplicity would be Tcl and perhaps Go. I just wish Go had more supported machine architectures.
"My qbe C frontend is actually written in myrddin which is a lot like rust/ocaml. To be honest, C is not well suited to places where there is adversarial input such as servers, but when it comes to logical correctness of code I do not see amazing benefits from languages like ocaml."
Did you mean two? Or did you come to new conclusions after that experience? Or did I misunderstand that entirely? ;)
There ARE benefits to correctness from better languages, just not as amazing as people think.
I'm just saying that a vulnerability in C is going to be an out-of-bounds exception in another language. In both cases the code is not correct; one is just safer than the other when it comes to exploits.
Keep making more languages, we can't know for sure if we don't try.
I can see where you're going but I don't think it's correct. There are clearly cases where an error will mess you up whether it goes as far as in C or not. Other times, the language or tooling prevents the error before runtime to ensure correctness. Are you aware of the benefits of Design-by-Contract (Ada/Eiffel), static proving (SPARK), or dependent types (ATS), though? The correctness criteria you can encode in them can straight-up prevent errors at interface or algorithmic expression levels. Three of these have been used for low-level code with two often in real-time and one for an 8-bitter. Depending on automation or interactive use involved, the errors caught at compile-time can increase pretty far past what a basic, low-level, type system can do.
So, we already know we can knock out extra classes of errors with such languages. It was proven in theory and in the field. Using or improving them is just good engineering. We can also make more languages in trial-and-error discovery process to see if we find more benefits. Exceeding C's benefits, though, is already empirically proven to be worthwhile whether it's a Myrddin, a SPARK, or an ATS.
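For readers unfamiliar with Design-by-Contract, here's a minimal sketch of the idea in plain C (the function and its conditions are made up for illustration). The crucial difference is that C's assert only fires at runtime, and only in debug builds, whereas Ada/SPARK can discharge the same conditions statically at compile time:

    #include <assert.h>
    #include <stddef.h>

    /* Contract: callers promise capacity > 0 and index < capacity;
       the function promises the result stays in bounds. */
    size_t ring_advance(size_t index, size_t capacity)
    {
        assert(capacity > 0);       /* precondition */
        assert(index < capacity);   /* precondition */

        size_t next = (index + 1) % capacity;

        assert(next < capacity);    /* postcondition */
        return next;
    }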
C code will continue to dereference pointers after object deallocation, access arrays out of bounds, and not complain about integer overflow. Even if input is not an RCE, you will silently get an incorrect result of computation. A safer language helps in cases other than security.
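A minimal illustration of that last point (numbers chosen only to trigger the overflow):

    #include <stdio.h>

    int main(void)
    {
        /* Averaging two large ints: the sum overflows, which is
           undefined behaviour for signed integers in C. A typical
           build silently wraps and prints a negative "average"
           instead of any diagnostic. */
        int a = 2000000000, b = 2000000000;
        int mid = (a + b) / 2;
        printf("%d\n", mid);  /* commonly prints -147483648 */
        return 0;
    }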
And there are newer languages that have this problem. I was shocked to see that I could produce a SIGSEGV in Nim within the first 15 lines of code that I wrote.
Why were you shocked? Is it because you're used to languages that prevent null values by default or do you have concerns about the safety of Nim because of this error?
Whatever the case, as far as I am aware, sooner or later Nim will prevent nil values by default.
A segfault on null is fine if the language defines it to be OK (i.e. dereferencing null is reliably a crash), but most environments that allow interacting with null (C, C++, LLVM IR) make dereferencing it undefined behaviour, i.e. the compiler may optimise in ways that mean code that naively results in a null dereference/segfault does not do that, and instead results in random memory corruption.
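A sketch of how that plays out in practice (hypothetical function, but GCC and Clang really do perform this class of optimisation at -O2):

    #include <stddef.h>

    int read_flags(int *p)
    {
        int flags = *p;     /* UB if p is NULL, so the compiler may
                               assume p != NULL from here on... */
        if (p == NULL)      /* ...and delete this check as dead code */
            return -1;
        return flags;
    }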
Note that Nim uses C as an intermediate language for compilation, so unchecked dereferences of possibly-null pointers are not memory safe.
Modifying Nim to perform a check before each dereference would be trivial. The reason this isn't done yet is because in practice unchecked dereferences do not cause random memory corruption, I have been using Nim for a while now and have never seen an instance where this was a problem.
Sure it is. It means your language and tooling allowed an unauthorized access that resulted in a segfault. Many don't. There are even system-level languages like Clay and Rust that can catch those things. Also, tools like SoftBound+CETS neutralize them in C code. The category is called "temporal memory safety."
Definitely a memory safety issue. Definitely worth preventing if possible. If not preventable, definitely worth handling better than mere segfaults.
No, it's the same as a .unwrap() in Rust, or malloc exiting when it fails. It means the language doesn't force you to handle all cases, not that it's "unsafe."
The point about it being like .unwrap is that the bug is a completeness bug.
My theory is that people have converted "memory unsafety can cause segfaults" into "segfaults are memory unsafe." Segfaults are actually the desired outcome, and initializing a pointer with null, instead of leaving it uninitialized is how that outcome is achieved.
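In C terms, the contrast looks roughly like this (illustrative only; strictly speaking both dereferences are undefined behaviour, but on typical platforms page zero is unmapped, so the NULL case faults immediately):

    #include <stddef.h>

    struct conn { int fd; };

    void uninitialized(void) {
        struct conn *c;         /* garbage address: may silently
                                   scribble over live memory */
        c->fd = 3;
    }

    void null_initialized(void) {
        struct conn *c = NULL;  /* reliably faults on first use */
        c->fd = 3;
    }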
The desired outcome is a program that either doesn't have the error or crashes safely with a report of exactly how it happened. Not segfaults or undefined behavior in general. The latter are simply consequences of C's design and programming style, rather than anything inevitable or ideal.
It's tradition that failures of a language's safety system to handle memory are "memory safety issues." However, the fact that they can be used for code injections in some scenarios is even more reason to think of them that way and make our languages prevent them where possible. Quick example:
Yeah, in C you could have a problem if there's a large object, but in that language you already have undefined behavior. Nim is not C, and bounds checking on arrays prevents that. (I am only skimming the docs, so tell me if that is incorrect.)
Segfaults can often be turned into an actual exploit; unwinding the stack cannot. It would be better to explicitly abort than to purposefully cause a segfault.
You are committing the exact generalization of segfaults=bad that I mentioned in the parent comment. Memory safety violations which cause segfaults can often be turned into an actual exploit. That does not mean the same thing as, "If A is a segfault, A can be turned into an exploit."
And it's not better to "explicitly" abort, because you get zero benefit from that. And because testing for null would be slow. (This is why JVM implementations often let it segfault.)
I didn't say that all segfaults are always security problems, just that they often are.
I think we'll have to agree to disagree here, though. Segfaults are memory safety problems, and unwrap is significantly different because it's not a memory safety problem.
You can "agree to disagree" about a definition, but we'll also have to agree to disagree on what properties the definition should have.
I'd say that if language L is memory-safe, then a language M which is like L, except that some programs terminate earlier, should also be memory-safe. We'll have to "agree to disagree" about that.
I'd say a compiler's choice of behavior-maintaining implementation technique should not determine whether a language is memory-safe. We'll have to "agree to disagree" about that.
I'd say a definition of memory safety should correspond to some mutually relevant notions of being safe (with consequences around security and ease of debuggability), and that it shouldn't include some kinds of unrelated notions about termination safety while omitting other kinds of termination safety. We'll have to "agree to disagree" about that.
> I didn't say that all segfaults are always security problems, just that they often are.
My mistake, I assumed you were trying to make a point that had some bearing on the discussion.
Rust? Maybe. But then, why not Go or Swift or Lisp? Personally, if I were to be expelled from C/C++ paradise, I would try Oberon - the only language I know that can compete with C in its simplicity and, despite normally being perceived as garbage-collected, the ability to serve as a truly systems programming language.
As for Rust vs Go or Swift or Lisp: Rust is much "closer to the hardware" (though as discussed in other threads, nothing is really close to the hardware anymore) than the other languages; Go and most Lisps use GC, which causes problems in many systems-language situations, and last I heard Swift still has lackluster cross-platform support.
I don't really know anything about Oberon so can't speak to that.
P.S. Personally, all my hobby projects are in lisp. I write applications though, not system stuff.
The A2 Bluebottle OS is the latest incarnation of the Oberon OS, with a GUI, and it is very responsive. Languages like Oberon can do unsafe stuff for low-level work like OSes. You just use manual memory management and the UNSAFE keyword. Everything else is GC'd, bounds-checked, etc. by default.
A few months ago, I considered Oberon as a viable alternative to C for my pet projects. However, after some time spent playing with it, I was scared off by a few serious limitations: (1) there is no support for UTF-8 strings, native strings are just sequences of ASCII characters, (2) lack of libraries and available bindings (e.g., SQLite, GSL, FFTW...), (3) scarcity of tools like package managers and Doxygen-like tools, (4) very few resources available on the web (Wirth's book is exceptional, but it does not cover use cases that are common in 2016.)
I suspect that Oberon wants you to roll your own text library. It's actually how C probably wants you to roll your own string library, too, because "C strings" are no better than Oberon's text facilities. The problem is that nobody knows how you want to work with your strings. There doesn't seem to be "the one correct way" to work with text. Gauche Scheme, for example, has an interesting take on strings (http://practical-scheme.net/docs/ILC2003.html): backing arrays are shared and substrings are O(1), the semantics of Scheme strings are preserved - mutability is still possible but expensive, thus also gently discouraging the programmer from using it.
Honestly, my first thought upon seeing GP's post was "why not Ada?". Designed by the DoD for safe systems programming, it wouldn't really be a bad choice.
Among other things, systems programming projects generally require stability of language syntax, API, and ABI.
In terms of ABI stability, Rust has none.
In terms of API stability, they seem willing to drop features within a major release cycle or two.
In terms of language stability, they seem to only guarantee stability within one major release (e.g. 1.x).
Rewriting for the sake of language updates is a huge resource drain when you're talking about code bases with lifetimes that are likely to be longer than their authors'.
For reference, just about every major shipping OS contains significant LoC dating back to the 70s or 80s.
> In terms of API stability, they seem willing to drop features within a major release cycle or two.
I don't think any feature/api that was at one point marked as stable post 1.0 (when they started making any guarantees) has ever been removed.
> In terms of language stability, they seem to only guarantee stability within one major release (e.g. 1.x).
While it's true that 2.x has the rights to break everything, this is incredibly dishonest to state in a vacuum, because it's a well established point that 2.x should never exist. The devs want all code to work forever, and only a catastrophe would push them to pull the lever. They have of course reserved the right to require fixes in special cases -- e.g. fixing a bug in the type checker might break some code, and that's acceptable. Although even then they'll try to phase in the fix over a few releases. Every release is tested against the entire crates.io ecosystem to try to catch these regressions.
You are absolutely correct on ABI stability, though.
Not necessarily. I suspect even in the case of 2.0, most if not all of those apis will survive. Deprecation markers exist to encourage people to use newer, better APIs.
I suspect that even if 2.0 happened, the only apis that would be removed would be ones that we're reasonably sure nobody uses (via crater), and even then there would be debate.
And like Gankro said, 2.0 is a thing that's not supposed to happen in the first place.
To re-iterate what gankro said, we care a _lot_ about making sure that it's trivial to update your Rust. Your ABI compatibility comment is correct, but the rest of them aren't. We only break backwards compatibility in very few cases: if there's a soundness hole, or if we have to fix something in certain underspecified areas of the language. For those latter ones, if it's not a trivial fix, then it's a no-go, and even if it is, we do warnings until we don't see them in the ecosystem any longer.
But really, it's not about rules-lawyering over technicalities about what our policies are: the underlying attitude we share towards new Rust versions is that it should be trivial to upgrade. And we collect data to make sure that we're on target there: part of it is the train-based release strategy, part of it is stuff like testing everything on crates.io, part of it is things like the community survey, where most people said their code has never broken, and if it did, it was extremely easy to fix.
True stability is impossible for almost any language out there, especially one with a nontrivial type system.
An example of a thing considered "trivial to upgrade" over is when the stdlib adds a new method to a type. Libraries may already have implemented a method of the same name on that type via a trait. In this case, folks upgrading would have to explicitly specify which method they're talking about.
It's these kinds of things that are not counted as breaking changes, because the alternative is freezing the stdlib forever. Other languages have similar problems.
New releases are tested on the entire ecosystem to ensure that this rarely happens, if ever.
> In terms of API stability, they seem willing to drop features within a major release cycle or two.
This is completely false. Do you have an example?
Besides, this is completely off topic in a Zig thread. From the FAQ: "Zig is not afraid to roll the major version number of the language if it improves simplicity, fixes poor design decisions, or adds a new feature which compromises backward compatibility."
> This is completely false. Do you have an example?
Review the "Breaking Changes" of release notes, and the number of discussions regarding removing API after a short (in the view of someone who works on said "systems code") period of deprecation.
> Besides, this is completely off topic in a Zig thread. From the FAQ: "Zig is not afraid to roll the major version number of the language if it improves simplicity, fixes poor design decisions, or adds a new feature which compromises backward compatibility."
This is another instance where our being up front about any theoretical breakage leads to misunderstandings. Every new release of GCC and Clang has breaking changes in the Rust sense too--they just don't call them "breaking changes" (and we stopped doing that too, because of comments like yours). For example, I had some code break when upgrading GCC the other day due to newer versions becoming stricter about copy constructor semantics in C++11. In Rust we (used to) call this type of thing a "breaking change" out of an abundance of caution--even though it was nothing more than the compiler getting stricter about code that never should have compiled in the first place. Unfortunately being up front about it led to confusions and comments like yours.
Are you referring to unstable APIs being deprecated and removed?
Technically we can remove those at any time (it happens in librustc all the time); the deprecation period is to help and motivate migration of nightly users. Nobody else can touch those APIs.
I've thought a lot about this before. Rust doesn't really have very great interoperability with C - most languages really don't, because the API for C programs revolves around functions, structs, macros, etc. Most languages can only really get part of that, and then the rest gets essentially a separate implementation in the language you're porting to - leading to a certain amount of manual duplication and intervention (we don't call it porting when you use a C library in C). Design goals Rust has made compound the issue a bit more: for example, Rust uses the C++ ABI, not C's. Rust can't generally use header files directly. The Rust<->C conversion loses a lot of information and can result in fairly messy code by Rust standards, etc.
This makes combining Rust into already existing C code a hassle, as it is generally worse than even trying to use C and C++ together, which is already fairly annoying and error-prone. This is compounded by Rust's safety goals, which generally require a different design for a problem than the "C-style" approach would - i.e., you're probably not going to get good Rust code by simply replacing a .c file with a Rust file, because the entire time you're going to have to make use of the unsafe C functions that your program includes.
Perhaps the bottom line is that (for good reason) Rust is much more than just an improved C. And because of that, combining C and Rust is always going to feel like combining two separate systems with a compatibility layer in between - because that's really what it is - and that's just not very attractive.
I know little about Zig (I'm reading about it for the first time now) and don't at all predict it to replace C. That said, I would love a language which focuses on maintaining very good compatibility with C while fixing the various pieces of its design that are fixable within the bounds of the language. For example, I noticed it mentioned in the README that in Zig pointers are non-null by default, and can be made nullable by adding the 'maybe' attribute. This is a feature I would love in C, but it really isn't there (gcc has a nonnull attribute, but it is essentially useless and nothing like you'd want).

Essentially, I would like a language where it feels like writing code for the same system, but in a different (better) syntax - where I can drop it into a project and it works together with the system like any other .c file (though with a different compiler, obviously), but the code inside is much better than what you can do in standard C. I think such a language could retain the reasons why people (like me) still like C, while fixing a lot of the uglier sides of the language that everybody is aware of but that aren't going to get fixed any time soon. And perhaps most importantly, I think such a language is definitely possible (though it may not be able to employ all the features you may want) - but I don't have the time nor probably the skills to really do it well, beyond listing off the things it should and shouldn't fix.
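For reference, here is roughly what GCC's attribute buys you today (a made-up function for illustration). It produces a warning only for a literal NULL argument, does nothing for a null pointer arriving through a variable, and can even be used by the optimiser to delete null checks inside the function:

    #include <stddef.h>

    __attribute__((nonnull))
    size_t string_length(const char *s)
    {
        size_t n = 0;
        while (*s++)
            n++;
        return n;
    }

    void demo(const char *maybe_null)
    {
        string_length(NULL);        /* -Wnonnull warns, at best */
        string_length(maybe_null);  /* compiles silently, even if NULL */
    }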
As for starting a project though, if you ignore difficulty to learn then I would agree with you: There's probably little reason not to just go for Rust if you're already willing to learn a new language. For most projects the issues I outlined above don't really matter assuming the entire project is written in Rust - besides the want for a language closer to C in design. That said, I readily concede that I have no idea if Rust's borrow-checking semantics could ever work without the extra features they added that take the design away from being 'C-like' - and considering that's one of the definitive features of Rust, is easily worth losing the 'C-like' detail if it is necessary.
But when considering a project like the Linux Kernel, the GNU coreutils, git, GTK, and other various large projects where conversion to Rust is probably impossible without a complete rewrite, being able to use a 'better' C while still retaining the aspects that people like would probably be a nice step in the right direction. The chances of it happening are nil, but it would still be a step in the right direction.
Rust uses its own ABI, not any of the various C++ ones.
Our strategy for interop with C is two-fold: first, a very thin, direct wrapper. These are the various *-sys packages. They know how to link in (and maybe even build!) the underlying C library, and provide functions you can call from Rust. Then, on top, people can write a more idiomatic Rust wrapper, working in Rust's safety guarantees.
That takes some work and time to get right, but hopefully, it means that in the end, Rust users of C libraries shouldn't have to deal with the stuff you're talking about.
That strategy is also used by Haskell; what typically happens is that the idiomatic wrapper ends up being out of date and poorly documented, and if the library is prominent enough there might even be several competing ones.
You're 100% right. I was actually thinking of name-mangling when typing that, but that's still not quite right. The point I was trying (and failing) to make is that Rust uses things like name-mangling and a different ABI that makes interop more of a challenge - similar in challenge to interop with C++ from C.
C technically has name-mangling too, but generally speaking either there is no mangling at all, or they just add an '_' to every symbol name.
Edit: To address your second point, I think that is a step in the right direction - and I think that Rust's compatibility with C libraries is probably fine, and comparable to most languages. I don't consider it so big of a turn-off that I wouldn't want to use Rust for a new project, though obviously writing wrappers isn't always fun or error-free.
But, assuming I'm understanding what you're getting at correctly, I'm not sure that will really solve the core problem I'm trying to get at: generally speaking, nobody wants to be maintaining compatibility wrappers for APIs that exist entirely within their program and are probably changing all the time, and that's really what you need if you want to replace part of a C program with Rust.
For example, to take it to the 'extreme' - the Linux Kernel module API changes virtually every version of the kernel (And the ABI is not guaranteed at all), and maintaining a complete Rust wrapper for it would not be a fun time even if a certain amount of it can be auto-generated. The API includes a very complicated mess of functions, inline functions, macros, structures (with varying different types of alignment and padding). None of it is guaranteed to stay the same across versions. And I think it is fair to say that most C programs have internal APIs like this (Though not as crazy) that are changing all the time and not intended to be seen by the 'outside world'. It's these types of things that I see being a problem for interfacing with Rust - APIs that are changing all the time in complicated ways which make writing and maintaining a wrapper very annoying and error-prone.
Oh, and to reply to your edit: yes, writing a wrapper for an unstable interface isn't exactly fun. But that also means that your C code is going to have to update with each release too, an unstable API is unstable for everyone.
A C API can stay relatively the same if you switch a function to a macro, or make it inline. In a lot of cases you don't actually care which of the options it might be to begin with - the syntax generally stays the same, and the situations where you care (Mostly just function pointers) are somewhat uncommon. Such changes would easily break a simple wrapper though. That said I do see your point - It's not like the C code is guaranteed to work either, so perhaps that is acceptable. The maintainer still has to weigh the disadvantages of supporting a Rust wrapper to the advantages of allowing Rust code.
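A toy example of the point (hypothetical names): the C call syntax survives the change, but any foreign wrapper bound to the exported symbol breaks the moment the function becomes a macro:

    struct queue { int head, tail; };

    #ifdef QUEUE_LEN_AS_MACRO
    /* Version 2: same call syntax, but no symbol left to link against. */
    #define queue_len(q) ((q)->tail - (q)->head)
    #else
    /* Version 1: a real function with an exported symbol. */
    int queue_len(const struct queue *q) { return q->tail - q->head; }
    #endif

    /* C callers are oblivious to the difference: */
    int waiting(const struct queue *q) { return queue_len(q) > 0; }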
My original point (which has gotten a bit muddled in the details) was just that there could be room for a language closer to C that offers to fix some of the more annoying issues, while still keeping very good compatibility with C overall and avoiding the need for 'wrappers' and such to interface it with C code.
> But when considering a project like the Linux Kernel, the GNU coreutils, git, GTK, and other various large projects where conversion to Rust is probably impossible without a complete rewrite
Firefox is shipping Rust code right now. There are various examples of Linux kernel modules written in Rust. There are rewrites of the coreutils in Rust. I don't understand why you claim this.
It is, and that is cool. That said, what it's shipping is essentially a separate library written in Rust that exposes a C API for parsing mp4s - it replaced one library's usage with another, with a presumably stable API consisting of nothing but C structs and C functions. Perhaps a key point is that it doesn't actually interface with Firefox very much at all - it isn't capable of accessing any Firefox state, for instance. I also didn't list Firefox, as you'll note - it's not a C project, it's a mishmash of languages already (mostly JS and C++, from my understanding). I'm really talking about projects that are currently C-only adding Rust code to the mix - they are structured much differently than a project like Firefox, and the interfaces are generally much more complex as far as C goes.
I will add though, servo is making pretty impressive efforts. It is again a complete rewrite though, not an integration of Rust into Gecko which is the type of thing I'm talking about. A rewrite of a library in Rust against a stable API is generally possible, depending on the API - though the amount of work may make it prohibitive to get something usable.
> There are various examples of Linux kernel modules written in Rust.
I would like to see an example of a legitimate Linux kernel module written in Rust that is in some form of use. I have never seen one besides a toy implementation, and I can all but guarantee you it doesn't exist, for the reason that it would be way too much of a hassle to attempt to get it working with the mixture of macros, inline assembly, inline functions, gcc attributes, etc.
> There are rewrites of the coreutils in Rust
Coreutils is admittedly not a very good example, if only because coreutils isn't actually that big/complex of a project (Though supporting all the GNU flags and arguments is a pretty big task). They're also mostly just separate exe's anyway, so you could replace one or two with Rust code without much of a difference - I'd gladly remove Coreutils from the list if you would like.
> That said, what it's shipping is essentially a separate library written in Rust that exposes a C API for parsing mp4s - it replaced one library's usage with another, with a presumably stable API consisting of nothing but C structs and C functions.
This is true.
However, if we put aside the Rust code that is actually shipping, there's still plenty of work going on in sharing Servo's code with Gecko's.
One example of this is the ongoing experimental work to move Servo's style system into Gecko. Servo's style system doesn't have much of an "API surface"; everything is a surface. There's plenty of reaching in and grabbing Firefox state and vice versa.
Some of this is just done directly by reading or writing to structs. Some of this is done by writing small wrapper C functions (https://dxr.mozilla.org/mozilla-central/source/layout/style/...) and using bindgen. Bindgen has C++ method/ctor/dtor generation abilities that would obviate almost all of these manual bindings, though we aren't using them yet[1].
Overall, mixing C++ and Rust at a rough API surface in a large codebase hasn't been that hard. It's not easy either, but it's doable and I don't think it's anywhere close to being "nearly impossible without a complete rewrite".
[1]: The reason behind this has to do with name mangling -- Rust doesn't understand C++ name mangling, so bindgen generates the mangled function names and wraps them in a nicer API. This is all great, except that it means that the generated bindings stop being cross-platform, and you need to twiddle with the build system to either dynamically generate them or have a way to statically generate them for all platforms. Right now the servo-gecko integration uses a temporary build system that will be replaced soon, so this hasn't been a priority.
Rust supports all of these features. So your complaint is that nobody has written a translator from these things to Rust yet. That's a matter of filing PRs against bindgen, not some basic problem with the language.
If you think it is so trivial, then why hasn't it been done yet?
Rust macros can't do all the same things that C #define macros can. You can't insert inline C code into Rust code without some serious work. And the fact is, as you pointed out, regardless of in the future, right now it doesn't work, and those features are absolutely necessary for writing something like a Linux Kernel module.
> I would like to see an example of a legitimate Linux Kernel module written in Rust that is in some form of use.
Seriously this.
I write an awful lot of kernel code. There's absolutely no way I'd try to convince folks to accept Rust code into the OS repository, where it will have to:
- Be maintained for decades
- Be ported to a litany of platforms, several of which may only have decent toolchains from the GCC department.
- Vend stable ABI
- Not require keeping a litany of compiler versions around just to keep older code building.
> - Not require keeping a litany of compiler versions around just to keep older code building.
Do you have a single example of this? We keep very close tabs on breakage in the wild even, and especially, for changes we were allowed to make (unlike say GCC, which happily breaks code if the language standard says the code never should have compiled in the first place). There are times when we've refused to make changes that we were allowed to make (because they were changes to unspecified behavior, which C/C++ has more of than Rust, and which GCC/Clang changes all the time) out of concern for breaking existing code.
As far as I'm concerned, honestly, this is just FUD.
Your own release notes contain descriptions of breaking language changes.
That may change as the language matures -- great. I keep an eye on Rust so that I can eventually try actually using it for kernel-level development work.
> Which means Swift, .NET Native, Java/C++, C++17, on the OSes from Apple, Google and Microsoft.
You've seen where their OS code comes from, and what it's written in, right?
Just to be very clear, I'm very well-versed in life outside C and imperative programming. This isn't "UNIX culture". This is "systems programming" culture, and it's simply pragmatic.
Mac OS X and its predecessor might have a UNIX heritage, but C was always left for the very lowest layer. Already on NeXT the device drivers were written in Objective-C, later replaced by C++ on Mac OS X.
Anyone paying attention to their Swift talks during the last two WWDCs knows where the boat is steering. Chris is quite clear in stating Swift should be usable in all scenarios where C is being used, and Sierra already got some adoption in userland components like the dock and launch daemons, now rewritten in Swift.
Microsoft has declared C89 as good enough with the future being C++ and .NET Native.
The C runtime library was rewritten in C++ with extern C for the public symbols.
The C99 compatibility and upcoming C11 are only done to the extent required by ANSI C++. For anything else there is clang.
As of Windows 8, the device driver framework has been changed to C++ and there was a talk from Herb Sutter where he mentioned the plan was to migrate the kernel to compile with a C++ compiler.
The idea for the C++ Core Guidelines actually originated at Microsoft, before Bjarne and the CERN guys got involved.
Google doesn't allow native code on ChromeOS and on Android they make pretty clear that the NDK is just to make game developers happy and nothing else.
They are all aware that C isn't going away tomorrow, but are driving efforts to make it as relevant in the future on their platforms as Assembly is today.
When I started working in IT, the only OS written in C was UNIX.
The language is not a sacred cow, and the only thing preventing its replacement is the ubiquity of UNIX-like OSes.
OSes not bound to UNIX culture and POSIX compatibility are free to choose another language as their systems language.
Not doing so is usually a decision to cater to the status quo and the ubiquity of existing developers and libraries (with their endless CVE entries).
C++ isn't substantially different from C when it comes to systems programming concerns regarding stability.
The biggest issue is that it's a bit harder to maintain ABI compatibility. You have to carve out reserved vtable space, avoid exposing STL in your interface, etc, but it's doable.
As for Swift, it has heavy userspace dependencies that make it non-viable for kernel work. Rust does much better there.
This is interesting from a technical point of view, but looking at the Hello World example [0] really makes me wonder... who wants to write code like this? I have been a touch typer for half my life now, but things like
I don't know. C is very close to the hardware, and it's pretty portable, too.
I'd suggest doing what some other languages do, and get yourself to the point where almost all of the code for Zig is written in Zig.
In other words, you'd first write a `microzig` that's written in C++, and microzig knows enough to make `minizig`, which knows enough to make the rest of zig. This is what Perl does, and I expect other languages do the same thing.
On the other hand, zig is supposed to be able to cross-compile really well, so maybe you can skip that: Have version 0.9.9 be the last version written in C++. Then, for version 1.0, re-write the entire toolchain in Zig, and use the 0.9.9 compiler to compile Zig 1.0. At that point you are in full dogfood mode.
Finally, since it's called Zig, I get to close with this sentence: "You have no chance to survive make your time."
It's an interesting phenomenon, specifically how processor companies (erm, Intel), who make "general purpose" CPUs, have clearly made trade-offs to optimize for OSes that behave like UNIX or Windows variants!
Well, in fact, Intel and all the other major makers spend a huge amount of effort benchmarking important workloads. It isn't a decision to optimize for Unix or Windows, it is a decision to chase dollars. They will optimize processors for your favorite workload, too. Just demonstrate how many millions of dollars of business your workload represents, and create some sound benchmarks for it. Easy.
This seems to be changing. Hardware is trying to move closer to Java - adding support for GC, bounds checks, etc. - something that won't help C programs as much.
Unfortunately, the C compiler devs didn't implement the bounds checking extensions of C11, etc. It would have been really nice to have a good checked C that was the standard.
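For context, the extensions in question are the C11 Annex K bounds-checking interfaces. Usage would look something like this, though in practice glibc and most other C libraries never shipped them, so code like this typically only compiles with MSVC:

    #define __STDC_WANT_LIB_EXT1__ 1
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        char dst[8];
        /* strcpy_s knows the destination size and reports the
           overflow instead of silently smashing adjacent memory. */
        errno_t err = strcpy_s(dst, sizeof dst, "far too long for dst");
        if (err != 0)
            puts("copy rejected: destination too small");
        return 0;
    }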
> C is very close to the hardware, and it's pretty portable, too.
Here you seem to be suggesting that portability is good, but then...
> zig is supposed to be able to cross-compile really well, so maybe you can skip that
... you seem to be suggesting that this project should give up its (already existing!) portability. Without saying why.
In 2016, it's not clear at all why having a compiled language be self-hosting should be considered a worthy goal. It's just a lot of busywork instead of simply using an existing toolchain. Certainly not the right thing to do from a software engineering point of view. Certainly not the right thing to do if you want portability. Certainly not the right thing to do if you want interoperability with existing sanitizers. Certainly not the right thing to do if you want powerful optimizations.
I'd agree that the frontend should probably be bootstrapped, at some point. But the whole toolchain? Just to avoid random HN commenters throwing "every self-respecting programming language is bootstrapped" at you? Nah.
Well I am impressed. And I am very glad to see more and more languages in this space. I particularly like that Zig seems to be minimalistic -- a lot more fun to look at (for me) than C++ or Rust.
But I didn't see anything in Zig similar to Rust's lifetimes. Well it's nice to be rid of that complexity, but I don't see how you are going to do C-like pointer stuff safely without them.
Can anyone explain what these languages (e.g. Zig, Myrddin, Nim) do instead?
It's often a bad sign when a language advertises this high up on its features list. It means that they didn't really get the true takeaway of the Maybe type (which is that you should support algebraic data types to properly constrain data to valid states), but instead saw a single specific use case of ADTs (Maybe) and thought that's all there was to it.
I've run into this with Java 8. Optional is pretty common now and has eliminated the need for the null pointer in much of everyday code, but they still don't have something like Either to eliminate the need for exceptions. Maybe is extremely useful, but it's a small fraction of the usefulness you get with true ADT support.
Syntax looks very Rust-inspired, but it lacks Rust's OCD. I also catch hints of other syntaxes, like Ruby/Smalltalk-style block parameters, and a defer very much like go's.
I find it interesting that it implements generics by passing types as normal arguments. Say, `list(i32, 1, 2, 3)` rather than `list<i32>(1, 2, 3)`.
I can't seem to find any details on safety other than
> Safe: Optimality may be sitting in the driver's seat, but safety is sitting in the passenger's seat, wearing its seatbelt, and asking nicely for the other passengers to do the same.
Is Zig memory-safe? How? (Specifically, is there some useful safe subset ignoring the obvious FFI and "explicitly unsafe operations" exceptions every language gets?)
There is some compile time safety and some runtime safety, but it's not comprehensive.
Nullable pointers are handled by the type system at compile time.
Integer wrapping (signed or unsigned) will crash at runtime, unless you use explicit wrapping operators.
Array out of bounds will crash at runtime for slices.
There is no direct pointer arithmetic, but you can convert a pointer to a slice, and then index into the slice (which has array bounds checking). This is an example where the language is unsafe but it sort of guides the programmer into writing safe code.
Not being memory-safe is a bit of a bummer. Do you have a story for taking advantage of all the tooling that's been built up to defend against C's pervasive unsafety? Sanitizers being the most notable.
On balance even without them this'll probably be safer than C (as long as you're avoiding like, 90% of its pitfalls).
How is the goal of readability being addressed? I didn't find it particularly readable. Basically it's C with better types.
A few suggestions about the syntax, based on what many languages did in the last 25 years.
1) No mandatory () around conditionals. Make {} mandatory even for one liners instead. They remove bugs and are a common suggestion in the style guides of several languages.
2) The multiline string syntax is verbose and uninviting. Use heredocs or a string delimiter reserved for that purpose.
3) Try to do without the ; line terminator
As a huge bonus, to reduce clutter, try to implement automatic import. I don't know of any language doing it, it's IDE territory right now (or editor's [1]). Still it's very useful because there is little as boring as going through a long list of imports at the beginning of each file. They should be there only when there is an ambiguity to resolve.
About readability, those {var v; while (cond) {}} blocks in the examples are puzzling. Finding a }} alone on a line was an "is this a syntax error?" moment, or just bad style.
Anyway, it seems to have generics, so it's already ahead of Go, which seems to be stuck in the 60s regarding this feature (generics were pioneered by ML in 1973, according to [2]). For the rest, it's in the average of what we've seen in the past decades, so not shiny but not bad. Plenty of successful languages are like that, and maybe it's a reason for their success: being average, they don't scare people away. I'm thinking especially of Python, which smells oddly of C with its __underscores__ and the explicit self argument, reminding me of how we used to do OO in C, passing around struct pointers (those structs were the objects, holding data and function pointers to methods).
The README introduces the

    const number = parse_u64(str, 10) %% |err| return err;

pattern and then goes on to say that you can avoid repeating "%% |err| return err" all the time by writing
    const number = %return parse_u64(str, 10);
instead. I don't think that's particularly nice; it looks too much like "return the result of the call" when that's only what happens in the error case. I would have expected something more like this:
    const number = parse_u64(str, 10) %% return error;
where the compiler would turn "return error" into "|_some_var| return _some_var".
> 1) No mandatory () around conditionals. Make {} mandatory even for one liners instead. They remove bugs and are a common suggestion in the style guides of several languages.
No, no no! I really hate that trend in newer languages (rust and go, I'm looking at you). C got this right.
Error handling mostly with error codes seems like an OK compromise to me, since it doesn't require any dynamic memory allocation (unlike Go's error interfaces). I guess one would use some custom return type or a GetLastError equivalent if more details are required.
However, I'm not sure if purely compiler-assigned error numbers will work out for a systems language. Of course it works when you build the whole application, but if you build only a part of it (a library, an application that (dynamically) links against a library, ...), where do you get the information about which error codes are already occupied by them?
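A sketch of the workaround C libraries use manually today, which a compiler assigning error numbers automatically would somehow have to replicate across separately compiled components (names invented for illustration):

    /* Each library reserves a disjoint range by convention, so error
       codes from independently built components can coexist. */
    enum {
        LIBA_ERR_BASE = 1000,
        LIBA_ERR_IO,          /* 1001 */
        LIBA_ERR_PARSE,       /* 1002 */

        LIBB_ERR_BASE = 2000,
        LIBB_ERR_TIMEOUT      /* 2001 */
    };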
You lost me at hello world. Or maybe I should say you lost me @ %hello %%world. If you want to prioritize for readability the first thing you have to do is get rid of all that weird non-standard punctuation.
The thing is, you will never replace C. Supplement it, extend it, yes, but not replace it.
The fact is, C is probably the lowest-level HLL still in use. It's as low-level as you can get while still providing the high-level constructs we're all used to. There is always going to be code that wants or needs to be there: code where anything further up either can't give the performance, or adds impedance to the design or the actual goal.
You still want to replace C? Fine. What will you write your bootstrapping compiler in? :-D.
> It's the lowest level you can be, while still providing the HL constructs we're all used to.
I disagree completely. A language much better than C will come along in its niche, and I'll be switching.
C isn't low-level enough. It has too much undefined behaviour (and tries to be too portable), each instance of which denies you access to real machine behaviour. Signed ints don't even wrap around! Effectively, C does not even support two's-complement signed integers, as have existed on all modern computers for decades, without a special compiler flag! You have to resort to tonnes of compiler-specific extensions to get things like leading-zero count, SIMD, and computed goto. Even with GCC's zillion extensions that were added for people who have to do really low-level work, C is still hopeless for writing maximum-performance VMs; that's why Mike Pall wrote the LuaJIT2 interpreters (one for each architecture) in assembly. Here's one of his explanations of why C shouldn't be used for such a thing [1]. I've also tried writing a high-performance interpreter in C, and I found that I wasted 80% of my time trying to get the C compiler to output the sequence of instructions I wanted. Next time, maybe I'll just use assembly and save myself a lot of trouble.
Secondly, C is not high-level enough. A low-level language like C needs a powerful macro system to let you abstract away the details of your object system or whatever. C's preprocessor has many flaws [2] and is one of the worst in existence. A good macro facility allows compile-time generation of code in a readable (or at least type-safe) way, like D and Lisp do. C and C++ users don't realise they have the preprocessor equivalent of a Blub language [3]. Sure, C++ is an extension of C with high-level features, and was a good attempt, but it's too complex, and C++'s template metaprogramming is just about the only compile-time codegen solution worse than C's preprocessor.
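To make the first point concrete, here's what two of those examples look like today; the interesting parts are GCC/Clang extensions rather than standard C (toy interpreter for illustration):

    #include <stdio.h>

    /* Computed goto: the dispatch technique interpreter writers want,
       available only as a GNU extension. */
    static int interp(const unsigned char *code)
    {
        static void *ops[] = { &&op_halt, &&op_inc };
        int acc = 0;

        goto *ops[*code];
    op_inc:
        acc++;
        code++;
        goto *ops[*code];
    op_halt:
        return acc;
    }

    int main(void)
    {
        /* Leading-zero count: one instruction on most CPUs, but
           unreachable from portable C without a builtin. */
        printf("clz(1024) = %d\n", __builtin_clz(1024u)); /* 21 */

        const unsigned char prog[] = { 1, 1, 1, 0 };  /* inc x3, halt */
        printf("acc = %d\n", interp(prog));           /* 3 */
        return 0;
    }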
> And as for C not being low-level enough, that was my point - C is as low level as you can be, before you become unportable.
I don't think that's true. You can add first-class support for bitdata [1], much better than C's bit flags and portable, better support for alignment declarations, better/safer support for absolute addresses, native SIMD-like vectors [2], exposing arithmetic carry flags, and more without losing portability.
> You still want to replace C? Fine. What will you write your bootstrapping compiler in? :-D.
There are several examples of language implementations that are built directly from machine code incrementally. (In that case, the "bootstrap" happens via an already-built compiler of the previous version, or of the same version on a different architecture.) Also, you don't strictly need C for the bootstrapping compiler... Rust's was written in OCaml, for example.
And what's OCaml written in? How about the assembler?
If you really want to replace C, You'll need to be able to compile the language without ever having C on your system (at least, in theory). This is almost impossible, although you can do it, if you slowly work towards it.
That's not really fair. If you go far enough back in the ancestral compiler list of any language, you'll probably hit C. You'll definitely hit assembly and bare-metal machine code. Heck, at some point you might run into hand wire-wrapped microcode - but I don't think that's an argument that we should be using it. That's just an artifact of the history of operating system development, and the fact that C spent (is spending) a considerable period as the de facto standard for OS development. That it was first doesn't necessarily make it better. A few events turning out differently in history, and you'd be making the exact same arguments about Lisp, BCPL, or Fortran.
It is a weak claim. You'd hit Pascal for a whole family of them, with FreePascal well-maintained, fast, and portable. The Wirth ones vary language-by-language. There are BASICs written in BASIC, LISPs in LISP (or 1940s machine code or microcode), silicon implementations of Java (e.g. JOP), Hyde's HLA assembly, IBM's PL/S that's still around, Amiga-E in M68K assembly, and Hansen's simpler Edison language in machine code or his Pascal one.
So, yeah, C wasn't at the bottom of all kinds of past and present developments, many more complex than C. Its being at the bottom of many current ones is social and economic, not a technical necessity. One could just as easily target FreePascal, a low-level VM without C's built-in preferences, or an HLA on a specific ISA.
I know full well: my point was that you cannot depend on that which you wish to replace. Targeting something that doesn't depend on it is a valid approach for doing so.
Replace your example with assembly and C. According to this, C can't replace assembly and assembly can't replace machine code, which is an absurd statement.
No modern development of OS is done in assembly only.
As a poet once said - "Perfect is the enemy of good".
Rust never aimed to replace all of C/C++ forever, everywhere. That's nigh impossible. Much in the same way, it's impossible for ASM to replace machine code, or 386 processors to replace the Commodore 64 or the Analytical Engine. There will always be some shop that programs in ASM/C/C++, like there are shops that program in COBOL.
Rust's point was to replace C/C++ in domains that require safety and speed.
Care to elaborate? You can depend on what you want to replace for bootstrapping. You can use tools written in it, compiled with it, or even hand-converted to assembly for first run. That it was a stepping stone doesn't undermine an argument to replace it in general or even say much about it. The common flaw in reasoning is that "If C was used at any point, then C is technically good, essential, or something similar." It can come down to something as simple as author's convenience, personal preference, or even timetable. Using vs replacing a C compiler fits into all three I imagine. :)
Okay. Let's imagine Zig does replace C: 20 years from now, nobody uses C, except in a few specialized fields, same as FORTRAN. However, every time somebody has to recompile Zig, they have to go get a C compiler. Furthermore, most people don't know C any more than they know FORTRAN, making the compiler hard to contribute to.
I'm not saying C is technically good, or essential. I'm saying that you cannot be dependent on something you want to get rid of, because then you can't get rid of it.
I think you're missing the whole concept of bootstrapping. C, in this scheme, would be used only to compile the first Zig compiler. The next compiler, written in Zig, would go through the first one. You then have a pile of machine code that turns Zig into machine code. No C left after the first pass.
So, there's no dependency if you're switching languages or getting rid of it. There's just a one-time use. For legacy systems, there's also potential for a source-to-source compiler if semantics are close. There's also binary recompilation and integration into new language a la AS/400 or OpenVMS migrations.
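To make the chain concrete, here's a rough sketch of the conventional three-stage bootstrap (the stage names are the usual convention, not anything Zig-specific):

    stage0: Zig compiler written in C,   built once with any C compiler
    stage1: Zig compiler written in Zig, built with stage0
    stage2: Zig compiler written in Zig, built with stage1

Once stage1 exists, stage0 (and the C compiler behind it) can be discarded; stage2 mostly serves to check that the compiler can rebuild itself.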
True, but what about the bootstrapping compiler for new architectures? That can't be C. Neither can the libraries the compiler depends on. And you'll eventually have to rewrite the assembler, because that can't be C either...
You have to know that you're making a huge stretch of an argument. None of the "C replacement" languages really expects to replace C so completely that no more C code will be written, or that no C compiler will run on modern hardware in 20 years. You have to realise that's not what they mean by "replace".
It's meant as "replace for the majority of modern use cases where the power of C is needed but a better language is desired" (or some variation thereof). And if it does come to that, meaning a language replaces C so completely that there's no C in sight in 20 years' time (hard to believe), then it will probably be because we figured out how not to depend on C and can go on without it.
I don't expect it, but if that's what you're shooting for, that's the sort of thing you've got to contend with.
And no, I don't know exactly what they mean by replace, but if they make such a big point of the stdlib not depending on the C stdlib, what other conclusions am I supposed to draw?
True, it's not fair. I didn't say that your first implementation had to be beneath C, but if you really want to replace the language at the bottom of the stack, you have to rewrite the stack. Otherwise, you haven't really replaced it, now have you?
But that's not what we're talking about: we're talking about replacing C entirely. You can't outlaw dynamite while it's still being used to make the dynamite replacement.
Another option: gradually build up an ML, which incidentally gives you one of the best languages for writing compilers, Ocaml included. You never need C so long as there's an assembler on the source or target machine.
I've long proven you can, on a per-project basis, both for system apps in non-C languages and for C apps in better C variants.
"What will you write your bootstrapping compiler in? :-D."
Any language better than C for writing a compiler, esp. Ocaml w/ design-by-contract, compiling to the target assembler. Worked wonders for Esterel's SCADE. Optionally, any such language transpiling to any other HLL on the target system, including C w/ safety checks & no undefined behavior. Working wonders for the COGENT language team. Or go Wirth-style with a better-than-C language (esp. Component Pascal) compiled to P-code, plus a P-code interpreter for the target hand-written in assembly. A simple backend plus interpreter got Pascal/P onto 70 architectures. Any of these is easier than writing a whole compiler for a complex language (or C) in C using a C compiler.
You can bootstrap in any language that is widely available, so today that could even be Javascript. The biggest advantage of C as a language is that a simple compiler is very easy to write. But I would bootstrap from Lisp (Scheme): a naive interpreter can easily be implemented in assembly, and then you have a nice high-level language for implementing the target system.
At hundreds of pages of standard, it's not low-level at all, sadly. The apparent simplicity is deceiving at first, IMHO.
Low-level doesn't mean simple, and yet, because of the desire for portability, it's rather abstract. It can't even check for a carry flag. It's not hard to imagine a language that could. Coming from microcontrollers, that is a huge disappointment. And that is just one example. Are there more?
It's low-level; that's why there's so much spec. And much of it is just declaring that behaviour is implementation-defined.
And as I said, it's as low-level as an HLL can be.
I wish we could check the carry flag. You could totally write a C library for that, I bet, by mixing in some asm and using conditional compilation. Language extension!
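For what it's worth, a minimal sketch of that idea, assuming a reasonably recent GCC or Clang for the builtin branch (the function name is made up for illustration):

    #include <stdint.h>

    /* Adds a and b, stores the wrapped 32-bit sum in *sum, and
       returns 1 if the addition carried out of bit 31, else 0. */
    static int add_with_carry(uint32_t a, uint32_t b, uint32_t *sum)
    {
    #if defined(__GNUC__) || defined(__clang__)
        /* GCC and Clang expose the CPU's carry/overflow result
           directly through a builtin. */
        return __builtin_add_overflow(a, b, sum);
    #else
        /* Portable fallback: unsigned arithmetic wraps, so a carry
           occurred exactly when the result is smaller than an operand. */
        *sum = a + b;
        return *sum < a;
    #endif
    }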
I'll probably write it in Haskell or possibly ML. The Haskell compiler is written in Haskell, so there's nothing to C here.
OTOH, I'm not sure if C needs replacing. It models state machines fairly well. Sure, we could use more systems languages like rust, but at best they replace C++ in some circumstances.
Hmm, you may be right. I'd say Rust, but LLVM is written in C++ (and writing a compiler in Ocaml or Haskell is so much nicer). That leaves assembly and god knows what else in my repertoire.
Not that it matters once it's bootstrapped though.
C is beautiful in its way and it is rather minimalist from the syntax point of view (especially the pre-C99 variants). Judging by the code examples shown on the page, this new language has a pretty noisy (compared to C, downright ugly) syntax. Granted, it may be dictated by some logic and possibly even its inner beauty, but on the surface it does not seem to be much of an improvement compared to the languages we already know and love (or love to hate), including Perl and JavaScript.
I would love to see a comparison to C, C++, Ada, Rust, maybe Cyclone. After reading the page I can't find an answer to why Zig is better than any of the established languages. (Haven't read the comments yet.)
Speaking of readability... well, still smells like C. And multiline string syntax is weird. :)
While I find the idea of a new C interesting (instead of a new C++ like D, Nim, etc.), I'm a bit sad that there are both the Zig AND the Jay projects...
It reminds me why Pascal lost (IMHO): too many incompatible Pascal 'clones'.
True, which only reinforces my poorly-targeted point about bad parsability.
Reading the code requires backtracking around a conditional control statement, and in a format more verbose than a ternary. Most places I've worked ban ternaries or limit them to single boolean cases for the same reason: it's hard for people to understand the control flow of operations around conditional assignment. A new language that lets you perpetuate painful code isn't helping as much as it could.
It's much simpler for people to parse the more verbose case with an IF and two return blocks, and the compiler shouldn't care since it can generate the same code either way.
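In C terms, the two shapes being compared look roughly like this (a hypothetical sign function, purely for illustration):

    /* Conditional-expression form: terse, but the reader has to
       backtrack to see which branch produced the value. */
    int sign_ternary(int x)
    {
        return (x < 0) ? -1 : 1;
    }

    /* Verbose form: an if and two return blocks. A compiler will
       typically generate identical code for both. */
    int sign_if(int x)
    {
        if (x < 0) {
            return -1;
        }
        return 1;
    }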
I think that is quite subjective. I prefer this version with one return. I don't want more returns than necessary, and I want them as far up in the nesting as possible so they are easy to see. Returns are, after all, control-flow altering.
You are right that C++ methods need to be invoked from C++ code. However, C container types are too weak/inconvenient to wrap C++ code. I am wondering if something like Zig could be used to create STL-like containers that can be used at the API level?
Modern amateur language designers usually are not aware of decades of research and breakthroughs in old-timer's languages, such as Algol, Scheme, Common Lisp, Standard ML and recently Haskell. They miss the beautiful specialized syntactic sugars in Standard ML (for defining and calling curried functions), Octave/Matlab matrix syntax, and the beautiful Common Lisp's looping and SETF macros, which inspired Python's for loop syntax for iterable collections and assignment operator overloading. Instead of spending time to learn and understand pros and cons and nuances they rush headlong to implement everything from scratch.
Really smart guys, however, are trying to leverage old-school languages, to climb on the shoulders of Titans.
David Moon created the PLOT language that leverages Lisps http://users.rcn.com/david-moon/PLOT3/ , and Rich Hickey did almost the same with Clojure. Scala leverages Standard ML, similarly to Ocaml. Now Swift is trying to distill Scala, at least in terms of standard idioms and syntax choices. Julia has been built upon the big ideas from CLOS.
Python incorporated lots of good ideas and bits of syntax, managed to balance the syntax and unify the semantics, and since 3.5+ has earned a reputation as a well-designed language (clear, balanced semantics from the Lisp world that complement each other, and carefully chosen syntax). It is also famous for its culture.
So, for those who wish to change the world by developing a new language, it might be a good idea to look at Standard ML and its derivatives and try to relax some constraints and balance the syntax according to modern conventions.
I personally would prefer to start from David Moon's PLOT3 and, perhaps, try to marry it with some lightweight type system with optional annotations.
Btw, Haskell's choice not to mix type information into function definitions, and instead to use annotations with their own DSL for types, is a really big idea. It reduces cognitive load, makes code extremely readable, and in general, it seems, is the best of both worlds. Python's optional type annotations are clever, but less elegant.
I'd rather use Zig than Rust because I wouldn't have to learn about how to work with Rust's memory management model. Even though I understand Rust at a technical level, actually getting my brain to work that way has proven difficult.
Being able to directly import existing C header files is also attractive.
Having said that, I'd rather Zig had been Go-inspired than Rust-inspired. Go plus Zig's approach to generics and error handling would be nearly perfect! (Notwithstanding that Zig's error handling is basically Rust's with added syntax sugar!)
But I will still always argue that it's a bad trade-off to make compiler construction slightly easier at the expense of insecure code all over the place.
Not memory safe, bad type system, terrible 'macro' system, no/minimal (C11) support for generics, minimal standard library, pretty verbose (it's hard to build any good abstractions), lack of functional programming support, undefined behavior. Dealing with dependencies and headers in any but the simplest of projects gets old really quickly. Just about every bigger project I've seen abuses the language to some extent. The list goes on and on.
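For context on the "minimal (C11) generics" point: the closest standard C gets is the _Generic selection expression, which only dispatches on a type known at compile time. A small sketch (the my_abs macro is made up for illustration):

    /* C11 _Generic: compile-time dispatch on the type of the
       controlling expression. This is as close to generics as
       standard C gets; there is no parametric polymorphism. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define my_abs(x) _Generic((x), \
        int:    abs,                \
        double: fabs                \
    )(x)

    int main(void)
    {
        printf("%d %f\n", my_abs(-3), my_abs(-3.5));
        return 0;
    }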
Edit: I know that you can include a C header in Zig, and that cross-compilation is possible and made easy. But you can't continue developing current C projects as-is if you switch to Zig. I guess you have to change the Makefiles too.