C-for-all: Extending C with modern safety and productivity features (uwaterloo.ca)
155 points by albertoCaroM 5 months ago | hide | past | web | favorite | 139 comments



I like the general idea, but the result seems just as full of garbage as C++. Remember the first time you saw a lambda declaration in C++? That sequence of familiar-seeming characters to mean something totally different probably threw you a bit. I know it did me. C-for-all's constructor/destructor syntax reminds me of that.

  // bonus doc bug: "with" hasn't been introduced yet
  void ^?{}( VLA & vla ) with ( vla ) { // destructor
The syntax for redefining ++ or -- (which already seems spurious) is even more opaque.

  S & ?+=?( S & op, one_t )
Other features, like left-to-right declaration syntax and postfix function calls, seem to have no use except to make one programmer's code harder for another programmer to understand. Then they throw in the rest of the kitchen sink, with all of C++'s overloading and polymorphism and inheritance plus both kinds and traits. The result is even more complex and even more surprising (in a bad way) to actual C programmers than C++ or for that matter APL. If not for the absence of templates, I'd say Cforall's complexity is a strict superset of C++'s.

If I wanted a direct C replacement with minimal improvements, I'd try D or Zig. If I wanted a systems programming language that had more differences but fewer surprises, Rust or Nim. The last thing I'd want is something that's both more baroque (rococo?) and less known than any of those.


It’s interesting how “evolutionary approach” is used. Compare with C2: http://www.c2lang.org/

I think that in the case of C2 it’s still a language that will rely on the same paradigms as C, whereas C-for-all is more of a “kitchen sink” style of language that moves very far from C in how the natural code solutions are. Just the small change adding destructors and constructors is a feature with huge impact on how data will be managed (for better and for worse)

(Full disclosure: I have contributed to C2 in the past and I’m working on the C3 lang which is feature and syntax-wise based on C2 but goes further: https://www.c3-lang.org )


Also think that C2 makes more sense.


This has been posted here a couple of times in the distant past but for whatever reason I can only find one old post:

https://news.ycombinator.com/item?id=9829133

Anyway, it seemed like an interesting project the last time I looked at it but Rust wound up getting all my attention.


Here's the discussion from 2018 [0], which has some responses from people involved with the C-for-all project.

[0] https://news.ycombinator.com/item?id=16657385


Yes. That's actually the discussion I was looking for. Good find.


Could someone expand on why Rust "isn't a systems language"?


It's simply incorrect, and betrays a carelessness with facts. Now, if you wanted to argue that the fragment of Rust excluding "unsafe" is not a systems language, fine. But by that reasoning, I think you can make an even stronger argument that (standard) C is not a systems programming language, because in practice no kernel is written in a way that avoids all undefined behavior.


I think that assertion muddles their argument.

Their key points seem to be that Rust (and other existing languages) lack backwards compatibility and have different memory management strategies. Lumping Rust in with GC languages is confusing.

Whether either of those things matter is another story. From using TypeScript, I'm personally inclined to believe that a superset of a difficult language can't solve all of the underlying problems, while a totally new language can.


All memory allocations in Rust follow a `malloc`-like API that ends up calling `malloc` and similar at some point.


With memory management, the question is often when and how to release resources, not how to get untyped memory from the operating system.


Sure, but in Rust, the programmer writes code that calls malloc manually, which is completely different from Java.


I think it's because of, in their words, its "restrictive memory management".


Interesting choice of words given you can do practically everything you can do in C also in (possibly unsafe) Rust.


The language itself is pretty good for systems programming. It's the lack of tooling (as compared to C) that renders it unsuitable for any real world systems programming projects.


I really don’t need an extended C with productivity features... as smallness is one of the defining points of C. Fixing the warts (like the different overflow behavior of unsigned and signed ints) would have been fine.

I personally want a better stdlib for C: fix the deficiencies of <string.h>, remove the global locales (changing function behavior according to LC_* was a really, really bad idea[0]), etc...

Does anyone know of an alternative stdlib for C?

[0] https://github.com/mpv-player/mpv/commit/1e70e82baa9193f6f02...


If you are targeting *nix/BSD/macOS/Windows then https://developer.gnome.org/glib/ is a pretty good standard library for C. Not supported on bare metal, though.


Right? I keep thinking that the first place to start on a better C is to ask what needs to be removed, not added. Once the “bad parts” are removed, what ergonomic features need to be added back in?

For starters: implicit coercion is out; weak type aliasing is out; automatic binary serialization of aggregate types is out; etc.



That doesn't fix the issues mentioned. (Whether it's an "alternative" stdlib for C is debatable; nobody ever said that glibc is "the" stdlib for C…)


It is the de facto standard on Linux.


That doesn't make it the official implementation of ISO C, just one among many.


Which Linux? There are plenty that use MUSL instead.


musl is just another implementation of the true stdlib. I think what they want is an alternative to standard-conforming stdlibs that fills a similar niche - in other words, a "non-standard" standard library.


I'm sure a lot of very smart people are working on this, and I don't want to detract from their dedication, but man this language looks like a mess. And in no small part because it wants to be backwards compatible with C (the reasoning is unconvincing -- why not just use C++?).

In the age of Go and Rust, which already have a hard time finding niches, I don't really think there's any room for a language like this. And on the other hand, from a research standpoint, it's probably a lot more interesting to build a new language from the ground up.


I don't know, I work in the games industry and we're currently building a small project that's probably 95% C (the only C++ we have is for one library that has to have a C++ wrapper). It's just... we don't need the C++ features, the compilation times are instant (the last project I worked on was a large AAA game in C++, and compiling it from scratch took about 40-50 minutes on a single workstation), and pretty much all libs are C anyway. I'm not going to make the performance argument since it's mostly irrelevant nowadays, but I just don't see our use of C as anything out of the ordinary.


Y'all should try D! D compiles really fast: the D reference compiler can compile itself, the standard library, and the codegen backend in less than 5 seconds, on a single laptop core. It uses a streaming architecture to achieve this. It runs fast because it is supported by both GCC and LLVM. One of its guiding principles is that what looks like C works like C: all members of a file are public by default, for example.

Some experimental compiler builds (Calypso) can compile D for use with C++ libraries as complex as Qt.

You can start porting your C project to D by using DPP: it allows you to #include C headers within D, and to use macros. Putting "extern(C):" at the top of your file lets you interoperate between C and D.

Andrei Alexandrescu wrote Modern C++ Design, and is the co-captain of D. The captain of D created the first end-to-end C++ compiler.


Long compile times are almost always caused by bad use of header files which include basically everything, instead of only the components used. And this is sometimes due to bad separation of concerns / modularization.


I used to agree with this, but having spent a while in a very good C++ project (by C++ standards), even if we try to always forward declare and not include any unnecessary headers in headers, we still have a clean debug build time of 30-40 minutes. Even changing a single line in a cpp file will still take around 30-40 seconds at best with incremental builds. Long compile times are in the eye of the beholder, but often it feels like people have just gotten used to unnecessarily long compile times because they've forgotten how fast computers are and how fast a compiler should be for most of the work, except maybe some of the more complex optimizations in -O3. It kills productivity, and should really be a much larger focus than it sometimes seems to be.


Well said, I could not have put it better. Just to add: in the long run (think of a project you work on for years), the productivity boost you get from short build times will outweigh ANY gain from a library, templates, or the like that kills your build time.


We use Delphi at work. We just did a full rebuild of our core product: 978,015 LOC compiled and linked in 22 seconds on my aging i7 4790K.


If it’s just a single line at a leaf in the deptree, it should only be taking time to link everything together.

Still, long compile cycles are my kryptonite... 0 productivity. Horrible. Back when I was doing Java, I patched Hibernate to serialize and deserialize the generated code, so startup times dropped by about 80%.

My setup back then used hot code replacement, but so many people would just wait 1-3 minutes for every single change...


As soon as you include C++ stdlib headers (even only for the stdlib features you're actually using), you have tens- to hundreds-of-thousands of lines of complex template code in front of your own code, and this in each compilation unit. In the end your own code is maybe one percent of all the code the C++ compiler needs to work through each time something is compiled (depending on how many source files you have, this is also why unity builds - merging all source files into one - are so much faster in C++).

In comparison, CRT headers (when compiled as C, not as C++, as in that case additional C++ bloat is added) are usually a few dozen to a few hundred lines of simple declaration code. Compared to C++, it's very hard to make a C project compile slowly.


"Bad use of header files" is the fault of C++, not the developers. Until Stroustrup fixes the header processing behavior (it's 2019 already...) C++ will not become a sane language to use for new development.


> Until Stroustrup fixes the header processing behavior (it's 2019 already...) C++ will not become a sane language to use for new development.

C++ does not have a BDFL like other languages, C++ evolution is governed by a committee. In other words, Stroustrup himself needs to submit any changes he wants to the committee and there is a vote ...

C++20 modules should partially solve the header processing problem, but it will take years until it will become widespread in big companies.


I've yet to see a C++ project that compiles as fast as a C one.


About a decade ago I used a Zipit Z2 as my primary machine, as a fun experiment. It had 32 megabytes of RAM. Back then, that meant you could just about run Debian (who needs X11 anyway). Believe it or not, I often compiled software on this thing, and I quickly noticed that projects written in C would generally compile, while C++ builds would invariably choke from memory exhaustion. This was actually the first time I became aware of any difference between the two.

A very educational experience!


groff does, but it restricts itself to C headers only.


It's very confusing: they start off saying C++ is too complicated, and then go on to basically reinvent C++ with (IMO) absolutely awful syntax.

I think they're missing the point of why people use C - there's no magic. It's pretty much the wysiwyg of high level -> asm. If you want a systems language with magic there's C++. If you want a safe one there's Rust.

Personally, if I were going to make C better, I'd add a better macro/template feature so you don't have to write generic code as a define block with backslashes all over, something like Go's defer, and some low-level stack unwinding support. Maybe tuples as structs with numbered fields, but no magic syntax sugar. And that's it. No crazy operators, type theory things, or anything that doesn't do exactly what the code says. Because then it's not C.

But what do I know?


> C - there's no magic

Strict aliasing and weak typing say hello.

> wysiwyg of high level -> asm

Except neither GCC nor Clang compiles to even remotely predictable ASM. It's easier to predict OCaml's assembly output than GCC's.


How is strict aliasing magic? How could you possibly define the meaning of accessing an object through an incompatible type since the type of an object determines how the compiler interprets its value? The bits stored in the object do not even necessarily correspond to a value for every type, after all. And that's not even considering the fact that there is a great deal of freedom wrt how implementations can represent floating point numbers, etc. so how can the standard impose a requirement on what happens when you, say, access a double as an int?

What do you mean by weak typing? That can you cast away basically anything?

>It's easier to predict OCaml assembly output than the GCC's one

Surely you are kidding. If that were true, why doesn't OCaml curb-stomp C in benchmarks? IMO GCC's output isn't even a little surprising, esp. once you've gained experience with its favorite optimizations.


Not only can you cast away anything, but:

- casting away types unsafely is required for many common patterns in C, including any form of runtime polymorphism

- basically anything involving numerics has a good chance of happily casting for you without asking you or telling you


>- casting away types unsafely is required for many common patterns in C, including any form of runtime polymorphism

Which patterns? I'm thinking of stuff like the Berkeley Sockets API, which you can absolutely implement safely.

re numerics: fortunately, that's 100% amenable to static analysis (see -Wconversion, -Wsign-conversion in gcc/clang and coverity's warnings about potentially unsafe casts - e.g. promoting an int to a long is always safe, but an int to a float not necessarily) and follows very straightforward rules.


> Surely you are kidding. If that were true, why doesn't OCaml curb-stomp C in benchmarks?

Exactly because nobody (except backend writers) can predict what C compiler backends emit, especially when abusing undefined behaviour.


OCaml: https://godbolt.org/z/8bWDXy C: https://godbolt.org/z/WAjsvk

I've never actually looked at the output of an OCaml compiler, and I have to say I'm surprised by how clean it is. But I wouldn't call it more predictable than the C output.


>C

You forgot about -O2/3.


> Surely you are kidding. If that were true, why doesn't OCaml curb-stomp C in benchmarks?

You are confusing predictability with performance. OCaml is way more predictable exactly because it generates way more straightforward code.

On the other hand, GCC can do all kinds of stuff, generating any sort of assembly. It can even rewrite fairly complex math functions, simplifying them.

Parent was talking about predictable ASM, not about performant ASM.


I don't think optimizations count as magic. It'd be entirely unreasonable for a language to not do them and they don't change the meaning of the program, undefined behavior aside.

"No magic" is meant as shorthand for "the minimal amount of magic we can reasonably expect given the history of the language and requirements for backward compatibility, and the least magic of any high level language with more than 3 users." Please don't make me do this again for the number 3.


> don't think optimizations count as magic

It's not about magic, you were talking about

> It's pretty much the wysiwyg of high level -> asm

which is not true unless you use some ancient compiler from the Bell Labs era. Modern C compilers emit totally unpredictable asm.


>It's pretty much the wysiwyg of high level -> asm.

Everyone with this opinion needs to read this[0].

[0] https://queue.acm.org/detail.cfm?id=3212479


I remember seeing people from INRIA talking about CompCert (a formally verified C compiler, http://compcert.inria.fr), and the main idea was that C was still the main language used in some critical applications like aviation and military equipment.

This is why it was not rare to see defense companies or Airbus financing that research.


> In the age of Go and Rust, which already have a hard time finding niches

To make things even harder, in that niche space you already have decent enough languages like Nim and Zig.

So yeah, the market is really pretty saturated. I don't even have the time or project ideas to try them all, let alone bring them to work.


Can we talk about the proposed new features:

How are tuples different from structs? You can pass around and return whole structs (not pointers) just fine in plain C.

Underscores in int literals should probably be proposed to the ISO committee for C itself. A similar concept works successfully in OCaml. (Binary literals would also be useful).

I've never felt that C lacked sufficient control structures, and the new ones proposed here just seem like they will confuse people. What specific problem is each new control feature trying to solve?

Adding exception handling (to C) seems like an actively bad idea. What are the semantics? How does unwinding work exactly and how would it interact with resource allocation?

Coroutines are today successfully handled in libraries, so I'm not sure what you gain by adding them to the language.


I wonder how they propose to solve exceptions, destructors, constructors and overloading while being more compatible with C than D is. Apparently a beta was promised for "early 2019" according to this: https://github.com/cforall/cforall but nothing is forthcoming.

Already announced in 2007: http://lambda-the-ultimate.org/node/2181


Oh, looks like they have a partly completed compiler with a C transpiler backend: https://cforall.uwaterloo.ca/trac/browser


If safety is what they want without departing from C, just use MISRA-C.

> While C++, like C∀, takes an evolutionary approach to extending C, C++'s complex and interdependent features (e.g., overloading, object oriented, templates) mean idiomatic C++ code is difficult to use from C, and C programmers must expend significant effort learning C++.

the "significant effort" for learning C++ pays off (financially), while this certainly does not.


MISRA-C is just a set of rules. It does not stop humans from making stupid mistakes or ignoring the rules, and thereby violating safety.

It's not as strict as Rust, where the binary is not built if a rule is violated.


Is it possible to check for compliance with MISRA-C rules programmatically?

If it is then it can be made part of the build process and the effect will be the same: the binary will not get built if the rules are violated.


It is, but many developers are allergic to static analysers.


You quoted the key point: "idiomatic C++ code is difficult to use from C": most C++ libraries are useless in C programs.

They want the productivity improvement of C++ without leaving behind 2-way compatibility.


> use MISRA-C

MISRA-C's idea of safety is banning function pointers.


actually that's not true. It's more complicated than that[1]:

---

Rule 104:

This is there to prevent the address of a function from being calculated at run time. i.e. the use of pointer arithmetic to calculate the value of a pointer to function is prohibited.

The reason is that an error in the calculation of the address could lead to a system failure.

Rule 105:

This is to ensure that a function pointer is only used to access functions that have the same return value type and formal parameter list. i.e. the type of the function pointer and the function to which it points must be the same.

The reason for this is to keep the use of the pointer consistent. If it is not, it is possible for the programmer to supply the wrong number of parameters when a function call is made, as it might not be clear which function the pointer is pointing at.

---

FWIW I have used function pointers twice in my career. In my personal experience you can improve readability by avoiding function pointers.

[1] https://www.misra.org.uk/forum/viewtopic.php?t=240


> you can improve readability by avoiding function pointers

How do you sort?

  #include <stdlib.h>

  void qsort( void *base, size_t nmemb, size_t size,
              int (*compar)( const void *, const void * ) );


I was excited to read about this. An upgraded C would be nice. Unfortunately, after the homepage, this seems disastrously opposed to anything resembling a "better" C.

The features page feels extremely painful to read. It took me at least two minutes to even begin to make sense of the first section: "Declarations", then an explanation of Tuple, immediately followed by:

  int i;
  double x, y;
  int f( int, int, int );
  f( 2, x, 3 + i );      // technically ambiguous: argument list or comma expression?
Tuple isn't mentioned again until seven or so lines after its explanation. After reading further, I realise they aren't linked in any way, but this was confusing at first. I thought I was missing something major about the syntax.

  [ y, i ] = [ t.2, t.0 ];    // reorder and drop tuple elements
I don't know what this even means. If this:

  [ i, x, y ] = *pt;      // expand tuple elements to variables
pulls out tuple elements into variables, and this:

  x = t.1;        // extract 2nd tuple element (zero origin)
accesses tuple elements: then what is "reorder and drop", and why does the combination of the above two behaviours result in an ostensibly different third behaviour?

It gets worse the further down the page I try to understand. Very claustrophobic, and presented as a mish-mash of syntax examples. I can't see how this is any better than C's syntax, honestly. What C would benefit from is a better baked-in stdlib, OR an easily available, downloadable lib of helper functionality that doesn't require any modifications to existing code (e.g. when I want a hashtable in an already-established project, I don't want to modify my existing structs!)


Take a look at D. Some of D's features (such as alias this and mixin templates) may seem strange, but they come from decades of experience with C++ features- Walter Bright and Andrei Alexandrescu, two C++ pioneers, are among D's foremost advocates. WB made the first end-to-end C++ compiler, and AA wrote Modern C++ Design. GCC 9 now supports D.


Wow, this is quite a trainwreck. I think it's possible to improve C while keeping compatibility with C programs, but that's not how you should do it.

I think an improved C would be quite useful, but you really should be careful with what you include in it, and keep the language simple. A lot of the functionality feels like it was added because it could be added (WTF are nested routines?). This way lies a worse C++.

In particular, I wouldn't look too much into what C++ does. Rather, I would look into what Go, Rust and Zig do. `?{}` and `^?{}` are clearly inspired by C++ constructors and destructors, but that's not the only way (in fact, it's a rather bad approach if you want to keep things simple). For instance, you could avoid having constructors and require full initialization for structures with destructors. Hypothetically, it could look like this.

    struct Point {
        int x;
        int y;
    };

    struct Point new_point(int x, int y) {
        return { .x = x, .y = y };
    }

Meanwhile, C-for-all is mostly missing actually useful features. Slice types (https://www.drdobbs.com/architecture-and-design/cs-biggest-m...) would be huge, but they are nowhere to be seen. Vtable dynamic dispatch? Missing. Borrow checker? Missing. Module system (maybe a stretch, but...)? Missing.


You realize that this is (almost) valid C99?

In C99 it looks like this:

    struct Point {
        int x;
        int y;
    };

    struct Point new_point(int x, int y) {
        return (struct Point) { .x = x, .y = y };
    }


Yes. In fact that's intentional.


Who owns the return value of new_point?


It lives on the stack and is handed around by value, the question of ownership really only gets interesting once heap memory and pointers/references come into play.

I guess ancient C compilers would do a memory copy on return, but there are various optimizations in "newer" compilers which remove redundant copies, and structs up to 16 bytes or so are passed in registers anyway (via the 64-bit Intel and ARM ABI conventions).


The function that calls `new_point` does; the value is moved. Similarly, consider the following case.

    struct Point a = new_point();
    struct Point a2 = a;
Assuming `struct Point` has a destructor, this would move `a` into `a2`, and prevent accesses to `a` after `a2` assignment, as `a` is no longer considered to be alive at this point. C++ has a wrong default of cloning instead of moving, but a new language could fix that.


That default changes in C++11: https://en.cppreference.com/w/cpp/language/copy_elision

The logic around it is still pretty complicated, since we’re talking about C++.


> C++ has a wrong default of cloning instead of moving

Which it inherited from C.


Why is cloning rather than moving the wrong default?


It isn't. Cloning is what actually happens under the hood if a and a2 are in different memory locations. Making a inaccessible after the assignment is merely a convention. With a struct that only has data members, there are no further consequences.

If the struct contains a pointer, you need a kind of additional contract that says how the memory that is pointed to is going to be managed, particularly when it is going to be freed and by whom. And that is where move semantics establish a constricting convention that is intended to make that unambiguous.


So you’re looking for Rust with C’s syntax?


Would be a big improvement on Rust.


What part of Rust’s syntax do you think is a detriment vs C? A lot of it is very similar to C, but with ambiguity removed.

- Variable types are changed to avoid ambiguity in lexing.

- Loops and conditionals require {}, which means the parentheses can be dropped on the condition and there’s no ambiguity about nested if’s and else’s.

What would you like improved?


For some time I have been puzzled that clang and gcc don't provide an option/ABI (maybe -safe or -mcpu=x86-64-safe) for memory safety.

Last I checked, memory safety for C (e.g. fat pointers with bounds checking) seems to impose a ~60% performance overhead on traditional processors (with hardware support it could be much less.) In many (most?) cases, that overhead is worth the improvements in reliability and security.

For certain applications (probably anything network facing) I'd probably want to compile the whole OS, libraries, and software base with -safe.


Linked page explains rationale, see also the list of features : https://cforall.uwaterloo.ca/features/


So this is loading for you?


Yes, some of the many features: tuples, left-to-right declaration syntax, references with auto-dereferencing, constructors/destructors, nested routines, extended case with ranges, choose (switch with no fallthrough), overloading and polymorphism.


Quite a lot of that sounds like Turbo Pascal/Delphi.


They even added `with` for struct member access within a block.


Yes, but slowly.


What does powered by Huawei mean? Are they what Mozilla is to Rust?


It's a more general thing. Like elsewhere in the former British empire, the CCP is getting their fingers in the pie, and siphoning research product back to the weapons and surveillance industry in China.

We aren't in as dire a situation here in Canada as Australia [0] seems to be, but there's a reason it was so difficult to serve that Huawei extradition.

[0]: https://www.news.com.au/national/victoria/news/its-a-police-...


Probably funding the research


More interesting page, IMO: https://cforall.uwaterloo.ca/features/. Personally, I think it has a bit more than I would be comfortable adding to C, given the similarity-to-C stance they are taking. Are there safety improvements planned with regards to bounds checking?


I dream of a "better C" as something that I wish existed. Not an "alternative to C" like Rust or Ada (that is BETTER, but not C). I think:

- A C developer must find it as low-friction as possible to use.

- It will look like C as much as possible.

- It will not bring a big semantic/syntax departure.

- It will not bring any novel stuff or GC. BetterC is C as it would be written by the BEST developers with the BEST practices applied, and with the most obvious mistakes and ill-advised features removed (as much as possible).

- BetterC must be/have a transpiler. Even if that breaks perfect behavior for a while, it provides a way to cleanly upgrade things forward. Because BetterC is well-written C, the user will be encouraged to use it with the confidence of being in the ecosystem.

- BetterC must (eventually?) be incorporated as a front-end in a C compiler (like LLVM). So it's like having "strict mode" available.

- This means it's better if it's backed by the community as a long, step-by-step goal for C. This also means it must be done by people who actually love/like C and just want it to be better.

- It must provide a set of blessed libraries for unicode strings, arrays, numbers, dates, etc. If not built in, at least included, so most people who use C as an all-around lang don't leave (similar in this case to Rust).

- It must encourage rewriting of critical pieces of code, despite the zero cost of FFI with C.

- Fix stupid syntax issues like the dangling else. This is the "easiest" part, I think.

- Find the biggest issues of the lang/common libraries and kill them.

- NOT ALLOW STUPID BY DEFAULT. Bring the "unsafe" keyword here. No excuses!

- Bring sane macros.

- Bring ADTs, pattern matching, for ... in ..., and hopefully fast, fast compile times.


Here's a snapshot from the Wayback Machine, given the original website is down https://web.archive.org/web/20181206232625/https://cforall.u...


Some interesting ideas here. However, C is being improved in an evolutionary fashion; the latest revision was C18 in 2018. I like languages like this (my favorite is D), but the only C successor language I use is C11 (this is what clang targets; C18 is an incremental minor update to C11 that only fixes errata).

    x = t.1; // extract 2nd tuple element (zero origin)
Zero-based indexing in a programming language is not necessarily a dealbreaker but is always a worrying sign about how hard the designer thought their design through. (This does not apply to the design of C, which does not have zero-based indexing; arrays are indexed by offset rather than "from zero".) These are structs with fewer names and less ceremony involved. Why would the first field be "field 0"?


Modern computers encode the first address of memory as a binary word of all zeros. In an array, a pointer to the array + 0 is the first element. A pointer to the array + 1 is the second element. C is low-level, so it makes sense to follow the machine idiom.


I would like to see C with just the borrow checker - nothing else. I assume someone has probably already done this, and I just don't know about it.

The primary advantage of C is that (and this is just paraphrasing a Linus Torvalds quote) it was developed at a time when computers weren't that powerful, so it's very close to assembly, such that a proficient C programmer knows exactly what's going on under the hood.

I can't say the same for Rust or C++ which both spit out an enormous amount of code thanks to monomorphisation and all the other "zero cost" abstractions they employ.


It started out so great, but at the end I was left with the impression that someone was just trying to shoehorn features into C with quirkier syntax than C++. That type declaration syntax will probably haunt me in my sleep.


Just use the c2rust[1] tool to "extend" C.

[1] https://github.com/immunant/c2rust


YAFUATMCWCIN

Yet Another Fucked Up Attempt To Make C What C Is Not

C is not “unsafe” as in “flawed”: it’s designed to fill certain gaps. Its syntactic stability through the decades - as well as its “minimalism” (for lack of a better word) - are among its most important features.

Personally I have been having a hard time trying to figure out why some part of the industry desperately tries to change what C is, in one way or another.

...but probably I’m just an old Valley fart. Apologies for the rant.


Obligatory name-drop: Zig is an awesome low level programming language targeting the same space as C and CForAll. It isn't at all compatible with C on the source level like CForAll is, but it does make it super easy to interop with C, because it can include .h files. That means it lets you move projects from C to Zig file-by-file. Because Zig benefits from decades of insights about how C could've been better, it has way fewer footguns and higher productivity.

Obviously CForAll has a learning curve advantage over Zig, being based on C directly, but for any readers not aware, I'd say, also check out Zig.

https://ziglang.org/


I second the props for Zig. It's basically a really well designed, trimmed-down C with "compile-time values" that, as an emergent phenomenon, give it generics, plus composable allocators.

C interop is beautiful. My primary language runs on the Erlang virtual machine, and it's easier to write FFI using Zig than it is to write it in C.


Now that is an interesting combo.


if that combination is interesting to you, this is what I'm working on (on the side from my main job):

https://github.com/ityonemo/zigler


Oh, nice, I will definitely try that. Nifs in C are an eyesore and a risk, having more language options there is good and a clean integration like this is even better.


Just FYI, I recommend waiting till I 0.1 it, which will definitely happen before the end of January, when I'm giving a talk. I haven't tested that it works as a library yet, much less with Elixir releases (it works self-contained).


Ok. Even so, very nice development, this is exactly how I think it should be done, most other dual-language environments where you use one language for structure and another for speed tend to completely lose the context of the code for the optimized part. Having them blended in like this ensures that they are viewed as a whole rather than as two loosely coupled parts.


Have been doing this year's Advent of Code in Zig because I've been excited to try it out for a long time now. After getting over the first bumps, I really like how it keeps most of the simplicity and clarity of C while fixing a lot of annoyances. The really simple to grok but still incredibly powerful comptime idea is a good example: it adds generics and a much saner macro language without making something too complex or hard to understand. I still have great hopes for languages like Rust in the systems programming area because of the guarantees it gives at compile time, but when it comes to minimalism Zig feels like a much better choice. I would really like the std lib to be better documented, though, but I hear that is in the works.


Another (arguably better) option is to use Dlang's betterC feature.

https://dlang.org/spec/betterc.html


Zig is, however, unsuitable for interactive media or scientific computing due to the unfortunate lack of operator overloading (which is touted as a "feature"), rendering it less general purpose than I'd like.

Other aspects of its design look good however.


I'm not super sure what specific use case you are after, but are you sure you can't do this in some way with comptime generics? I have been able to do most generic-like stuff with comptime (though I sometimes have to change the implementation a bit). I also really value how clear the lack of overloading makes the language. Being able to clearly follow the control flow is one of the things I like that they kept from C.


Yes I've tried zig a fair bit and the lack of operator overloading made the use untenable. The "ability to follow control flow" is, IMO, a silly argument. I will concede that `comptime` is a nice feature.


You can use syntax of the form vec1.add(vec2), which is still better than add(vec1, vec2).


It is obvious how operator overloading makes scientific computing easier but what about interactive media? Can you clarify?


Oh, so like dual quaternion skinning, spherical quaternion interpolation, matrix multiplication, vector sums, etc.

I use a ton of linear algebra, vector algebra, and Grassmann/Clifford algebra at work, and not having operators is like... why would I ever do that to myself? You end up in "parentheses hell" for things that would otherwise be nicely coded as concise and easy-to-understand expressions.


I’m very surprised that no one here has mentioned the Cyclone language[0] yet. It seems to me that it tried to address the same niche as C-for-all.

[0] https://cyclone.thelanguage.org


The project died, but its region-based memory management influenced a couple of new languages.


> Cyclone is no longer supported

This might be the reason.


Like I said, it tried to be an answer to similar problems, yet it failed. By the logic of the C-for-all creators, Cyclone falls into the “evolutionary approach” group. Why Cyclone did not succeed, in spite of “revolutionary” languages with similar aims gaining traction, is what I don’t quite understand. I have never had any occasion to try Cyclone though, so I’m only basing my assumptions on what other people were saying about it. Still, its design appeals more to me than, for example, Rust’s, because the ability to include C headers directly is IMO essential for productively rewriting an old codebase.


The one thing I like about C-for-all is that (if I'm not mistaken) every valid C program is a valid C-for-all program. People mentioning C2 seem to ignore that C2 is incompatible with C. Or perhaps I'm wrong on that.


Like we extend the web with "security" ... seems like a recipe for success.


It is concerning that a pro Chinese government entity is sponsoring this project. They're infiltrating North American institutions, where's the oversight?


HN Kiss of death


HN's version of Slashdotted.

They might well use UWaterloo's CS Club infrastructure, which can withstand enough traffic. For instance, the FOSS mirror at https://mirror.csclub.uwaterloo.ca


Or they should use a static website and handle any amount of traffic on a $3 VPS :)


Thanks


[flagged]


You mean like the Americans did with Cisco? I am not really in a mood to defend the chinese regime, but the “free” West really threw away any moral high ground we might have had.


You can't say "one is bad, therefore the other is good." They can both be bad.


In which case you shouldn't paint one as the exceptional evil for doing the same thing, as is usually the case.


The CCP is exceptionally evil, they are conducting themselves like Nazi Germany.


China was actually a victim of the Axis powers, with a huge death toll, so this is more than a little disrespectful...


Nothing about having the imperial Japanese rape, torture, and conduct live medical experiments of unspeakable horror on your people makes you immune from becoming like them. If anything, it is a great excuse for doing so.

And the victimization you're speaking of was conducted against the people of the Republic of China. The CCP is an abomination, and they make China everything the Chinese have any right to hate about the Japanese.


Victimhood is not a persistent state. It is a momentary one. They are no longer victims to the Axis powers.

They are imprisoning and "re-educating" millions of their citizens because the region they live in (Xinjiang) is really important economically for China. Those citizens are treated similarly to how the Jews were treated by Nazis, i.e. put in concentration camps and villainized, although they haven't escalated (as far as we know) to extermination practices.


Just because the west also can't necessarily be trusted not to try to sneak exploits into sponsored projects (or through other means, such as a snail mail MITM) doesn't mean we shouldn't ask the question of other countries as well.

Whataboutism is never the correct answer to a valid concern (in general. I'm not sure if there's much to worry about in this specific case or not).


I understand your criticism of whataboutism. I dislike whataboutism myself.

However, the issue here is that I wish the nations I am part of would conduct themselves in a manner that gives them the moral authority to speak up on issues of freedom and democracy. And this - quite frankly - currently isn’t the case. It isn’t the case because the post-9/11 paranoia led elements of our society to feel so threatened that they threw defining rights and freedoms overboard (and this is the favourable reading of events).

You cannot topple governments, install dictators, kidnap and torture people and keep them locked up without trial for years, cover up war crimes, go after those who blow the whistle on said war crimes, spy on your allies and on your own people and still expect other nations to see you as a beacon of freedom and democracy. A lot of the behaviour of the US is only justified by: “Yeah but we are the good ones” and this is an argument my one Nazi Grandfather also used: “Sure there were bad things, but they built the streets and everybody had a job.”

A free society isn’t measured by how it treats its wealthiest and most powerful members — it is measured by how it treats its enemies.

Hard to demand someone to respect human rights, if you don’t give a damn about it yourself, when it just hits the people you dislike.

That being said, China is still worse. But this is not a contest, and I'd like to change the nations I live in first.


>Whataboutism is never the correct answer to a valid concern

I'd say it is. It puts the concern in perspective, and stops people from using it to paint the "enemy" (anybody that's not them/their government) as particularly evil for doing the same thing the other side does.


That doesn't make it the answer though, just something else that might be beneficial to mention.

As an answer it obscures whether the original question had merit and leaves it unanswered. As something that's included alongside an answer it is superior to an answer alone and to itself alone.

I agree whataboutism can be useful in causing people to reassess their perspective on a topic, but when used haphazardly it can obscure more than clarify (and has been used this way extensively in the past for this purpose).


In the same way the fact that Microsoft is a US company has implications on backdoors/exploits and Windows. Except C for all is Open Source, so you can go check.


C with modern safety and productivity features is called "C++" and has been around for decades. I'm amazed at how much effort people will expend just to avoid the C++ boogeyman. No, writing C++ does not automatically make your code bloated. No, using C does not guarantee lean design.

Plain C ought to be considered a legacy language and not used for new code. There is zero reason to prefer it over whatever style of C++ you'd like. If you want procedural struct-based C++, you can have that, but gosh, don't write C.


> writing C++ does not automatically make your code bloated

Sorry, but it does.


No, it does not.


  $ cat hello.c
  #include <stdio.h>

  int main(void)
  {
   printf("hello world\n");
   return 0;
  }

  $ cat hello.cpp
  #include <iostream>

  using namespace std;

  int main(void)
  {
   cout << "hello world" << endl;
   return 0;
  }

  $ gcc -o hello_c hello.c
  $ g++ -o hello_cpp hello.cpp
  $ ll hello_*
  -rwxr-xr-x 1 root root 5880 Dec  5 14:13 hello_c
  -rwxr-xr-x 1 root root 7424 Dec  5 14:14 hello_cpp
I see 1544 bytes of bloat. Hope that helps.



