I still like C and strongly dislike C++ (multimedia.cx)
164 points by todsacerdoti on May 26, 2021 | 165 comments



> The reason why I still like C is that it is a simple language. Simple in a sense that it is easy to express ideas and what to expect from it. For example, if you want to retrieve an array value with two offsets, one of which can be negative, in C you write arr[off1 + off2] while in Rust it would be arr[((off1 as isize) + off2) as usize].

There's an argument to be made that C is a simple language compared to some, I'll grant that. But implicit casts and the integer promotion rules really aren't a good example of simplicity. Terseness, sure, but not simplicity. For example, from https://www.cryptologie.net/article/418/integer-promotion-in...

    uint8_t start = -1;
    printf("%#x\n", start); // prints 0xff
    uint64_t result = start << 24;
    printf("%016llx\n", result); // should print 00000000ff000000, but will print ffffffffff000000
    result = (uint64_t)start << 24;
    printf("%016llx\n", result); // prints 00000000ff000000
There are good reasons why e.g. Go, which prioritizes simplicity very highly, does not allow implicit integer casts.


This is the difference between "simple" and "easy", as Rich Hickey so eloquently put it. C--this part of it, at least--is easy. You can write code without having to think about the different integer types, thanks to the complex integer promotion rules. Rust and Go are simple: the semantics are clear and understandable, at the cost of requiring the programmer to think about this aspect of numeric types. As programmers, we often mistakenly think that simple decisions automatically lead to an easy interface for the user; sometimes they do, but not always.

(On balance, I personally think that not having the implicit conversions, as Rust and Go do, is generally the better decision, but I certainly acknowledge that C is easier in this regard.)


> (On balance, I personally think that not having the implicit conversions, as Rust and Go do, is generally the better decision, but I certainly acknowledge that C is easier in this regard.)

Really? I think there must be a better way - I also don't like implicit conversions, but the Rust approach gets extremely verbose, especially if you want to check for truncation etc.


> Terseness, sure, but not simplicity.

I feel the same whenever somebody shows up talking about their favorite functional language. Sure it can be less code, but it looks like an explosion at the unicode factory.

Really though, I'd much rather have a few language "gotchas" that you need to learn once, over countless gotchas in other developers' code because they are using a language that lends itself to ungrokable code. C really does have the property where you can look at the code and just know what it does.

That said, I don't think C scales very well for big projects, and I personally develop in a C/C++/Python environment as a robotics system SWE: C for embedded stuff, C++ for nearly all system code, and Python for the data/deep learning stuff.


C does not have that property of not having gotchas at all, due to how undefined behavior is handled (or rather not handled), plus compiler- and architecture-defined behavior.

If you can spot undefined and other platform defined behavior always, someone probably would want to hire you as a security specialist.


In current C this is actually undefined behavior:

  uint8_t start = -1;
  uint64_t result = start << 24;
However, there's some hope of defining it once C requires two's-complement integers (this wasn't reasonable in the past, but is now).

There are worse problems with `a << b` because the shift instruction on CPUs doesn't actually treat b as an 'int', it reads the lower X bits and uses that. C can't define what X is because not only is it different in x86 and ARM but x86 doesn't even match itself - it's different for scalar and SSE ints.
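A minimal C sketch of the shift-count problem, assuming a 32-bit uint32_t; the commented-out shift is the one the standard leaves undefined:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t v = 1;
        unsigned n = 32;
        /* Undefined: the shift count equals the width of the (promoted) type.
           x86's scalar SHL masks a 32-bit count to 5 bits and would compute
           v << 0; other hardware and the optimizer are free to do otherwise. */
        /* uint32_t r = v << n; */
        /* Defined: widen first, then truncate explicitly. */
        uint32_t r = (uint32_t)(((uint64_t)v << n) & 0xFFFFFFFFu);
        printf("%u\n", (unsigned)r); /* 0 */
        return 0;
    }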


No, this is not always undefined behavior.

    uint8_t start = -1;  // Your code
    uint8_t start = 0xFF;  // Absolutely equivalent
As for the expression `start << 24`, `start` will always be promoted to `signed int` because `int` is guaranteed to be at least 16 bits wide, so it can hold all `uint8_t` values.

If the width of `int` is 32 bits or narrower, then `start << 24` (which is 0xFF000000) will put a binary 1 into the sign bit, which is indeed undefined behavior.

Otherwise if `int` is 33 bits or wider, then `start << 24` will fit in `int` nicely and be fully defined behavior.

Reasoning about these complicated cases involving implicit conversion (-1 -> 0xFF), promotion (uint8_t -> signed int), and implementation-defined bit widths (how wide is int?) is exactly why I dislike dealing with C and C++.
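A small sketch of the defensive-cast idiom, assuming the usual 32-bit int:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t start = 0xFF;
        /* start promotes to signed int; with a 32-bit int the shift below
           would put a 1 into the sign bit, i.e. undefined behavior: */
        /* uint64_t bad = start << 24; */
        /* Casting before the shift keeps the whole computation unsigned: */
        uint64_t good = (uint64_t)start << 24;
        printf("%016" PRIx64 "\n", good); /* 00000000ff000000 */
        return 0;
    }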


Where have you found an ILP64 system? There aren't any - that's why I said "current C".

The idea that C programs can survive having the size of their most common type changing is not realistic, though luckily people rarely try.


Your insistence on "current C" is a brittle claim to make. I'll give you the benefit of the doubt that there are no popular instances of ILP64 systems today. But someday your comment will be false, and it could happen at a surprising time. This would be a repeat of how old C programmers' false assumptions were violated by the 16->32 bit transition, the 32->64 bit transition, and the increasing exploitation of UB for optimization. Whereas my comment is always true with respect to the C11 standard - there can be legal implementations where `(uint8_t)-1 << 24` is undefined behavior, and other implementations where it is well-defined behavior equal to 0xFF000000.

As for writing code that behaves correctly on any legal C implementation, it is certainly possible and I practiced this for years. Go ahead and audit my published programs. It is not easy though, and I have made many mistakes along the way before I figured out how to do it consistently. Although there are many rules that I need to follow, the two most important ones are: Obey the fact that built-in integer types have minimum widths (e.g. char is at least 8, int at least 16, long at least 32); Use size_t for array lengths and indexes.
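A minimal sketch of that style, with a hypothetical helper that uses exact-width types for the arithmetic and size_t for lengths and indexes:

    #include <inttypes.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: sum an array without assuming how wide int is. */
    static uint64_t sum_u32(const uint32_t *a, size_t n) {
        uint64_t total = 0;               /* wide enough on any conforming platform */
        for (size_t i = 0; i < n; i++) {  /* size_t for lengths and indexes */
            total += a[i];
        }
        return total;
    }

    int main(void) {
        uint32_t a[] = {1, 2, 3};
        printf("%" PRIu64 "\n", sum_u32(a, sizeof a / sizeof a[0])); /* 6 */
        return 0;
    }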


That there is a chance of undefined behavior is a statement without worth or meaning. It's like having an estimation method without an error bound.

In my experience, I run into "undefined behavior" in C-land extremely rarely. I can't remember a single instance in 15 years of embedded programming (unsigned/type promotion, sure, everybody does that at least once; some weird architecture-defined thing, nope). Maybe that's because I'm some super amazing C programmer, or that I read the docs, or maybe it's because the chance is low?

Regardless, people very often become hyper-focused in these types of debates, to the point of being completely detached from reality. The valid way to do these sorts of language comparisons is to look at the pros and cons and give them a weight. Something like: this pro is (very good), but (rarely comes up in practical programming); or (this con is very bad), but basically only shows up in (interview questions), etc. Then take the sum of the weights of each side, rather than making statements like, "language has/doesn't have x feature/drawback thus I'm right about everything, yay!". I.e. if you are looking at a 10d problem in 1d, you're going to be wrong.


C seems to scale very well for the Linux kernel?


More so, the integer promotion rules for unsigned char and unsigned short are wrong and introduce potential signed integer overflow (in the case of unsigned short). If they were simple, they would have promoted to unsigned.
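A classic illustration of that trap, assuming the common case of a 32-bit int:

    #include <stdio.h>

    int main(void) {
        unsigned short a = 65535, b = 65535;
        /* Both operands promote to signed int, and 65535 * 65535 does not
           fit in a 32-bit int: signed overflow, i.e. undefined behavior. */
        /* unsigned int bad = a * b; */
        /* Forcing an unsigned multiplication keeps the result defined: */
        unsigned int ok = (unsigned int)a * b;
        printf("%u\n", ok); /* 4294836225 */
        return 0;
    }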


Explicit casts are documentation. They may be tedious, but they are clear.


Right, this is why I never understood why people always harped on casts in C. Small price to pay for clarifying intent.


Yeah implicit casts are nice when working on a toy or personal project, but if you are putting code on a rocket or a game server or a self driving car etc, etc, absolutely not.


This blog is by an ffmpeg developer and in my opinion you'd never have ffmpeg with that rule because everyone would have gotten bored of all the typing and stopped working on it.


Right, because people don't use smart editors that fill in the repetitive preambles.

Seriously, are you talking about C, a language where you always have to write code to access any matrix optimally by hand... (or use an advanced library to do it, better than BLAS, which ffmpeg does not even use)?


There are not a lot of operations on large matrices in there. 8x8 at the most, and the memory access on those is handcrafted so language features wouldn't really help. (for instance the order DCT coefficients are stored in memory actually depends on the IDCT used, which is different on each platform and often written in assembly.)


It's 2021, you can use a library for almost everything, it's basically the choice of the developer.


Native Oberon had MPEG video decoders, so apparently that isn't an issue.


But if the comparison is with C++, then they would both rank equally poorly here, because C++ doesn't behave any differently.

If you write code like that in C frequently (e.g. working with system registers on embedded CPUs), you do eventually get used to explicitly casting defensively, and I agree that it's nice that you get saner defaults in e.g. Rust.


Since int is allowed to be as little as 16 bits wide, and start has promoted to int, start << 24 could be undefined behavior.

How it should work is that the type of "start << 24" should be synthesized from the operands, without any idiotic "promotion" rule: it should therefore have type uint8_t.

The shift should then require a diagnostic that it exceeds the width of the type. No undefined behavior nonsense; if the shift amount is a constant then it is statically diagnosable against the width of the type, and therefore should be.

This diagnostic will inform the programmer that their intent isn't being expressed.


The type of "uint8_t << 1" being "uint8_t" would be confusing because it would always lose the top bits (unlike the current definition), but presumably the intent is to insert it into a larger bitfield or multiply by 2.
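A short sketch of what the current rules actually do with that case, assuming a 32-bit int:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t b = 0x80;
        /* Today b promotes to int, so the shifted-out bit survives... */
        int wide = b << 1;                  /* 0x100 */
        /* ...and is only lost if the result is narrowed back to uint8_t. */
        uint8_t narrow = (uint8_t)(b << 1); /* 0x00 */
        printf("%x %x\n", (unsigned)wide, (unsigned)narrow);
        return 0;
    }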

An alternative safer model called as-if-infinitely ranged integers solves this problem:

https://resources.sei.cmu.edu/library/asset-view.cfm?assetid...


Note that, say,

   SHL AL, 1  ;; x86 assembly (Intel syntax: destination first)
will not affect any part of the EAX register other than the low 8 bits specified by AL. This is very simple and obvious.

An operation involving uint8_t staying entirely in that type is a complete no-brainer.

If you want to shift bits out of the uint8_t, you cast the value to something wider, and shift that.

And note that this is what you have to do when working with types that are as wide as int, or wider, which do not promote. uint32_t << 1 will lose the top bits!

Losing promotion would make everything consistent: uint8_t << 1 loses the top bit the same way as uint16_t << 1 and uint32_t << 1.
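A sketch of the cast-then-shift idiom for a type that does not promote, assuming a 32-bit int:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t w = 0x80000000u;
        uint32_t lost = w << 1;           /* uint32_t doesn't promote: top bit gone, 0 */
        uint64_t kept = (uint64_t)w << 1; /* widen first: 0x100000000 */
        printf("%" PRIx32 " %" PRIx64 "\n", lost, kept);
        return 0;
    }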

"As if infinitely ranged" (AIIR) is an obvious idea but it has flaws. One is the run-time checking that it requires, which is unattractive in a C-like language. The other is semantic issues. In a language like C, the values of calculations eventually have to settle into typed boxes. Under AIIR, these are not equivalent:

  int x = 42;
  int y = x * x + 3;
  int z = 2 * y / x;

  int z = 2 * (x * x + 3) / x;
In the second version, we have not bound the (x * x + 3) term into a variable, but inlined it into the z calculation. Therefore, the result of that subexpression is permitted to be outside of the range of int by AIIR.

This is completely unacceptable, though; simply introducing a variable of the precisely correct type to hold the value of an intermediate calculation changes the semantics of the calculation.

Only the values being juggled in the AIIR, so to speak, benefit; not values that have landed.


> "As if infinitely ranged" (AIIR) is an obvious idea but it has flaws. One is the run-time checking that it requires, which is unattractive in a C-like language.

It seems to be acceptable in Swift so far. (Swift traps on overflow on all expressions though, it doesn't use AIR.)

It is unpleasant if you don't like to distinguish between statements and expressions, but it might be usable. Anything is as long as it's well defined.

The main advantage is that it makes it easier to fold away intermediate steps like (x + 1 - 1). This is the kind of thing that looks pointless, but is needed to remove abstractions like in macros or C++y generic code.

If you'd like your functions total (not trapping), then there could be a set of wrapping/saturating operations to go with the default infinitely-ranged ones. Or you could declare x/y/z to be unsigned.
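For what it's worth, mainstream compilers already expose checked operations along these lines; a minimal sketch using the GCC/Clang builtin __builtin_add_overflow (not standard C):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX, sum;
        /* Returns true when the mathematically exact result doesn't fit. */
        if (__builtin_add_overflow(x, 1, &sum)) {
            puts("overflow: trap, saturate, or wrap here, as policy dictates");
        } else {
            printf("%d\n", sum);
        }
        return 0;
    }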


Explicit casts are problematic. Here's an example:

  int32 a;
  int16 b;
  //...
  b = (int16)a;
Later you decide to change b to int32.

  int32 a;
  int32 b;
  //...
  b = (int16)a;
Now you have a bug. If you didn't have that explicit cast the program would have worked as expected.


But, that's a good example of why implicit casts are bad!

      b = (int16)a;
This contains two casts... one explicit cast to int16, and one implicit cast back to int32. In languages without implicit casts, this would be an error.


Many languages "without implicit casts" e.g. Java still have this problem because they allow widening casts.

I don't think it makes sense to ban them in a language with powerful enough checks, like dependent types. If you have two variables and one is typed "int 1-10" and the other is typed "int 1-5", and it fails at runtime if it doesn't fit in the destination range, why do you need to type something that says you really meant to assign one to the other? Just assign it.


> Many languages "without implicit casts" e.g. Java still have this problem because they allow widening casts.

I'm reading this as, "Many red boxes are blue circles." If the language allows implicit widening casts, it has implicit casts.

I don't think more powerful checks are necessary. It's just that the implicit conversions in C are a bit wild, and they result in unexpected behavior and surprise programmers, and from experience, making all casts explicit is not such a burden (except for stuff like char ptr -> const char ptr).

I think the fantasy here is simple... as much as we want to explore new ways to make safe programs with better type systems and runtime checks, there is still some design space near C which is safer but not really more complicated.


> I'm reading this as, "Many red boxes are blue circles." If the language allows implicit widening casts, it has implicit casts.

The problem is that people state things like "Java doesn't have implicit casts for correctness." But then it does have implicit casts, so now is there still correctness?

Another case is Haskell, where the wiki tells you:

> Conversion between numerical types in Haskell must be done explicitly. This is unlike many traditional languages (such as C or Java) that automatically coerce between numerical types.

But what this actually means is instead of `a = b` you do `a = fromInteger b`. This is obviously also an implicit cast, because Haskell has type inference. So again, you're not writing a proof that your conversion is correct. You're just writing a different, longer "yes I really mean to assign this" statement.


> The problem is that people state things like "Java doesn't have implicit casts for correctness." But then it does have implicit casts, so now is there still correctness?

I am a bit too tired to engage with this kind of outright insanity. If you have a problem with what "people state", but not specifically with what I state or what has been stated in this conversation, then go have arguments with "people".

The idea that "Java has correctness" or "Java does not have correctness" does not make any sense. Java is a language, it's the programs that you write with it that are correct or incorrect.

> This is obviously also an implicit cast, because Haskell has type inference.

Incorrect, it is an explicit conversion. There is no such thing as a "cast" in Haskell, there are only functions which convert values of one type to values of another type. (well, in FFI code, there are functions which "cast" pointers and the like, but that's FFI.)

And yes, you're not writing a proof that the conversion is correct. This is... blindingly obvious, so I have no idea what kind of point you are trying to make. "Ordinary Haskell code is not formally verified" is not news.


> I am a bit too tired to engage with this kind of outright insanity. If you have a problem with what "people state", but not specifically with what I state or what has been stated in this conversation, then go have arguments with "people".

Touchy! I am claiming that the people who want all casts to be explicit have not actually used a system where they're all explicit very much (because, as in Java, some are still implicit) and would find it annoying if they did.

That could result in performance compromises, like using int arrays everywhere instead of smaller types. Which is fine for scalar values, but for arrays it wastes memory.

> Java is a language, it's the programs that you write with it that are correct or incorrect.

Surely this language feature is intended to promote correctness. What else are errors for?

> Incorrect, it is an explicit conversion.

But `a = (uint16_t)b` and `a = fromInteger b` aren't the same thing - in one you have to name the destination type and in the other you don't. One is more explicit than the other. Is one of them bad?

> "Ordinary Haskell code is not formally verified" is not news.

It is news to people who don't do numeric programming. "If it compiles it probably works" / "if it compiles it's probably correct" is a real thing said about the language on this forum.

C clearly has issues (the implicit unsigned short->int promotion is totally wrong) but I think trapping on overflows at runtime would be a much better improvement than adding compile errors.


> "If it compiles it probably works" / "if it compiles it's probably correct" is a real thing said about the language on this forum.

Yes. "Probably", as you said. "Probably" is much, much weaker than "formally verified".


> "If it compiles it probably works" / "if it compiles it's probably correct" is a real thing said about the language on this forum.

Can you link some examples that weren't obviously at least 50% tongue in cheek?


> Touchy!

Hey, I'm only human. You made some inane comments.

> I am claiming that the people who want all casts to be explicit have not actually used a system where they're all explicit very much (because, as in Java, some are still implicit) and would find it annoying if they did.

That claim is trivially false... Go and Rust have explicit casts, and they're reasonably popular. For example, in Go,

    var x int16
    var y int32
    x = y        // ERROR!
    x = int16(y) // ok
The same is true in Rust.

    let x: i16 = 1;
    let y: i32 = x;       // ERROR!
    let y = i32::from(x); // ok
    let z = i16::from(y); // ERROR!
> That could result in performance compromises, like using int arrays everywhere instead of smaller types. Which is fine for scalar values, but for arrays it wastes memory.

This is clearly false in practice, just look at extant Go and Rust code.

> But `a = (uint16_t)b` and `a = fromInteger b` aren't the same thing - in one you have to name the destination type and in the other you don't. One is more explicit than the other. Is one of them bad?

If that's a rhetorical question, just make your point.

The fromInteger function is an explicit conversion. The conversion is explicit, but the destination type isn't explicit. The source type isn't explicit either, but apparently that's okay... consider the C code:

    int x = (int)y;
Would you call the cast "implicit" because you don't know the type of y just by looking at the code? No, it's still an explicit cast. Maybe a "super duper mega explicit" cast would look like this:

    int x = (float->int)y;
Reminds me of Scheme. You would want super duper explicit conversions in dynamic languages, because neither the source nor destination type are annotated. You want the destination type annotated in traditional type systems because otherwise the compiler would not be able to figure out what you are doing. Haskell does not need the source or destination type annotated, this is ok.

> It is news to people who don't do numeric programming.

I'd say that it's blindingly obvious to people with a rudimentary understanding of programming.

> C clearly has issues (the implicit unsigned short->int promotion is totally wrong) but I think trapping on overflows at runtime would be a much better improvement than adding compile errors.

You can already -ftrapv if you like, but "trapping on overflow" is a contributing factor to the Ariane 5 disaster, and the lesson there is that trapping on overflow is not necessarily a safe default. The cost of runtime overflow checks is surprisingly large, too, which is why people so often turn it off in languages that support it out of the box (like C# and Ada). The mitigations for these problems will involve some combination of run-time checks, compile-time checks, and testing.
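A tiny sketch of what -ftrapv changes (GCC/Clang, signed arithmetic only); compiled without the flag, the addition below is plain undefined behavior:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int x = INT_MAX;
        /* With -ftrapv this addition aborts at runtime; without it,
           the compiler may assume it never overflows. */
        int y = x + 1;
        printf("%d\n", y);
        return 0;
    }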


Tbh, I don't think you're the first person to think that but the trouble with making a "safer C" is that it's not C. Or to be more precise, it looks like C but isn't C compatible. This has ended up killing most attempts. Apparently nobody actually wants an almost-C language (alternatively, nobody has yet managed to successfully market one).

There seems to be an uncanny valley effect with C-like languages. The best you can do is have non-standard compiler extensions on top of ISO C.


No, this approach is more common and successful than you might think.

We're not talking about making an entirely new language which is incompatible with C. You adopt a style guide, an accompanying static analysis tool, and maybe add some annotations to your source code. Your code is still C-compatible and you can still use the same compilers.

Static analysis of general C programs is quite difficult, and turning on an analyzer for an existing codebase typically results in obscene numbers of false positives. However, if you pair a static analyzer with a strict style guide which restricts the language, you can make the static analyzer much more powerful and useful.

This could be MISRA, it could be a formal verification system, or it could be something else entirely.


I think the bug here is arguably also an implicit cast, from int16 back to int32. Languages that don't allow implicit casts at all won't compile that last line.


I think it’s less a problem with explicit casts and more a problem of insufficient expressivity. If what you wanted was “cast to the type of b”, then something along the lines of C++’s decltype would be better:

    b = static_cast<decltype(b)>(a);
And as other commenters have pointed out, if implicit widening is disallowed that would also prevent the potential bug.


The problem with mandating this is that it doesn't really express anything - the semantics are the same as `b = a;`. And there's a cost because it's quite verbose.

If what you want to express is "I know b and a have different storage sizes", that could be useful, but it doesn't do that because eg if you typo `b` for `a`, then `b = static_cast<decltype(b)>(b);` is still accepted.


> The problem with mandating this is that it doesn't really express anything - the semantics are the same as `b = a;`. And there's a cost because it's quite verbose.

Sure, but at least the problem is back to "should conversions/casts be implicit or explicit?", rather than "I cannot do what I want with explicit casts".

> If what you want to express is "I know b and a have different storage sizes"

You may not even know this (e.g., in generic code), and if you did want to ensure different storage sizes specifically, then you want a (static) assert.

> eg if you typo `b` for `a`, then `b = static_cast<decltype(b)>(b);` is still accepted.

That would also apply to the version with implicit casts/conversions, wouldn't it?


If C lets you shoot yourself in the foot, it doesn't mean that you should do it! If you cast something then you need to take a moment to think about it first and check if it makes sense and whether it is really needed.


> Now you have a bug.

Weeell, maybe not. I agree that whenever I see a cast, I think "WTF?", but maybe you want what the cast does?


Wouldn't the compiler stop your second example? You're trying to assign an int16 value to b, which is now an int32.


Not in C.


Sounds like this is less "explicit casts are bad" and more "explicit casts are bad in C" then. Rust indeed errors out at compile time for the reason I stated: https://play.rust-lang.org/?version=stable&mode=debug&editio...


The first program might not have worked as expected, though.


What's your justification for saying the second printf "should print 00000000ff000000"?


Not the parent, but my take on it is that you have only unsigned types in all of those signatures, but C goes and promotes the u8 to a signed quantity before doing the shift.

What I believe most people would expect is that 0xff would get shifted up, and that's it, but instead you end up with that plus a sign extension to fill out the "new" 32 bits at the top of the 64-bit value.
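A sketch of those two steps, assuming a typical implementation with a 32-bit int (the promoted shift itself is the undefined part discussed above, so the intermediate value is constructed directly here):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* What the promoted shift produces on typical implementations:
           an int with bit pattern 0xFF000000, i.e. a negative value. */
        int shifted = (int)0xFF000000u;       /* -16777216 */
        /* Converting that negative int to uint64_t adds 2^64, which is the
           "sign extension" that fills the upper 32 bits with ones. */
        uint64_t widened = (uint64_t)shifted; /* 0xFFFFFFFFFF000000 */
        printf("%d %016" PRIx64 "\n", shifted, widened);
        return 0;
    }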


I would say it's primitive rather than just simple as a language. It lacks absolutely necessary features like generics, not to mention memory and thread unsafety by design. There's zero sense in starting a new project in C in 2021.



Microsoft's compiler did not support C99 for a long time because they did not really think of it as a C compiler or a C/C++ compiler; it is a C++ compiler. Its support for C90 was explained as being maintained "for historical reasons" ( https://herbsutter.com/2012/05/03/reader-qa-what-about-vc-an... ), but they had no reason to add support for new standards of a language that was outside the scope of the project.

It's odd for the author to complain about the commingling of C and C++ in compilers and then complain about Microsoft specifically not doing that.


The MSVC compiler actually has distinct C and C++ compilers (or at least compilation modes). In C++ mode, no C99 is accepted. What's different from clang and gcc is that the MSVC C++ compiler doesn't accept any "modern C" code, while clang and gcc have C++ language extensions for this (for instance clang accepts the full C99 designated initialization feature set in C++).


I think they only moved away from that path due to customer pressure to keep supporting C on Windows.

That is the official excuse to only support C on Azure Sphere, despite the whole security sales pitch.


I used to feel the same. For a long time I got away with using C and Python for everything. They paired well and I always came away feeling that my code was expressed elegantly enough for me. C was for stuff that had to be fast, Python for things that were more functional or meta. Then I started a project that needed to be faster than Python but also required some heavy-ish meta-programming. I wrangled for way too long with the C preprocessor, and while it was way more powerful than I had imagined, it still fell well short of being able to do what I needed without it looking and feeling like a dog that had to be put down. I think I would have needed a code-generating script to do it, and to me that sounds disgusting. But C++ templates did exactly what I needed. They look awful and I still don't even understand my own code when I try to go back and read it. But it made me understand and appreciate that C++ does have a place, and for certain projects it might actually be the best choice. I'm sure some fanboys of the newer C++ descendants (Rust/Golang/Zig, et al.) will disagree.


Type parametrization is actually done better in Zig. I wish it departed from the "no implicit actions" dogma and let RAII in. It would be almost perfect...


I kinda like that `defer` is very visible. Although I haven't worked on large Zig codebases so I don't know how that scales.


`defer` does not allow passing ownership so it limits your designs quite significantly.


This is the USP of C++: it is a set of languages which can be combined as needed depending upon problem requirements (aka multi-paradigm). I personally always start with a subset and only add features as needed. Two old books which I found useful in thinking about how to use C++ are:

Scientific and Engineering C++ by Barton and Nackman.

Multi-paradigm design for C++ by James Coplien.

I have not as yet found any equivalents for "Modern C++".


I prefer Rust simply because it's safer and C++ has become monolithic. Also gotta love an excellent package manager.


The problem with Rust is that it still can't hit all of the targets that C++ can. I have to support IBM AIX and HP-UX and the only things that I know of that run there are C/C++ and Java.


Well, all languages have good and bad parts. C and (the modern) C++ are two completely different languages in what matters most, and, to be fair, it is C++ that much of the world-class software, like Unreal Engine, is written in. As much as I love C, if faced with a large project, I would only choose C++ as the implementation language.


At my first real dev job a big portion of my work heavily involved the Qt framework and, while at the time lambdas were still a big missing sore point, using that framework and their widespread habit of passing by const ref instead of pointer made me like C++ a lot more. I completely understand that pointers and dynamic memory management exists for really good reasons, but isolating global object creation into your main and leaning on internally managed lists of objects can be really strong from the perspective of writing good testable code.

I've been working primarily in PHP for a while now (and honestly put a lot less value on language comparison) but from what I've seen the expressiveness and power of C++ (at least in the ways I like it) have mostly been superseded by Rust and I think that'd be where I started if I ever need to work on something where in application logic would actually be a product bottleneck (instead of data source interaction).


Is Unreal Engine world-class software? I've heard a lot of negative things about it, so it's hard to tell if it's just one of those things people complain about because it's so popular or if it really does get things wrong.


Bad things compared to what? Unity, which does not have half the features out of the box, and where every other game implements basics like the FPS limiter wrong?

Frostbite or Dawn Engine with its bad editors and performance problems?

Crytek used by few games? (This one has a chance to challenge UE) Unigine which is also rather rare?


People tend to complain about game engines because they're just so accessible, yet allow you to do extremely complex things.

For example if I wanted to make a 3D platformer in unity, I could probably commission an artist to create the character, download a starter kit of some sort, and be done with it in about a week.

However this means I'm tying in so much code, and if anything doesn't work exactly the way I expect it I can complain. I might complain about the starter kit I purchased, I might complain about Unity crashing when the real reason for the crashes are my own crappy code.

Particularly when I was younger, I would often try to do way too much at once and this leads to frustration. But, theoretically you could create the next gears of war with three other people in about a year. Odds are you're going to run into tons of problems though just because it's so hard to do that


> theoretically you could create the next gears of war with three other people in about a year

this is simply not true


Bright Memory was created by a single person. Keyword being theoretically, in reality you'd just go insane


Heck, most modern C compilers are written in C++ as well.


Right, browsers too are written in C++ for the most part.


I want to like C more; its simpleness is great until you need to do anything at compile time, then the macros blow any C++ complexity out of the water.

It's why I like Zig so much. It really is the language that understands what was wrong with C, fixes those parts, and doesn't do too much more.

Macros suck -> comptime is just Zig code.

Pointer or array is ambiguous most of the time -> array and pointer notation.

Error handling with a lot of possible errors leads to gotos or messy cleanup -> errdefer and defer make that clean.


Browser times out on article.

Here is my take. I like C when I work on firmware for less powerful microcontrollers. I like modern C++ when I work on backend servers / middleware. I like Delphi / Lazarus when I work on desktop software. I like JavaScript when writing web front-end libraries / apps. I like Python when a shell script gets a bit too complex. Etc. Etc.

The real truth - I do not really like any of those. They're just practical tools that help me build my product. Product designed and created by me is what I like.


> It is not safe e.g. out of bounds array access is rather common and there’s no runtime check for that while e.g. Borland Pascal let alone something more modern had it (even if you could turn it off in compilation options for better performance)

ALGOL dialects for systems programming had it 10 years before C was born.

CPL, the language whose subset BCPL was designed as a bootstrapping-compiler language, had it.

PL/I the language used on Multics had it.

PL.8, the language created by IBM for their RISC project and its LLVM-like compiler toolchain, had it.

As did plenty of others.

> And in most cases you know what will compiler produce—what would be memory representation of the object and how you can reinterpret it differently (I blame C++ for making it harder in newer C standard editions but I’ll talk about it later), what happens on the function calls and such. C is called portable assembly language for a reason, and I like it because of that reason.

When the target CPU is an old 8 or 16 bits CPU, and the code gets compiled with optimizations turned off.


C is not a "portable assembly language" even if you really, really want it to be.


I like the arguments here: https://blog.regehr.org/archives/1520

> Avoid repeating tired maxims like “C is a portable assembly language” and “trust the programmer.” Unfortunately, C and C++ are mostly taught the old way, as if programming in them isn’t like walking in a minefield.


The way most C compilers and ABIs use the stack kills it imo. But you could tweak a compiler to change that.


Why not?


Because there's no such thing as down-to-the-metal portable assembly.


C++ is?


It is not, even more so.


C is one of those languages that consistently looks fast, but often is slower than other languages. Without generics, it's difficult to have complex data structures for each type, so you end up with lots of pointer chasing and less than optimal data structures.


I want a C with templates (with support for C++20 concepts) and consteval, instead of a C with classes (C++)

I don't care about RAII, so I don't care about the ownership tracking that Rust provides. I like to think that memory management is actually part of the problem, rather than a responsibility for the language's runtime.

I want to generate hundreds of datastructures or algorithms at compile time that are optimal in specific situations using introspection of types.

I want everything else in the language to get out of the way and know that I'm writing code that actually runs on a real computer, not an abstract machine.


Every time I read this, I agree. And then I wonder why people don't do the obvious thing like I do and write C with templates in C++. It is a kitchen-sink multi-paradigm language so just embrace it.

Plus, inheriting abstract interface classes is more convenient than writing a vtable by hand when such a thing makes sense. You don't have to participate in Java-style class voodoo if you don't want to.

I am sure all the other languages mentioned like Rust, Zig, and D are very good too, but C++ is where all the high-performance libraries live, and whatever productivity gains I would get by switching languages is utterly dwarfed by that.


Any argument that starts with "I don't care about safety" is a bad idea.

We could use more safety even, like automatic race condition analysis in language and more.

Conservative extensions of C and C++ exist, and they're not very popular, just check how popular D is...


Sounds like you might like Zig.


Never gave it an honest look, maybe I should spend some time learning it


D is that language, especially with regard to code generation. Code generation and metaprogramming are what we obsess over.


I am afraid D slowly needs to decide what it wants to be when it grows old.


Can you please elaborate what kind of decision to make on D? It's a general purpose programming language with excellent metaprogramming capability.

As a comparison, has Python decided what it wanted to be? A scripting language like Perl, a data processing language like R, a tool command language like TCL, or a web backend programming language like PHP or RoR? But nobody in their right mind will ever say Python needs to decide on its direction when it grows old.

As you probably know, when Python was around 20 years old (like D today), it played second or third fiddle to PHP, TCL, Ruby (RoR), Perl and R in their respective domains, and at the time the growing pains of the move to Python 3 were yet to happen. But look where it is now.

Personally I think the D language foundation is already solid [1]. It just needs a killer application for it to be more popular and well-known, just like what RoR did for Ruby. And if you have a solid foundation, it will probably be just a matter of time for it to properly take off.

[1] https://dl.acm.org/doi/pdf/10.1145/3386323


An excellent language that keeps being rebooted with endless discussions on memory models, incomplete features, DIPs that were half implemented without documentation.

While they don't know where it should go, C#, C++, and Swift keep adopting D-like features, with their rich ecosystems.

D could have been what C# is on Unity, given Remedy's experience with the language, but instead it isn't.

D's metaprogramming was great 10 years ago, but not when placed against C++20 metaprogramming, or the features arriving in C++23.


Don't you think it's a good thing to have choices of managed memory and not, or mix them together when necessary? I think it's a paradigm shift in programming for the better.

Programming languages used to stick to one programming concept, for example imperative, functional or object oriented. But now most modern languages support two or more concepts to stay relevant.

Regarding other languages later adopting D-like features, sure, you can do that, but the end results will probably be sub-optimal and clunky. For example, Python adopted array processing in the Numpy library and became very useful and popular, but it will not be as seamless, intuitive and natural as R's or Fortran's array processing capabilities.


It is good to have choices, provided they are fully implemented and bug free, instead of jumping into the next possible solution.

Phobos still doesn't fully work with @nogc, DIP1000 is undocumented, and now there is @live getting the spotlight; meanwhile the GC is stuck in an early-2000s design, and the std.allocators library has been in experimental limbo for years.

Even if the results are clunky, they can double down on an ecosystem of libraries, IDE tooling and OS support that D lacks.

Currently the motto is Jack of all trades, master of none.


We are having a meeting to lay down a plan on memory safety in 2 hours time, so have faith.

Also DIP1000 isn't that badly documented these days, not brilliantly, but frankly I find that a lot of complaints come from people who weren't ready (this doesn't necessarily apply to you in particular, just that these attitudes spread) to understand it in the first place.

And what do you mean by OS support? I find that not many other languages take it as seriously as us?


Looking forward to the outcome.

Taking C# as example, I mean being able to do embedded stuff (Meadows, IoT Core), regular desktops, IBM mainframes, game consoles, and mobile OSes, without having to create my own compiler toolchain and self made druntime.


Like what? The company I work for has business logic, numerical code, a functional DSL, bindings to just about everything needed, fast and introspectable because they are under the same language.

Which features do you recommend we deprecate?


It is more like: finalize the unstable features, stop chasing other languages' memory models, fix type system holes, and no incomplete DIP implementations.


you can abuse the preprocessor to do efficient generic programming, e.g.:

https://github.com/attractivechaos/klib/blob/master/khash.h

https://github.com/attractivechaos/klib/blob/master/kvec.h

FWIW writing a generic vector like that ^^^ is going to be faster than almost any std::vector implementation you can find. And that is just one way to do it, there are other tricks.
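The general shape of that trick, as a minimal sketch rather than klib's actual API (error handling omitted):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stamp out a typed vector and its push function per element type. */
    #define DEFINE_VEC(name, type)                                      \
        typedef struct { type *data; size_t len, cap; } name;           \
        static void name##_push(name *v, type x) {                      \
            if (v->len == v->cap) {                                     \
                v->cap = v->cap ? v->cap * 2 : 8;                       \
                v->data = realloc(v->data, v->cap * sizeof(type));      \
            }                                                           \
            v->data[v->len++] = x;                                      \
        }

    DEFINE_VEC(int_vec, int)

    int main(void) {
        int_vec v = {0};
        int_vec_push(&v, 42);
        printf("%zu %d\n", v.len, v.data[0]);
        free(v.data);
        return 0;
    }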


Good luck writing a data structure more complex than a vector, like a B*-tree, with that approach.


How would writing a B*-tree be any harder?

The actual logic of a B*-tree would be equally hard in any language. The macroized template part is easy.


C's simplicity is entirely illusory. The language's ISO standard, printed out, weighs kilograms. It is easy to confuse the language's extreme limitations with simplicity.

Even if it were simple, programs coded in C get no benefit from that: the language's limitations mean that the program has to provide whatever the language does not.

Thus, a program in C++ will be much simpler than the corresponding C program because you get to lean on what C++ provides. And, let's face it, practically all the complexity we encounter is in program code. The less there is of it to read, the less there is to understand.


> would e.g. C++24 as a separate language based in C++21 with most of outdated stuff thrown out be as popular?

This seem to be arguing that C++ is accumulating cruft as it grows and it's bad, but I like the fact that C++ tries hard to maintain backward compatibility. The greatest asset of a language is all the stuff that is already written in it.

That said, some outdated stuff do get thrown out, such as trigraphs.


C syntax is overall simple. The only syntax that often confuses me is function typedefs.


C syntax isn't simple. Unlike most modern languages, which have a context-free grammar, it has a context-sensitive grammar. You can see it in the following example:

  #define a stdio
  
  #define header <a.h> // five tokens: <, a, ., h, > (otherwise preprocessor wouldn't substitute "a")
  
  #include header // one token after substitution: <stdio.h>
  
  int main() {
      struct {
          int h;
      } stdio;
      a.h = 0;
      printf("%d\n", stdio.h);
  }
What happened here is:

1. In #include directives a thing like <stdio.h> is one token; everywhere else (including other preprocessor directives) it is five tokens;

2. But an #include directive can have multiple tokens on the right hand side of "#include" before macro substitution. After macro substitution the strings are lexed again - into ONE token.

3. In main() "a.h" is substituted with "stdio.h" - this time it's THREE tokens.

(Oh, and writing "#include <a.h>" will produce an error, because here we have a single token <a.h>, so no "a" to be substituted, and normally there is no "a.h" header file in include path.)

So the same string gets lexed into a different number of tokens in different places; and that happens even after it got lexed first into the same number of tokens before macro substitution.

Now have an example of a C file that compiles with GCC, but not with Clang:

  #define CMPS x /*
                    */ <y>
  
  #define ALMOST_SHIFT<CMPS
  
  #define FWD_FST(x, y) x
  
  #define CONCAT(x, y) x ## y
  
  #define x
  # /*
       LOL
       */ define y stdio.h
  
  #include CMPS
  
  #undef x
  #undef y
  
  #include FWD_FST(<stdint.h>, y)
  
  // btw. comma is a valid character in a header name. but this will work anyway,
  // because lexing in C works in mysterious ways
  #include CONCAT(<std,def.h>)
  
  int main() {
      int x = 0;
      int y = 0;
      int z = 0;
      printf("%d\n", CMPS z);
      // printf("%d\n", z<ALMOST_SHIFT); // doesn't compile
      // but the rest works fine...
  }
So clearly the C syntax isn't simple. It's so not simple, that compiler writers can't agree on what exactly it is.

(I've been writing such programs for the purpose of using them as edgecases to test my C compiler.)

Then of course you have the type declaration fiasco: <https://blog.golang.org/declaration-syntax>

Then there is of course the awful ternary operator.

Oh, and I almost forgot: spaces are insignificant... unless you're in a "#define" directive. Talk about language consistency...

Those things not only make life hard for compiler writers, but also for language users, because, I don't know about you, but when I write a program the smallest unit that I use to think about its source code is the token. When I can't easily predict tokenization, then something went really wrong. Same with other syntax complications. Guy Steele once said that when he designs a language he verifies that its grammar is LL(1) with a parser generator, which helps make sure that it's not only easy to write the parser and that parsing can be very efficient, but also that it's easy to understand by humans. C fails here big time.


The thing that makes C have a context-sensitive grammar is parsing non-keyword type names (typedefs), not the preprocessor, which is a separate text transformation run before the C parser sees the text.

The specific problem is this has two parses:

   foo * bar;
It's either an expression multiplying foo by bar; or, if foo is a type, it's a declaration of a variable foo pointing to values of type bar.

The normal way this is handled is to inform the lexer of known type names (e.g. by letting it peek at the symbol table) whenever it parses an identifier, so it can produce a type name token instead.
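A minimal illustration of the two readings, with hypothetical names:

    /* With no typedef in scope, foo and bar are values: a multiplication
       (compilers will warn that the computed value is unused). */
    void multiplies(int foo, int bar) {
        foo * bar;
    }

    /* Once foo names a type, the same tokens declare a pointer variable. */
    typedef double foo;
    void declares(void) {
        foo * bar;   /* bar is a pointer to foo */
        (void)bar;
    }

    int main(void) { multiplies(6, 7); declares(); return 0; }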


This is another place where it's context-sensitive. To make everyone happy, I could rephrase my claim to "C preprocessor grammar is context-sensitive". This has practical implications for e.g. what gets substituted where by the preprocessor. And in my claim this context-sensitivity means that we get different sequences of tokens in different places for the same input. Your example shows that we get different ASTs for the same input but different type information, although we still get the same sequence of tokens. Getting a different sequence of tokens is much more confusing to me than getting a different AST (or maybe I just got used to that with a life of experience with C++).

But generally I disagree that preprocessor is separate from C for the following reasons:

1. It is specified by the C standard.

2. Different C compilers implement it differently, just like they implement other parts of C differently.

3. There is some truth in the claim that it is a separate text transformation, but here I also do not agree entirely, because the C standard doesn't say that the preprocessor outputs text. It says that it outputs tokens and those tokens are then converted to tokens of C proper[1]. It is also clear that this model of "text transformation" is also not entirely what happens in real world compilers, because if it did we wouldn't get good backtraces for tokens that went out of the preprocessor after substitution. Maybe GCC serializes it to text at some point, but clearly it gets more information out of it than what a naive interpretation of the phrase "text transformation" would suggest.

[1]: Translation phase 7 of the C11 standard:

> Each preprocessing token is converted into a token.


You seem to be talking about the preprocessor, not C itself, and I'm not sure that anything you say about it is true (the preprocessor is complicated). Certainly, your first example won't compile without warnings for any sensible GCC options.


A better example of C's context-sensitive grammar might be the expression "(T)*x", which parses differently depending on whether T names a type or a value.


That's a good example, but of a different thing. My examples show that the input is lexed differently based on what part of AST is being produced at a given point. Your example shows that input is parsed into different ASTs based on information from the type system. I.e my examples show that in C the lexer can't be separated from the parser, and your example shows that the parser can't be separated from the type system.


> You seem to be talking about the preprocessor, not C itself, and I'm not sure that anything you say about it is true (the preprocessor is complicated).

The preprocessor is part of the language. Both in literal sense and in real-world. All C programs use the preprocessor. You need to deal with it. Also, the grammar of the preprocessor is intertwined with C proper due to "#if" directives. The expression on the right hand side of an "#if" is parsed according to the grammar of C proper.

> I'm not sure that anything you say about it is true (the preprocessor is complicated).

You can run the examples under compilers and verify. Also, the part about it being context-sensitive (with regard to tokenization in #include directives vs everywhere else) is explicitly noted in the standard. Nothing controversial here.

> Certainly, your first example won't compile without warnings for any sensible GCC options.

Compiles just fine without any warnings with GCC with options -Wall -Wextra.


The constant expressions that #if understands are a very restricted subset of C. It's best thought of as a small grammar inlined into the preprocessor rather than intermingled with the C parser, since the expressions can only evaluate preprocessor definitions and not any C declarations.

I've written several C-style preprocessors. They can be fiddly to write, but you shouldn't think of them as lexing to the same lexemes as a C parser needs. You can take a lot of shortcuts there.

Things like '<' and '>' being overloaded is quite common, even in lexers. You see it with nested templates / generics in C++, Java, C# etc. There are simple tricks for dealing with the ambiguity between '>>' vs '>' '>' in 'T<U<V>>'.

(FWIW, having written some of those preprocessors, I think they're a bad idea. It's notable that other languages haven't copied the idea, other than C++ which inherited it.)


You forgot to support the assumption that

> Unlike most modern languages, which have a context-free grammar, it has a context-sensitive grammar.

Implies that C does not have simple syntax.

Is there a law a language with "simple syntax" cannot have context-sensitive grammar?


My last paragraph addresses that.


I agree with all of this... Except, what's so terrible about the ternary operator?


Its only reason for existence is that if-s in C are statements and not expressions. But an if and a ternary expression do the same thing. Making two different constructs to do one thing doesn't sound all that simple to me. And the standard has special treatment for parsing ternary expressions. It basically says that in any "a ? b : c" pretend that "b" is surrounded with parens, so that it's effectively "a ? ( b ) : c". Because specifying a grammar that has clear precedence was too hard.


Ok, thanks for explaining that to me; I understand and agree. For the most part I was just thinking of ?: for when you need an expression, which by itself is not the worst (though an expression-if, like in Zig or basically any functional PL, is much better). But I wasn't aware of the other issues.


Why? Essentially, a typedef only differs from a variable declaration by, well, having the word ‘typedef’ in front of it, functions or function pointers being no exception.
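A small sketch of both spellings, with hypothetical names:

    #include <stdio.h>

    /* Exactly like declaring a function, just with "typedef" in front: */
    typedef int op_fn(int);      /* a function type */
    typedef int (*op_ptr)(int);  /* the more common pointer-to-function form */

    static int twice(int x) { return 2 * x; }

    int main(void) {
        op_fn *f = twice;   /* pointer to a function of type op_fn */
        op_ptr g = twice;   /* the same thing via the pointer typedef */
        printf("%d %d\n", f(21), g(21));
        return 0;
    }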


Sorry, I should be more accurate. It is the use of function types.

In fact, I'm not confused by simple function types. It is just that sometimes complex function types are convoluted for me.


And ostensibly function pointers/function pointer calling.


One of the best parts!


Look at C++ as a scripting language and you possibly will not dislike it as much.


Agreed! I use C++ for small scripts/tools and love it. Granted, I first spent some time writing a library of convenience functions like starts_with(), ends_with(), contains(), get_file_contents(), read_csv_file(), etc. that so many languages have built into their standard libraries, but to be honest, even that part was a lot of fun!


CERN actually does that with CLING for interactive HEP data science.

https://root.cern/cling/


Thanks!

  $ brew install cling
  $ cling
   
  ****************** CLING ******************
  * Type C++ code and press enter to run it *
  *             Type .q to exit             *
  *******************************************
  [cling]$ #include <stdio.h>
  [cling]$ #include <sys/utsname.h>
  [cling]$ struct utsname u;
  [cling]$ uname(&u);
  [cling]$ printf("%s %s %s\n", u.sysname, u.release, u.machine);
  Darwin 20.5.0 arm64


You can even use it interactively like ipython and get rid of the printf, as it automatically evaluates expressions.

  [cling]$ int i=21
  (int) 21
  [cling]$ i*2
  (int) 42


Thanks! Is there a way to see strings nicely without printf?

  [cling]$ u.sysname
  (char [256]) "Darwin\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"


I imagine if you printed an actual string it would look as you expect; the issue is that you're printing a char array and not a C++ std::string/_view.


Indeed, this is the expected behavior with C arrays, regardless of their type.

The solutions are either to use a C++ std::string as you mentioned, or a typedef struct with a length property.


struct utsname is what it is. For me the appeal for something like cling would be doing quick experiments without any boilerplate.


Sure, but I’d expect the interpreter to have some smarts to do the right thing within reason.


But a char[] could be binary data for all the tool knows, do you want it to do a buffer overflow and end up bricking your computer? std::string etc. exist exactly because char[] is unsafe for holding strings unless you carry around a size.


There is a size in the example I posted above.

  [cling]$ u.sysname
    (char [256])


Same.

I should finish writing a big article titled "Why I still don't like C++" for my yet-unpublished blog...

Even if I tend to use orthodox C++ for convenience.

But I've been impressed lately by some newcomers, like Zig.


"And everybody essentially picks a subset of C++ and writes in it ignoring the existence of other features.".. very true in my experience.


But isn't this great? You choose the bits of C++ that makes sense for your project and run with it.


yeah.. I really don't see why everyone gets so bent about it. I can certainly understand the cognitive load of learning essentially all of the features so you can find the ones that work.. but really, picking a set of features that work (and ignoring possibly better techniques) seems to true for all languages. Java, Python, etc.. map.filter.reduce.. is better than for loops, but I still see those. C++ has a LOT of features and 40 years of both advancement in the language, as well as strong third-party library development.. so, compared to something like Go, it's a bit daunting to use.


That is not wrong at all. You use the language features which best suit the problem and/or which you are most familiar with.


Nobody knows C++. Nobody can know C++.

There are situations that you will not interpret correctly no matter how much time you spend learning the language.


Same for C, with its tokenization and undefined or implementation defined behavior.

It's a serious flaw in the language allowed because it makes writing optimizers and compilers easier.


Same applies to any programming language that gets widespread industry adoption, with several implementations and years of production code, including C.


This isn't true. Some languages are more complex than others and C++ is the Queen.

I've never run into a bug in Java that a look at the compiler or runtime error, the runtime stack, and some searching on StackExchange couldn't solve.

But I'm constantly fighting CMake, linker issues etc..

I avoid huge swaths of C++ because I consider them too risky to deploy without mastery.


I doubt very seriously that you would survive a Pub Quiz of Java Puzzles that includes the Java language and standard library up to release 17.

For bonus points we can include the Java implementations outside OpenJDK.


You're right. But I would learn any new Java thing in a few minutes because it's explained and documented.

I'm having difficulty compiling a Qt app right now due to complicated and arcane macro directives. The compiler / linker give me no information to work on, and there's no documentation.

It takes 2x as long to develop in C++; it's grinding.


I bet you also had to learn Java properly before having a go at it.

C++ is also explained and documented as much as Java is.

As for building, try to use Makefiles, Ant, Maven, Gradle when only being comfortable with one of them.

Or configure the builds across all major IDEs.


I found a buffer overrun bug causing incorrect results in a C dependency recently. I don't like C anymore.


You are lucky if you only encountered this recently. We might not like C, but C is not going anywhere.


C is, in fact, going away, in many, many places.

E.g. Gcc and Gdb are nowadays compiled with G++, and new code in them is C++. It is very easy to start this process in just about any C program, so it happens many times every day, all over the world, with little notice because the bump is unfelt.


That does not count as “going away”. Sure, newer functionality may be done in C++, but it is unlikely that all C code will disappear from the gcc and gdb codebases. The older versions of things written in C will still be around and remain in production.


The production of new C code is going away in many places that have a lot of old C code. In each project that converts to compiling with a C++ compiler, the new code is C++. Old C code hangs about, but is looked at less and less; soon, nobody wants to look at any of it. Parts get wholly rewritten in C++ when it seems like they might otherwise need much attention.

If it is meaningful at all to talk about a language going away, it is about a decline in writing new code in it. Obviously all the code written in badly obsolete languages still exists, somewhere, but the demand for people to write any more of it falls at an increasing rate.


Same here. C gives a lot of control, and makes it easy to map code to what the CPU will actually do.

If you like C, you should really try Zig. Same simplicity and control, but really nice improvements to make code easier to read and maintain.


They're both a huge pain in the ass. But it's easier to scratch C than C++.


Archive link since the site appears to be down (too much traffic I guess)

https://archive.is/E3GvJ


How often have I heard of people running into C++ compiler bugs. Oh the horror. This is what originally bit me in the 90s, and I never went back to C++ again.


Because only C++ compilers have bugs, of course.


Dlang is nice. It's essentially C++, but with the warts fixed. Using dpp[0], you can #include C headers into D, and use their complex macros.

[0]: https://code.dlang.org/packages/dpp


They didn't fix all the warts; D's compile times are just as bad or maybe even worse [0][1].

[0]: https://blog.thecybershadow.net/2018/11/18/d-compilation-is-...

[1]: https://forum.dlang.org/thread/pvseqkfkgaopsnhqecdb@forum.dl...


Unfortunately D is full of warts, as the community is small and they keep changing their mind about what D is supposed to be, while C#, C++, Java, Rust, and Swift adopt the features that made D special 10 years ago.


I've actually gone backwards from my original positions.

I used to think that using C++ was acceptable in some cases. I no longer do.

The fault isn't even C++ the language, per se. It's the ecosystem--or more appropriately--the lack thereof.

A lot of the modern languages integrate with C, properly. They integrate on Linux, Windows, OS X/macOS, iOS, Android, etc.

Nothing integrates with C++ well, and it's the fault on the C++ side. That means dumping C++. Perhaps the C++ compiler writers will finally have some incentive to fix their crap.


> That means dumping C++.

Or just integrate with C++ code by exposing it through C linkage with `extern "C" {}`. Sure, C is the de facto lingua franca of programming languages, but that doesn't seem like a pragmatic reason to choose C over a different language that may be better for the job. Also, C++ has a massive ecosystem.


> Or just integrate with C++ code by exposing it through C linkage with `extern "C" {}`.

Nope. Been there. Done that. Got the scars.

For example, what happens when an exception gets thrown on the other side of that "extern C"? Yeah, undefined.

The problem is that if you're using C++, you get the C++ machinery--allocators, exceptions, etc.--and you can't dodge it.


Except that the libc you are using was most likely implemented in C++ with extern "C", unless you are using a pre-historic C compiler.


Pretty sure that the C compiler writers, writing their C compiler in C++, probably don't feel the same way. Especially as they're also the same people as the C++ compiler writers.


This is actually a pretty deep idea, not to rely on a single language for implementation. I would go as far as suggesting to use as many DSLs as possible, to make sure that each component or level of abstraction is adequately expressed.


The C compiler people and the C++ compiler people are the same people.


If you can integrate something with C, you can do the same with C++.


>Nothing integrates with C++ well, and it's the fault on the C++ side

Not even wrong!



