
Very few people can read and understand C without being a language lawyer and knowing tons of esoteric magic. There are many things in C which look perfectly reasonable but which actually result in undefined behavior, and you cannot reason about what a program will do in the presence of undefined behavior.

Further, C makes it impractically difficult to handle things that should be simple. Like, uh, errors and resource cleanup. The inability to have something as trivial as a scope guard that works the way a human being would expect it to work ensures that under no circumstances will I ever write a piece of C that anything I care about relies upon. I know I'm not good enough to write perfect C, and imperfect C is disastrous. (The fraction of programmers actually good enough to write perfect C is vanishingly small compared to the fraction who think they are. And try. And fail. And hurt other people in the process.)
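For readers who haven't seen one, a scope guard is roughly this. A minimal C++ sketch; `ScopeGuard` and `copy_file` are illustrative names, not any library's API:

```cpp
#include <cstdio>
#include <cstddef>
#include <functional>
#include <utility>

// A minimal scope guard: runs its callback when the enclosing scope exits,
// no matter which return path (or exception) takes us out of the scope.
class ScopeGuard {
    std::function<void()> cleanup_;
public:
    explicit ScopeGuard(std::function<void()> f) : cleanup_(std::move(f)) {}
    ~ScopeGuard() { if (cleanup_) cleanup_(); }
    void dismiss() { cleanup_ = nullptr; }  // cancel the cleanup if needed
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

int copy_file(const char* src, const char* dst) {
    std::FILE* in = std::fopen(src, "rb");
    if (!in) return -1;
    ScopeGuard close_in([&]{ std::fclose(in); });   // runs on every exit path

    std::FILE* out = std::fopen(dst, "wb");
    if (!out) return -1;                            // `in` still gets closed
    ScopeGuard close_out([&]{ std::fclose(out); });

    char buf[4096];
    std::size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, in)) > 0)
        if (std::fwrite(buf, 1, n, out) != n) return -1;
    return 0;
}
```

The point of the complaint above: nothing in standard C can attach `close_in`'s behavior to scope exit, so every early `return` has to repeat the cleanup by hand.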

But there is a difference between using C in crazy ways, and using 20 features of a language in ways that make them interact.

The more features a language has, the more I have to know just to read someone's code or be productive in a company - and the more chance someone doesn't know about potential harmful interactions and side effects.

> there is a difference between using C in crazy ways

As per my sibling, write me something as straightforward as a C++ RAII-using dtor in C without "using C in crazy ways". I will not hold my breath. The cost of a very minimal subset of C++ is literally zero, and yet that subset significantly improves the likelihood of your code actually working. You're picking up social baggage in either case: either understanding the subset of C++ that your team is using, or expecting all of your developers to be perfect where C++ (and other languages--shouts, Rust!) just does it for you, correctly.

Technology is socially interesting in that incompleteness and inexpressivity are so often misread as "elegance" or "minimalism". That a language is "simple" is not a feature when it offloads all of the danger onto the developer (who is almost invariably failing to consider critical information at the worst possible time--and I include myself at the forefront of that characterization!). This is why we have tools that compensate for the most common, and most destructive, of our mistakes.

(This post should not be construed as any particular endorsement of C++. C++ is a gong show when used improperly. But at least it's possible for a mere mortal to use it properly.)

The reason I dislike C++ (since about 2000 as well) is that while it is possible to shackle yourself to only using C++ RAII etc., the vast majority of code in the wild (and in libraries, etc.) does not. It does it 90% of the time, but that doesn't get you 90% of the benefit (maybe 50%, maybe 0%, depending on your point of view).

You can do essentially all the "C in crazy ways" in C++ as well, and people do. In my opinion and experience, it isn't what the language provides, it is how it is used in practice - and again from my experience (YMMV), C is used sanely and C++ is not.

I think the profusion of use-after-free bugs and memory leaks in C code running all over the place should give the lie to this. And, further, modern C++ allows you to firewall off the damage of bad C++ and most C, when you are forced to interact with it, via unique_ptr and your own enforced RAII. (And the idea that using this is so pejoratively "shackling" is bonkers to me; being "shackled" to the use of railings on walkways over a pit of fulminating acid is just the worst.)
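A sketch of what that firewalling looks like in practice. `std::unique_ptr` with a custom deleter is standard C++11; `FileCloser`, `FilePtr`, and `open_file` are illustrative names:

```cpp
#include <cstdio>
#include <memory>

// Wrapping a C API (stdio) so that the resource cannot leak: the deleter
// type is part of the pointer type, so every FILE* owned by a FilePtr is
// fclose'd exactly once, on every exit path.
struct FileCloser {
    void operator()(std::FILE* f) const { std::fclose(f); }
};
using FilePtr = std::unique_ptr<std::FILE, FileCloser>;

FilePtr open_file(const char* path, const char* mode) {
    return FilePtr(std::fopen(path, mode));  // null on failure, like fopen
}
```

Callers test the result like a plain pointer (`if (!fp) ...`), but can no longer forget the `fclose`.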

The idea that people use C "sanely" more often than C++ (or, you know, something actually good--further shouts, Rust!) doesn't pass the smell test. Are you checking the retval of every sprintf? Are you writing goto drops in every method where there's allocated memory, diligently checking every error code, and properly bailing out, every time? If so, you're that one percent. But you're probably not. And that's not a slight--I'm not, either. That's why I am proud to count myself as a member of a tool-using species, because we build (and have built) better tools to compensate for our flaws.
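The "goto drops" discipline being described looks like this. A sketch in plain C style (it compiles as C++ too); all names are made up:

```cpp
#include <cstdlib>
#include <cstring>

// Every allocation gets a matching cleanup label, every failure point is
// checked, and every error path jumps to exactly the right label so that
// earlier allocations are released. Miss one check, once, and you leak.
int make_pair_buffers(char** a_out, char** b_out, std::size_t n) {
    int rc = -1;
    char* a = (char*)std::malloc(n);
    char* b = nullptr;
    if (!a) goto done;
    b = (char*)std::malloc(n);
    if (!b) goto drop_a;
    std::memset(a, 0, n);
    std::memset(b, 0, n);
    *a_out = a;
    *b_out = b;
    return 0;
drop_a:
    std::free(a);
done:
    return rc;
}
```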

"shackle" was a bad choice of word, I agree. I meant to say that you might be disciplined enough to never use anything except RAII, but most (users/libraries) aren't.

> If so, you're that one percent.

I probably am that one percent (and I don't use sprintf, because there is no sane way to use it). When I say people use C "sanely", I do not mean that they never err in any way, far from it. My observation (your mileage obviously varies) is only that C code bases that I tend to use and meet (e.g. x264, ffmpeg/libav, the Linux kernel, the Nim compiler) tend to be saner than C++ code bases that I tend to use and meet (e.g. libzmq, although that one has improved dramatically since 4.0 and is now almost sane; boost; the STL).

I admit that I have not yet met a C++11 codebase with lambdas - that might have restored sanity. But even if it does, it does not retroactively bestow that goodness on the millions of lines of code already out there.

I stress again - I am not passing judgement on the language, but on how it is used in practice, through my own sample set. If I work on a project in which I can dictate the exact subset, choose the people, etc., I might pick C++. But in most projects I'm involved in, the constraints are dictated in some way or other that makes C at least as good a choice as C++ (and often a better one).

I completely agree and would, most of the time, choose C over C++!

Except that I have seen C++11 codebases that heavily relied on lambdas. And that was far from sane. I am very familiar with lambdas from pure functional languages and partially functional ones (Python, ...). But all the syntax specifics, brackets, etc. in C++ made it a very annoying process to understand what was even going on at all.

Using short functions instead would have meant more lines of code. But at least I would have known right away what's happening in the code.

And we could already take advantage of many of those positive features back in ARM C++ days (C++ as described in "The Annotated C++ Reference Manual").

Hence my eventual Turbo Pascal -> C++ transition, with a very, very short stop in C.

I was lucky that our technical school also had access to Turbo C++ 1.0 for MS-DOS in its software library, as I did not see why I should downgrade myself from Turbo Pascal 6.0 to C.

Already in those days of "The C++ Report", and when "The C/C++ Users Journal" was still called "The C Users Journal", there were articles on how to put C++ features to good use for increased safety.

And this is a major culture gap between C and C++ that I have observed since then: yes, we also get the performance-obsessed ones, but there are also the security-minded ones.

I seldom see talks about how to increase type safety in C. The security annex (Annex K, the _s functions) is a joke, as it keeps pointers and sizes separate, and it is so highly regarded that it ended up merely optional in C11.
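The separation being objected to is visible in the Annex K signatures themselves; a sketch contrasting it with a hypothetical size-carrying type (`Buffer` and `buffer_copy` are made-up names, not part of any standard):

```cpp
#include <cstddef>
#include <cstring>

// Annex K's _s functions still pass the pointer and its size as two
// independent arguments, e.g. (C11 Annex K):
//   errno_t memcpy_s(void *restrict s1, rsize_t s1max,
//                    const void *restrict s2, rsize_t n);
// Nothing ties s1max to s1, so a caller can still pass the wrong size.
// A type that carries its own length cannot be mismatched:
struct Buffer {
    unsigned char* data;
    std::size_t len;
};

// Copies at most dst.len bytes; the size travels with the pointer.
inline std::size_t buffer_copy(Buffer dst, const Buffer src) {
    std::size_t n = src.len < dst.len ? src.len : dst.len;
    std::memcpy(dst.data, src.data, n);
    return n;
}
```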

The C++ community, on the other hand, improves the standard library to decrease the unsafe use cases, promotes guidelines, and is trying to reduce the number of UB cases.

This is a really interesting notion. I'd love to follow up with you offline. Drop me an email?

So that's easy:

  #define NEW(t, args...) ((t*)malloc(sizeof(t)) && t ## _construct(args))

  #define DELETE(t) (t ## _destruct() && !free(t) && (t = NULL))

This isn't sufficient unless you consciously error-check and goto-trap every single failure point in your code. Which you might do. But you'd be the literal one percent. If not the literal one permille.

In DELETE(), is t the variable name or the type name? It is used as both, which won't work.
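For what it's worth, one way the macro pair could be made well-formed is to take the type and the variable as separate arguments and sequence the allocation before the constructor call. A sketch only; `Point` and its `_construct`/`_destruct` are made-up names, and this `NEW` requires at least one constructor argument:

```cpp
#include <cstdlib>

// NEW(type, var, ctor args...): allocates, then constructs in place.
// Evaluates to false (and skips the constructor) if malloc fails.
#define NEW(type, var, ...) \
    (((var) = (type*)std::malloc(sizeof(type))) != NULL && \
     (type##_construct((var), __VA_ARGS__), 1))

// DELETE(type, var): destructs, frees, and nulls the pointer; safe on NULL.
#define DELETE(type, var) \
    do { if (var) { type##_destruct(var); std::free(var); (var) = NULL; } } while (0)

typedef struct Point { int x, y; } Point;
static void Point_construct(Point* p, int x, int y) { p->x = x; p->y = y; }
static void Point_destruct(Point* p) { (void)p; }  // nothing to release here
```

Even fixed, this only restores C++'s `new`/`delete` expressions; nothing calls `DELETE` for you on scope exit, which was the original complaint.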

Yes, RAII and exceptions (as well as a few other things, like being able to declare variables anywhere) are the reason why I liked C++ in 2000 more than C.

But beyond that, it all went downhill. There was no reason to make C++ the most bloated language ever.

The same is true of JavaScript etc. Newbies will not be able to just start now, because they will have to think about let/var/const the same way that in C++ they have to think about all the possible casts, copy-constructor vs. casting semantics, etc.

Tell me this, for example: does the following use a cast or a copy constructor?


"C++ has optional and complicated things that incur costs only when you use them, so let's not even have dtors" is...not a good look, I think.

Gotta love those non virtual destructors which only clean up the base class. And the ambiguity about whether you're going to call a type conversion operator or a copy constructor when assigning A a = b.

Because you know, it's all super clear.
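The `A a = b` ambiguity described above is easy to reproduce with two illustrative types, one supplying a conversion operator and the other a converting constructor:

```cpp
// B knows how to convert itself to A, and A also knows how to construct
// itself from B -- so copy-initialization from b has two candidates.
struct A;
struct B {
    operator A() const;        // conversion operator
};
struct A {
    int via = 0;
    A() = default;
    A(const B&) : via(1) {}    // converting constructor
};
inline B::operator A() const { A a; a.via = 2; return a; }

// A a = b;  // error: conversion from 'B' to 'A' is ambiguous --
//           // both user-defined conversions apply equally well.
```

Direct-initialization `A a(b);` does compile (the constructor wins, since it needs no user-defined conversion), which is exactly the kind of distinction the comment is mocking.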

Who said anything about inheritance? Like, at all? That is yet another feature you pay for only when you adopt it. You can pick one of two problems: you can agree upon a subset of C++ to use or you can expect your C developers to be perfect all of the time (while having fewer, if any, ways to express correctness). Dragging out additional things that are not part of your subset of C++ doesn't help formulate an argument against this.

I agree about C needing memory management and exceptions. That's why I said I liked C++ when that's all there was. Classes were nice syntactic sugar.

But the language jumped the shark starting with C++0x.

C++0x adds nothing you are obligated to use, though. No sharks have been jumped unless you choose to strap on some skis and rent a boat. (Meanwhile, with move semantics and unique_ptrs, C++14 is actually way nicer as far as memory management goes than that 2000-era C++ was.)
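A sketch of what that looks like; `Widget`, `make_widget`, and `collect` are illustrative names, and `std::make_unique` is the C++14 piece:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Ownership is explicit in the types, transfers are spelled std::move,
// and nothing here can leak or double-free -- no delete in sight.
struct Widget { int id; };

std::unique_ptr<Widget> make_widget(int id) {
    return std::make_unique<Widget>(Widget{id});  // C++14
}

std::vector<std::unique_ptr<Widget>> collect() {
    std::vector<std::unique_ptr<Widget>> v;
    auto w = make_widget(1);
    v.push_back(std::move(w));    // ownership moves into the vector
    v.push_back(make_widget(2));  // temporaries move automatically
    return v;                     // the whole vector moves out, no copies
}
```

In 2000-era C++ the same code would have forced either raw `new`/`delete` or the notoriously broken `std::auto_ptr`.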

Being a language geek with a major in compiler design and systems programming, I tend to be a bit of a language lawyer.

I seldom meet C developers who are able to distinguish between ANSI C and "my compiler's C" - most extrapolate from "my compiler's C, version Y" how the language should behave.

Then they port the code to "my compiler's C, version Y + 1", or to another compiler vendor; their code gets broken; they blame the compiler, only to find out that the code was already broken from the ANSI C point of view.

So is your compiler ANSI C or not? C89 or C99?

I am writing about C developers' knowledge about the language.

> But there is a difference between using C in crazy ways

Is signed arithmetic using C in crazy ways? Is shifting by an arbitrary value without checking to make sure that the number of bits is in the valid range of the type using C in crazy ways?

Undefined behavior is everywhere in C.
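Both cases named above compile without a peep; a sketch of the defensive check a C programmer has to write by hand (`shift_checked` is an illustrative helper, not a standard function):

```cpp
#include <climits>

// Both of these compile cleanly and are undefined behavior in C and C++:
//
//   int overflow = INT_MAX + 1;  // signed overflow: undefined
//   int shifted  = 1 << 100;     // shift >= width of the type: undefined
//
// The range check the language refuses to do for you:
unsigned shift_checked(unsigned value, unsigned amount) {
    // shifting by >= the bit width is undefined, so handle it explicitly
    if (amount >= sizeof(unsigned) * CHAR_BIT) return 0;
    return value << amount;
}
```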

And all of those are inherited by C++. Thankfully C++ has a large assortment of additional footguns to help combat the previous footguns. It also has armored, self-detonating shoes.

In my experience, some people are too attached to that good feeling of knowing "all there is to know" about something; it has the taste of proficiency, but the very real risk is preferring the small pond for that reason, and calling the sea "overextended".
