Hacker News

C is the best choice if you want all of: small (both language and binaries), fast (both compiler and binaries), obvious (no/minimal complex magic), close to the metal, with excellent debugging support, portability and integrations.

No other language has been battle tested for longer and more extensively than C.

Your kernels, OSes, drivers, databases, web servers and compilers are written in C.

If some of these features are not important to you, there are hundreds of slower, more complex and less portable languages to choose from, that provide other benefits instead, such as more convenience, more correctness and higher level abstractions.



C isn’t that small; compare it to a Zig hello world. Fast is relative: since C doesn’t have good expressive/abstracting power, it will leave you with inferior solutions, e.g. counting string length multiple times at call sites, vs. C++’s small string optimization, which is simply not possible in a user-ergonomic way in C. Regarding obviousness, I would add UB here, so Zig for example would beat it.
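As a concrete sketch of the call-site length-counting problem (my own illustration, not from the comment): the classic accidentally-quadratic loop, where `strlen` rescans the whole string on every iteration because nothing carries the length for you.

```c
#include <string.h>
#include <stddef.h>

/* O(n^2): strlen() walks the entire string on every loop iteration. */
size_t count_spaces_slow(const char *s) {
    size_t n = 0;
    for (size_t i = 0; i < strlen(s); i++)  /* strlen called each pass */
        if (s[i] == ' ') n++;
    return n;
}

/* O(n): measure once, or carry the length alongside the pointer. */
size_t count_spaces_fast(const char *s, size_t len) {
    size_t n = 0;
    for (size_t i = 0; i < len; i++)
        if (s[i] == ' ') n++;
    return n;
}
```

A length-carrying string type (as in C++ or Rust) makes the fast version the default rather than something each call site must remember to do.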

C is not any closer to the metal than other systems-level programming languages; this myth should just die. Your code at -O3 gets mangled to oblivion, and unless you are a writer of the respective compiler, you will have no idea what the generated code looks like. This is exactly the same as with C++, Rust, etc. Hell, C doesn’t even have proper SIMD support, so in a way the former two are closer to the metal.

But I do agree on portability and integrations, so C will not die. Still, I have a hard time seeing why I should choose C over any of the listed languages, unless I’m targeting some obscure CPU architecture.


Are you going to mix all those languages into one project and somehow use their advantages but steer away from their disadvantages?

Sure, other languages have caught up or have improved on some of the features where C shines.

Let's remove portability and integration from the feature list, because that's strongly related to C's tenure.

Which one of the languages you listed matches the rest of the feature set I brought up?

  * Small language
  * Fast compile times
  * Fast binaries
  * Small binaries
  * Great debugging experience
  * Close to the metal
In my opinion, they all fail in at least one category, and that's expected - additional functionality can't come for free. You can only accept its cost.


And they fail in other categories too, e.g. C can be very terse whereas most modern so-called replacements tend towards verbose. For me that's a development exactly in the wrong direction; I'd rather have a more terse C.


Zig? I personally haven't used it, but based on what I've heard about it, it fulfills these criteria.


That’s what I am betting on - but it isn’t 1.0 yet, and that matters too.


In C it's possible (and, to some level, very much encouraged) to write your own things to replace whatever built in thing you don't like for whatever reason, be it speed, size, portability.

C not having fancy built-in structures means programmers are more careful about choosing simple ones, which dramatically cuts down the amount of completely pointless generated code. -O3 can optimize registers and maybe memory reads/writes, but it can't remove reallocation from each call to a vector push, remove a reference count from a whole object type, or do pretty much anything with any moderately complex heap data structure. Sure, that comes at the cost of more advanced things being horrible to write, but it's a trade-off.

And I'll just disagree that UB must be unobvious. To me, it's one of the most useful optimization tools. You can check for it with sanitizers, and explicitly invoke it to convey information about hidden behavior to a compiler. It's not magic.
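A sketch of the "explicitly invoke UB to convey information" idea, using GCC/Clang's `__builtin_unreachable()` (the function and its values here are invented for illustration):

```c
/* The default branch promises the compiler that kind is always 0..2.
 * It can then emit a jump table or fall-through with no range check.
 * Breaking the promise is undefined behavior, which UBSan
 * (-fsanitize=undefined) can trap in debug builds. */
int describe(int kind) {
    switch (kind) {
    case 0: return 100;
    case 1: return 200;
    case 2: return 300;
    default: __builtin_unreachable();  /* caller's promise, our UB */
    }
}
```

The same pattern works for asserting pointer alignment or loop-bound invariants the compiler can't prove on its own.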


> C not having fancy built-in structures means programmers are more careful about choosing simple ones

Resulting in linked lists everywhere, which have pretty terrible performance characteristics. You should be worried about those way before the occasional vector push reallocation causes you any problem. (And you can specify the initial capacity, so there is that.)


Linked lists: true, but at least when you write them you'll be certain of what they do and, coupled with knowledge of cache behavior, precisely how bad they are. Regarding reallocation, I don't even mean the copying, but just the code needing to check for possible overflow on every push. Besides the obvious waste of instructions on the check, it also clobbers registers even if the branch isn't taken, and thus easily results in a lot more unnecessary spilling to the stack. This kind of pointless thing happens in a ton of data structures.
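A minimal sketch of the push-site check being described (the type, names and growth policy are my own, not from the comment):

```c
#include <stdlib.h>

typedef struct { int *data; size_t len, cap; } vec;

/* Every push pays for this branch, even when it is never taken: the
 * potential realloc() call forces the compiler to keep live values in
 * callee-saved registers, or spill them around the push. */
int vec_push(vec *v, int x) {
    if (v->len == v->cap) {                      /* rare, always checked */
        size_t ncap = v->cap ? v->cap * 2 : 8;
        int *p = realloc(v->data, ncap * sizeof *p);
        if (!p) return -1;
        v->data = p;
        v->cap  = ncap;
    }
    v->data[v->len++] = x;
    return 0;
}

/* A buffer sized up front has no branch in the loop at all. */
void fill_fixed(int *buf, size_t n) {
    for (size_t i = 0; i < n; i++)
        buf[i] = (int)i;     /* straight-line code, easily vectorized */
}
```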


Branch prediction exists so the common case can be made very fast (much faster than a jump to some remote pointer), and Rust uses an optimizing compiler for a reason. E.g., if you use a vector inside a for loop, the vector's code can be inlined and the checking done only at the boundary. Also, one can manually perform an unchecked push/get/whatever as well.


The common case may be fast, but it's still slower than not having a rare case in the first place. The untaken branch still affects register allocation (of the ~15 x86-64 general-purpose registers, only 5 (6 if you count rbp) preserve their data across a non-inlined function call in the System V ABI, of which there will be at least one, ending in malloc somewhere, and the compiler has to accommodate both the taken and untaken paths at the branch's join point) and will eliminate any hope of SIMD vectorization, among other wrecked optimizations.

Unchecked operations are acceptable if your code has a single hot loop, but if you have a hundred small functions, each taking 1% of the time, you probably won't carefully examine every stdlib function each of them uses and write code to "work around" every unnecessary thing the stdlib does.

Yes, that's a ton of micro-optimization, but micro-optimization can bring a ton of speedup, so I'll take whatever makes it simpler (or not needed in the case where you already know what will happen due to having written it)


Feel free to wrap it into a repr(transparent) type which exposes the unsafe operations by default.

But I seriously doubt that programs would benefit much from these micro-optimizations. There are rare and unfortunate cases where there really is no single bottleneck and thus no simple way to improve performance, but the vast majority of programs spend their whole life in a tight hot loop, and nothing else matters in the slightest. Hell, as mentioned, C gets away with as many linked data structures as it wants, but writing those in goddamn bash would suffice as well.


That depends on the project. There definitely are projects in which micro-optimizations such as this wouldn't be beneficial for one reason or another, but there definitely also are ones where they're make-or-break. For those, C is pretty good, and you're not fighting against the language (or the core intent of the language at least).

(I'd guesstimate that micro-optimizations would go a very long way in improving performance of nearly every program anywhere, but I don't have much data on that besides the couple projects where I've tried to care about performance, and having achieved pretty decent results)


I don't think I've ever used a linked list in 12 years of embedded C programming.


>due to C not having good expressive/abstracting powers

I disagree. If anything, C's scantness forces you to abstract things more properly, unless you plan to write pages of boilerplate code here and there.


With all due respect, how do you write a string library in C then? char* is not one.


An appropriate data type (a struct, say) and functions for the common operations: concat, length, etc...

It's not as trivial as "string s" in other languages, but it's also not that big of a deal. And once you've made sure it works properly, you just use it anywhere you want.
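A minimal sketch of such a type, assuming a heap-allocated, length-carrying struct (all names here are invented for illustration, and error handling is elided):

```c
#include <stdlib.h>
#include <string.h>

/* Keeping len alongside the bytes avoids rescanning for the NUL the
 * way raw char* code does; the NUL is kept for char* interop. */
typedef struct {
    char  *data;
    size_t len;
} str;

str str_from(const char *cstr) {
    str s;
    s.len  = strlen(cstr);
    s.data = malloc(s.len + 1);
    memcpy(s.data, cstr, s.len + 1);   /* copies the NUL too */
    return s;
}

str str_concat(str a, str b) {
    str s;
    s.len  = a.len + b.len;
    s.data = malloc(s.len + 1);
    memcpy(s.data, a.data, a.len);
    memcpy(s.data + a.len, b.data, b.len + 1);
    return s;
}

void str_free(str s) { free(s.data); }
```

Length queries become O(1) field reads, at the cost of having to route every mutation through your own functions.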


Languages exist that compile to C which gives them many of the above advantages and more, so why choose C over them?


Control over the code that actually runs.

We already lose quite a bit of control with C (e.g. it reorders your code as it pleases) to gain portability across CPUs, otherwise we'd have to rewrite the code in several assembly languages.

If you consciously choose to give up even more control over the code that runs (because a compiler that targets C necessarily adds another layer of autogenerated code that you don't control) then it better be a wise tradeoff.


The downside of reordering being performance? Or something else?


Performance is usually the upside. The downside is subtle bugs that can happen due to side effects of the statements being reordered between sequence points.
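A hedged illustration of the kind of sequence-point bug meant here (the snippet is mine, not from the thread): the commented-out line is undefined, the split version is well-defined.

```c
int demo(void) {
    int i = 0;
    int a[4] = {0};

    /* a[i] = i++;   <- undefined: the read of i for a[i] is unsequenced
     * relative to the side effect of i++, so the compiler may order
     * them either way, and different compilers (or flags) really do. */

    a[i] = i;        /* well-defined: the increment happens after */
    i++;             /* a full sequence point                     */

    return a[0] * 10 + i;
}
```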


AFAIK compilers don't reorder if it would change the result.


Compilers are not allowed to violate the language spec when optimizing. But the spec may be fairly generous in its allowed interpretations, which may not match what a programmer naively expects. C, with its many undefined and implementation-defined behaviors, is especially dangerous.


Having never used such a language, I'm curious, what's the debugging experience like? Can I source level step the application in the original language or do I have to debug the generated C?


You can emit #line directives that point e.g. gcc from "foo.gen.c" back to "foo.fancylang". However, it's not very useful unless the fancy language matches C's structs and native types and also doesn't mangle names.
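For illustration, generated C with such directives might look like this (the file name and `fancy_add` function are invented): the directives make compiler diagnostics and debuggers report positions in the original source rather than the generated file.

```c
/* Sketch of transpiler output. #line resets the reported file/line,
 * so a breakpoint or warning lands in "foo.fancylang", not here. */
#line 12 "foo.fancylang"
int fancy_add(int a, int b) {
#line 13 "foo.fancylang"
    return a + b;
}
```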


It is always a pain to debug transpiled C IME.


One of the criteria is "excellent debugging support" which you don't get with transpiled languages, since you'll be buried in generated code when you open the debugger.


Languages like Eiffel prove this isn't the case, it is a matter of proper debug tooling.


Of course, Eiffel is such a popular and widely used language, with a great toolset. /s


The number of users doesn't change the quality of the tooling, but no wonder: UNIX folks tend to only take free beer, so what do they know about fine wine.


I've read the Eiffel books and tried to use the Eiffel tools on Windows: a horrid experience. I am not a "UNIX folk", and I can't imagine what comparing UNIX and Eiffel has to do with anything. It is basically an unusable programming language and toolset, promoted by an egomaniac with a grudge against C++.

If you disagree, please provide a link to something medium-sized and useful written in Eiffel.


Useful to whom?

To you or the companies that keep Eiffel Software in business for 30 years?


> the companies that keep Eiffel Software in business for 30 years?

Such as? If you can't provide links to people using Eiffel successfully, then maybe there are none.


I can provide them, but you'll apparently ignore them anyway, since clearly a company can exist for 30 years without money.

Here is one, just to keep you happy.

https://washingtontechnology.com/2003/06/tech-success-xontec...

Yes, the article is from 2003, then again you haven't asked for dates, but to make it easier on you, you can ask them how they feel about Eiffel in 2022.

https://www.xontech.md/en/


Depending on the transpiler (a compiler that compiles to C), you can get a near-direct representation in C (rarely) or something completely different (frequently). In the Scheme/Lisp world (where transpilers were popular for a while), even a simple expression like "(+ 1 1)" would rarely give you something like "int a = 1 + 1;". I've seen these things produce dozens of lines of boxing/unboxing calls, type checks and GC protection marks. If the compiler is really aggressive, it will just emit "2" as a constant.
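A rough sketch of what such boxed output can look like (every name here is invented; real Scheme transpilers emit far more, including GC bookkeeping):

```c
#include <stdlib.h>

/* Naive runtime representation a transpiler might target: every value
 * is a heap-allocated, tagged box, and every operation type-checks. */
typedef enum { TAG_INT } tag_t;
typedef struct { tag_t tag; long val; } obj;

obj *box_int(long v) {
    obj *o = malloc(sizeof *o);
    o->tag = TAG_INT;
    o->val = v;
    return o;
}

obj *scm_add(obj *a, obj *b) {
    if (a->tag != TAG_INT || b->tag != TAG_INT)
        abort();                        /* runtime type check */
    return box_int(a->val + b->val);
}

/* (+ 1 1)  ==>  scm_add(box_int(1), box_int(1))
 * instead of the "int a = 1 + 1;" a human would write. */
```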

I think that the only benefit of "compiling to C" was reusing existing compilers, not tooling around it. Also, debugging that code is a nightmare unless you are intimate with transpiler internals.


A compiler that translates to another language used to be called a translator.


All compilers convert programs from one language (say C) into another (say assembler or machine code), and they are called compilers. The whole transpiler thing is a bit bogus, but a C compiler has never, IME, been called a translator.


C certainly has been battle tested but I wouldn't say that it passed with flying colours. Every one of the examples that you cite has had bugs/crashes/vulnerabilities that would have been avoided with newer languages.



