Many people still have the mistaken belief that C is trivial to map to assembly instructions, and thus has an advantage over C++ and Rust in areas where understanding the generated code matters - but in practice that importance is overstated, and modern C compilers are so capable at high optimisation levels that many C developers would be surprised at what was produced if they looked much further than small snippets.
Like half the point of high-level systems languages is to be able to express the _effects_ of a program and let a compiler work out how to implement that efficiently (C++ famously calls this the as-if rule: the compiler can do just about anything to optimise, so long as the program behaves, in terms of observable effects, as if the optimisation hadn't been performed - C works the same way). I don't think there are really any areas left, from a language perspective, where C is more capable at that than C++ or Rust. If the produced code must work in a very specific way, then in all cases you'll need to drop into assembly.
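To make the as-if rule concrete, here's a minimal sketch (in Rust, but the principle is identical in C and C++): the two functions below have identical observable behavior, so an optimizer is free to compile the loop into the closed form - and LLVM commonly does exactly that at `-O`.

```rust
// Sketch of the as-if rule: both functions produce the same observable
// result, so the compiler may replace the loop with the closed form.
// Which instructions produced the result is not part of the contract.
fn sum_loop(n: u64) -> u64 {
    let mut total = 0;
    for i in 1..=n {
        total += i;
    }
    total
}

fn sum_closed_form(n: u64) -> u64 {
    n * (n + 1) / 2
}

fn main() {
    // Observable effect (the printed value) is all the language promises.
    assert_eq!(sum_loop(1000), sum_closed_form(1000));
    println!("sum: {}", sum_loop(1000));
}
```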
The thing Rust really still lacks is maturity in embedded settings. By that I mostly mean that toolchains for embedded targets are fiddly to use (or nonexistent), and that some useful abstractions don't yet exist for safe Rust in those settings (though it's not like those exist in C to begin with).
Often the stronger type system of C++ means that if you take C code and compile it with a C++ compiler, it will run faster. Part of the reason it's faster is that C++ allows the compiler to make assumptions that might be false, so there is a (very small, IMHO) chance that your code will be wrong after those optimizations. C++ also has better abstractions which, if you use them, allow C++ to be faster than C can be.
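The classic example of "better abstractions can be faster" is C's `qsort`, which takes its comparator through a function pointer the compiler usually can't see through, versus a generic sort that is monomorphized per comparator so the comparison inlines away. A sketch of the same contrast in Rust:

```rust
// C-style: comparator behind a function pointer, opaque to the inliner,
// so every comparison is an indirect call (like qsort).
fn cmp_fn(a: &i32, b: &i32) -> std::cmp::Ordering {
    a.cmp(b)
}

fn main() {
    let mut v1 = vec![3, 1, 2];
    let mut v2 = v1.clone();

    // Indirect call per comparison, as with C's qsort.
    let fp: fn(&i32, &i32) -> std::cmp::Ordering = cmp_fn;
    v1.sort_by(fp);

    // Generic closure: sort_by is monomorphized for this closure type,
    // so the comparison typically inlines into the sort loop.
    v2.sort_by(|a, b| a.cmp(b));

    assert_eq!(v1, v2);
    assert_eq!(v1, vec![1, 2, 3]);
}
```

Both calls sort correctly; the difference is only in what the optimizer can see, which is exactly the kind of advantage abstractions buy.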
If Rust doesn't also compile to faster code than C because of its better abstractions, that should be considered a sign that the optimizers need more work, not that Rust can't be faster. Writing optimizers is hard and takes a long time, so I'd expect Rust to be behind.
Note that the above is about real-world benchmarks, and is unlikely to amount to more than a 0.03% difference in speed - it takes very special setups to measure these differences, while simple code changes can easily buy several-hundred-percent differences. Common microbenchmarks are generally not large enough for the type system to make a difference, and so often show C as #1 even though on real-world problems it isn't.
Rust is a systems programming language by design; bit-banging is totally within its remit, and I can't think of anything in the kernel that C can do but Rust can't. If you want really, really tight control of exactly which machine instructions get generated, you would still have to go to assembler anyway, in either Rust or C.
That's the exact reason it was created in the first place - a portable macro assembler for UNIX - and it should have stayed there, leaving room in userspace for other things like Perl/Tcl/... on UNIX, or Limbo on Inferno. As the UNIX authors revised their ideas of what UNIX v3 should look like - already on UNIX v2, aka Plan 9 - there was a first attempt with Alef.
Or even C++, which many forget was also born at Bell Labs in the UNIX group. The main reason was that Bjarne Stroustrup never wanted to repeat his Simula-to-BCPL downgrade: C with Classes was originally designed for a distributed-computing research project at Bell Labs on UNIX, and Stroustrup certainly wasn't going to repeat the previous experience, this time with C instead of BCPL.
I'm not sure what you mean by "leaving place for". There was a place for Perl and Tcl on Unix. That's how we wound up with Perl and Tcl.
If you mean that C should have ceded all of user-space programming to Perl and Tcl, I disagree strongly. First, that position is self-contradictory; Perl was a user-space program, and it was written in C. Second, C was much more maintainable than Perl for anything longer than, say, 100 lines.
More fundamentally: There was a free market in developer languages on Unix, with C, Perl, Awk, Sed, and probably several others, all freely available (free both as in speech and as in beer). Of them, C won as the language that the bulk of the serious development got done in. Why "should" anything else have happened? If developers felt that C was better than Perl for what they were trying to write, why should they not use C?
"Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization... The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue...

Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?

Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."
-- Fran Allen interview, Excerpted from: Peter Seibel. Coders at Work: Reflections on the Craft of Programming
C's victory is more related to there not being any other compiled language in the box than to any marvelous technical capability of its own - the worse-is-better approach: use C.
Even more so when Sun started the trend of charging extra for UNIX development tooling, which only contained C and C++ compilers; additional compilers like Fortran and Ada, or an IDE, cost a bit more on top.
Other UNIX vendors were quick to follow suit.
But I've seen that quote before (I think from you, even). I didn't believe it then, and I don't believe it now.
There is nothing about the existence of C that prevents people from doing research on the kind of problem that Fran Allen is talking about. Nothing! Those other languages still exist. The ideas still exist. The people who care about that kind of problem still exist. Go do your research; nobody's stopping you.
What actually happened is that the people who wanted to do the research (and/or pay for the research) dried up. C won hearts and minds; Fran Allen (and you) are lamenting that the side you preferred lost.
It's worth asking, even if Ada or Algol or whatever were extra cost, why they weren't worth the extra cost. Why didn't everybody buy them and use them anyway, if they were that much better?
The fact is that people didn't think they were enough better to be worth it. Why not? People no longer thought that these automatic optimization research avenues were worth pursuing. Why not? Universities were teaching C, and C was free to them. But universities have enough money to pay for the other languages. But they didn't. Why not?
The answer can't be just that C was free and the other stuff cost. C won too thoroughly for that - especially if you claim that the other languages were better.
Worse is better, and most folks are cheap: if lemons are free and juicy sweet oranges have to be bought, they will drink bitter lemonade no matter what, and eventually it will even taste great.
Universities are always fighting with budgets; some of them can't even afford to keep the library stocked with good, up-to-date books.
> What actually happened is that the people who wanted to do the research (and/or pay for the research) dried up. C won hearts and minds; Fran Allen (and you) are lamenting that the side you preferred lost.
Eh, sort of. The rise of C is partially wrapped up in the rise of general-purpose hardware, which eviscerates the demand for optimizers to take advantage of the special capabilities of hardware. An autovectorizer isn't interesting if there's no vector hardware to run it on.
But it's also the case that when Java became an important language, there was a renaissance in many advanced optimization and analysis techniques. For example, alias analysis works out to be trivial in C: either you can obviously prove two pointers don't alias based on quite local information, or your alias analysis (no matter how much you try to improve its sensitivity) gives up and conservatively puts them in the everything-must-alias pile; there isn't much of a middle ground.
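A small sketch of the aliasing point, using Rust as the contrast: in C, the analogous `void add_twice(int *a, int *b)` must reload `*b` after every store to `*a`, because the two pointers might alias. In Rust the signature itself rules that out, so the compiler gets `noalias` information for free.

```rust
// In C: `void add_twice(int *a, int *b) { *a += *b; *a += *b; }` must
// reload *b after the first store - a and b might point at the same int.
// In Rust, `&mut` is guaranteed unique, so *b may stay in a register.
fn add_twice(a: &mut i32, b: &i32) -> i32 {
    *a += *b; // after this store...
    *a += *b; // ...*b is provably unchanged: no reload required
    *a
}

fn main() {
    let mut x = 1;
    let y = 10;
    assert_eq!(add_twice(&mut x, &y), 21);
    // The aliasing case C must worry about won't even compile here:
    // add_twice(&mut x, &x); // error: cannot borrow `x` as immutable
}
```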
Directly programming hardware with bit-banging, shifts, bitmasks and whatnot. Too cumbersome to do in large swaths in ASM, too low-level for Rust or even for C++.
Plus, for that kind of thing you have "deterministic C" styles which guarantee things will be done your way, all day, every day.
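For what it's worth, this kind of register-level bit manipulation is expressible in Rust too. A minimal sketch of a read-modify-write on a memory-mapped control register - the bit positions are hypothetical, and a local variable stands in for the register so the example can run; real code would use a fixed address from the datasheet:

```rust
use core::ptr::{read_volatile, write_volatile};

// Hypothetical register layout: bit 0 = enable, bits 4..=6 = mode.
const ENABLE_BIT: u32 = 1 << 0;
const MODE_MASK: u32 = 0b111 << 4;

// Classic read-modify-write; volatile accesses are never elided or
// reordered against other volatile accesses by the compiler.
unsafe fn set_mode(reg: *mut u32, mode: u32) {
    let mut val = read_volatile(reg); // read current register state
    val &= !MODE_MASK;                // clear the mode field
    val |= (mode << 4) & MODE_MASK;   // set the new mode
    val |= ENABLE_BIT;                // enable the peripheral
    write_volatile(reg, val);         // write it back, exactly once
}

fn main() {
    // A plain variable stands in for the memory-mapped register here.
    let mut fake_reg: u32 = 0;
    unsafe { set_mode(&mut fake_reg, 0b101) };
    assert_eq!(fake_reg, 0b101_0001);
}
```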
For everyone answering: this is what I understood by chatting with people who write Rust in amateur and pro settings; it's not some "Rust is bad" bias. The general consensus was that C is closer to the hardware and handles hardware quirks better, because you can do "seemingly dangerous" things which the hardware needs in order to initialize successfully. Older hardware is finicky, just remember that. Also, for anyone wondering: I'll start learning Rust the day gccrs becomes usable. I'm not a fan of LLVM, and have no problems with Rust.
Two reasons I can think of off the top of my head.
The assembly output from C compilers tends to be more predictable, by virtue of C being a simpler language. This matters when writing drivers for exotic hardware.
Sometimes, to do things like build a performant ring buffer (without VecDeque), you need unsafe Rust anyway, which IMO means taking on the complexity of the Rust language without any of the benefit.
I don’t really think there’s any benefit to using C++ over rust except that it interfaces with C code more easily. IMO that’s not a deal maker.
> The assembly output from C compilers tends to be more predictable, by virtue of C being a simpler language.
The usual outcome of this assumption is a user complaining that the compiler doesn't produce the expected assembly, and the compiler developers ignoring the complaint, because they never guaranteed any particular assembly output.
This is especially true for the kinds of implicit assembly guarantees people want when working with exotic hardware. Compilers will happily merge loads and stores into larger load/stores, for example, so if you need to issue two adjacent byte loads as two byte loads and not one 16-bit load, then you should use inline assembly and not C code.
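Short of inline assembly, the portable tool for this in both C (`volatile`) and Rust is a volatile access: the compiler may not elide volatile reads or reorder them against each other, and LLVM will not fuse them, so two volatile byte loads stay two byte loads. A Rust sketch:

```rust
use core::ptr::read_volatile;

// Two ordinary adjacent byte loads may legally be fused into a single
// 16-bit load. Volatile reads are not combined or elided, so this
// guarantees two separate byte accesses (short of inline asm).
fn read_two_bytes(p: *const u8) -> (u8, u8) {
    unsafe {
        let lo = read_volatile(p);        // first byte load, kept as-is
        let hi = read_volatile(p.add(1)); // second byte load, not merged
        (lo, hi)
    }
}

fn main() {
    let buf: [u8; 2] = [0xAB, 0xCD];
    assert_eq!(read_two_bytes(buf.as_ptr()), (0xAB, 0xCD));
}
```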
I’m not saying every C compiler is always perfectly predictable, but by virtue of C being a simpler language it should always be more predictable than Rust, barring arcane optimizations.
I do agree that if someone actually cares about the assembly they should be writing it by hand.
> I’m not saying every C compiler is always perfectly predictable
No C compiler is predictable. First, there is the compiler magic of optimization.
Then you have undefined behavior, of which C has plenty; it's almost a guarantee that you'll experience inconsistent behavior between compilers, targets, optimization levels, and the phases of the moon.
In Rust, use .iter() a lot to avoid bounds checks, and if you want auto-vectorization, use plenty of fixed-length arrays and look at how LLVM auto-vectorizes them. It takes getting used to, but so does literally every language if you care about the SOURCE -> ASSEMBLY translation.
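A minimal sketch of the iterator point: indexing checks bounds on every access unless the optimizer can prove the index in range, while an iterator walks the slice directly, so there is nothing to check and the loop vectorizes more readily.

```rust
// Indexed loop: each v[i] carries a bounds check unless LLVM can
// prove i < v.len() (it often can here, but that's the optimizer's
// call, not the language's guarantee).
fn sum_indexed(v: &[u32]) -> u32 {
    let mut total: u32 = 0;
    for i in 0..v.len() {
        total = total.wrapping_add(v[i]);
    }
    total
}

// Iterator loop: no indices, no bounds checks by construction,
// and a shape LLVM auto-vectorizes readily.
fn sum_iter(v: &[u32]) -> u32 {
    v.iter().fold(0u32, |acc, &x| acc.wrapping_add(x))
}

fn main() {
    let v: Vec<u32> = (1..=100).collect();
    assert_eq!(sum_indexed(&v), sum_iter(&v));
    assert_eq!(sum_iter(&v), 5050);
}
```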
> The assembly output from C compilers tends to be more predictable, by virtue of C being a simpler language.
That doesn't seem to be true, not in the presence of UB, different platforms and optimization levels.
> Sometimes, to do things like build a performant ring buffer (without VecDeque), you need unsafe Rust anyway, which IMO means taking on the complexity of the Rust language without any of the benefit.
If you write a data structure in Rust, you're expected to wrap the fiddly unsafe bits in a safer shell and provide unsafe access as needed. Sure, the inner workings of Vec, VecDeque, and ring buffers are unsafe, but the API used to modify them isn't (modulo any unsafe methods that state their prerequisites for safe access).
The idea is to minimize the amount of unsafe, not completely eradicate it.
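A minimal sketch of that pattern, using the ring buffer from the comment above as the example: the unsafe reads live inside the type, and callers only ever see a safe push/pop API. This is an illustration, not a production design (for one thing, it leaks remaining elements on drop).

```rust
use std::mem::MaybeUninit;

// Fixed-capacity ring buffer: unsafe internals, safe API.
pub struct RingBuffer<T, const N: usize> {
    buf: [MaybeUninit<T>; N],
    head: usize, // index of the next element to pop
    len: usize,  // number of initialized elements
}

impl<T, const N: usize> RingBuffer<T, N> {
    pub fn new() -> Self {
        Self {
            // SAFETY: an array of MaybeUninit needs no initialization.
            buf: unsafe { MaybeUninit::uninit().assume_init() },
            head: 0,
            len: 0,
        }
    }

    // Safe: a full buffer hands the value back instead of overwriting.
    pub fn push(&mut self, value: T) -> Result<(), T> {
        if self.len == N {
            return Err(value);
        }
        let tail = (self.head + self.len) % N;
        self.buf[tail] = MaybeUninit::new(value);
        self.len += 1;
        Ok(())
    }

    // Safe: the invariant `len > 0 => buf[head] is initialized` is
    // upheld by push/pop, so the unsafe read is sound.
    pub fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        let slot = self.head;
        self.head = (self.head + 1) % N;
        self.len -= 1;
        // SAFETY: this slot was written by push and not yet popped.
        Some(unsafe { self.buf[slot].assume_init_read() })
    }
}

fn main() {
    let mut rb: RingBuffer<u32, 4> = RingBuffer::new();
    assert!(rb.push(1).is_ok());
    assert!(rb.push(2).is_ok());
    assert_eq!(rb.pop(), Some(1));
    assert_eq!(rb.pop(), Some(2));
    assert_eq!(rb.pop(), None);
}
```

The point is exactly the one above: the unsafe surface is two small, auditable spots, and nothing a caller can do through `push`/`pop` can violate their invariants.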
Rust does OK at this, but it typically works better with some tooling to make register and bit-flag manipulation look more like normal Rust functions. chiptool and svd2rust do this for microcontroller code using all Rust. The only asm needed is the boot-up code that sets up enough to run Rust (or C).