As long as you compile with the version specified (e.g., `-std=c11`), I think backwards compatibility should be 100%. I've been able to compile decades-old codebases with modern compilers this way.
In practice, C has a couple of significant pitfalls that I've read about.
First, if you compile with `-Werror -Wall` or similar, new compiler diagnostics can cause the build to fail. That's easy enough to work around.
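A minimal sketch of the kind of thing that trips this (my example, not from the thread): GCC 8 added `-Wstringop-truncation` to `-Wall`, so code like the following built cleanly under `-Wall -Werror` for years and then started failing after a toolchain upgrade, with no source change at all:

```c
#include <string.h>

struct user { char name[8]; };

void set_name(struct user *u, const char *s) {
    /* GCC 8's new -Wall member -Wstringop-truncation warns here because
     * the result may not be NUL-terminated ("specified bound equals
     * destination size"); under -Werror, that warning is a build failure. */
    strncpy(u->name, s, sizeof u->name);
}
```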
Second, nearly any decent-sized C program has undefined behavior, and new compilers may change their handling of undefined behavior. (E.g., they may add new optimizations that detect and exploit undefined behavior that was previously benign.) See, e.g., this post by cryptologist Daniel J. Bernstein: https://groups.google.com/g/boring-crypto/c/48qa1kWignU/m/o8...
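A classic illustration of the second pitfall (my sketch, not taken from Bernstein's post): signed integer overflow is UB, so a modern optimizer may assume it can never happen and silently delete an overflow check that behaved as intended under older, less aggressive compilers:

```c
#include <limits.h>
#include <stdio.h>

int will_overflow(int x) {
    /* UB if x == INT_MAX: signed overflow is undefined, so GCC/Clang at
     * -O2 may assume x + 1 > x always holds and fold this to "return 0",
     * removing the check the programmer was relying on. */
    return x + 1 < x;
}

int main(void) {
    printf("%d\n", will_overflow(INT_MAX)); /* may print 0 or 1 */
    return 0;
}
```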
While not entirely wrong, the UB issues are a bit exaggerated in my opinion. My C code from 20 years ago still works fine even when using modern compilers. In any case, our plan is to remove most UB, and there is quite good progress. Complaining that your build fails with -Werror seems a bit weird. If you do not want it, why explicitly request it with -Werror?
To be clear: I mean UB in the C standard. The cases where there is UB are mostly spelled out explicitly, so we can go through all the cases and define behavior. There might be cases where there is implicit UB, i.e. something is accidentally not specified, but this has always been fixed when noticed. It is not possible to remove all cases of UB, but the plan is to add a memory-safe mode with no UB, similar to Rust.
The warning argument is silly. It just means that your code is not up to par with modern standards. -Wall is a moving goalpost, and it gets new warnings added with every toolchain release because toolchain developers are trying to make your code more secure.
I mean, yeah, I said it was easy enough to work around. But it's an issue I've seen raised in discussions of C code maintenance. (The typical conclusion is that using `-Wall -Werror` is a mistake for long-lived, not-actively-developed code.) Apologies if I overstated the case.
Not even close to 100%. The reason it feels like every major C codebase in industry is pinned to some ancient compiler version is that upgrading to a new toolchain is fraught. The fact that most Rust users are successfully tracking relatively recent versions of the toolchain is a testament to how stable Rust actually is in practice (an upgrade might take you a few minutes per million lines of code).
Try following your favourite distro's bug tracker during a GCC upgrade. Practically every update breaks some packages, sometimes fewer, sometimes more (especially when GCC changes its default flags).
Every language has breaking changes. The question is the frequency, not whether it happens at all.
The C and C++ folks try very hard to minimize breakage, and so do the Rust folks. Rust is far closer to those two than other languages. I'm not willing to say that it's the same, because I do not know how to quantify it.
But you can still use gets() if you're using C89 or C99[1], so backwards compatibility is maintained.
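A quick sketch of that (assuming a typical glibc toolchain): `gets()` was removed from the standard in C11, so this builds with `gcc -std=c99` but fails, or at least warns about an implicit declaration, with `-std=c11`:

```c
#include <stdio.h>

int main(void) {
    char buf[64];
    /* Still declared under -std=c99, gone in C11; glibc's linker note
     * ("the `gets' function is dangerous") may still appear. It remains
     * unsafe: there is no bounds check on buf. */
    if (gets(buf) != NULL)
        puts(buf);
    return 0;
}
```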
Rust 2015 can still evolve (either by language changes or by std/core changes) and packages can be broken by simply upgrading the compiler version even if they're still targeting Rust 2015. There's a whole RFC[2] on what is and isn't considered a breaking change.
That's not what backwards compatibility means in this context. You're talking about how a compiler is backwards compatible. We're talking about the language itself, and upgrading from one version of the language to the next.
Rust 2015 is not the same thing as C89, that is true.
> packages can be broken by simply upgrading the compiler version
This is theoretically true, but in practice, this rarely happens. Take the certainly-a-huge-mistake time issue discussed above. I actually got hit by that one, and it took me like ten minutes to even realize that it was the compiler's fault, because upgrading is generally so hassle free. The fix was also about five minutes' worth of work. Yes, they should do better, but I find Rust upgrades to be the smoothest of any ecosystem I've ever dealt with, including C and C++ compilers.
(side note: I don't think I've ever thanked you for your many contributions to the Rust ecosystem, so let me do that now: thank you!)
> You're talking about how a compiler is backwards compatible. We're talking about the language itself, and upgrading from one version of the language to the next.
That's part of the problem. Rust doesn't have a spec. The compiler is the spec. So I don't think we can separate the two in a meaningful way.
C (and C++) code breaks all the time between toolchain versions (I say "toolchain" to include compiler, assembler, linker, libc, etc.). Some common concerns are: headers that include other headers, internal-but-public-looking names, macros that don't work the way people think they do, unusual function-argument combinations, ...
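A sketch of the first concern (my illustration; the transitive include is an assumption for the example): code that calls a function without including its header compiles by accident until the transitive include disappears, and GCC 14 now makes the resulting implicit declaration a hard error by default:

```c
/* This file calls isatty() but never includes <unistd.h>, relying on
 * <stdio.h> to pull it in transitively (which some older libc headers
 * happened to do). When a libc update stops doing that, the call becomes
 * an implicit declaration, which GCC 14 rejects by default, so the build
 * breaks with no change to the source. */
#include <stdio.h>

int main(void) {
    /* On a stricter toolchain: error: implicit declaration of 'isatty' */
    printf("stdin is a %s\n", isatty(0) ? "terminal" : "pipe/file");
    return 0;
}
```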
Decades-old codebases tend to work because the toolchain explicitly hard-codes support for the assumptions they make that no standard actually guarantees.