Musings on the C Charter (aaronballman.com)
56 points by steveklabnik 8 months ago | 65 comments



> I would love to see this principle updated to set a time limit, along the lines of: existing code that has been maintained to not use features marked deprecated, obsolescent, or removed in the past ten years is important; unmaintained code and existing implementations are not. If you cannot update your code to stop relying on deprecated functionality, your code is not actually economically important — people spend time and money maintaining things that are economically important.

i have been posting FOSS code since 1998, much of it in C, and not one line of it is something i would dare classify as "economically important." It was written for fun and much of the fun of writing in a language is lost when updates in the language break old code. (The reason i won't touch PHP anymore is because updates broke my long-working code too many times.)

C Charter folks: by all means, break long-standing features when folks build with -std=cNEW_VERSION, but keep in mind that -std=c89 or -std=c99 are holy contracts with decades of code behind them, all of which should still compile in 30 years when built in -std=c89/c99 modes, regardless of whether they're "economically important."


I think I have the opposite opinion. Newer C standards have a very high degree of backwards compatibility. I think we should try to transition code to the current ISO standard, and in almost all cases this will just work without problems. In the few cases where it does not, it is usually for a very good reason.

Now, nobody is stopping compilers from supporting old language modes, but it adds a lot of complexity, not for making the code compile, but for all the diagnostics, which are different. If you look at the front-end code of GCC, for example, a lot of code is dedicated to this. If we could remove all this stuff at some point (which would still allow old code to compile, but would remove diagnostics for things which were disallowed in earlier standards and are now allowed), then this would reduce the maintenance burden substantially. And if one really needs those old diagnostics, one could simply go back to an old version of the compiler.


I would rather just announce to the compiler which standard version a given piece of code should adhere to (and it's fine if -std=c2x doesn't accept older C standards except for header declarations - while at it, please remove the 'inline' keyword from the next C standard ;))

To some extent that's already possible with the "-std=..." option of course, but it would be better to also allow changing the standard version within the same compilation unit (at least if it turns out that including headers written in old C standards can't be supported, in that case I would like to wrap a block of #includes with a 'use c89 { ... }').

Random example for why it is important to allow older C versions to compile: I just made a mid-90's assembler for 8-bit CPUs written in C89 work in VSCode via WASM/WASI without requiring any code changes (and yes I've been looking long and hard for an alternative assembler tool which provides the same feature set written more recently without luck, I even considered writing my own assembler from scratch).

It would suck if I had to port the code base to a more recent C standard just to make it compile, and (even if that were possible) automatically converting that code to a more recent C version would introduce a hard "before/after" split in version control.


What is wrong with "inline"?

I do not think it is feasible to maintain all language versions in parallel for eternity. What would make porting this code to a newer standard difficult in this specific case?


Inline "encourages" moving implementation code into headers instead of only interface declarations, and the inline feature isn't all that useful anymore with LTO anyway.

It's a good thing to only have declarations in headers, it simplifies parsing with 3rd party tools (to generate language bindings), and speeds up compilation (see the C++ stdlib headers for the canonical "why is implementation code in headers a bad idea" example). IMHO mixing interface declarations with implementation details was one of the cardinal sins of C++.
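To make that concrete, here is a minimal sketch (names made up) of the two header styles being contrasted: a declarations-only header, and the kind of header that 'inline' tends to encourage, where the body is visible to every includer.

    /* counter.h -- declarations only: 3rd-party tools (binding generators,
       etc.) can parse the interface without seeing any implementation */
    #ifndef COUNTER_H
    #define COUNTER_H

    int counter_next(void);

    #endif

    /* counter_inline.h -- what 'inline' tends to encourage: the body lives
       in the header, so every translation unit that includes it re-parses
       the implementation */
    #ifndef COUNTER_INLINE_H
    #define COUNTER_INLINE_H

    static inline int counter_next_inline(int current) {
        return current + 1;
    }

    #endif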

> What would make porting this code to a newer standard difficult in this specific case?

It's pointless busywork, and it's unclear how deep the changes should go (for instance: does it make sense to move C89 variable declaration from the start of scope blocks to their initialization point just because it might potentially be "safer" but touches half of all lines of code?).
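For illustration, the mechanical change in question would look roughly like this (hypothetical function, not from the assembler in question):

    /* C89 style: all declarations at the top of the block */
    int sum_c89(const int *values, int count)
    {
        int i;
        int total;

        total = 0;
        for (i = 0; i < count; i++) {
            total += values[i];
        }
        return total;
    }

    /* C99-and-later style: declare at the point of initialization */
    int sum_c99(const int *values, int count)
    {
        int total = 0;
        for (int i = 0; i < count; i++) {
            total += values[i];
        }
        return total;
    }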

The code demonstrably works as C89, and any bugs that matter most likely had already been fixed decades ago, also doing large scale syntax changes is just noise in version control which makes bug fixing harder (because that often involves diving into the change history).


I fully agree that this is the cardinal sin of C++. I think that C still offers much better encapsulation by using interfaces based on incomplete struct types. While I understand your point regarding inline, there are some comments I would like to make: it is now there, and removing it would break existing code. The alternatives are often macros, but inline functions, where they can replace macros, are better, and the compiler can decide not to inline when it does not make sense. LTO is an alternative, but it does not work across library boundaries and also has a very substantial compile-time cost.
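For reference, the incomplete-struct encapsulation pattern mentioned above looks roughly like this (illustrative names); only the .c file ever sees the struct layout:

    /* widget.h -- public interface: only an incomplete type is exposed */
    #ifndef WIDGET_H
    #define WIDGET_H

    typedef struct widget widget;     /* incomplete type */

    widget *widget_create(int size);
    int     widget_size(const widget *w);
    void    widget_destroy(widget *w);

    #endif

    /* widget.c -- the only place where the layout is visible */
    #include <stdlib.h>
    #include "widget.h"

    struct widget {
        int size;
    };

    widget *widget_create(int size)
    {
        widget *w = malloc(sizeof *w);
        if (w) w->size = size;
        return w;
    }

    int widget_size(const widget *w) { return w->size; }

    void widget_destroy(widget *w) { free(w); }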

Compiling with a new language mode does not force you to move variable declarations to the point of initialization, but you would now have the option to improve the code in this way when it makes sense. It is difficult to see this as a disadvantage. My point is that moving it to a new language version would usually not require changes to the code, except where the old modes were dangerous. So I would expect to find serious bugs when doing this.


> ... So I would expect to find serious bugs when doing this.

That's actually a good point hmmm (not in the case of this specific assembler, which seems to be quite robust and well-written), and even if there were serious memory corruption errors lurking, they would be contained because the assembler runs in a WASM VM.

For projects that are still maintained, a "moving standard" isn't that much of a problem, after all we've been conditioned that every new compiler update adds new warnings which "break" existing code ;)

...there is a certain value in taking a "finished/frozen" project and integrating it into a new project without requiring code changes though.

I'm sure there must be a good middle-ground for C, where obviously bad and outdated language features (starting with leftovers from the K&R era) can be removed without causing too much breakage even on most old code ... but then, why keep 'inline' ;P


Which C++ fixed with modules. Something that most likely C will never have.


> Something that most likely C will never have.

...rather, doesn't need. C++ modules just (potentially at least) fix a problem that C++ created in the first place (exploding build times because of complex template code in stdlib headers).

Also, I have yet to see clear indications that C++ modules drastically improve build times in real world projects.


Back when I had to develop in C, I would routinely wait for 1h builds.

The trick to making them faster is the same as with C++: never compile everything from source unless required, and that includes template code for common type parameters.

As for compile times, see the Microsoft Office modules migration, or the fact that VC++'s import std in C++23 is faster than a plain #include <iostream>.


> I think we should try to transition code to the current ISO standard and in almost all cases this will just work without problems. In the few cases, where it does not, it is usually for a very good reason.

Microsoft's C compiler did not support C99 until something like 15 years after the standard came out (and, according to colleagues who use that platform, reportedly still doesn't support certain features of it).

It happens with surprising frequency that someone reports to the sqlite project that The Latest Version no longer works on their pre-2005 environment and they'd like to see it patched to work there.

My point is: users of a given platform might not be able to use The Latest Stuff (or anywhere near it, for that matter) because their OS vendor is reticent, because they have to support an old platform which is not targeted by newer tools, or for whatever other reason.

> Now, nobody is stopping compilers from supporting old language modes, but it adds a lot of complexity, ...

i don't doubt that, and i do sympathize with the maintainers. i'm not saying they should never ever remove C89 support, but if they do then there needs to be an alternative, like a fork of the last compiler version which supported it, maintained at least to the extent that the rare genuine compiler bugs can be resolved.


Microsoft considered C done, as clarified by Herb Sutter.

https://herbsutter.com/2012/05/03/reader-qa-what-about-vc-an...

They only kept updating their C support to the extent required by ISO C++ requirements, and some key customers.

Anyone else still dependent on C was suggested to use clang, this is also why clang is part of Visual Studio compiler suite nowadays.

Around the time the Microsoft <3 Linux stuff started, they decided to backtrack on this matter and started updating their C support; however, since C11 made a couple of things from C99 optional, they decided to skip those.

VLAs have in any case proven to be a rich source of exploits, to the extent that Google sponsored the work to remove all uses of them from the Linux kernel.

https://www.phoronix.com/news/Linux-Kills-The-VLA


The VLA security problems are a bit of a myth. In the kernel it may be somewhat of a problem, but with stack clash protection (which one should activate anyway) there isn't really an inherent security issue anymore. The code quality improvements of using VLAs usually make it worth using them IMHO.


A myth that was worth every penny fixing it, as per Google.


Do you have a link? BTW: I was tangentially involved in this effort...


It was in my original comment.

Also there were a couple of talks from Linux Plumbers Conference given by Kees Cook, if I recall correctly.


I now regret helping with this effort, since people use it as an argument against using VLAs in general, although in my opinion this is clearly the wrong conclusion outside of the kernel. VLAs are basically always superior to the next best alternative: they are safer than alloca() (and standards-compliant), they are faster than heap allocation (and similarly safe), and they use less stack and allow better bounds checking than worst-case fixed-size arrays on the stack (but are slower).
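A small sketch of the trade-off being described (made-up functions; the caller is assumed to validate n so that it is non-zero and bounded):

    #include <stdlib.h>
    #include <string.h>

    #define MAX_N 4096          /* worst case for the fixed-size variant */

    /* VLA: exactly n bytes of stack, size known to the compiler */
    void process_vla(const char *src, size_t n)
    {
        char buf[n];
        memcpy(buf, src, n);
        /* ... work on buf ... */
    }

    /* Heap: safe for large n, but pays for malloc/free on every call */
    void process_heap(const char *src, size_t n)
    {
        char *buf = malloc(n);
        if (!buf) return;
        memcpy(buf, src, n);
        /* ... work on buf ... */
        free(buf);
    }

    /* Worst-case fixed array: always burns MAX_N bytes of stack */
    void process_fixed(const char *src, size_t n)
    {
        char buf[MAX_N];
        if (n > MAX_N) return;
        memcpy(buf, src, n);
        /* ... work on buf ... */
    }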


Microsoft declared C dead and wanted everybody to transition to C++. Luckily this changed a bit and MSVC now supports newer standards. There are also other alternatives on Windows now. So I do not think there is a good reason to put everything on standstill anymore. C needs to evolve and old code must be maintained. For me, this means that code should be transitioned to newer standards. In no other industry would it be acceptable to ignore current industry standards. Now, I do understand that some projects do not want to use newer features for portability reasons. But this does not mean that they couldn't be compatible with newer ISO C modes as well. Being compliant with C17 does not necessarily mean one has to use new features.


Well, that's for you to take up with the compiler writers, who have so far done this, but it does put a lot of work on them to support many versions as they diverge.


I don't think that is ever going to change. What might happen is that new compilers drop -std=c89 entirely if they can't support it, but that also seems unlikely.


> If you cannot update your code to stop relying on deprecated functionality, your code is not actually economically important — people spend time and money maintaining things that are economically important.

I think this view is mistaken. Tools that only rarely need even small changes are some of the most valuable, because they produce value at minimal cost.

> ...existing code that has been maintained to not use features marked deprecated, obsolescent, or removed in the past ten years is important; unmaintained code and existing implementations are not.

I understand that you can't support legacy code forever, but we need to get away from this idea that ten years is a long time for software to run, or that only software undergoing constant churn is worth anything.


Aaron is speaking from experience in trying to get support for prototype-less functions in C removed, something which imposes a surprisingly high burden in the compiler and has been deprecated for over 30 years... and still received a lot of pushback because of the potential of breaking code that hadn't been touched in that time.

> I understand that you can't support legacy code forever, but we need to get away from this idea that ten years is a long time for software to run, or that only software undergoing constant churn is worth anything.

The specific issue here is the code which lies at the intersection of the three categories:

1. Code that is N decades old and still in active use.

2. Code that needs to be compiled by the latest and greatest compiler versions.

3. Code that no one is willing to invest the resources in to make any changes to keep compiling.

That intersection is quite small, if not empty entirely. Consider the programming language with the longest pedigree, Fortran, nearing its 70th birthday, with a large amount of code very firmly meeting the first criterion... and yet I don't think any modern compiler supports anything pre-Fortran 77.


There could be larger codebases that are under active development but contain some ancient source files that no one dares to touch any longer.

If the latest compilers were to stop supporting old code then some new code could be cut off from further compiler upgrades as well.


> that no one dares to touch any longer.

This, though, highlights a grave problem. Rather than code which everybody understands (which could easily be re-written for a newer language), this is code which nobody understands. Replacing this code is in fact urgent.


Replacing the code may be in fact impossible exactly because nobody understands it, while it still fulfills a vital business function (or sometimes a vital hobbyist function). There is a lot of code like that out in the world.


In this particular case the change required (K&R to ANSI C) is such a mechanical exercise that anyone with a modicum of C experience can perform it successfully. We’re not talking about decades of taxation rules encoded in COBOL here.
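For anyone who hasn't seen it, the conversion is roughly this (toy example; K&R-style definitions still compile in -std=c89 and, with warnings, in later modes up to C17):

    /* Before: K&R-style definition, parameter types declared between the
       parameter list and the body */
    int add_knr(a, b)
        int a;
        int b;
    {
        return a + b;
    }

    /* After: the same function as an ANSI/ISO prototype-style definition */
    int add_ansi(int a, int b)
    {
        return a + b;
    }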


The post asks the following question, which brought out a visceral and immediate answer from me:

‘When two implementations support the same notional feature with slightly differing semantics, should the committee use undefined behavior to resolve the conflict so no users have to change code, or should the committee push for well-defined behavior despite knowing it will break some users?’

I lurched towards the second option for ‘well-defined behavior’. And I would answer that way not just despite the known breakage; I would say it is the correct choice even in the event of large breakages for some percentage of active code bases. I have a hard time figuring out who would choose to accept more undefined behavior for new semantic constructs.

While I may not accept the author’s stance and conclusions on the question of ‘Trust the Programmer’, I do agree with his thoughts about the inherent positives of a full-throated argument in favor of more semantic constructs being ‘well-‘ and ‘implementation-‘defined.

I may even go a step further and reach for an ultimate position: if eliminating an instance of undefined behavior merely degrades performance metrics by some percentage, then it should be eliminated if at all possible. At a minimum, the goal should be to move to, at the most liberal, implementation-defined behavior for any given semantic construct. This would force compiler writers to specify and particularize what syntactic and semantic constructions they were taking advantage of to generate performance gains, and allow developers to decide whether they could adhere to the implementation’s semantic guarantees.


Although Aaron phrased it as an aside, I think this is a critical point about how "Trust the Programmer" should be construed going forward:

> I think it’s perfectly reasonable to expect “trust me, I know what I’m doing” users to have to step outside of the language to accomplish their goals and use facilities like inline assembly or implementation extensions.

This is an extremely good point: if you can "trust the programmer" to write C code which exploits undefined behavior or other fundamentally unsafe compiler-dependent features, then you should be able to trust them to write inline assembly or a compiler extension to accomplish the same goal. If they can't, then they shouldn't be mucking around with undefined behavior in C: they might understand the behavior of the compiler at a "ChatGPT level" - as a set of ad hoc if-A-then-B's - but I wouldn't trust them to make serious decisions about state and security.


> ‘When two implementations support the same notional feature with slightly differing semantics, should the committee use undefined behavior to resolve the conflict so no users have to change code, or should the committee push for well-defined behavior despite knowing it will break some users?’

That's a false dichotomy. You could also use implementation defined behaviour. Or specify that these n behaviours would be valid.

'Undefined behaviour' is too big of a sledgehammer.


This works well in some ecosystems, but the C ecosystem has a "don't pessimize my weird implementation" goal. A bunch of different platforms wrap integers at different widths (or do different things entirely). Say you want to define it. Great! You probably pick the most common case. But now there is some weird embedded system that doesn't provide this wrapping behavior in hardware and the compiler needs to emit code to implement the wrapping and now all integer operations are slower on this target. These devs are now mad!
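A rough sketch of what "defining it" means at the language level, assuming signed overflow were defined to wrap (roughly what GCC's -fwrapv option provides today); on a two's-complement CPU this costs nothing, but on a target without that behavior in hardware the compiler has to emit the equivalent extra work for every signed add:

    #include <stdint.h>

    int32_t add_wrap32(int32_t a, int32_t b)
    {
        /* unsigned arithmetic wraps by definition, so do the add there... */
        uint32_t sum = (uint32_t)a + (uint32_t)b;
        /* ...then convert back (two's complement in practice) */
        return (int32_t)sum;
    }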

To get the "just define it all" approach you need people on weird ecosystems to be okay with paying for it. You think it is worth paying for (and I do too, frankly). But a significant portion of both the C and C++ communities don't - and that makes this very very hard.


One important point is that it’s implementers who vote on the C standard. And they may not want to vote for a version that breaks compatibility for their users (but not for users of other, competing implementations). This is one reason why certain semantics remain undefined or implementation-defined.


Yeah. I think it's disingenuous to talk about breaking things for users, as though people are forced to use a newer language standard.

C99 "broke" implicit declarations, but few if any people were forced to use C99 and it never became the default in, say, GCC (-std=gnu11 became the default in GCC 5.5, released in 2017).


Agreed, the biggest concern with this point of view is that the developer then has to ensure the older version of the compiler stays functional as OSes and execution environments progress. That may be a reach, but I think one of the heavy imperatives of going to a more defined standard for the semantics of C would be forcing compiler implementations to be very clear about what standard they are supporting and what kind of guarantees they are making about support timeframes for that standard.

The above is because I would hope that one result of pushing a nearly fully ‘defined’ (well or implementation) standard would be a strong interconnectedness and compositionality of semantics between all semantic constructions. This should mean compiler implementers cannot just fall back on a mish-mash of standards compliance and then claim undefined behavior lets them simply omit certain semantic constructs. I would like to think having the language be very clearly defined would almost require complete adoption of a given standard to ensure the compiler was compliant.

I am aware that such a radical realignment of C’s structure is nearly impossible, but if C cannot or will not do it, there may be the option for an incredibly similar language to piggyback its way to common use. This may also satisfy some of the arguments/positions in TFA concerning ‘Trust the Programmer’, where this superseding language can ‘unsafe/non-conforming’ out to C directly in C syntax in the event a non-conforming semantic construction is needed or desired by a developer.


C is doing fine and doesn't need interference. If people want modern C they are better off using other programming languages, and leave the rest of us with giant C codebases alone without creating pointless make-work.


I think if C does not evolve, some people with giant C codebases will find themselves in the unfortunate position that they are suddenly shut out of a market by regulators, because the C code is considered a risk, or that they will be liable for bugs caused by preventable memory safety issues, and keeping the code will not be sustainable.


That's not really the case. For example, Airbus has a giant codebase all written in C. It's never going to be rewritten. Instead they apply many tools to it, including formal proofs (of both the code and the toolchain). They work closely with the relevant regulators.

I'm actually more confident in that process than I would be in some insane plan to rewrite it all in Rust - which would take forever, and Rust doesn't have any of the tooling, formally proven compilers, a language specification, more than one implementation, etc etc.

People really don't know what they're talking about when they claim C codebases will be rewritten or will "have to be rewritten" because of "regulations", when the regulators are on top of this stuff already.


It is true that safe systems require more than just a programming language. However, the Rust compiler has already been qualified for some safety standards, with more on the way, including the things that Airbus does. You're right that "defense in depth," in a sense, is a good thing for safety, and starting with a language that's memory safe by default adds an additional layer in comparison to things that don't.

There is already a bill in the US (it's part of a funding bill, so it's just a matter of time until it passes) under which, after it passes, the DoD will put out a plan for moving towards memory safety for things purchased by the DoD. Now, as we all know, this isn't something that will happen overnight, so there will be signoffs for exceptions, I'm sure; we'll see what actually comes of it.

There is also a similar one in the EU, but I know less about how things work there so I won't say more other than "I know it exists."


The DoD mandated the usage of Ada [1] in the early 90's but changed their minds a few years later. History repeats itself?

[1] https://en.wikipedia.org/wiki/Ada_(programming_language)


It doesn't necessarily repeat itself, but it does rhyme :)

I agree that this is a great story about how even a government mandate does not mean something may come to pass.

However, I also think that the conditions are different enough that this is not guaranteed to suffer the same fate. There are a few reasons why: the first is that the goals are different. Ada was created by the DoD because they thought that there were too many languages in use there, and that standardizing on a single, modern language would be far better. So they set off to create Ada, and in 1980 the first version was done. But it wasn't seen as popular, for various reasons.

In the words of the mandate itself: https://web.archive.org/web/20160304073005/http://archive.ad...

> In March, 1987, the Deputy Secretary of Defense mandated use of Ada in DOD weapons systems and strongly recommended it for other DOD applications. This mandate has stimulated the development of commercially-available Ada compilers and support tools that are fully responsive to almost all DOD requirements. However, there are still too many other languages being used in the DOD, and thus the cost benefits of Ada are being substantially delayed. Therefore, the Committee has included a new general provision, Section 8084, that enforces the DOD policy to make use of Ada mandatory.

It didn't catch on enough naturally, and so it needed a push, and the mandate was supposed to accomplish that. But it backfired. First of all, there were a LOT of exceptions (which, as I mentioned, could easily happen in this situation as well). The mandate:

> "Notwithstanding any other provisions of law, where cost effective, all Department of Defense software shall be written in the programming language Ada, in the absence of special exemption by an official designated by the Secretary of Defense."

That "where cost effective" was a big loophole.

Second, well, take this article from 1997: https://www.militaryaerospace.com/communications/article/167...

> Chief complaints about Ada since it first became a military-wide standard in 1983 centered on the perception among industry software engineers that DOD officials were "shoving Ada down our throats."

Part of the idea of repealing the mandate was that it would be more palatable to people, and that they'd be more likely to use it if it were repealed.

But there were a lot of other parts to this story that led to the removal of the mandate, like an overall movement towards more off-the-shelf commercial components rather than making everything in-house. But this comment is already too long.

---------------------------------

Okay so why is this different? Well, first of all, because it's not actually a mandate: the language in the bill is

> SEC. 1613. POLICY AND GUIDANCE ON MEMORY-SAFE SOFTWARE PROGRAMMING.
>
> (a) POLICY AND GUIDANCE.—Not later than 270 days after the date of the enactment of this Act, the Secretary of Defense shall develop a Department of Defense wide policy and guidance in the form of a directive memorandum to implement the recommendations of the National Security Agency contained in the Software Memory Safety Cybersecurity Information Sheet published by the Agency in November, 2022, regarding memory-safe software programming languages and testing to identify memory-related vulnerabilities in software developed, acquired by, and used by the Department of Defense."

That sheet is this one: https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI...

and the most salient part

> NSA advises organizations to consider making a strategic shift from programming languages that provide little or no inherent memory protection, such as C/C++, to a memory safe language when possible.

"when possible" feels like that "where cost effective" bit. I guess I've said this twice in this comment now. We'll see.

But moreover, this is not recommending "rewrite everything in Rust." It is not recommending rewriting every single thing in any single language, or moving towards a new language designed by them. It goes on to mention

> Some examples of memory safe languages are C#, Go, Java, Ruby™, and Swift®.

I suspect that this policy will not be as controversial as broadly as the Ada mandate was. It's just a different thing.

Time will tell, I guess.


Excellent write up. This would make a good article for HN :)


I may in fact have been like "damn I should make this even better and put it in a blog post" so thank you for validating that that is in fact a good idea!


There are tools to make C safe and for some industry use this will work. I also agree that rewriting in Rust is often a mistake.

But NSA is already advising against the use of C: https://www.nsa.gov/Press-Room/News-Highlights/Article/Artic...

And the EU is working on new liability rules. Yes, Airbus might be able to get around this. I am more worried about smaller companies or products including open source.


I bet that many C cowboys[0] would rather switch languages than code under the requirements that Airbus has to comply with.

[0] - A meme from the days when Modula-2 and Object Pascal used to be derided as "programming in a straitjacket" in Usenet flamewars.


I hope the tools and techniques become more widely available, and those which are proprietary are released or reimplemented as open source.


Memory bugs are just bugs. They are not particularly interesting bugs, nor difficult to track down. Even if regulators hold vendors responsible for their bugs, memory bugs aren't the only type of bug, nor are they the only mechanism for security breaches. Vendors should already be doing their due diligence to catch bugs by writing a thorough test suite, fuzzing, and static analysis. With C they only have one extra step, and that is to run their thorough test suite against sanitizers.


Memory bugs very often allow exploitation, which is not always the case for other types of bugs. Although it is of course true that other types of bugs can also be serious, memory safety violations tend to be more serious if security is a requirement. Fuzzing with sanitizers is not currently able to detect all memory bugs.


I agree. Lua, Lisp, C - they are all conceptually simple languages that have stood the test of time. Let languages like C++ be the "everything" language and leave C alone.


What are your thoughts on efforts to rewrite your giant C codebase in Rust? Most of the people who hold this opinion also seem to dislike people coming near their code with a new language. Improvements from C are generally driven from a pragmatic viewpoint that there is lots of legacy code that is not being well served by C but cannot afford to be rewritten just yet.


We've incrementally replaced C objects with OCaml in one project, still linking everything into one final binary. The two languages (or indeed C and Rust, or C and Golang) interoperate fine. You don't need to rewrite the entire project in one go.


I'll start caring about C standards above gnu99 as soon as they add case range values to switch statements, like case A ... B: -- which I've only been using for what, 35 years now?
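For those who haven't used it, the GNU extension in question (accepted by GCC and Clang in the gnu* modes, still not ISO C) looks like this:

    const char *classify(unsigned char c)
    {
        switch (c) {
        case '0' ... '9':  return "digit";
        case 'a' ... 'z':
        case 'A' ... 'Z':  return "letter";
        default:           return "other";
        }
    }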

All that time, I have really wondered what the flip they are on about: adding extensions that I really raise eyebrows at, while ignoring complete elephants in the room, like the above feature.

Last year they were pontificating about something something adding some sort of (complicated) syntax to support destructor functions and were wondering about prior implementations, and I had to point out to them that support for destructor functions has been there as a gcc extension for countless years.

It's like they aren't actually using the language, just speccing it. Kinda like linux maintainers who are more like some sort of priesthood and gatekeepers than actual users of the system.


Everybody in WG14 is fully aware of the GNU extensions. Also many of us are active C users (but maybe not enough).

By complicated syntax, do you mean the "defer" proposal?

In any case, different people have different priorities and ideas. The best way to influence decisions is to contribute to the standardization process. For example, everybody can submit proposals.

But regarding case range values, there is now one: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3194.htm


That defer proposal was the stupidest thing I've read in a long time. Very glad it was rejected.


Well there is a new one: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3199.htm

But calling things "stupid" is relatively useless internet noise and the best way to be ignored.


It's funny that they summarise all the ways that __attribute__((cleanup)) is already being used successfully, then make up some stuff about a problem that it allegedly has (which doesn't affect any of the users), then try to shoehorn defer in again.

Just standardize __attribute__((cleanup)) as it is already widely implemented and used, and stop messing around with defer nonsense.
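For context, this is roughly how the extension is used today in GCC and Clang (illustrative function, minimal error handling): the handler receives a pointer to the annotated variable and runs on every path out of the scope.

    #include <stdio.h>
    #include <stdlib.h>

    static void close_file(FILE **f) { if (*f) fclose(*f); }
    static void free_buf(char **p)   { free(*p); }

    int print_first_line(const char *path)
    {
        __attribute__((cleanup(close_file))) FILE *f = fopen(path, "r");
        __attribute__((cleanup(free_buf)))   char *line = malloc(256);

        if (!f || !line)
            return -1;          /* both handlers still run on this path */
        if (!fgets(line, 256, f))
            return -1;
        printf("%s", line);
        return 0;               /* ...and on this one */
    }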


Not my proposal, but what exactly do you dislike about it?

cleanup is unlikely to be standardized exactly as implemented as it would violate the rule that standard attributes can be removed from a correct program.


That it would require rewriting everything that is already using __attribute__((cleanup)), which is a lot of code these days. Just make-work for no benefit.

> it would violate the rule

Good, let's change that rule for this case.


While I agree that there is some value in putting an existing extension into the standard as is, not doing so would not mean that everybody has to rewrite their code. Code that already uses a non-standard attribute could simply continue to do so. It would just not automatically become standard compliant.

I think changing the rule would create a mess, so I am not in favor of this. I wonder if there is a way so that existing macro wrappers could be adapted...


Phew, it wasn't just me then :-)


The language authors can be shortsighted, but they're not stupid. The discussion of destructors has proceeded for years and everyone there knows that GCC already has __attribute__((cleanup)).


Don't know if this is an unpopular opinion or not:

Today if you are using C it's because you HAVE to (embedded toolchain, historical code, etc). In that case, you use the C you find, not the C in the next version of the standard. So I find the C standard committee's work... uninteresting.

Sorry, guys.


That's essentially true and actually one of the strengths of C, that the ISO standard is relatively unimportant for real-world projects.

The standard is just the minimal supported common language core of a family of C dialects, with the actually interesting stuff happening outside the standard in language extensions and 3rd-party libraries.

In a way we can "thank" Microsoft for that. If the MSVC team hadn't boycotted the C standard for nearly two decades, adhering to the C standard would have been more important. But since MSVC didn't support anything past C89 anyway, the C world moved on without them by switching to different compilers.


It is easy to blame Microsoft while forgetting the huge world of embedded devices with proprietary compilers, many of which are still stuck on C89 with extensions, as the tiny CPUs hardly support anything better.

If anything, I was saddened that they backtracked on their decision to focus on C++.

Then again, nowadays, thanks to the ongoing security bills, the Azure business unit has a roadmap to only use Rust for new systems programming projects, and C#/Go where managed languages are not an issue, while the Visual C++ team seems mostly focused on keeping game developers happy and little else.


SDCC, which is essentially a hobbyist retro-computing compiler for 8-bit and embedded CPUs, had and still has better C standard support than MSVC:

https://sdcc.sourceforge.net/

If a couple of open source devs can easily outperform the MSVC team, Visual Studio really is in trouble ;)


SDCC was certainly not what I had in mind in my comment; rather, stuff from TI, Keil, Microe, and co.

A hobbyist retro-computing compiler for 8-bit and embedded CPUs is hardly anything for Microsoft to worry about.


I've never really understood the imperative to "move forward". If you want to do something new, do something new. Why break what works?



