What every compiler writer should know about programmers (2015) [pdf] (tuwien.ac.at)
88 points by mpweiher on April 14, 2019 | 62 comments



The author seems to be saying, in section 4, that compilers should just stop doing some optimizations (such as converting int to size_t where appropriate) and should expect programmers to do those optimizations instead, in order to reduce undefined behavior.

The problem with this argument is that it doesn't match economic reality. Compiler authors wouldn't disagree that it would be nice to not have to do these optimizations! (Chris Lattner has said as much to me, for example.) Unfortunately, there is a lot more application code out there than there is compiler code. So it makes economic sense to add optimizations to the compiler rather than to individual programs, so that the huge universe of C and C++ programs can benefit without having to be optimized by hand at the source level. This is why companies like Google and Apple employ compiler engineers: so that their large codebases can become faster automatically.

It's trendy to complain about compiler engineers because they're an easy target, and because of nostalgia for the days of Turbo Pascal when optimizations weren't done and programs were slow. It's much less trendy to analyze the complex circumstances that have led to UB exploitation in C and C++. In my opinion, if I had to assign blame to one thing, it would go to the C language itself, for, e.g., encouraging the use of int for array indices.

P.S. As always, Fabian Giesen's description of why compilers exploit signed overflow is a must-read: https://gist.github.com/rygorous/e0f055bfb74e3d5f0af20690759...

Note that the conclusion is not as simple as "compilers should stop doing this optimization in C". There in fact isn't a great solution precisely because int is 32 bits for popular ABIs in C; the best options are "hope the programmer used size_t right" or "use another language".
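
To make the tradeoff concrete, here's a rough sketch (my own, not code from the gist) of the two loop shapes on a typical LP64 target:

    #include <cstddef>

    // Signed 32-bit index: because signed overflow is UB, the compiler may
    // assume i never wraps, keep it in a 64-bit register, and address a[i]
    // without a per-iteration sign extension.
    void scale_int(float *a, int n) {
        for (int i = 0; i < n; i++)
            a[i] *= 2.0f;
    }

    // Pointer-sized index: the question never comes up, which is what the
    // "just use size_t" advice amounts to.
    void scale_size_t(float *a, std::size_t n) {
        for (std::size_t i = 0; i < n; i++)
            a[i] *= 2.0f;
    }

With a 32-bit unsigned index the compiler would instead have to honor wraparound, which is where the codegen difference shows up.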


> The problem with this argument is that it doesn't match economic reality. Compiler authors wouldn't disagree that it would be nice to not have to do these optimizations!

I think that the extent to which that's actually true is pretty debatable. Note that both Linux and Postgres use -fwrapv/-fno-strict-overflow and -fno-strict-aliasing. I'm not a compiler engineer, but if I were, I might think to ask why that's the policy of these projects, and address the criticism on something closer to its own terms.
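
For example, here is the kind of type-punning idiom that -fno-strict-aliasing is there to keep working (a sketch of my own, not code from either project):

    #include <cstdint>
    #include <cstdio>

    // Reading a float's bits through a uint32_t pointer violates the strict
    // aliasing rules, so the optimizer may assume the accesses don't alias
    // and reorder or drop them; -fno-strict-aliasing makes this behave the
    // way most C programmers expect.
    std::uint32_t float_bits(float f) {
        return *(std::uint32_t *)&f;
    }

    int main() {
        std::printf("%08x\n", float_bits(1.0f));  // 3f800000 on IEEE-754 targets
    }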

The author makes a clear distinction between "Optimization*" and "Optimization". That distinction might be a bit squishy, but not so much that it isn't still useful. Even the Fabian Giesen thing that you linked to says "A lot of people (myself included) are against transforms that aggressively exploit undefined behavior", right at the start. It's trendy to blame the C language itself, but surely we can expect C compilers to care about the C language.


The problem is that this group of programmers is operating on a language that is almost-but-not-quite C. They would very much like to believe that there is a set of flags that makes the compiler run in the mode of almost-C that the application engineers thought they were working with all those years ago.

But the fact of the matter is that the ISO C standard is the only contract which has been written down between application authors and compiler authors. No amount of flags are enough to guarantee that a particular individual's expectation of almost-C will be equal to what their compiler is producing.

What the industry needs to move forward are two things. The first is a set of tools that can help identify more cases of almost-C that need to be ported to C. We're getting this, in the form of UBSAN, ASAN, and their kin. The other is for the veterans to accept that their particular brand of almost-C doesn't actually exist. There's only ISO C. For almost three decades now, that's all we've had. The engineering cost of trying to keep almost-C alive is just too high to keep paying.


This is a common trope ("language users are just ignorant"), but couldn't be more wrong.

There actually is explicit language in the C standard that gives a range of "permissible behaviours":

Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

This gives a range of permissible behaviours, and what current compilers do with UB clearly does not fall into that range, whereas it closely matches the expectations of those "ignorant" language users.

In the first ANSI/ISO C standard, this language was binding; it was only in a later version that it was declared non-binding. So the language is in the standard, but implementers are free to ignore it and still call themselves standards-compliant, which they do with abandon.

> the ISO C standard is the only contract

This is also a common trope, and also not (completely) true. In fact, the C standard very explicitly says that the standard by itself is not sufficient to make a useful/usable implementation!

There is a way in which this sentiment is becoming true, which is that we used to have a different contract between user and compiler writer: a sales contract. Users were also customers, and if you produced a compiler that only conformed to the standard, your customers would stop giving you money.

Today, compilers are largely free for users, and therefore the standard actually is in many ways the only "contract" between user and writer. So that's a problem, as the goals and incentives aren't aligned.


> This is a common trope ("language users are just ignorant"), but couldn't be more wrong.

It isn't a matter of ignorance; using those switches to get a vendor-specific version of not-quite-C is a matter of willful disobedience.


Hmmm...actually, those switches turn the no-longer-C back into C.


Compiler developers do care about the C language. They just care about it as it's actually used. In real-world C, people write "for (int i = 0; i < array->length; i++)". The Linux kernel might not, but that's a choice the Linux kernel developers have made. Lots of C code uses different idioms from the Linux kernel, and users expect that code to run fast. Slowing down that code out of a notion of language purity (especially when "Friendly C" isn't even defined) does a disservice to users.

I can't speak for anyone but me, but I read the opposition to doing these kinds of optimizations in that Gist as preferring compiler warnings to suggest users use size_t when appropriate. This would be great, but I'm skeptical that, for example, Linux distros would be happy about that. Most Linux distributions consist of packages of vast amounts of unmaintained or mostly unmaintained C code. There simply isn't the manpower available to fix all that code to be properly 64-bit optimized (which is in fact why int is 32 bits on x86-64 Linux in the first place).


> Most Linux distributions consist of packages of vast amounts of unmaintained or mostly unmaintained C code

Well, that's exactly the kind of code I'd expect to simply break when the compiler becomes able to prove new optimizations based on undefined behavior.

When that happened to me, I decided there was no way I was trawling through our massive legacy codebase to fix undefined behavior that ubsan can't catch. The result was -fwrapv -fno-strict-aliasing being added, and any speed loss was noise, under 0.5%.

Then in our modern codebase, I did my absolute best to ensure we built it with no undefined behaviour, including contortions to avoid undefined left shifts... and we still got screwed over by a new version of clang.

For all the worry about "for (int i = 0; i < array->length; i++)" specifically, how often is that actually measurable whole-program? It's one additional sign extension in a tight loop, plus a store/load in a complex loop, which a modern CPU would completely hide 99.9% of the time.


> It's one additional sign extension in a tight loop, plus a store/load in a complex loop, which a modern CPU would completely hide 99.9% of the time.

Not true. It's the store/load in the tight, simple loops that is the problem. Sometimes making your loop too big will cause you to fall out of the uop cache on x86, which is a significant performance hit. Modern CPUs don't hide this inefficiency, which is why compiler developers implemented the optimizations in the first place. (LLVM and GCC's benchmarking infrastructure is excellent; they test this stuff continuously and don't land changes that don't demonstrate benefits.)


My suspicion is that loops that need even a dozen registers are in general either not tight enough on modern CPUs that a spill is measurable (maybe they were that tight 10-15 years ago), or were optimized after 64-bit was common. Or maybe I'm just disappointed from all the times I've eliminated spills and other extraneous µops in small tight loops with no measurable gain even in microbenchmarks.

I kinda wish that benchmarking infrastructure was reflected in my experience; I've spent entirely too much time fixing performance regressions in various inner loops caused by newer compilers. Though to be fair, only once has that regression exceeded 2x.

To give some context: if a loop is large enough that it's on the verge of falling out of the µop cache, I'd consider that a massive loop. And from my experience any code that complex I definitely wouldn't trust newer compiler versions to not randomly add a dozen or two instructions.

Anyway... exactly how much does -fwrapv impact LLVM or GCC's benchmarking testsuite?


I'm suggesting that compiler authors don't seem to pay sufficient attention to how C is actually used, though. It may well be true that I don't have the full picture here, but I don't think that these concerns can be waved away so easily.

Postgres has code like "for (int i = 0; i < array->length; i++)" in many places. I believe that the same is true of the kernel.


> I'm suggesting that compiler authors don't seem to pay sufficient attention to how C is actually used, though.

Compiler developers pay more attention to how C is used than virtually anyone else. After all, they are the ones who have to deal with bug reports--often bug reports saying that code is slow. It's not like compiler engineers implement optimizations for no reason, or just to make themselves seem smart. They implement optimizations because somebody files a bug report saying that some code is slow, and they go to fix it.

> Postgres has code like "for (int i = 0; i < array->length; i++)" in many places. I believe that the same is true of the kernel.

If that's true, then Postgres is giving up performance on 64-bit for that, which can be significant. That is a decision they can opt into. It's not a decision everyone else should have to opt out of.


I think Linux and Postgres are much more the exception than the norm. They're both relatively low-level, critical infrastructure code worked on by domain experts with low-level expertise -- not at all what most software looks like, and exactly the sort of people who would override defaults anyway.


> P.S. As always, Fabian Giesen's description of why compilers exploit signed overflow is a must-read: https://gist.github.com/rygorous/e0f055bfb74e3d5f0af20690759....

What's missing from this is the only data point that matters: what's the slow-down on large programs if you are forced to compile without taking advantage of signed overflow UB?

If you're breaking the language to get perf, then it's not enough to show that an optimization is possible. You also have to show that it has a net effect, outside of the microbenchmark you cherrypicked.


This entire issue comes from "poor" (by modern standards, it was great for its time) language design. The compiler writers have to fight with the compiler users because the language that was meant for communication between them does not include all of the information that the compiler needs to know. Instead, it depends on implicit things that "everybody knew" in 1980 but which are not all true today.

In order to know what your C program is going to do, you have to have some idea of how it is compiled. That's bad, because if they go and change how it is compiled your expectations will be wrong. Instead languages should be designed so that you write down what you want to have happen, without having to think about how the compiler will implement it.


Very rarely have I gotten deep enough into a project and not had to pop open the hood. How the lower level abstraction implements things is often important and often leaves you with no practical choice but to depend on implementation details. These are just unavoidable realities. “Hiding implementation details” is one of the great lies of abstraction.


> “Hiding implementation details” is one of the great lies of abstraction.

do you have some examples? i would imagine it is simply a matter of (poor) documentation or something not doing something it was documented to do, or vice versa.

if something that has its implementation details abstracted away correctly and is documented fully and accurately, it seems to me the only reason to "pop open the hood" is to ask if it could be faster. i suppose you could want to abuse the abstraction by relying on implementation details, but that doesn't seem like a good thing to do.

basically, i wouldn't say hiding implementation details is a lie of abstraction. i would wager it's developers' inability to abide by the laws of abstraction.


> some examples?

Exception handling. EH is often sold as a "zero cost abstraction". The reality is far different. To see what I'm talking about, create a simple D (or C++) struct with a constructor and destructor. Write a simple piece of code that constructs an instance, calls some function where the implementation is hidden from the compiler, and then expect the destructor to run.

Compile it with and without EH turned on. Take a look at the code generated. You might be quite discouraged.
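
A minimal version of that experiment might look like this (the names are mine):

    // Build once with -fno-exceptions and once without, then compare the
    // assembly: with EH enabled the compiler has to emit unwind tables and a
    // cleanup path so ~Guard still runs if opaque_call() throws.
    struct Guard {
        Guard();
        ~Guard();
    };

    void opaque_call();  // implementation hidden from the optimizer

    void demo() {
        Guard g;
        opaque_call();
    }  // ~Guard() also runs here on the normal path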

The existence of exceptions also significantly degrades flow analysis by the compiler optimizer, to the point where large swaths of optimizations are simply abandoned if exception unwinding may occur.


As an example, latency of execution and expected distribution of latencies under various conditions can have a great many side effects in the code that uses that abstraction. As in, you would architect the code using that abstraction differently depending on the expected latency behaviors.

I've seen many operational performance bugs that were ultimately tracked down to some abstraction being much faster or much slower than expected by the code using it under certain conditions. Documenting this thoroughly essentially exposes the underlying implementation of the abstraction.


That's an example of what I'm talking about: for example C has no facility for expressing instruction timing, so if you are on a microcontroller and need to generate a signal you have to keep an eye on the assembly. Ideally there would be some facility for telling the compiler that a certain variable needed to have its value computed after N cycles, then you could use it for those specific cases and not abandon the optimizer for the rest.


I feel like something like that would be worth writing in assembly directly anyway, wouldn't it? And with C you usually don't need to give up compiler optimization entirely, just carve out the specific sections.

By that I mean that since you're sitting on a pretty thin abstraction anyway when you're on microcontrollers, doesn't it make sense to code the parts that require piercing the abstraction using the toolset that sits below the abstraction?


Let's say it took 50us to compute the next value, but the circuit only needed the next value to show up at the register in 100us. If you were writing it all in C you couldn't time it at all (except maybe by trial and error). If you wrote just that one routine in assembly, you would have to wait for 50us after every loop doing nothing. If you were writing it all in assembly (abandoning C entirely), you could stick random instructions from elsewhere in the program into the 50us window of extra time. If instead of C there was a language that let you specify timing, the compiler could either wait or if it was smarter do that optimization automatically. Over time as new optimizations like that were invented the same code could become even better (completing the non-real-time tasks faster while still meeting timing deadlines), and if a new microcontroller was introduced the same code might still work. That's an example of the benefits of keeping implicit "you have to know how the compiler works" stuff out of your languages.

(Pragmatically, you have to poke through the abstraction because what you have in real life is C. However if we recognize this as an opportunity to do better we might reach the ideal in the future.)


That feature wouldn't make any sense outside of very basic microcontrollers; you don't have any sort of determinism due to things like branch misprediction, cache misses, and pre-emptive multitasking.


I am a bit surprised to hear that this is true in your common case. What problems are you usually solving and what tools are you usually using?


"Compiler writers are sometimes surprisingly clueless about programming."

Sure, but there's another aspect, which is trying to guess what minimum level of knowledge the user of the language should be expected to have. For an interesting debate on whether an "isOdd(int i)" function, as opposed to just using (i & 1), should exist as a standard library component, see:

https://digitalmars.com/d/archives/digitalmars/D/food_for_th...


> For example, if I saw "isOdd" in code, I'd wonder what the heck that was doing. I'd go look up its documentation, implementation, test cases, etc., find out "oh, it's just doing the obvious thing", and be annoyed at the waste of my time.

Huh. That is not at all what I would do. I would think 'oh, that's checking if the thing is odd' and keep going. If I see i&1, then I have to unpack that, parse it, figure out why it's there and that it checks oddness. Also relevant is that i&1 checking if i is odd is a detail of i's internal representation. There's nothing intrinsically odd-related about the operation &1; that it checks for oddness is a co-incidental side effect, just like how making a truth table of (0, 0) => 0, (1, 0) => 1, (0, 1) => 1, (1, 1) => 1 happens to correctly simulate the behaviour of an or logic gate, even though there's nothing inherently or-y about it.


    bool isEven(int i) {
        return !isOdd(i);
    }


Good to see a detailed study on this --- it's been my experience that Intel's compiler (icc) is far less eager to exploit undefined behaviour, yet generates just as competitive if not sometimes significantly better code than GCC.

It's also worth mentioning the note that the standard has about UB (emphasis mine): "Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to *behaving during translation or program execution in a documented manner characteristic of the environment* (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message)."

That emphasised part is what the majority of programmers using C expect.


Could that be because of ICC’s dispatch code? It compiles code for different feature sets and then uses the CPUID feature list to know which to run.


That's for autovectorisation, but my experience has been that icc's regular scalar code generation is what makes it perform so well. Instruction selection, register allocation, etc.


Would you happen to know if recent versions of icc still generate horrible code for AMD CPUs (particularly Ryzen/EPYC)?


Aren't those common to all compilers? So it just does everything that clang/gcc do also, but does it better?


Depending a bit on what you are doing, stuff compiled with `icc` runs about 20% faster than the same code compiled with `gcc` or MSVC's compiler on Intel CPUs.


Hmmm. The one time I tried it (for a crypto miner), it ran at about 60% of the speed of gcc and clang (which were comparable; one was slightly ahead, I forget which, but I think it was gcc). Granted, I may have gotten the flags wrong (there were a lot of them), but I turned on all the ones that looked relevant.


Interesting. Any links to benchmarks?


GCC has some options that you can specify in order to change how some of the undefined behaviour stuff is implemented.


In section 2.2, the author claims that the intent of a programmer is knowable when the programmer writes something that could result in undefined behavior, but this claim does not hold up for the example given. When a program attempts to read from beyond the end of an array, it is rarely something that a programmer intended to happen, and it is rarely the case that any action the program takes as a response to that event (even if it is a segmentation or general protection fault) is in conformance with what the programmer intended to happen after the sequence of inputs that led up to this event. It is much more likely that the programmer did not even think that this could happen, and expected that the program would continue to run and produce results conforming to the purpose they intended the program to serve.

If I can divine what the author of this article actually intended to say, I think it is that, in at least some cases of undefined behavior, programmers commonly have intuitions about what would happen if that behavior occurred during the execution of their programs. I don't know if the rest of the thesis can be salvaged by substituting that reading.


I think it is clear they mean what the programmer would expect /even if they knew the code was wrong/:

"And that’s what programmers expect: In the normal case a read from an array (even an out-of-bounds read) performs a load from the address computed for the array element; the programmer expects that load to produce the value at that address, or, in the out-of-bounds case, it may also lead to a segmentation violation (Unix), general protection fault (Windows), or equivalent; but most programmers do not expect it to “optimize” a bounded loop into an endless loop."


Well, that's what I wrote in my second paragraph, and the article would have been more straightforward if the author had not drawn an unjustified conclusion from it.

For example, he goes on to say that some programs capable of undefined behavior are nevertheless correct if compiled in a certain way, but how is the compiler-writer to determine that? The unstated assumption is that there is a preferred compilation under which all programs are 'more correct', but when it comes to undefined behavior, then, by (non)definition, there is no 'more correct' way to compile it. Therefore, saying there is a 'more correct' way is equivalent to saying that at least some undefined behavior should be defined in a particular way. The article would have been simpler if the author had realized that this is what he is proposing, and skipped the confused correctness arguments.


If you're interested in the weight that the author puts on 'conforming' and don't know exactly what it means: a 'conforming' program is one that runs correctly on some 'conforming' implementation. So it's a program that at some point ran correctly on some compiler somewhere. If your C program is only 'conforming' then it isn't necessarily portable between compilers or even versions of compilers.


A major issue with removing UB exploitation is that you've now created a new dialect of C, and each compiler implements a different one. It becomes harder to port code, and you end up with bugs that can be harder to catch, especially if you don't consider what you're doing an error, as now you can't even use the sanitizers.

What my team has been looking at, to help developers that have these issues, is providing low-overhead ways to give defined, but likely to crash, behavior for these cases. This helps protect against UB without creating a new dialect. An example would be variable auto-init, which intentionally doesn't support zero init, as that is likely to hide bugs: https://reviews.llvm.org/D54604

I'm fine with fully defining some things that are currently UB or IB. I'm also fine with keeping things as UB. What I'm not fine with is creating this implicit C that programmers think they know, but that isn't actually defined anywhere, and that not everyone working on the compiler is on the same page about.
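
Roughly, the class of bug auto-init targets looks like this (hypothetical example):

    int parse_flags(bool verbose) {
        int flags;           // uninitialized
        if (verbose)
            flags = 1;
        return flags;        // UB when verbose is false: reads an
                             // indeterminate value
    }

    // With clang's -ftrivial-auto-var-init=pattern, `flags` starts out
    // filled with a recognizable pattern, so the bad path tends to crash or
    // misbehave loudly instead of silently leaking whatever was on the stack.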


To me, whenever something is undefined at one level, there are still constraints on it at another level. We go about our daily lives without laws and regulations explicitly detailing our every interaction, and rely on custom and ingrained behavioral rules to fill in the gaps. Every level of abstraction has another layer behind it. When I read someone arguing about undefined behavior allowing a compiler writer to do anything, it makes me imagine someone who, if they could afford the finest lawyers and found the appropriate loophole, would consider it not just their right, but their duty, to kill and eat me. It's not terrifying to imagine there are monsters here and there; there always have been a certain number. What terrifies me is the idea that this mentality is infecting society and altering norms and interactions. It doesn't feel like evil as such, it feels more like a metaphorical prion disease. There's something fundamental to being a human being in being able to deal with undefined situations by breaking out of a given context into a broader one.


I'm tempted to write a "what every programmer should know about compiler writing" article. In particular, that they're not alone in the universe. The price that you have to pay for the incredible combo "performance + portability" is undefined behavior. Why do people assume that if they don't care about some sorts of portability and exotic platforms, nobody else does?


Probably because a lot of programmers are just used to the fact that at the desktop level it's pretty much all x86/64.

However, when you start talking about microcontrollers, for instance, the architectures are all over the place.

C covers both these use cases.


Indeed, the x86 little-endian assumptions of programmers (even compiler engineers) can be hilarious (in the sad clown way) when targeting something like big-endian MIPS64.


I'm seeing Rust with stricter aliasing rules, targeting microcontrollers, using the same high performing LLVM backend, without undefined behavior lurking behind every corner. That price is not a given.

The one advantage C has on portability is that for any given platform, you can probably be assured there's at least one buggy C compiler for it that doesn't properly comply with the standard out there. There's a niche for that, but it is a niche.

Otherwise? C and C++ aren't great at portability. I've spent a lot of time porting games, and a lot of that time has been spent porting C and C++. I can't even assume two compilers will compile the same defined-behavior code targeting x86-64 the same way - someone somewhere will have typedefed something in terms of long, and put that into a bytewise serialized struct, and one compiler will be LP64 whereas the other will be LLP64.
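
The classic shape of that bug, sketched from memory rather than taken from any particular codebase:

    #include <cstdio>

    // Serialized bytewise to disk or over the network.
    struct SaveHeader {
        long version;     // 8 bytes under LP64 (Linux/macOS),
                          // 4 bytes under LLP64 (64-bit Windows)
        long entryCount;
    };

    int main() {
        // The on-disk layout silently differs between the two ABIs, so a
        // file written on one platform won't read back on the other.
        std::printf("sizeof(SaveHeader) = %zu\n", sizeof(SaveHeader));
    }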

Add ARM, PPC, or even just 32-bit x86 into the mix, and suddenly previously working programs turn into multi-month fixup projects on medium sized codebases, before even touching all the system APIs I need to change because of the completely anemic standard library.

The further off the beaten path we go, the more likely we are to encounter straight up compiler bugs where not even the authors correctly interpreted the standard, so now we get to debug other people's compilers and write extensive test suites which we pray will catch all the codegen bugs. They won't, but they'll catch some percentage of them, at least.

There's a whole slew of porting issues that just don't show up in C#, Java, JavaScript, ActionScript, Lua, Python, Squirrel, or any number of other programming languages games and game tools use - that are uniquely terrible about C and C++. Maybe I fixup a few paths, maybe there's a few filesystem permission issues, maybe a few OS APIs don't work quite the same.

This is all time not spent optimizing your actual program, nor porting things in your program that are actually supposed to be platform specific.

C and C++ aren't the greatest languages from a performance standpoint either. I still can't convince MSVC to properly align stack variables consistently, so I must resort to my own stupid padding hacks or dynamic allocation. Not that this is sufficient to coax the optimizer into emitting appropriate SIMD instructions, so I resort to doing so by hand. The standard library completely lacks any means of actually doing this, and whatever middleware I choose probably lacks targeting for at least one platform or instruction set, so I get to port that too, taking extreme care to not introduce extra overhead via the supposedly "zero cost" abstractions these languages give me. Maybe it's just easier to write the same math N times instead. They're not JIT compiled, although function multi-versioning is finally spreading (quite late) to enough compilers that you might not need to choose between a baseline architecture or multiple binaries.

Then a constexpr function shows up in my profiling results, I see trivial static init invoked at runtime, I chase down another optimizer induced heisenbug and figure out if it's UB or a compiler codegen bug, and I'm left wondering why people are so busy defending int overflow being UB for some loop microbenchmark instead of fixing something that would actually help me with perf.


Rust is awesome, and I thought of mentioning it; however, instead of proposing to "fix" C, I wish people would focus on getting Rust to have the same sort of footprint that C has. Because right now, sorry, but it just isn't "proven" yet in the same way that C is. I'm rooting for Rust, but it still has a way to go.

(otherwise, yes, just because it's C doesn't automatically mean it's portable. However, C's continued success across the ages can't be exclusively ascribed to its flaws :). How many different Rust compilers do you know, so that you may complain about inconsistencies among their borrow checkers, or about flaws in their codegens and optimizers?)


If it makes you feel any better, I have given up on seeing C fixed in my lifetime, and have contributed at least one patchset to rustc and a related patch to lld-link. You're 100% right that Rust isn't as proven - even to me and my niches - but it's been proven to me that C will remain broken, and Rust is one of the more promising alternatives.

The choice to seek out new alternatives needs justification though. "Why not C" and "What should these other languages do differently" are important discussion topics, and those topics invariably circle around to basically the same conversation anyways - what would it take to "fix" C, and how can we do that in not-C since it's not getting done in C.

(While I haven't tried multiple compilers for rust, I have for C#, and the equivalent parsers + runtimes for JS. I've even encountered some inconsistencies! Just... not on the same order of magnitude as in C++. Ditto for codegen bugs.)


We have this discussion every few years :) https://news.ycombinator.com/item?id=11219874

I have the same comment now that I had then.


And reading it once again, I'm still sympathetic to Anton's argument and frustrated by your response. Although reading the exchange between the two of you at the bottom, I do see why you might have difficulty looking afresh at his argument. But in the hope of moving the discussion ahead at least a little, could you perhaps respond here to forthy's comment from the last thread? https://news.ycombinator.com/item?id=11226391. In particular, why does the C standard seem to prefer undefined behavior over the easier to reason about implementation defined behavior?


Implementation defined behavior isn't actually any easier to reason about. Compilers are free to choose "assumes this never happens" as their behavior, as there is no limit on what can happen. This is different from implementation defined value or unspecified behavior (which generally has a list of options).


I really think that we need a standard variant of C in which all undefined behavior is (somehow, even if arbitrarily) defined. (Also: a variant with some restricted level of polymorphism would make C much less painful.)


That's not possible in general; for instance, if you write a block of memory and then execute it, you can't really define what's going to happen without including an ISA in your language spec. Another example would be executing a syscall with a not-yet-used number. The ability to do both of these things is necessary for a low-level language such as C.
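
For instance (a Linux/x86-64-specific sketch, which is exactly the point; hardened systems may refuse the writable-and-executable mapping):

    #include <sys/mman.h>
    #include <cstring>

    int main() {
        // x86-64 machine code for: mov eax, 42; ret
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

        void *mem = mmap(nullptr, sizeof code,
                         PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) return 1;
        std::memcpy(mem, code, sizeof code);

        // Converting an object pointer to a function pointer is itself only
        // conditionally supported; nothing in the language spec can say what
        // the call does without describing the ISA and the OS.
        int (*fn)() = reinterpret_cast<int (*)()>(mem);
        return fn();
    }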


> you can't really define what's going to happen without including an ISA in your language spec.

That's called "implementation-defined".


True, but there are a lot of things that are also undefined at the ISA level


Don't specify that at all in the standard. Let compilers implement it as an extension, like inline asm. They'll have to make it have sane behaviour because they don't have the excuse of 'the standard says it's ub'.


The standard actually has a definition of the range of permissible behavior for UB:

Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).

However, an update to the standard made this definition non-binding. It is still in the standard, verbatim, but compiler writers have interpreted that to mean they can completely ignore it and build compilers whose behavior goes way outside the "permissible" range.

See https://blog.metaobject.com/2018/07/a-one-word-change-to-c-s...


You seem to be saying that the meaning of this clause in the standards is undefined (or is it implementation-defined?)

I have no idea what the authors of the standard would have in mind in putting a non-binding constraint in a standard, unless there's a faction that hopes to make it binding in the future, and hopes this will pave the way.


It used to be binding; it was later changed to be non-binding.


Somewhat related, does anybody here know what compiler flags to use to get "friendly C" (aka do-what-I-mean, or high-level assembler)? I want it to do the "obvious" thing in case of UB, for example dereferencing a pointer will just simply read the memory at that address.

(I can live without the optimizations, as most code I write is bounded by algorithmic complexity or IO, or isn't performance sensitive. If I find it necessary, I'd tune the options for a specific piece of code.)

I know about: -fno-delete-null-pointer-checks -fno-strict-aliasing -fwrapv

Are there any other such options?
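
For what it's worth, the classic motivating case for -fno-delete-null-pointer-checks looks something like this (my own reconstruction):

    struct Device {
        int status;
    };

    int read_status(Device *dev) {
        int s = dev->status;     // dereference first...
        if (dev == nullptr)      // ...then check
            return -1;
        return s;
    }

    // Because the dereference would be UB if dev were null, the optimizer
    // may assume dev is non-null and delete the check entirely;
    // -fno-delete-null-pointer-checks keeps it (this is one reason the Linux
    // kernel builds with that flag).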


This is unlikely to happen because of the performance penalty it would incur (e.g. defining outcome of all out-of-bound memory accesses).


I would expect that it would still be faster and use less memory than most of the other popular languages today (e.g., C#, Python, or JavaScript).

It would be just another option to choose from. Solving some of those undefined behaviors may even provide other benefits (e.g., better compiler error messages, optimizations, or portability).



