
Agreed. You put your finger directly on something that's always bugged me about optimistic code optimization strategies. The code I write is supposed to have deterministic behaviour; if that determinism changes, it should be because of a change I made, not a flag in the compiler. The resulting behaviour is completely opaque and impossible to correlate with the source, which makes it very hard to figure out whether a given change will lead to better or worse performance.

"Abstraction violation" is a good way to put it.

The excuse for this is that performance is not considered part of the behavior of the program, so it doesn't bear on the question of whether your program is deterministic.


These days you don't know what CPU you'll run on, so you can't make performance guarantees anyway. Even in embedded, we've been burned often enough by the one CPU we targeted going out of production that in most cases we no longer depend on it.


Except for the fact that performance optimization frequently does change the behaviour of the program.


That is only possible if the optimization is wrong (rare), or the program already had undefined behavior (very common, because undefined behavior is very easy to trigger in C).
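
For concreteness, here's a minimal sketch of that second case (a made-up example, not from this thread; exact output depends on compiler and optimization level). Signed integer overflow is undefined behavior in C, so the optimizer is allowed to assume it never happens:

    #include <limits.h>
    #include <stdio.h>

    /* Signed overflow is UB, so the compiler may assume
       x + 1 > x always holds and fold this to "return 1".
       Typically prints "no wrap" at -O2 but "wrapped" at -O0. */
    int will_not_wrap(int x) {
        return x + 1 > x;   /* UB when x == INT_MAX */
    }

    int main(void) {
        printf("%s\n", will_not_wrap(INT_MAX) ? "no wrap" : "wrapped");
        return 0;
    }

Either output is "allowed" once UB is in play, which is exactly how the same source can behave differently under different flags.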


I'll give an example: -ffast-math.

There's extensive literature out there on how fast-math changes the behaviour of code. I've been bitten by this a couple of times already.
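
The canonical failure mode, as a hypothetical minimal reproduction (behavior varies by compiler and version): -ffast-math implies -ffinite-math-only, which lets the compiler assume NaNs never occur, so a NaN guard written as x != x can be folded away entirely.

    #include <stdio.h>

    /* Under IEEE 754, x != x is true only when x is NaN. With
       -ffast-math the compiler may assume NaN never occurs and
       fold this check to a constant 0. */
    static int is_nan(double x) {
        return x != x;
    }

    int main(void) {
        volatile double zero = 0.0;  /* volatile blocks constant folding */
        double d = zero / zero;      /* produces NaN at runtime */
        /* may print 1 without -ffast-math and 0 with it */
        printf("is_nan: %d\n", is_nan(d));
        return 0;
    }

Same source, same input, and the answer flips on a compiler flag.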


If performance optimization changes anything other than performance it's either a bug in the program or a bug in the compiler.


Yes, but program bugs are obviously very, very common. Developers already do a great job of adding bugs to their programs; they don't need compilers compounding that problem in non-deterministic ways.
