
It would be nice if C++ could really settle on one single error-handling system like this. I know the contracts proposals have been shot down several times, but I think those would help a lot. People will only code defensively if there is some gain. Having the compiler enforce contracts and use them for optimization would be the only way to ensure adoption. If people thought they could get a 5% speed boost, they would annotate their code with more of their assumptions.
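For a concrete sense of what such annotations could buy, here's a sketch using the portable GCC/Clang `__builtin_unreachable()` idiom (C++23 spells the same thing `[[assume(...)]]`). The function names are illustrative, not from any proposal:

```cpp
// Telling the optimizer "this condition always holds" lets it specialize
// the code that follows. GCC/Clang idiom; C++23 offers [[assume(cond)]].
inline void assume(bool cond) {
    if (!cond) __builtin_unreachable();
}

int div32(int x) {
    assume(x >= 0);   // the promise: callers never pass a negative x
    return x / 32;    // optimizer may now emit a plain shift (x >> 5)
                      // instead of the sign-correcting division sequence
}
```

The catch, as discussed below, is that nothing checks the promise: passing a negative x here is undefined behavior, exactly like a violated contract annotation would be.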



Some people, Microsoft higher-ups in particular, are vociferous opponents of allowing contract annotations to be used for optimization. Their arguments for this position are not generally very coherent.

Compilers are often surprisingly bad at using such information effectively. They will probably get better at it, but slowly.


Not sure about Microsoft higher-ups in particular, but C++ and related compilers are notoriously bad (in fact negatively good) at enforcing the assumptions they use for optimization. E.g.:

  void foo(bar_t* p) {
      baz_t* q = &p->baz;    // UB if p is null, so the compiler infers p != null
      if (!p) panic("...");  // ...and may delete this now-"dead" check
      do_stuff(p, q);
  }
A (stereo)typical C++ compiler will assume that p is non-null, then actively remove the code checking that. Surprise! You now have a security vulnerability. But only when optimisations are turned on, so if you have distinct debug and release modes, your testing and other debugging will never catch it.
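The conventional fix is to make the check dominate the address computation, so there is no UB for the compiler to exploit. A minimal sketch, with bar_t, panic, and do_stuff stubbed out for illustration:

```cpp
#include <cstdio>
#include <cstdlib>

struct baz_t { int x; };
struct bar_t { baz_t baz; };

[[noreturn]] static void panic(const char* msg) {
    std::fprintf(stderr, "%s\n", msg);
    std::abort();
}

static void do_stuff(bar_t*, baz_t*) { /* ... */ }

void foo(bar_t* p) {
    if (!p) panic("...");  // check first: nothing on this path uses p
    baz_t* q = &p->baz;    // reachable only when p != nullptr, so no UB,
    do_stuff(p, q);        // and the compiler cannot remove the check
}
```

With this ordering the null check is observable behavior on the null path, so no conforming optimization can elide it.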

I suspect this is a recurring source of vociferous opponents of allowing any information to be used for optimization, regardless of how much compilers promise that this time is totally different and they'll definitely actually check that it's correct before using it.


But you already have a security vulnerability regardless, if you dereference p before checking if it's null.


> But you already have a security vulnerability regardless, if you dereference p before checking if it's null.

Firstly, that's not (in general) true, unless you count denial of service[0] as a vulnerability; reading from address zero and then panicking has the same security implications as segfaulting while trying to read, namely the software immediately halting.

More importantly, the above code does not dereference p (at all, though do_stuff presumably does). `&p->baz` adds a constant offset to the (register storing the) pointer, without touching memory at all. There is no vulnerability (assuming the obvious assumptions about how foo and do_stuff work and are used) until the compiler introduces one.
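That claim is easy to check: for a valid p, `&p->baz` is just p's value plus a compile-time member offset, with no load or store. A small sketch using hypothetical struct definitions:

```cpp
#include <cstddef>   // offsetof
#include <cstdint>

struct baz_t { int x; };
struct bar_t {
    long  header;    // some field before baz, so the offset is nonzero
    baz_t baz;
};

// Pure pointer arithmetic: no memory is touched computing &p->baz.
std::uintptr_t member_offset(bar_t* p) {
    return reinterpret_cast<std::uintptr_t>(&p->baz)
         - reinterpret_cast<std::uintptr_t>(p);
}
```

(Formally, evaluating `&p->baz` with a null p is still UB in C++, which is exactly the license the compiler uses to delete the check, even though no actual memory access occurs.)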

0: For example, if you count the fact that someone can DDoS the machine it's running on as a vulnerability in any network software. Which is somewhat reasonable in some contexts, but not in the context of compiler bugs.


Correct!

It is extremely common -- i.e., absolutely normal -- to write code after an assertion that would be UB if the assertion were false. Any worry about eliding checks should apply even more so to all that UB code. But people who hate optimization based on assertions have implicitly chosen to ignore all the UB and concentrate only on the elided checks.
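A concrete instance of the pattern, using the standard `assert` macro (names illustrative): the line after the assertion is UB whenever the assertion is false, in any build where NDEBUG strips the check.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

int get(const std::vector<int>& v, std::size_t i) {
    assert(i < v.size());  // compiled out entirely when NDEBUG is defined
    return v[i];           // UB if i >= v.size() -- the "UB after the check"
}
```

Eliding a redundant check based on the assertion concedes nothing that the unchecked `operator[]` on the very next line hadn't already conceded.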

It's like complaining that you don't have a parachute when you know the doors couldn't be opened anyway.

In a language that doesn't seem to have UB, those worries might seem smaller. But every substantial library defines its own equivalents of UB that, while they may have less drastic effects on the runtime consistency of the process, equally undermine the coherent behavior of the program.




