Hacker News
The Error Model (2016) (joeduffyblog.com)
56 points by vips7L 61 days ago | 9 comments

Excellent article!

I think about error handling quite often, because I work with data a lot and so often have to make compromises between defensiveness and permissiveness (or whatever the opposite of defensiveness is called). And since 'error' is sometimes a fairly subjective notion, there is usually a spectrum of options for handling one, too.

A year ago, I wrote an article [0] on different 'styles' of error handling in Python specifically (via mypy). It's also potentially applicable to other languages that have some sort of generator semantics and covariant union types; for example, I've been using it successfully with JS Flow [1].
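The union-return style the article describes can be sketched in C++ with std::variant (the names parse_port and ParseError here are hypothetical, not from the linked article): the function returns either its result or an error value, and the caller has to inspect which it got before using it.

```cpp
#include <string>
#include <variant>

// Hypothetical error type for illustration.
struct ParseError { std::string message; };

// Returns either a valid port number or a ParseError; the type system
// forces callers to handle both alternatives.
std::variant<int, ParseError> parse_port(const std::string& s) {
    try {
        int port = std::stoi(s);
        if (port < 1 || port > 65535) return ParseError{"port out of range"};
        return port;
    } catch (const std::exception&) {
        return ParseError{"not a number"};
    }
}
```

A caller would then branch on `std::holds_alternative<int>(result)` rather than wrapping the call in a try/catch.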

[0] https://beepb00p.xyz/mypy-error-handling.html

[1] https://flow.org/en/docs/types/

It would be nice if C++ could settle on one single error handling system like this. I know the contracts proposals have been shot down several times, but I think they would help a lot. People will only really code defensively if there is some gain. Having the compiler enforce contracts and use them for optimization would be the only way to ensure adoption: if people thought they could get a 5% speed boost, they would annotate their code with more of their assumptions.
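C++23 already offers a small taste of this trade-off with the standard `[[assume]]` attribute: a sketch of an annotation the optimizer may exploit, at the cost that a false assumption is undefined behaviour (the function here is a made-up example, and compilers predating C++23 will simply ignore the attribute).

```cpp
// Stating an assumption the optimizer may rely on: if n is ever outside
// [0, 31) at runtime, the behaviour is undefined -- exactly the trade-off
// the contracts debate is about.
int divide_by_pow2(int x, int n) {
    [[assume(n >= 0 && n < 31)]];
    return x >> n;
}
```

Unlike a contract, `[[assume]]` is never checked, which is what makes it purely an optimization hint rather than a defensive annotation.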

Some people, Microsoft higher-ups in particular, are vociferous opponents of allowing contract annotations to be used for optimization. Their arguments for this position are not generally very coherent.

Compilers are often surprisingly bad at using such information effectively. They will probably get better at it, but slowly.

Not sure about Microsoft higher-ups in particular, but C++ and related compilers are notoriously bad (in fact actively harmful) at enforcing the assumptions they use for optimization. E.g.:

  void foo(bar_t* p) {
    baz_t* q = &p->baz;
    if (!p) panic("...");
    do_stuff(q);
  }
A (stereo-)typical C++ compiler will assume that p is non-null, then actively remove the code that checks it. Surprise! You now have a security vulnerability. But only when optimisations are turned on, so if you have distinct debug and release modes, your testing and other debugging tools will miss it.
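One way to keep the check, sketched in C with hypothetical stand-in types (the real bar_t, panic, and do_stuff aren't shown in the thread): perform the null test before any expression involving p, so the compiler never sees a dereference-like expression it can use to justify eliding the branch.

```c
#include <stddef.h>

/* Hypothetical stand-in for the thread's bar_t. */
typedef struct { int baz; } bar_t;

/* Checking p before forming &p->baz gives the compiler no licence to
   assume p is non-null, so the check cannot legally be optimised away. */
int foo_safe(bar_t *p) {
    if (!p) return -1;        /* stand-in for panic("...") */
    int *q = &p->baz;
    return *q;                /* stand-in for do_stuff(q) */
}
```

The ordering is the whole fix: once no pointer arithmetic on p precedes the test, the as-if rule no longer licenses deleting it.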

I suspect this is a recurring source of vociferous opponents of allowing any information to be used for optimization, regardless of how much compilers promise that this time is totally different and they'll definitely actually check that it's correct before using it.

But you already have a security vulnerability regardless, if you dereference p before checking if it's null.

> But you already have a security vulnerability regardless, if you dereference p before checking if it's null.

Firstly, that's not true in general, unless you count denial of service [0] as a vulnerability; reading from address zero and then panicking has the same security implications as segfaulting during the read, namely the program immediately halting.

More importantly, the above code does not dereference p at all (though do_stuff presumably does). `&p->baz` just adds a constant offset to the (register holding the) pointer, without touching memory. There is no vulnerability (given the obvious assumptions about how foo and do_stuff work and are used) until the compiler introduces one.
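That `&p->baz` is pure address arithmetic can be checked directly: the computed address is always the pointer plus the compile-time constant `offsetof(bar_t, baz)`, with no load from memory (the struct layout below is hypothetical, just for illustration).

```c
#include <stddef.h>

/* Hypothetical layout; any struct works the same way. */
typedef struct { int pad; int baz; } bar_t;

/* &p->baz is computed as p plus a compile-time constant offset;
   no memory is read in the process. */
ptrdiff_t baz_offset(bar_t *p) {
    return (char *)&p->baz - (char *)p;
}
```

(Strictly, the C standard still makes `&p->baz` undefined for a null p, which is the very latitude the compiler exploits; the point is that no actual memory access occurs.)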

0: For example, if you count the fact that someone can DDoS the machine it's running on as a vulnerability in any network software. That is somewhat reasonable in some contexts, but not in the context of compiler bugs.


It is extremely common -- i.e., absolutely normal -- to write code after an assertion that would be UB if the assertion were false. Any worry about eliding checks should apply even more so to all that UB code. But people who hate optimization based on assertions have implicitly chosen to ignore all the UB, and concentrate only on the elided checks.
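A minimal C illustration of that pattern: the code after the assertion relies on exactly the property the assertion checks, so the dereference is UB in every build where the assertion would have failed, whether or not the check itself survives compilation.

```c
#include <assert.h>
#include <stddef.h>

/* The dereference encodes the same non-null assumption the assertion
   checks; under NDEBUG the assert compiles away entirely, and *p would
   be UB for a null p either way. */
int deref_checked(int *p) {
    assert(p != NULL);
    return *p;
}
```

This is the parachute-and-doors point in code form: deleting the assert changes nothing about the UB already present on the next line.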

It's like complaining you don't have a parachute when you know the doors couldn't be opened anyway.

In a language that doesn't seem to have UB, those worries might seem smaller. But every substantial library defines its own versions of UB which, while they may have less drastic effects on the runtime consistency of the process, equally undermine the coherent behavior of the program.

The Midori project is always interesting to read about. I'm not sure how much real impact any of it had, however.

It had the impact of generating articles citing extensive and well-articulated experience, and revealing dead ends we don't now need to explore.
