
As someone who's spent the majority of their working life designing CPUs (and the rest designing hardware accelerators for applications where CPUs and GPUs aren't fast enough), I find that when people say something like "If you could make your processor 5% faster with a simple change, why wouldn't you do it?", what's really meant is "if, on certain 90%-ile or 99%-ile best case real-world workloads, you could get a 5% performance improvement for a significant expenditure of effort and your choice of a legacy penalty in the ISA for eternity or a fragmented ISA, why wouldn't you do it?"

And the answer is that there's a tradeoff. All of the no-brainer tradeoffs were picked clean decades ago, so all we're left with are the ones that aren't obvious wins. In general, if you look at a field and wonder why almost no one has done this super obvious thing for decades, maybe consider that it might not be so obvious after all. As zurn mentioned, there are actually a lot of places where you could get 5% and it doesn't seem worth it. I've worked at two software companies that are large enough to politely ask Intel for new features and instructions; checked overflow isn't even in the top 10 list of priorities, and possibly not even in the top 100.

In the thread you linked to, the penalty is observed to be between 1% and 5%, and even on integer-heavy workloads, the penalty can be less than 1%, as demonstrated by the benchmark linked above. Somehow, this has resulted in the question "If you could make your processor 5% faster ...". But you're not making your processor 5% faster across the board! That's a completely different question, even if you totally ignore the cost of adding the check, which you are.

To turn the question around, if people aren't willing to pay between 0% and 5% for the extra security provided, why should hardware manufacturers implement the feature? When I look at most code, there's not just a 5% penalty, but a one to two order of magnitude penalty over what could be done in the limit with proper optimization. People pay those penalties all the time because they think it's worth the tradeoff. And here, we're talking about a penalty that might be 1% or 2% on average (keep in mind that many workloads aren't integer-heavy) that you don't think is worth paying. What makes you think that people who don't care enough about security to pay that kind of performance penalty would pay extra for a microprocessor that has this fancy feature you want?
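To be concrete about what "the check" is, here's an illustrative Rust sketch (mine, not from the thread or the benchmark): the same summation with and without a per-addition overflow check. `wrapping_add` and `checked_add` are standard library methods; the function names are made up for the example.

  // The same summation, priced two ways.
  fn sum_unchecked(values: &[u32]) -> u32 {
      // Wraps silently on overflow: one add per element.
      values.iter().fold(0u32, |acc, &v| acc.wrapping_add(v))
  }

  fn sum_checked(values: &[u32]) -> u32 {
      // Each add is followed by an overflow test and a branch; that
      // test-and-branch is the per-operation cost being debated above.
      values.iter().fold(0u32, |acc, &v| {
          acc.checked_add(v).expect("integer overflow in sum")
      })
  }

  fn main() {
      let data = [1u32, 2, 3, u32::MAX];
      println!("{}", sum_unchecked(&data)); // wraps around quietly
      println!("{}", sum_checked(&data));   // panics at the overflow
  }

The hardware feature being asked for would, roughly speaking, aim to make the checked version cost the same as the unchecked one.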

> people aren't willing to pay between 0% and 5% for the extra security provided

This is not true. One problem is that language implementations are imperfect and may have much higher overhead than necessary. An even bigger problem is that defaults matter. Most users of a language don't consider integer overflow at all. They trust the language designers to make the default decision for them. I believe that most people would certainly choose overflow checks if they had a perfect implementation available and perfect knowledge of the security and reliability implications (i.e. knowledge of all the future bugs that would result from overflow in their code), and carefully considered and weighed all the options; but they don't even think about it. And they shouldn't have to!

For a language designer, considerations are different. Default integer overflow checks will hurt their benchmark scores (especially early in development when these things are set in stone while the implementation is still unoptimized), and benchmarks influence language adoption. So they choose the fast way. Similarly with hardware designers like you. Everyone is locally making decisions which are good for them, but the overall outcome is bad.


  > if people aren't willing to pay between 0% and 5% for the extra security provided

In the context of Rust, integer overflow checks provide much less utility because Rust already has to perform static and dynamic checks to ensure that integers are used properly, regardless of whether they've ever overflowed (e.g. indexing into an array is a checked operation in Rust). So as you say, there's a tradeoff. :) And as I say elsewhere in here, the Rust devs are eagerly waiting for checked overflow in hardware to prove itself so that they can make it the default and do away with the current compromise solution (which is checked ops in debug builds, unchecked ops in release builds).
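To make that compromise concrete, here's a small sketch of my own (not from the Rust docs or the devs): the addition below panics in a debug build and wraps in a release build with default settings, while the out-of-bounds index at the end panics even in release, where the wrapped value sails through.

  fn main() {
      // Parse at run time so the compiler can't spot the overflow statically.
      let x: u8 = "255".parse().unwrap();

      // Debug build: panics with "attempt to add with overflow".
      // Release build (default settings): wraps around to 0.
      let y = x + 1;
      println!("y = {}", y);

      // Array indexing is bounds-checked in every build profile, so this
      // panics in release builds too (y wrapped to 0, so the index is 3).
      let arr = [10, 20, 30];
      println!("{}", arr[y as usize + 3]);
  }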


Amen!
