
CPU and kernel engineers don't have the necessary security mindset - paradroid
What these hardware-level defects really indicate is that a security mindset is not a pervasive way of thinking among otherwise very serious engineers. The sheer number of low-level engineers - whether at the hardware or software level - who overlooked these flaws for so long is simply huge.

When CPUs and the internet were being designed, the level of adoption we are seeing was not anticipated - similar to how nature engineered humans with plenty of potential exploits. Now that we are aware of both the possibility and the risk of every single person being exploited simultaneously, we must act accordingly and defend ourselves.

In other words, a huge amount of effort must be put into securing basic infrastructure (including CPUs), somehow.

So long as humanity continues to be at war, we must continue to act as if that is the case. Kudos to Google for figuring this out. But do you really want the "don't be evil" company that profits almost exclusively from ads to be your only true line of defense? Do you want it to be your government, versus other governments?

What we have just experienced is a precursor to something deeply terrifying. We must, somehow, act to prevent it.
======
wahern
I have a hard time believing that senior Intel engineers were ignorant of the
potential issues here. Cache timing attacks have not only been known for well
over a decade, but actively exploited. And it doesn't take a genius to
consider the interaction between cache latencies and speculative execution.
The thought has crossed my mind (and I'm neither an EE nor a researcher), but
like most people I assumed the CPU obeyed page protections consistently and
didn't bother testing the assumption. At least some Intel engineers working on
these chips wouldn't have had to assume anything, and if the issue hadn't
crossed their mind we'd probably have bigger problems--sheer incompetence.

More likely than not, the misfeature was a calculated risk, either at
inception or after the fact. And I bet Intel has had investigators quietly
doing damage control internally for the past several weeks or months in
anticipation of subpoenas.

Some ARM CPUs apparently can leak system register values, a much narrower
version of the Intel design issue. And I'd bet that was also a calculated risk
undertaken at some point in the design phase.

On the flip side, it's not a coincidence that AMD isn't vulnerable like Intel
and ARM chips are.

Ultimately I don't think the issue is that people are blind to the problems,
it's that they systematically underestimate the potential for exploitation.
And ultimately that's because of the economics at play: at the end of the day
even if, theoretically, Intel lost some huge class action suit, they'd still
probably enjoy a net gain for their misestimation of the risk.

And it's not just Intel. Otherwise good engineers are absolutely confident
that containers and VMs provide strong isolation guarantees, despite years of
exploits, simply because, if the marketing and technical literature were
completely true and accurate, it would be cheaper and easier to design and
deploy software services. It's motivated thinking, not ignorance. Every time
these exploits come out nobody is particularly surprised--it's always 100%
obvious in retrospect, and it would be 100% obvious prospectively if the
incentives were better aligned.

------
PaulHoule
This is true for engineers in general. Intel couldn't even make a secure web
server running on the ME.

