
Intel Has a Big Problem - MilnerRoute
https://www.bloomberg.com/news/features/2018-01-18/intel-has-a-big-problem-it-needs-to-act-like-it
======
Intel is a hot mess even without these security disasters.

Just look at their product release lifecycle: In years past, they'd get maybe
one extra product release off each new arch (tick/tock); for example, Sandy
Bridge bore Ivy Bridge and Haswell bore Broadwell.

Skylake has borne SIX new product lines: Goldmont, Goldmont Plus, Kaby Lake,
Kaby Lake Refresh, Coffee Lake, and the upcoming Cannonlake. Their failed 10nm
shrink has forced product delays; remember, Cannonlake (the 10nm shrink of
Skylake) was supposed to be released in _2016_, and it's not even out yet.
Just at CES this week they said they've shipped mobile Cannonlake CPUs.

They have zero presence in mobile. Their best efforts involve competent
Y-series processors. Then Apple comes around and, seemingly without even
trying, destroys them [1] with a product that's more thermally efficient and,
in some ways, more powerful than Intel's _best_ mobile processors, not just
their thermally efficient ones.

They have little presence in HPC/AI, where Nvidia is slaughtering everyone and
it's not even close.

It's completely inevitable that they're going to lose Apple as a customer for
consumer products; it's just a matter of time. AMD is gaining traction with
Zen, and they're moving in the direction enterprise cloud providers want (lots
of cores, not much $$). How much longer can Intel keep holding on? Do they
have an ace they've been hiding? Will people even trust their ace after
Meltdown?

[1] [https://9to5mac.com/2017/06/14/ipad-pro-versus-macbook-pro-s...](https://9to5mac.com/2017/06/14/ipad-pro-versus-macbook-pro-speed-tests/)

~~~
saas_co_de
> Their failed 10nm shrink has forced product delays

This is the most important thing. AMD has already made the switch to
multi-chip modules (MCMs), which makes it much, much easier to produce chips
at 10nm (what TSMC/GF/Samsung call 7nm).

Right now Intel can't even make a dual-core, low-speed mobile chip on 10nm.
How are they going to make a giant 30+ core server processor? This is
extremely bad news for them that has not been fully priced in by the markets,
because the markets have faith that Intel will figure it out, but they may not.

AMD may ship 7/10nm server chips before Intel, and Intel may never ship them
at all before switching to MCM themselves.

MCM is as revolutionary as AMD64 was, but most people don't yet realize how
important it is and how much of an advantage AMD has gained by leading with
it.

~~~
colejohnson66
> 10nm (what TSMC/GF/Samsung call 7nm)

What’s the reason for this? From my understanding of numbers, 10 is not 7

~~~
mappu
"10nm" and "7nm" are marketing terms that are not directly comparable across
foundries.

~~~
colejohnson66
How? I thought the number was the size of the smallest transistor?

~~~
Valmar
The numbers are almost pure marketing, these days.

From
[https://en.wikipedia.org/wiki/7_nanometer#7_nm_process_nodes](https://en.wikipedia.org/wiki/7_nanometer#7_nm_process_nodes)

> The naming of process nodes by different major manufacturers (TSMC, Intel,
> Samsung, GlobalFoundries) is partially marketing driven and not directly
> related to any measurable distance on a chip – for example TSMC's 7 nm node
> is similar in some key dimensions to Intel's 10 nm node.

------
sdhgaiojfsa
I'm skeptical that Intel really has that much of a problem, and the investors
in the stock market seem to agree. Intel only has a problem if alternatives
come to be seen as economically viable. And right now it doesn't seem like
there's any particular danger of this happening.

Pundits want to say that there's a huge thing here, because pundits don't
optimize for the truth. They optimize for clicks. So you really need to be
careful looking to their writing for the truth.

~~~
Spooky23
Don’t be generous to Intel. I work in a vertical strategic to Intel; they
regularly call on me, and I get NDA presentations and the ability to talk to
engineers about strategic projects.

For this incident, I got an email a few hours after the embargo was lifted
that essentially said it was no big deal and referenced public information.
The purpose of the communication was to have people like me
message up the chain that this was no big deal. That misdirection is
inexcusable, particularly when they could have given meaningful guidance under
NDA.

We had some follow-up questions, which weren’t really answered. We were
directed to hardware OEMs, as ETAs for microcode updates are out of Intel's
control and, according to Intel, the full responsibility of the OEM. In
reality, Intel was struggling to deliver the code, and the OEMs we deal with
issued patches in hours, and had to pull back updates due to Intel code
revisions.

Personally, I do have alternatives for strategic parts of the business that
drive high margin Intel sales. Many critical aspects of my business can run on
Intel or Power platforms, and we can engineer solutions either way in similar
cost footprints.

Less strategic aspects of the business, like end-user compute, now have niche
competitors that can gobble up Intel business very quickly. Half of my desktop
users run on VDI, mostly with AMD thin clients. 50% of my constituencies can
run their core line of business functions on iOS. iPad with a keyboard could
reduce my Intel desktop spend by 50-75% for 2-3 years.

~~~
lima
I'm also under Intel NDA and it took them days to provide any meaningful
guidance at all.

The only useful thing in these documents was a timeline/detailed list for the
microcode patches, all of which should be public.

They also claim that Spectre/Meltdown are "not a bug or flaw in Intel
products" and their slide deck has a whole slide dedicated to forward-looking
statement disclaimers. Sigh.

Needless to say, we're not impressed.

~~~
Simon_says
They know what they're saying is bull hockey, but they have to say it,
otherwise they'll lose their lawsuits. They have to be able to point to the
public spec and say "See!? We're completely in spec! No bugs here." Nevermind
that the bug is in the spec.

------
VHRanger
The main problem with Intel is direction and process. Dan Luu pointed out in
2015 [1] that Intel chips had serious bugs and that, given how Intel acted, it
was only a matter of time before something like this popped up.

What I think happened to Intel is that once they consolidated their monopoly
in the late 2000s, they lost the healthy management practices that tend to
come from being in a competitive industry.

All this talk from upper management about velocity was about trying to find a
way to make more money when you've completely mined out your current niche. It
ended up instead opening the door for AMD to make a comeback on x86.

[1] [https://danluu.com/cpu-bugs/](https://danluu.com/cpu-bugs/)

~~~
harryh
I think that comparing the types of bugs in Luu's post (which is excellent)
and Spectre/Meltdown is a mistake. The former are mistakes caused by
insufficient testing. The latter are conceptual problems that are nearly
fundamental to modern processor design. No amount of simple testing would have
uncovered them.

~~~
pcwalton
Spectre is fundamental to processor design, but Meltdown is pretty much a bug.

~~~
wklauss
It only became a bug once someone discovered that branch prediction could be
used as an attack vector. Until then, it was a perfectly valid design choice.

~~~
jotm
What actual data can be extracted by a Meltdown/Spectre attack? I still
haven't found an answer to that; nothing online says anything specific.

Datacenters should probably be worried, but what about the hundreds of
millions of users out there? Doesn't seem like a big deal, tbh - until an
actual exploit is out there, why should they worry?

~~~
tptacek
Meltdown allows userland native code (the Javascript your browser loads from
random websites is JIT'd down to native code) to dump kernel memory.

~~~
DSMan195276
It is worth clarifying, when people talk about "kernel memory", for x86-64 it
really means _all_ of memory, because all of physical memory is mapped into
the kernel's address space. So really, Meltdown allows userland code to read
_anything_ in memory.

~~~
zerohp
Incorrect. When people talk about kernel memory, they are talking about pages
marked as supervisor in the page tables for a particular process. That is not
"anything in memory."

~~~
Filligree
Meltdown allows applications to read any mapped pages, regardless of the
protection bits on those pages. That mainly means kernel memory, which is the
only page set that's normally unreadable. The kernel mapping normally includes
all of physical memory.
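The mechanism tptacek describes can be shown with a toy model (pure illustration, not an exploit; the Python set standing in for the cache and the 64-byte line size are modeling assumptions): the transiently executed load's value picks one line of a 256-line probe array, and a later timing pass sees which line became "fast".

```python
# Toy model of the Meltdown read primitive. The faulting kernel load is
# architecturally rolled back, but its cache-state side effect survives,
# and that side effect encodes the secret byte.
CACHE_LINE = 64  # bytes; one probe-array line per possible byte value


def transient_leak(secret_byte: int, cache: set) -> None:
    """Model the transient window: the doomed load's *value* selects
    which line of a 256-line probe array gets pulled into the cache
    before the fault is ever delivered."""
    cache.add(secret_byte * CACHE_LINE)


def recover(cache: set) -> int:
    """FLUSH+RELOAD step: time an access to each candidate line; the one
    that is 'fast' (here: present in the cache set) names the byte."""
    for byte in range(256):
        if byte * CACHE_LINE in cache:
            return byte
    return -1
```

On real hardware the membership test is a timed memory access, and the rollback of the faulting load is what KPTI (unmapping the kernel from user address spaces) works around.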

------
Nokinside
Intel still has the advantages that allow them to screw up and survive.

1\. Process knowledge and manufacturing capacity. You can buy from others only
as much as they have manufacturing capacity. The only real threat to Intel
comes from the combined volume of GlobalFoundries, TSMC, Samsung and UMC.
Apple, NVIDIA, AMD, ARM and Qualcomm can get past Intel only through these
companies.

2\. Profit margins. Intel makes 60 percent profit margins; AMD struggles from
decade to decade. That's not a coincidence. It's the direct result of pricing
decisions by Intel. Whenever AMD gets ahead of Intel in microprocessor
technology, Intel always has the option of cutting its margins to prevent AMD
from gaining more market share.

~~~
Tharkun
The odd bit of anti-competitive behaviour (and billion-dollar lawsuits that
come with it) is also something of an advantage...if an unfair one.

------
danpalmer
I think Spectre and Meltdown are a fantastic opportunity to rediversify the
CPU market, and I don’t think it could have come at a better time.

\- AMD has just had a great release with Ryzen, showing they can compete on a
price/performance basis.

\- Apple is moving core OS functionality on its newest desktops/laptops onto
Apple-designed ARM chips.

\- Mobile platforms are getting bigger, especially with things like ChromeOS
that could be (are being?) easily run on ARM based hardware.

\- Open Power has come a long way and could be poised to take some of the
server market for customers who want more control than they got with Intel.

I’m excited for this. Obviously the vulns are an issue that needs to be
solved, but we could get some real competition in terms of manufacturers, and
even in terms of architecture. The industry will take some time to readjust to
compiling for/running on multiple architectures, which I think much of the
industry hasn’t needed to deal with for a while. The result though will be a
market where customers can choose an architecture that makes sense for their
use case, and can choose from a range of good options.

(I realise other chips are vulnerable, not just Intel, but the publicity has
been Intel focused and I don’t think the technicalities of it matter too much)

------
yeasayer
I think Intel might get away with it.

For the last 5 years they were slacking off, because economically there is no
reason to go beyond the usual 10-15% yearly performance bump. But they were
actually accumulating aces up their sleeves. Again, there's no reason to show
your hand if you don't have to.

But the time has come. Right now Intel has 3 major problems: 1) the
Meltdown/Spectre situation, 2) AMD has awoken with a surprisingly good Ryzen
lineup, and 3) Apple craves powerful new CPUs to satisfy unhappy MacBook Pro
customers.

Intel can fix all of this in one sweep, just by releasing a brand new CPU that
will surprise everyone - with a hardware Meltdown/Spectre fix, of course. They
were holding off, but it's time to drop all those hidden aces on the table.
And I believe it's gonna happen: not right now with Cannon Lake, but with the
one after, Ice Lake on 10nm, by the end of 2018. It's going to be an even
bigger success than NVIDIA's GTX 1080.

~~~
fyi1183
Doubtful. You don't just develop a new processor overnight, and if they truly
had all these aces up their sleeves, they would have dropped them already in
response to Zen last year.

Intel's process advantage is shrinking. They're struggling like everybody else
because the physics is getting harder and harder. Apart from the fact that it
would have been nice to get easy process shrinks forever, this is good news
for almost everybody: it means competition for Intel is getting tougher.

~~~
lttlrck
It’s doubtful that they would drop all their aces in response to Zen.

~~~
nhaehnle
Actually, I'd turn this on its head and ask: Why is there this claim that they
had or have _any_ aces in the first place, Zen or no Zen?

What you and the ggp are basically saying is that Intel slowed down the
improvement in their processors on purpose over the last several years. Why on
earth would they do that?

Besides, all the evidence points to the contrary, what with them being unable
to compete in the mobile space.

~~~
rudedogg
> Why is there this claim that they had or have any aces in the first place,
> Zen or no Zen?

I'm not a big hardware person, but from what I've heard, the speed with which
they released 6-core processors after Ryzen makes it likely they were capable
of producing 6-core (consumer) designs earlier.

~~~
vonmoltke
The original hexacore Xeon is almost eight years old (March 2010 release).
Intel released a _consumer_ hexacore in response to Ryzen. Intel's artificial
market segmentation is ridiculous, but so is the typical AMD watcher's
near-total ignorance of what is happening in the Xeon line.

~~~
hajile
That may be overstating AMD watchers' ignorance by quite a bit. The big
marketing push at the Zen launch was that Intel had a chip with a lot of
cores, but it was 2x the price with slightly worse performance.

------
Animats
Bloomberg says Intel has a big problem. The Economist says Intel has a big
problem. Three class actions have already been filed against Intel.

Intel has a big problem. They're probably going to have to replace a lot of
CPUs.

~~~
rphlx
This may not be a winnable argument on HN (where the consensus seems to be
that this is nearly-100% Intel's fault), but I think a significant fraction of
the blame properly belongs to their customers, particularly their multi-tenant
public cloud customers, for assuming a level of HW thread and guest isolation
that has _never_ actually existed. These huge-scale customers did not do
proper security due diligence on the silicon they were buying and made
assumptions that were never, strictly speaking, guaranteed by Intel, and in
some cases were explicitly documented as _not_ guaranteed within the Intel
optimization manuals.

~~~
bsder
> These huge-scale customers did not do proper security due diligence on the
> silicon they were buying and made assumptions that were never, strictly
> speaking, guaranteed by Intel, and in some cases were explicitly documented
> as not guaranteed within the Intel optimization manuals.

But the commodity hardware is _SOOOOOO_ much cheaper, you know?

Everything about Intel has been about "good enough" since probably ... 1982?
And "good enough" security in hardware was always "nobody cares". IBM,
certainly, was screaming about the level of insecurity in commodity hardware
and software _forever_. DEC similarly.

But commodity hardware is SOOOO much cheaper.

No one, and I mean _NO ONE_, was ever going to give up even 10% on performance
or cost in order to be even slightly more secure. At any level of the stack.
Intel, Microsoft, Google ... _all_ are guilty of this up until probably this
year. Anybody who suggested it would have gotten laughed at and/or fired.

The market spoke--and security became an afterthought.

Sadly, this is _STILL_ true. While there is much gnashing of teeth about
Intel, everybody's implementation of security is far worse. The thing that is
biting Intel is that the monoculture means that it is a universal and scalable
tool as opposed to at the software level where each individual company has to
be compromised in a slightly different way.

It's only been since everybody is putting everything in the cloud that people
now care about actual _absolute_ values of security.

The problem is that all of the hardware solutions to these bugs cost RAM
somewhere. And RAM is now the gating factor of cost and performance on most
chips. RAM fell off the Moore's Law curve back about 32-22nm and isn't coming
back. So, the performance hit to mitigate this is real.

No one was going to be the first to fix this _EVEN IF THEY KNEW AND CARED_.
Everybody learned from painful business experience that the first guy gets all
the arrows and the second guy gets all the profits. So, everybody was going to
wait until they were the second guy--which was only going to happen when
something bit _everybody_.

~~~
rphlx
I would have agreed with you about very poor SRAM scaling until very recently;
it looks like 10nm/"7nm" made some decent progress there at last, though
nothing like The Good Old Days when it actually scaled close to what the
marketing number implied.

In any case, it is not clear to me that you need _that_ much SRAM to greatly
reduce the various side channels that Spectre uses; it's on the order of a
fraction of the L1D size, which is pretty small compared to how much SRAM is
already on-chip for the LLC.

------
kazinator
> _Part of what makes Meltdown and Spectre so terrifying is that they upend
> more than a decade of conventional wisdom about information security._

The combination of speculative execution, virtual memory, caching and
user/supervisor privilege separation isn't ten years old.

These flaws upend something like 40 years of conventional wisdom.

~~~
omginternets
I'm confused as to what this conventional wisdom they're alluding to actually
is. Would you mind spelling it out?

~~~
syncsynchalt
Conventional wisdom such as:

    
    
      // it is impossible to read past the end
      // of the array in this code:
      if (i >= 0 && i < array.size())
         x = array[i];

~~~
inimino
The conventional wisdom has always been that it is impossible if and only if
your hardware is not broken.

If the hardware is broken, all bets are off. Nothing in the conventional
wisdom has been challenged, except perhaps the complacent assumption that
"Intel hardware is unlikely to be broken in ways that invalidate our
security".

~~~
cesarb
Spectre is not just Intel. That "array bounds" example is Spectre variant 1,
which affects everyone except in-order processors (older Atom and slower ARM
cores). So far, I haven't read of any hardware mitigation for it (unlike
variant 2 or Meltdown, aka "variant 3").

The mitigations for variant 1 I've seen either introduce a speculation barrier
on array bounds checks or use faster masking tricks that convert speculated
out-of-bounds indices into safe values. It won't surprise me at all if
conventional wisdom changes to include speculation effects, much as it changed
to include cache effects once memory got slower than the CPU.
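The masking trick can be sketched as a branch-free clamp, in the spirit of the Linux kernel's array_index_nospec helper (the Python below merely emulates the 64-bit arithmetic; it's an illustration of the idea, not kernel code):

```python
MASK64 = (1 << 64) - 1  # emulate 64-bit unsigned wraparound in Python


def index_mask(index: int, size: int) -> int:
    """Branch-free mask: all ones when 0 <= index < size, else zero.
    Assumes index and size both fit in 63 bits."""
    diff = (index - size) & MASK64       # top bit set iff index < size
    return ((diff >> 63) & 1) * MASK64   # broadcast that bit into a full mask


def clamp_index(index: int, size: int) -> int:
    """An out-of-bounds index is forced to 0, so even a speculated
    access cannot be steered to attacker-chosen memory."""
    return index & index_mask(index, size)
```

Because the clamp is pure arithmetic rather than a conditional branch, the CPU has nothing to mispredict: even during speculation, the index actually dereferenced is either in bounds or zero.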

------
cybervegan
The holes in this article betray a fundamental misunderstanding of how the
flaw works and, indeed, of how computers in general work. I agree with the
general premise of the article, but if the authors don't understand the flaw,
they should avoid trying to explain it.

------
lower
Could it be that Intel is communicating the way it does because they're afraid
of lawsuits? I mean, if they admit that their processors are faulty, then they
may have to recall basically all their processors.

~~~
gkya
How can that even happen? My laptop has a vulnerable i3 chip, but I'm not
giving it or the chip to anybody, I need the computer. And I guess no data
centers would "temporarily close" in order to change CPUs (though maybe they
can do that gradually).

~~~
lower
Back in 1994 when the Pentium had the FDIV bug, they offered to replace the
chip. Wikipedia says "Although it turned out that only a small fraction of
Pentium owners bothered to get their chips replaced, the financial impact on
the company was significant."

------
StillBored
I don't know why people are so focused on Intel here. It's not like other
processors aren't affected; even ARM's latest core, the A75
([https://developer.arm.com/support/security-update](https://developer.arm.com/support/security-update)),
is susceptible to Meltdown. I suspect that if ARM had been building aggressive
OoO CPUs with deep ROBs rather than focusing on perf/W, there would be a lot
more cores on that list. Similarly, IBM's high-end POWER processors are also
vulnerable, although IBM doesn't break it down as nicely as ARM does
([https://www.ibm.com/blogs/psirt/potential-impact-processors-...](https://www.ibm.com/blogs/psirt/potential-impact-processors-power-family/)).

So everyone says Intel is so bad, when they just happen to be the one that
people are running the most untrusted code on.

Let me quote IBM "This vulnerability doesn’t allow an external unauthorized
party to gain access to a machine, but it could allow a party that has access
to the system to access unauthorized data."

People on this board are upset because, after the industry spent the last
decade+ making promises about how secure "cloud computing" is, once again the
naysayers were proven correct. This time the flaw is so fundamental that in
order to fix it you have the OS vendors making changes that destroy system
performance for most applications that are I/O- or syscall-intensive. This
likely won't be the last time, either, if history is to be believed.

But javascript, you cry. Again, I'm going to say that you shouldn't be running
random code from random people on the internet.

I'm not a big fan of Intel, but in a way I applaud them for pushing back
against what I view as the crazy extent people go to in order to allow native
code execution from untrusted sources. I would much prefer this change be
isolated to hypervisors, and have Chrome/FF/etc. detune their JITs a bit to
keep people from running cache timing attacks.
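For what it's worth, the "detuning" that actually shipped was largely about degrading the timers a cache-timing attack depends on: browsers reduced the precision of performance.now() and disabled SharedArrayBuffer after the disclosure. A sketch of timer coarsening (the 100-microsecond grain here is an illustrative number, not any particular browser's choice):

```python
def coarsen(timestamp_us: float, grain_us: float = 100.0) -> float:
    """Round a timestamp down to a coarse grain. A cache hit vs. miss
    differs by tens of nanoseconds, so an attacker who can only observe
    100-microsecond steps can no longer tell the two apart directly."""
    return timestamp_us - (timestamp_us % grain_us)
```

Real deployments also add random jitter, since plain rounding can be partially undone by averaging many repeated measurements.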

So I likely will be turning KPTI off on most of my machines, the same way I
run them with the IOMMUs disabled, because I'm not running VMs with untrusted
code.

(BTW, once the dust settles, I'm guessing a pretty large number of other
aggressive OoO processors will turn out to be vulnerable as well - old
Alpha/PA-RISC/SPARC/etc.)

------
chiefalchemist
> "Krzanich sold $24 million in company shares. Intel says the stock sale was
> part of a plan that had been in place before anyone there knew about
> Meltdown or Spectre..."

Certainly, if there were proof, they would, under the circumstances, provide
it. Without that proof, the sale of this many shares in that time window tells
us all we need to know.

Off topic: how has Intel become so dominant without being pursued as a
monopoly? Legal issues aside, how did so many big customers allow all their
collective eggs to end up in a single basket? It seems to me that, at some
point, some of the responsibility needs to be shared by the other industry
titans.

~~~
mr_toad
Given the cost of designing a CPU and setting up manufacturing capacity, you
could argue that they’re a natural monopoly. At the least, there is a
significant hurdle to entering the market, which Intel can use to its
advantage.

~~~
chiefalchemist
Yes. But still a monopoly. And now that such chips are global, the whole
world's eggs are in a single basket. What could go wrong? Wait! I think they
just answered that.

------
5ilv3r
Lots of Intel apologists in this thread. I for one am excited by the idea of
healthy competition in the processor market.

------
40acres
I wonder which government organizations, if any, knew about this. Did the DoD
know? Russia? China?

I remember reading about a "cyberweapon" the US government was using to cause
North Korean missile tests to fail before launch. Could attacks like this be
made possible through Meltdown?

------
HenryBemis
To quote from "Monty Python and the Holy Grail" and the scene with the Black
Knight (guarding the 'bridge'): "'Tis but a scratch".

Intel doesn't care. They will downplay it, misinform and misdirect till the
cows come home. No point beating a dead horse. The only thing Intel (and most
other corporations) understand is people not buying their products. That is
the only language they 'speak'.

------
frankharv
The article has some false information: "Intel says it’s already provided
software fixes for 90 percent of its chips".

This is a falsehood. The MCU updates only cover Ivy Bridge and newer CPUs.

Check the dates on the MCU files.

------
anonu
From the article:

>> Starting in the mid-2000s, Intel added a layer of security within its chips
and began encouraging developers to store users’ most sensitive information in
the walled-off area rather than in regular software memory.

Can anyone explain what the author is referring to as the "walled-off" area?
The L1/L2 cache?

~~~
JdeBP
This is a botched attempt to explain the user/supervisor mode distinction,
that gets the dates and the purpose wrong.

------
rdiddly
I'm confused: did Vegas change from a place that's all about a show to one
where one expects to find seriousness, honesty, contrition? In a _keynote_?

------
make3
It's literally the first thing he mentioned in the speech, though I admit he
likely went through the subject too fast and was not convincing or reassuring
at all.

------
jwilk
> Starting in the mid-2000s, Intel added a layer of security within its chips
> and began encouraging developers to store users’ most sensitive information
> in the walled-off area rather than in regular software memory.

What "layer of security" and "area" they are talking about?

~~~
nootropicat
It's a very confusing attempt to explain the difference between kernel and
user memory space. I don't think the article's authors understand it.

~~~
jwilk
Yeah, but that'd be 1980s, not 2000s. :-\

~~~
capitalsigma
I think they're talking about going from an unprotected to a protected memory
model. The Windows 9x line was unprotected (ending with "Windows ME" circa
2000) and NT was protected. The first consumer NT was XP in 2001. That roughly
lines up with their "mid-2000s" comment.

Why they're deciding to credit Intel with protected memory models is beyond
me, though. Maybe they thought they needed to give Intel credit for something
to make the article seem more balanced.

------
josh2600
Maybe I’m missing something... this article says every smartphone is
exposed... is that actually the case? Which smartphones use Intel chips?

Maybe they’re referring to the fact that some of these bugs are present on
other chipsets, but that seems weird in an Intel article. Am I missing
something?

~~~
dragontamer
Spectre affects all advanced chips with out-of-order execution.

But Spectre cannot break into kernel memory. It's more of a "new class" of
bug, similar to how "buffer overflow" doesn't describe a particular attack but
rather a methodology that hackers use to exploit new bugs.

Spectre affects virtually every high-performance computer in the world.
Smartphones, SPARC, PowerPC, Intel, AMD, ARM. All of these designs use out-of-
order execution, and in theory, a rogue Javascript would be able to read the
rest of process memory if a programmer isn't careful about how things work.

Meltdown took it one step further and showed that code could read kernel
memory. That was an Intel-specific mistake.

~~~
titzer
> Spectre affects all advanced chips with out-of-order execution.

It's not out-of-order execution, it's speculative execution (all forms of
branch prediction) plus the ability to affect the cache state during
speculative execution.

~~~
dragontamer
I'd be surprised if there were an out-of-order CPU that didn't do speculative
execution. I mean, a major benefit of OoO is to fill up the pipelines /
execution units, and speculative execution is a very "obvious" way to do that.

------
xtrapolate

      Meltdown and Spectre have opened up new hacking threats, sparked class actions, and enraged longtime partners.
    

At this point in time, it is known that Intel isn't the only vendor producing
hardware susceptible to Meltdown and Spectre, which is another way of saying
AMD is in the same boat. Given this fact, I'm struggling to understand why
Intel is being continuously singled out.

Meltdown and Spectre aren't the first such flaws, and they won't be the last.
I personally feel that a more interesting discussion should take place: how to
prepare for, plan for, and deal with similar issues further down the road. One
particular thought that comes to mind is that this industry lacks an effective
recall mechanism.

~~~
jacoblambda
As far as I'm aware, Intel is the only vendor susceptible to Meltdown.

~~~
johnbellone
There are some ARM CPUs that are also vulnerable to Meltdown [0].

[0] [https://support.apple.com/en-us/HT208394](https://support.apple.com/en-us/HT208394)

------
Pica_soO
Here we are at the high point of our historic exhibition of the horrors of the
22nd century: the first public occurrence of public/private law.

A company which had a major flaw in its products secretly negotiated a silent
phase-out of its flawed products for its bigger customers, creating a private
settlement while avoiding the public courts by declaring the error trivial.
This company was too entrenched to fail, in that every big actor agreed that
seeing the company go under, and half of all legacy software rewritten to
accommodate new hardware, was not in the interest of an "informed" public.

Liberal fanatics, of course, had no such distributed hostage effect in their
market models, and were under the illusion that the anorexic state they had
created was still too much of an influence, while in truth the upholder of
citizens' rights was no longer even present at the negotiating table.

This catastrophe would later lead to a heretic movement among the fanatics
that viewed hardware dependence in legacy systems not merely as something ugly
but inevitable, but as something threatening to their deity, the one free
market.

Please follow me into the next exhibition hall, where we will see the
reintroduction of generational debt slavery for the underclass. Please watch
your step; some of the tiles on the floor are in slight disrepair...

------
HugoDaniel
They are too big to fail. It is us, the users, who have a problem: we have to
put up with their mess and shady tactics.

------
neonate
In case anyone was confused like I was, trying to find the article text:
[http://archive.is/ywGYJ](http://archive.is/ywGYJ).

------
alkonaut
Long term, Intel will have to "fix" these issues - but what about the short
term? Even if they don't have to replace processors already sold (and a recall
seems unlikely), what about the CPUs in their current pipeline? If properly
fixed chips are 3 years out, what are they doing for 2018/19? Are they just
going to keep making small tweaks to their current chips and sell them with
the known flaws?

------
coldtea
Aside: as if Moore's Law (in the form often casually understood, about
processing speed/power, not transistor count) wasn't screwed enough as it is.

~~~
djsumdog
We've been outside the Moore's Law window for a while. Processing power no
longer really increases in a linear fashion. Instead, manufacturers increase
throughput with smaller improvements in the processor and by squeezing more
cores into each chip. This also means our tasks/programs are changing to make
use of more multi-threading capability.

The idea of what is high performance has really changed.

~~~
xigency
Moore's law is about transistor density, so adding cores to a chip can be a
continuation of it. The barriers to faster single-core performance are
power/heat from increasing clock speed and diminishing returns in superscalar
architecture.

Moore's law is still alive; it's just dwindling now.

~~~
alkonaut
> Moore's law is about transistor density

I thought it was transistor _count_ in ICs?

If we assume that for state-of-the-art chips the area is more or less fixed
(by the speed of light, manufacturing constraints, thermal constraints, etc.),
then there is no difference between "transistor density" and "transistor
count". But is that the case? Is there no margin for die growth (disregarding
yield/cost and considering only physical constraints)?

------
simik
> Krzanich showed off a full-size pilotless helicopter

Wait... what?

~~~
elihu
That was a Volocopter. I think it uses some Intel parts or technology, but I
don't remember what exactly.

------
exabrial
x86 is a problem... Itanium was supposed to be the lifeboat, but it came too
soon.

~~~
pmlnr
Alpha? SPARC?

------
MechEStudent
Paywalled?

------
exabrial
Well-written article! There are definitely a few parts in there that explain
the problem in plain English.

------
saalweachter
Frankly, I think it says something about the security of modern OSes that two
of the most recent major exploits have been hardware-based.

~~~
yborg
[https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=Linux](https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=Linux)

I wouldn't say that the security of the "modern OS" is a solved problem. The
Meltdown/Spectre issues happen to be broad-based and heavily publicized; I
don't think you can draw any more general conclusions from them.

In general, as long as human beings are designing and implementing information
processing systems, there will be bugs. Once AI systems are building them,
there will also be bugs, but no human being will be able to understand them.

------
can3p
I really wish topics with clickbait titles like this were banned on Hacker
News.

------
bigbugbag
> The company makes about 90 percent of the world’s computer processors and 99
> percent of the server chips in the data centers that effectively run the
> internet.

Since when do datacenter servers run the internet? The journalist seems not to
understand the role of the routers and network equipment that connect those
servers. You could add ASes to the mix, but datacenter servers are connected
to the internet, not running it. The internet would still work if we removed
all those servers; it would even work as intended.

------
waynecochran
It's almost as if the author didn't watch the Vegas Keynote. It was the very
_first_ thing that was addressed when the CEO spoke.

~~~
foobarbazetc
You need to put a disclaimer at the end of your post.

~~~
JdeBP
Only if xe confuses _disclaimer_ with _disclosure_, of course. (-:

------
guhcampos
The article is ridiculously biased. It makes absolutely no sense to talk about
"moving away from Intel" to anything else because of Spectre. Every modern CPU
is vulnerable to this type of attack. The "fix" involves turning off branch
prediction, and that will slow down any CPU, from any vendor, on any platform.
CPUs which are not affected generally do not take advantage of branch
prediction anyway, and that generally makes them less powerful. Why the hell
would anyone move away from an underpowered Intel chip, spend copious amounts
of money on the migration (you don't just change your CPU), and replace it
with an even less powerful chip?

And Meltdown... well, that's just the FIRST platform-specific vulnerability
found using the Spectre strategy. There will be more; hell, there are probably
more already, just not published yet.

No brand is safe. This is not an Intel bug; it's a bug in computer science
itself. If anyone wants to profit from it, they'll need more than paid
Bloomberg posts; they'll need to rethink how we build processors.

