
Intel Announces Skylake-X: Bringing 18-Core HCC Silicon to Consumers - satai
http://www.anandtech.com/show/11464/intel-announces-skylakex-bringing-18core-hcc-silicon-to-consumers-for-1999
======
myrandomcomment
Intel getting kicked by AMD every few years is good for the market and the
consumer. I am still planning on getting an AMD system to show my support for
their efforts. I have been holding off for one with a _gasp_ integrated GPU as
I will be using the system as a media center. Right now I have the high end
Intel compute stick. The limited RAM is a huge drawback. Oh, if it plays Civ6
well, that's a huge bonus.

~~~
Synaesthesia
Compared to the atom on your compute stick anything will be grand!

~~~
Sanddancer
The high end compute stick has a core m5, not an atom.
[http://ark.intel.com/products/91979/Intel-Compute-Stick-
STK2...](http://ark.intel.com/products/91979/Intel-Compute-Stick-STK2mv64CC)

~~~
myrandomcomment
The compute stick I was referring to is the BOXSTK2m3W64CC, which has an
Intel Core m3-6Y30 processor. The RAM is really the issue: only 4GB. I run Kodi on
it 99% of the time. If I exit out it is to run Firefox to watch / listen to
something that Kodi does not support (XMRadio). Every once in a while I check
and I am hitting swap and Kodi is complaining.

------
gbrown_
> Intel hasn’t given many details on AVX-512 yet, regarding whether there is
> one or two units per CPU, or if it is more granular and is per core.

I can't imagine it being more than one per core. For context, Knights Landing
has two per core, but that's an HPC-focused product.

> We expect it to be enabled on day one, although I have a suspicion there may
> be a BIOS flag that needs enabling in order to use it.

This seems odd.

> With the support of AVX-512, Intel is calling the Core i9-7980X ‘the first
> TeraFLOP CPU’. I’ve asked details as to how this figure is calculated
> (software, or theoretical)

So let's work backwards. The Core i9-7980XE has 18 cores, but as of yet the
clock speed is not specified.

A couple of assumptions:

- We're talking double precision FLOPs

- We can theoretically do 16 double precision FLOPs per cycle

FLOPs per cycle * Cycles per second (frequency) * number of cores =~ 1TF

So we can guesstimate the clock frequency as ~3.47 GHz.

Edit: On review, such a clock speed seems rather high for an 18-core part.
I'm not sure whether consumer parts will do 32 DP FLOPs per cycle.
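The back-of-envelope above, as a quick sketch (the per-cycle figures are assumptions, not confirmed specs):

```python
# Solve: FLOPs/cycle * frequency * cores ~= 1 TFLOP, for the frequency.
cores = 18
target_flops = 1e12  # Intel's claimed "TeraFLOP CPU"

for dp_flops_per_cycle in (16, 32):  # one vs. two 512-bit FMA units per core
    ghz = target_flops / (dp_flops_per_cycle * cores * 1e9)
    print(f"{dp_flops_per_cycle} DP FLOPs/cycle -> {ghz:.2f} GHz needed")
```

With 16 FLOPs/cycle the claim needs ~3.47 GHz; with 32 it needs only ~1.74 GHz.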

~~~
gpderetta
32 full-width vector ALUs running at 3.5 GHz is probably not realistic. I
think it is running at around 2 GHz at most [1]. The trick is that FMAs are
normally counted as two FLOPs.

[1] (* (/ 512 64) 2 2 18 2 1000 1000 1000) = 1152000000000 FLOPS (a 512-bit
unit over 64-bit doubles, times 2 for FMA, times two units, across 18 cores at
2 GHz)

edit: the 10 core part has a base clock of 3.3GHz. The 18 core part will
probably be in the 2.5 range at best (the best 18 core Broadwell I can find
runs at 2.3, but it is a dual socket part). Running in full AVX512 mode will
probably downclock the cpu further.

~~~
slizard
> The 18 core part will probably be in the 2.5 range at best (the best 18 core
> Broadwell I can find runs at 2.3, but it is a dual socket part). Running in
> full AVX512 mode will probably downclock the cpu further.

Indeed, the 2.2-2.3/2.7-2.8 GHz (base/boost) of the >18C E5-269X v4 CPUs is
the non-AVX instruction clock. With AVX these drop by 300-400 MHz [1], and I
expect the Skylake chips to behave very similarly. In fact, I would not be
surprised if on average AVX-512 required more throttling than 256-bit AVX2.

[1] [https://www.microway.com/knowledge-center-
articles/detailed-...](https://www.microway.com/knowledge-center-
articles/detailed-specifications-of-the-intel-xeon-e5-2600v4-broadwell-ep-
processors/)
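A quick sketch of why the wider units can still win on paper despite the throttling (the clocks and unit counts below are illustrative assumptions, not measured specs; real workloads can still regress, as noted elsewhere in the thread):

```python
# Illustrative peak-DP-FLOPS math for an 18-core part. AVX-512 doubles the
# per-cycle work, so even a ~400 MHz throttle nets out ahead in theory.
def peak_gflops(cores, dp_flops_per_cycle, ghz):
    return cores * dp_flops_per_cycle * ghz

avx2_peak = peak_gflops(18, 16, 2.2)    # 2x 256-bit FMA at the non-AVX clock
avx512_peak = peak_gflops(18, 32, 1.8)  # 2x 512-bit FMA at a throttled clock
print(round(avx2_peak, 1), round(avx512_peak, 1))  # -> 633.6 1036.8
```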

------
slizard
Looks like they think they're still winning regardless of the price, and that
simply bumping core counts to stay kings while bringing the price back to the
Haswell-EP-level high (rather than Broadwell-EP crazy) will be enough.

Another sign of their confidence: they're further segmenting the market by
PCIe lane count, pushing everyone wanting >32 lanes into the >$1k regime.

All in all, the cool thing is not the i9s and high core counts, which you
could get even before by plugging a Xeon chip into a consumer X99 mobo (though
you'd have to pay some $$$), but the _new cache hierarchy_, which will give
serious improvements in well-implemented, cache-friendly code!

~~~
slizard
...and of course AVX-512 for the lucky ones that can get significant benefit
from such a wide SIMD (also considering the very likely significant clock
limit for AVX instruction streams).

~~~
marmaduke
Even chips with AVX2 on all cores slow down when it's fully used. The Xeon
Phi has a pretty low clock, 1.3 GHz IIRC.

Still, it gets you GPU style performance on vector workloads without needing
separate hardware and software stack.

~~~
valarauca1

> even chips with AVX2 on all cores slow down when it's fully used

Not really. Xeon Phis clock low because the die is massive. The downclocking
for AVX started with Knights Landing. My Broadwell-EP Xeon stays at 3.0 GHz
even when I (ab)use AVX2.

~~~
AbacusAvenger
I tried AVX512 on a Xeon (non-Phi) part recently and it was extremely
underwhelming. The workload (OpenMP-parallelized n-body) was actually _slower_
with AVX512. Since it was under virtualization and I didn't have access to the
bare metal hardware or to performance counters, I have no way of knowing
_why_, but I'm almost certain it was because it lost all-core Turbo and
downclocked aggressively. It had previously scaled almost linearly going from
SSE to AVX/AVX2, but it regressed with AVX512.

~~~
smitherfield
It might be that your processor only supports AVX512 in emulation — the
article makes it sound like only the Phi currently supports it natively.

~~~
AbacusAvenger
So they implemented AVX512 on the Xeon server parts in microcode? That seems
crazy.

~~~
smitherfield
It's fairly common practice with bleeding-edge vector instructions. The
reasoning (assuming it is the case here) is that a theoretically-minor
performance regression (the cost of converting 1x AVX512 to 2x AVX2 in
microcode) is usually much preferred over a CPU exception when attempting to
run a binary with AVX512 instructions on a server. It also means you don't
need a $15000 chip to test your AVX512 code.

------
redtuesday
It seems Skylake-X will not be soldered [0], unlike previous HEDT CPUs from
Intel. AMD even solders its normal consumer Ryzen CPUs. How much will Intel
save with this? 2 to 4 dollars per CPU?

I'm also curious what that means for the thermals. Intel's 4-core parts have
much better thermals when delidded to replace the bad TIM.

[0]
[https://www.overclock3d.net/news/cpu_mainboard/intel_s_skyla...](https://www.overclock3d.net/news/cpu_mainboard/intel_s_skylake-
x_and_kaby_lake-x_cpus_will_not_be_soldered/1)

------
deafcalculus
It's high time Intel started adding more cores to consumer CPUs rather than
spending half the silicon area on a crappy integrated GPU. It's only thanks to
Ryzen that this is happening.

~~~
nl
I like my Intel graphics.

Good battery life on my laptop, good Linux support on my desktop. What's not
to like?

~~~
saosebastiao
Hmmm. I must be doing something wrong, because I feel like it's completely
unstable for me: a NUC6i5 with integrated Iris on Ubuntu 16.04. I get a dozen
X server errors a day, and weird intermittent visual glitches with multiple
monitors.

I have no need for high performance and Iris should be good enough, but the
stability still leaves a lot to be desired.

~~~
wolf550e
Ubuntu 16.04 is pretty old code. I would try a mainline kernel (currently
[http://kernel.ubuntu.com/~kernel-
ppa/mainline/v4.11.3/](http://kernel.ubuntu.com/~kernel-
ppa/mainline/v4.11.3/)) and maybe even more up to date userspace libraries
(but then you're getting into a dependency mess; replacing just the kernel is
easy).

------
jacquesm
This really makes me wonder how many more unreleased products Intel has
waiting in some drawer somewhere for that case where they have some serious
competition.

It is also strong evidence that without competition Intel is not going to
release anything to move the market forward.

~~~
coldtea
> _This really makes me wonder how many more unreleased products Intel has
> waiting in some drawer somewhere for that case where they have some serious
> competition._

Given the churn rate of technology? Probably close to none. It's not like you
can wait on CPU technology and have it still be relevant when you finally
release it.

Except if you mean "potential projects" that still need years, and tons of
work and R&D to be finished.

~~~
skywhopper
I doubt they have stuff on the shelf for a rainy day, but it's undeniable that
competition encourages them to ramp up R&D, invest in new processes and
factories sooner, lower prices, and push the envelope on what's releasable.
Sticking to 12 or even 16 core products would be a lot safer for Intel's gross
margins.

------
fauigerzigerk
I can't even read this article properly. The site uses 130% CPU, scrolling
hardly works at all, it keeps making network requests like crazy and it even
crashed my Chrome tab.

And for what reason? I do understand the dilemma that ad funded sites are in.
I'm not using an ad blocker. But I simply don't get what purpose this sort of
abusive website design is supposed to have.

I will never visit AnandTech again. I've seen it many times: it's never long
after advertising gets irrational that content quality suffers as well and the
entire site goes down the drain.

~~~
matt4077
I usually roll my eyes at these complaints, but in this case it's really quite
something. I just let the page sit unused for 5min and it downloaded 165MB.

Safari has much better defaults when it comes to such behaviour by ad
networks: It blocks 165 requests and shows no further activity after loading
5MB: "Blocked a frame with origin
"[http://www.anandtech.com"](http://www.anandtech.com") from accessing a frame
with origin "[http://pixel.mathtag.com"](http://pixel.mathtag.com").
Protocols, domains, and ports must match."

~~~
SquareWheel
This seems to be caused by a software bug (at least, I'd hope so). The site
continues to make requests on a loop, driving up data and resource usage.

------
josteink
Oh. So _now_ they're making the i9!

So it did take AMD and Ryzen to make Intel up its game after its 5-6 year
hiatus with the i7, eh?

Competition is clearly good :)

~~~
gigatexal
Yes it is. Though the Intel part at $599 for 16 threads will likely be the
better choice vs the 1800X.

~~~
Sanddancer
True, but I'm curious to see how the rest of the Threadripper line shakes
out. The $599 chip only has 28 PCIe lanes, which isn't enough to run two GPUs
at full speed. In comparison, the $300 Ivy Bridge-E CPU has 40 lanes.
Especially with their Zeppelin line, AMD's got a chance to shake up Intel's
stagnant IO situation.

~~~
gigatexal
Even modern GPUs don't give up a significant number of FPS in games when run
at 8x, or even in PCIe 2.0 mode. That's been known for a while. But anyone
building a high-IO workstation around this would be advised to look elsewhere.

------
eecc
So, let's give credit when credit is due and call this the Intel Ryzen CPU :D

------
Keyframe
That's good. Finally, we're moving forward with processors, probably thanks
to AMD, again. My only hope is for them (both, either) to make Thunderbolt a
standard feature on motherboards or ditch it completely.

~~~
gbrown_
Intel certainly seem to have come around to opening Thunderbolt up to wider
adoption.

[https://newsroom.intel.com/editorials/envision-world-
thunder...](https://newsroom.intel.com/editorials/envision-world-
thunderbolt-3-everywhere/)

~~~
pawadu
Good!

Something was obviously very wrong before when Microsoft left out Thunderbolt
on high-end machines for "non-technical reasons".

------
vardump
So does it support ECC like AMD? Otherwise not interested.

~~~
simias
I'll bite: every time I see a CPU-related thread on HN there are a few people
clamoring for ECC support. While I get why I'd want ECC on a high-availability
server running critical tasks, I don't really feel a massive need for it on a
workstation. I mean of course if it's given to me "for free" I'll gladly take
it, but otherwise I'd prefer to trade it for more RAM or simply a cheaper
build.

Why is ECC that much of a big deal for you? Maybe I'm lucky, but I manage
quite a few computers (at work and at home) and I haven't had a faulty RAM
module in at least a year. And even if I do, I run memtest to isolate the
issue and then order a new module. An inconvenience of course, but a pretty
minor one IMO.

Do you also use redundant power supplies? I think in the past years I've had
more issues with broken power supplies than RAM modules.

~~~
DaiPlusPlus
> I haven't had a faulty RAM module in at least a year

ECC isn't for physically broken RAM, it's for the prevention of data
corruption caused by environmental bit-errors (e.g. cosmic-ray bitflips).

Memory density increases with RAM capacity, which means a higher potential
for noise (and cosmic rays...) to make one-off changes here and there.

I understand this now happens quite regularly, even on today's desktops (
[https://stackoverflow.com/questions/2580933/cosmic-rays-
what...](https://stackoverflow.com/questions/2580933/cosmic-rays-what-is-the-
probability-they-will-affect-a-program) ). I guess we just don't observe it
much because most RAM is probably occupied by non-executable data or
otherwise-free memory, and if it's a desktop or laptop you're probably
rebooting it regularly, so any corruption in system memory would be corrected
too.
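For illustration, the correction mechanism ECC uses can be shrunk to a toy. Real ECC DIMMs use a SECDED code over 64-bit words, but a Hamming(7,4) sketch over 4 data bits shows how a single flipped bit is located and corrected:

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits, so any single
# bit flip in the 7-bit codeword can be located and repaired.
def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def correct(code):
    """Return (corrected codeword, flipped position or 0 if clean)."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return c, syndrome

word = encode([1, 0, 1, 1])
word[4] ^= 1                     # simulate a cosmic-ray flip at position 5
fixed, pos = correct(word)
assert fixed == encode([1, 0, 1, 1]) and pos == 5
```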

~~~
simias
This Stack Overflow link is interesting, but most of the concern is over very
theoretical issues. In practice a significant portion of humanity uses
multiple non-ECC RAM devices every day, and yet most of us don't seem to
experience widespread memory issues. I can't even remember the last time my
desktop experienced a hard crash (well, actually I can; it was because of a
faulty... graphics card).

I wish my phone fared that well, but I'm not sure RAM would be the first
suspect for my general Android stability issues...

~~~
sho
> most of the concern is over very theoretical issues

I've seen photos and other binary files become corrupted while sitting on
RAID drives. The RAID swears they're fine, the filesystem swears they're fine;
both are checksummed, so I believe them. The only possibility I can see is
that they were corrupted while being modified or transferred on non-ECC
desktops connected to the RAID.

I'm not afraid of my computer crashing. I'm afraid of data I take great pains
to preserve being silently, indeed undetectably, corrupted while in flight or
in use. So that's why ECC is worth it to me.

~~~
semi-extrinsic
I'm curious: if storing lots of photos as .dng, .png or .jpg on ZFS without
ECC, one presumably gets bit flips eventually. How does this affect the files?
Do you just get artifacts in the photo? Or does the file become unreadable? If
so, can you recover the file (with artifacts)?

I guess the answer boils down to how much non-recoverable but essential-for-
reconstruction metadata there is in these file formats.
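As a rough stand-in for JPEG's entropy coding (which has the same fragility), here's what one flipped bit does to raw bytes versus a compressed stream. In the raw case the damage is one byte; in the compressed case the decoder typically rejects the stream or desynchronizes from the flip onward:

```python
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 200

# A bit flip in *uncompressed* data damages exactly one byte.
raw = bytearray(original)
raw[1000] ^= 0x01
raw_damage = sum(a != b for a, b in zip(raw, original))
print("raw bytes damaged:", raw_damage)        # -> raw bytes damaged: 1

# A bit flip in a *compressed* stream desynchronizes the decoder.
packed = bytearray(zlib.compress(original))
packed[len(packed) // 2] ^= 0x01
try:
    recovered = zlib.decompress(bytes(packed))
    corrupted = recovered != original
except zlib.error:
    corrupted = True                           # decoder gave up entirely
print("compressed stream corrupted:", corrupted)
```

This is why an uncompressed BMP with a flipped bit shows one wrong pixel, while a flipped bit in a JPEG's entropy-coded data can garble everything after it or make the file unreadable.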

~~~
sosuke
I had bit flips on a few JPGs and it rendered them useless. Luckily I had a
backup of a backup that had them uncorrupted. I'm still trying to find a
complete solution to this problem. Presumably the TIFF or BMP file formats are
more robust against bit flips.

I'd been reading so much about it over the past year or so I got to wondering
just how many times cosmic rays affect our brains and what kind of protections
we're running up in our skulls.

~~~
ianai
Our brains evolved through a chaotic, organic process. We're all the time
storing new data and even losing data (selective memory). I'm thinking there's
no mitigation process. If anything the random environmental noise might play
some role in consciousness.

------
fcanesin
Meh, I bought a Ryzen 5 1600 for $199 and an ASUS B350M for $29 at Micro
Center, and paired them with 16 GB of Crucial ECC DDR4-2400 for $149 (working
on Ubuntu 16.04, confirmed and stress tested)... so for $377 I have 12 threads
at 3.9GHz with ECC, expandable to 64GB. Thanks Intel, but no.

~~~
pixel_fcker
That's... completely irrelevant to the sort of people who might be interested
in this chip.

That's like saying, "I've got a double cheeseburger with curly fries for
$1.99. Thanks Intel, but no."

~~~
fcanesin
It is extremely relevant. The feature sets and performance targets overlap:
AnandTech includes the Core i7-7800X (a 6-core, 12-thread CPU) in the new
processor table and has a final comparison against the Ryzen 7 1800X, which is
the same chip as the Ryzen 5 1600 (with 2 cores disabled).

------
Noctix
Can this be stated as an effect of the Ryzen launch?

~~~
redtuesday
Probably, but more likely because of AMD's HEDT (high-end desktop) platform
called Threadripper, which will have up to 16 cores (32 threads).

Before AMD announced Threadripper, Intel had only a 12-core chip on the
roadmap for the X299 platform, and charged around $1700 for their 10-core
chip. Now they will be charging $2000 for 18 cores.

Competition is such a nice thing. Glad that AMD is back in the CPU game. Can
only be good for us customers.

~~~
positivecomment
I'm personally looking forward to the time when this competition drastically
lowers the price of mid-range CPUs, for the benefit of the normal people who
don't buy $2000 CPUs.

(Yes, I know that "normal people" don't even buy laptops anymore, let alone
desktops. Please excuse my fantasy-world in which people buy desktop computers
and even upgrade the amplifiers of their at least 7-piece stereo sound system)

~~~
lhl
The AMD Ryzen R7 1700 is a $320 8-core/16-thread processor. Intel's cheapest
8-core/16-thread processor is the i7-6900K, which sells for $1049. Even their
6/12 i7-6850K is over $600.

IMO the Ryzen R7s have been a huge "mid-end" win for anyone doing any sort of
multicore/CPU-intensive work. Without competition, Intel's been gouging the
market for the past few years.

------
Sephr
Intel has been selling hexa-channel DDR4 Xeons since 2015 to select customers.

For users like myself who are constrained by memory bandwidth, I would prefer
that they started selling their Skylake-SP Purley platform publicly. In some
configurations they even include a 100Gbit/s photonic interconnect and an FPGA
for deep learning acceleration.

I would gladly pay $2500-3500 for an 18-24 core Intel CPU with hexa-channel
DDR4 and PCIe 4.0 (or simply more than 44 lanes of 3.0).

~~~
emiliobumachar
Out of curiosity, why "to select customers" only?

I'd suppose the feeling of exclusivity isn't much of a sales point to
processor buyers.

If supply is constrained, it seems like demand could be similarly constrained
by a price hike.

Do they get better feedback from these select customers? Better acceptance of
eventual defects without bad PR?

~~~
jacquesm
> Out of curiosity, why "to select customers" only?

To justify extreme price differences so the 'select customers' can credibly
claim this expensive stuff gives them an edge their competitors will not be
able to easily match.

In an arms race arms that are supply constrained will fetch premium prices.

~~~
StillBored
But to the parent's point, that's all the more reason to open it up and charge
an even higher premium when other people come online and start a bidding war.

------
abalashov
Perfect for running modern JavaScript frameworks! /s

~~~
gpderetta
Isn't JS still mostly single threaded?

~~~
abrookewood
Hence the 'end sarcasm' tag: /s

~~~
gpderetta
it works on multiple levels!

~~~
fb03
but only one at a time :)

------
mrmondo
Very glad to see the clock speed didn't take a drop for the extra cores;
however, still no ECC, which is disappointing to say the least.

------
pulse7
So the ultimate question now is how much Threadripper will cost...

------
faragon
My next home CPU will be an AMD Ryzen.

~~~
theandrewbailey
I've been running a 1800X for about a month. Great chip, lousy RAM support
(hoping that BIOS announcement from last week turns out good).

I guess since I'm used to new high end GPUs being scarce for months after
launch, I wasn't expecting availability to be so good. Additionally, I didn't
expect the small aftermarket AM4 cooling selection.

~~~
nalllar
Anecdotally: running a beta BIOS with the new AGESA version, the announcement
seems to be accurate.

I'm now able to reach 3333MHz on 2x16GB of RAM which is specced for 3200.
Couldn't hit 2900MHz before; it wouldn't even boot.

~~~
theandrewbailey
That's encouraging, as I also have 2x 16GB 3200MHz sticks and a system that
won't boot if I don't run with the defaults.

------
StillBored
Really, Intel? I don't want 10+ cores just to get reasonable PCIe
connectivity. This is just another strike against these parts (after the lack
of ECC). I guess Intel is trying really hard to protect their server parts,
but they continue to gimp the high-end desktop parts (as if the removal of
multi-socket isn't enough).

I would really like to understand why Intel tries so hard not to make a
desktop part for people willing to spend a little more to get something that
isn't basically an i5 (limited memory channels, limited PCIe, smaller caches,
etc.).

~~~
old-gregg
Do you mind me asking what you'd be using those PCIe lanes for? Their 8c part
is good for a couple of NVMe drives and a video card; that's quite reasonable.
The only use for 44+ lanes I have in mind is a mining rig, but that's probably
beyond reasonable and quite niche. No?

------
peter303
Please put this in the next-gen MacBook to be announced in June. Jump to the
head of the line, Apple. Remember your roots.

------
drudru11
I am still getting a Ryzen build

------
vbezhenar
Well, Intel still didn't show anything better than the 8-core Ryzen. Their
processors cost more and require fancy motherboards which I'm not even sure I
can buy in my city.

~~~
old-gregg
I actually (sadly) won't be buying Ryzen because of this announcement. Based
on Ryzen/Skylake benchmarks, it looks like the i7-7820X will be a better deal:
15-20% performance advantage (because of better IPC + a faster clock) for only
$100 extra. I honestly do not know how to consume more than 28 PCIe lanes...

Also, Ryzen seems to struggle a bit on Linux vs Intel. I have seen people
complaining about its unwillingness to use the Turbo frequency, and its
unixbench numbers are unimpressive, particularly execl throughput.

------
nazri1
90s: CPU Hertz. 2000s: RAM sizes. 201xs: CPU cores?

~~~
jacquesm
Yes, that's roughly correct. Even so, in the consumer market a CPU with much
better single-threaded performance would outsell one with lower single-
threaded performance but more cores. In the server market it is the opposite.

------
kruhft
Good. Bring on more cores. I could use them.

------
m-j-fox
High-Cost Computing?

------
dboreham
But this one goes to....9..

------
known
Why not name it as i18

------
RichardHeart
I'm sick of having 0 to 1 choices in so many things. If a monopoly is bad,
then what's the next-worst number of companies? Two. Isn't it the government's
job to enhance the "free" market by forcing competition through forced open
on-boarding, or IP sharing, or break-ups, or really anything effective to
lubricate the wheels of capitalism?

~~~
qubex
Actually, some counterintuitive results from industrial organization (the
branch of economics that studies supply-side structure, amongst other things)
and game theory indicate that competition might be _greater_ between
oligopolistic firms than between those that exist in situations of perfect
competition (mainly because by "knocking out" an oligopolistic competitor you
gain a big chunk of market share, and thus sales volume and economies of
scale, whereas "knocking out" an anonymous perfect competitor nets you
(ideally) an infinitesimal additional market share, shared with an infinite
number of other competitors).

~~~
bryanlarsen
If my competitor offers a widget on Amazon for $X and I offer it on Amazon for
$X-1, I will capture ~100% of that market and the competitor will capture ~0%.

The internet's winner-take-all effect has both benefits and drawbacks for us
consumers.

~~~
qubex
This is the economically expected outcome iff the lower-priced firm has no
production capacity constraints and the products are undifferentiated
commodities, to the point that consumers have no decision to make other than
price.

------
pulse7
18-core Skylake-X is a luxury good: people will buy it just because it has 2
cores more than the ThreadRipper...

~~~
reitzensteinm
Or, you know, 4x the AVX throughput, stronger single threaded performance, and
far fewer performance gotchas.

Interested in using BMI2 for bit twiddling because you'd like to efficiently
manipulate bit matrices? PDEP has a reciprocal throughput of 1 on Skylake, but
18 on Ryzen. Guess it's time to make the tough choice between the top-end
Threadripper and a Core i3-6320.
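For anyone unfamiliar with PDEP, its semantics can be modeled in a few lines of Python (a behavioral sketch only; the hardware instruction does this in about a cycle on Skylake, which is the throughput gap being described):

```python
def pdep(value: int, mask: int) -> int:
    """Pure-Python model of BMI2 PDEP: scatter the low bits of `value`
    into the bit positions where `mask` has ones (low to high)."""
    result, bit = 0, 0
    while mask:
        low = mask & -mask        # isolate the lowest set bit of the mask
        if (value >> bit) & 1:
            result |= low
        mask ^= low               # clear that mask bit, move to the next
        bit += 1
    return result

# Scatter a 4-bit nibble into bit 0 of each byte -- the kind of bit-matrix
# column extraction/insertion PDEP makes cheap in hardware.
assert pdep(0b1011, 0x01010101) == 0x01000101
```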

Intel has positioned these well, if the Ryzen price tag rumors are correct.
If you're building a workstation with 16 cores, $1k for Ryzen or $1.7k for
Skylake is not a straightforward decision.

If Ryzen is more than that, I don't see it taking a big bite out of the
market. Which isn't surprising, as Intel did just halve the margins on their
enthusiast parts...

~~~
quickben
"If Ryzen is more than that, I don't see it taking a big bite out of the
market. Which isn't surprising, as Intel did just halve the margins on their
enthusiast parts..."

Either it is more than that and it will still take a huge chunk of the
market, or Intel simply reduced margins out of the goodness of their heart.

~~~
reitzensteinm
I don't understand what you're trying to say. My point was that a $1k
Threadripper would have absolutely destroyed Intel's 2016 lineup, but it'll
merely be competitive with what was announced today.

If the top end SKU is more than $1k then I don't see it taking much of the
market, due to the factors in my original post, factoring in the total cost of
a machine and inertia greatly favoring Intel.

~~~
quickben
I was trying to say that AMD did threaten Intel's market share, and Intel
countered by lowering prices.

As for the rest of your post, it depends. HEDT is diverse. I was looking for
a new CPU for my hobby project, 8-16 cores, still undecided. I have literally
zero FPU needs, but will take any integer power there is.

I also pay for electricity, so 65W AMD vs 140W Intel (at least for the
6-core) makes my decision very easy.

You also have to consider that AMD's HEDT is announced and arriving. Intel's
response is all marketing slides right now, full of TBDs. They are also
misleading people into thinking that the chunk of cache they moved from L3 to
L2 will magically be all IPC gains.

I currently own more Intel than AMD machines, but moving forward my TCO says
AMD is the clear winner.

I do hope Intel will come back, but realistically, they are still
overclocking Sandy Bridge. It may take them several years to produce a new
architecture.

