
Intel Skylake Launch, with Architecture Analysis - MekaiGS
http://www.anandtech.com/show/9582/intel-skylake-mobile-desktop-launch-architecture-analysis
======
nhaehnle
The most interesting thing to me is that Intel apparently stopped publishing
transistor counts starting with the 14nm node.

This is significant because as structure sizes become smaller, the
restrictions on possible layouts (so-called DRCs, design rule constraints)
become ever stricter. For example, you can't just place wires wherever you
want; you have to take into account the relationship to other wires. With
stricter rules, the end result _may_ be that the effective scaling achieved is
worse than what the structure size suggests, because the rules force a lower
density of transistors.
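
A back-of-envelope sketch of that effect (the 20% DRC penalty here is purely
made up, just to illustrate the idea):

    #include <stdio.h>

    int main(void) {
        /* Ideal area scaling from a 22 nm to a 14 nm node: linear dimensions
           shrink by 14/22, so area (and thus density) improves quadratically. */
        double ideal = (22.0 / 14.0) * (22.0 / 14.0);   /* ~2.47x density */

        /* Hypothetical DRC penalty: if the layout rules waste 20% of the area,
           the effective density gain is noticeably smaller. */
        double effective = ideal * 0.8;                 /* ~1.98x density */

        printf("ideal density gain:     %.2fx\n", ideal);
        printf("effective density gain: %.2fx\n", effective);
        return 0;
    }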

So what are Intel hiding? Are they far ahead of the competition in terms of
DRCs and don't want others to know how much, or are they struggling (like
apparently everybody else) and want to hide a less-than-ideal effective
scaling? Obviously, your guess is as good as mine, but it's certainly
fascinating to watch the competition as Moore's law is coming to an end.

~~~
jonah
How has/hasn't CPU reliability changed over the decades? Are these generally
as sturdy as a 486? A 286 machine was mentioned in a recent article here;
will you be able to find a running Skylake machine in 2045? How does the process
size affect this? What other factors?

~~~
hga
_A 286 machine was mentioned in a recent article here, will you be able to
find a running Skylake machine in 2045?_

I've been thinking about this lately, and between the jihad against lead
([https://en.wikipedia.org/wiki/Tin_pest](https://en.wikipedia.org/wiki/Tin_pest))
and these machines storing the BIOS etc. in flash memory, I'm beginning to
doubt it.

------
tychuz
Intel processors are great and really blow the competition away (on desktops).
But that's kind of annoying - when there's no competition, Intel holds off on
releasing high-end CPUs (with 6 cores) on a new architecture, and their prices
are insane...

~~~
bhouston
This is a real issue. At my company, we always buy the $300-$400 CPUs but they
are barely getting faster with each generation. The last time there was a big
jump was with Nehalem.

~~~
gh02t
The performance jumps just got more specialized. For instance, the
introduction of AVX instructions made a _huge_ performance bump for some
areas, especially scientific computing, although you needed crazy cooling to
run a program using AVX extensively for more than a few seconds. I can
remember getting around an 80% performance jump with one code I used,
compiled with vs. without AVX. x86 hardware virtualization extensions were
another enormous performance boost. Hardware transactional memory was
supposed to be a big (order of magnitude) boost to some things on Haswell,
but they ended up having to disable it.
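
As a rough illustration, this is the kind of loop AVX helps with: processing
eight floats per instruction instead of one. A toy sketch with hand-written
intrinsics; the 80% jump mentioned above presumably came from compiler
auto-vectorization (e.g. gcc -O3 -mavx) rather than code like this:

    #include <immintrin.h>
    #include <stdio.h>

    /* Add two float arrays, eight elements at a time.  Compile with -mavx. */
    void add_avx(const float *a, const float *b, float *out, size_t n) {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; i++)              /* scalar tail for leftover elements */
            out[i] = a[i] + b[i];
    }

    int main(void) {
        float a[16], b[16], c[16];
        for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
        add_avx(a, b, c, 16);
        printf("%.1f\n", c[15]);        /* prints 45.0 */
        return 0;
    }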

Through all this time, though, I haven't noticed a massive _overall_
performance boost. If all I wanted to do was browse the web and read email,
I'd probably still be using my Core 2 Duo MBP from forever ago. The speed
boosts have been piecemeal: one generation improves one thing, the next
something else. This is different from Ye Olden Days, when I upgraded from a
386 to a Pentium II to a Pentium 4 and each upgrade was so dramatic that it
completely redefined what I could do with my computer. The last real
performance jump I can remember was the Core Duo upgrade that brought real
multicore.

I do kinda miss my beloved old SPARCstation though. Makes me wonder what
computing looks like in the alternative dimension where SPARC or one of the
other architectures had taken over the PC market like Intel ultimately did.

~~~
uxcn
There have been lots of other performance improvements, but I think they're
less visible than other stuff. The CRC instruction that got added in SSE4.2
seems to get fairly wide use in hashes and stuff like btrfs. I think
_rdrand_, _aes_, and _clwb_ are interesting ones as well. I guess power
consumption is kind of another visible gain.
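
For reference, that instruction computes CRC-32C (the Castagnoli polynomial,
which btrfs uses). A minimal sketch of using it via the SSE4.2 intrinsic
(compile with -msse4.2):

    #include <nmmintrin.h>   /* SSE4.2 intrinsics */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* CRC-32C over a buffer, one byte per instruction; real implementations
       use the 64-bit form (_mm_crc32_u64) for higher throughput. */
    static uint32_t crc32c(const uint8_t *buf, size_t len) {
        uint32_t crc = ~0u;
        for (size_t i = 0; i < len; i++)
            crc = _mm_crc32_u8(crc, buf[i]);
        return ~crc;
    }

    int main(void) {
        const char *msg = "hello";
        printf("crc32c(\"%s\") = 0x%08x\n", msg,
               (unsigned)crc32c((const uint8_t *)msg, strlen(msg)));
        return 0;
    }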

I think the biggest visible change recently, though, at least as a software
engineer, is the continuously falling cost and increasing availability of
memory. I can literally store an entire project in memory and not have to
worry about any network or disk bottleneck.

------
TheLoneWolfling
Call me crazy, but I want a processor that's just that - a processor. No
integrated GPU that's just sitting there taking chip resources (die space,
power leakage, etc.) that could be used for better things (designing for
lower leakage current, async sections of the chip, etc.).

Among other things, discrete GPUs blow integrated GPUs out of the water, and
the time-to-obsolescence of a GPU isn't anywhere near that of a CPU. It also
makes sense from a cooling perspective - it's a whole lot easier to cool two
chips in separate areas than one larger chip, generally speaking.

~~~
Someone1234
For desktops, sure. But the desktop form factor is dying.

Where it matters is mobile. Even if you have a dedicated graphics processor,
it makes sense to have integrated and then to have it auto-switch as needed.
This saves on electrical power, reduces heat, and may extend the life of your
mobile devices (since heat is damaging).

Ultimately, if you want to force dedicated graphics on 24/7 then normally you
CAN, but integrated simply gives you more options if you wish to extend the
battery life of your device significantly.

~~~
api
"Dying" is a pet peeve of mine. Usually it's not true.

The desktop form factor is declining significantly in popularity in the
mainstream market in favor of laptops and mobile devices. The reason is
obvious: laptops and mobiles are portable, and the average user (and even many
pro users) do not need _that_ much power in their local device.

It's still very popular in the gamer and professional workstation market, and
likely will be for a long time. It's just hard to cram the processing power
that gamers, hard-core developers, CAD and simulation users, etc. need into
something as small as a laptop without creating what's comically referred to
as a "ball burner" or "weenie roaster." I'm sure there's a female equivalent
expression but it's even less polite. :P

Intel has many-core options for these machines, but they tend to lag a little
behind. If you have deeper pockets and _really_ want power you can always put
a high-end GPU in a server board with 2-4 Xeons and put it in a tower case.
Now you have a data center node with a 4K monitor on it.

~~~
coldtea
> _It's still very popular in the gamer and professional workstation market,
> and likely will be for a long time. It's just hard to cram the processing
> power that gamers, hard-core developers, CAD and simulation users, etc. need
> into something as small as a laptop without creating what's comically
> referred to as a "ball burner" or "weenie roaster."_

Gaming is a niche market mostly for younger ages (the number of people with
"gaming rigs" drops rapidly after a certain age), and most hard-core
developers use laptops -- the laptops you see at each and every dev conference
are their main boxes, with or without external monitors.

As for CAD and simulation, that's too niche to even mention. Of course some
people will always have a need for a more powerful form factor. But if CAD
users are part of the counter-argument for "it's not dying", then it very much
is.

~~~
hga
I'm not sure you're right; at least of late, Supermicro has been selling
X[9/10]SAE workstation motherboards, which will take a Xeon or consumer chip
and ECC or non-ECC memory. No IPMI, lots of the sorts of slots we want, and
onboard sound. The first X9SAE model, with which this message is being
drafted, would appear to have been successful enough that they repeated it in
the next generation:

[http://www.supermicro.com/products/motherboard/Xeon/C216/X9S...](http://www.supermicro.com/products/motherboard/Xeon/C216/X9SAE.cfm)

[http://www.supermicro.com/products/motherboard/Xeon/C220/X10...](http://www.supermicro.com/products/motherboard/Xeon/C220/X10SAE.cfm)

They didn't make such a class of boards in the '8' generation.

Not dead yet!

------
skrause
It seems that none of the Skylake CPUs suitable for new MacBook Pros will ship
until early 2016. I was hoping for a new MBP this fall, but it seems that my
old MacBook Air has to do its job a little longer.

~~~
reitzensteinm
I'm really looking forward to a benchmark of the GT4 graphics; I'm a game
developer, and it might just be the first time I can work on a laptop with
integrated graphics.

Of course, it'll still be a rare and power-hungry configuration, so it's not
like I'll suddenly be using a MacBook, but it's a nice progression.

~~~
bd
_I'm a game developer, and it might just be the first time I can work on a
laptop with integrated graphics._

Please don't :). New iGPUs may be catching up to old dGPUs, but meanwhile new
dGPUs are just skyrocketing in power far beyond them.

I made this chart to easily see relative performance of most common notebook
GPUs:

[http://alteredqualia.com/texts/notebooks/nvidia-gpus.png](http://alteredqualia.com/texts/notebooks/nvidia-gpus.png)

Or if you are interested in the relative performance of both desktop and
notebook GPUs:

[http://alteredqualia.com/tools/gpus/](http://alteredqualia.com/tools/gpus/)

What Intel is doing is great for raising the minimum specs you can expect in
notebook graphics, but you can do much better than that; even cheap modern
dGPUs wipe out the best iGPUs. You can extrapolate Skylake's Iris Pro from
Intel's marketing claims vs. older Iris Pros: it's going to be better, but not
Earth-shatteringly better.

And this gap is just going to get wider in 2016, with HBM2 and 16 nm node
sizes finally coming to GPUs (both to Nvidia with Pascal and AMD with Arctic
Islands).

~~~
wlesieutre
Even if you're developing a high-end, crazy-graphics game in UE4 on your
desktop with SLIed GTX 980s, it's still useful to be able to work on it from a
laptop at more than 2 FPS.

~~~
reitzensteinm
That's exactly my situation. Going to 720p requires half the fill rate of
1080p. Turn off some postprocessing and antialiasing and you might get double
the performance again.

So if you're at 1/4 peak performance, you're still running a smooth game
albeit not quite at full detail.

If you're at 1/16 peak performance... things aren't looking too good by the
time you get your game smooth.
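
The pixel counts behind that estimate (720p is actually about 4/9 the pixels
of 1080p, hence "roughly half" the fill rate):

    #include <stdio.h>

    int main(void) {
        double p1080 = 1920.0 * 1080.0;   /* 2,073,600 pixels */
        double p720  = 1280.0 *  720.0;   /*   921,600 pixels */
        printf("720p / 1080p pixel ratio: %.2f\n", p720 / p1080);  /* ~0.44 */
        return 0;
    }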

------
MichaelGG
Intel SGX is possibly huge. With a trusted enclave that's verifiable, you
could do things like verify that a server is actually running certain code
(like a bitcoin mixer).

Of course it also allows real DRM, where a remote server can verify you're
running unmodified code.

But how does the key management work?

My personal interest is the continuing quest for a machine strong enough to
develop on, but that doesn't warm my hands at all. MacBooks are insanely hot
(how can Apple even pretend to be about quality with those designs??), and an
X250 ThinkPad is "alright" if I aggressively throttle the processor. Seems
like the perf improvements are effectively dead, so maybe we'll see cool
laptops. Though OEMs seem to screw up as much as possible, so who knows...

~~~
userbinator
_Of course it also allows real DRM_

I think that's the scariest part. 16 years ago, people strongly opposed the
processor serial number in the P3
([https://news.ycombinator.com/item?id=10106870](https://news.ycombinator.com/item?id=10106870)),
something that seems almost harmless compared to the (anti-user) "security"
features in today's hardware. It's not only DRM; no doubt other ways of taking
control away from the user will be found for it.

With performance not being all that much better, I'm personally going to stay
on older, "more free" hardware for a while.

~~~
anonymousDan
I would think the real target market is cloud computing though.

------
chipaca
Am I alone in wanting a small (NUC or (thin?) mITX) board with a Core M on it?
:-(

~~~
ju-st
No! I still want to try out a Core M with a beefy cooling solution. You could
propably beat mobile i3/i5's with it while having a very low idle power
consumption...

------
currysausage
I wonder why there were so few laptops with iX-5... processors on the market.
Most laptops with 5th generation Cores seem to have low-power U processors,
but even these are hard to find from some manufacturers. Did the full-power
versions arrive too late, so manufacturers just chose to wait for Skylake?

~~~
jychang
Pretty much. Apple is a good example; the 15-inch MacBook Pro skipped over the
5xxx series, and the next update should be a Skylake chip.

------
throwaway7767
I've seen a lot of references to Skylake "supporting wireless charging" but
with no explanation of what that means. I read this article hoping for an
answer, but it was not mentioned.

Can someone enlighten me? What do the chipset and CPU have to do with the
power supply method? I'm assuming they're not including an RF power antenna
on-die, so is this just code for "we got peak power consumption below the
threshold practical with today's wireless power transmission devices"?

~~~
wmf
Intel doesn't market chips; they market "platforms" that are bundles of chips
that they want you to buy together. Centrino was the most infamous one and
maybe now Intel is trying to bundle some wireless charging chips for tablets
or something.

------
uxcn
I'm honestly still trying to figure out the implications of eDRAM. It seems
like the main benefit is offloading traffic from the memory bus, particularly
now that it's coherent.

Am I missing something?

~~~
eloff
It's to increase the effective memory bandwidth of the system. GPUs have very
high memory bandwidth requirements, so the models with the high end GPU would
be limited by the memory subsystem. Using the extra cache the amount of
fetches from memory can be reduced, increasing the amount of memory bandwidth
available.
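
A simplified way to think about it (the hit rate and DRAM figure below are
invented for illustration, and the eDRAM's own bandwidth limit is ignored):
every request served out of the eDRAM never touches the DDR bus, so the GPU
effectively sees more bandwidth than DRAM alone provides.

    #include <stdio.h>

    int main(void) {
        double dram_gbps = 25.6;   /* e.g. dual-channel DDR3-1600            */
        double hit_rate  = 0.6;    /* assumed fraction of requests hitting
                                      the eDRAM cache                        */
        /* Only the misses consume DRAM bandwidth, so the total serviceable
           traffic is dram_gbps / (1 - hit_rate).                            */
        double effective = dram_gbps / (1.0 - hit_rate);
        printf("effective bandwidth seen by the GPU: %.1f GB/s\n", effective);
        return 0;
    }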

~~~
uxcn
That makes sense. Using it as a victim cache seemed kind of odd, but I think
it makes more sense now. It seems like it's still an evolving technology. I
wonder if there are corresponding performance counters.

------
tkinom
"4.5W ultra-mobile Core M" \- does it means if someone uses it with a typical
2250 mAh cell phone battery, it only last half hour?

That's a long way to catch up with ARM SOC, right?

~~~
nyc640
That's the TDP of the chip, i.e. the maximum amount of heat it can generate,
not necessarily the power draw. Also, you're assuming it's running at full
load 100% of the time. I don't have a source, but for comparison I believe
Apple's Ax chips have a TDP of ~2-3 W.
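
A quick back-of-envelope check of the numbers (assuming a 3.7 V nominal cell
voltage; since a chip rarely sits at its TDP, real battery life would be
longer still):

    #include <stdio.h>

    int main(void) {
        double capacity_wh = 2.250 * 3.7;   /* 2250 mAh at 3.7 V ~ 8.3 Wh */
        double tdp_w       = 4.5;           /* Core M TDP                 */
        printf("hours at sustained TDP: %.1f\n", capacity_wh / tdp_w);  /* ~1.9 */
        return 0;
    }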

------
glasz
i'm pretty pissed. skylake-h chips will only have hd 530 graphics which is
slower than the current version. and still no hdmi 2.0. just laughable.

~~~
DiabloD3
HDMI 2.0 support is supplied by the motherboard and is not a required feature.
Skylake's Alpine Ridge can drive two 4K 60 Hz displays and supports HDMI 2.0
operation.

~~~
bryanlarsen
Alpine Ridge is an extra cost option that won't be included in the vast
majority of laptops.

~~~
coldtea
Then don't buy one of the "vast majority of laptops" but one that includes it.

