
Skylake's Linux power management is dreadful and you shouldn't buy until it's fixed - edward
http://mjg59.dreamwidth.org/41713.html
======
dman
For what it's worth, Microsoft has struggled with power management on mobile
Skylake as well - [https://www.thurrott.com/mobile/microsoft-
surface/62772/micr...](https://www.thurrott.com/mobile/microsoft-
surface/62772/microsoft-will-not-fix-power-management-issues-with-new-surface-
devices-until-next-year) . There have been multiple firmware upgrades, but if
you look at /r/surface on reddit, multiple people (myself included) still have
issues with power management and sleep on the Skylake Surface Pro 4s.

~~~
StillBored
It is nice when your device appears to always be on. But let's be real, my
Windows tablet isn't a phone.

So, I disabled InstantGo/Connected Standby on my Dell tablet, and the battery
now runs for days. I charge it once a week or so, unless I start playing games
on it (in which case the battery only lasts for something like 3-4 hours).
That device gets pretty heavy use for book reading / Netflix / web surfing.

The downside is that the power button is stupidly placed next to the volume
buttons, so I frequently hit it instead of the volume button, and that results
in a ~5-10 second hibernate/resume sequence. I've been considering how hard it
would be to physically cut the lead to it and the useless "Windows button" and
point-to-point solder the Windows button as the power button. (I haven't found
any software solutions for swapping them.)

~~~
duncans
Isn't there an option in Control Panel "Change what happens when I press the
power button"?

~~~
StillBored
The button does what I want. I just want it somewhere that is tactilely (is
that a word?) different from the volume buttons. I have the same problem with
Kindles: I hit the power button instead of the volume buttons. This is one of
the things that Apple got right.

------
hacknat
I can personally confirm this. I bought a brand new Dell XPS 15 with Skylake
(i7) in December. I installed Linux on it (kernel 4.3), and it has been a
power management nightmare from day 1. I've only ever gotten 2 hours from the
battery. I just sent it in because it power-cycles at random now, never making
it past the Dell splash screen anymore. When I run the hardware diagnostics,
sans hard drive, it dies in the middle of one of the processor tests.

Other people have been pointing out that Windows is struggling with Skylake as
well, and I've heard the same.

Skylake was touted by Intel as being one of their proudest achievements in
power management to date. My guess is that their changes were so drastic that
the software didn't keep up.

Edit:

I do have an NVMe hard drive, which does seem to cause some issues, for
reasons passing understanding.

~~~
UVB-76
Perhaps this explains why Apple have been so slow to get their Skylake
MacBooks to market.

I was disappointed other OEMs had beaten them to it, but it looks like they
just dumped hardware on the market without suitable software. Apple obviously
take responsibility for both.

~~~
usefulcat
"..it looks like they just dumped hardware on the market without suitable
software"

Intel has been doing this too with the NUC. If you go on the Intel forums,
you'll find people having serious problems right now with Skylake NUCs [1].

Even the previous generation of NUCs (that were released over a year ago)
_still_ have major bugs with Linux. For example, there's a BIOS bug that
reboots the machine instead of shutting down. They've known about it for at
least 5 months now and it's still not fixed [2]. And it seems likely to affect
all versions of Linux, not just some obscure variant.

[1]
[https://communities.intel.com/docs/DOC-110236](https://communities.intel.com/docs/DOC-110236)
[2]
[https://communities.intel.com/thread/88822](https://communities.intel.com/thread/88822)

------
speeder
I am planning on buying a 4690K.

LOTS of people keep nagging me that I should go for a Skylake instead, just
because it is "newer" and "better because it is new".

I really don't understand that logic.

Besides the issue pointed out in the article, Skylake has other problems:

Win7 doesn't work properly on it (and Win7 is the last Windows to emulate old
DirectX versions correctly on Windows itself).

Skylake wasn't designed to support analog video at all, something that is
still common in third-world countries, especially as people keep using old
monitors that never break, and are frequently superior to almost all
reasonably priced new monitors.

Skylake doesn't support OS X (and there are people with reasons to want that).

Skylake uses DDR4, which in third-world markets might not even be available
for sale, or might have some insane prices (2-3 times the DDR3 price).

Skylake has a couple of bugs, and more might pop up in the future.

Except in the US and maybe some EU countries, the price to build a Skylake
system is higher than the speed benefit it gives compared to Haswell (usually
at most 10%, frequently less...).

EDIT: I would also like to point out that Devil's Canyon has been reported to
work with DDR3 up to 2666 with no issues, and some mobos allow Devil's Canyon
to go up to DDR3 2800 without erroring or being unstable.

The thing is, that DDR3 can ALSO reach much lower latency than
similar-bandwidth DDR4. The few DDR3 vs DDR4 benchmarks done so far show that
usually there is no difference, and when there IS a difference, it is usually
DDR3 winning.

~~~
rz2k
Will the inclusion of HEVC/H.265 decoding in the integrated graphics be
beneficial?

On an older MacBook Pro I have with Intel HD Graphics 3000, the fans go
berserk on video that my iPad handles with passive cooling, without even
becoming warm.

I can imagine the codecs used on everything you encounter on regular sites
will quickly begin to expect optimizations built into Skylake.

~~~
cmurf
Yeah, why is that? My MacBook goes nuts like a hair dryer on full when playing
a YouTube video, where the giant iPad playing the same size and resolution
video is all _yawn_.

~~~
xgbi
I had this too, with Chrome on my MacBook Air 2012. Then I found this:
[https://chrome.google.com/webstore/detail/h264ify/aleakchihd...](https://chrome.google.com/webstore/detail/h264ify/aleakchihdccplidncghkekgioiakgal)

Google pushes its VP9 format so hard that it serves it to its Chrome users on
Mac, where it cannot be hardware-accelerated... This is ridiculous, especially
when it serves H264 when you browse with Safari!

I installed this extension and switched to H264 for all videos, bringing in HW
decoding, and since then my battery life has tripled when I go on a YouTube
binge.

~~~
Klathmon
It's a bit of a chicken and egg problem.

Hardware vendors won't support VP9 until it has some usage, and it won't get
usage until it has hardware support.

VP9 has some serious space savings, so it's extremely beneficial for Google to
use it when possible, and many users will appreciate faster loading video at a
higher quality.

Unfortunately they can't know when you'd prefer to have a more efficient
decoding experience, so they rank their codecs and use the "highest" one
available.

------
Animats
Intel: _" Long term reliability cannot be assured unless all the Low-Power
Idle States are enabled."_

Does that mean if you run the CPU too much, it will die quickly? Is there some
low limit on time at full power? Electromigration problems, perhaps?

~~~
sliverstorm
_Electromigration_

That would be my bet. They probably assumed, based on typical workloads, that
the part would be asleep a certain percentage of the time. Which is generally
a reasonable assumption for non-server parts.

More sleep means less activity, and less heat (which accelerates EM).

~~~
Animats
The desktop Broadwell CPU was supposedly canceled because they had lifespan
problems.[1] Overclocking sites report that Skylake processor life suffers
badly when overclocking is attempted. Now Intel is effectively saying their
CPU is intermittent-duty only. Intel may be having serious problems with
electromigration.

Is this the limit for CPU speed and transistor size?

[1] [http://wccftech.com/intel-debating-commercializing-
broadwell...](http://wccftech.com/intel-debating-commercializing-broadwellk-
binned-alleges-italian-report/)

~~~
userbinator
_Now Intel is effectively saying their CPU is intermittent-duty only._

That's really disturbing if true. Poor power management has resulted in
devices being warmer than they need to be and shorter battery life, but that
seems trivial in comparison to the hardware actually being damaged. IMHO I
would consider it a _flaw_ if a CPU did not last effectively forever at full
load --- older OSs which lacked any sort of power management basically kept
the CPU in this state all the time, and there's plenty of old hardware around
and working to show that it isn't unrealistic.

It seems they're heavily sacrificing lifespan for performance, which is
attractive to (most) users and also builds in some planned obsolescence, but
it's sad that what was once considered to have an indefinite lifetime is now
almost a consumable. To use a car analogy, this is like moving from a
conservatively designed engine that lasts hundreds of thousands of miles but
only produces 100HP to a top-fuel dragster engine that can produce thousands
of HP but can't run at full power for even a minute without destroying itself.

~~~
therein
Can you imagine a future where we get so used to producing microprocessors
this way for so long that another consideration for long-distance space travel
becomes "but will the CPU still work when the rover gets there"?

~~~
sbierwagen
Presumably Intel is having electromigration problems with their chips because
the feature size is so small-- 14nm. Space hardware tends to use rad-hard
chips, which use bigger transistors. Curiosity apparently uses the RAD750,
which has a 150nm feature size, and can be clocked up to a blistering 200MHz.

~~~
Animats
And costs a blistering $200K.

The US applies export controls to radiation-hardened ICs, which has resulted
in a dearth of rad-hard ICs. Nobody wants to run a silicon on sapphire fab any
more.

------
openfuture
This is literally the best timing ever. I was going to place the order for my
XPS 13 yesterday but ran into some banking trouble. Then, on my way to the
bank today, I was browsing HackerNews on my phone, and now I'm conflicted
about whether I should go through with the purchase or try to find the older
model.

Does anyone have a reasonable guesstimate as to how likely it is that this
gets fixed? Because it sounds to me (from this thread) like there is a flaw in
the design of the chip, and it won't be fixed so easily.

~~~
_Codemonkeyism
Or the worst: I ordered an XPS 13 a few days ago ;-)

------
csense
I was thinking of going AMD for my next system. This post solidifies that
decision. Hopefully when their next-gen arch is released [1] it won't be as
buggy as Intel's -- which seems like a fairly low bar.

[1]
[https://en.wikipedia.org/wiki/Zen_%28microarchitecture%29](https://en.wikipedia.org/wiki/Zen_%28microarchitecture%29)

~~~
8draco8
I've just built an AMD system for myself based on the FX 8320, and so far I
love it. I am a developer and I am constantly running multiple applications
(HTTP server, db, multiple Chrome tabs) which use multiple cores. The FX
series also supports VT and many other features [http://cpuboss.com/cpu/AMD-
FX-8320](http://cpuboss.com/cpu/AMD-FX-8320). On top of that, mobos usually
support 32/64GB RAM even at the cheaper end, and it's enough for casual
gaming. Not bad for a used £90 CPU with a CM Hyper 212 EVO cooler.

------
dsp1234
Note that several commenters on phoronix[0] are saying that they are not
having any problems (along with the commenter on this thread[1])

[0] - [https://www.phoronix.com/forums/forum/phoronix/latest-
phoron...](https://www.phoronix.com/forums/forum/phoronix/latest-phoronix-
articles/865131-well-known-linux-kernel-developer-recommends-against-buying-
skylake-systems#post865131)

[1] -
[https://news.ycombinator.com/item?id=11492693](https://news.ycombinator.com/item?id=11492693)

~~~
mjg59
Yes, the behaviour between desktop and mobile parts is very different on
current Intel. You'll only see this on mobile - I'm sorry for not having made
that clear.

------
TYPE_FASTER
A few semi-relevant notes:

* We've bought multiple Dell XPS 13 laptops with Windows off Amazon because
the same spec there was roughly $400-500 cheaper than buying the XPS Developer
Edition from the Dell website.

* We tried running Linux on a Dell non-XPS Skylake Core i7 laptop and it was
having many kernel panics. A quick Googling revealed people with a Skylake
chipset having a similar issue.

* I noticed just a couple days ago that Dell has updated their XPS Developer
Edition laptop to Skylake. One difference I see on the site is that it ships
with Ubuntu 14.04 SP1. I haven't read much about SP1, other than it looks like
it's about a year old.

So yeah, I'm still buying XPS laptops with 5th gen chipsets because we've had
issues with Skylake.

~~~
davidbanham
Keep in mind that the XPS units often have Broadcom wifi chipsets, while the
developer edition units have Intel. There is much better Linux driver support
for the Intel cards. The Precision 15" now comes in the same chassis as the
XPS and has the Intel chipset (and other goodies).

~~~
gshulegaard
This x1000. Always get the Precision 15" over the 13"...the hardware is just
better suited for Linux. The Sputnik team has to take chassis from other Dell
products and support them as manufactured...this is why the XPS 13 DE has
Broadcom. I have an XPS 13 DE from last year, and I wish every day I had just
ponied up and gotten the 15.

But keep in mind that Dell is releasing their drivers into upstream kernels,
so if you are on 14.04, you should be using the LTS Enablement stack in order
to avail yourself of the latest firmware with stability improvements (this is
how I fixed the Broadcom Null pointer bug).

[https://wiki.ubuntu.com/Kernel/LTSEnablementStack](https://wiki.ubuntu.com/Kernel/LTSEnablementStack)

You don't have to use the LTS Enablement Stack if you installed with newer
service pack .isos.
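For reference, enabling the 14.04 LTS Enablement stack comes down to pulling
in the right metapackages. The names below are the ones documented on the
Ubuntu wiki for the 14.04 point releases; double-check the current series on
that page before running, since the suffix changes with each enablement
release:

```shell
# Ubuntu 14.04 LTS Enablement (HWE) stack - kernel + X from a later release.
sudo apt-get update
sudo apt-get install --install-recommends \
    linux-generic-lts-xenial xserver-xorg-core-lts-xenial
# Reboot, then confirm you're on the newer kernel:
uname -r
```

After the reboot, `uname -r` should report the newer kernel series rather than
the original 3.13.x that 14.04 shipped with.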

The 15" also has Thunderbolt which is a must if you want to have a reasonable
docking solution for a Linux laptop.

~~~
suvelx
So _that's_ what LTSE is. I saw it in the compatibility table for the TB15
Thunderbolt dock.

Which, I really can't tell if it'll work or not. Really need to plug a 4k
monitor into it.

~~~
gshulegaard
Yeah, LTSE is how you can keep upgrading your kernel while staying on the LTS
releases of Ubuntu. Granted, you are limited to the kernel versions of the
previously released Ubuntu version, and there is a slight delay between when a
new version is released and when it becomes available to LTSE, but overall I
have found the stability to be good.

From what I found out (if I remember correctly), Thunderbolt docks should work
well with Linux kernel 3.19+. But take this with a grain of salt, since I
ended up with the 13 DE and therefore only have a mDP and couldn't try a
Thunderbolt dock.

------
nkurz
_Skylake's Linux power management is dreadful you shouldn't buy until it's
fixed_

Why is this phrased as being an issue with Skylake, rather than an issue with
Linux? That is, why not "Linux's power management on Skylake is dreadful and
you shouldn't install it until it's fixed?"

Also, as someone who is running a custom compiled Linux 4.4 on Skylake, what's
the best way to check what idle state is being used? Idle stats with
'powertop' shows the majority of idle time being spent in C8-SKL. Is this the
same as the PC8 he's talking about?
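For anyone with the same question, a sketch that reads the core idle-state
names straight from the standard cpuidle sysfs path, no powertop needed. Note
that C8-SKL in powertop is a _core_ C-state, while the PC8 in the article is a
_package_ C-state; they are related but not the same counter:

```shell
# List each cpuidle state the kernel exposes for cpu0, with its entry count.
# These are the same state names powertop shows in its Idle stats tab.
cpuidle_summary() {
    base="${1:-/sys/devices/system/cpu/cpu0/cpuidle}"
    for d in "$base"/state*; do
        [ -d "$d" ] || continue
        printf '%s: entered %s times\n' "$(cat "$d/name")" "$(cat "$d/usage")"
    done
}

cpuidle_summary
```

Package-state residency is the more direct answer to the article's question;
`turbostat` reports it in its package C-state columns (named along the lines
of %pc8, varying with the tool's version).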

~~~
zxcvcxz
>"Linux's power management on Skylake is dreadful and you shouldn't install it
until it's fixed?"

Because Linux is sometimes the only option. No other OS has the same options
as far as containerization.

~~~
techdragon
Ok, so this is completely wrong. Both FreeBSD and the entire family of
OpenSolaris derivatives offer:

\- Superior "container" support to Linux (more secure, better tested, better
designed, not a hacky bolted-on thing like cgroups.)

\- Their source code; they are open source.

Or did you actually just mean to say "Only Linux has Docker"?

~~~
tehbeard
1\. Reference for this superior "container" technology? (Genuinely interested
to read about it.)

2\. Source access to the OS doesn't mean squat unless you can grok OS code.
Also, how is BSD's open source different from Linux's?

~~~
neerdowell
They're likely referring to Solaris Zones and FreeBSD Jails.

------
lllllll
I got my Asus UX303UA (which won over the Dell XPS 13 option) - i7-6500U -
last week. I installed Linux Mint (MATE edition, though I mainly use i3wm).
After upgrading to kernel 4.5, I get 8-9h of battery with normal use (Vim,
Firefox with several tabs, rails/node/redis/postgres servers running...), even
longer if I'm on and off the computer. I'm really happy with it so far.

Besides, I could add +4GB of RAM and replace the SATA HDD with an SSD.

------
xcasex
This reminds me of something... Oh, right: Bay Trail support. That's still
lacking as well, and that's also a C-state issue.

------
ciokan
[https://gist.github.com/ciokan/1bb8a5a23e00d1f4344b04d88debc...](https://gist.github.com/ciokan/1bb8a5a23e00d1f4344b04d88debc59c)

Good or bad? I have no idea what the C8 numbers should look like, which is why
I'm asking. Dell XPS 9550, Ubuntu 16.04.

~~~
mjg59
Well, that's better than most people are seeing - getting deeper than PC6 may
involve some graphical things. So the question now is why you have such
different results to me. Can you paste the output of sudo lspci -vvv
somewhere, along with cat /sys/devices/virtual/dmi/id/bios_version?
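For anyone else in the thread gathering the same data, a small sketch (the
output file name is arbitrary) that bundles both into one paste-ready file:

```shell
# Collect BIOS version and full lspci output for pasting into a gist.
# Run with sudo to get the complete capability dump from lspci.
out=skylake-diag.txt
{
    echo '== bios_version =='
    cat /sys/devices/virtual/dmi/id/bios_version 2>/dev/null || echo 'unavailable'
    echo '== lspci -vvv =='
    lspci -vvv 2>/dev/null || echo 'lspci unavailable (re-run with sudo for full detail)'
} > "$out"
echo "wrote $out"
```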

~~~
ciokan
lspci:
[https://gist.github.com/ciokan/9b0c9b310ecb62b2b2d2cdeacb5f8...](https://gist.github.com/ciokan/9b0c9b310ecb62b2b2d2cdeacb5f8d18)
bios_version: 01.01.15

~~~
mjg59
Thanks, I'll try to replicate your PCI configuration and see if I can trigger
any behavioural changes.

~~~
shadeslayer
I think I'm seeing a similar usage on my 9550 here
[https://gist.github.com/shadeslayer/4184b49505d817a9a60d5b0a...](https://gist.github.com/shadeslayer/4184b49505d817a9a60d5b0a2ee83513)

------
blinkingled
I guess with all the pressures and cost cuts in QA, you'd be better off
sticking with a Haswell-generation machine for the next few years. They are
perfectly fine, by the way - Haswell Xeons for your workstation and anything
around the i7-4xxx for your laptop will do you fairly well for the next 3
years, or however long it takes Intel to put out a power-efficient, stable and
well-supported part.

------
MrQuincle
Seems a bit exaggerated. I now have a Yoga 900 running 4.4.0-18-generic. It's
fine for working through the entire day (6 hours).

The only thing that's still causing hiccups is the combination of WiFi and
Bluetooth on the same chip, which doesn't play nice if I, for example, stream
Spotify to my Bluetooth speakers. However, that has nothing to do with
Skylake.

------
jonotime
Interesting, since I continue to be impressed by my Skylake's low power
consumption. I built a desktop (Arch Linux, kernel 4.2 - 4.4) a few months
back, and my Kill A Watt generally shows 25-35 watts. That's way below my old
desktop. It's also plenty cool.

~~~
viraptor
As mentioned in the article, this applies to the mobile versions of the CPU
only (so laptops, etc.). It does not affect desktop versions.

------
ikeboy
I had an HP Skylake laptop running Ubuntu that overheated last month, then
refused to boot. I had constant issues when resuming from suspend; it would
sometimes refuse to start, or would lose wifi.

Now wondering if it's connected.

------
bb85
Anecdotal, but I've been running Ubuntu 15.10 for a few months on a Dell XPS
15 (9550), and with kernel 4.4 everything works flawlessly.

When idling, power consumption is 8W, and powertop shows 30% C8 and 70% C10.

~~~
ggreer
Hmm... 8 watts sounds extremely high to me. My Broadwell laptop uses less than
3 watts when idle, and most of that is the display. Even my old Ivy Bridge
MacBook Air from 2012 idles at 5W. Is there some peripheral responsible for
the higher idle on your system? Maybe a hard disk or dedicated GPU? I'm
curious about what's causing the difference.

CPU/GPU power usage: (≈1W)
[http://geoff.greer.fm/photos/screenshots/Screen%20Shot%20201...](http://geoff.greer.fm/photos/screenshots/Screen%20Shot%202016-04-13%20at%2020.05.54.png)

Total power usage: (2.9W)
[http://geoff.greer.fm/photos/screenshots/Screen%20Shot%20201...](http://geoff.greer.fm/photos/screenshots/Screen%20Shot%202016-04-13%20at%2020.07.33.png)

~~~
bb85
You were right! I barely use it on battery, so I never investigated.

I installed tlp, and the system went down to 5.5W idle with normal programs
open. On a fresh boot, it's as low as 4.9W.

Turning the screen off shaves 2.5W, and disabling the wireless card saves ~1W.

I suspect the SSD + HDD are making part of the difference.
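For anyone wanting to reproduce the drop described above, the steps are just
the following (assuming Ubuntu; tlp's defaults did the work here, with no
hand-tuning):

```shell
# Install TLP, apply its default power-saving settings, then re-check idle draw.
sudo apt-get install tlp
sudo tlp start
sudo powertop   # compare the power estimate and Idle stats before/after
```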

------
suprjami
OK, it's broken and doesn't work. I'm getting over 10 hours battery on a
Skylake laptop, but it's broken and doesn't work.

~~~
mjg59
Intel's documentation says that long-term reliability of the hardware isn't
assured in this configuration, so yes, it's broken.

~~~
spoiler
This is so frustrating, especially when you buy a $3000+ laptop (which is a
lot for a laptop in Croatia) only to have to find this shit out on your own,
because it's a very specific thing you need to Google.

And it is even more frustrating that people complain it's Linux's fault
because it "works fine on Windows" (no it doesn't, actually[1]), when hardware
vendors suck up to Microsoft and have an "oh yeah, whatever" attitude towards
Linux users most of the time.

And then people ask why Linux runs better on "old hardware" - it's because it
sometimes takes _years_ for all the shit to be ironed out. Ugh! So annoying.

Sorry for useless rant.

Edit: Hardware vendor shaming needs to be a thing. Most of the time these
types of things go "quietly", and this is why nobody at the "management" level
cares.

[1]:
[https://news.ycombinator.com/item?id=11153940](https://news.ycombinator.com/item?id=11153940)

~~~
kyrra
Intel actually provides a lot of code to the Linux kernel. At least as of a
year ago they were the top contributor to it:
[http://www.linuxfoundation.org/news-
media/announcements/2015...](http://www.linuxfoundation.org/news-
media/announcements/2015/02/linux-foundation-releases-linux-development-
report)

~~~
spoiler
Yes, and that is really great, but it would be even greater if they adequately
supported their newer & "consumer-level" (for a lack of better expression)
hardware as well.

