So, I disabled InstantGo/Connected Standby on my Dell tablet, and the battery now runs for days. I charge it once a week or so, unless I start playing games on it (in which case the battery only lasts for something like ~3-4 hours). That device gets pretty heavy use for book reading/Netflix/web surfing.
The downside is that the power button is stupidly placed next to the volume buttons, so I frequently hit it instead of the volume button, which results in a ~5-10 second hibernate/resume sequence. I've been considering how hard it would be to physically cut the lead to it and the useless Windows button, and point-to-point solder the Windows button in as the power button. (I haven't found any software solution for swapping them.)
Other people have been pointing out that Windows is struggling with Skylake as well, and I've heard the same.
Skylake was touted by Intel as being one of their proudest achievements in power management to date. My guess is that their changes were so drastic that the software didn't keep up.
I do have an NVMe SSD, which does seem to cause some issues, for reasons passing understanding.
I was disappointed other OEMs had beaten them to it, but it looks like they just dumped hardware on the market without suitable software. Apple, obviously, takes responsibility for both.
Intel has been doing this too with the NUC. If you go on the Intel forums, you'll find people having serious problems right now with Skylake NUCs.
Even the previous generation of NUCs (which were released over a year ago) still have major bugs with Linux. For example, there's a BIOS bug that reboots the machine instead of shutting down. They've known about it for at least 5 months now and it's still not fixed. And it seems likely to affect all versions of Linux, not just some obscure variant.
It's one thing to be running hot all the time; it's an entirely different thing to actually cause damage because of it.
This reminds me of some laptops a few years ago which would overheat just sitting in the BIOS setup screen for too long, because the fans were entirely software-controlled and that hadn't been loaded yet.
I don't think power management should ever be left to software entirely if it can result in situations like this - software can make the CPU go into a lower power state, but the CPU should know when it's too hot and throttle itself without any intervention from software.
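For what it's worth, the software side of that picture is visible on Linux through sysfs: the kernel exposes each thermal zone's temperature, which is what a software governor acts on. Here's a minimal sketch of reading it (this assumes the standard Linux thermal sysfs layout; zone names and availability vary by machine):

```python
import glob
import os

# Minimal reader for the Linux thermal sysfs interface. Temperatures are
# exposed in millidegrees Celsius; unreadable zones are skipped.
def read_thermal_zones(pattern="/sys/class/thermal/thermal_zone*"):
    """Return {zone_type: temp_celsius} for every readable thermal zone."""
    zones = {}
    for zone in glob.glob(pattern):
        try:
            with open(os.path.join(zone, "type")) as f:
                ztype = f.read().strip()
            with open(os.path.join(zone, "temp")) as f:
                zones[ztype] = int(f.read().strip()) / 1000.0
        except (OSError, ValueError):
            continue
    return zones

print(read_thermal_zones())
```

The point stands either way: anything a script like this could react to, the CPU's own thermal trip logic should already be handling in hardware.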
(I'm well aware it's not a realistic experiment, but it's a fun demonstration nonetheless.)
That video was made because AMD didn't incorporate such protective features back in that era. That's why their chip absolutely fries itself within moments of removing the heatsink. 100W through an area the size of a Tic-Tac (die area dedicated to cache uses almost no power) is a recipe for catastrophic failure if left unchecked.
At which point I'd take the side of my PC off, and manually spin the fan until it got the idea.
I run Debian testing (aka Stretch), which ships with Linux 4.4 and Wayland. I had weird issues with X11, but I blamed those on the Nvidia/dual-card setup and didn't bother investigating.
Besides that, I've had no issues at all with the laptop. I haven't even run the BIOS update yet. The only annoyance was the lack of select-to-copy in Wayland/GNOME, which apparently is fixed in the new version. :-)
LOTS of people keep nagging me that I should go for Skylake instead, just because it is "newer" and "better because it is new".
I really don't understand that logic.
Besides the issue pointed out in the article, Skylake has other problems:
Win7 doesn't work properly on it (and Win7 is the last Windows version to emulate old DirectX versions correctly within Windows itself).
Skylake wasn't designed to support analog video at all, something that is still common in third-world countries, especially as people keep using old monitors that never break and are frequently superior to almost all reasonably priced new monitors.
Skylake doesn't support OS X (and some people have reasons to want that).
Skylake uses DDR4, which in third-world markets might not even be available for sale, or might carry insane prices (2-3 times the DDR3 price).
Skylake has a couple of bugs, and more might pop up in the future.
Except in the US and maybe some EU countries, the price of building a Skylake system is higher than the speed benefit over Haswell justifies (usually at most 10%, frequently less...).
EDIT: I would also like to point out that Devil's Canyon has been reported to work with DDR3 up to 2666 with no issues; some motherboards allow Devil's Canyon to go up to DDR3-2800 without errors or instability.
The thing is, those DDR3 kits can ALSO reach much lower latencies than similar-bandwidth DDR4. The few DDR3 vs. DDR4 benchmarks done so far show that usually there is no difference, and when there IS a difference, it's usually DDR3 winning.
On an older MacBook Pro I have with Intel HD Graphics 3000, the fans go berserk on video that my iPad handles with passive cooling, without even getting warm.
I can imagine the codecs used for everything you encounter on regular sites will quickly come to expect the optimizations built into Skylake.
5 years ago, I had a desktop computer that was 5 years old. It plain couldn't handle standard-definition Hulu video at the time and struggled a bit with YouTube, but could play good-bitrate 720p H.264 smoothly with mplayer, without hardware acceleration. I'm sure hardware acceleration helps, but it seems to partially make up for the inefficiency of rendering in a browser context.
Google pushes its VP9 format so hard that it serves it to Chrome users on Mac, where it cannot be hardware-accelerated. This is ridiculous, especially when it serves H.264 to the same machine when browsing with Safari!
I installed this app and switched to H.264 for all videos, bringing in hardware decoding, and since then my battery life has tripled when I go on a YouTube binge.
Hardware vendors won't support VP9 until it has some usage, and it won't get usage until it has hardware support.
VP9 has some serious space savings, so it's extremely beneficial for Google to use it when possible, and many users will appreciate faster loading video at a higher quality.
Unfortunately, they can't know when you'd prefer a more efficient decoding experience, so they rank their codecs and use the "highest" one available.
Safari is served H.264 because it does not support VP9 at all. On the other hand, VP9 streams on YouTube are significantly smaller than similar-quality H.264 streams.
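As a rough illustration of that "rank and serve the highest" idea, here's a hypothetical sketch. The preference list below is an assumption for illustration, not YouTube's actual selection logic:

```python
# Hypothetical server-side codec selection: rank codecs by preference
# (best compression first) and serve the first one the client supports.
# The ranking itself is an illustrative assumption.
PREFERENCE = ["vp9", "h264"]

def pick_codec(client_supported):
    """Return the highest-ranked codec the client supports, or None."""
    for codec in PREFERENCE:
        if codec in client_supported:
            return codec
    return None

print(pick_codec({"vp9", "h264"}))  # a Chrome-like client gets vp9
print(pick_codec({"h264"}))         # a Safari-like client falls back to h264
```

The missing input, as the parent says, is whether the client would rather trade bandwidth for hardware-accelerated decoding.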
There's no H.265 streaming anywhere at this stage. Did you mean H.264?
There are quite a few rumours of hardware support from the A8 onwards, but nothing official from a quick Google search. Several sources say it's used in FaceTime.
However, while that older MacBook Pro is just new enough to handle virtual machines pretty well, and the CPU is fine for everything I wouldn't already send to a cluster somewhere, the graphics are a reason to get rid of it. I'd like to get an enormous UHD monitor to put on the wall across the room, and models get dropped from future OS X versions almost entirely based on graphics capabilities now.
I think this post misses some of the benefits of the new architecture that will eventually be realized.
It's the other way around though?
My advice, though, is: look at PCI passthrough support. For example, the 4670 has all the required tech, the 4670K doesn't, and the 4690K does again.
As soon as VMs get good enough for a semi-newb like me to work with, I'm moving to a Linux host (running on the i7's GPU) with a Windows VM (with its own PCI-passthrough discrete GPU).
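A useful first sanity check for that kind of setup is whether the discrete GPU sits in its own IOMMU group. Here's a minimal sketch using the standard /sys/kernel/iommu_groups sysfs layout (it assumes the IOMMU is enabled in firmware and on the kernel command line; it returns nothing otherwise):

```python
import os

# List IOMMU groups from the standard Linux sysfs layout. Returns an
# empty dict when the IOMMU is disabled or unsupported, which is the
# common failure mode for passthrough setups.
def iommu_groups(base="/sys/kernel/iommu_groups"):
    """Return {group_number: [PCI addresses]} for each IOMMU group."""
    groups = {}
    if not os.path.isdir(base):
        return groups
    for entry in sorted((e for e in os.listdir(base) if e.isdigit()), key=int):
        devices_dir = os.path.join(base, entry, "devices")
        try:
            groups[int(entry)] = sorted(os.listdir(devices_dir))
        except OSError:
            continue
    return groups

# A GPU that shares its group with unrelated devices is hard to pass
# through cleanly; ideally it appears in a group of its own.
for group, devices in iommu_groups().items():
    print(f"group {group}: {', '.join(devices)}")
```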
Honestly, even in the US I don't think it's worth it. I thought about upgrading from my Haswell i5 and the benchmarks are actually lower on a single core. Unless you truly need the Hyperthreading there's no point in upgrading.
Does that mean if you run the CPU too much, it will die quickly? Is there some low limit on time at full power? Electromigration problems, perhaps?
That would be my bet. They probably assumed that, based on typical workloads, the part would be asleep a certain percentage of the time. Which is generally a reasonable assumption for non-server parts.
More sleep means less activity and less heat (and heat accelerates electromigration).
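The heat/electromigration link mentioned above is commonly modeled with Black's equation for median time to failure. Here's a rough sketch; the activation energy is an illustrative constant, not a real Skylake parameter:

```python
import math

# Black's equation: MTTF = A * J**-n * exp(Ea / (k * T)).
# Holding current density J constant, the ratio of MTTFs at two
# temperatures depends only on the exponential term. Ea = 0.9 eV is
# an illustrative activation energy, not from any datasheet.
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def relative_mttf(temp_c, ref_temp_c=60.0, ea_ev=0.9):
    """Electromigration MTTF at temp_c relative to MTTF at ref_temp_c."""
    t = temp_c + 273.15
    t_ref = ref_temp_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t - 1.0 / t_ref))

# Running ~20 C hotter cuts the expected lifetime to a fraction:
print(round(relative_mttf(80.0), 2))
```

Under these assumed constants, a sustained 20 C temperature rise shrinks the expected electromigration lifetime several-fold, which is why sleep-state residency assumptions matter for longevity.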
Is this the limit for CPU speed and transistor size?
That's really disturbing if true. Poor power management has meant devices running warmer than they need to and shorter battery life, but that seems trivial compared to the hardware actually being damaged. I would consider it a flaw if a CPU did not last effectively forever at full load: older OSes, which lacked any sort of power management, basically kept the CPU in this state all the time, and there's plenty of old hardware still around and working to show that it isn't unrealistic.
It seems they're heavily sacrificing lifespan for performance, which is attractive to (most) users and also builds in some planned obsolescence, but it's sad that what was once considered to have an indefinite lifetime is now almost a consumable. To use a car analogy, this is like moving from a conservatively designed engine that lasts hundreds of thousands of miles but only produces 100HP to a top-fuel dragster engine that can produce thousands of HP but can't run at full power for even a minute without destroying itself.
I still do all my development at home on an i5-2600K desktop and an i5-2430 laptop; neither is noticeably slower than any of the new machines I've used (both have SSDs).
I'll probably run this desktop until it dies, as there is no compelling reason to upgrade.
The US applies export controls to radiation-hardened ICs, which has resulted in a dearth of rad-hard ICs. Nobody wants to run a silicon on sapphire fab any more.
Does anyone have a reasonable guesstimate as to how likely it is that this gets fixed? It sounds to me (from this thread) like there is a flaw in the design of the chip, and it won't be fixed so easily.
 - https://www.phoronix.com/forums/forum/phoronix/latest-phoron...
 - https://news.ycombinator.com/item?id=11492693
So yeah, I'm still buying XPS laptops with 5th gen chipsets because we've had issues with Skylake.
But keep in mind that Dell is releasing their drivers into upstream kernels, so if you are on 14.04, you should be using the LTS Enablement Stack in order to get the latest firmware with stability improvements (this is how I fixed the Broadcom null-pointer bug).
You don't have to use the LTS Enablement Stack if you installed with newer service pack .isos.
The 15" also has Thunderbolt which is a must if you want to have a reasonable docking solution for a Linux laptop.
Which, I really can't tell if it'll work or not. I really need to plug a 4K monitor into it.
From what I found out (if I remember correctly), Thunderbolt docks should work well with Linux kernel 3.19+. But take this with a grain of salt, since I ended up with the 13 DE and therefore only have a mDP port, so I couldn't try a Thunderbolt dock.
One gripe so far: since I updated the BIOS, the whole thing performs like a netbook-hype-era Atom CPU when there's no AC power.
It's a Skylake machine, and it's been working great. I'm using Debian Testing with the kernel pulled from Sid (necessary to get the wifi working).
Powertop output is:
Package | Core | CPU 0 CPU 2
| | C0 active 6.7% 3.9%
| | POLL 0.1% 0.2 ms 0.0% 0.0 ms
| | C1E-SKL 5.5% 0.2 ms 1.3% 0.2 ms
C2 (pc2) 9.7% | |
C3 (pc3) 2.2% | C3 (cc3) 2.5% | C3-SKL 3.2% 0.3 ms 0.4% 0.2 ms
C6 (pc6) 4.1% | C6 (cc6) 7.0% | C6-SKL 8.0% 0.3 ms 8.4% 0.7 ms
C7 (pc7) 0.0% | C7 (cc7) 28.7% | C7s-SKL 0.0% 0.1 ms 0.0% 0.7 ms
C8 (pc8) 12.1% | | C8-SKL 29.4% 1.9 ms 60.5% 2.3 ms
C9 (pc9) 0.0% | | C9-SKL 0.0% 0.0 ms 0.0% 1.3 ms
C10 (pc10) 0.0% | | C10-SKL 4.3% 3.6 ms 14.6% 4.1 ms
| Core | CPU 1 CPU 3
| | C0 active 5.2% 4.4%
| | POLL 0.0% 0.0 ms 0.0% 0.2 ms
| | C1E-SKL 1.5% 0.2 ms 1.5% 0.3 ms
| C3 (cc3) 0.6% | C3-SKL 0.7% 0.2 ms 0.7% 0.2 ms
| C6 (cc6) 14.7% | C6-SKL 10.8% 0.7 ms 13.8% 0.8 ms
| C7 (cc7) 52.0% | C7s-SKL 0.1% 1.1 ms 0.0% 0.4 ms
| | C8-SKL 59.6% 2.1 ms 59.8% 1.8 ms
| | C9-SKL 0.0% 0.0 ms 0.0% 0.0 ms
| | C10-SKL 9.7% 4.4 ms 8.4% 3.5 ms
| GPU |
| Powered On 13.2% |
| RC6 86.8% |
| RC6p 0.0% |
| RC6pp 0.0% |
I am currently running Ubuntu 16.04 4.4.0-8 kernel which works fine. The ubuntu backport of i915 from 4.6 to 4.4 which is present in 4.4.0-9 causes weird video issues. (upstream bug: https://bugs.freedesktop.org/show_bug.cgi?id=94593)
lspci -vvv: http://pastebin.com/raw/FUsMqPZz
cat /sys/devices/virtual/dmi/id/bios_version: 1.2.3
Happy to provide any other info as well, I love this machine to bits and would be happy if I could help other people enjoy it too.
sudo lspci -vvv : http://pastebin.com/79BfxQC4
Why not try Ubuntu 15.10 or the 16.04 beta? At the very least, run something released in the last 6 months if you have any reasonable expectation of things working.
Seems like all the Arch folks have this working also.
Camel, meet straw.
Why is this phrased as being an issue with Skylake, rather than an issue with Linux? That is, why not "Linux's power management on Skylake is dreadful and you shouldn't install it until it's fixed?"
Also, as someone who is running a custom-compiled Linux 4.4 on Skylake, what's the best way to check which idle state is being used? Idle stats from 'powertop' show the majority of idle time being spent in C8-SKL. Is this the same as the PC8 he's talking about?
Package | CPU 0
Powered On 0.0% | POLL 0.0% 0.0 ms
C1E-SKL 1.3% | C1E-SKL 0.2% 7.1 ms
C3-SKL 0.0% | C3-SKL 0.0% 0.2 ms
RC6pp 0.0% | C6-SKL 0.0% 0.0 ms
C7s-SKL 0.0% | C7s-SKL 0.0% 0.0 ms
C8-SKL 64.5% | C8-SKL 99.6% 9.5 ms
| CPU 1
| POLL 0.0% 0.0 ms
| C1E-SKL 0.0% 0.0 ms
| C3-SKL 0.0% 0.0 ms
| C6-SKL 0.0% 0.6 ms
| C7s-SKL 0.0% 0.0 ms
| C8-SKL 100.0% 84.8 ms
I had to rewrite it to get it under 80 characters.
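If you'd rather not wrangle powertop's layout at all, the same residency numbers are available from the kernel's cpuidle sysfs interface. A minimal sketch (this assumes the standard Linux cpuidle layout; it returns nothing on systems without cpuidle support):

```python
import os

# Read per-state idle residency from the standard Linux cpuidle sysfs
# interface (the same data powertop summarizes). 'time' holds the
# microseconds spent in each state since boot.
def cpuidle_residency(base="/sys/devices/system/cpu/cpu0/cpuidle"):
    """Return {state_name: residency_us}, e.g. {'C8-SKL': 12345}."""
    states = {}
    if not os.path.isdir(base):
        return states  # kernel without cpuidle, or not Linux
    for entry in sorted(os.listdir(base)):
        state_dir = os.path.join(base, entry)
        try:
            with open(os.path.join(state_dir, "name")) as f:
                name = f.read().strip()
            with open(os.path.join(state_dir, "time")) as f:
                states[name] = int(f.read().strip())
        except (OSError, ValueError):
            continue
    return states

for name, time_us in sorted(cpuidle_residency().items()):
    print(f"{name}: {time_us} us")
```

Note this shows per-core C-states like C8-SKL; package-level states (PC8 etc.) are a separate counter that powertop reads via MSRs.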
Because Linux is sometimes the only option. No other OS has the same options as far as containerization goes.
You might want to watch some of Bryan Cantrill's talks on FreeBSD Jails, Solaris Zones, and his criticisms of Docker and Linux containers in general when compared to them.
Still, Solaris Zones are freaking awesome; with them having a Linux personality now, I really need to find some time to mess with SmartOS more.
- Superior "container" support to Linux (more secure, better tested, better designed, not a hacky bolted-on thing like cgroups)
- Their source code: they are open source
Or did you actually just mean to say "Only Linux has Docker"?
2. Source access to the OS doesn't mean squat unless you can grok OS code. Also, how is BSD's open source different from Linux's?
So does Linux?
Besides, I could add 4GB of RAM and replace the SATA HDD with an SSD.
Good or bad? I have no idea what C8 should say, which is why I'm asking. Dell XPS 9550, Ubuntu 16.04.
Btw, I noticed that a new BIOS was posted yesterday. Unfortunately, it is not yet available in whatever place fwupdmgr is looking at. But at least BIOS updates now work from Linux on this laptop.
The only thing that's still causing hiccups is the combination of WiFi and Bluetooth on the same chip, which doesn't play nice if I, for example, stream Spotify to my Bluetooth speakers. However, that has nothing to do with Skylake.
Now wondering if it's connected.
When idling, power consumption is 8W, and powertop shows 30% C8 and 70% C10.
CPU/GPU power usage: (≈1W) http://geoff.greer.fm/photos/screenshots/Screen%20Shot%20201...
Total power usage: (2.9W) http://geoff.greer.fm/photos/screenshots/Screen%20Shot%20201...
I installed TLP and the system went down to 5.5W idle with normal programs open. On a fresh boot, as low as 4.9W.
Turning the screen off shaves 2.5W, and disabling the wireless card saves ~1W.
I suspect the SSD + HDD are making part of the difference.
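To put those watt numbers in perspective: runtime is just battery capacity divided by draw. A quick back-of-the-envelope sketch; the 56Wh capacity below is an assumed figure, not from this thread:

```python
# Rough battery-life estimate: capacity (Wh) / draw (W) = hours.
# The 56 Wh battery capacity is an assumption for illustration.
def runtime_hours(capacity_wh, draw_w):
    return capacity_wh / draw_w

# The idle-draw figures reported above, as runtimes:
for draw in (8.0, 5.5, 4.9):
    print(f"{draw:>4} W -> {runtime_hours(56.0, draw):.1f} h")
```

So shaving idle draw from 8W to ~5W is worth several hours of runtime, which is why these C-state residency bugs matter so much on laptops.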
And it's even more frustrating when people complain it's Linux's fault because it "works fine on Windows" (no, it doesn't, actually), while hardware vendors suck up to Microsoft and have an "oh yeah, whatever" attitude towards Linux users most of the time.
And then people ask why Linux runs better on "old hardware": because it sometimes takes years for all the shit to be ironed out. Ugh! So annoying.
Sorry for the useless rant.
Edit: Hardware-vendor shaming needs to be a thing. Most of the time these types of things go "quietly," and that's why nobody at the "management" level cares.