Intel Skylake Launch, with Architecture Analysis (anandtech.com)
102 points by MekaiGS on Sept 2, 2015 | 97 comments



The most interesting thing to me is that Intel apparently stopped publishing transistor counts starting with the 14nm node.

This is significant because as structure sizes become smaller, the restrictions on possible layouts (so-called DRCs, design rule constraints) become ever stricter. For example, you can't just place wires wherever you want; you have to take into account the relationship to other wires. With stricter rules, the end result may be that the effective scaling achieved is worse than what the structure size suggests, because the rules force a lower density of transistors.

So what are Intel hiding? Are they far ahead of the competition in terms of DRCs and don't want others to know how much, or are they struggling (like apparently everybody else) and want to hide a less-than-ideal effective scaling? Obviously, your guess is as good as mine, but it's certainly fascinating to watch the competition as Moore's law is coming to an end.


> Are they far ahead of the competition in terms of DRCs

I'm sure other companies like AMD can figure out the transistor count without consulting the Intel press release. If Intel isn't publishing a figure like total transistor count, it's for marketing purposes. All indicators point to the fact that Intel is struggling at the 14mm process.


*14nm


How has/hasn't CPU reliability changed over the decades? Are these generally as sturdy as a 486? A 286 machine was mentioned in a recent article here, will you be able to find a running Skylake machine in 2045? How does the process size affect this? What other factors?


> A 286 machine was mentioned in a recent article here, will you be able to find a running Skylake machine in 2045?

I've been thinking about this lately, and between the jihad against lead (https://en.wikipedia.org/wiki/Tin_pest) and these machines storing the BIOS etc. in flash memory, I'm beginning to doubt it.


I'm nowhere near an expert in this, but I've heard that electromigration in the really thin wires can cause wear-out over time. I imagine that this is probably near-negligible in the micrometer-sized interconnect layers of the 286 but a non-negligible issue at 14nm.
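
If I remember right, the usual rule of thumb is Black's equation for electromigration lifetime, MTTF ≈ A · J^(-n) · exp(Ea / (k·T)) with n typically around 2, so as wires shrink and the current density J climbs, the median time to failure falls off quickly.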


I'd say that it's decreased, because of how tiny the transistors and interconnects are getting. The CPUs still work today (obviously), but the margins are much thinner than they used to be because they don't design for more than a few years' lifespan.

The 486 was produced on a 1um (1000nm) to 600nm process and runs on 5V ±5% (4.75-5.25V, a ±250mV tolerance), with an absolute maximum of 6.5V.

The 14nm Skylake has a core voltage of ~1V, tolerance of a few tens of mV, and an absolute maximum of 1.52V.

Electromigration will likely become a significant source of IC failures in the future.


Due to Intel's integration of design and fab, they've been in a good position to make tradeoffs regarding design rules. Intel has been first to a new node pretty consistently, and they've generally had better drive currents at a given node than their competitors. But I'd be willing to bet all this causes lots of extra work for their design teams who, given how much money Intel can throw at the problem, are large and very skilled.


Is it reasonably feasible for a company like AMD to decap a processor and count?


The KGB compromised US intelligence officers over decades for less money than Intel spends on a single advertisement. If knowing some internal details is going to make a huge difference, it is absolutely doable for basically any corp to compromise another's secrets.


Legally? No. AMD wants Intel hardware nowhere near their R&D people.


I doubt looking and counting under a microscope is illegal. Also, not learning from a successful competitor's product would be very foolish.


Intel processors are great and really blow the competition away (on desktops). But that's kind of annoying - when there's no competition, Intel holds off on releasing high-end CPUs (w/ 6 cores) on the new architecture, and their prices are insane...


Hardware review sites are very good at distracting us and making us think that absolute performance is all that matters (hence your comment about "blowing competition away"). And, yes, Intel has the fastest processors. But what matters to most people is performance per dollar. And in fact AMD often beats Intel at this game, especially in the consumer market (sub-$300 range).

Example: I recently built a Linux video transcoding machine to reencode H.264 BluRay movies to smaller H.264 files [1]. A $110 AMD FX-8300 is able to reencode an average BluRay movie at around 47-50 fps, while you would need to spend more than twice that amount of money to match this performance with Intel (the performance of the FX-8300 at this task falls between a $200 Intel i5-4590 and a $250 Xeon E3-1231 v3).

[1] Using: avconv -i input.mkv -threads auto -s hd720 -c:v libx264 -c:a libmp3lame -sn -b:v 1400k -b:a 128k -ac 2 output.mkv


Both measures have their place. In clusters, e.g. rendering farms, perf/dollar or even perf/watt is paramount. But in a single desktop for a performance-hungry engineer, absolute performance is the number that counts.


Only if he doesn't care at all about money.


> Only if he doesn't care at all about money.

Power is not always the only cost. I work in an industry where you pay per core for software license fees far in excess of the cost of power. There, perf/core is what matters.


Well, even if money does matter, the tradeoff is often more complex than just the one-time hardware cost. If you are a software engineer whose time is expensive, and a slower workstation processor means you wait an extra 5 seconds for a compilation 100 times a day, then the time costs add up quickly.
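
(Rough numbers: 5 s × 100 builds is about 8 minutes a day, which works out to something like 30 hours a year per engineer.)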

I would imagine the same is true for e.g. server response latency when that latency correlates to user experience and thus growth/popularity of your service. People might not notice a few ms, but 100ms vs 300ms is definitely visible.


Compiling can be more expensive than 5s/compile, but you should still probably lean toward things like ccache/ramdisks/distcc instead of the latest processor.

I think the real value is new things like TSX/HLE.
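
For anyone curious what that looks like in code, here's a minimal sketch using the RTM intrinsics (assuming GCC or Clang with -mrtm; shared_counter, the function name, and the fallback lock are all hypothetical, and a production version would also need to check the fallback lock inside the transaction):

    #include <immintrin.h>

    /* Hypothetical shared state, normally protected by a fallback lock. */
    extern long shared_counter;
    extern void take_fallback_lock(void);
    extern void release_fallback_lock(void);

    void increment_counter(void)
    {
        unsigned status = _xbegin();      /* try to start a hardware transaction */
        if (status == _XBEGIN_STARTED) {
            shared_counter++;             /* executes transactionally */
            _xend();                      /* commit */
        } else {
            take_fallback_lock();         /* aborted: fall back to the real lock */
            shared_counter++;
            release_fallback_lock();
        }
    }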


Yup, all those things are useful, but they don't solve everything. I've worked on projects where a single change to a header file would lead to a 20-minute rebuild adventure and where individual .cc files would take 10 minutes to compile. Caching and distcc build farms help somewhat but there's still, fundamentally, the critical path of a single file's (re)compilation, and only a faster CPU will help with that.

(And if you say "thin client, distcc, big build farm", then you just need the latest CPUs in the build farm instead.)

Of course this is as much an argument for a real module system in C/C++ as anything else...

(And, I completely agree that TSX and other architectural stuff is totally awesome, too. Perhaps that'll drive adoption more than incremental perf, but I'm just arguing perf is still often really important.)


When the workplace is buying - of course not. I have never ever seen a developer workstation with an AMD CPU here in the Baltic countries.


This is a real issue. At my company, we always buy the $300-$400 CPUs but they are barely getting faster with each generation. The last time there was a big jump was with Nehalem.


The performance jumps just got more specialized. For instance, the introduction of AVX instructions made a huge performance bump for some areas, especially scientific computing, although you needed crazy cooling to run a program using AVX extensively for more than a few seconds. I can remember getting around an 80% performance jump with one code I used compiled with vs. without AVX. x86 hardware virtualization extensions were another enormous performance boost. Hardware transactional memory was supposed to be a big (order of magnitude) boost to some things on Haswell but they ended up having to disable it.
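
To give a feel for it, here's a minimal sketch of the kind of loop AVX helps with (assuming a compiler with immintrin.h and -mavx; the function name is made up and n is assumed to be a multiple of 8 for brevity):

    #include <immintrin.h>
    #include <stddef.h>

    /* c[i] = a[i] * b[i]; n assumed to be a multiple of 8. */
    void multiply_avx(const float *a, const float *b, float *c, size_t n)
    {
        for (size_t i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);              /* load 8 floats */
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(c + i, _mm256_mul_ps(va, vb));  /* 8 multiplies per instruction */
        }
    }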

Through all this time, though, I haven't noticed a massive overall performance boost in a long time. If all I wanted to do was browse the web and read email I'd probably still be using my Core 2 Duo MBP from forever ago. The speed boosts have been piecemeal: one generation improves one thing, the next something else. This is different from Ye Olden Days - I can remember upgrading from a 386 to a Pentium II to a Pentium 4, and each upgrade was so dramatic that it completely redefined what I could do with my computer. The last real performance jump I can remember was the Core Duo upgrade that brought real multicore.

I do kinda miss my beloved old SPARCstation though. Makes me wonder what computing looks like in the alternative dimension where SPARC or one of the other architectures had taken over the PC market like Intel ultimately did.


There have been lots of other performance improvements, but I think they're less visible than other stuff. The CRC instruction that got added in SSE4.2 seems to get fairly wide use in hashes and stuff like btrfs. I think rdrand, aes, and clwb are interesting ones as well. I guess power consumption is kind of another visible gain.
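
For reference, that instruction computes CRC-32C (the Castagnoli polynomial, not the zlib CRC). A minimal byte-at-a-time sketch, assuming a CPU with SSE4.2 and compiling with -msse4.2 (the function name is made up):

    #include <nmmintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* CRC-32C over a buffer, one byte per step for simplicity. */
    uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = ~0u;                 /* conventional initial value */
        for (size_t i = 0; i < len; i++)
            crc = _mm_crc32_u8(crc, p[i]);  /* hardware CRC32C step */
        return ~crc;                        /* conventional final inversion */
    }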

I think the biggest visible change recently, though, at least as a software engineer, is the continuously falling cost and increasing availability of memory. I can literally store an entire project in memory and not have to worry about any network or disk bottleneck.


Are many people often CPU capped anyway? How would you detect a performance increase unless you're over-utilising the machine?

For example, my work machine has an i7, 8GB of RAM, and an SSD. I can cap IO and RAM, but I have yet to see the i7 max out, heck even over half the cores. I guess what I am saying is: is the CPU really a bottleneck most people encounter day to day anymore?


Confused. Faster CPU means lower latency for all operations. Doesn't have to be maxed-out.


> Faster CPU means lower latency for all operations.

Not really. Per-core, CPUs have become slower and slower. It became about throughput and not raw per-op speed a long time ago. By that measure you'd want to install a Core Duo in your machine.


Depends entirely on what 'it' is. Rendering in parallel - sure. But most apps are still single-threaded.


On the bright side, they're typically getting far more power efficient, and cooling them becomes easier.


For developer boxes, the price of power and cooling do not matter compared to developer productivity, and developer productivity can be in some cases hugely impacted by the speed of their PC (during builds, simulations or renders, etc.)


Intel is still incentivized to provide much higher performance CPUs to the market because they compete with themselves - they want to encourage upgrades.

What you're seeing here is beginning of the end of Moore's Law. Dennard scaling is dead and that will limit performance going forward.

The implications are huge for the computer industry. As Bob Colwell liked to say - imagine if Moore's law ended in the year 2000. We'd never have smartphones.


not sure what you mean by "desktops" - their chips are the fastest CPUs you can get for notebooks, workstations and for many (most?) servers.


> not sure what you mean by "desktops"

I would assume he means "non-mobile". I.e. things not ARM.


I meant that I have no clue about, or interest in, the notebook situation, and a workstation is a desktop PC. Same for servers - I have no idea what sysadmins use, nor do I care. Monitoring shows a CPU percentage; when it reaches 80%, we tell them to upgrade. :)


Call me crazy, but I want a processor that's just that - a processor. No integrated GPU that's just sitting there taking chip resources (die space, power leakage, etc) that could be used for better things (designing things for lower leakage current, async sections of the chip, etc).

Among other things, discrete GPUs blow integrated GPUs out of the water, and the time-to-obsolescence of a GPU isn't anywhere near that of a CPU. It also makes sense from a cooling perspective - it's a whole lot easier to cool two chips in separate areas than one larger chip, generally speaking.


For desktops, sure. But the desktop form factor is dying.

Where it matters is mobile. Even if you have a dedicated graphics processor, it makes sense to have integrated and then to have it auto-switch as needed. This saves on electrical power, reduces heat, and may extend the life of your mobile devices (since heat is damaging).

Ultimately, if you want to force-on dedicated 24/7 then normally you CAN, but integrated simply gives you more options if you wish to extend the battery life of your device significantly.


"Dying" is a pet peeve of mine. Usually it's not true.

The desktop form factor is declining significantly in popularity in the mainstream market in favor of laptops and mobile devices. The reason is obvious: laptops and mobiles are portable, and the average user (and even many pro users) do not need that much power in their local device.

It's still very popular in the gamer and professional workstation market, and likely will be for a long time. It's just hard to cram the processing power that gamers, hard-core developers, CAD and simulation users, etc. need into something as small as a laptop without creating what's comically referred to as a "ball burner" or "weenie roaster." I'm sure there's a female equivalent expression but it's even less polite. :P

Intel has many-core options for these machines, but they tend to lag a little behind. If you have deeper pockets and really want power you can always put a high-end GPU in a server board with 2-4 Xeons and put it in a tower case. Now you have a data center node with a 4K monitor on it.


>It's still very popular in the gamer and professional workstation market, and likely will be for a long time. It's just hard to cram the processing power that gamers, hard-core developers, CAD and simulation users, etc. need into something as small as a laptop without creating what's comically referred to as a "ball burner" or "weenie roaster."

Gaming is a niche market mostly for younger ages (the number of people with "game rigs" drops rapidly after a certain age), and most hard-core developers use laptops -- the laptops you see at each and every dev conference are their main boxes, with or without external monitors.

As for CAD and simulation that's too niche to even mention. Of course some people will always have a need for a more powerful form factor. But if CAD users are part of the counter-argument for "it's not dying", then it very much is.


I'm not sure you're right, or at least as of late Supermicro has been selling X[9/10]SAE workstation motherboards, which will take a Xeon or consumer chip and ECC/non-ECC memory. No IPMI, lots of the sorts of slots we want, and onboard sound. The first X9SAE model, with which this message is being drafted, would appear to have been successful enough they repeated it in the next generation:

http://www.supermicro.com/products/motherboard/Xeon/C216/X9S...

http://www.supermicro.com/products/motherboard/Xeon/C220/X10...

They didn't make such a class of boards in the '8' generation.

Not dead yet!


Throughout my career I have enjoyed a desktop as my primary workstation except for an unpleasant 6 month period where I had only a MacBook Pro. I wouldn't be so quick to assume that the laptop is also the primary workstation.


This highlights one of my pet peeves with the integrated versus discrete debate.

The reason typical GPUs have high power consumption is that they are generally designed for the desktop market, where they can get away with it.

But you can have low-power discrete GPUs also.

For that matter, there is no reason why you cannot have a low-power discrete GPU, and a high-power discrete GPU.

And again, it's a whole lot easier to dissipate heat being produced in multiple places than all in one hotspot.


Even for a new desktop I want a small, efficient and noiseless form factor.


See, I'm not so sure about that. I like the idea of more integrated packages, especially because it makes better quality graphics available to more users. This enables better experiences for more users who buy lower-end machines.

And anyway, people who need more power know the difference between discrete and integrated chips.

I think this is about pushing the overall baseline experience of users up.


You're not crazy, and I kind of wish there was an option to not include the GPU as well. But the fact of the matter is, it's very difficult to have several very distinct manufacturing methods for this kind of silicon.

Intel has had a lot of problems shrinking the die size recently, so it must be cheaper to just give every processor a GPU that isn't used than to produce several different dies.

I guess it's just because for most end user needs, the integrated GPU's are becoming powerful enough.


Intel has the Xeon/E/EX/EP lines for you.


... and HEDT (i.e. Haswell-E 6/8-core CPUs). The only problem is that the latest HEDT platform is always at least 1 generation behind what the mainstream gets.


What you surrender in process node, you get back (in spades) in increased core count, PCIe lanes, and cache size.


At an absurd price compared to their desktop lines.

And also generally they come out a while after the equivalents with integrated graphics.


This generation is different, but the past 4 generations, the Xeon E3s have launched pretty much simultaneously with their i5/i7 siblings, with the 12x0 versions not having integrated graphics, and usually for a bit less than their equivalent with integrated graphics. You give up on overclockability, but if you're concerned about tdp, you're probably not going to be overclocking anyways.


> You give up on overclockability

So let's see. Xeons are what I want now. Except for absurd prices. And that they aren't available. And that they don't overclock. And... And...

As I said, I wish there was something that was actually vaguely what I wanted (a desktop processor that's <=4 cores, and focused on single-core CPU performance). The Xeon isn't it.

Also:

> if you're concerned about tdp, you're probably not going to be overclocking anyways.

In a desktop overclocking is a large chunk of the reason why one would be concerned about TDP!


> As I said, I wish there was something that was actually vaguely what I wanted (a desktop processor that's <=4 cores, and focused on single-core CPU performance). The Xeon isn't it.

What do you think about the i3-4370? It seems this processor has very good single-thread performance and is quite affordable as well.


That's the processor I am looking at, as a matter of fact. That is, assuming that the next gen won't have anything better for my needs.


You can buy many models of Haswell and newer without the GPU component, such as the K parts, which (sometimes?) omit it.


Unfortunately, they tend to be released after the other versions. And also, they seem to often be "other parts with the iGPU disabled".


I'm glad they include a passable GPU. My workstation is higher end (maxed-out memory, 6 storage devices, higher-end i7, 3 monitors) and is mostly running developer tools, browsers, editors, etc. as well as some games (e.g. Civ V). The integrated GPU is all I need for that.

It has the advantage of being very quiet and lower power consumption. I did briefly try an Nvidia card while getting 3 monitors working which was far too noisy (whiny).


From what I can gather, Skylake really only has one major use case -> affordable HTPC.


Whaaaat? Laptops with integrated graphics are the vast, vast majority of personal computer sales. That's its major use case. The one that almost everyone with a laptop has.


Major bias here. That being said, among me and my peers I don't know of anyone who has a (non-apple) laptop without a dedicated graphics chip.


What about laptops, tablets, smartphones, and "internet of things" devices? It is definitely a mobile-first architecture, no doubt about it, but you're selling it short.


It seems that all Skylake CPUs suitable for new MacBook Pros won't ship until early 2016. I was hoping for a new MBP this fall, but it seems that my old MacBook Air has to do its job a little longer.


I'm really looking forward to a benchmark of the GT4 graphics; I'm a game developer, and it might just be the first time I can work on a laptop with integrated graphics.

Of course, it'll still be a rare and power hungry configuration, so it's not like I'll suddenly be using a Macbook, but it's a nice progression.


> I'm a game developer, and it might just be the first time I can work on a laptop with integrated graphics.

Please don't :). New iGPUs may be catching up to old dGPUs, but meanwhile new dGPUs are just skyrocketing in power far beyond them.

I made this chart to easily see relative performance of most common notebook GPUs:

http://alteredqualia.com/texts/notebooks/nvidia-gpus.png

Or if you are interested in both desktop and notebook GPUs relative performances:

http://alteredqualia.com/tools/gpus/

What Intel does is great for raising the minimum specs you can expect in notebook graphics, but you can do much better than that; even cheap modern dGPUs wipe out the best iGPUs. You can extrapolate Skylake Iris Pro from Intel marketing claims vs. older Iris Pros: it's going to be better, but not Earth-shatteringly better.

And this gap is just going to get wider in 2016, with HBM2 and 16 nm node sizes finally coming to GPUs (both to Nvidia with Pascal and AMD with Arctic Islands).


Even if you're developing a high-end crazy graphics game in UT4 on your desktop with SLIed GTX 980s, it's still useful to be able to work on it from a laptop at more than 2 FPS.


That's exactly my situation. Going to 720p requires half the fill rate of 1080p. Turn off some postprocessing and antialiasing and you might get double the performance again.
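
(1920 × 1080 is about 2.07 million pixels versus about 0.92 million at 1280 × 720, so roughly 2.25x fewer pixels to shade per frame.)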

So if you're at 1/4 peak performance, you're still running a smooth game albeit not quite at full detail.

If you're at 1/16 peak performance... things aren't looking too good by the time you get your game smooth.


And that was exactly my point :) In case it wasn't clear from the context, I was suggesting getting a notebook with a discrete GPU instead of integrated GPU (that's what iGPU/dGPU stand for, not a desktop GPU).

If you are a game developer / graphics programmer / game player / designer / photo or video editor / VR enthusiast / anybody who cares about graphics, I would advise getting at least a GTX 960M (if you have budget or size constraints, this should get you roughly to PS4/Xbox One levels, so relatively safe for this gen), but preferably a GTX 970M (best price/power and you can still get a slim notebook).

GTX 970M will get you about average GPU performance overall, including desktops (these days I get ~50-percentile rank in 3dmark firestrike with my GTX 970M notebook, so average GPU in September 2015 is at least as fast or faster, at least for that cohort of people).


>Please don't :). New iGPUs may be catching up to old dGPUs, but meanwhile new dGPUs are just skyrocketing in power far beyond them.

So? When did the idea behind developing games become having the most eye-candy to take advantage of new dGPUs?


On Linux Intel has been doing pretty well making high performance (meaning as good as Windows) open source drivers for their integrated graphics which makes the GT4 sort of appealing to me.


I'm still hoping for external GPUs to catch on.

A decent CPU and adequate graphics is usually all I need in a laptop while traveling, but it would be nice to be able to plug in a more beefy GPU at home to play games and drive more monitors at higher resolutions.


Well, Thunderbolt 3 now explicitly supports them (previously Intel wouldn't certify for what appeared to be non-technical reasons), so it might actually catch on.

I don't expect it to be cheap; but even if the chassis is a few hundred dollars, if you can reuse them for three generations of GPU and laptop, it's probably worth it.

http://images.anandtech.com/doci/9331/TB19_575px.png http://www.anandtech.com/show/9331/intel-announces-thunderbo...


So far they have largely been built into monitors (by Sony), which gives them a different obsolescence profile.


According to some benchmarks, even the i5-6500's GPU is as good as my dedicated (but old) Radeon HD 6670 - which would put it in the "good enough to run Crysis 2 with some compromises" territory.

Even this GPU would make for a great value proposition in a midrange MacBook Pro.


What do you expect from Skylake based MBP that can't be done with the current MBP?


Well, the current 15 inch MBP is running 2 years old Haswell processors, so it's simply a bad buy when Skylake MBPs are coming out in a few months.

Additionally, Skylake brings DDR4 memory, Thunderbolt 3 (over the USB Type-C connector), and better performance/battery life. There's also possible wireless charging support, and the Skylake integrated GPU possibly has double the speed of the Haswell Iris chips.


- USB Type-C

- The ability to run external 5K monitors.

Since I usually keep a laptop for at least four years, it would be unwise for me to now buy a laptop that can't do both of those technologies which I really want to have.


I have a Haswell MBP. I don't remember a time when I had to plug a USB device into it. The only device I can think of is a flash drive, but the last time I used one was more than 2 years ago.

Which USB devices do you use with your laptop?

Regarding 5k displays, currently I'm still not ready to buy even a 4k version, mostly because I'm waiting for higher frequencies (via G-Sync or FreeSync, both of which require a discrete card, and it's not at all clear if those will be supported in the next MBP). 5k monitors currently cost 4 times as much as 4k. Really hard to justify buying one anytime soon.

If Apple does not release Skylake MBP in October, I would recommend buying the current one, rather than suffering the horrible resolution of MBA for another 6 months.


32GB of RAM, 5K monitors, Thunderbolt/USB 3, etc etc etc...


Intel SGX is possibly huge. With a trusted enclave that's verifiable, you could do things like verify that a server is actually running certain code (like a bitcoin mixer).

Of course it also allows real DRM, where a remote server can verify you're running unmodified code.

But how does the key management work?

My personal interest is the continuing quest for a machine strong enough to develop on, but that doesn't warm my hands at all. MacBooks are insanely hot (how can Apple even pretend to be about quality with those designs??), the X250 ThinkPad is "alright" if I aggressively throttle the processor. It seems like the perf improvements are effectively dead, so maybe we'll see cool laptops. Though OEMs seem to screw up as much as possible, so who knows...


> Of course it also allows real DRM

I think that's the scariest part. 16 years ago people strongly opposed the processor serial number in the P3 (https://news.ycombinator.com/item?id=10106870), something that seems almost harmless compared to what (anti-user) "security" features are in today's hardware. It's not only DRM, but no doubt other ways of taking control away from the user will be found for it.

With performance not being all that much better, I'm personally going to stay on older, "more free" hardware for a while.


I would think the real target market is cloud computing though.


Is there any information about what processors support SGX?


Am I alone in wanting a small (NUC or (thin?) mITX) board with a Core M on it? :-(


No! I still want to try out a Core M with a beefy cooling solution. You could probably beat mobile i3/i5's with it while having a very low idle power consumption...


I'm not sure if this would suit your use case, but there's a Skylake Core M compute stick coming out:

http://arstechnica.com/gadgets/2015/09/intel-announces-a-bee...


I wonder why there were so few laptops with iX-5... processors on the market. Most laptops with 5th generation Cores seem to have low-power U processors, but even these are hard to find from some manufacturers. Did the full-power versions arrive too late, so manufacturers just chose to wait for Skylake?


Pretty much. Apple is a good example; the Macbook Pro 15 inch skipped over the 5xxx series, and the next update should be a Skylake chip.


I've seen a lot of references to skylake "supporting wireless charging" but with no explanation on what that means. I read this article hoping for an answer but it was not mentioned.

Can someone enlighten me? What does the chipset and CPU have to do with the power supply method? I'm assuming they're not including an RF power antenna on-die, so is this just code for "we got peak power consumption below the threshold practical with today's wireless power transmission devices"?


Intel doesn't market chips; they market "platforms" that are bundles of chips that they want you to buy together. Centrino was the most infamous one and maybe now Intel is trying to bundle some wireless charging chips for tablets or something.


I'm honestly still trying to figure out the implications of eDRAM. It seems like the main benefit is offloading traffic from the memory bus, particularly now that it's coherent.

Am I missing something?


It's there to increase the effective memory bandwidth of the system. GPUs have very high memory bandwidth requirements, so the models with the high-end GPU would be limited by the memory subsystem. Using the extra cache, the number of fetches from main memory can be reduced, increasing the effective memory bandwidth available.
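
For a sense of scale: dual-channel DDR3-1600, for example, tops out around 2 × 8 bytes × 1600 MT/s ≈ 25.6 GB/s, shared between the CPU cores and the GPU, so an on-package eDRAM cache that absorbs a good chunk of the GPU's fetches buys a lot of headroom.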


That makes sense. Using it as a victim cache seemed kind of odd, but I think it makes more sense now. It seems like it's a still evolving technology. I wonder if there are corresponding performance counters.


"4.5W ultra-mobile Core M" - does it means if someone uses it with a typical 2250 mAh cell phone battery, it only last half hour?

That's a long way to catch up with ARM SoCs, right?


That's the TDP of the chip, or the maximum amount of heat it can generate, not necessarily the power draw. Also, you were assuming it was running at full load 100% of the time. I don't have a source, but for comparison I believe Apple's AX chips have a TDP of ~2-3 W.
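
(Rough numbers, assuming a 3.7 V cell: 2250 mAh ≈ 8.3 Wh, so even a sustained 4.5 W draw would give about 1.8 hours, not half an hour, and a mostly idle chip draws far less than its TDP.)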


Sounds somewhat higher than an ARM. But take any phone and run it under a worst-case load, say a 3D game that supports multiplayer and GPS. When you run the display, CPU, GPU, GPS and WAN flat out, many phones don't last very long.


I'm pretty pissed. Skylake-H chips will only have HD 530 graphics, which is slower than the current version. And still no HDMI 2.0. Just laughable.


HDMI 2.0 support is supplied by the motherboard, and is not a required feature. Skylake's Alpine Ridge can drive two 4K 60Hz displays, and supports HDMI 2.0 operation.
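
(Back-of-envelope: 3840 × 2160 × 60 Hz × 24 bpp is roughly 12 Gbit/s of raw pixel data, more than HDMI 1.4's ~8 Gbit/s of usable video bandwidth, which is why HDMI 2.0's 18 Gbit/s link is needed for 4K at 60Hz over HDMI.)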


Alpine Ridge is an extra cost option that won't be included in the vast majority of laptops.


Then don't buy the "vast majority of laptops" but one that includes it.



