By quiet I mean "I can't hear it at night" not "I can't hear it over the music".
Edit: Also, I think data centers these days are limited in density by power delivery and cooling. Even a 25W delta adds up when you're talking about thousands of servers.
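A quick back-of-envelope sketch of that point (all numbers below are illustrative assumptions of mine, not measured figures):

    # What a 25W per-server delta costs across a fleet.
    delta_w = 25            # extra watts per server
    servers = 10_000        # "thousands of servers"
    pue = 1.5               # assumed PUE: cooling/overhead on top of IT load
    price_per_kwh = 0.10    # assumed USD per kWh

    total_kw = delta_w * servers * pue / 1000
    kwh_year = total_kw * 24 * 365
    print(f"{total_kw:.0f} kW extra draw, "
          f"~${kwh_year * price_per_kwh:,.0f}/year at ${price_per_kwh}/kWh")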
I expect the "140W" Intel part will have power consumption closer to Ryzen's in non-AVX loads, and will outperform Ryzen in AVX loads while using more power.
Of course, performance scales not too far from linearly here, so going to AVX is a large net performance win, as you say. It's just that I'd assume equal power consumption on non-AVX loads.
I.e., LAPACK/BLAS benchmarks are just really big linear algebra matrix problems, so obviously your pre-fetch and branch prediction performance will be significantly better since you aren't dealing with interrupts, locatedb, or Windows DCOM events firing off in the background. You have a huge set of matrices with a very predictable set of branches, fetches, and decodes, so obviously your CPU can optimize for that load, you're just paying for it in latency on the back-end (RAM fetches are the new disk swap ;)).
All those benchmarks (i.e. your standard LU matrix decomposition, which previously was the basis of the LAPACK benchmarks, though things might have changed in the ~10 years since I've really looked) aren't CPU-bound anymore, so of course instructions-per-cycle on the CPU isn't where you'll bottleneck (and hasn't been since "let's avoid floating-point operations and just use static look-ups instead, since we don't want the 10x cost of the FDIVP instruction!"). Your processor can very easily anticipate where in that sparse matrix your next data fetch is going to be. It's the cost of the RAM fetch going along that copper trace that is going to be your bottleneck on any heavy numerical computation.
The power consumption on your CPU might drop a nominal amount, which is great for those marketing white papers, but for a numerically heavy load, you're paying just as much (in total power consumption per 4U in the data center, total heat generation/dissipation within the case, and total processing time) on the back-end for those fetches.
 https://i.stack.imgur.com/a7jWu.png (I normally cite academic references, but this is 'good enough' to convey my point, I hope).
That's incorrect: DGEMM and most BLAS3 operations are way above most if not all processor uarchs' arithmetic intensity thresholds. Broadwell CPUs are at 10 Flops/byte while e.g. DGEMM is 32 Flops/byte, so that's definitely FLOP (compute) bound and not memory bound.
> Your processor can very easily anticipate from where in that sparse-matrix your next data fetch is going to be. It's the cost of that RAM fetch going along that copper trace which is going to be where you're going to bottleneck on any heavy numerical computation.
You're mixing things up, it seems! LAPACK/BLAS is dense matrix, not sparse, so you've switched topics. Sparse matrix ops are generally >1 Flops/byte (see ), so those are indeed memory bound.
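For anyone wanting to check the arithmetic, here's the roofline logic in a few lines. The peak numbers are placeholders I picked to match the ~10 flops/byte balance quoted above, not actual Broadwell specs, and the naive traffic model charges one read of A and B and one write of C:

    # A kernel is memory bound when its arithmetic intensity (flops per byte
    # of DRAM traffic) falls below machine balance = peak flops / peak BW.
    peak_flops = 500e9   # assumed ~500 GFLOP/s per socket
    peak_bw    = 50e9    # assumed ~50 GB/s DRAM bandwidth
    machine_balance = peak_flops / peak_bw   # ~10 flops/byte, as quoted above

    def dgemm_intensity(n):
        flops = 2 * n**3                 # n^3 multiply-adds
        bytes_moved = 3 * n * n * 8      # read A and B, write C once (ideal cache)
        return flops / bytes_moved       # = n / 12

    for n in (64, 384, 4096):
        ai = dgemm_intensity(n)
        print(f"n={n:5d}: {ai:7.1f} flops/byte ->",
              "compute bound" if ai > machine_balance else "memory bound")

Even under this idealized model, any reasonably sized DGEMM lands well above the machine balance, which is the parent's point.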
For a 95W TDP you might need to spend a little more (like on those big Noctua heatsinks with 14cm fans), but silent air-cooling is definitely possible. (Water is always trickier, AIO kits will typically be a little noisier due to the pump).
In a smaller HTPC it can make sense to go passive, but for an 8-core desktop workstation I'm not convinced it's worth the trouble. Your PSU and GPU will likely have fans, and with a PCIe SSD it's better to have some airflow.
Passive cooling isn't for everyone, but the remarkable efficiency of modern components has made it a perfectly feasible option even for high performance workstations. You can choose whatever level of noise you prefer, from extremely quiet to dead silent.
Fans, in comparison, tend to be a steady thrum, and thus fade into the background in a way that HDD chatter doesn't.
This is a good site to keep an eye on if you're interested in fanless computers.
Surprisingly poor. A large conventional heatsink like the Noctua NH-D15 has far greater surface area. The densely packed fins of a conventional heatsink perform poorly if you're relying on convection, but they will dissipate a lot of heat with even modest airflow.
I thought data centers are going for ARM CPUs now, as shoveling data is not all that CPU intensive?
I second this. I've been using the Hyper 212+ for 5 years and it's been great. The 212EVO looks just as good. Only drawback is the height (not an issue with larger cases).
Funny aside: I actually replaced the fan on my 212+ thinking the one it shipped with had failed. Turns out I'm an idiot and either didn't install the rubber dampeners correctly or the mounts came loose during shipping (can't remember which). The replacement's been solid for 5 years and although the original was relegated to spare (nothing wrong with it), I doubt I'll ever need it before I retire this build.
DEFINITELY better than the crummy cooler my Phenom II shipped with. That thing was noisy and ineffective. Next upgrade is definitely including the newer series 212 as well.
Anyone building their own silent PC should hang out a bit on silentpcreview before they buy anything.
A Noctua NH-D15 or a be quiet! Dark Rock Pro 3 will match or exceed the performance of a 240mm AIO, with considerably lower noise in a decent case.
There aren't a lot of applications where water cooling really makes sense. If you're cramming a very high performance rig into a tiny ITX case, you might be better off with an AIO. Competition-grade overclocking rigs will naturally benefit from a custom open-loop watercooling system. Otherwise, modern components just don't put out enough heat to stress a good air cooler.
The performance of water cooling can be deceptive, because of the relatively high thermal mass of the water in the loop. If you fire up Aida64 on an air cooler, the CPU temperature will climb fairly rapidly before levelling off. A water-cooled CPU will see a much more gradual increase in temperature, taking up to an hour to reach a maximum. Water coolers look really impressive if you only stress them for a short benchmark run, but they're far less impressive under sustained load.
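A toy lumped-capacitance model makes the point. The numbers below are my own rough assumptions, not measurements of any particular cooler; with the same thermal resistance R both setups reach the same steady-state temperature, the water loop just has a much larger time constant (tau = R*C):

    # dT/dt = (P - (T - T_amb)/R) / C, integrated with simple Euler steps.
    P, R, T_amb = 150.0, 0.3, 25.0       # watts, K/W, deg C (assumed values)
    C_air   = 400.0      # J/K: heatpipes + fin stack of a big air cooler
    C_water = 3000.0     # J/K: roughly 0.7 L of water alone

    for name, C in (("air cooler", C_air), ("water loop", C_water)):
        T, dt, t = T_amb, 1.0, 0
        T_final = T_amb + P * R
        while T < T_amb + 0.95 * P * R:  # time to reach 95% of the rise
            T += (P - (T - T_amb) / R) / C * dt
            t += dt
        print(f"{name}: steady state {T_final:.0f} C, "
              f"~{t/60:.0f} min to reach 95% of it")

With these assumptions the air cooler settles in about 6 minutes while the water loop takes about 45, which is consistent with the "up to an hour" observation above.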
This, of course, leads to a tremendous advantage in server CPUs -- if you have capable cores at a lower wattage, you can add more of them. Hence the 14-core E5-2690V4 @ 135W at 2.6GHz vs the 8-core E5-2690 @ 135W at 2.9GHz. So in just four generations from Sandy Bridge to Broadwell: no TDP change, roughly the same or a bit better single-thread execution, but almost double the core count. If you are willing to drop your base clock a tiny bit further and push your TDP higher, for perhaps 10% less single-thread performance, you can get a hulk of a 24-core E7-8890V4 at 165W. And that's where the big profit is -- currently.
Now some unfounded nonsense: what if Intel is not pulling these crazy prices out of their sorry behind, but there's in fact some reality behind them? It just bothers me that the price of the 24-core chip is so close to 6*6 times that of a 4-core chip. It could be a coincidence.
Consumer loads are mostly limited by single threaded performance, though software is increasingly written for more cores. Best choice for a consumer used to be dual core, now it's probably quad core with thermal limited boosting on less parallel workloads.
It's only prosumers doing lots of transcoding or rendering, or CPU intensive VMs, that typically benefit from increasing cores above 4.
Whereas the server space prizes throughput and much of its workload is trivially parallelizable. On the server, higher single core speed mostly just decreases latency for small tasks; if you're happy with the latency, you can get more cores working on shared memory and potentially get big wins in perf.
There are diminishing returns though, scaling up boxes is expensive and unless it's being forced by software licensing or architectural models, things like Hadoop and Spark for spreading the load across a whole cluster are increasingly attractive. This helps solve the I/O throughput problem too.
The big difference is in the E5/E7 chips.
E.g. with my last laptop change, I went from a 35W TDP i7-M to a 15W TDP i7-U, while still gaining some performance.
Laptops with 35W/45W CPUs still exist, but they use e.g. the quadcore i7-6700HQ.
I'd hardly call that old.
And that's not an apples to apples comparison. The 6700HQ is a quad core part, whereas the 7500U is dual core (both of them feature hyperthreading, so 4 and 8 threads respectively).
Given equivalent software, the 6700HQ will outperform the 7500U in multithreaded workloads.
If Intel needs 20% more cores to match the MT perf on the high end then it really isn't an advantage.
Point 2 is bullshit; people complain about Nvidia's drivers all the time.
As a developer, the amount of inconsistencies in AMD drivers is baffling.
Conversely, though I admire Intel for its central place in advancing computing itself, I cannot love the company because it has shamelessly monetized its monopoly over the past 7 years or so with vast overpricing on higher end lines. I for one am definitely doing a Ryzen 7 build as soon as I can get a chip, and the same goes for Vega. So happy to see AMD back in the game.
I also like AMD because they've been at least a little bit more "open source friendly" than some of their competitors (cough Nvidia cough). Which is not to say that they couldn't do more, of course.
In the last few years, more often than not AMD hardware leaks have shown potential that in the end wasn't met. Things look good for Ryzen though, maybe they can finally make 6-8 Core CPUs mainstream.
It did badly compared to what their marketing team was telling users.
The past two or so architectures underperformed what the marketing team had in the slides.
AMD having a temporary edge over Intel has happened before. Remember when AMD had the Athlon in 1999 and made a huge $1 billion in profit?
Intel's response has always been the same. They cut their profit margins and start selling chips cheaper. They undercut AMD with volume and price and suddenly AMD is in the doghouse again struggling. AMD's best efforts can cut into Intel's profits, but Intel's response is to remove all profits from AMD until it's left behind.
Just compare these two:
Revenue / gross profit / gross profit margin (September 2016):
AMD: $1.1B / $930 mil / 4.5%
Intel: $60B / $16B / 60+%
Even if you add $5.5B revenue from GLOBALFOUNDRIES (manufacturer of AMD chips) to make the AMD-camp comparison more relevant, there is a large difference.
The last time Intel did that it was through illegal tactics - unlikely they'll get away with it a second time.
Intel can undercut AMD lawfully if it wants. Instead of hidden rebates, Intel can openly cut prices.
They can't. They don't have profit margins or cash in hand to do that.
> If Intel starts selling at a loss they'll be subject to even more punitive damages
I think you failed to see my argument. Intel has 60% profit margins. They can cut their prices a lot without making a loss. AMD can't.
If AMD still has energy and gets loads of cache, they might be able to finish the APU idea; now that OpenCL, GPGPU, ML and CV have caught wind, it might give them a fresh market in which to thrive.
More importantly though, Intel definitely needs competition. This is really good news and I hope it plays out well.
It's interesting that this time around, if these numbers are true, AMD is not only faster performing and less costly, but also lower power. In the past they've always run hot, noisy, but cheap and fast-enough.
Maybe their experience in optimizing GPU production is paying dividends here.
I always thought that ECC could prevent blue screens/kernel faults, but I haven't seen those in years on my laptop without ECC.
Anyone using ZFS will (or should!) care about ECC support. 
Lots of people build their own NAS/SAN boxes, so ECC support on a desktop CPU at a reasonable price point would be very appreciated. Currently you need to buy specific model CPUs (Celeron or Xeon, IIRC) to get ECC support from Intel. 
You are getting the same benefits using any filesystem while running with ECC.
Also the "myth" that wrong checksum calculations due to a bitflip will degrade a ZFS filesystem even faster is not valid.
There are some articles/newsgroups around explaining this in much more detail.
So: using ZFS on non-ECC is not inherently less safe than other filesystems on non-ECC.
However, the checksum and the data is also only as good as the memory in which it is stored during computation. Since ZFS takes care of everything else other than memory, you'd use ECC if the data being stored is important enough that you want to ensure that ZFS gets healthy checksums.
Once ZFS writes corrupted metadata with corrupted checksums to the disk, it's very hard to recover that data. Yes, ZFS is not backup, but the implications of serving clients messed up data is also worrying in certain business cases.
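To make that concern concrete, a tiny sketch (illustrative only, not actual ZFS code): if a bit flips in RAM before ZFS checksums the block, the checksum faithfully covers the already-corrupt data, so no later scrub has anything to catch:

    import hashlib

    data = bytearray(b"payload the application intended to write")
    data[3] ^= 0x01                              # bit flip in RAM, pre-checksum
    checksum = hashlib.sha256(data).hexdigest()  # checksum matches the bad data
    # On disk: corrupted data + valid checksum -> scrubs report a healthy pool.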
Given that ECC RAM isn't (much?) more expensive, why not just use ECC? (For my microserver I remember it being the same price as non-ECC.) Well, because CPUs with ECC support are rare, etc.
Without ECC, ZFS loses one of the guarantees that it otherwise provides. But it only degrades down to how bad every filesystem is in the face of memory corruption - not worse.
If Zen client chips have full ECC support and can handle 64GB of memory, I'll be easily sold on an upgrade for my TrueNAS box; then I'll anxiously await some lower-cost (4/8c) dual-socket server CPUs and swap out my TD340 with some Supermicro barebones build.
The E3-1220 is usually idle and frequency-scaled back unless something like a ZFS scrub is running. I've got ~45W of PCIe cards (SAS HBA, 10GbE NIC and 4x1GbE NIC), and 4x8GB sticks of DDR3 UDIMMs probably use 12W.
I'd say without drives it probably draws no more than 150W on average, unless I'm doing something CPU intensive. My drives all sit in an external SAS enclosure, since I wanted more than 4 drive bays that the LFF expansion bracket provided.
Anyway, the ML10 is pretty decent for light-medium work. It's got 4 really fast cores and 32GB of RAM is adequate for most home server use (this was why I got the TD340 though, I've got 72GB in it) - just beware that the Gen1 units don't include the drive bracket so if you want more than a single HDD you have to purchase one or get an external SAS enclosure.
EDIT: the ML10 is basically silent too, even under load - I can't hear the fans unless I try (though my SAS enclosure makes up for this by being the loudest bit of kit in my lab).
For way less, if you're willing to give up on ECC, you can get an i5-7500T and a pretty good motherboard, bringing your TDP to 35W but having a PassMark of 7055 and, most importantly, a single-thread rating of 1924.
This might not matter much if your use is exclusively NAS, but I will probably end up running some virtualized or containerized server, or streaming video, possibly transcoding on the fly, and I'm afraid the Atom might become a bottleneck.
Is there a solution that is somewhat competitive with the i3/5/7 on price and power, and has ECC? And, ideally, that comes in a Mini-ITX form factor?
 Obviously the PassMark score is only a ballpark estimate to get an idea of how fast is a chip, but still.
The 80W TDP worries me though, cooling-, noise- and money/pollution-wise (the extra 45W makes around 400kWh/year).
Considering the CPU will be idle most of the time, do you have figures about the actual idle consumption of an E3 based machine?
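(The 400kWh estimate above checks out, assuming 24/7 operation; the price per kWh below is an assumption:)

    delta_w = 45
    hours_per_year = 24 * 365
    kwh_per_year = delta_w * hours_per_year / 1000
    print(kwh_per_year)            # 394.2 kWh/year, i.e. the "around 400" above
    print(kwh_per_year * 0.25)     # ~99/year at an assumed 0.25/kWh rate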
Coding Horror's "To ECC or Not To ECC", "What Every Programmer Should Know About Memory", "Memory Errors in Modern Systems", and an analysis of memory errors in the entire fleet of servers at Facebook over the course of fourteen months.
RAM bitflips randomly, period. It's just how it works. A cosmic ray can hit the memory chip just right and flip a bit; there's no way to predict or control that no matter how "stable" your machine is. ECC RAM still flips the same way, it just stores extra check bits alongside each word so the controller can detect the error and flip the bit back as needed.
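To illustrate the correction mechanism: real ECC DIMMs use a (72,64) SECDED code, but a toy Hamming(7,4) sketch (function names are mine) shows the same principle:

    def hamming74_encode(d):  # d: list of 4 data bits
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]  # codeword positions 1..7

    def hamming74_correct(c):  # c: 7-bit codeword, at most one flipped bit
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
        syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
        if syndrome:
            c[syndrome - 1] ^= 1         # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]  # recovered data bits

    word = [1, 0, 1, 1]
    code = hamming74_encode(word)
    code[5] ^= 1                         # simulate a random bit flip
    assert hamming74_correct(code) == word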
Cosmic radiation bitflips are BS. No cosmic rays reach ground level. The chance of, say, an Al-28 nucleus successfully penetrating the entire atmosphere is as close to zero as it could possibly be.
Basic physics. Something with such a high charge density won't penetrate ~100km of atmosphere and magnetic field. Even a basic muon wouldn't get through a sheet of aluminum foil and those are still capable of actually getting (barely) through the atmosphere.
The chances of cosmic radiation causing a bitflip are pretty much in the range of "Elvis coming into town on Nessie." Radiation originating from inside the system itself is much more likely a cause.
Your ground-level bit flips are most likely caused by terrestrial radiation sources, not extra-terrestrial ones. This is just basic physics.
Only a small minority of main-memory data corruptions lead to OS crashes; mostly, the in-memory application or filesystem data just gets silently corrupted.
Not exactly. The prices between comparable i7 to Xeon are nearly the same.
I did check and it is still true that with laptops you always pay a premium for Xeon v Core on an equal performance basis (even within the same model).
Because I care about my data. Data corruption may kill your main storage and the first backup, too.
Companies I worked for used those only on the servers where we run critical stuff, not a single developer laptop had ECC.
Not sure I agree. Most science datasets (both from simulations and experiments) are sufficiently noisy that if your scientific end results and conclusions change as a result of even thousands of bitflips in your 16 GB of data, you're Doing It Wrong and your article isn't worth the paper it's printed on. (There are probably exceptions, as always, but those working in those few specific subfields should be aware of it.)
There is way too much overreach in this statement. Imagine doing Finite Element or Computational Fluid Dynamics analyses; bitflips of the floating-point values in the field solutions, which could easily make those values completely unphysical, are not the kinds of errors the solvers are written to guard against. In order to do so, you'd need to sanity-check every value, and if you had to use a "guardrail" value, it could easily take a significant number of iterations to recover to the more correct value. Solvers can be easily crashed by corruption of numbers. Sure, if you're lucky enough to have bitflips in low-order mantissa bits, no real harm done. Just don't expect the bitflips to cooperate in this way.
Maybe the "big data" and machine learning crowd don't care about some corrupt values, but most numerical/scientific computing is not so sanguine about corruption.
It is a little frightening how we are moving into significantly larger computational solutions, but are simultaneously increasing our exposure to the fragility and lack of guarantees regarding enormous quantities of perfect bits at all times.
a) bit flip happens so high in the mantissa it makes your code crash. detectable, you run simulation again
b) bit flip happens somewhat lower in the mantissa, enough that it shows up in your analysis. detectable as unphysical result, you run simulation again
c) bit flip happens even lower, you don't catch it as unphysical in your postprocessing and analysis. Here's where I'm saying: if a bit flip happens like this and affects your simulation in such a way that you don't see it's an error, but it still changes your end result and conclusion, and you're not running replications of your simulation to test robustness etc., you're Doing It Wrong.
d) bit flip happens even lower, same order of magnitude as numerical errors. nothing bad happens
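Cases a)-d) are easy to reproduce at home; here's a small sketch flipping single mantissa bits of a float64 to see how large each class of error actually is (the bit positions chosen are arbitrary examples):

    import struct

    def flip_bit(x, bit):
        (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
        (flipped,) = struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))
        return flipped

    x = 1.2345678901234567
    for bit in (51, 30, 4):      # high, middle, and low mantissa bits
        y = flip_bit(x, bit)
        print(f"bit {bit:2d}: {x} -> {y}  (relative error {abs(y - x) / x:.2e})")

The high bit gives a ~40% relative error (clearly unphysical, case a/b territory), the middle one ~1e-7 (case c), and the low one ~1e-15, i.e. down at the level of ordinary rounding error (case d).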
There's a reason why HPC systems universally use ECC.
I was under the impression that HPC systems universally use ECC because at that scale, the probability of memory errors in the OS are large enough to cause constant instability of some of the nodes?
More generally, it's not unreasonable to say that our entire computing paradigm rests on accurate RAM. Above 16GB, the risks just become too big for anybody doing serious work, and not just messing around with prototypes.
Exactly this. The size of your code is positively infinitesimal compared to your data. And unless you're writing your code and then running it exactly once, which is a) even more unlikely and b) bad practice, you'll catch any of those bit-flip errors in your code or data structures.
This has been discussed a lot in the literature, especially for GPUs where ECC carries a performance penalty both on speed and available memory, e.g. in this paper where they've tested it on a GPU cluster:
People will see two identical laptops, one with the 2-core Intel i3, and one with the 4-core Ryzen, with similar clock speeds and similar prices. The Ryzen model might even give the illusion that it comes with an AMD graphics card even though it's integrated. That's all that matters.
Yes, it's still true. That should've been plainly obvious.
But it was released immediately before the next cpu generation was released. It got no real marketing, and support for it was abysmal – some to many games just produced a black screen with it. Based on its performance and pricing I'd have liked to recommend it, but given its socket and its support issues that was almost never a good idea.
Normally you'd compare the A10-7870K to the integrated HD graphics of a Skylake or Kaby Lake cpu, it beats those easily.
Those are made for overclockers and enthusiasts, who are a tiny segment of the market. For every K-part Intel produces, they must make six E-series Xeon chips and fifty low-power notebook ones.
Why is single thread more important in development? Large project builds typically spawn as many compiler processes as there are CPU cores.
Otherwise, your cores will be either waiting for something or just getting hot running synthetic benchmarks.
All the other chips have 16 PCI-E lanes, which you max out just with the graphics card.
Where I live, the price of a 256GB NVMe drive == 512GB SATA drive :(
It's also pointless if consumers will be choosing between a 7700K and the closest equivalent Ryzen part.
If we look at PC exclusives then we can see that these are extremely CPU-hungry games. This hunger only goes up if we want to achieve a framerate higher than 60, say going for 144.
I personally have an i7 @ 3.8 GHz with GTX 1060 and none of these games can hold a stable 1080p @ 144 Hz. What's more, I've benchmarked the effect of changing GPU/CPU and increased CPU power increases FPS far more than increased GPU.
I upgraded from a Radeon R270X to a GTX 1060. In multiplatform games this is a huge leap: Battlefield 4 (1080p Ultra) goes from 43.1 to 94.8 FPS, a whopping +120% increase. In Dota 2 (1080p Ultra), though, that only netted me a +11% gain. Then when I overclocked my i7 from 2.8 GHz to 3.8 GHz I got a +23% increase.
 For example Dota 2, H1Z1, DayZ, Civilization 6, Guild Wars 2.
 http://www.anandtech.com/bench/product/1043 & http://www.anandtech.com/bench/product/1771
 These being all reaction-dependent games, it doesn't have to be FPS. Even fast-paced Pong qualifies.
Beyond that, if you wanna act innocent and play a citation game, I can throw you a bone. I'll give you this, what do you give me in return?
 This was already evident back in CRT days when 70 Hz was garbage and 100+ Hz was what every serious gamer was after.
How about the NIH? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2826883/figure/...
Gap detection thresholds for different age ranges. Notice that the average for vision is around 20ms, or 50fps. Yeah, some people are lower, but the large majority of people don't see any faster than 60fps.
 so it's not actually a citation, it's just annoying.
I could explain further, but I get the feeling that you've made up your mind and aren't willing to read much. If you change your mind, start with the link I gave earlier. 
In addition, this thing is pretty easily testable with home equipment. Get yourself a 144 Hz or faster monitor and construct the following program with OpenGL: two identical boxes moving side-by-side, one updating its position at the full 144 Hz and another at 60 Hz.  You'll see the difference yourself.
 There's a web app as well, but unfortunately browsers don't support frame rates higher than 60 Hz that well. My chrome is limited to 60 Hz even on 144 Hz for example. https://www.testufo.com/#test=framerates
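If raw OpenGL feels heavyweight, here's a rough Python version of the described test using pygame (my choice, assumed installed; speeds and sizes are arbitrary). One box is repositioned every frame at ~144 Hz, the other only 60 times a second:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((800, 400))
    clock = pygame.time.Clock()

    x_fast = x_slow = accum = 0.0
    SPEED = 300.0          # pixels per second
    SLOW_DT = 1.0 / 60.0   # the "60 Hz" box only moves in these steps

    running = True
    while running:
        dt = clock.tick(144) / 1000.0   # pace the loop at ~144 Hz
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        x_fast = (x_fast + SPEED * dt) % 800   # repositioned every frame
        accum += dt
        while accum >= SLOW_DT:                # repositioned 60x per second
            x_slow = (x_slow + SPEED * SLOW_DT) % 800
            accum -= SLOW_DT

        screen.fill((0, 0, 0))
        pygame.draw.rect(screen, (0, 255, 0), (int(x_fast), 100, 50, 50))
        pygame.draw.rect(screen, (255, 0, 0), (int(x_slow), 250, 50, 50))
        pygame.display.flip()

    pygame.quit()

On a 144 Hz monitor the lower box visibly judders relative to the upper one; on a 60 Hz monitor they should look the same.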
I know people love their 144Hz screens, but if you're generating 170FPS then the remainder is utterly wasted. You're "bottlenecking" the monitor.
So at some point you're spending money on stuff you don't need, the extra frames are simply thrown away.
It's not just old games that need good CPUs these days, though I will admit I'm probably beyond even the enthusiast market.
You are correct that if they're trying to do 4K 144Hz they're not going to have any luck. Stick with 1080p or 1440p max resolution for 144Hz gaming.
Also, personally I think 144Hz is a waste. I work for Twitch, so I have access to and play on 144Hz hardware a ton, and I could easily afford it. I instead go for max screen real estate and high-end color reproduction. I'm totally fine with 60Hz gaming on my Dell U2714 at home; at work, where I can walk 10 feet and use nice 144Hz gaming rigs, I generally just play on my 30" Dell instead.
Totally anecdotal, but my coworker was all about the big glorious 1440p display; he bought a Raptor or whatever and a 1080 GPU. Then, because the Raptor had an issue, he bought a 48" 4K TV instead. The Raptor has been replaced but sits idle, because having the equivalent of 4 screens with no borders on the TV is much more useful to him.
144Hz is like 3D in movies: if it doesn't distract you (glasses for movies, smoothness for games) then it's immersive. And that's the problem. Immersive things you forget about; they bleed away. Gaming is already very immersive, so you're just forgetting about more things.
Go with more screen real estate if your desk will fit it. I've convinced 30+ friends to go with more monitors over better single monitors, friends who thought that 2 monitors was ridiculous, friends who thought that 3 was ridiculous. No one has ever come back and said I was wrong.
Final point: Overwatch, I believe, is capped in the engine at 60fps. This means that if you run over 60fps you're not getting additional information. I don't think they even do sub-frame interpolation, so you're likely just seeing the same frame multiple times. But best case, even sub-frame interpolation is just showing you predictions for 6.94 milliseconds (one 144Hz refresh) at a time, and these are not actually accurate; they're interpolations.
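Under that assumption (a 60 Hz engine tick on a 144 Hz display, no interpolation), a quick count of how many refreshes just re-show a stale simulation frame:

    TICK, REFRESH = 1 / 60, 1 / 144
    shown, repeats = -1, 0
    for frame in range(144):                 # one second of display refreshes
        tick = int(frame * REFRESH / TICK)   # latest completed simulation tick
        if tick == shown:
            repeats += 1
        shown = tick
    print(f"{repeats} of 144 refreshes showed a stale frame")  # 84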
How much ram do you have?
It's DDR3 1600.
Reinstalling windows is not really something I want to do.
The only specs we've seen released for any of the benchmarks have shown very slow memory with high latency being used. When someone went about replicating the same timings on his Intel CPU, he saw a significant drop in those same scores. The post was on the TomsHardware forums when the first leak slipped out.
It could be that the benchmarks are very focused on some particular kind of operation that Ryzen is very suited/unsuited for. We definitely need real apps and more sophisticated benchmarks. It looks to be a good processor, probably miles ahead of Bulldozer in many departments, but how it compares to current-gen Intel in popular workloads will have to wait for March 2nd.
That's not how it works. If they think AMD can make a dent, they'll mark their chips down to be competitive (for whatever definition of competitive they use, which is probably supported by data, though it might not be YOUR definition). Poor Intel will have to deal with only 100% profit instead of the 300% or so they've become accustomed to.
Intel got a knockout from AMD some 15 years ago. Enough people working at Intel still remember it well enough. This aspect of history is very unlikely to repeat or even rhyme.
The trouble is it looks like they nearly folded out of the market completely, so they've got a lot of trust to rebuild.
I am surprised Intel has never come up with more gaming-oriented i7s with more cores and no integrated graphics, as pretty much no gamer would run without a discrete video card anyway; but then again, without any competition from AMD it probably wasn't worth the engineering effort or the risk of cannibalizing their Xeon offerings.
Seems pretty fair to me.
Intel would need to update their "current gen" (Skylake, Kaby Lake) CPUs with 8-core models to be competitive but for some reason Skylake and Kaby Lake have been stuck with a max of 4 cores... which is why these benchmarks are compared to an 8 core Broadwell.
Intel has been pretty much milking the Desktop Market for as long as they could.
High-memory instances are cheaper due to support for 8 memory channels. Lower-end instances could be cheaper due to a lower cost per core.
That seems atypically good in a market where new generations have typically brought 10-20%.
I still wouldn't say it's impossible, as, for example, Intel had (apparently until Sandy Bridge) much slower NaN handling than AMD, at least without SSE2. So maybe AMD discovered some weak point like that. But until I read some explanation, it does seem too good...
HPCG is a memory-bound application, so I doubt it runs the system as hot as HPL (High Performance Linpack).
I predict that in the future AMD will come to dominate the home/enthusiast CPU market, and Intel's low-power CPUs will dominate enterprise along with whoever comes out with a CUDA competitor.
To get an idea of how smart AMD people are, this video is instructive:
If Ryzen is enough of a success, this will be heaven for the company.
I wish them all the best.
Even Intel supports Win7 with their most recent CPUs, even if it's not officially marketed and one has to search around for the driver.
I know Microsoft has a contractual obligation to support 7 for a while but I think there's next to zero chance they will reset Windows 10 and go back to Windows 7. You will need to get away from Windows 7 at some point. Better start making preparations now.
Radv, a community-developed Vulkan driver supporting the last 3 generations of Radeons, supports vkQuake, Dota 2, and The Talos Principle, which is pretty much everything there is on Linux. Playing Doom 2016 over Wine using Vulkan also works if one uses airlied's branch. Performance is not on par with Windows right now, but I expect it to improve over time.
Win7 is superb: it looks nice, works rock solid, and is supported until 2020 or beyond (see XP). And then in the 2020s, Fuchsia/Android or whatever OS is available and makes sense for end-consumer needs. (Of course *nix driver support is important too; AMD knows that.)
If you want a platform you can ride ad infinitum, Microsoft is the wrong boat to be in. We're using OrangePi boards for everything, since you can build Debian for them with a fully libre stack and no closed firmware.
None of the games I care to play are DX12 and Windows Store exclusive, so using Windows 7 has worked well so far.
You should be able to find a copy fairly easily floating around here and there.
It lists two other repos at the bottom for privacy and de-bloat. I version froze my only Win10 VMs, so not sure if the projects are current for post-Anniversary Update.
Windows 10 Enterprise Long Term Servicing Branch (LTSB) is similar to Windows 10 Enterprise, but does not include Cortana, the Windows Store, the Edge browser, Photo Viewer, or the UWP version of Calculator (replaced by the classic version), and it will not receive any feature updates, which gives companies more control over the update process. Windows 10 Enterprise N LTSB also lacks the components absent from other N variants (see below), making it the most stripped-down edition of Windows 10 available.
Given that this is Enterprise stuff, you won't get it preinstalled on retail laptops/PCs, and buying it separately is impossible - you only get it with a Volume License contract or an MSDN subscription (IIRC $1.1k per year).
The "-N" suffix is there since (IIRC) Win XP, it is a version of Windows without bundled Media Player and other stuff.
Edit: According to https://www.howtogeek.com/273824/windows-10-without-the-cruf... one could get Win 10 Enterprise as a $7/month subscription and via this also Win 10 Enterprise LTSB? Does anyone have more details on how and if this works? I'd GLADLY pay $7/month for this, even more.
Whether they are allowed to offer it to private customers, and whether any do, I have no idea.