They want to drop support for a lot of hardware, and can't figure out how to do that without looking really bad while staying as Windows 10.
That is really the difference between Win 10 and Win 11: Requirement for UEFI, TPM 2.0 and a CPU that does AVX2 and FMA3, and some kind of hardware-assisted virtualization.
UEFI allows them to rewrite their entire bootup codebase with only one target. Literally hundreds of megabytes of source code will be removed from their tree, some of it dating back to the '80s, and replaced with something that is much, much smaller and cleaner.
TPM 2.0 and hardware-assisted virtualization allow them to move the kernel one level up in the privilege hierarchy, while properly verifying it during boot to eliminate entire categories of exploits and providing trusted computing that might actually work this time. (Not entirely positive, but it's a desirable feature for them.)
And having all the CPUs that can support the OS also support the good vector instructions greatly raises the baseline against which most software is compiled. This is an often overlooked point: the benchmarks and the most performance-sensitive programs might get multiple code paths or versions for different CPUs, but by far most software made for Win 10 still only targets a very low baseline. Even if it's not 32-bit, most programs only target the ancient SSE2 (!) because that's the baseline that's guaranteed to exist if you have a 64-bit CPU, never mind that it's over 20 years old now.
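To make that concrete, here is a minimal sketch of the kind of runtime dispatch a baseline SSE2 build has to do before it can take a faster path. It assumes MSVC's __cpuidex intrinsic (GCC/Clang would use __get_cpuid or __builtin_cpu_supports instead), and the code paths are just illustrative:

    #include <intrin.h>   // __cpuidex (MSVC)
    #include <cstdio>

    static bool has_avx2() {
        int regs[4] = {0};               // EAX, EBX, ECX, EDX
        __cpuidex(regs, 0, 0);
        if (regs[0] < 7) return false;   // CPUID leaf 7 not available
        __cpuidex(regs, 7, 0);
        // Leaf 7, subleaf 0: EBX bit 5 reports AVX2. A production check
        // would also verify OSXSAVE/XGETBV so the OS saves AVX state.
        return (regs[1] & (1 << 5)) != 0;
    }

    int main() {
        if (has_avx2())
            std::puts("using the AVX2 code path");
        else
            std::puts("falling back to the SSE2 baseline path");
    }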
As someone who has actually written a working non-UEFI bootloader (albeit following a tutorial, for a hobby project), and who has done a lot of experimentation with dual-booting, manipulating disk images, etc.:
I find it hard to believe dropping legacy boot support saves hundreds of megabytes of any kind of data in the OS, let alone source code. (If anything, it's the UEFI code that is orders of magnitude larger and closer to that size, not the legacy boot support.)
I could believe that dropping legacy boot support sheds a lot of complexity and arcane, old code that's no longer as well understood. However, the actual footprint of that code can't be large. The boot sector is only 512 bytes; then you chain to a secondary and maybe tertiary boot stage, but those are measured in kilobytes or megabytes, hardly hundreds of megabytes (compiled).
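For scale, here's a rough sketch of the classic MBR layout; the sizes are the standard ones, the struct itself is just my illustration:

    #include <cstdint>

    // Classic (non-UEFI) Master Boot Record: the entire first boot stage
    // lives in a single 512-byte sector.
    #pragma pack(push, 1)
    struct Mbr {
        uint8_t  boot_code[446];          // first-stage loader code
        uint8_t  partition_table[4 * 16]; // four 16-byte partition entries
        uint16_t signature;               // 0xAA55 boot signature
    };
    #pragma pack(pop)
    static_assert(sizeof(Mbr) == 512, "an MBR is exactly one sector");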
But the fact is once the boot process has completed there's no longer any reason to refer back to any of the stuff that happened during the legacy boot while the operating system is running normally; by that point how the boot was accomplished is a non-issue. So it's not like, for example, supporting outdated/obscure hardware where you have to keep a bunch of drivers around.
Maybe the size includes pdf/docx/ppt notes. Maybe there are a few decades of accumulated copy+paste+modify code chunks. Or maybe it really is just complexity -- but at the end of the day that's what matters.
> But the fact is once the boot process has completed there's no longer any reason to refer back
The instruction pointer is hardly the only state here. Doesn't the _SM_ structure (and the mess that came before) cascade into a bunch of configuration? It seems to me that it really could look like keeping a bunch of drivers around.
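For reference, _SM_ is the anchor of the SMBIOS 2.x entry point that legacy firmware leaves in low memory; here's a rough sketch of its layout based on the SMBIOS spec (field names are mine):

    #include <cstdint>

    // SMBIOS 2.x entry point, located by scanning for the "_SM_" anchor in
    // the 0xF0000-0xFFFFF region on legacy-boot systems (UEFI hands over the
    // same table via an EFI configuration-table GUID instead).
    #pragma pack(push, 1)
    struct SmbiosEntryPoint {
        char     anchor[4];              // "_SM_"
        uint8_t  checksum;
        uint8_t  length;
        uint8_t  major_version;
        uint8_t  minor_version;
        uint16_t max_structure_size;
        uint8_t  revision;
        uint8_t  formatted_area[5];
        char     intermediate_anchor[5]; // "_DMI_"
        uint8_t  intermediate_checksum;
        uint16_t table_length;
        uint32_t table_address;          // physical address of the structure table
        uint16_t structure_count;
        uint8_t  bcd_revision;
    };
    #pragma pack(pop)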
Or maybe this is a comment on a forum full of people that like to pretend they are experts in any subject at hand, and the author has no idea how big this source code is. Literally.
Yep, I develop media software, something to handle live audio and video. I tried last year to upgrade from SSE2 to SSE4 and got bug reports from users the week after; apparently a lot of people still use old Phenom and Atom boards...
I was still happily running a 2009 Phenom II until last year, when I was forced to upgrade because the motherboard started acting up. The thing is, performance-wise it was just fine. In 2009 it would have been unthinkable for a 1998 CPU to be anything but a paperweight, but the performance plateau of today is real.
I don’t know, I had a Phenom running for years as a server that was running a Linux VM for automated torrenting, a Linux VM running the Ubiquiti NVR, and the host itself serving as a media server. But that’s about all it could do. ;)
I realize that sounds a bit like a Monty Python critique of the Romans, but there’s no way I would/could use the platform as a daily workstation professionally. It served its last days as a home servant droid well, but just. Its longevity in a low-demand role was only possible by augmenting it with an array of SATA SSDs and a hypervisor. I finally put it down a while back (and still had one backup mobo NIB for it). All the above relatively lightweight functions it was performing can be done with better performance on a far smaller footprint for size/power consumption/noise today. Those attributes are part of an improving performance profile.

People talk about a performance plateau, which I’ve pushed back on in other posts as really being a demand plateau for many of its advocates. Case in point: using the Phenom platform as a daily workstation. If one can even think about pulling that off, it can only be because their expectations for performance have dropped very low or their needs haven’t changed since the Phenom was in its prime. Yeah, it runs the old workloads like it always did (and why would it not? Even better when augmented with more modern hw/sw), but no way under the sun would I be able to overlay my current demand profile on a Phenom platform and have anything but swamped resources and interminable delays.

I say this as someone who fully appreciates the awesome power we have in a 5V/2A RPi smaller than a deck of cards, and what can be done on a larger platform drawing less than 60W today, let alone what can be done with 1000W. The Phenom platform is highly inefficient in comparison to the tools now at your disposal.
I still have a hexa-core Phenom II as my primary desktop workhorse. I'm not looking forward to the day the motherboard or CPU flakes out. It really does fulfill most of my needs. The only real advantage I can see by upgrading to something more modern is lower power usage. From an environmental perspective, I'm guessing I'd have to run the new system for a long time to make up for the impact of manufacturing the new motherboard and CPU.
I agree about the performance plateau. I'm using an FX-8350 for gaming, and I'm fine. It even handled Cyberpunk 2077 well, and that came out not even a year ago.
Newer CPUs have better security (hardware encryption, better virtualization, etc). Not nearly as much better as it should be, even compared to ARM, but better.
Also, some recent processors that support everything you've listed are still dropped. A hunch at the time of the Windows 11 announcement was that https://portswigger.net/daily-swig/bitlocker-sleep-mode-vuln... can only be fully mitigated in the supported CPUs (or that the mitigation caused performance degradation).
I'm not sure I buy your point about ISA extensions. Many programs still target Windows XP or Windows 7, and if that "lag" remains the case, then it will be many, many years until vendors can reasonably require Windows 11 as a proxy for those ISA extensions. In the meantime, vendors can offer a Windows XP-11 version, and a Windows 11 "optimized" version, but at that rate, it would be just as much work to put the optimizations in the single version and make the program automatically detect the CPU, saving the user the trouble of figuring out which version they need while also making the CPU optimizations work on older Windows versions.
I don't think Microsoft is concerned about ISA extensions for third parties. Apps can use function multi-versioning to support beneficial ISA extensions. I think they're more interested in ISA extensions and CPU features for first party code. They can enforce protections in drivers, kernel modules, and services.
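For illustration, here's a minimal sketch of what function multi-versioning can look like. The target_clones attribute is a GCC/Clang feature that relies on ELF ifuncs, so it doesn't apply to MSVC/Windows as-is, but it shows the idea of the compiler emitting and selecting per-ISA variants; the function itself is hypothetical:

    // GCC/Clang on ELF targets: the compiler emits one clone per listed
    // target plus a resolver that picks the best one at load time.
    __attribute__((target_clones("default", "sse4.2", "avx2")))
    double dot(const double* a, const double* b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; ++i)
            s += a[i] * b[i];   // each clone gets auto-vectorized for its ISA
        return s;
    }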
But Microsoft can do that too? They can compile one version of each DLL for SSE2 and another for AVX or whatever if they want, same as any other vendor. It's not like they can just turn off SSE2 for everybody, because it would break every program, and even new programs may use SSE2 instructions in combination with newer x86 extensions. Even if they did (and I'm not sure it's even possible in x86), I don't see how it could increase security anyways; any new Spectre vulnerability or whatever is probably going to be just as exploitable if not more so using AVX load instructions as SSE load instructions.
> most software made for Win 10 still only targets a very low baseline. Even if it's not 32-bit, most programs only target the ancient SSE2 (!) because that's the baseline that's guaranteed to exist if you have a 64-bit CPU, never mind that it's over 20 years old now.
Windows 10 will still be widely used in 2031. You're not going to drop support for it and old CPUs regardless of Windows 11's existence.
No, but since everyone on Win11 will be on a new baseline, they can push developers to support Win10 and Win11 as separate targets, rather than shipping one build for everyone that sticks to the lowest support level available.
> and can't figure out how to do that without looking really bad while staying as Windows 10
Would that really look so bad? Even the Linux kernel dropped 80386 eventually.
Windows 10 feature updates occasionally phasing out hardware support after a few years wouldn't seem wrong at all in my eyes; they could easily explain it as a trade-off necessary for getting "10 forever". And Windows users were already used to not getting their feature update immediately on older hardware; an eventual escalation from late to never wouldn't have been surprising at all.
But do you pull kernel updates automatically, unattended? Because Windows feature updates are like that - every now and then they may market them a bit, including popping up some notification in the system, but otherwise they just get installed automatically over time. If they started dropping compatibility with older CPUs and software with subsequent updates, they could find themselves in the middle of a huge PR mess when one of their automated updates suddenly breaks a chunk of some industry. It makes more sense to create a version boundary if the changes are really as extreme as GP described, preventing large numbers of users from sleepwalking into a disaster.
Windows 10 feature updates (recently twice a year, named $year'H'$term) have been gated by hardware compatibility lists for quite a while (since the start?), and there's even a distinction between feature update versions that have their own "Long Term Servicing Channel" and those that don't. Sounds familiar?
The boundaries you ask for have already been in place, long before 11.
Thank you for the excellent summary here. You should have written the tl;dr for the Windows team even though I know that officially saying this would likely piss off just as many people.
One follow-on question - any insight into how what you wrote above explains why Microsoft dropped support for AMD Zen in Win11 (only Zen+ and higher are supported)? Looking at WikiChip, it doesn't seem like there's a feature missing, so perhaps there's an unfixable bug in Zen?
> That is really the difference between Win 10 and Win 11: Requirement for UEFI, TPM 2.0 and a CPU that does AVX2 and FMA3, and some kind of hardware-assisted virtualization.
Only SSE4.2. Win 11 supports Intel CPUs without AVX2, since that was missing from some Celerons/Pentiums.
Mostly agree, but AVX still isn't a requirement, because Intel still sells Celerons without AVX. Even the Core i5-L16G7 (sold in 2020) doesn't support AVX at all, because the Tremont core (used for Celerons and as the little cores) doesn't support AVX.
> TPM 2.0 and hardware-assisted virtualization allow them to move the kernel one level up in the privilege hierarchy, while properly verifying it during boot to eliminate entire categories of exploits and providing trusted computing that might actually work this time.
So Microsoft is going all in on hardware-based security while everyone is still recovering from Spectre and Meltdown and dropping in additional layers of software to fix CPU security flaws? Is there a betting pool somewhere on how long that will take to backfire?
A big part of it is not just hardware-based security, it's virtualization-based security (VBS): essentially adding a hypervisor layer under Windows that manages access to sensitive processes like lsass, which handles credential material. This is a huge improvement in the Windows security model, and even if there are bypasses, they are non-trivial and consist of more than just elevating to admin/SYSTEM, as is currently the case.