It seems that because they weren't doing this until now (keeping the delta against the base locally), they were distributing these same, otherwise locally computable, deltas to every Windows computer in the world with every update!
If I understood it correctly they patented that "approach" and wrote this much (or more) to make it appear more clever and worthy of a patent?
"1] The approach described above was filed on 03/12/2021 as U.S. Provisional Patent Application No. 63/160,284 “REVERSE UPDATE DATA GENERATION”"
Even if the ideas aren't novel, getting this thing built and rolled out for Windows is not at all trivial, so good job there. But calling it "new compression technology" is misleading.
That basic technique goes back at least to the TranslateAddress method in Exediff (1999).
I've been meaning to write up an article on this stuff; Google doesn't seem interested in publicizing Zucchini themselves, maybe due to the patent kerfuffle around Courgette. Microsoft's document on delta compression covers a lot of this ground.
Some of this can be avoided; I made some changes to Courgette for a significant speed increase here: https://bugzilla.mozilla.org/show_bug.cgi?id=504624#c39
I did write up a bug to consider Zucchini in Firefox, with patch size comparisons, but ultimately we didn't switch from the simple power of bsdiff: https://bugzilla.mozilla.org/show_bug.cgi?id=1632374
Their system can use info from the pre-linked objects and PDB symbol files for better alignment; I'd played around with seeding alignments like this in bsdiff and Zucchini, but I don't recall it giving significant improvement. https://docs.microsoft.com/en-us/previous-versions/bb417345(...
Interestingly, I found myself using ultra-minimal distro VMs, installing nearly nothing on them (Qubes OS), and just sharding the apps: startup/update of the main GUI Linux is trivially fast, each VM only has the little subset of apps I care about, and when I trigger a many-VM update they do it in sequence, each updating just its own subset, while I can fully use the other VMs.
Also, I tend to use stable-ish distros now, like Debian, and barely see updates coming in anymore, compared to my Fedora youth when there were several updates a day, some breaking everything.
Isn't this basically the idea behind snaps? Or is that more docker-like?
On my fast home connection dnf is way faster than apt at updating my laptop, unless it's a really small update, in which case apt is faster because it refreshes the repos quicker.
I think there are alternatives to apt that can do parallel downloads and delta as well.
The above being said, my reaction to the article's key point was "you've been shipping forward and reverse diffs this entire time???"
I guess my point is that it's kinda sad that it's most correct to generally describe things like this as "new technology", because customers who've never seen something like it before don't draw any distinction between that and the idea having existed for however long. </rant>
A lot of that got lost over the years.
Advantages? Just download the image. Could even delta from old to new in multiple ways. Easy revert as well. Security would then come from the usual deal around privileges but there's the possibility of exotic new approaches. Automatic revert could be a thing as well. OS images could actually have a version. Kind of like a ROM.
Disadvantages? Disk space. Filesystems have come a long way, though, and there are plenty of tricks around that. But I'm sure there are plenty of other downsides. Don't corrupt that file.
You can already boot VHD(X?) files stored on, e.g., the C: drive, so this isn't actually impossible.
EDIT: These images wouldn't be unpacked on each boot. The files live inside the image, and the image is mounted as a filesystem. This is old tech now in the 2020s and it's definitely not exotic.
If images really are too much overhead, then change the word "image" to "partition", or some combination thereof (e.g. an image for each app). But in reality I'm not convinced the overhead is really that great: disk encryption already uses significant processing power, so accessing a filesystem from an image isn't that great a leap.
I wish Microsoft would just go make a new OS from scratch, one that behaves more like other OSes (file/directory slashes, RGB formatting, fonts, etc.) and that has an immutable core while the system is running. Get all user data onto a separate partition. Have a root user + password by default. Have a proper package manager like apt/dnf (through which ALL software can be updated), have a proper bootloader menu from which you can install system updates, strip out all language packs, drivers, and other useless features (Xbox, weather app, phone app), ship a new terminal (get rid of cmd + PowerShell and start from scratch), rebuild diskpart... the list goes on. Make it lean and fast, and make the choice to ignore backwards compatibility with Windows. Don't even call it Windows, for that matter.
With all that, you could have an isolated, immutable Windows in less than 2 GB. All of the extra partitions could exist as VHDs that get mounted at startup; that way you can copy an entire environment by copying one file.
Network drivers are always useful.
Alas, the tech debt for MS is enormous, so unless they had the ability to create a new OS with no backwards compatibility (except most of the Win API, if possible), we have to live with the pain.
They could achieve something like this, but at the boot level:
They seem to achieve the size reduction by transmitting only the forward upgrade patches and letting the machine generate the downgrade patches during the upgrade. How do they manage to keep install time the same when this definitely uses more resources (CPU, I/O)?
This would mean, if you have a fast link, the experience is somewhat worse, but if you have a slow link, the experience is better.
Just a total guess though.
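The "generate the downgrade patch locally" idea above can be sketched with a toy patch format. Everything here (the function names and the `(offset, old, new)` replacement format) is invented for illustration; real Windows deltas use compressed binary transforms, not this:

```python
# Toy model: a forward patch is a list of (offset, old_bytes, new_bytes)
# replacements of equal length. This format is hypothetical, purely to show
# why the reverse patch is free to compute on the client.

def apply_patch(data: bytes, patch) -> bytes:
    out = bytearray(data)
    for offset, old, new in patch:
        # Verify the base bytes match before replacing them.
        assert bytes(out[offset:offset + len(old)]) == old, "base mismatch"
        out[offset:offset + len(old)] = new
    return bytes(out)

def reverse_patch(patch):
    # The key trick: while applying the forward patch, the client has both
    # the old and the new bytes in hand, so the downgrade patch can be
    # derived locally by swapping them -- no extra download needed.
    return [(offset, new, old) for offset, old, new in patch]

base = b"hello windows 10"
fwd = [(6, b"windows 10", b"windows 11")]
upgraded = apply_patch(base, fwd)            # b"hello windows 11"
downgrade = reverse_patch(fwd)               # computed locally
assert apply_patch(upgraded, downgrade) == base
```

The extra CPU/IO cost the comment asks about is essentially this swap plus re-verifying the bytes, which is cheap relative to downloading a second patch.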
Do they? Or are Windows updates going to get even slower?
> removed major revenue sources
But the original post was complaining about ads, too :-)
Our Windows Server 2016 VMs take over two hours to apply the monthly security patches, even on a basic web or application server. The Sysadmin subreddit had a thread recently where some were claiming some of their 2016 machines take over four hours.
Not only are our Linux servers dramatically faster to update, there's also more information displayed on progress for those inclined to watch console output. This leads to far fewer awkward pauses where you try to work out whether it's permanently stuck or just thinking.
We generally see <5 minute downtimes on all our Server 2016 and 2019 VMs, never more than 10 minutes.
Also, it's incredibly slow.
Updating a bunch of computers with an i9, 64 GB RAM, NVMe, and a gigabit uplink from an older ISO install to a recent build still took half a day and 4 restarts on average.
Just kidding; it's impressive how much you can save in terms of size.
While it's important to reduce file size for the benefit of people with slower connections, I wonder if there are ways we could trade a larger download size for a much faster update overall.
Guess Chrome updates still use this compression.
Windows, not even once.
That said, I recognize that it might be better to give users a choice. But then again, isn't a group policy exactly designed to tackle this problem? Good defaults for normal people, and customizability for power users.
This is useful when the OEM isn't being a jerk. They can be sure you've got the drivers for the touchpad or whatever, regardless of how you install... But they don't always use their powers for good.
What's the table name, and can I read it from Linux?
The ACPI table responsible for it is WPBT.
If you wipe, it's OK, but if you don't, the partition may be used by Windows, even if you install fresh from an image you prepared yourself.
Because a lot of people:
- don't care about their privacy and spyware (a not-insignificant chunk of the populace; even I'm growing weary after decades of surveillance attempts)
- don't know any better (perhaps the majority of the consumerbase)
- only have certain software available on Windows (e.g. MobaXTerm, or have something like Sony Vegas fit their workflow and don't have the desire to change)
- don't know how to use the alternative OSes well, or have bad experiences with them (Linux driver installation, anyone?)
- feel like that's a fair tradeoff to make for being able to run their favourite entertainment software or video games (Proton will probably eventually be good enough, but it doesn't yet have 100% coverage, which currently isn't enough to sway everyone)
Honestly, Linux is probably the better platform for many other things, everything from privacy to development (how it works with Docker and other *nix software is especially nice: no need for weird mapping, WSL/WSL2, MinGW, or any other weirdness like on Windows), but for a variety of reasons adoption remains low.
That said, I actually recently wrote about some of my frustrations in a blog article, "On technological illiteracy", which ended up talking a bit more about the driver issues and hardware support: https://blog.kronis.dev/articles/on-technological-illiteracy
Let's just hope that things continue improving in the coming decades!
It just sits there scanning for ages, and then installs are really slow as well. Downloads seem fine. Isn't it just asking for updates since x?
I've always thought that's because the newest ones are dependent on some of the second-newest ones, so it couldn't install them before the dependencies were installed. And the reason it throws those at you immediately, "boom", is that it hasn't really "found more updates" after installing the first set; it knew about these ones too, from looking through the list it made for the first update, and just held them back for a second round.
Just my speculation, though, so could be totally fucking wrong.
Then, for example, "Update telemetry" scans all applications on the system and only gives you the option to upgrade to Windows 11 if it doesn't find anything incompatible. On Apple you just get the option to upgrade, and stuff just stops working afterwards.
Granted, they host an "incompatible apps list" which their installers are bundled with (plus they download updates to it if they can). I have no idea what it's for or what it's doing; it's never found a thing on my systems.
Windows 7 and earlier had some issues where, as the size of the update catalog grew, the updater would start taking much longer to "scan for updates". This was eventually fixed in an update to Windows 7 -- but this still meant that you'd be waiting a very long time to install the first few rounds of updates to a fresh 7 install.
That being said, Windows 7 is EOL. If you're still using it, please stop.
I don’t mind so much that it takes time checking, downloading and installing updates, because that’s not downtime. My beef is with what happens once you reboot.
This has been my experience with Windows updates for the last 10 years or so, on different PCs and different Windows versions. It's just slow (and it's definitely the installation phase, not the downloading).
It could be a lot faster if they parallelized the file operations or deferred the antivirus.
(Both Windows and macOS spend a lot of time in indexing your disks and doing all sorts of on-device "AI" analysis to provide some specific feature. And both collect a lot of your data for this.)
What Linux distro do you recommend?
These days, I just have an old XUbuntu installation that has been consistently upgraded since 16.04, and it just works. Some may consider Ubuntu-clones to be "beginner's distros", ultimately, though, I like how it gets out of your way, apt works well, and I can focus on being productive.
Doesn't mean you have to use all they do. I happily work emacs-based and in i3...
I have a pretty respectable system. i9-9900K, 32 GB of RAM, and Windows 10 is installed on an NVMe that can read/write at up to 3 gigabytes/sec (Yes, gigaBYTES!).
A 200 MB update should take a fraction of a second. Even if it's actually applying a delta and not just simply overwriting files, I can't imagine it should take more than a couple seconds.
We are looking to move to a Linux host instead but that is a big up-front cost to get the same size machine, move the VMs across and remove the old host.
Fedora/GNOME prefers to install updates during a reboot. So although most updates can work this way, it's not always the case.
There are some minor cases where a reboot is better, or where an update requires a bit of work. E.g. Firefox updates, but I also sometimes find that video playback suddenly breaks (not sure why; kernel/mesa/something). Only a reboot seems to fix the playback, and once it breaks it happens in various applications (Firefox, mpv). I install the updates via dnf through cron/systemd (forgot which), so not with the suggested reboot and so on.
I do appreciate the Flatpak bits, those update easily.
Wouldn't it be so much more reliable, faster and simpler to just install those updates on the next restart?
It's certainly simpler for the developer to pick up the updates only upon restart, but it's sometimes very inconvenient for the user, and probably slower to pick up the actual updates. For instance, I run a vanity domain from a Linux device at home. It would be inconvenient to run Windows on that machine and have to restart it following monthly patches or emergency patches.
Before this the experience could be quite fragile post-update, so I tended to always reboot right away anyway.
Otherwise, are there any distros regularly using ksplice?
I never have these slowdowns that I read others have. I always do a clean install from a USB stick (including deleting recovery partitions). I do this like once a year or every year and a half, when something is released that is either a new version or what we used to call a service pack.
Windows sucks at upgrading. There's always something strange going on after an upgrade, things that don't happen after a clean install.
If it's been a long time since you performed a clean install, I recommend considering one.
But this has always been my experience. I remember reinstalling Windows 7 and waiting hours for Windows Update to finish. Even checking for updates could take 20 minutes.
The only thing I can think of, comparing your setup to mine, is that I rarely turn off my computer. If you only use yours very infrequently and turn it off when not in use, I wonder if Windows is doing all kinds of maintenance jobs every time you start your computer, since it is off most of the time, and then when you manually hit Windows Update it gets busy working out the correct state of your machine.
On my computer those jobs are distributed over days/weeks, but for you they run every time you turn Windows on, since they couldn't run at their scheduled frequency while the computer was off.
I'm just speculating, but maybe try turning Windows on once in a while and leaving it on overnight, to test whether this improves things.
My distro's package manager would probably break my system if it lost power during an update. Even so, it's been one of the more reliable and pleasant-to-use programs I've encountered.
I also have a PCIe NVMe boot drive; typically I don't even notice updates until I go to shut my PC down in the evening and Windows tells me it'd like a few minutes to do the updates.
Same difference to me; I was shutting it down and walking away anyway.
It immediately tells you who did what. I hope more companies follow this convention.
Note that this isn't the main Microsoft blog, and in fact there are lots of different blogs.
Reminds me of Jimmy
That's because Microsoft isn't a technology company. Microsoft is a marketing and sales company that happens to sell technology.
As someone who worked at Microsoft in a technical role, I know that's the truth.
You can lie to yourself if you want, but it's still true
- Dad needs you to eat your food!
- Dad is going to brush your teeth!
- Give those scissors to dad!
- Don't touch dad's laptop!!!
- Dad needs to sleep now...
The primary reason for using specifics ("dad", <name of child>, <name of sibling>, ...) is that pronouns are one of the last things a child will pick up. Think about how a child will always hear themselves referred to as "you" -- and end up assuming that "you" is like a name for them and start using "you" instead of "I"/"me".
In reality, you will be using both specifics and pronouns, and often repeat the same sentence with both versions to teach that they are identical.
Speaking for a friend, of course.
Of course, Linux mailing lists used to joke that "micro" and "soft" reference Bill, back when people had thick skin and cancel culture had yet to be invented.
Does he have a beard nowadays? He didn't use to, back in the day -- and aren't they almost mandatory for "bears"?
Seriously, Microsoft blather on about compelling experience this, rich interface that, but the whole thing seems a nightmare to me. What's more, it's been "normalised". People go through all this garbage and they rave about how good Windows is.
This applies to how the updates are handled, the telemetry stuff in general, the way the Control Panel and other settings are arranged, and so on. It's hard to pinpoint one specific thing, because the whole system is steered in this direction.
This must be how delta RPMs work, right? Because why would they contain backwards data? They only contain new data: the diff between the old and new data.