This does strongly suggest a compromise of the Windows Update servers or of some bit of infrastructure that connects people to them, but it also suggests that whoever the attackers are, they made a mistake - a successful compromise executed correctly would not leave so much evidence around. It's quite possible that they've been compromised for a while, and this is a buggy update to the existing malware.
We programmers like being specific. Sometimes these sorts of details matter.
A random string out of a specific character set is also subtly different from taking random bits and encoding them with that same character set, in that the first couple digits will have different distributions.
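A quick sketch of that difference (my own illustration, assuming 64-bit random numbers for the encoded case): picking characters uniformly gives every letter an equal shot at being first, while base52-encoding a fixed-size number concentrates the first digit on the low end of the alphabet.

```python
import random
import string
from collections import Counter

ALPHABET = string.ascii_lowercase + string.ascii_uppercase  # azAZ, 52 chars

def encode_base52(n: int) -> str:
    digits = []
    while n:
        n, r = divmod(n, 52)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits)) or ALPHABET[0]

trials = 10_000

# Method 1: pick each character uniformly from the set.
uniform_first = Counter(random.choice(ALPHABET) for _ in range(trials))

# Method 2: base52-encode a uniform 64-bit number, look at its first character.
encoded_first = Counter(encode_base52(random.getrandbits(64))[0] for _ in range(trials))

# Method 1 spreads first characters evenly (~trials/52 each); method 2 piles
# most of them onto the first few letters, because 2**64 is only about
# 2.45 * 52**11, so a 12-digit result can only lead with 'b' or 'c'.
print(uniform_first.most_common(3))
print(encoded_first.most_common(3))
```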
No, it doesn't. 0xFF is a number I just made up, no source data at all, I promise. Also, it's base 16 :)
Anyhow, the source data was most definitely base 2 (as is your computer's memory, I assume) and later encoded into base52 to be represented as a string (unless someone at Microsoft wrote it in base52, which seems unlikely).
It's not base 16 encoded, which was his point. Encoding demands a source. This is just a base-16 number unless you encoded something to arrive at this. You could interpret "Romeo and Juliet" as a very large base65 number (65 unique chars in the random copy I grabbed) if you want, but it's not meaningful or accurate to call it a base65 encoding.
> Even if that were true, the source data was most definitely encoded from base 2 (which is what our computers work with).
This is the kind of pedantry that people hate because it adds nothing to the conversation. It's a way to inject "I'm right" moments into the conversation so you can feel smart, while no one else really cares. It makes for unpleasant conversations.
You're also not right. Your brain doesn't work in base 2, and you likely didn't enter this number into your computer in base 2 either. You typed the string "0xFF", and that string was encoded in base 2. The base-2 representation of the string "0xFF" is very different from the base-2 number that represents the logical (base-16) value 0xFF.
I didn't understand it that way.
Also, your whole <pedantry> block and the paragraph above it are based on a misunderstanding of my comment (probably because I'm not a native speaker and you caught me in between edits).
I think you're the only one making an "unpleasant conversation".
The original comment did say "encoded". The discussion about the phrase "base52 encoding" was the basis of this entire thread. The parent of your original response also used the term "encoded". The context is clear. I don't see how you could have missed it. (Edit: I see you're not a native speaker. That might be part of why we're not understanding each other. Plus I apparently keep replying in between your edits, which happened again.)
> Also, your whole <pedantry> block and the paragraph above is based on misunderstanding my comment
Well, you rewrote the comment after I replied. I assumed your "source data" was your logical 0xFF number. If you were referring to the "source data" for the strings in the update description, then in all likelihood, there was never a "source" number at all. These strings were almost certainly generated via random selection from a set of characters. You could generate a very long number and then base52-encode it to produce the same thing, but it would be more work and less obvious for future code maintainers. So the "source" was a sequence of characters (azAZ), not a base2 number. You could argue that this is still somehow base-2 since it's in a computer, which I guess is fine (if pointless and pedantic), but it's still not accurate to say that these were "encoded" into base52.
I was unnecessarily snarky and rude. I'm sorry for that.
But the full quote was "base52-encoded". (Though I would argue that base52 implies encoding, because nothing naturally works in base52. The only thing that's naturally 52 is "random letters with random case". Or something with cards.)
>Anyhow, the source data was most definitely base 2 (as is your computer's memory, I assume) and later encoded into base52 to be represented as a string (unless someone at Microsoft wrote it in base52, which seems unlikely).
That is an enormous assumption. It's easier to pick random letters than it is to take a specific binary number and convert it to letters. And they don't give you the same result. Bits stored in base 52 will never start with zzzzz.
It's also so specific as to be inaccurate. These strings could be interpreted as "base-52", but also as base-64 or any other base greater than 52. Calling them "encoded" also implies a belief about how these were derived that isn't justified. "Encoded" means that there is some original source that can be recreated by decoding. It's possible that these were actually created by generating random numbers and then encoding that data in base-52. I think that's pretty unlikely, though.
So no, I don't think this sheds any light or adds any detail. It's inaccurate and misleading, and if we're being so specific that we're making up terminology, then we should also be specific enough to say things like "assumed pseudorandom" rather than "random" when we don't know. Otherwise we're just being obtuse.
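The "never start with zzzzz" point is easy to demonstrate. A minimal sketch (my own assumption: 64-bit source numbers padded to a fixed 12-digit width, with 'a' acting as the zero digit):

```python
import random
import string

ALPHABET = string.ascii_lowercase + string.ascii_uppercase  # azAZ; 'a' is digit 0

def encode_base52(n: int) -> str:
    digits = []
    while n:
        n, r = divmod(n, 52)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits)) or ALPHABET[0]

# A 64-bit number needs at most 12 base52 digits (52**11 < 2**64 < 52**12),
# and since 2**64 is only about 2.45 * 52**11, a full 12-digit result can
# only lead with 'b' or 'c'; shorter results get padded with 'a'.
firsts = {
    encode_base52(random.getrandbits(64)).rjust(12, ALPHABET[0])[0]
    for _ in range(10_000)
}
print(sorted(firsts))  # only 'a', 'b', 'c' can ever appear here; never 'z'
```

Random letters, by contrast, are perfectly happy to start with zzzzz.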
'base-64' is at least a defined term with a defined alphabet.
But above all, we like being pedantic. (Not you.) :)
That's not nice. Please don't do that.
I doubt that dpark is "expecting everything to be in layman's terms", but even if dpark were, many HN users would be happy to explain. And that's the kind of site we want.
One of the frustrating things about reading academic papers is that many authors insist on using less common and less clear phrasing when there are much simpler ways of saying the same things. It makes for unpleasant, heavy reading and it makes the author sound pompous.
"Obtuse" also means dumb, and is often used to mean "difficult to understand". "Abstruse" might have been a better choice of word.
hobbes@namagiri:~/scratch$ echo gYxseNjwafVPfgsoHnzLblmmAxZUiOnGcchqEAEwjyxwjUIfpXfJQcdLapTmFaqHGCFsdvpLarmPJLOZYMEILGNIPwNOgEazuBVJcyVjBRL|ent
Entropy = 5.352821 bits per byte.
Optimum compression would reduce the size
of this 108 byte file by 33 percent.
Chi square distribution for 108 samples is 650.52, and randomly
would exceed this value less than 0.01 percent of the times.
Arithmetic mean value of data bytes is 92.6019 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is 0.142915 (totally uncorrelated = 0.0).
hobbes@namagiri:~/scratch$ echo "You have no way of knowing that. All you see are some random-looking strings. These could be numbers (random or not) that were encoded as base52. These could also be numbers encoded as base64, or base88, or any other base52+. Or they could be randomly-generated strings and there is no meaningful number underlying them." |base64 |sed -e :a -e '$!N; s/\n//; ta'
CL-USER> (to-base 52 (from-base 64 "WW91IGhhdmUgbm8gd2F5IG9mIGtub3dpbmcgdGhhdC4gQWxsIHlvdSBzZWUgYXJlIHNvbWUgcmFuZG9tLWxvb2tpbmcgc3RyaW5ncy4gVGhlc2UgY291bGQgYmUgbnVtYmVycyAocmFuZG9tIG9yIG5vdCkgdGhhdCB3ZXJlIGVuY29kZWQgYXMgYmFzZTUyLiBUaGVzZSBjb3VsZCBhbHNvIGJlIG51bWJlcnMgZW5jb2RlZCBhcyBiYXNlNjQsIG9yIGJhc2U4OCwgb3IgYW55IG90aGVyIGJhc2U1MisuIE9yIHRoZXkgY291bGQgYmUgcmFuZG9tbHktZ2VuZXJhdGVkIHN0cmluZ3MgYW5kIHRoZXJlIGlzIG5vIG1lYW5pbmdmdWwgbnVtYmVyIHVuZGVybHlpbmcgdGhlbS4K"))
hobbes@namagiri:~/scratch$ echo "6lH81qbtcGjfFOclhdtt7ieuljADG17Ou6swolu2A2qlnO52zmMkG8Nfk2APbEbso4idFF44wsLIGfrOg5atP0ucg5gubxAcD9ztMcboeC4sAui28skbwtEiuv64OuD8fbFnn1M01oeq3bH7n9bvu8p3P1MwirdDHxKDONktDvtNOLE1srOz2I4wNLsBpgOGIlLs1i11xt58JpOC1whJ54Krmln1ahmrvksODe7kqjtBazKzKamu5tygI6hGHq3h123Ighyw8s2MxE4dl9rBdBNG1o7tJM2HvzOh955NpLB33nuPr4OwfhjB618y1BP9y2euMquIMszOuH1rPAEkBccOu9qIJBqKgGf1qH42bBd7GOMsbKExosd8CErJlMIAcEyyytFzoKOfkJy1ExEnKc0iE7OGkpLJM3ybB2JNlnwtrC5F0cH8kB41IIv0BwNk9DO" | ent
Entropy = 5.610518 bits per byte.
Optimum compression would reduce the size
of this 452 byte file by 29 percent.
Chi square distribution for 452 samples is 2093.27, and randomly
would exceed this value less than 0.01 percent of the times.
Arithmetic mean value of data bytes is 86.3496 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is 0.018253 (totally uncorrelated = 0.0).
So the question is why a test update would have "hidden" meaning underneath the random-looking strings.
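For what it's worth, ent's Entropy line is (as far as I know) just the Shannon entropy of the byte frequencies. A minimal reimplementation, run on the first sample above (including the trailing newline that echo appends, which is why ent saw 108 bytes):

```python
from collections import Counter
from math import log2

def entropy_bits_per_byte(data: bytes) -> float:
    # Shannon entropy of the empirical byte distribution
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * log2(c / n) for c in counts.values())

sample = (b"gYxseNjwafVPfgsoHnzLblmmAxZUiOnGcchqEAEwjyxwjUIfpXfJQcdLapTmFaqH"
          b"GCFsdvpLarmPJLOZYMEILGNIPwNOgEazuBVJcyVjBRL\n")  # echo appends \n

print(round(entropy_bits_per_byte(sample), 6))  # should match ent's 5.352821
print(round(log2(52), 2))  # 5.7: the ceiling for ideally random azAZ bytes
```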
As for the flaw in your methodology: your ent command (I'm not familiar with the tool, so I'm just going off what I see) assumes full use of the binary space (hence the 127.5 mean for random data). No base52 data will use the full 256-value space, by definition. Random base52 (azAZ) data will actually have a mean of 93.5, extremely close to the measured mean. The serial correlation coefficient should also be higher for azAZ than for 09azAP, because more of the alphabet is contiguous.
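The 93.5 figure is just the mean of the ASCII codes for azAZ:

```python
import string

# a-z occupy codes 97..122 (mean 109.5), A-Z occupy 65..90 (mean 77.5)
codes = [ord(c) for c in string.ascii_lowercase + string.ascii_uppercase]
print(sum(codes) / len(codes))  # 93.5 = (109.5 + 77.5) / 2
```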
I did a little bit of analysis on the data as well to determine if the data was random or gibberish typed by a human on a keyboard, and found that most of the data lined up well for true (or pseudo) random. (Ugly code here: http://pastebin.com/9YN93xhi)
Home row % expected: 34.6%
Home row % actual: 38.3%
Expected upper: 50%
Actual upper: 47.7%
Expected sequential case match: 50%
Actual sequential case match: 55.7%
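A rough sketch of checks like these (hypothetical code; the pastebin has the real thing), run against truly random azAZ data to show where the expected baselines come from:

```python
import random
import string

# 9 home-row keys * 2 cases = 18 of the 52 letters -> 18/52 = 34.6% expected
HOME_ROW = set("asdfghjklASDFGHJKL")

def keyboard_stats(s: str):
    letters = [c for c in s if c.isalpha()]
    home = sum(c in HOME_ROW for c in letters) / len(letters)
    upper = sum(c.isupper() for c in letters) / len(letters)
    # fraction of adjacent pairs whose case matches (measures runs of one case)
    pairs = list(zip(letters, letters[1:]))
    case_match = sum(a.isupper() == b.isupper() for a, b in pairs) / len(pairs)
    return home, upper, case_match

sample = "".join(random.choice(string.ascii_letters) for _ in range(100_000))
print([round(x, 3) for x in keyboard_stats(sample)])  # near 0.346, 0.5, 0.5
```

Human-typed "random" text tends to drift above the baselines on home-row share and on case runs, which is what the numbers above hint at.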
Regarding the assumption of full use of the binary space... May I phrase that differently? azAZ and 09azAP both leave big swaths of the space as always-zero or perhaps always-one. Another way to phrase it: you could write out an azAZ string using five-bit characters and have room to spare.
The swaths of emptiness in the range of possible values is why I took an English sample and did my best to encode it the same way as the original was, so that we'd have an apples-to-apples comparison. The English sample had higher entropy than the original--after "accounting for" encoding.
You are right, my method of "accounting for" encoding did nothing for the "serial correlation coefficient" metric. I didn't know what that was until you mentioned it, thanks.
Your actual/expected analysis is a good idea, though it took me a minute to understand what you were doing. I guess: "If these are truly random digits from azAZ, then half of them will be uppercase." Indeed. But... I just read an article which convinced me that I have no idea how "random" works. (Specifically this bit: "As an example, the probability of having exactly 100 cars is 0.056; this perfectly balanced situation will only happen 1 in 18 times.")
Your understanding of "random" probably isn't that far off. After all, "the average number of cars on each road will be 100". When you look at random numbers they do the expected thing in aggregate. Individual samplings will vary, though. You could get all 200 cars on one road randomly. It's just exceedingly unlikely.
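That "1 in 18" figure checks out; assuming the article's setup of 200 cars each independently picking one of two roads with equal probability, it's a plain binomial calculation:

```python
from math import comb

# P(exactly 100 of 200 cars pick road A) for a fair 50/50 choice
p = comb(200, 100) / 2 ** 200
print(round(p, 4))   # 0.0563
print(round(1 / p))  # 18 -- the "1 in 18 times" from the article
```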
7 bits, 128 symbols.
6 bits, -- oh. ... Time for coffee.
$ dd if=/dev/urandom bs=512 count=1 2>/dev/null | base64 -w 0 | ent
Entropy = 5.933850 bits per byte.
Optimum compression would reduce the size
of this 684 byte file by 25 percent.
Chi square distribution for 684 samples is 2310.15, and randomly
would exceed this value 0.01 percent of the times.
Arithmetic mean value of data bytes is 85.0731 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is -0.018269 (totally uncorrelated = 0.0).
EDIT: Or, perhaps even better, the .invalid TLD which is also guaranteed to never exist.
Considering MS's typical QA with updates, this shouldn't be terribly surprising. I'm going to guess that this was a test patch that got loose. Big companies aren't immune to faults; if anything, they are fault-prone.
That was my point, yeah.
I'm not saying the OP's link is a result of these vulns being exploited, but them being exploited is always a possibility in the future if it hasn't already happened or been fixed.
/this forum is a god send
Disclosure: MSFT employee, no knowledge of this beyond what I've read in this thread, though.
Also, big whoops.
This position seems to be based on the assumption that malware is less likely to be subject to mistakes than MSFT.
Want to check with the PR dept? ;)
Random text but real TLDs also just smells like test data to me. Someone smarter than me could probably do the statistical analysis to determine if this is actually random or pseudo-random garbage typed by a person. Given the fairly long runs of all-caps and all-lowercase in the description, I'd guess a person typed this out (presumably as part of creating a test or test suite).
> A test update getting accidentally pushed out is more likely than a compromise of the Windows Update system.
NEVER turn on auto-updates on Windows. Read all the KBs, then choose what to install, ALWAYS. If you have a corporate network, use WSUS, hold all updates, and check them. If the KB is content-free like the new ones, don't install it. I avoided the whole CEIP bag of shit and the Windows 10 upgrade notification hell thanks to that.
I'm sure this won't increase my load as the family technical support person at all.
Simple reason: after your computer updates, it is not in a stable state until a reboot. It may not ask you to reboot after an update, but some software will (eventually, not every time) behave very oddly until you do.
I've seen this happen many, many times on my own machine and on many company machines I've managed.
It's best to install updates when you want to install them.
For company servers, I absolutely agree.
For corporate desktops, the administrators of WSUS (assuming an environment large enough to warrant running it) should approve them for installation after having had a chance to review them. Even so, the desktops should (IMO) be set to automatically install them and reboot once they are available.
For home PCs, just set them to automatically install and reboot and forget about it (n.b.: general rule; obviously there are/will be exceptions).
Personally, my own Windows machines (a grand total of two, running Windows 7 Professional, that are very rarely used), are configured to automatically download and install updates at 11:00 p.m. on Mondays. When an update is released that breaks things, this gives me about six days to hear about it and turn off Windows Updates until they get it fixed (assuming a typical Patch Tuesday release). A long time ago, I reviewed every update before installing them but not anymore. When one of those "drop everything and patch now!" updates comes out, I hear about them elsewhere and install them manually.
I mean, it isn't the end of the world, but your helpdesk will be getting a few calls because your users are refusing reboots!
Not during the middle of the night, either - typically these get pushed out around 10-11 AM.
And when was this, over a decade ago? Also, what evidence did you have it was the auto-update system that caused the outage? Past performance is not a predictor of future performance.
Seriously folks, turn on auto-updates.
I'll add my voice against this, if you have enough technical knowledge to check more carefully. I too have seen numerous occasions where something installed via Windows Update has taken out a machine and required significant action to restore it to normal functionality. My personal policy has long been security updates only, and even then I tend to do a quick web search before letting them install, which has saved me from the odd howler in the past.
On the other hand, the number of times I have seen a PC rendered inoperable or compromised because it didn't install a Windows update within 24 hours of the update being available is zero. Even if the PC is just a simple home machine, there's probably still at least some sort of firewall/router between it and the public internet, and just about any device like that is going to block unsolicited incoming traffic by default these days. To get compromised within that time frame you'd likely have to actively open something or visit somewhere that included an exploit for a new vulnerability, and while that is always a risk even on a fully patched system, it's not a big one for most people.
It's far more important to keep your browser and plug-ins updated to guard against those threats. Personally I also block almost all ads and other third party content, primarily on security and privacy grounds, which also significantly reduces the risk of running into malware while browsing.
If IE or Edge is your browser of choice then of course updates for those are going to be a priority for the same reasons. But even then, if someone has managed to compromise sites like Google's or Microsoft's so you can't even do a ten second web search before installing a patch without getting hit by an exploit that patch would have blocked, we're all in pretty big trouble anyway.
What about all the browser sandbox escapes that rely on kernel vulns?
Fool me once, there won't be a second time, and that means you get to pull something like "UPGRADE TO WINDOWS 10 NOW!!!11!!" on me exactly once. Auto-updates are now turned off on my Windows 7 box, and they will remain that way.
Basically in the medium to long term if you regard your OS creator as a potential threat you have very little option but to change OS...
This is why fucking with Windows Update should have been the very last thing anyone at Microsoft would ever have wanted to do... or the very last thing they ever did do just before Security escorted them out to the parking lot.
No it isn't, but when they demonstrably are hostile to a degree, as with Microsoft's recent behaviour, that treatment is justified all the same.
It's important to separate updates that fix defects in the original product (security patches, bug fixes) from other updates that simply change the behaviour. The reason it's important is that from a legal point of view, there are often implied expectations of fitness for purpose and adequate quality when you buy something.
Software companies have for some time enjoyed a cosy position. For one thing, those kinds of rules have often not been enforced rigorously, partly because as long as the software companies were putting out bug fixes before large scale damage was done it has been pragmatic to let them carry on. Also, the law has often lagged the technology, with various loopholes meaning the same consumer protections that apply to physical products haven't always applied to digital ones and extra rights in digital products have been very rare.
However, the laws in a lot of places have been starting to catch up, just as modern trends in software have been pushing towards effectively forced updates. It would be a brave software company that rocked the boat by limiting access to security patches or other essential bug fixes in their push to get everyone upgrading all the time, though. The consequences if they push too far and the consumer protection authorities and/or business lawyers start to challenge them seriously could be extremely expensive.
Basically in the medium to long term if you regard your OS creator as a potential threat you have very little option but to change OS.
Unfortunate, but true. For now, I am still "changing" to Windows 7 for new machines on the Microsoft side. Personally, I'm betting that the inevitable backlash against ever-changing, never-owned, user-hostile, sub-standard digital products is going to pick up enough momentum over the next few years that either Microsoft or whoever actually kills their business will offer a better alternative before 2020 when Win7 support is scheduled to end.
I thought that software licenses and EULAs were designed to remove liability?
> Unfortunate, but true. For now, I am still "changing" to Windows 7 for new machines on the Microsoft side. Personally, I'm betting that the inevitable backlash against ever-changing, never-owned, user-hostile, sub-standard digital products is going to pick up enough momentum over the next few years that either Microsoft or whoever actually kills their business will offer a better alternative before 2020 when Win7 support is scheduled to end.
I could see the year of the Linux desktop coming eventually. But not as originally envisioned. I would not be surprised by a world where only specialists (developers, graphic designers etc) have desktops and the actual majority of computers in use are locked down iOS or Android kiosk type devices.
No doubt they try, but the fact is, those kinds of documents can't override the law. In some places, the law imposes minimum standards on what is acceptable in a consumer (or even business) transaction, and software companies have tried to play the "But the EULA says..." card, and if it's actually tested in court they have sometimes lost. They often rely on people not being aware of their rights and/or not having the time or money or willpower to contest the issue.
Even that barrier may not help the software companies in the long run. Coincidentally, just today the UK introduced a sort of lightweight version of US class action lawsuits as part of a major revision of consumer protection law, as well as various other explicit consumer rights relating to digital rather than physical content.
I would not be surprised by a world where only specialists (developers, graphic designers etc) have desktops and the actual majority of computers in use are locked down iOS or Android kiosk type devices.
I'm afraid that is one all too realistic possibility. But there are reasons for hope as well.
For one thing, tablets and the like are convenient for small-scale content consumption and minor interactions, but they're awful for serious content creation or more complicated interactions. I don't think general purpose computers are going anywhere any time soon.
Perhaps more significantly, there is now a push in quite a few places to promote computer literacy and basic programming skills even at school age, and to spread the word that you can still tinker and make cool stuff, perhaps using devices like the Raspberry Pi and Arduino. We also have Linux and the FOSS community following a similar philosophy on the software side, of course, and actually one of the nicer results of so many kids having smartphones these days is that writing simple apps to run on them is now an attractive introduction to programming for kids who enjoy playing with technology. Ultimately, there is a strong human instinct to create and many people enjoy making stuff that is fun and interesting, and fortunately no amount of marketing is ever likely to change that.
Dumbed-down, locked-in devices may be the majority in the future, but I think there will always be room for powerful, flexible tools and there will always be room for innovation and creativity. It's a big world.
On the one hand I hope you are right -- when I pay for software I have certain expectations which are often not met. On the other hand I hope that this doesn't apply to free (as in freedom and beer) projects. If the disclaimer of liability were to become invalid in e.g. the GPL a lot of good people could be put to a lot of trouble.
Hogwash. If this were true, there wouldn't be the concept of reputation.
You could also just wait a week for anything noncritical to allow others to flush out any issues, which is a more time-efficient strategy than manually reviewing gobs of KB articles.
For most people, disabling auto-update is a horrible strategy. If you have a central team actively managing updates with WSUS, you can get away with this. For the vast majority of people, turning off auto-update just means they stop installing updates at all, which is the reason auto-update is the default.
There have been a few comments in the wild saying Windows 10 can install without your permission. It may even be true; a bug.
So yeah, I 'seriously' disagree with you.
Scroll back through the years.
I would hazard a guess that if you were a sysadmin, this would not be the case.
how about confessions?
In nearly 100% of the scenarios where I've ever had issues with anything, it's because an update broke something - sometimes irreversibly. Auto-updates are a larger threat factor for me than malware or niche security threats that only attack certain features I don't utilize (and thus I'm not a potential target for that attack vector).
>Past performance is not a predictor of future performance.
In some contexts I agree with you. With programming - I disagree entirely.
Bad programming habits are a great predictor of continued bad programming habits. When the same threat vector pops up again and again in a program it's because the programmer isn't learning from past mistakes. Video game bugs are proof of this.
The first thing many glitchers do on a game I play is test variations of old, patched bugs on new updates to smuggle items out of areas that you shouldn't be able to smuggle items out of. It almost always works. Because the general, underlying problem has not been fixed. They just throw band-aid patches on it after the fact and forget to apply the band-aid patch to future updates, allowing the bug to resurface. The same variations of the same bug have been resurfacing for over a decade now.
Bugs resurface all the time in software, because programming is really tricky to get perfect and humans repeatedly make the same mistakes time and time again.
A completely legitimate upgrade, provided by the company itself, that renders the program unusable or destroys my workflow has happened far more often than my system being compromised has ever negatively affected me. I could count on a stub the number of times I've known my system to be compromised. I'd have to count on my hands, in binary, the number of times a legitimate update was botched.
I still update my programs. I just don't let them do it automatically. Leaving an extra few attack vectors up for a few days/a week to let the patch mature or for an emergency-fix patch (i.e. 30-->30.0.2 "Super major security exploit was live for 3 hours but we fixed it") to be released has always worked to my benefit. I've never had a negative outcome for waiting a few days to patch. I don't have to deal with botched releases or newly opened attack vectors. Instead I get to listen to the canaries in the mine.
Also what happens when an auto-updater gets compromised? I get to listen to the canaries. You get to be one of the canaries. So for that, I thank you.
It isn't the content of the update you should be wary of (make that decision for yourself if you care that much); it's the act of updating machines that will cause problems.
When a Windows machine updates (yes, even as of today - I had this issue just last week), it is in an indeterminate state until a reboot, even if the update doesn't require one.
I can definitely say that it is better to wait to update when you can reboot than to update immediately. Of course, if there is a really bad vulnerability, update immediately. Let the user know it's an exception.
After the Windows 10 debacle, I'm looking to get off of Windows as soon as I can afford to. Whoever decided to turn Windows Update into an advertising platform needs to be fired -- it's that simple.
Not that I ever plan to run it, but my understanding is that Windows 10 itself, not just Update, is an advertising platform.
Nobody in Vegas is taking odds that someone will be fired. But I'm in full agreement with you. Someone should be fired.
Unless, of course...
But that would be a wee bit obvious.
Unless the servers are compromised and used as C&C?
If MSFT is anything like where I work, that "payload" is a picture of a cat.
If a test update really did make it through, it would warrant significant questioning of the procedures at Microsoft. If a test could get through without being discovered, then so might malicious code.
The fact that a test patch got to this stage doesn't mean the safeguards aren't in place or that malicious code could have slipped through, though. Assuming even basic competence, this test update could not have been signed, and if someone had managed to push malicious code, the same would be true, so it wouldn't have been installed onto target machines.
The only way to guarantee that is to not allow updates to be published at all.
> If a test could get through without being discovered, then so might malicious code.
You are conflating very different things. MSFT being able to publish updates is normal and does not require a security breach, even if one particular update shouldn't have been published. An external entity being able to publish an update containing malicious code would be a huge security breach, requiring both the ability to sign the update and to publish it.
That said, the fact that something like this could happen should raise lots of questions about the amount of oversight on updates hitting Windows, and about the general security of such systems. I'll wait for an official response or a reverse engineer before I decide what's going on here.
It's already done. About 5 hours after the post was first opened on the forum. There's also an article on ZDNet.
Interesting that the update in question is also 4.3MB?
Now the spy updates are not hidden, and marked as "Important." They're bound and determined to force this crap down our throats. Bastards.
"Because f*ck you, that's why." The rallying cry of the corporate world.
I uninstalled each of those KBs manually from the "Installed Updates" screen, then changed the update policy. I used to use "download and install manually" but now I'd prefer only being notified, and THEN deciding whether or not I want to download whatever is offered. I then re-ran the check for updates, and hid the offending KBs.
That was earlier this month. After reading this article, I decided to have a look and see if there was anything fishy in my update history (beyond the listed KBs that I don't want). Nothing there, at least, but my hidden updates were un-hidden (along with Silverlight and Skype, two more "do not want" things that I always hide).
Shouldn't Microsoft be signing updates so that redirection attacks don't work?
Elaborating on my question: I mean something much more like Linux distributions, which sign both the packages (updates) and the index of those files. Some distributions use multiple hashes/digests to make collision attacks far less likely to succeed.
Such an attack could redirect traffic at layer 3 via a router compromise, or exploit some name-resolution weakness (possibly even redirecting to localhost, as a way for malware to upgrade from being able to edit the hosts file to having system-level services).
Signing both the update files and the list of updates could offer protection: an attack would then need to be valid against all of the signature checks, not just a single one.
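A hypothetical sketch of that "sign the files AND the index" idea (every name here is made up, and HMAC stands in for a real public-key signature): swapping one package now requires defeating both the per-file digest and the signed index that lists those digests.

```python
import hashlib
import hmac

# Stand-in for the vendor's private signing key (hypothetical)
SIGNING_KEY = b"stand-in for the vendor's signing key"

packages = {
    "KB0000001.cab": b"payload bytes ...",
    "KB0000002.cab": b"more payload bytes ...",
}

# The index maps each file name to its digest; the index itself is then signed.
index = {name: hashlib.sha256(data).hexdigest() for name, data in packages.items()}
index_blob = "\n".join(f"{n} {d}" for n, d in sorted(index.items())).encode()
index_sig = hmac.new(SIGNING_KEY, index_blob, hashlib.sha256).hexdigest()

def verify_package(name: str, data: bytes) -> bool:
    # 1. The index signature must check out (an attacker can't rewrite the list).
    good_index = hmac.compare_digest(
        hmac.new(SIGNING_KEY, index_blob, hashlib.sha256).hexdigest(), index_sig)
    # 2. The file's digest must match the entry in the signed index.
    good_file = hmac.compare_digest(
        hashlib.sha256(data).hexdigest(), index.get(name, ""))
    return good_index and good_file

print(verify_package("KB0000001.cab", packages["KB0000001.cab"]))  # True
print(verify_package("KB0000001.cab", b"tampered payload"))        # False
```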
Based on the info in the post, I'd guess that this is a test update of some sort and that it was pushed by mistake.
Disclosure: MSFT employee, but no knowledge of what this is about.
Microsoft sign updates and utilise HTTPS.
Given how few users are impacted by this suspect update, it may be the result of malware on their local machines. If malware has root, then all bets are off; the signing requirement can be removed.
The signature is distributed alongside the binaries.
I'm not certain whether the Windows Update system uses the same Authenticode system used for application binaries, but you can start reading here:
It verifies that the signer had access to the private key, and that the data signed by the private key is the same data that you are verifying with the public key.
It's like the other checksums (SHA-1/MD5/etc) with the addition of identity verification (so long as you can trust that the private-public keypair used to sign it is only accessible to parties you trust).
(I am not experienced in cryptography. This explanation might be a little simplistic.)
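A toy illustration of that idea, using textbook RSA with tiny primes (this is NOT real cryptography: no padding, no secure key sizes; Authenticode uses vetted implementations of all of this). Signing needs the private exponent, while anyone holding the public key can verify:

```python
import hashlib

p, q = 61, 53        # toy primes
n = p * q            # 3233, the public modulus
e = 17               # public exponent
d = 2753             # private exponent: (e * d) % lcm(p - 1, q - 1) == 1

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)   # only the private-key holder can compute this

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h   # anyone with (n, e) can check

msg = b"update payload"
sig = sign(msg)
print(verify(msg, sig))          # True
print(verify(b"tampered", sig))  # almost certainly False (tiny n could collide)
```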
That parties outside of Microsoft have access to their update signing key actually seems pretty likely, given the Snowden revelations. Consider the Stuxnet distribution strategy -- what a boon it would be to deploy that sort of machine-specific payload via the built-in update mechanism.