Windows 7 Update appears to be compromised? (microsoft.com)
375 points by cyann on Sept 30, 2015 | 161 comments



This links to a Microsoft support thread in which several users are reporting a suspicious update distributed through Windows Update. In lieu of a title and description, the update has 108-character and 24-character base52-encoded random numbers. In lieu of "more information" and "help and support" links, it has similarly random base52-encoded domains, which currently do not resolve, in .gov, .edu and .mil. Searching for the patch title turns up a bunch of people asking about the same suspicious patch on other sites, all within the past day. The update is attracting attention because it fails to install.

http://security.stackexchange.com/questions/101520/weird-win... https://www.reddit.com/r/techsupport/comments/3mykv1/weird_w...

This does strongly suggest a compromise of the Windows Update servers or of some bit of infrastructure that connects people to them, but also suggests that whoever the attackers are, they made a mistake - a successful compromise executed correctly would not leave so much evidence around. It's quite possible that they've been compromised for a while, and this is a buggy update to the existing malware.


"Base52-encoded random numbers" is a rather obtuse way to describe random letters.


Base52 means capital letters and lower case letters. Base62 includes numbers (0 through 9).

We programmers like being specific. Sometimes these sorts of details matter.


And saying "base52" is misleading. It implies that there is a source data that's been encoded, which is not likely here. You can be specific without implying that.

A random string out of a specific character set is also subtly different from taking random bits and encoding them with that same character set, in that the first couple digits will have different distributions.
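
For the curious, a rough Python sketch of that difference (the alphabet ordering and bit widths here are arbitrary choices for illustration, not anything known about the update strings):

    import random
    import string
    from collections import Counter

    # One possible 52-symbol alphabet; nothing says the update strings used this ordering.
    ALPHABET = string.ascii_uppercase + string.ascii_lowercase

    def random_string(length):
        # Each character chosen independently and uniformly: the first character is uniform too.
        return "".join(random.choice(ALPHABET) for _ in range(length))

    def encode_base52(n, width):
        # Encode an integer as a fixed-width base-52 string, most significant digit first.
        digits = []
        for _ in range(width):
            n, r = divmod(n, 52)
            digits.append(ALPHABET[r])
        return "".join(reversed(digits))

    # 52**6 > 2**32, so when random 32-bit values are *encoded*, the leading digit can
    # only take about a dozen values; a randomly *chosen* string uses all 52 symbols.
    encoded = Counter(encode_base52(random.getrandbits(32), 6)[0] for _ in range(100000))
    chosen = Counter(random_string(6)[0] for _ in range(100000))
    print(len(encoded), "distinct leading symbols when encoding random bits")   # ~12
    print(len(chosen), "distinct leading symbols when picking random letters")  # 52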


> saying "base52" is misleading. It implies that there is a source data that's been encoded

No, it doesn't. 0xFF is a number I just made up, no source data at all, I promise. Also, it's base 16 :)

Anyhow, the source data was most definitely base 2 (as is your computer's memory, I assume) and later encoded into base52 to be represented as a string (unless someone at Microsoft wrote it in base52, which seems unlikely).


> 0xFF is a number I just made up, no source data at all, I promise. Also, it's base 16.

It's not base 16 encoded, which was his point. Encoding demands a source. This is just a base-16 number unless you encoded something to arrive at this. You could interpret "Romeo and Juliet" as a very large base65 number (65 unique chars in the random copy I grabbed) if you want, but it's not meaningful or accurate to call it a base65 encoding.

> Even if that were true, the source data was most definitely encoded from base 2 (which is what our computers work with).

This is the kind of pedantry that people hate because it adds nothing to the conversation. It's a way to inject "I'm right" moments into the conversation so you can feel smart, while no one else really cares. It makes for unpleasant conversations.

<pedantry>

You're also not right. Your brain doesn't work in base-2, and you likely didn't enter this number into your computer in base2. You typed in the string "0xFF", and that string was encoded in base 2. The base2 that represents the string "0xFF" is very different from the base2 number that represents the logical (base16) number 0xFF.
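
A tiny Python illustration of that distinction (my own example, nothing to do with the update itself):

    # The four ASCII bytes of the string "0xFF" vs. the single byte 0xFF.
    s = "0xFF"
    print(s.encode("ascii"))                                # b'0xFF'
    print(" ".join(f"{b:08b}" for b in s.encode("ascii")))  # 00110000 01111000 01000110 01000110
    print(f"{0xFF:08b}")                                    # 11111111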

</pedantry>


> It's not base 16 encoded, which was his point.

I didn't understand it that way.

Also, your whole <pedantry> block and the paragraph above is based on misunderstanding my comment (probably because I'm not a native speaker and you caught me in between edits).

I think you're the only one making an "unpleasant conversation".


> No, it wasn't (or I didn't understand it that way). He'd have said encoded somewhere.

The original comment did say "encoded". The discussion about the phrase "base52 encoding" was the base of this entire thread. The parent of your original response also used the term "encoded". The context is clear. I don't see how you could have missed it. (Edit: I see you're not a native speaker. That might be part of why we're not understanding each other. Plus I apparently keep replying in between your edits, which happened again.)

> Also, your whole <pedantry> block and the paragraph above is based on misunderstanding my comment

Well, you rewrote the comment after I replied. I assumed your "source data" was your logical 0xFF number. If you were referring to the "source data" for the strings in the update description, then in all likelihood, there was never a "source" number at all. These strings were almost certainly generated via random selection from a set of characters. You could generate a very long number and then base52-encode it to produce the same thing, but it would be more work and less obvious for future code maintainers. So the "source" was a sequence of characters (azAZ), not a base2 number. You could argue that this is still somehow base-2 since it's in a computer, which I guess is fine (if pointless and pedantic), but it's still not accurate to say that these were "encoded" into base52.

I was unnecessarily snarky and rude. I'm sorry for that.


You're right, but the "base52" part isn't what you're right about, it's the 'encoded' part. "base52" is accurate, it is a series of base52 characters, however it doesn't appear to be the product of an encoding.


Base52 is still not really accurate. Base52 implies an encoding. Further, Base52 is not a standard so it's not even meaningful to say that the characters are from the Base52 set. You could also represent Base52 by including 0-9 and excluding Q-Z. Any string that "looks like" Base52 (azAZ) also looks like Base64 and any number of other encodings.


>No, it doesn't. 0xFF is a number I just made up, no source data at all, I promise. Also, it's base 16 :)

But the full quote was "base52-encoded". (Though I would argue that base52 implies encoding, because nothing naturally works in base52. The only thing that's naturally 52 is "random letters with random case". Or something with cards.)

>Anyhow, the source data was most definitely base 2 (as is your computer's memory, I assume) and later encoded into base52 to be represented as a string (unless someone at Microsoft wrote it in base52, which seems unlikely).

That is an enormous assumption. It's easier to pick random letters than it is to take a specific binary number and convert it to letters. And they don't give you the same result. Bits stored in base 52 will never start with zzzzz.


This detail doesn't matter and is needlessly confusing. "Random upper and lowercase letters" is exactly as specific and accurate as "base52-encoded random numbers", but the former is more understandable while the latter is trying way too hard to sound smart.


look, this is hacker news. the author knows his audience. most people who read this will know what base52 is or will at least recognize what it might be and be able to look it up. i don't think it's meant to sound "smart", it's an accurate and concise description of the randomness


It wasn't smart since I literally thought he was saying there was encoded data.


These details can often shed light on what happened (or was supposed to happen), what tools were being used, etc. While they might be useless for the average reader, there are many on HN, myself included, who will be dissecting the update to learn more about how Windows Update works.


That's fine, but it's not useful to use obtuse terminology. Not once have I heard anyone use the term base-52 before today. I understood the term, but it's not common (because base-52 encoding is not common). It's so not common that it doesn't merit a page on Wikipedia, nor a reference from the page for the number 52, nor even a reference from the page for base-64. It's obtuse.

It's also so specific as to be inaccurate. These strings could be interpreted as "base-52", but also as base-64 or any other base greater than 52. Calling them "encoded" also implies a belief about how these were derived that isn't justified. "Encoded" means that there is some original source that can be recreated by decoding. It's possible that these were actually created by generating random numbers and then encoding that data in base-52. I think that's pretty unlikely, though.

So no, I don't think this sheds any light or additional detail. It's inaccurate and misleading and if we're so specific we're making up terminology, then we should also be specific enough to say things like "assumed pseudorandom" rather than "random" when we don't know. Otherwise we're just being obtuse.


I have to agree --- apart from anything else, 'base-52' doesn't make it clear that the encoding alphabet is, in fact, made up of letters. It'd be just as valid to use 0-9, A-Z and a-p.

'base-64' is at least a defined term with a defined alphabet.


> We programmers like being specific.

But above all, we like being pedantic. (Not you.) :)


[a-Z] and [a-Z0-9] would be a better representation, no?


[flagged]


> Don't come to HN if you're expecting everything to be in layman's terms

That's not nice. Please don't do that.

I doubt that dpark is "expecting everything to be in layman's terms", but even if dpark were, many HN users would be happy to explain. And that's the kind of site we want.


Then why describe being pedantic (extremely specific) as being "obtuse" (annoyingly insensitive)?


Because the specificity here is inaccurate and misleading. I've left plenty of comments about what's wrong with the term "base52-encoded random numbers" here and so have a number of other commenters. I didn't ask for anyone to use lay terminology, but it's reasonable to ask for clear and correct terminology.

One of the frustrating things about reading academic papers is that many authors insist on using less common and less clear phrasing when there are much simpler ways of saying the same things. It makes for unpleasant, heavy reading and it makes the author sound pompous.

"Obtuse" also means dumb, and is often used to mean "difficult to understand". "Abstruse" might have been a better choice of word.


It's not random letters. It's random numbers encoded in base52. Because the number strings are encoded, they're probably not random at all.


You have no way of knowing that. All you see are some random-looking strings. These could be numbers (random or not) that were encoded as base52. These could also be numbers encoded as base64, or base88, or any other base52+. Or they could be randomly-generated strings and there is no meaningful number underlying them.


    hobbes@namagiri:~/scratch$ echo gYxseNjwafVPfgsoHnzLblmmAxZUiOnGcchqEAEwjyxwjUIfpXfJQcdLapTmFaqHGCFsdvpLarmPJLOZYMEILGNIPwNOgEazuBVJcyVjBRL|ent
    Entropy = 5.352821 bits per byte.
    
    Optimum compression would reduce the size
    of this 108 byte file by 33 percent.
    
    Chi square distribution for 108 samples is 650.52, and randomly
    would exceed this value less than 0.01 percent of the times.
    
    Arithmetic mean value of data bytes is 92.6019 (127.5 = random).
    Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
    Serial correlation coefficient is 0.142915 (totally uncorrelated = 0.0).
    
...Which is meaningless without context. Let's see if we can create some context...

    hobbes@namagiri:~/scratch$ echo "You have no way of knowing that. All you see are some random-looking strings. These could be numbers (random or not) that were encoded as base52. These could also be numbers encoded as base64, or base88, or any other base52+. Or they could be randomly-generated strings and there is no meaningful number underlying them." |base64 |sed -e :a -e '$!N; s/\n//; ta'
    WW91IGhhdmUgbm8gd2F5IG9mIGtub3dpbmcgdGhhdC4gQWxsIHlvdSBzZWUgYXJlIHNvbWUgcmFuZG9tLWxvb2tpbmcgc3RyaW5ncy4gVGhlc2UgY291bGQgYmUgbnVtYmVycyAocmFuZG9tIG9yIG5vdCkgdGhhdCB3ZXJlIGVuY29kZWQgYXMgYmFzZTUyLiBUaGVzZSBjb3VsZCBhbHNvIGJlIG51bWJlcnMgZW5jb2RlZCBhcyBiYXNlNjQsIG9yIGJhc2U4OCwgb3IgYW55IG90aGVyIGJhc2U1MisuIE9yIHRoZXkgY291bGQgYmUgcmFuZG9tbHktZ2VuZXJhdGVkIHN0cmluZ3MgYW5kIHRoZXJlIGlzIG5vIG1lYW5pbmdmdWwgbnVtYmVyIHVuZGVybHlpbmcgdGhlbS4K
ok, that's base64. Let's turn that into base52...

    CL-USER> (to-base 52 (from-base 64 "WW91IGhhdmUgbm8gd2F5IG9mIGtub3dpbmcgdGhhdC4gQWxsIHlvdSBzZWUgYXJlIHNvbWUgcmFuZG9tLWxvb2tpbmcgc3RyaW5ncy4gVGhlc2UgY291bGQgYmUgbnVtYmVycyAocmFuZG9tIG9yIG5vdCkgdGhhdCB3ZXJlIGVuY29kZWQgYXMgYmFzZTUyLiBUaGVzZSBjb3VsZCBhbHNvIGJlIG51bWJlcnMgZW5jb2RlZCBhcyBiYXNlNjQsIG9yIGJhc2U4OCwgb3IgYW55IG90aGVyIGJhc2U1MisuIE9yIHRoZXkgY291bGQgYmUgcmFuZG9tbHktZ2VuZXJhdGVkIHN0cmluZ3MgYW5kIHRoZXJlIGlzIG5vIG1lYW5pbmdmdWwgbnVtYmVyIHVuZGVybHlpbmcgdGhlbS4K"))
    "6lH81qbtcGjfFOclhdtt7ieuljADG17Ou6swolu2A2qlnO52zmMkG8Nfk2APbEbso4idFF44wsLIGfrOg5atP0ucg5gubxAcD9ztMcboeC4sAui28skbwtEiuv64OuD8fbFnn1M01oeq3bH7n9bvu8p3P1MwirdDHxKDONktDvtNOLE1srOz2I4wNLsBpgOGIlLs1i11xt58JpOC1whJ54Krmln1ahmrvksODe7kqjtBazKzKamu5tygI6hGHq3h123Ighyw8s2MxE4dl9rBdBNG1o7tJM2HvzOh955NpLB33nuPr4OwfhjB618y1BP9y2euMquIMszOuH1rPAEkBccOu9qIJBqKgGf1qH42bBd7GOMsbKExosd8CErJlMIAcEyyytFzoKOfkJy1ExEnKc0iE7OGkpLJM3ybB2JNlnwtrC5F0cH8kB41IIv0BwNk9DO"
Astute readers will note that this output contains numeric digits. There's more than one representation of "base-52". The one above goes from 0 through p. An alternative would go from A-z. But none of that matters to measure the entropy of base-52 encoded English text, ya dig?

    hobbes@namagiri:~/scratch$ echo "6lH81qbtcGjfFOclhdtt7ieuljADG17Ou6swolu2A2qlnO52zmMkG8Nfk2APbEbso4idFF44wsLIGfrOg5atP0ucg5gubxAcD9ztMcboeC4sAui28skbwtEiuv64OuD8fbFnn1M01oeq3bH7n9bvu8p3P1MwirdDHxKDONktDvtNOLE1srOz2I4wNLsBpgOGIlLs1i11xt58JpOC1whJ54Krmln1ahmrvksODe7kqjtBazKzKamu5tygI6hGHq3h123Ighyw8s2MxE4dl9rBdBNG1o7tJM2HvzOh955NpLB33nuPr4OwfhjB618y1BP9y2euMquIMszOuH1rPAEkBccOu9qIJBqKgGf1qH42bBd7GOMsbKExosd8CErJlMIAcEyyytFzoKOfkJy1ExEnKc0iE7OGkpLJM3ybB2JNlnwtrC5F0cH8kB41IIv0BwNk9DO" | ent
    Entropy = 5.610518 bits per byte.
    
    Optimum compression would reduce the size
    of this 452 byte file by 29 percent.
    
    Chi square distribution for 452 samples is 2093.27, and randomly
    would exceed this value less than 0.01 percent of the times.
    
    Arithmetic mean value of data bytes is 86.3496 (127.5 = random).
    Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
    Serial correlation coefficient is 0.018253 (totally uncorrelated = 0.0).
Conclusion: I don't think the original text was pseudorandom. If you can convince me that the original isn't "base-52 encoded" at all (and I'd be easily convinced), I'm willing to reevaluate that conclusion. I'm also interested to see if anyone sees a flaw in my process.


Well, we know that it was a test update that was released unintentionally. http://www.zdnet.com/article/microsoft-accidentally-issued-a...

So the question is why a test update would have "hidden" meaning underneath the random-looking strings.

As for the flaw in your methodology, your ent command (not familiar with this, so just basing this off what I see) is assuming full use of the binary space (hence assuming 127.5 as the mean of random data). No base52 data will use the full 256-value space, by definition. Base52 (azAZ) will actually have a mean of 93.5 for random data, extremely close to the measured mean. The serial correlation coefficient should also be higher for azAZ than 09azAP, because more of the alphabet is contiguous.
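
A quick sanity check on that 93.5 figure, assuming the azAZ alphabet and ent's convention of treating each character as one byte:

    import string

    # Mean byte value of a uniformly random character drawn from a-z and A-Z,
    # versus ent's 127.5 baseline for uniformly random bytes.
    alphabet = (string.ascii_lowercase + string.ascii_uppercase).encode("ascii")
    print(sum(alphabet) / len(alphabet))  # 93.5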

I did a little bit of analysis on the data as well to determine if the data was random or gibberish typed by a human on a keyboard, and found that most of the data lined up well for true (or pseudo) random. (Ugly code here: http://pastebin.com/9YN93xhi)

  Home row % expected: 34.6% 
  Home row % actual: 38.3%
  Expected upper: 50%
  Actual upper: 47.7%
  Expected sequential case match: 50%
  Actual sequential case match: 55.7%


Sure, I see the zdnet article. We could still play this game with any arbitrary string, though!

Regarding the assumption of full use of the binary space... May I phrase that differently? azAZ and 09azAP both leave big swaths of the space as always-zero or perhaps always-one. Another way to phrase it: you could write out an azAZ string using five-bit characters and have room to spare.

The swaths of emptiness in the range of possible values is why I took an English sample and did my best to encode it the same way as the original was, so that we'd have an apples-to-apples comparison. The English sample had higher entropy than the original--after "accounting for" encoding.

You are right, my method of "accounting for" encoding did nothing for the "serial correlation coefficient" metric. I didn't know what that was until you mentioned it, thanks.

Your actual/expected analysis is a good idea, though it took me a minute to understand what you were doing. I guess: "If these are truly random digits from azAZ, then half of them will be uppercase." Indeed. But... I just read an article[0] which convinced me that I have no idea how "random" works. (Specifically this bit: "As an example, the probability of having exactly 100 cars is 0.056 — this perfectly balanced situation will only happen 1 in 18 times.")

Cheers.

0: https://www.quantamagazine.org/20150925-solution-the-road-le...


Your understanding of base52 leaving large amounts of empty space is correct (you need 6 bits to fit it, though). That space throws off the "ent" measure. The 2.3 unused bits in every char dwarfs the entropy difference between random data and encoded text. This would probably be easier to see if you'd added in known pseudorandom strings for more context. (I see kaoD provided a Base64 example.)
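
Back-of-the-envelope, in Python:

    import math

    print(math.log2(52))      # ~5.70 bits of information per base-52 character
    print(8 - math.log2(52))  # ~2.30 bits of every byte that base-52 text can never use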

Your understanding of "random" probably isn't that far off. After all, "the average number of cars on each road will be 100". When you look at random numbers they do the expected thing in aggregate. Individual samplings will vary, though. You could get all 200 cars on one road randomly. It's just exceedingly unlikely.

Cheers. :)


8 bits, 256 symbols.

7 bits, 128 symbols.

6 bits, -- oh. ... Time for coffee.


    $ dd if=/dev/urandom bs=512 count=1 2>/dev/null | base64 -w 0 | ent
    Entropy = 5.933850 bits per byte.
    
    Optimum compression would reduce the size
    of this 684 byte file by 25 percent.

    Chi square distribution for 684 samples is 2310.15, and randomly
    would exceed this value 0.01 percent of the times.
    
    Arithmetic mean value of data bytes is 85.0731 (127.5 = random).
    Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
    Serial correlation coefficient is -0.018269 (totally uncorrelated = 0.0).
I don't have the means to convert base64 to 52 but the result shouldn't be much different.
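
If anyone wants to try it, here's a rough Python sketch (my own, not the ent tool): it starts from raw random bytes, which is equivalent to decoding the base64 first, re-encodes them in one arbitrary base-52 alphabet, and measures Shannon entropy per character:

    import math
    import os
    from collections import Counter

    # An arbitrary 52-symbol alphabet; any 52 distinct characters give the same entropy.
    ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOP"

    def to_base52(n):
        digits = []
        while n:
            n, r = divmod(n, 52)
            digits.append(ALPHABET[r])
        return "".join(reversed(digits)) or ALPHABET[0]

    def entropy_bits_per_char(s):
        counts = Counter(s)
        return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

    raw = os.urandom(512)                        # same idea as the dd from /dev/urandom above
    b52 = to_base52(int.from_bytes(raw, "big"))  # treat the bytes as one big integer, re-encode
    print(len(b52), "base-52 characters")
    print(round(entropy_bits_per_char(b52), 3), "bits per character (log2(52) is about 5.70)")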


Oh, I didn't expect this. Thank you!


That's why you should always test both what proves and what disproves your theory :)


I don't think anyone actually knows that. Which is exactly why it was inaccurate to describe it as "Base52", it made you think someone did know that.


For some reason, all the replies to this are focused on how I described the title and description strings. Which is... not at all the interesting part of this, and not something I gave much thought when I wrote this comment.



I find it interesting that MS would use the following URLs even though it was a test update: https://hckSLpGtvi.PguhWDz.fuVOl.gov https://jNt.JFnFA.Jigf.xnzMQAFnZ.edu


Both are well-known TLDs that can't be acquired without verification by the US government. Presumably the assumption is that these URLs can be guaranteed to never exist.


Then they should be using example.com (or .org, .net, .edu), which are actually reserved by the standard and guaranteed to never exist.

EDIT: Or, perhaps even better, the .invalid TLD which is also guaranteed to never exist.


Wow, MS didn't follow an established standard ... /s.


You don't need verification by the government to get a .EDU domain. You just need accreditation by an agency that is recognized by the USG.


Considering the panic it generated, some sort of official response, apology, etc. would be nice rather than simply giving a terse "We goofed" response to a third party.


These things take time, at least a few hours, on the corporate level. We've seen responses from MS before about similar issues.


I saw that, and was wondering about it. Seems like a complete failure of the testing protocol, or a complete exposure of the partnership Microsoft has with someone invested in having something installed on a Windows 7 machine :-). Normally I'm not nearly that tin-hattish but with the disclosures of what people do these days, one wonders ...



If someone has managed to compromise Windows Update (which I doubt seriously based on what's presented here), why on Earth would they not bother to come up with text more convincing than the garbage on display here?


Yeah I'd say it's more likely someone or something in the update toolchain screwed up.


The other option is a deliberate "ethical" hack.


Wouldn't that also call for putting in text that at least makes it clear it's an ethical hack? The only explanation for that garbage is either lorem ipsum filler text (something never meant to get out) or the theory buffoon had about it being a hash collision.


Because it's not a hack most likely, it's just MS screwing up. As someone who has managed/been tortured by WSUS for years, nothing surprises me about how MS handles its update infrastructure. WU is something of a pig and errors come up now and again, on both the client and server side. In 24 hours this will be forgotten and become yet another example of web hysteria.

Considering MS's typical QA with updates, this shouldn't be terribly surprising. I'm going to guess that this was a test patch that got loose. Big companies aren't fault immune, if anything they are fault prone.


> Because its not a hack most likely, its just MS screwing up.

That was my point, yeah.


That might have been the only text that allowed the update to be signed.


That's an interesting theory, but seems unlikely given that the TLDs are all real. Also that would imply a successful hash collision attack which seems exceedingly unlikely. And if true, why not mutate some random bytes in the payload to get the collision rather than the update text (which also may not actually even be stored as part of the signed update).


Maybe their attack is so specific that they could only use Microsoft-signed files in the update payload, so they send old, vulnerable versions.


That would be an amazing exploit. I doubt it's the case, and I hope it's not, but it would be pretty amazing.


Well, two years ago I tried to report a few fairly critical security vulnerabilities on update.windows.com and they responded saying it wasn't an issue. I'd consider denial of service, buffer overflow, possible remote code execution (didn't test because I didn't want to make MSFT mad), and sensitive configuration information enumeration to be critical vulnerabilities. Especially on update.microsoft.com, which distributes Windows updates. But apparently they don't. So who knows.

I'm not saying the OP's link is a result of these vulns being exploited, but them being exploited is always a possibility in the future if it hasn't already happened or been fixed.


How did you verify a buffer overflow in their remote code?


There is a chance that the machines affected were already compromised by malware which altered the way Windows Update was working.


Several of the forum comments mention fresh installs. So possible, but fairly unlikely.


Fresh installs from what media though? "Pre-activated" Windows ISOs are freely available on any torrent search site with who-knows-what added.


That is mainly an issue for people who don't know how to look for the right ones. But I get my images by matching the MSDN MD5 hashes. (I am not on 10 yet; even though Digital River is gone, the MSDN images are still out there.)

/this forum is a godsend

http://forums.mydigitallife.info/index.php


Wouldn't that malware just download and install payloads itself rather than piggybacking off of Windows update? It would need root access to manipulate Windows Update in this way, but with root access it wouldn't need Windows Update to install packages.


It would allow the malware to get around any software firewalls.


Which it would already have control of with root access.


It would, but it would need to deal with the whole plethora of software firewalls to make sure it doesn't trip them yet doesn't break them in a way noticeable to the user. Piggybacking on Windows Update accomplishes both, because every software firewall has Windows Update whitelisted out of the box.


Agreed, and there are so many possible explanations that jumping to "Windows Update was compromised" is just silly.


Does anyone have a copy of the 4.3MB file that this refers to? If so, please: (1) submit it to VirusTotal, and (2) post it here.


I've been deploying Microsoft based computer networks for 18 years... this would nearly top my nightmare list! I can't imagine what the alert level is at MS offices right now, but I bet they are expending every effort to get to the bottom of this ASAP :/


Is this a "lock the doors, man the battle stations" kind of problem? It seems super, super bad.


I doubt it. It looks to me a lot more like a test update that was pushed to production unintentionally rather than malware. A server compromise pushing malware as an update would presumably try to make the update look legitimate in order to maximize how long it went undetected.

Disclosure: MSFT employee, no knowledge of this beyond what I've read in this thread, though.



They've gone through the process of putting the "test" update onto production servers, so I'm pretty sure this was intentional. Though they might have thought it was OK and wouldn't do any harm.


> a test update that was pushed to production unintentionally rather than malware

This position seems to be based on the assumption that malware is less likely to be subject to mistakes than MSFT.

Want to check with the PR dept? ;)


It's based on the assumption that it's more likely that someone made a harmless but embarrassing mistake than that someone shipped a major security bug in a critical system, which was then exploited by someone simultaneously skilled enough to hijack the Windows Update servers to deliver malware and incompetent enough to completely screw it up.

Random text but real TLDs also just smells like test data to me. Someone smarter than me could probably do the statistical analysis to determine if this is actually random or pseudo-random garbage typed by a person. Given the fairly long runs of all-caps and all-lowercase in the description, I'd guess a person typed this out (presumably as part of creating a test or test suite).


You could rephrase that as:

> A test update getting accidentally pushed out is more likely than a compromise of the Windows Update system.


Appeared on WSUS as well...

NEVER turn on auto updates on windows. Read all the KBs, then choose to install, ALWAYS. If you have a corp network, use WSUS and stop all updates and check them. If the KB is content-free like the new ones, no install. I avoided the whole CEIP bag of shit and Windows 10 upgrade notification hell thanks to that.


I'll be sure to tell my 60-something mother to make sure she reads all KBs before deciding to install the updates that Windows is telling her is super important.

I'm sure this won't increase my load as the family technical support person at all.


Always turn on auto-updates. Missing or delaying an update and getting hit by a known exploit is a lot more likely than an exploit getting through the update system or an update enabling a new exploit.


This is NOT good advice for reasons other than you're thinking.

Simple reason: if your computer updates, it is not in a stable state until a reboot. Your computer may not ask you to reboot after an update, but some software will (eventually, not every time) run very oddly until you reboot.

I've seen this happen many, many times on my own machine and on many company machines I've managed.

It's best to install updates when you want to install them.


> It's best to install updates when you want to install them.

For company servers, I absolutely agree.

For corporate desktops, the administrators of WSUS (assuming an environment large enough to warrant running it) should approve them for installation after having had a chance to review them. Even so, the desktops should (IMO) be set to automatically install them and reboot once they are available.

For home PCs, just set them to automatically install and reboot and forget about it (n.b.: general rule; obviously there are/will be exceptions).

Personally, my own Windows machines (a grand total of two, running Windows 7 Professional, that are very rarely used), are configured to automatically download and install updates at 11:00 p.m. on Mondays. When an update is released that breaks things, this gives me about six days to hear about it and turn off Windows Updates until they get it fixed (assuming a typical Patch Tuesday release). A long time ago, I reviewed every update before installing them but not anymore. When one of those "drop everything and patch now!" updates comes out, I hear about them elsewhere and install them manually.


> be set to automatically install them and reboot once they are available.

I mean, it isn't the end of the world, but your helpdesk will be getting a few calls because your users are refusing reboots!


Haha, "refusing reboots". You know what happens to our work PCs when you click "postpone"? Fuck you, that was your warning. If you disregard it, in five minutes you get another 20 second warning to desparately hammer "Save" before your system force-reboots.

Not during the middle of the night, either - typically these get pushed out around 10-11 AM.


If you push them out during the day when everyone is working, sure. At 2:00 a.m.? Not so much.


No no no no no. I've watched entire networks of machines downed with auto-updates. Always read, always test.


It might make sense to pay a guy to make this his job for hundreds of computers on a corporate network, but there is no way in hell I'm keeping that close of track of updates on my home computer.

And when was this, over a decade ago? Also, what evidence did you have it was the auto-update system that caused the outage? Past performance is not a predictor of future performance.

Seriously folks, turn on auto-updates.


Seriously folks, turn on auto-updates.

I'll add my voice against this, if you have enough technical knowledge to check more carefully. I too have seen numerous occasions where something installed via Windows Update has taken out a machine and required significant action to restore it to normal functionality. My personal policy has long been security updates only, and even then I tend to do a quick web search before letting them install, which has saved me from the odd howler in the past.

On the other hand, the number of times I have seen a PC rendered inoperable or compromised because it didn't install a Windows update within 24 hours of the update being available is zero. Even if the PC is just a simple home machine, there's probably still at least some sort of firewall/router between it and the public internet, and just about any device like that is going to block unsolicited incoming traffic by default these days. To get compromised within that time frame you'd likely have to actively open something or visit somewhere that included an exploit for a new vulnerability, and while that is always a risk even on a fully patched system, it's not a big one for most people.


I guess you trust every website you visit then? And the ad networks used by the sites you visit...


Approximately nothing installed via Windows Update will protect most people from most threats they might find on web sites.

It's far more important to keep your browser and plug-ins updated to guard against those threats. Personally I also block almost all ads and other third party content, primarily on security and privacy grounds, which also significantly reduces the risk of running into malware while browsing.

If IE or Edge is your browser of choice then of course updates for those are going to be a priority for the same reasons. But even then, if someone has managed to compromise sites like Google's or Microsoft's so you can't even do a ten second web search before installing a patch without getting hit by an exploit that patch would have blocked, we're all in pretty big trouble anyway.


> Approximately nothing installed via Windows Update will protect most people from most threats they might find on web sites.

What about all the browser sandbox escapes that rely on kernel vulns?


Those are very rare. When they appear, there's inevitably enough panic and publicity to attract my attention, at which point I can evaluate and install the update myself when/if appropriate.

Fool me once, there won't be a second time, and that means you get to pull something like "UPGRADE TO WINDOWS 10 NOW!!!11!!" on me exactly once. Auto-updates are now turned off on my Windows 7 box, and they will remain that way.


The problem with this is that you are starting to treat the OS creator as hostile. This is not a good situation to be in. Microsoft has the equivalent of root on all Windows machines, so it is difficult to treat them as hostile. They could roll out an upgrade tomorrow that incorporated a critical kernel security update together with automatic updates that can't be turned off, and you would have to accept the patch or remain vulnerable. There are some who would argue that the Windows 10 Home editions are exactly this...

Basically in the medium to long term if you regard your OS creator as a potential threat you have very little option but to change OS...


Correct.

This is why fucking with Windows Update should have been the very last thing anyone at Microsoft would ever have wanted to do... or the very last thing they ever did do just before Security escorted them out to the parking lot.


The problem with this is that you are starting to treat the OS creator as hostile. This is not a good situation to be in.

No it isn't, but when they demonstrably are hostile to a degree, as with Microsoft's recent behaviour, that treatment is justified all the same.

It's important to separate updates that fix defects in the original product (security patches, bug fixes) from other updates that simply change the behaviour. The reason it's important is that from a legal point of view, there are often implied expectations of fitness for purpose and adequate quality when you buy something.

Software companies have for some time enjoyed a cosy position. For one thing, those kinds of rules have often not been enforced rigorously, partly because as long as the software companies were putting out bug fixes before large scale damage was done it has been pragmatic to let them carry on. Also, the law has often lagged the technology, with various loopholes meaning the same consumer protections that apply to physical products haven't always applied to digital ones and extra rights in digital products have been very rare.

However, the laws in a lot of places have been starting to catch up, just as modern trends in software have been pushing towards effectively forced updates. It would be a brave software company that rocked the boat by limiting access to security patches or other essential bug fixes in their push to get everyone upgrading all the time, though. The consequences if they push too far and the consumer protection authorities and/or business lawyers start to challenge them seriously could be extremely expensive.

Basically in the medium to long term if you regard your OS creator as a potential threat you have very little option but to change OS.

Unfortunate, but true. For now, I am still "changing" to Windows 7 for new machines on the Microsoft side. Personally, I'm betting that the inevitable backlash against ever-changing, never-owned, user-hostile, sub-standard digital products is going to pick up enough momentum over the next few years that either Microsoft or whoever actually kills their business will offer a better alternative before 2020 when Win7 support is scheduled to end.


> The reason it's important is that from a legal point of view, there are often implied expectations of fitness for purpose and adequate quality when you buy something.

I thought that software licenses and EULAs were designed to remove liability?

> Unfortunate, but true. For now, I am still "changing" to Windows 7 for new machines on the Microsoft side. Personally, I'm betting that the inevitable backlash against ever-changing, never-owned, user-hostile, sub-standard digital products is going to pick up enough momentum over the next few years that either Microsoft or whoever actually kills their business will offer a better alternative before 2020 when Win7 support is scheduled to end.

I could see the year of the Linux desktop coming eventually. But not as originally envisioned. I would not be surprised by a world where only specialists (developers, graphic designers etc) have desktops and the actual majority of computers in use are locked down iOS or Android kiosk type devices.


I thought that software licenses and EULAs were designed to remove liability?

No doubt they try, but the fact is, those kinds of documents can't override the law. In some places, the law imposes minimum standards on what is acceptable in a consumer (or even business) transaction, and software companies have tried to play the "But the EULA says..." card, and if it's actually tested in court they have sometimes lost. They often rely on people not being aware of their rights and/or not having the time or money or willpower to contest the issue.

Even that barrier may not help the software companies in the long run. Coincidentally, just today the UK introduced a sort of lightweight version of US class action lawsuits as part of a major revision of consumer protection law, as well as various other explicit consumer rights relating to digital rather than physical content.

I would not be surprised by a world where only specialists (developers, graphic designers etc) have desktops and the actual majority of computers in use are locked down iOS or Android kiosk type devices.

I'm afraid that is one all too realistic possibility. But there are reasons for hope as well.

For one thing, tablets and the like are convenient for small-scale content consumption and minor interactions, but they're awful for serious content creation or more complicated interactions. I don't think general purpose computers are going anywhere any time soon.

Perhaps more significantly, there is now a push in quite a few places to promote computer literacy and basic programming skills even at school age, and to spread the word that you can still tinker and make cool stuff, perhaps using devices like the Raspberry Pi and Arduino. We also have Linux and the FOSS community following a similar philosophy on the software side, of course, and actually one of the nicer results of so many kids having smartphones these days is that writing simple apps to run on them is now an attractive introduction to programming for kids who enjoy playing with technology. Ultimately, there is a strong human instinct to create and many people enjoy making stuff that is fun and interesting, and fortunately no amount of marketing is ever likely to change that.

Dumbed-down, locked-in devices may be the majority in the future, but I think there will always be room for powerful, flexible tools and there will always be room for innovation and creativity. It's a big world.


> No doubt they try, but the fact is, those kinds of documents can't override the law. In some places, the law imposes minimum standards on what is acceptable in a consumer (or even business) transaction, and software companies have tried to play the "But the EULA says..." card, and if it's actually tested in court they have sometimes lost. They often rely on people not being aware of their rights and/or not having the time or money or willpower to contest the issue.

On the one hand I hope you are right -- when I pay for software I have certain expectations which are often not met. On the other hand I hope that this doesn't apply to free (as in freedom and beer) projects. If the disclaimer of liability were to become invalid in e.g. the GPL a lot of good people could be put to a lot of trouble.


I have only checked the Swedish law, but it distinguishes between something given away for free and something traded for money or services. The consumer protection laws are designed to identify a customer-merchant situation and then regulate it. FLOSS projects should have nothing to worry about here, and the only issue I have heard of is when projects sell CDs.


> Past performance is not a predictor of future performance.

Hogwash. If this were true, there wouldn't be the concept of reputation.


This was Oct 2014. KB2949927.


So your "read all the KBs and choose" strategy would have prevented this, really? You would have read that KB2949927 adds SHA-2 cryptographic support and said "No, we don't want that one. We'd rather stick with deprecated SHA-1"?


No we go "hmm that might fuck something up; let's try it on a test VM" or at the very least google and see if anyone else has any problems.


Do you actually deploy every update to a VM to test it? Would your testing have caught this issue (which apparently only affected people who'd explicitly disabled the bitlocker service)?

You could also just wait a week for anything noncritical to allow others to flush out any issues, which is a more time-efficient strategy than manually reviewing gobs of KB articles.

For most people, disabling auto-update is a horrible strategy. If you have a central team actively managing updates with WSUS, you can get away with this. For the vast majority of people, turning off auto-update just means they stop installing updates at all, which is the reason auto-update is the default.


auto-updates have goosed more windows systems on me than malware. I'm not even a sysadmin.

there's been a few comments in the wild saying windows 10 can install without your permission. it may even be true, a bug.

so yeah I 'seriously' disagree with you.

https://www.google.de/search?q=crash+tuesday+broken+windows+...

scroll back through the years.


> auto-updates have goosed more windows systems on me than malware. I'm not even a sysadmin.

I would hazard to say that if you were a sysadmin, then this would not be the case.


Eye-witness accounts are the least reliable source of evidence.



I have auto-updates turned off for absolutely everything. I read patch notes before upgrading anything. Especially on my personal computer.

In nearly 100% of the scenarios where I've ever had issues with anything, it's because an update broke something - sometimes irreversibly. Auto-updates are a larger threat factor for me than malware or niche security threats that only attack certain features that I don't utilize (thus I'm not a potential target for that attack vector).

>Past performance is not a predictor of future performance.

In some contexts I agree with you. With programming - I disagree entirely.

Bad programming habits are a great predictor of continued bad programming habits. When the same threat vector pops up again and again in a program it's because the programmer isn't learning from past mistakes. Video game bugs are proof of this.

The first thing many glitchers do on a game I play is test variations of old, patched bugs on new updates to smuggle items out of areas that you shouldn't be able to smuggle items out of. It almost always works. Because the general, underlying problem has not been fixed. They just throw band-aid patches on it after the fact and forget to apply the band-aid patch to future updates, allowing the bug to resurface. The same variations of the same bug have been resurfacing for over a decade now.

Bugs resurface all the time in software, because programming is really tricky to get perfect and humans repeatedly make the same mistakes time and time again.


System exploits wouldn't be doing much good for the exploiter if they left your system unusable.


You're falsely equating "broken updates" and "security exploits" and I'm not sure why. I thought I was clear that I was comparing the two as separate negative occurrences with one happening more frequently than the other. Not that one would cause the other...

An upgrade provided by the company that is completely legitimate that completely renders the program unusable or destroys my workflow has happened far more often than my system being compromised has ever negatively affected me. I could count on a stub the number of times I've known my system to be compromised. I'd have to count on my hands using a binary method to count the number of times a legitimate update was botched.

I still update my programs. I just don't let them do it automatically. Leaving an extra few attack vectors up for a few days/a week to let the patch mature or for an emergency-fix patch (i.e. 30-->30.0.2 "Super major security exploit was live for 3 hours but we fixed it") to be released has always worked to my benefit. I've never had a negative outcome for waiting a few days to patch. I don't have to deal with botched releases or newly opened attack vectors. Instead I get to listen to the canaries in the mine.

Also what happens when an auto-updater gets compromised? I get to listen to the canaries. You get to be one of the canaries. So for that, I thank you.


You're right, but for reasons that people may not realize right away.

It isn't the content of the update you should be wary of (make this decision for yourself if you care this much), but the act of updating machines that will cause problems.

When a Windows machine updates (yes, even as of today - I had this issue just last week) it is in an indeterminate state until a reboot, even if the update doesn't require a reboot.


Are you rebooting all your windows servers for each little update too then?


No. In a perfect world yes, you would update immediately. However, it isn't practical. Define what's a good time frame (week, month, daily) for your server, its role, and your manpower and stick to that schedule.

I can definitely say that it is better to wait to update when you can reboot than to update immediately. Of course, if there is a really bad vulnerability, update immediately. Let the user know it's an exception.


This was true until Microsoft started shipping their own "exploits" (read: updates that are more for their benefit than yours.)

After the Windows 10 debacle, I'm looking to get off of Windows as soon as I can afford to. Whoever decided to turn Windows Update into an advertising platform needs to be fired -- it's that simple.


Whoever decided to turn Windows Update into an advertising platform needs to be fired

Not that I ever plan to run it, but my understanding is that Windows 10 itself, not just Update, is an advertising platform.

Nobody in Vegas is taking odds that someone will be fired. But I'm in full agreement with you. Someone should be fired.


auto-installing new security updates but having a delay of 24-48 hours before installing them might be a safer alternative


lol, thanks 'moron4hire' I almost took you seriously :-)


Very good advice... I agree. Took me a long time to figure that system out, but it sure works well!



Just to state the obvious, .gov, .edu, and .mil are all restricted TLDs run by the US. What kind of attacker uses domain names in their attack that they can't register?

Unless, of course...

But that would be a wee bit obvious.


> Unless, of course...

Unless the servers are compromised and used as C&C?


This is probably just a test update that went out by mistake.

If MSFT is anything like where I work, that "payload" is a picture of a cat.


Microsoft _should_ not be anything like where you work. I'm not a Windows user, but if I were, I would hope and expect that the update mechanism for one of the world's most used pieces of software was closely guarded by several layers of computer-based signing and human approval.


Employees of a company like Microsoft aren't special, they're like anyone else and they make mistakes. There's extra bureaucracy to catch mistakes, but the bureaucracy was also designed by people, who make mistakes. You'll never get perfection no matter how much you try. Windows has had few big failures for me in 20+ years. Measured against its considerable complexity, that's shockingly impressive.


There are certainly test environments that these updates are pushed to much more freely than the production environment. Mistakes happen.


I certainly understand what you are saying, but I must repeat the essence of my previous post. For something so critical, there should simply be too many safeguards for any test to make it through all the way to end users.

If a test update really did make it through, it would warrant significant questioning of the procedures at Microsoft. If a test could get through without being discovered, then so might malicious code.


It was a test update and there will undoubtedly be a review of this. http://www.zdnet.com/article/microsoft-accidentally-issued-a...

The fact that a test patch got to this stage doesn't mean the safeguards aren't in place or that malicious code could have slipped through, though. Assuming even basic competence, this test update could not have been signed, and if someone had managed to push malicious code, the same would be true, so it wouldn't have been installed onto target machines.


> For something so critical, there should simply be too many safeguards for any test to make it through all the way to end users.

The only way to guarantee that is to not allow updates to be published at all.

> If a test could get through without being discovered, then so might malicious code.

You are conflating very different things. MSFT being able to publish updates is normal and does not require a security breach, even if one particular update shouldn't have been published. An external entity being able to publish an update containing malicious code would be a huge security breach, requiring both the ability to sign the update and to publish it.



Looks more like an internal flub: "//rr1winwusfs04/c/msdownload/update/software/defu/2015/09/testexe_896e3a62-8954-447b-5a562bd65cc6_d5e430cb05ee8a627ee6d811da8d7c4ccea57f4b.exe"

That being said, that something like this could happen should raise lots of questions about the amount of oversight on updates hitting windows, and the general security of such systems. I'll wait for an official response or a reverse engineer before I decide what's going on here.


I'd be surprised if an attacker would waste a compromise with something obvious. Perhaps it's some testing thing that wasn't supposed to go out.


Or maybe it has been exploited for a long time without anyone noticing it, and now the attackers screwed up?


Or a recent update to the infrastructure introduced a breaking change in a previously working malware package.


Where's Microsoft on this? This is on two news outlets as well as HN. Microsoft PR needs to issue a statement in the next hour or two, even one that just says they're investigating the issue, or it will be on the evening TV news.


http://arstechnica.com/security/2015/09/nerves-rattled-by-hi...

It's already done. About 5 hours after the post was first opened on the forum. There's also an article on ZDNet.

http://www.zdnet.com/article/microsoft-accidentally-issued-a...


Could it be a man in the middle that tries to install updates that aren't signed by Microsoft? It reminds me of this: http://www.leviathansecurity.com/blog/the-case-of-the-modifi... .


Not seeing anything on my Win7Pro SP1 VM - last update was 4.3MB VC++ 2008 Security fix - MFC applications being vulnerable to DLL planting due to MFC not specifying the full path to system/localization DLLs.


> 4.3MB

Interesting that the update in question is also 4.3MB?


Right, that's why I brought it up - it also relates to localization which may or may not be related. But interesting none the less.


I haven't seen any randomly-named updates on my system - but I had earlier ripped out all the telemetry and Windows 10-related crap (KB2952664, KB3021917, KB3035583, KB3068708, KB3075249, and KB3080149) and marked them hidden. I've also set my update policy to notify-only.

Now the spy updates are not hidden, and marked as "Important." They're bound and determined to force this crap down our throats. Bastards.

"Because f*ck you, that's why." The rallying cry of the corporate world.


Could you please elaborate on how you did that?


Note this is Windows 7.

I uninstalled each of those KBs manually from the "Installed Updates" screen, then changed the update policy. I used to use "download and install manually" but now I'd prefer only being notified, and THEN deciding whether or not I want to download whatever is offered. I then re-ran the check for updates, and hid the offending KBs.

That was earlier this month. After reading this article, I decided to have a look and see if there was anything fishy in my update history (beyond the listed KBs that I don't want). Nothing there, at least, but my hidden updates were un-hidden (along with Silverlight and Skype, two more "do not want" things that I always hide).


Note you can remove updates from Control Panel -> Programs and Features -> View Installed Updates (link on the left sidebar).


Can anyone shed some light on this?


Too many "tests" this month, I'd say. Test cert, test update... Let's hope something worse like "test nuclear strike" won't follow.


And the same company doesn't allow users of Windows 10 Home to review updates; instead, Windows 10 Home updates always download and install automatically.


Could it be that older versions of windows (2k3 for example) might allow this update to be installed? Has anyone tested this in a sandbox?


Microsoft sending spyware again?


I'm worried about friends, family, and small businesses that run Windows with install updates set to automated mode...

Shouldn't Microsoft be signing updates so that redirection attacks don't work?

Edit:

Elaborating on my question: I mean much more like Linux distributions, which sign both the packages (updates) and the index of those files. Some distributions use multiple hashes/digests to make collision attacks far less likely to succeed.

Such an attack could redirect traffic at layer 3 via a router compromise, or via some name resolution weakness (possibly even to localhost, as a way for malware to upgrade from being able to edit the hosts file to having system-level services).

The signing of both the update files and the list of updates could offer protection from an attack that would thus need to be valid for all of the signature checks, not just a single check.
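
As a sketch of what I mean, assuming a hypothetical index format that lists several digests per file (the structure and names here are made up for illustration; the index's own signature check is left out):

    import hashlib

    def file_digests(path):
        # Compute several digests of the update file in a single pass.
        hashes = {"sha256": hashlib.sha256(), "sha512": hashlib.sha512()}
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                for h in hashes.values():
                    h.update(chunk)
        return {name: h.hexdigest() for name, h in hashes.items()}

    def verify_update(update_path, index_entry, index_signature_ok):
        # index_signature_ok: result of verifying the signed index against a trusted key.
        if not index_signature_ok:
            return False
        actual = file_digests(update_path)
        # Every digest listed for the file must match, so a collision attack would
        # have to defeat all of the hash functions at once, not just one of them.
        return all(actual[name] == expected for name, expected in index_entry.items())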


What does this have to do with redirection attacks? And who says the updates aren't signed? I would be a bit surprised if they weren't.

Based on the info in the post, I'd guess that this is a test update of some sort and that it was pushed by mistake.

Disclosure: MSFT employee, but no knowledge of what this is about.


> Shouldn't Microsoft be signing updates so that redirection attacks don't work?

Microsoft sign updates and utilise HTTPS.

Given how few users are impacted by this suspect update, it may be the result of malware on their local machine. If malware has root then all bets are off, the signing requirement can be removed.


Why would malware bother to hijack the update system like this? It seems like a lot of work to trick the user into installing something that the malware could just install directly.


I concede this point. Seems like a whole lot of work for little to no pay off.


To convince people to disable automatic updates?


Yeah, I'm sure someone compromised Windows Update as a public service...


I'm pretty sure Microsoft does sign updates. Which means either this is a glitch of some kind, or it is being refused/failing installation because it's not signed... Or, worst case, it means the update signing key has been compromised.


What does 'sign' mean in this context? I hear it a lot and don't understand the mechanism.


It refers to the idea that most asymmetric cryptosystems (which, generally speaking, means that each user has a public key and a private key) allow a user to create a 'signature' using their private key, which can be verified using their public key. See here:

http://stackoverflow.com/questions/454048/what-is-the-differ...


It means generating a signature of the binaries being installed, and having this signature be authenticated by Microsoft (using their signing key).

The signature is distributed alongside the binaries.

I'm not certain if the Windows Update system uses the same Authenticode system used for application binaries, but you can start reading here:

https://msdn.microsoft.com/en-us/library/ms537361%28v=vs.85%...


Think of it as a SHA-1 or an MD5 that is generated using a private key, and can be verified using the public key (and the content of the data that was signed).

It verifies that the signer had access to the private key, and that the data signed by the private key is the same data that you are verifying with the public key.

It's like the other checksums (SHA-1/MD5/etc) with the addition of identity verification (so long as you can trust that the private-public keypair used to sign it is only accessible to parties you trust).
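
A minimal sketch of that idea using Python's cryptography package (the key size, padding, and hash here are illustrative choices, not whatever Windows Update actually uses):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()  # this half ships with the OS / is trusted by clients

    update = b"bytes of the update package"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

    signature = private_key.sign(update, pss, hashes.SHA256())  # done by the publisher

    try:
        public_key.verify(signature, update, pss, hashes.SHA256())  # done by the client
        print("signature valid: content untampered and issued by the key holder")
    except InvalidSignature:
        print("signature invalid: refuse to install")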



It's a cryptographic mechanism. MS has a private key they apply to each Windows update to mathematically prove A) they're the ones who issued it and B) the content was not modified in transit.

(I am not experienced in cryptography. This explanation might be a little simplistic.)


> it means the update signing key has been compromised.

The odds that parties outside of Microsoft have access to their update signing key actually seem pretty good, given the Snowden revelations. Consider the Stuxnet distribution strategy -- what a boon it'd be to be able to deploy that sort of machine-specific payload via the built-in update kit.



