Microsoft Says Russian Hackers Exploited Flaw in Windows (wsj.com)
64 points by collinmanderson on Nov 2, 2016 | 98 comments



> Microsoft Says Russian Hackers Exploited Flaw In Windows
> (And Blames Google)

Jedd Says Microsoft Should Have Fixed The Exploit

Yes, it sounds like a short notice period (5 days or so?).

But ... from the chronology it's sounding like this particular exploit was performed long before Google revealed the vulnerability, and indeed a goodly time before Google reported that vulnerability on the hush to Microsoft, so I can't see how it's Google's fault.

EDIT: And, as per sentiment expressed by world+dog in previous threads about this particular event, Google's observation that exploits were already in the wild is the most important aspect of this story. If a vulnerability exists for some software I'm using, and I don't know about it but the bad guys do ... I want to be told. I may not be able to patch, but I can mitigate.


> Yes, it sounds like a short notice period (5 days or so?).

7 day notice period as per google's usual policy.


And Google actually revealed it 10 days later (after the weekend, I guess).


I imagine that applies if there is no evidence that the bug is being exploited. When evidence exists that it is being exploited perhaps it doesn't apply.


Other way around.

Actively exploited: 7 days.

Generally: 90 days.

There are probably other caveats, but they're not interesting here.


So you think you are safer now that every single bad guy knows about it instead of one bad guy?


It's safer because now users/administrators know about the issue and can watch for the exploit being abused.


Apparently it was only exploited by a single state actor. Now anyone can exploit it. Thousands of small companies with no DPI are screwed until the patch arrives.

Way to go, Google.


> Thousands of small companies with no DPI are screwed

They were anyways.

> Apparently it was only exploited by a single state actor.

We don't actually know that. In fact, it's amazingly unlikely.

> Way to go, Google.

Suck it up and use the vulnerability as an excuse to implement better procedures. It'll only be worse tomorrow...


All the reports I've read point to only the GRU using the exploit, which invalidates your comment completely, as well as the downvote I got.


That's the only group that we knew were using it. It's extremely likely that other groups were using it too, but weren't caught.


So we only had evidence of a state actor using it against its objectives.

Now we can be sure everyone is using it because the bug is public.

I think it is easy to understand why this has been a mistake. I believe a patch was announced for next week; what's the risk for the general population if only one actor knows the bug? What's the risk if everyone knows the bug? IMHO this hasn't made the general population safer, quite the opposite.

Google doesn't want to wait for the scheduled fix? Then disclose the information to AV and security vendors, and at least we have a head start against general exploitation of the bug until the fix is out.


No. We did it that way in the old days and MS took years to patch bugs. Now nobody gives their excuses any weight and they manage (mostly) to keep up.

> only had evidence of a state actor

You have no evidence of it being used so it must not have been then. That's that. Lack of evidence is evidence, or something.

> hasn't made the general population safer

Oh, you have evidence of that?

This is a bit of a paradox. Any given disclosure might make some people less secure, but a policy of rapid disclosure has made all of us vastly more secure.

> Then disclose the information to AV and security vendors

Oh great, give the data to companies who will then turn around and bill me to tell me what to block. And weeks later, not in the moment when I need it. Thanks a lot!

But this also misses that these partial disclosures are usually still enough to tell someone skilled how to write the exploit, and it only takes one exploit being written. All it does is give a false sense of security.

Withholding bugs is a useless idea that's exclusively harmful.


Google's policy is to release under 7 days if there's evidence of the bug being exploited. They told Microsoft and a date for the patch was set, I guess because they follow processes to make sure they don't make it worse or break something else when patching. This is not "the old days", Microsoft's security track record is completely different.

I can't believe that you're (rhetorically) asking me for evidence when it's Google who makes decisions based on that evidence. It's vox populi now that only one actor was using it. Google knows it, you know it too. Why are you trying to move the goal posts with useless rhetoric?

How does publishing a bug, with no patch available until a few days from now, make us safer? Especially when you don't want security companies to have a head start either. How are you going to protect your users then?

I don't know if your comment is just bait, but there's a difference between withholding bugs and being a responsible company. As far as I understand Microsoft gave Google a date that's not very far in time. If whoever made the decision to disclose it anyway can't see the difference (or a calendar, this is 2016, not 1996) then I give up.

Once the bug was discovered we needed an urgent fix; disclosing it prematurely burned whatever short window we had. There was no need for that. Why can't Google accept a patch date so close to their disclosure date, and only disclose if the third party (Microsoft in this case) fails to deliver the patch?

You can ignore all the above, but let me know one thing: how are you protecting your user base from someone exploiting this bug, if you don't have any security products capable of detecting the exploit nor a patched OS? I'm curious and I'm sure I can learn from you (I'm not being sarcastic).


> you don't want security companies to have a head start

Right. I want the information that I can use, or get my team to use, rather than waiting for some company to distill it for us. I've worked in some of those companies so I don't have any illusions about them.

> This is not "the old days", Microsoft's security track record is completely different.

Only because they got nailed so many times. Security isn't their market discriminator so they'd rather ignore the issue and hope it blows over.

We're factually better off than we were years ago, thanks largely to a liberal disclosure policy.

> Once the bug was discovered we need an urgent fix,

We urgently needed the fix beforehand. It's not 0 -> DANGER, it's N -> N+3, where N is not a small number.

> Why can't Google accept a patch date so close to their disclosure date and only disclose if the third party (Microsoft in this case) fails to deliver the patch?

Why can't Microsoft hurry this critical patch even if it means breaking its routine a little? I imagine Google didn't give much weight to their arguments, probably because of past experience.

> How does publishing a bug with no patch available until a few days from now on make us safer?

Knowing there's a landmine in my yard makes me safer even if it means I simply don't go out on the lawn.

And you can almost always figure out a mitigation strategy. Beyond that, if there's a super-bug so bad that no mitigation strategy can be devised, I'd rather know to turn my computers off until patch day.

> You can ignore all the above

Ignore nothing. Acknowledged and refuted.

> How are you protecting your user base from someone exploiting this bug?

Not having a user-base of windows machines, and thus not having read about it, I couldn't say. I'd probably be able to just turn on the draconian policies that users would rebel against normally.

But the point isn't an interview question about what I'd do, if alone at the helm, but what the entire internet could come up with. I'd wait a bit and copy that. If a security company came up with it, then good for them. But if not, good for us anyways.


I'll play the game.

> Right. I want the information that I can use, or get my team to use, rather than waiting for some company to distill it for us. I've worked in some of those companies so I don't have any illusions about them.

You have it now, hope you're happy. How are you using it?

> Not having a user-base of windows machines, and thus not having read about it, I couldn't say. I'd probably be able to just turn on the draconian policies that users would rebel against normally.

> But the point isn't an interview question about what I'd do, if alone at the helm, but what the entire internet could come up with. I'd wait a bit and copy that. If a security company came up with it, then good for them. But if not, good for us anyways.

Because there's nothing you can do!!! You're exposed!!! There are companies with thousands of Windows seats. How can you just go and say "not having Windows users"? That shows very poor judgement and a seriously worrying detachment from reality.

It also shows me you actually don't really give a shit about security and know nothing about the challenges in the real world. This disclosure and the kind of attitude shown on this thread are the two main reasons why the InfoSec industry stinks so hard.

> Only because they got nailed so many times.

Yes. Does it matter why though? When they were getting nailed I was probably shitting my diapers. Should I be judged now for what I was doing those years?

> Security isn't their market discriminator so they'd rather ignore the issue and hope it blows over.

Are you sure? You may not realize that Microsoft is these days a huge and (for many big companies) reliable security vendor. Compliance, tooling, innovation, products... They've got their hands on everything. They even help shut down botnets.

> Why can't Microsoft hurry this critical patch even if it means breaking its routine a little?

Because it was not as critical until Google disclosed it. I'm baffled you can't see this.

> I imagine Google didn't give much weight to their arguments, probably because of past experience.

I think Google was just being strict about their policies. I don't think they've got prejudices. However, it is proven by your comments that you do have those prejudices and you're basing your opinion on them.

Objectively we're not safer than before the disclosure.

> Knowing there's a landmine in my yard makes me safer even if it means I simply don't go in the lawn.

This shows poor understanding of the issue. This is not something you stumble upon while doing your daily menial tasks.

If you'd like a silly comparison, this is like Google releasing blueprints to create super cheap surface to surface missiles because they were being used by a nation against another. Now they have weaponized any script kiddie out there.

> And you can almost always figure out a mitigation strategy. At that, if there's a super-bug that's so bad no mitigation strategy can be devised, I'd rather know to turn my computers off until patch-day.

Oh really? How? What's your mitigation strategy? What can a sysadmin do to mitigate this? If your answer is "don't use Windows", which could be a good long-term plan, you're again out of touch with reality.

Remember, reality is not Silicon Valley.

> Ignore nothing. Acknowledged and refuted.

You're going to have to point me to where you've refuted my claim that we're not safer after weaponizing everyone.

> But the point isn't an interview question about what I'd do, if alone at the helm, but what the entire internet could come up with. I'd wait a bit and copy that. If a security company came up with it, then good for them. But if not, good for us anyways.

Holy shit. I hope I'm not using any of your products. This is not how you do security.

I thought I was discussing this with someone who took security seriously and from whom I could learn a thing or two (I can't call myself an expert, maybe a hobbyist); it does seem, though, that you're in this conversation just because you like to stick it to Microsoft (or to the big guys, or whatever) and can't be objective about it.

If you'd like to continue this conversation, I'd ask you to tell me how we can be safer after the disclosure. How can I help my friends running small and medium businesses to protect themselves against the exploitation of this bug?


> You have it now, hope you're happy.

In general, yes very. Thanks.

> How are you using it?

I'm not, I don't have Windows boxes. Did you miss that?

> Because there's nothing you can do!!! you're sold!!! there are companies with thousands of Windows seats. How can you just go and say "not having Windows users"? That shows very poor judgement and a seriously worrying detachment from reality.

Nope, I just double-checked my entire inventory and there aren't any Windows computers.

That's an example of how by knowing more about it, I can make better decisions. For now, for this bug, for me, 'nothing' is an acceptable response.

> It also shows me you actually don't really give a shit about security and know nothing about the challenges in the real world.

Exactly the opposite. You're sounding like you've never considered mitigations. You're parroting a corporate message that has caused more vulnerabilities over the years than null-terminated strings.

> Are you sure? I don't think you really know that Microsoft is these days a huge and (for many big companies) reliable security vendor. Compliance, tooling, innovation, products... They've got their hands on everything. They even help shutting down botnets.

Shutting down botnets is admirable. But it's not platform security. Microsoft is still too focused on extreme user convenience, etc, to make the hard choices.

Their cloud offerings still have problems with filenames their own OS accepts, and when it dies it just refuses to copy some of the files. It's not a security bug, but it shows a lack of attention to detail and improper sanitization. It's hard to imagine them actually properly executing on a robust and secure solution. I wouldn't trust a builder living in a crooked house...

> I don't think [Google has] got prejudices. However, it is proven by your comments that you do have those prejudices and you're basing your opinion on them.

Yes, I also remember their total lack of concern in the 90s which does color my view but the issue is that I still see those behaviors from them.

When they release mitigations in days, not patches in months, I'll revise that opinion.

> How? What's your mitigation strategy? What a sysadmin can do to mitigate this? If your answer is don't use Windows, which could be a good long term plan, you're again out of touch with reality.

You seem not to read the posts you respond to. I have no idea because I haven't even read two lines about this bug. It doesn't affect me or my charges and so I'll focus my energy on things that do.

In the hypothetical where it was in software my users used, I would probably just block that software for everyone. Rarely are even mission-critical apps actually so, in practice.

Failing that I'd block that type of media, etc. Or restrict it to a small subset of users who I could trust, and I'd revoke a bunch of other privileges for them temporarily to avoid the attacker gaining anything of value.

For instance: disable the PDF reader, block all PDF attachments, and forward all email with an attachment to a user with a reduced-access machine to sanitize.

This could be done in minutes which is why I care about finding out about things right away. I can slam the door even without fixing the problem, then analyze it at leisure.
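To make that concrete, here's a rough sketch (Python, standard library only) of the kind of quarantine check a mail-gateway hook could run; the choice of PDF as the risky type and the surrounding names are assumptions on my part, since the real media type depends on whatever the exploit actually rides on.

  import email
  from email import policy

  # Placeholder policy: quarantine anything carrying a PDF attachment until
  # the vulnerable reader is patched. Types/extensions here are assumptions.
  RISKY_TYPES = {"application/pdf"}
  RISKY_EXTENSIONS = (".pdf",)

  def should_quarantine(raw_message: bytes) -> bool:
      """True if the message carries an attachment type we've decided to block."""
      msg = email.message_from_bytes(raw_message, policy=policy.default)
      for part in msg.walk():
          filename = (part.get_filename() or "").lower()
          if (part.get_content_type() in RISKY_TYPES
                  or filename.endswith(RISKY_EXTENSIONS)):
              return True
      return False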

> If you'd like a silly comparison, this is like Google releasing blueprints to create super cheap surface to surface missiles because they were being used by a nation against another. Now they have weaponized any script kiddie out there.

If Google found the blueprints then anyone else could. And yes, we'd want to start analyzing them for weaknesses before they were flying towards us.

> Holy shit. I hope I'm not using any of your products. This is not how you do security.

Haha, but you're totally wrong. Security (in any domain) is about doing what you can and understanding the limits of it, not about stupid ivory-tower perfection.

My users would be safer in minutes, your users would be ignorantly vulnerable for months.

> I thought I was discussing with someone that took security seriously and that I could learn a thing or two (I can't call myself an expert, maybe a hobbyist); it does seem though you're in this conversation just because you like to stick it to Microsoft (or to the big guys, or whatever) and can't be objective about it.

MS isn't even on my radar and I normally say this sort of thing about other vendors.

I don't care to wait on their patch cycle, or wait for them to make a tidy patch, I want as much of a mitigation as possible, as soon as possible. Some companies have bitten the bullet and pushed updates to disable whole areas of broken functionality and taken the PR hit, others would rather wait and hope nobody notices.

> If you'd like to continue this conversation I'd like to ask you to tell me how can we be safer after the disclosure.

Trivially, because without the disclosure nothing you do will make you safer. With the disclosure you've got a range of options.

> How can I help my friends running small and medium businesses to protect themselves against the exploitation of [any] bug?

Even knowing when to just unplug the network and wait is still a huge step up over being ignorantly plundered.

When you've done everything you can, stop and assess. Maybe there'll be a more fine-grained mitigation (blocking a smaller subset of incoming traffic for instance), or an actual solution by then.


DPI?

Anyone could have exploited it before. Now people know it's exploitable. I'm still on Google's side here (as much as it pains me to say it). Making the vuln known is the best course of action.


At least now you know what to defend against. Also, how do you know that only one bad guy knows about it? If it's in the wild, all the bad guys will know about it sooner rather than later anyway.


How do I turn the specific knowledge of the exploit into a more meaningful defense than if I had vague knowledge there is a windows zero-day actively in the wild?

I'm not asking snidely. I legit want to know how this is leveraged in defense and what/how I can do.

Is it common practice in the pre-patch period to enable some sort of system call tracing that monitors for (and/or kills) processes that use the vulnerable call in the way described in the Google blog post? Or is there a sandboxing solution where I can blacklist or filter certain uses of system calls?


> How do I turn the specific knowledge of the exploit into a more meaningful defense than if [...]

By being able to raise an alarm and allocate actual time to fixing/mitigating it, unlike if it were only a vague warning. We know there's never been a day when Windows didn't have a critical remote-code-execution security flaw. Obviously if you're still using it, management isn't doing anything proactive to improve security, so you need these motivators.


> At least now you know what to defend against

Who is "you"? Do you realistically think the hundreds of millions of Windows servers out there are actively managed by teams of security experts who have the expertise to go under the skin of new security vulnerabilities in a matter of hours and deploy a mitigating strategy?


I mean, no, but you can't actually account for the worst security results. Heartbleed is still unpatched in a lot of places, but there's really no disclosure strategy capable of improving that. Even a low-skilled maintenance team can be on heightened lookout for compromised accounts and illicit connections.

I'm not sure I back Google here, though. Generally, I think active exploits are reasonable to disclose, but Microsoft had a specific patching plan that would have been up in another week. It seems reasonable to accommodate requests like that when the vulnerable product is actively being fixed.


> Microsoft had a specific patching plan

We've given Microsoft too much deference over the years. This can be a kick in their pants to make them come up with a better plan.

> up in another week

Sure, that's what they always say. And then the patch misses that window and it's another 90 days away.

Hopefully they'll realize the days when they get to call the shots are over, and they'll be driven to improve.


It's easy to read some tone in there that may not have been intended, but it's actually a really good question.

Short answer - I don't know, but I think so.

In my particular case I'm not exposed (I use Debian pretty much everywhere, with a couple of Win7 VMs for work that are NAT'd and sandboxed, used almost exclusively for editing MS Office documents, with minimal Internet access, etc).

You said 'instead of one bad guy', but there's no evidence a single bad guy had the exploit, so I can't really quantify my risk prior to the announcement.

Anyway, in the general case of your question - yes, I think I am safer if I have the same knowledge (about problems with my software) that everyone else has, rather than me not knowing but some unknown number of potentially very sophisticated attackers are confirmed to know and be using that knowledge.


I was a bit sarcastic, because in my mind this is a no-brainer. As soon as a hacking technique becomes public, every script kiddie will get their hand on it, and then suddenly your server is exposed not just to a team of Russian hackers that may have used it in a targeted attack completely unrelated to your business, but to anyone else on the web, potentially scanning all IPs.

You might have actively looked into the technique used, understood it, and done something to mitigate it. What about the hundreds of millions of Windows servers that rely on monthly updates for security patches, maintained by IT teams that do not have the resources or expertise to investigate every MSFT security bulletin in detail? They will become sitting ducks. Is that a good outcome?

In my mind it is a rhetorical question.


Hey, I entirely get where you're coming from.

But I'd suggest it's kind of a security by obscurity blanket (I know that's a horrendous aphorism - forgive me, please).

I think the problem I have lies here:

  >  As soon as a hacking technique becomes public,
  >  every script kiddie will get their hand on it,
At the time Google announced it we had no idea how many script kiddies (or worse) had their hands on it. Certainly it was known the number was > 0.

So the question instantly becomes: is it safe to assume that only relatively benign bad people know about it and are using it, and that other people (e.g. swarms of script kiddies) don't know about it and won't for some weeks?

I'm empathetic to people whose day job is not IT, and are stuck for various reasons using poorly maintained Microsoft software, but I genuinely do not believe we are all safer by assuming (hoping?) active exploits are minimally used.

Anyway, Microsoft was well aware of Google's policy on announcements for actively exploited vulnerabilities, and didn't even choose to provide some mitigation advice prior to their (presumably upcoming) security patch.


If Microsoft issues a patch that breaks Windows Server, then we will have a Y2K-bug-style scenario with all sorts of infrastructure being disrupted, etc. I have some sympathy for Microsoft going through a period of testing before releasing a patch to Windows Server.


Sympathy aside, this is their flagship product and they claim it is superior to all others. They claim that they have superior support, and they state that they offer a more professional product.

I have no sympathy for a business with delusions of superiority who won't even properly test their own software. Why do we pay a lot of money to beta test their software for them?


There's also the benefit that MSFT will be pressured to release quicker. Even if there is a loss right now, there may be a higher payoff in the future with quicker fixes.


Basically you are arguing against responsible disclosure?


No, it's an argument against irresponsible patching. The budget allocated to fixing these exploits will be higher at M$ as a result of the backlash.


MS is already fixing exploits and publishing patches regularly.


So this is how rumour becomes truth: the US government accuses Russia of hacking the DNC, either because they have some kind of proof (unlikely) or because it serves their global interests (likely), the media publishes articles repeating this myth, and then completely unrelated articles embed this "truth" in the title, even though the content doesn't really require it. Abhorrent!


> the US government accuses Russia of hacking the DNC, either because they have some kind of proof (unlikely)

"two competing cybersecurity companies, Mandiant (part of FireEye) and Fidelis, confirmed CrowdStrike’s initial findings that Russian intelligence indeed hacked the DNC. The forensic evidence that links network breaches to known groups is solid... etc etc"

http://motherboard.vice.com/read/all-signs-point-to-russia-b...


Attribution is hard. The NSA likely never hacks someone's systems from US IPs. They hack Russian and Chinese computers first, and launch their exploits from there. I would imagine the NSA is not the only agency that does this.

I don't know how good the two security firms are and whether they could actually identify something like this, though.


Attribution is difficult, but your comment shows a lack of understanding. Everyone already knows that you can't attribute activity based solely on the geographical association of IP addresses.


Read the whitepaper that CrowdStrike wrote on the HammerToss virus. They attribute it to Russia for two reasons:

1) It is "sophisticated", because it uses public services like Twitter/Github and embeds command & control scripts into images. Any mid-level application developer could write a similar virus in a weekend.

2) The work required to keep the virus operating (registering Github/Twitter accounts, posting messages) was done during the Russian workday. Because apparently the Russian hackers are sophisticated enough to write this virus but incompetent enough not to cover their tracks.

This is the clearest, most blatant example of the US propaganda machine at work. With no concrete evidence whatsoever they have accused a nuclear power of performing these attacks, and the media and a bunch of "infosec researchers" (who most in tech recognize as charlatans) fell in line.


Did you read the link? There's actually a lot more evidence than that.

"a reused command-and-control address—176.31.112[.]10—that was hard coded in a piece of malware found both in the German parliament as well as on the DNC’s servers. Russian military intelligence was identified by the German domestic security agency BfV as the actor responsible for the Bundestag breach. "

Also, Occam's razor.


It's funny how conspiracy theories take more evidence that they are wrong as evidence that the conspiracy is bigger than they thought. So now it's a bunch of US intelligence agencies, the Wall Street Journal, the Democratic Party, several unrelated IT security companies, Microsoft, and Google that are all trying to frame the Russians?


Just like there was no conspiracy to frame Iraq as having WMDs or being responsible for 9/11? I mean, all of NATO was on board, they couldn't have possibly been wrong.


So, because that conspiracy theory adopted some very well known (at the time) correct elements, then this one is also correct?

Then how about these: because Iraq was framed as having WMDs, then the Titanic was actually the Olympic rebadged and sunk as insurance fraud. The Moon? Man didn't walk on it, that's just an American lie because no WMDs. It's also a hologram, because Iraq didn't have WMDs!


Nothing else you mention has a documented history of government non-transparency.

Foreign policy on the other hand continues to reveal events that occur not as they were originally described.


It's not a conspiracy theory to say that solid attribution of cyber attacks is practically impossible.


There are other kinds of intelligence besides computer forensics. So, attribution is not impossible if you are a nation state with other resources.


Does anybody else feel like there's an ongoing media push against Russia recently (as a UK resident, I know US/RU relations haven't been that great since WW2)? If I had to put a date on it, it started around the time of the Aleppo bombing.

Kinda like how 'terrorists' were responsible for every drop of spilled milk in the early 2000s, Russia is now responsible for hacking every system in the west.

Not that I care one iota for Russia, but being influenced to distrust and hate feels really underhanded.


It's not the media's fault. Russia is waging open cyber war against the West; of course the media covers it. I know from a reliable source that my country's government IT infrastructure, especially the ministry of foreign affairs, is being attacked by Russia nearly constantly -- it's even worse than what the media says.


It's not like the NSA and Chinese government hackers weren't doing the same; it's hilariously hypocritical to blame only one party. The same was happening with surveillance, with basically everyone having their hands in the cookie jar.


I think everyone understands/expects espionage, it's the public 'weaponization' of the information that makes Russia a particularly bad actor.

If Russia had not released the DNC emails (or had somehow done so surreptitiously) to queer the US election, there would be no story to report on.


How quickly we forget things.

https://en.wikipedia.org/wiki/Stuxnet


Did the media incorrectly implicate the actors in that case?


The argument is that if Country A uses cyber warfare it's "good", but if country R does it, it's "bad". While both are doing the exact same thing, as I said above.


Is trying to stop a country from obtaining nuclear weapons in violation of the Nuclear Nonproliferation Treaty really morally equivalent to interference with a democratic election?


If country R hacking other countries is an accepted fact, why all the whining about it being covered in media? It is a big deal.


This statement is very unpopular. "Someone" has to be good and anti-evil and at the same time very competitive. No one knows how, sadly, but also no one wants to feel like the evil one.


Well, relations with Russia have gotten worse since Ukraine.


Why do people seem so adamant that Russia didn't do it when it's very likely that they did?


Trump/Alt-right supporters. They love Putin because he's a "strong leader" and he hacked the DNC.

But it could be anyone! China! Or a 400 lb kid living with his parents. /s


Maybe not adamant, but doubtful, considering our government has consistently misled us about who our real enemies are and their actual level of threat.

What about Iraq and those WMDs? Or Iran, which has been just a few months away from getting nukes for like 20 years. Or the fact that nearly all the 9/11 hijackers were from Saudi Arabia, yet apparently they are our friends.


Except the Russians have been implicated by more than just the government; it's been backed up by multiple independent internet security firms. Also, the Bush administration that supported the Iraq war has kinda been out of power for a while now.


This is not about just one administration.

Foreign policy continues to reveal events that occur not as they were originally described.

The real question is "Are we certain enough to take actions that could lead to war?", because that is where our government is going with this. http://www.pbs.org/newshour/rundown/does-government-know-hac...


> when it's very and most likely that they did

Why do you think so? First of all, there are countless "enemies" that would want to hack the US/DNC (North Korea comes to mind, which was accused of hacking Sony some time ago, as does Iran, which was itself a victim of US hacking). All of those seem equally likely. Second, it's possible that there was a leak, not a hack.


Given that there is forensic evidence that led infosec/intelligence institutions to conclude Russian hackers did it, I would say the other options are not equally likely.


But you don't see much reaction when NK or Iran gets accused of hacking; the mention of Russia, though, seems to get people really up in arms about it.


Maybe. I called it "bullshit" then just as I do now.


Yes, it will be repeated until accepted as truth.

This is the same kind of 'truth' that led us to war in Iraq over WMDs.


[flagged]


We've asked you already to please not comment like this, and we have to ban accounts that continue. Please comment civilly and substantively on Hacker News or not at all.


> either because they have some kind of proof (unlikely) or because it serves their global interests (likely)

Is it just me or is Trump making conspiracy theories great again? There's always a fringe that is happy to see a conspiracy in everything (and sometimes they're actually true), but it seems that this sort of thinking is everywhere at the moment.


The Snowden revelations made "conspiracy theories" plausible regarding cyber security, operations, and media reporting.


The thing is, I remember the EFF reporting on NSA spying activity long before Snowden blew the cover open. The EFF has a great timeline (https://www.eff.org/nsa-spying/timeline). The first apparent reveal of warrant-less spying was in 2005 (http://www.nytimes.com/2005/12/16/politics/bush-lets-us-spy-...) and Mark Klein was the first major whistleblower back in 2006. (https://en.wikipedia.org/wiki/Mark_Klein).

What Snowden confirmed was more the scale. The details honestly shouldn't have been terribly surprising for anyone who had followed computer news for the last decade, unfortunately.

As far as Russia hacking the DNC is concerned, what I know is there are several cybersecurity companies that have confirmed that Russian hackers were involved, publishing white papers with a fair bit of evidence. I personally haven't seen a single cybersecurity company so far refute that information.

I do welcome a technical refutation of the white papers or a point to a cybersecurity company that disagrees! Counterpoints however have to be data-oriented to be interesting. I consider random commentators posturing that all this is just US propaganda with little evidence to be merely cheerleading, sorry, that's not interesting to me...


How naive does a government agency have to be to think backdoors are for them only?

How pissed was the FBI when they learned that now the Russians also have access to the voting machines?

They're using next-decade technology with a 60-year-old mindset.


"New Shadow Brokers dump contains list of servers compromised by the NSA to use as exploit staging servers." -> https://twitter.com/musalbas/status/793001139310559232

Lots of .ru domains in that list. Attribution is hard if you care about being correct.


Ok, I'm only going off the extremely limited information in this article, but this attack phishes users, gets them on a malicious webpage and then uses Flash as a vector to exploit an MS Windows vuln, right?

If that's the case, haven't Chrome and Firefox been blocking Flash for over a year now[0]? Considering that MS Edge is apparently not vulnerable (according to the article), that doesn't leave much market share left. If all of the above is correct, I'd say the attack surface here is pretty small.

[0] https://archive.fo/oxlmT


On Microsoft Edge and Google Chrome, it can't be exploited because of win32k lockdown.
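As far as I understand it, "win32k lockdown" is the system-call-disable process mitigation: the sandboxed renderer opts out of win32k.sys system calls entirely, so a kernel bug in that driver can't be reached from a compromised renderer process. A rough ctypes sketch of the call is below; the enum value and flag layout are recalled from the Windows headers, so double-check them before relying on this.

  import ctypes
  from ctypes import wintypes

  # From the Win32 headers as I remember them: ProcessSystemCallDisablePolicy == 4,
  # and bit 0 of Flags is DisallowWin32kSystemCalls. Verify before use.
  ProcessSystemCallDisablePolicy = 4

  class SYSTEM_CALL_DISABLE_POLICY(ctypes.Structure):
      _fields_ = [("Flags", wintypes.DWORD)]  # bit 0 = DisallowWin32kSystemCalls

  def lock_down_win32k():
      """Ask the kernel to reject win32k.sys system calls from this process.
      Only sensible in a process that does no GUI work (e.g. a sandboxed renderer)."""
      pol = SYSTEM_CALL_DISABLE_POLICY(Flags=1)
      ok = ctypes.windll.kernel32.SetProcessMitigationPolicy(
          ProcessSystemCallDisablePolicy,
          ctypes.byref(pol),
          ctypes.c_size_t(ctypes.sizeof(pol)))
      if not ok:
          raise OSError("SetProcessMitigationPolicy failed")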


Why would you submit an article that requires an account to read it?


Russia has been so so bad again. Bla bla.



And Microsoft's post doesn't seem to mention "Russian" anything. Compare that with coverage in the style of "Microsoft CONFIRMED Russians helped HIM".

Edit: The titles of the articles and the coverage make a logical jump that the authors of the texts took care to avoid. The jump can only be explained by the need to influence readers' perceptions shortly before the election.


In the linked PDF they state that STRONTIUM is likely APT28/Sofacy:

"STRONTIUM has been active since at least 2007. Whereas most modern untargeted malware is ultimately profit-oriented, STRONTIUM mainly seeks sensitive information. Its primary institutional targets have included government bodies, diplomatic institutions, and military forces and installations in NATO member states and certain Eastern European countries. Additional targets have included journalists, political advisors, and organizations associated with political activism in central Asia. STRONTIUM is Microsoft’s code name for this group, following its internal practice of assigning chemical element names to activity groups; other researchers have used code names such as APT28, Sednit, Sofacy, and Fancy Bear as labels for a group or groups that have displayed activity similar to the activity observed from STRONTIUM. The group’s persistent use of spear phishing tactics and access to previously undiscovered zero-day exploits have made it a highly resilient threat."

http://download.microsoft.com/download/4/4/C/44CDEF0E-7924-4...


Alternatively, get this addon: https://addons.mozilla.org/en-GB/firefox/addon/refcontrol/ or an equivalent for Chrome.

Then add www.ft.com and www.wsj.com sites with the Action of https://www.google.com
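If you want to check whether a given paywall still keys off the referrer, the same idea can be sketched with Python's requests library; the article URL below is just a placeholder.

  import requests

  # Pretend the click came from a Google search result; URL is hypothetical.
  resp = requests.get(
      "https://www.wsj.com/articles/example-article",
      headers={
          "Referer": "https://www.google.com/",
          "User-Agent": "Mozilla/5.0",
      },
      timeout=10,
  )
  print(resp.status_code, len(resp.text))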


Thank you but I just don't click them at all any more.


This is nice, but... I'd rather not support paywalls by reading their content.


Hey, come work for me. I have an addon on Outlook that blocks emails where employees demand their salary.


...I think my boss uses that.


[flagged]


Please stop posting unsubstantive comments like this.


Google's behavior seems strange:

http://www.theverge.com/2016/10/31/13481502/windows-vulnerab...

"Google went public just 10 days after reporting the bug to Microsoft, before a patch could be coded and deployed. The result is that, while Google has already deployed a fix to protect Chrome users, Windows itself is still vulnerable — and now, everybody knows it."


IIRC from when this was originally reported, this was already being exploited in the wild prior to release. The only reason to not go public is to give the vendor a chance to patch it prior to general exploits being available. If exploits are already happening, waiting only endangers more people as they can't take actions to mitigate a danger themselves if they don't know about it.


There's no need to know the actual details of the exploit to provide workarounds, if they exist. If they don't, everybody has to wait for Microsoft anyway, but the malware authors get one now-famous 0-day. Instead of being used exclusively by one group, it gets to be available to everybody before Microsoft publishes the fix. And that is only in the interest of Google, not the customers. When today's Google writes that something "gives clear benefits" you can almost imagine them as the film villain saying to the camera, with a wink afterwards, "for us, muahhaha." In this case I actually agree with Microsoft and with what they write:

"We believe responsible technology industry participation puts the customer first, and requires coordinated vulnerability disclosure. Google’s decision to disclose these vulnerabilities before patches are broadly available and tested is disappointing, and puts customers at increased risk."

Edit: The distinction is important: the "malware authors" who have access to the 0-day before Google publishes it are a minority of all malware authors: 0-days are carefully guarded and have a high price. The moment Google publishes the details, the whole malware community has access to a provably working 0-day exploit. The difference is huge, orders of magnitude, down to every script kiddie.


> There's no need to know the actual details of exploit to provide workarounds

Your premise is wrong - there is a need. The details can be used to create signatures for anti-malware software and IDSes by 3rd parties who are nimbler than Microsoft.

Additionally, there is more to be worried about than just workarounds: some organisations do full-take packet capture archiving and might be interested to know (retrospectively) whether they were hacked via this flaw before (more) data is exfiltrated; the sooner they are made aware of an actively exploited bug, the better.
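As a toy example of what that retrospective sweep could look like once indicators are public; the log layout and indicator domains below are invented, the point is only that the sweep becomes possible once details are out.

  import csv

  # Invented indicators; in practice you'd load domains/hashes/URL patterns
  # published alongside the disclosure or by your threat-intel provider.
  INDICATOR_DOMAINS = {"bad-flash-cdn.example", "spearphish-landing.example"}

  def sweep_proxy_log(path):
      """Yield past requests whose destination matches a published indicator."""
      with open(path, newline="") as f:
          # assumed columns: timestamp, src_ip, host, url
          for row in csv.DictReader(f):
              if row["host"] in INDICATOR_DOMAINS:
                  yield row["timestamp"], row["src_ip"], row["url"]

  if __name__ == "__main__":
      for ts, src, url in sweep_proxy_log("proxy-archive.csv"):
          print(f"{ts} {src} fetched {url} -- worth a closer look")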

> ... If they don't, everybody has to wait for Microsoft anyway, but the malware authors get one now famous 0-day

Wrong again: there is no need to wait for Microsoft, for the reasons I mentioned above. Before this, only the bad guys knew about the exploit; now the good guys also know about it and may work to mitigate it. We have no idea how well known this exploit was in underground forums, so it is hard to quantify how many more bad guys know it for certain.


> The details can be used to create signatures for anti-malware software and IDSes by 3rd parties who are nimbler than Microsoft.

No, you don't need to publish the details of the vulnerability to the public in order for the anti-virus vendors to cooperate.

> some organisations do full-take packet capture archiving and might be interested to know if they were hacked via this flaw

Observing full packets has nothing to do with giving the public the details of which system API flaw was used for the attack.


> but the malware authors get one now famous 0-day.

They already had that. That's what active exploitation means. It's not a 0-day after it's known.

> And that is only in the interest of Google, not the customers.

Sure. Just like newspapers report misconduct and it's only in their own interest, not their customers'. You seem to be under the impression that companies will endeavor to fix problems as quickly as possible no matter what, and that public exposure won't change that timeline at all. History tells a different story.

Edit for your edit:

> 0-days are carefully guarded and have high price.

They are carefully guarded because every use risks discovery of the exploit, which reduces its value. Being identified in the wild to the level seen here, or at least in an attack of this importance, indicates that the cat's already out of the bag. You can assume that being used in a high-profile attack like this effectively eliminates it as a zero-day at some point, as forensics will be done. I wouldn't be surprised if the exploit was dumped to the general malware public as soon as the hack was made public, which also helps to obscure the origin slightly.

> The moment Google publishes the details, the whole malware community has an access to something that is provably 0-day working exploit. The difference is huge, more orders of magnitude, up to every script kiddie.

You act like we can reliably make predictions about what will happen if it's not released. It's entirely possible it could be, would be, or already has been dumped to the greater malware community, and not releasing the details would just give those same script kiddies access to a populace that hasn't had a chance to mitigate the attacks.

When presented with the option to keep information about danger to the greater populace secret or take it live when taking it live allows people to take their own actions to protect themselves, you take it live. Preventing people from taking their safety into their own hands by hoarding the information is immoral. It actively prevents them from protecting themselves.


How do you imagine any company actively "protecting" itself from this specific attack vector by knowing exactly these details? I'm able to extract from the details what's needed to make the attack, and I just can't imagine the defensive use. The attackers are faster; exactly during the window between the publication by Google and the patch by Microsoft it's the attackers who are better off, by orders of magnitude. Unless you give some specific examples, I assume you don't know what you're talking about.


> How do you imagine any company actively "protecting" from this specific attack vector by knowing exactly these details?

Adobe patched Flash before it went public. Chrome protects from the specific Windows problem. Using Chrome for the time between the announcement and a fix would mitigate the attack. This course of action would leave anyone who was not already doing it safer, and in this case is easy to do.

> The attackers are faster, exactly during the time between the publishing by Google and the patch by Microsoft it's the attackers who are better off by the orders of magnitude.

As I said above, an easy mitigation exists. Using Chrome for a few days, if you weren't already, is all you have to do to protect yourself from remote exploitation of this until it's patched.

It's not any one person or organization's place to decide the course of action for everyone else because they have secret information. The responsible thing to do, if people have the ability to change how it affects them, is to disclose this information. To do otherwise is to make everyone's decision for them. It's the same principle that leads to the freedom of the press. The people have a right to know about information that affects them when their choices matter.


> Using Chrome for the time between the announcement and a fix would mitigate the attack.

This example you gave supports my claim, not yours: "they can use Chrome" is exactly the kind of advice the public can be given without needing to know which API in which system DLL contains the bug and under which circumstances it's triggered, knowledge which can be used to develop new attacks.


Except that once it's made public, other browser vendors can put in their own mitigations as well. Firefox would probably have a patch of some sort within hours.

You might respond that Google should just tell the other browser vendors then. But why just the other browser vendors? Why would Google get to choose who qualifies as worth notifying?

I stand by my assertion. When the public is at risk, and they have the ability to take steps to reduce that risk, it is their right to know about that risk so they can take those steps if they desire.

If there were a gang of thieves working through an area, taking advantage of the local population because the majority of locks sold in the area had a major flaw, I would expect it to be reported. That may spur the thieves to change how they operate, and may in some ways make them much harder to catch, but it's irresponsible not to notify the people in danger so they have a chance to protect themselves and their property, and to explain what the problem is. They need to explain how it works so people can feel assured any fixes put in place actually fix the problem, and the easiest way to tell if you are in danger is often to try to exploit the flaw yourself, because sometimes it affects more than the discoverer and the company involved thought.

You either believe the responsible thing to do is to report the details, or never report them. If you believe the responsible thing to do is to report the details, then it's just a matter of when. If it's being actively exploited, like it is here, and if you agree that a year is too long, and no time is too short, then it's a matter of choosing an amount of time that provides the vendor time to fix the exploit while reducing the number of people that might suffer in the meantime. You can either try to make decisions on a case by case basis, which will ultimately be arbitrary, or you can set a policy and follow it.

Microsoft had a week to get a fix out. If it was important enough to them, they would have. They have the talent, time, money, and existing resources to do so. It wouldn't necessarily have been easy, but not doing so is a failure on their part, not Google's. I won't fault Google for giving me the choice to protect myself when Microsoft won't.


> And that is only in the interest of Google, not the customers

Long term, it's in the interest of the customers as well; that's the only way to encourage companies to release fixes sooner!



I know about that post from 2013 and I still don't agree.

Just like I don't agree with how Google implemented AMP, making google.com a link redirector that didn't exist before and was actually used in the phishing campaign, specifically:

http://seclists.org/bugtraq/2016/Apr/70

That is what was actively used to phish the logins from Podesta et al.

Google's response:

https://sites.google.com/site/bughunteruniversity/nonvuln/op...

"tooltips are not a reliable security indicator"

Translation: "we don't look at that sh.t"

"poses very little practical risk."

See how Podesta et al. were tricked.

"offers fairly clear benefits"

Translation: "For us. Muahhaha."



