Another Ransomware Outbreak Is Going Global (forbes.com)
504 points by smn1234 145 days ago | 410 comments



Maersk is down. Their main site says:

    Maersk IT systems are down

    We can confirm that Maersk IT systems are down across multiple sites
    and business units due to a cyber attack. We continue to assess the
    situation. The safety of our employees, our operations and customers'
    business is our top priority. We will update when we have more information.[1]
Maersk is the largest shipping company in the world: 600 ships, with capacity for 3.8 million TEU of containers. (The usual 40-foot container counts as two TEUs.) If this outage lasts more than a few hours, port operations worldwide will be disrupted.

[1] http://www.maersk.com/en
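For anyone doing the container math, the TEU conversion above is simple enough to sketch. The fleet mixes below are made-up illustrations, not Maersk's actual numbers:

```python
# Standard TEU (twenty-foot equivalent unit) values per container size.
TEU_PER = {"20ft": 1, "40ft": 2}

def fleet_teu(containers):
    """Total TEU for a dict of {container_size: count}."""
    return sum(TEU_PER[size] * n for size, n in containers.items())

# A 3.8M TEU capacity could hold, for example:
assert fleet_teu({"40ft": 1_900_000}) == 3_800_000
assert fleet_teu({"20ft": 800_000, "40ft": 1_500_000}) == 3_800_000
```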


The web sites that are supposed to give APM port status are frozen. It appears that many (all?) APM terminals worldwide are not accepting incoming trucks. Unclear whether ships are being unloaded.

There's surprisingly little info about this from the actual ports. Even Twitter output has become so PR-controlled that nobody involved is getting important information out. APM, Maersk, and the Port of Los Angeles all have Twitter feeds, and none of them have any useful info about this. Even the Port of Los Angeles Police have nothing.

The Port Authority of New York and New Jersey has a clue. Their alerts feed has useful info.[1]

    6/27/2017 4:30:08 PM

     APM closed 6/28 & plan to open 6/29 6:00 am, 
     gate hours to 7:00 pm (cut off) 6/29 thru 7/7. 
     Free-time will be extended 2 days due to service impact.
     (The free time extension means customers have two extra days to 
     bring back their empties before being charged.)

    6/27/2017 1:14:23 PM

     Due to extent of system impact, APM Terminals will not be opening 
     for the remainder of the day. Updates on tomorrow's status to follow.

    6/27/2017 9:12:22 AM

     APM is still experiencing system issues. Please delay arrivals.

    6/27/2017 8:58:03 AM

     APM Terminals is still experiencing system issues. Please
     delay arrival until further notice. Updates will follow.

    6/27/2017 7:53:09 AM

     APM Terminals is experiencing system issues and working to 
     restore. Please delay arrival.
Whoever is posting those seems to be the one person on the planet sending out useful info about this. The biggest container terminal on the East Coast is closed today and tomorrow.

[1] http://btt.paalerts.com/recentmessages.aspx


And the next relevant update:

6/27/2017 7:11:15 PM

As of 6:30 Tues. 6/27, APM Terminals employees are still without email or office telephone services. No emails or voicemails can be accessed or answered. Please standby for PA Alerts or for critical matters please contact Giovanni Antonuccio (908) 966 - 2779.


That's bad. Maersk hasn't been communicating with the shipping industry. Journal of Commerce says nobody is getting useful info about Maersk's status.[1] Now we have a hint as to why - they can't even communicate internally.

The Maersk site still has nothing but a statement that they are down. Maersk's Twitter feed has nothing useful. No press releases. The only useful comments are coming from non-Maersk port employees.

[1] http://www.joc.com/maritime-news/container-lines/maersk-line... [2] http://labusinessjournal.com/news/2017/jun/27/maersk-halts-o...


> Maersk's Twitter feed has nothing useful

I wouldn't be surprised if nobody had access to the password.


Maersk Line's login site for customers is down, with a message saying their systems are down.[1] APM Terminals, their business unit which runs ports, has their web site down with a 500 error.[2]

* Los Angeles APM container terminal shut down for today according to press report.[3] No mention of this on APM web site.[4]

* Port Elizabeth (NJ) APM container terminal is down for incoming trucks, according to Port Authority of NY and NJ site.[5] No mention of this on APM web site for the port, so apparently APM web site updates have stopped.

* Mobile (AL) APM container terminal is down.[6]

[1] https://my.maerskline.com/ [2] http://www.apmterminals.com [3] http://www.sgvtribune.com/business/20170627/la-ports-largest... [4] http://www.apmterminals.com/en/operations/north-america/los-... [5] https://www.panynj.gov/port/ [6] http://wkrg.com/2017/06/27/widespread-cyberattack-impacts-co...


Great. Maybe we can finally put a price on lack of security protocol.


The article says the ransomware affects even patched Windows boxes. Perhaps what you mean to say is, "Great. Maybe we can finally put a price on using Windows."


Patched machines are affected, but they probably weren't the initial entry point. We still rely too much on having a single line of defense.


My understanding is that patched computers are only affected via pass-the-hash from an unpatched computer.


Yes, a zero-day is no excuse for a lack of defense in depth.


OSHA violation in 2025: operator was not using a deterministic operating system


What you are suggesting has grave implications for those who cannot or do not want to mess with formal verification techniques. This is the future of computing, and a lot of people will be left behind once this catches on.


>lot of people will be left behind once this catches on

if this catches on.


I dream of seeing a "security first" development process adopted...


Are you sure about that? You do know most organizations will implement that as a huge amount of bureaucracy for every commit, rather than proper man-hours of security-oriented development.


Only because most organizations don't know how to be effective at security.

It's not hard. You don't actually have to change much. You just have to schedule regular pentests, ideally every couple weeks.

Pentests protect everyone because it's our job to worry about all of the security flaws that you can't possibly be aware of in your normal day-to-day development cycle. There's just too much for any organization to know about except security companies. This way you can focus on development and we can focus on pointing out how to fix what's broken.


Pentests aren't a magic bullet either. You can easily find a consultant who isn't going to rip you a new one.

Security is a mindset. Any "checklist" approach will eventually devolve into ass-covering by an organization that is not internally motivated to run a tight ship. Legitimate variances will be hassled to no end, while actual security vulnerabilities will be ignored.


In the real world, one of the only reasons people get pentests is because another company is forcing them to. That results in a document saying company B is secure.

This is a very effective approach at cutting through ass-covering. Company B has to fix the security problems uncovered in the pentest. There is no other option. And I've seen it take products from "SQL injection by typing an apostrophe" to "It'd be very difficult to exploit this app."

If that's not proof that pentests are effective, then I'm not sure what would be.

We like to say that security is a mindset, but developers have way too much on their mind to be aware of every possible security vector. It's easier and more effective to punt and let us worry about it instead.


There are different levels of penetration testing too. I worked at a SaaS startup, and when we got our first big customer they demanded we get a third party to run a pen test on us. They basically ran their script and gave us a report. There might have been some minimal back and forth about false positives, but that was about it. That's better than nothing, but may not be what some of the more technically/security-minded folks here would consider a real pen test.


"It's not hard."

No, it is not, you just need skilled people working on it. Oh, those people want money for it ...


Exactly. It's not hard, it just costs some money.

It's exactly the same as physical security. You build fences and buy locks. You pay people to keep an eye on things. You take insurance to cover the rest of the risk.

Nothing hard, no new inventions required. It just takes some attention and cash. It's part of the cost of being in business.


Wait, the hardness of information security comes because it has to be built-in everywhere since everything is connected and so everything is a potential attack surface.

It's not impossible but it requires a somewhat universal attitude change.


I want to agree with you in principle, but in practice it's not possible to be secure with just an attitude change. The attack surfaces have grown too large. Keeping track of all possible vectors is a full-time job in itself. You either need a dedicated security person or regular pentests. And honestly, regular pentests are probably more effective.

It's a positive statement though: it is possible to be constantly secure if you just get a pentest every few weeks. Big companies can even afford to make it a requirement of their release cycle.


> Big companies can even afford to make it a requirement of their release cycle.

Oh man. I have a peer who works for a very large international company. They require pentests in their release cycle. What could go wrong?

Turns out that pentesting isn't in the final portion of their release. They tag a release candidate (e.g. v5.7.0-rc), send that build to the pentesters, then fix other integration and user-acceptance bugs while the pentesters are working. The pentesters may greenlight v5.7.0-rc when it's really v5.7.3-rc that's shipping, and the pentesters are none the wiser.

Security only works when the culture supports it.


Attitude change in the sense of not being willing to allow inherently insecure architectures: management always moving the company towards secure-on-principle designs (I'm not qualified to say whether it's a good example, but Google's BeyondCorp aims to make everything secure on principle, meaning not leaky on principle). That, added to any pentesting or other necessary immediate security measures.

The impression I have is that today's event was the result of a lot of companies allowing insecure-on-principle architectures, like a zillion apps each with their own update structure (a random Ukrainian enterprise app supplier gets penetrated and the whole world goes down). A pentester might never be able to find that vector until that app supplier leaves their door open or someone finds out about them, for example.


And people skilled at picking the skilled people and a willingness to actually do what the skilled people say... when those skilled people aren't necessarily the same as the managers shouting managementese...

And this also collides with the willingness to do anything to save a couple of dollars, and once that dictate isn't flowing through every ounce of the company's blood, who knows what will happen.


Pen-tests show the presence of vulnerabilities, not their absence.

To make secure systems, we need to take the (very) difficult road of working our systems bottom up and proving the absence of vulnerabilities and defining the boundaries of safe operations.


What I really want to see is security being integrated into the development process as a conscious tradeoff teams have to make.

When a new feature is proposed, it's rare to hear someone object on the grounds that it could potentially add new vulnerabilities, but in the long run an approach that recognizes and considers those risks would be beneficial.

At the same time, this is incredibly hard to do - managers celebrate employees who develop things that look cool and awesome, not employees who can mitigate risk and manage security effectively (hopefully this changes, but I can't imagine that many unaffected CEOs are calling up their sysadmins right now and congratulating them on their diligence in making sure all their machines are patched).


Definitely a problem. People (incorrectly) equate vulnerability scanning with pen testing. Vuln scanning is often a component of a pen test, but we do a bad job of explaining the distinction. A pen test should attempt to use the app(s), and maybe test the people and process, not just profile the software versions and complain that they are out of date or misconfigured.


[flagged]


Do you think the engineers at Microsoft who were responsible for these badly designed and buggy systems wouldn't have been able to get credentials?


What does it have to do with credentials?


Or on some IT guy being asleep at the wheel...


No. IT management owns this.


A lack of professionalism owns this. In other industries engineers can overrule management and have legal protection when doing so.


My company is actively helping enterprise companies and cyber insurers do just this.


This afternoon I was sitting next to a Maersk employee when people walked in with bricked laptops. This person didn't believe it immediately (with all the fake news these days), so he tried to get it verified through some former colleagues. One minute later his laptop wasn't working anymore. He was lucky: his laptop was synced with a corporate OneDrive subscription, and he can continue from home on his personal iMac.

Externals and people with a MacBook could continue working.

Some departments asked personnel to stay home tomorrow.

Mail seems to be down as well, although I don't understand why, as it is hosted on outlook.com.


I gotta say, I really like that I managed to get my own, snowflake, self-managed linux notebook at my place of work.

I mean, all of IT can access the box via the password in the vault I gave them. That's just the right thing to do. But no one touches or updates my fortress of last hope but me, from a local shell.


"Maersk is down."

They made themselves fragile to this attack. It was completely gratuitous.

They are large enough to chart their own destiny and critical enough to care deeply about it ... and they built on top of cutesy new versions of Windows that everyone knows are garbage.

How does that old saying go?

"Fool me once, shame on you. Fool me a multitude of times, in varying circumstances, over and over and over again for two fucking decades, shame on me."

Something like that ...


By the looks of it, it will be down for several hours, hopefully. And sorry if this sounds wrong, but that's actually a good thing: only with real damage like this will security be taken seriously.


The parallels with the "Daemon" in Daniel Suarez' novel are scary.

small spoiler ahead

This Daemon is an AI that holds big companies' data hostage: it will destroy all of a company's data if the company does not pay protection money, or if the company involves law enforcement.

Because a lot of companies in the novel don't stick to the AI's rules, these companies go down with the exact same symptoms as Maersk is now having:

  - unable to do business
  - unclear what happened
  - declining stock prices
https://en.wikipedia.org/wiki/Daemon_(novel_series)


This is also a fabulously entertaining novel.


It seems fitting that the top comment on such a big congestion issue be posted by Animats :-)


FYI to Sysadmins: Paying the ransom at this point will be a waste of money, as the contact e-mail address has been blocked.

https://posteo.de/blog/info-zur-ransomware-petrwrappetya-bet... (German)

https://posteo.de/en/blog/info-on-the-petrwrappetya-ransomwa... (English)


It's always seemed like the best way to end ransomware is to launch hundreds of variants that demand money but don't actually decrypt anything. Unethical, to be sure, but eventually people would learn not to give them money.

All the competent ransomware authors are probably quite unhappy whenever a defective ransomware strain pops up.


If it gets big enough then people just hear from each other if paying unlocks the data or not.

The best way to end ransomware is to get serious about security. In many cases, being hit by ransomware is a low price to pay compared to a targeted attack.

edit: Also, I imagine it gets easier after you've written one, i.e. a lot of ransomware comes from the same authors. So an author could gain a reputation by signing messages saying: yes, this is our ransomware, we always unlock after receiving the payment.


>If it gets big enough then people just hear from each other if paying unlocks the data or not.

The idea would be to create "fake" ransomware that looks exactly like the real one

>The best way to end ransomware is to get serious about security

No matter how serious you get, there are always going to be bugs; there isn't a single piece of mass-distributed software in human history without them. That said, we should try to improve the security of software, but expecting it to be THE solution is wrong.

>Also, I imagine it gets easier after you wrote one i.e. many ransomewares come from the same author. So he could gain a reputation by signing messages saying that yes, this is our ransomware, we always unlock after receiving the payment.

Forging a signature is not that hard.


If forging a digital signature is not that hard, then you can release a great scientific paper moving crypto decades ahead, or alternatively you can make billions.


I was talking about pixel-made signatures; you know, the kind the user actually sees once the computer is already infected, not public/private key cryptography. Otherwise it's a chicken-and-egg problem: how do you know which signature is the "real" one? You google it and hope nobody has gamed the search results? Go to the official website of the ransomware developer?


The ransomware can present the key fingerprint for example.

But even without it, there are so many options, e.g. timestamping a signed message on the blockchain before the release. After just one confirmed message you don't care about pretenders, because people can check whether the signature matches the previous message.
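The fingerprint-comparison idea mentioned above can be sketched in a few lines. The key bytes here are placeholders; a real scheme would fingerprint an actual public key, the way SSH does:

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """Short SHA-256 fingerprint of a public key, SSH-style."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    # Show the first 8 bytes as colon-separated hex pairs.
    return ":".join(digest[i:i + 2] for i in range(0, 16, 2))

# Victims compare the fingerprint the ransomware presents against
# the one published alongside a previously confirmed unlock.
known = fingerprint(b"operator-public-key")      # placeholder bytes
presented = fingerprint(b"operator-public-key")
copycat = fingerprint(b"copycat-public-key")

assert presented == known   # same key, same fingerprint
assert copycat != known     # a pretender's key won't match
```

This only authenticates "same operator as last time", of course; it says nothing about whether that operator actually decrypts after payment.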


I think you are overestimating the technical capabilities of the average ransomware victim.


It's enough that some technical people verify it. The average ransomware victim gets his info from news sites and more technical friends.


Is it not cryptographically possible to create a transparent provably-operational decryptor on top of something like ethereum?


It's a marketing issue. People likely to get hit with ransomware are incredibly unlikely to understand what that means. Hell, even main devs have trouble writing contracts, so even if a user knew there was a smart contract, verifying it would be another thing. So it'd get reduced to "guys on Twitter said this one works".

I like the idea though.


Since you can't store the private key needed to decrypt the files in ethereum, I can't think of how to do this.

All blockchain state is public, since it needs to be calculated and verified by all nodes, so there's nowhere to stash a private key without revealing it.
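Without a blockchain, the usual proof-of-capability is a free sample decryption: the victim picks one ciphertext, the operator returns its plaintext, and the victim checks it against a hash recorded before infection. A toy sketch of that handshake, with XOR standing in for the real cipher (actual ransomware uses proper symmetric crypto with asymmetric key wrapping):

```python
import hashlib
import os

def xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher'; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                       # held only by the operator
sample = b"contents of quarterly-report.xlsx"
ciphertext = xor(sample, key)              # what the victim is left with

# Victim submits one ciphertext; the operator returns the plaintext,
# which the victim verifies against a hash taken before infection.
expected = hashlib.sha256(sample).hexdigest()
returned = xor(ciphertext, key)
assert hashlib.sha256(returned).hexdigest() == expected
```

The catch is that decrypting one file proves possession of that file's key, not a commitment to release all keys after payment, which is the gap the smart-contract idea was trying to close.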


It's a lot more likely that the attackers make money via the markets than via a direct ransom.


It's like saying best way to fight heroin addicts is to supply market with poisoned heroin. No heroin users - no problems!


The US government did this with alcohol during prohibition,

> by the time Prohibition ended in 1933, the federal poisoning program, by some estimates, had killed at least 10,000 people.

http://www.slate.com/articles/health_and_science/medical_exa...


A non-strawman analogy would be selling fake heroin that looks exactly like heroin but does nothing at all. This analogy is more exact because the resource you are wasting is the same: money (not lives, as in yours).


except that getting malware isn't addictive...


Your information is tho


Interesting. If I was Posteo I don't think I would've been so quick to ban the email; this will potentially cause a lot of harm. What about all the people that need their data back? They have no way to get it now. Plus many people are still going to send the money, only to get no response from the email.


This is indeed a heavily debated topic.

It will cause a major headache for those who pay and will hopefully make people learn to distrust ransomware, in turn making it less lucrative.

On the other hand, that requires a fair number of "acceptable casualties" so to speak.

I personally think both sides of this are valid and don't know what the best option really is. It will be interesting to watch how things evolve at least.


>will hopefully make people learn to distrust ransomware, in turn making it less lucrative.

Ransomware will never ever not be lucrative. Preventing people from getting their data back doesn't discourage future campaigns and primarily hurts the victims of the ransomware.


[citation needed]


Seriously? The whole idea is so fundamentally stupid.

1) Ransomware authors have obvious economic incentive to decrypt, and no reason not to. This makes it a herculean task to convince the general public that they wouldn't do so.

2) By the time your data is encrypted, you'll be researching your specific ransomware strain and will find out if it's legit or not. Googling the onion address is an obvious choice and something the ransomware author can just tell you to do.

3) Most people will need someone more technical to arrange the bitcoin payment anyway, these people will verify if the ransomware seems to be legit or not.

4) People don't magically get smarter, phishing still works if you pass the spam filters.

5) Winlockers were immensely lucrative even before they started using crypto.

6) Unless you're going to run your fake-ransomware campaign at an immense scale you'll never drown out the real, working ransomware.

And then in the end, what was your goal anyway? Good job, now you've deleted millions of people's data on a retarded mission to "stop ransomware". But hey, at least you stopped those evil Russians!!!

There are precisely zero good arguments for preventing people from decrypting their data.


> 1) Ransomware authors have obvious economic incentive to decrypt, and no reason not to. This makes it a herculean task to convince the general public that they wouldn't do so.

It's irrelevant; this has nothing to do with the fake ransomware.

>2) By the time your data is encrypted, you'll be researching your specific ransomware strain and will find out if it's legit or not. Googling the onion address is an obvious choice and something the ransomware author can just tell you to do.

The search results of any onion address are just as fake-able.

> 3) Most people will need someone more technical to arrange the bitcoin payment anyway, these people will verify if the ransomware seems to be legit or not.

Sure, with their ransomware-detecting powers

>4) People don't magically get smarter, phishing still works if you pass the spam filters.

What does that have to do with anything?

I got bored of answering; in general your points seem weak, which makes you sound a bit too much like a ransomware creator. Probably not, since your account is 3 years old, but otherwise you would.


>I got bored of answering; in general your points seem weak, which makes you sound a bit too much like a ransomware creator. Probably not, since your account is 3 years old, but otherwise you would.

Not a ransomware creator but I understand the economics at play. Ransomware is more profitable than sending spam, unless you're spamming to spread malware.

The value of individual installs has historically averaged at significantly less than a dollar each, ransomware is bringing that way up.

You aren't going to stop ransomware unless you figure out a solution to all other malware, or invent a more profitable scheme. People need to do something with their bots and ransomware is always going to make more money than spamming from bots that haven't been able to inbox anything for 5 years.

There's simply no way you'll stop enough people from paying to make viagra spam beat ransomware.


Not really; ransomware is way more dangerous than selling fake viagra. I may want to kill you if you encrypt my data, not so much if you sell me a couple of viagra pills that don't work. When you scam someone (e.g. a Nigerian scam) you take money from one person (or a few) only; here you are taking data from a lot of people and hoping a very few will pay, making a lot more enemies in the process, likely including state actors, which may make it a federal crime to pay such ransoms.


Diminishing returns, running spam botnets is already so risky that making more enemies by graduating to ransomware probably doesn't make a perceptible difference. Do you go to prison for 25 years or 30?

Sure, you could probably deter ransomware by sending DEVGRU to murder the authors, but I doubt it's worth the political shitstorm that'd follow.


It's also a good way to catch the authors. They might accidentally log in without using tor. (It's happened surprisingly often.)


It doesn't seem that getting ransoms was the actual goal of this malware, it seems rather like plausible deniability.

This is a good description of some of the details: https://medium.com/@thegrugq/pnyetya-yet-another-ransomware-... - it rather looks like a targeted attack meant to cause chaos and damage in Ukraine.


It's surprising that the attackers ask victims to send an email. Why not ask victims to publicly post a picture of their screen to social networks with a certain hash tag (and a new account)? That would be less traceable and harder to shut down, I think.

Not that I want to give attackers any ideas... :-)


There's so many creative things you can do when you imagine yourself as one of these attackers.

- Make a big target amount of money that any large company can pay, eg $10M, and tell people you'll release everyone's key if the amount is raised.

- Use an online board like this one to control the state of your network.

- Embarrass individual firms by posting pics of their offices from their own webcams.

Etc, etc. I reckon talking about these sorts of things will help find solutions rather than just inspire the bad guys.


> Make a big target amount of money that any large company can pay, eg $10M

That sounds like a good way to get state level actors on your case.


Yes, as long as the payments are individually relatively small and anonymous, it's easy for people to underestimate the amount the attackers may actually be getting. Once you paint a target in the millions, people will notice more, it will become a bigger news story, some congressman or another will make it a pet cause, and then you've got a lot of attention on you. Like any criminal enterprise, the less attention from authorities the better.


Thanks, we'll give it a go next time.


I've seen variants that use Bitmessage and other anonymous messengers before.



It's not easy (I presume) to create such software. So why do they rely on some random e-mail provider? They could have done it so that computers unlock automatically after the address receives the payment. It's not that hard, the software could use multiple ways to get the private key (DNS, IRC, twitter, DHT) and it would be really hard to shut down.


Petya is ransomware-as-a-service: the author gives you the binary payload and the unlocking service, and it's up to the buyer to distribute it and infect people. This often leads to poorly set up operations like this one, where the buyer probably didn't expect their variant to spread so widely.


They should sue the pants off whoever shut down that email address. For many companies it would have been cheaper to pay than to suffer the damage that has already been done. And it would be easier to catch the culprits if people give them lots of cash because in spending the loot, they will make mistakes and make themselves visible to the police.

Now there is nothing to track until they rewrite their code and try the attack again with randomized email addresses.


This is even more proof how powerful a 0-day in the wrong hands can be.

All of the affected companies should be considered compromised by the NSA.

Actually, every single Windows PC with an internet connection that has been used before March 14 should be considered irrevocably compromised. Ransomware is much more visible than spyware. Think about all the spyware-infected PCs/networks that nobody knows about.


"Actually, every single Windows PC with an internet connection that has been used before March 14 should be considered irrevocably compromised."

March 14 of what year?

I would say 2000 but I am open to discussion ...


People who don't run Windows shouldn't get cocky! There are many, many attacks on Linux:

Here's one in the news from just last week: a ransomware case where the victim agreed to pay the equivalent of US$1M in bitcoin.

https://arstechnica.com/security/2017/06/web-host-agrees-to-...


Something to keep in mind. They were running:

Apache version 1.3.36 and PHP version 5.1.4

It's not like a brand new Ubuntu installation connected to the open Internet will suddenly be pwned. The owners of this company were beyond inept.


And that was probably the tip of the iceberg with regard to their outdated software -- Apache 1.3.36 and PHP 5.1.4 are both from around 2006, so I'd bet everything else in their stack was similarly old. Failing to update anything for 10+ years will get you in trouble, regardless of what OS it's on.


Seeing Apache 1 in the wild makes me a bit nostalgic.

What kind of utter lunatic would use that for their company today?


Lord have mercy, Apache 1? That's what you get bro.


That would be akin to running Windows XP. People running Anfient Monftrosities should not get cocky in general, attacks on old systems are only getting worse with time.


Unicode suggestion: "Monſtroſities"


You're implying that from March 14, 2000, Windows was very secure.

I think you're getting this backwards. If you say 2017, you and your children-comments' dates will be covered, because they are before March 14 2017.


No, the implication is that Windows prior to that was insecure. That does not mean it's secure afterwards, just that we know it was insecure previously. You are extrapolating without evidence.


I think it's more accurate to say that the comment is explicitly stating that Windows was insecure prior to that date, the implication from which is that it was not as insecure after (else why make the distinction of the date at all).


> the implication from which is that it was not as insecure after

I'm saying there is no specific implication without confirmation from the author, as the statement can be taken either way, and any implication you think you see has more to do with your state of mind than with the statement itself. It's a statement about what we know. We know something to be factually true prior to that date. Afterwards is open to debate, and is opinion. Making a statement about the period we have facts for does not imply anything about the period we do not have facts for.


I feel like you and I are not operating on the same definition of implication.

In the above comment when using the word implication my intent was "a conclusion that can be drawn despite not being explicitly stated".

To be unambiguous, the explicit statement is that computers prior to a specific date should be considered to be compromised. The conclusion that can be drawn, based on the fact that the writer specified that date, is that later dates did not qualify for the same statement, because the conditions were not sufficient. That is to say, that they were not insecure enough for the writer to include in his comment. That is the implication, despite the writer not saying outright that computers after that date were "secure".

The conclusion assumes the credibility of the writer, and the intellectual honesty of their comment (i.e. they didn't put that date there just to be facetious) but I believe that's a fair assumption given the context of questioning the semantics.

I also note that the actual implication here is not that computers are secure after that date, or even that computers are insecure but not compromised. The implication is, in fact, that while computers might be compromised after that date, the writer doesn't believe it's worth advising people to ASSUME they are compromised.


> the above comment when using the word implication my intent was "a conclusion that can be drawn despite not being explicitly stated".

Yes, that is the same definition. But it is an error to draw the conclusion in question because it requires unsupported assumptions. That's why it's not implied in the original statement.

> The conclusion that can be drawn, based on the fact that the writer specified that date, is that later dates did not qualify for the same statement, because the conditions were not sufficient.

No, the later dates did not qualify because the knowledge is insufficient; or, if you allow that the knowledge was an implicit part of the statement, it's no longer a binary proposition. If there are two propositions that must be true for the original statement (we were insecure, and we know we were insecure), there are multiple alternatives. The problem is you are assuming a single one of the possible alternatives is implied, when it's not.

For example, I can say "up to this point in life, I haven't committed a felony." That does not imply I plan to commit a felony by itself. With additional context, it may or may not. I could just as easily follow that statement with "I don't see that changing any time soon" as with "I'm not sure if it's likely I'll still be able to say that next year." That additional context combined with the original statement carries the implication. In this case, people are assuming it's along the lines of one of those followups, when there is really no disambiguating context. Assuming one or the other is a problem of the person interpreting the statement, and in my opinion the root cause of quite a few arguments as a result of misunderstanding, which is why I called it out in the first place.

> That is to say, that they were not insecure enough for the writer to include in his comment.

Or they decided for whatever reasons they did not want to mention it. For example, to simplify the message and call attention to what they thought was of greater importance. Don't assume intent without evidence.

> while computers might be compromised after that date, the writer doesn't believe it's worth advising people to ASSUME they are compromised.

Which is a valid stance to have. I don't believe it's useful for the average person that has stayed patched to assume they are compromised. To assume so would mean never logging into any online account in my case. I believe it's useful to assume you are always under some level of attack, whether active or passive, and take precautions, but to assume you are compromised is quite a bit farther than that.


I sent my first packet-of-death to an unprotected Windows machine in 1996, so...


I used an IRC client called BootFucker Pro back then that had all of the weaponry built in. Those were the good old days.


1997 here :-)

What was that XP exploit app from back then... I can't recall what it was called...


To be clear XP wasn't available in 1997.


Might have been 98/99, can't recall... I started at that company in 97

But it was back orifice I was thinking of.


XP was August of 2001, if you're curious.


Holy crap... then it was win 95 I was using back orifice against...



Winnuke? That was way before XP, though, I remember it crashed windows 95 and 98 first edition.


back orifice? subseven?


Subseven was just a trojan, tho a really fancy one I had lots of fun with as a kid.


Back orifice!


teardrop?


landattack


redbutton?


> On Tuesday, March 14, 2017, Microsoft issued security bulletin MS17-010,[7] which detailed the flaw and announced that patches had been released for all Windows versions that were currently supported at that time

https://en.wikipedia.org/wiki/EternalBlue


It looks like this was not caused by a 0-day, it is apparently using EternalBlue as execution vector plus another (already fixed) vulnerability for lateral movement.

More of a "100-day" at this point.


It also appears to be using common Windows lateral movement techniques based on credential stealing (namely WMI and PsExec), in addition to EternalBlue.


Maybe I'm missing something, but is there any evidence that this is actually a 0day attack? I didn't study the last outbreak that closely, but it seemed like it was a vulnerability that had been patched, but affected computers that weren't patched. Maybe I'm wrong though. But 0days or no, there will always exist some number of computers that have not been properly kept up-to-date and thus will be vulnerable to security exploits even after they've been disclosed and patched.


No, it's probably not a 0-day this time. But this exploit used to be a NSA 0-day before it became public. Everything that's happening now is the "lite" version of what the NSA is capable of.


Yeah, and the Department of Defense is capable of nuking major cities. And it's about as relevant to this discussion.


It's relevant because it's like the nukes were stolen and that it will continue to happen


I'd argue its relevant because you can't CTRL+C CTRL+V a nuke.


But would you download a car?


Yes.


Let me know when the DoD routinely has their nukes stolen, possibly without them ever knowing.


Everyone would notice a nuclear attack. NSA exploiting vulnerabilities to their own ends, not so much.


This is absolutely detectable, and IDS signatures already exist for EternalBlue (Let alone the fact that it was patched by Microsoft in March).


The previous one, WannaCry, was based on a vulnerability that was patched on later OSes. Microsoft went back and retroactively added patches for unmaintained operating systems (like XP).

It was based on an SMB exploit released in a Shadow Brokers dump; an unreleased exploit thought to have been used by the NSA.


> But 0days or no, there will always exist some number of computers that have not been properly kept up-to-date and thus will be vulnerable to security exploits even after they've been disclosed and patched.

You are correct about this. Patches were released in March, but many seem to have put off security-critical patching.


> Patches were released in March, but many seem to have put off security-critical patching.

In fairness to some of the unpatched - the last round of Windows 10 updates refused to install on some machines (well, mine and some others on Twitter), and trapped me in an endless loop of download-install-fail-download. When this happened my landline internet was down, so this was happening over 4G tethering, and burning up $20/day in cellphone data until I just turned off my internet/tethering.

I'm not saying don't patch (you should!), just that even people trying to stay patched and do the right thing can find they're unable to do so.


You are absolutely correct, people are even still wary after the aggressive Windows 10 update tricks, so it is extremely unfortunate yet does make some sense.

I hope Microsoft can find a way to earn trust back, this problem is going to get much worse if people do not install security patches ASAP when released.


Call me paranoid but I consider even a clean, freshly installed and fully updated Windows PC already compromised by the NSA.


Distrusting Windows was the wisest thing you did since you climbed off your horse. [1]

No, seriously. How is it paranoia to think the NSA was/is surveilling your Windows installation if we already have proof that they have the means [2] and motivation [3] to do it at scale?

[1] http://www.quotes.net/show-quote/34121

[2] https://en.wikipedia.org/wiki/EternalBlue

[3] https://en.wikipedia.org/wiki/PRISM_(surveillance_program)


There is no proof of means or motivation to use 0-days at scale. In fact, using EternalBlue "at-scale" would have caused it to not stay a 0-day for very long.


That's not true. When an exploit shows up on a computer, "How did it get there?" is often the hardest question. There's no way to know short of capturing it in a lab environment.

If you're talking about "at scale" being "the entire world," then yes. But usually the NSA tends to target their operations regionally, e.g. Iran.


To clarify, I am not talking about attribution. When I say "not stay a 0-day for very long" I am referring to the fact that 0-day use by any threat actor is generally going to be very targeted, because the chance of a PSP and/or network tap logging artifacts or alerting the user is extremely risky in regards to potential exposure of the intrusion, causing the 0-day to likely get burned (Since discovery allows for detection signatures and patches to be quickly created, as well as remediations applied to affected systems).


Any use of a zero-day risks burning it, and this was one of NSA's most potent zero-days. I imagine they used it rarely and wisely; probably trying other exploits first.


>and this was one of NSA's most potent zero-days.

Says who? We have no idea what they're sitting on, even our guesses come from terrible data.


And so now it's in the hands of people who have no such foresight. Which means soon it will be mitigated. Which means that despite all the pain right now, in the long run Wikileaks actually may end up having kind of helped humanity.


> Which means soon it will be mitigated.

It was fixed in a security patch one month before the Shadow Brokers leak. All computers affected by this ransomware outbreak (and WannaCry) were those who decided not to patch.


I suppose with the word "mitigation" kind of already having a connotation in the security community, I probably shouldn't have used it without making clear that I wanted the term to include its more banal implications such as "install the patch" and/or "get your systems off that old-ass OS!"


Wikileaks was not involved, they're securely posting CIA documents.


They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates. Also, Microsoft began to heavily spy on Windows users as part of normal operation, making it difficult to impossible to fully opt out.


I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless. There would be very little positive gain yet a whole lot of negative blowback from doing such a thing.


MS engineers can log in to your machine and run programs / download documents. There also is some keylogger that sends data back without warning you. I can't remember which bits you can turn off, which bits got backported to 8/7 without warning, etc.

To make a long story short: From what anyone can tell, there is no way for consumers to obtain a version of windows that has security patches and has the ability to run with sane privacy settings. There is an acceptable version called Windows LTSB, but you have to pirate it.

This has been discussed ad nauseum on HN and elsewhere.


What change?

Are you suggesting that there's a cast iron guaranteed way of saying 'this stuff should be in the OS and nothing else'?

If you are suggesting that, are you suggesting the trust root for that particular stack is something other than the vendor? If so who?

Take the example of Windows. Let's say they agree to put in a backdoor like DoublePulsar. Microsoft release the official OS and say 'we promise this is all good and only stuff that should be in here is in here. Honest.' How do we as third parties detect they've put something in there that shouldn't be?

I see you're CEO of verify.ly and have some background in this, so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.


> so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.

"Closed-source" certainly does not mean you cannot see the changes, just that far fewer people know how to read assembly/machine code to understand what is going on.

People frequently reverse engineer patches and updates, as the addition of features means more vulnerabilities. Security companies generally get a whole lot of free marketing in the press if they find and disclose major vulnerabilities (along with building detection/prevention into their products), so there is a large incentive there. Of course it requires trusting security companies not to hold back findings like that, a valid concern, but it is at least a step up from completely trusting the vendor to deliver non-backdoored updates.

> Are you suggesting that there's a cast iron guaranteed way of saying 'this stuff should be in the OS and nothing else'?

The security researcher mindset would be along the lines of "How does this new added/changed functionality work, and how could it be abused?" (You are correct that there is no guaranteed manner to find this, otherwise all software would be un-hackable which is not the case).


Thanks.

So to go back to these two points:

> They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates.

> I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless.

It would seem to me that these things are happening. 0days are being added (often to look like simple bugs) and security companies are detecting them and we're talking about them...eventually. So you're both right, but there's a period of sometimes years following the addition of a backdoor to it being discovered. And the NSA doesn't care too much if it's found as you can be sure it's not the only one as the ShadowBrokers showed.

Take the example in this thread - EternalBlue. That particular flaw was introduced in XP, wasn't it? And it survived all this time despite the uncountable security researchers poring over the code for a decade and more. It took a hack to reveal these tools.

Maybe the EternalBlue exploit really did just exploit a bug. Maybe it was a backdoor. It doesn't matter though. If it was a bug, it lay undiscovered for years which means there's plenty of opportunity for an actual backdoor to remain undiscovered too. So we have to deal with the possibility that 'exploitable code' (however it originated) may be around for decades and can be in every system as a result.

Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and it could lay undetected for a decade. It's happened before and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.

What about Heartbleed? This was another piece of 'exploitable code' that was around for years undetected. The examples of this are no doubt many.

It would seem to me then that there are plenty of cases where a 'backdoor' has been placed and plenty where a genuine mistake was made, but we can't ever really know which is which.

I guess that is the problem for us who talk about it as it encourages taking sides, where the reality is paranoid people are sometimes right in certain cases and cynics who think it's just a bug are right in others.


> So you're both right, but there's a period of sometimes years following the addition of a backdoor to it being discovered. And the NSA doesn't care too much if it's found as you can be sure it's not the only one as the ShadowBrokers showed.

EternalBlue was a vulnerability, not a backdoor, as a backdoor would imply it was intentionally inserted. Again, any proof of malicious code being intentionally inserted would be huge news and would permanently kill trust in the vendor.

> Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and it could lay undetected for a decade. It's happened before and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.

This would be huge news. A negative cannot be proven, but it would not really serve much benefit to theorize about intentional backdoor insertion without proof. Anger at something like that is best saved for a provable case (Think of it this way: To a non-tech person, it would be great for them to be able to express outrage/call their reps/etc when there is definitive proof of this, versus saying "oh I heard this was already happening so whatever").

> I guess that is the problem for us who talk about it as it encourages taking sides, where the reality is paranoid people are sometimes right in certain cases and cynics who think it's just a bug are right in others.

There is nothing wrong with being overcautious. Problems arise when worrisome conclusions are reached, causing some (for example) to be unsure about the safety of automatic updates. The effect of this would be users avoiding a perceived risk of a malicious update, yet allowing them to be more exposed to real known vulnerabilities by not installing important security patches.


The theory is that back doors are designed to look like bugs, precisely so you can make the argument you just made - that they are not back doors.


I honestly cannot tell if this is brilliant sarcasm or if you've somehow missed all the "very loud discussion" about Windows 10 on HN. :)


If you are referring to the level of analytics gathered, I fully agree! My point is, there would be a similarly loud reaction (at a wider scale) if a backdoor were introduced.


How could you tell a backdoor from a regular bug?

From a code perspective, of course.


Have you installed Windows 10 lately? It's all there in plain English.


I am definitely not a fan of all the default analytics gathered, not cool, but I took "cooperates" to be referencing legitimately malicious software.


This is absurd nonsense, but my viewpoint is a lonely one on HackerNews.


You're not the only one who thinks the idea of wearing a tin foil hat when you use Windows because the NSA only knows how to attack Windows is demeaning to the intelligence of other tin foil hat wearers.


What should I trust more:

A trade secret proprietary and obfuscated operating system from an organization known to collude with the government

Or

Code I have read in part, and know others read, and stand to believe that among all of us using those with the money or time would also audit

Given, we are all on predominantly x86 computers with proprietary obfuscated control processors that can seize control of the system and do whatever they are told by the manufacturer / those the manufacturer gives access to, so the security is in general a whiff.

Or more generally, don't use Linux for a false sense of security, because the security holes go much, much deeper than just the kernel and whats running on top of it, and Linux itself is nothing outstanding from a security architectural standpoint.


From the phrasing of your question, I suspect we disagree on the answer to your theoretically rhetorical question. I don't care what people could or would like to audit with their free time; I care what people do audit with their actual time, generally because they are paid or have a financial motive to do so.

Windows is fuzzed, analyzed, traffic analyzed, attacked, and picked apart inside AND outside Microsoft with higher frequency and greater depth than Linux is, regardless of which happens to be open source and theoretically easier to examine. If Microsoft were to inject malicious stuff into Windows it would be found and reported and exploited. There is too much money, too much exploit opportunity, and too much security researcher brand cred available to anyone who discovers even a hint of malicious behavior on Microsoft's part for it to go unnoticed and unreported.

And again, the point of the comment wasn't "Windows is secure" as nothing in tech is secure. The point was that someone who advocates wearing tinfoil hats around Windows to protect against the NSA while thinking Linux somehow gets a pass from those same bogeymen is not making a rational case for how to behave or what to fear.


It makes sense if you consider that some folks will only read headlines and potentially skim news coverage without checking any further into validity.


It's compromised by Microsoft, who would willingly (and would be required to) cooperate with the NSA upon request.


Yep, forced updates + NSL = they don't need 0days anymore.


That would never happen. A network tap would be able to detect a malicious update even if the main PC was implanted very well, and a Microsoft-signed malicious update would be worldwide news.

Please correct me if I am wrong, but I don't think there has ever been a single instance of this actually occurring, only "this could possibly happen" theories. I am definitely interested to hear more if this is not the case.


> That would never happen. A network tap would be able to detect a malicious update even if the main PC was implanted very well, and a Microsoft-signed malicious update would be worldwide news.

While I don't know of that specific scenario, Stuxnet used a hardware vendor's key to install infected drivers[1]. There was also a Chinese registrar that allowed a customer to man-in-the-middle Google[2]. Depending on how Windows organizes their driver updates, I could see an adversary doing a man-in-the-middle between Microsoft and their target, and pushing a bad driver update.

1. https://www.welivesecurity.com/2010/07/22/why-steal-digital-... 2. https://www.techdirt.com/articles/20140909/03424628458/china...


I am talking specifically about a malicious Microsoft-signed OS update in this context.

I fully agree with you regarding general problems which could occur with PKI.


"That would never happen" doesn't fly as a security proof.


I will concede that phrasing may be poor; a better way to put it is that "forced updates + NSL" would result in detection and a media firestorm, giving absolutely no benefit and obliterating any trust in Microsoft.


It's extremely risky to put out a mass update, yes. But if it were a targeted attack against an individual, the risk is greatly reduced, especially if that individual won't think twice about it.

With that said, you do have individual targets that are suspicious (e.g. https://citizenlab.org/2016/08/million-dollar-dissident-ipho...). There's always risk.


> It's extremely risky to put out a mass update, yes. But if it were a targeted attack against an individual, the risk is greatly reduced, especially if that individual won't think twice about it.

At that point, you'd have to hope the target would not check the hashes of update files. If detected, then there is the same issue: A signed malicious update being detected (and easily verified cryptographically if given to a reporter) would cause a catastrophic media firestorm, eroding trust in the vendor forever.
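For what it's worth, hash-checking a downloaded update file is straightforward to sketch. A minimal Python illustration, assuming you have a trusted digest to compare against (this is the general idea, not any vendor's actual update mechanism):

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, streaming in chunks
    so large update payloads don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path, expected_hex):
    """Return True only if the file matches the trusted digest."""
    return file_sha256(path) == expected_hex.lower()
```

Of course, this only detects tampering relative to a digest obtained out-of-band; a digest served by the same compromised channel proves nothing.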

> With that said, you do have individual targets that are suspicious (e.g. https://citizenlab.org/2016/08/million-dollar-dissident-ipho...). There's always risk.

0-day use against perceived "high value targets" is indeed a possibility and valid concern. No argument at all there.


A signed malicious update would be a Big Deal(tm), but the entity would also be able to survive it by claiming it was negligence. I don't believe negligence has been significantly penalized in the marketplace, aside from perhaps CAs, where damage can be limited (prevent new certs from being seen as valid; plenty of other options for sites). There's no such option available for penalizing Microsoft, and their lock-in is significant enough to limit nuclear options for doing so.

"We've revoked the signing key that was hacked by blah blah we have the utmost regard for security and adhered to best practices" and everyone would probably gloss over it for one instance.


Their update signing is surely performed using an HSM with strict procedures for getting production builds signed, due to the exceptional sensitivity.

I think you might underestimate the gravity of such a thing happening, it would not be glossed over.


What are the alternatives once an event occurs and Google/Microsoft/Redhat/?? claim it was an accident outside of their control (possibly due to negligence)? Yes, outside experts will be investigating to the best of their ability and there will be a statement about what measures have been put in place to mitigate the issue in the future. But what else would happen?


@willlstrafach, Nothing you have said convinces me the commentator you are replying to is wrong. Especially since an NSL would prevent ANYONE who detected anything from speaking about it. Updates that tweak code to introduce vulnerabilities are not something that's science fiction.


> Especially since an NSL would prevent ANYONE who detected anything from speaking about it

Forced malicious updates would indeed be a reasonable concern if this was somehow actually the case. It is not, though, and I am not sure how that would even work. Are you saying that when it is detected, the government would somehow become aware of the detection and threaten the finder with an NSL before they could tell anyone?


Just because YOU can't figure out how it works does not mean it's not possible, my friend. But I will say that when you have a backdoor, and suddenly that backdoor stops providing intel/data/whatever, it's usually a good indicator.


I do not know what you mean by this. Again, my point was that any backdoor is highly unlikely to stay hidden.


I point out yet again, to the Yahoo Email debacle. Google it please.


>a Microsoft-signed malicious update would be worldwide news

https://twitter.com/craiu/status/879690795946827776

>only "this could possibly happen" theories

Pre-Snowden a lot of things had been considered "could possibly happen" tinfoil hat theories, turned out a lot of them had not been mere theories.


> https://twitter.com/craiu/status/879690795946827776

1. That screenshot clearly shows the certificate is being treated as not valid. I assume it is being shared for IOC purposes.

2. I am referring to a software update, in the context of revmoo's "forced updates + NSL" comment.

> Pre-Snowden a lot of things had been considered "could possibly happen" tinfoil hat theories, turned out a lot of them had not been mere theories.

I could believe that is the case for those outside of the information security community, but nothing novel/tinfoil-hat-worthy was in the leaks, just confirmations of predictable sources/methods used for intelligence gathering and CNE work. Forcing a company to issue a blessed update containing malicious code is very different, and again, I am very interested to hear of any proof of such a thing occurring without detection (It doesn't seem possible for that to happen without it being detected and being discussed very loudly).


> All of the affected companies' should be considered compromised by the NSA.

Which is ironic seeing as the ransomware, like WannaCry, is using the NSA supplied 'EternalBlue' exploit.


The article doesn't say that it's using a 0-day vulnerability, nor does it say the NSA is involved.


Really? We're going to blame NSA for the companies leaving SMBv1 open to the public?


(Sorry for the repost but I feel the pain of sysadmins so it might be useful to some people as everything melts down around them this evening)...

Hey, FWIW we had to do some response for ransomware cases recently.

There was a lack of decent stuff out there for how IT teams should deal with it. So we contributed to putting together this quick checklist:

https://github.com/0xswap/guides/blob/master/ransomware-tria...

Would be great if more people wanted to add to it.


A minor nit: if you convert this over to markdown or ReStructuredText, it'll display more nicely on the page and be easier to move over to GitHub pages or the like.


Good idea! Will do that.




Is this confirmed to work? Can anyone else speak on this? If so, props to the guy who discovered this.


It is, although it isn't really a 'kill switch' in the sense that it can't be deployed universally, but per system it works. This could be considered temporary, though, as is turning off your computer if infected and NOT turning it back on. The encryption only takes effect after a restart.


Is a kill switch's purpose to protect against new infection? It doesn't help decrypt the files, I guess? What is it used for?


Why would a ransomware author include a killswitch in their software?


Even ransomware authors accidentally infect themselves and lose keys.


It's a means to detect sandboxing, either for testing or to foil analysis attempts by third parties.


The Netherlands and various other countries have created laws where either their version of the NSA and/or police can hoard 0days to be used for hacking.

This massive outbreak is so widespread that at this stage it appears that it either was a very recent 0day or something which only recently was fixed by a patch.

Instead of having loads of countries hoarding security problems I highly encourage a focus on security instead. Seems much better for the economy overall.


It is basically WannaCry without the kill switch. It is using the same exploits (EternalBlue). Not some recent zero-day, but sloppy patching.




Do you have a source for that?


Not OP, but he is right. I just walked out of work, where I had to reverse the sample. It indeed uses EternalBlue (it attacks by enumerating local network IPs with Windows APIs and by randomly scanning the internet). Apart from that, it overwrites the MBR with a custom bootloader and schedules a restart ("shutdown /t /r") as SYSTEM after a random delay. After rebooting, it fakes a chkdsk and, meanwhile, encrypts your files.

It is also true that it uses PsExec to spread.

TL;DR good old Petya ransomware (old as shit) with a copy/pasted EternalBlue-based spreading method. Nothing new.
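To make the target-selection behavior described above concrete, here's a harmless toy sketch (Python, no network I/O) of the two strategies: hosts on the local subnet, plus randomly drawn internet addresses. The /24 prefix and counts are my assumptions for illustration; the actual sample reportedly uses Windows APIs for the local enumeration.

```python
import ipaddress
import random

def candidate_targets(local_ip, prefix=24, n_random=5, seed=0):
    """Toy model of the spreading target selection: every other host
    on the local subnet, plus randomly generated IPv4 addresses.
    Performs no network I/O -- for analysis discussion only."""
    net = ipaddress.ip_network(f"{local_ip}/{prefix}", strict=False)
    local = [str(h) for h in net.hosts() if str(h) != local_ip]
    rng = random.Random(seed)
    randoms = [str(ipaddress.ip_address(rng.getrandbits(32)))
               for _ in range(n_random)]
    return local, randoms
```

The local-subnet list is why one infected laptop can take down a whole office LAN even when the random internet scanning finds nothing.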


can you share literature on what tools you used to reverse engineer and maybe other items worth reading if I am interested in this type of research?


Literature: sorry no, I didn't read anything; everything I know is from practice.

As for the tools: just IDA Pro, really, if you don't count the standard stuff: a VM to avoid getting the host infected (VirtualBox), Burp (to analyze malware HTTP traffic), etc. Nothing too fancy.


In theory, yes. In practice, the reality may be more complicated. How many ongoing investigations and clandestine operations rely on 0days that could be patched tomorrow?

Even if this weren't the case somehow, I could imagine intelligence chiefs and the like defending their 0days as necessary on public safety or national security grounds.

Edit: just to clarify, I believe 0days should be reported and patched to make everybody safer.


Your strange theory, that the economic damage is unavoidable to improve security, will break down hard if those 0days are used by terrorists for the first time.


"Your strange theory, that the economical damage is unavoidable to improve security will break down hard if those 0days are used by terrorists for the first time"

It's not a "strange theory"; it's the literal reason. National security is the stated reason given by multiple administrations and officials for why this behavior occurs.

Plus, how much economic damage was mitigated by using zerodays against terrorists and foiling their plots?

What if they used a zero day and prevented a 9/11 size 3000 person, multi-billion-dollar terrorist attack?

To suggest that the needle is at 0 and any negative use makes the entire NatSec angle bad is very naive, because any successful NatSec use that has succeeded is classified and we're not privy.

So we don't know the score, and we certainly can't claim that the score favors one side after any particular event...

But, keep this in mind, Israeli hackers compromised an ISIS computer and were keeping tabs on plots including a plot to weaponize laptop batteries, up until DJT burned the source by outing the Israeli op to Russians.

So the idea that zero days aren't in active use seeing results against terrorists is very naive, I believe.


"What if they used a zero day and prevented a 9/11 size 3000 person, multi-billion-dollar terrorist attack?"

What if terrorists use a zero day to blow up a nuclear plant?


I'm talking about hypothetical things in the past, you're making up hypotheticals about the future.

Also, I provided a precise example of intelligence compromising ISIS for intelligence regarding airplane bombs, so my example isn't that outlandish.


When evaluating a risk it isn't a good idea to restrict yourself to scenarios which already have happened.


But the subject isn't risk evaluation; it's the idea of a "score" where NatSec-state zero-day use gets positive points for saving lives and saving money, and negative points when terrorists use leaked zero days or take advantage of unfixed holes.

The claim was "any terrorist attack using these proves it's a net loss"

My response was "the classified nature of positive points doesn't invalidate positive points, and you cannot call it a net loss without a full accounting"

Now it's just devolved into a game of hypotheticals where people try to disprove the idea of a full accounting by creating even sillier terrorist scenarios?


I'm not condoning this action; I'm just arguing that it's a likely path for politicians to take because of political and media pressures.

Of course I think 0days should be reported and patched immediately.


They will try to defend it, but a counterargument can be made if people start losing lives (e.g. from medical systems going awry). Then the collateral damage will become unacceptable.


Can someone provide a simple (but not overly so) explanation of how the current generation of ransomware operates, i.e., A) spreads and B) locks up the computer? Does it always require human intervention for A? Thank you.


There are indications that this new version uses a number of ways to spread.

Where attacker == the ransomware executable:

First is the EternalBlue exploit, developed by and leaked from the NSA. EternalBlue exploits a flaw in Windows systems on TCP port 445 that can be used to take complete control of an unpatched system. So if an attacker can connect to a vulnerable Windows machine on TCP port 445, they can take control of that machine.

There are also indications that this ransomware sample spreads using legitimate administrative tools in Windows systems such as WMI (execute commands on a remote system if you have an administrator account on that PC) and PsExec (mount shares on the remote system if you have an administrator account, and likewise execute commands). These are legitimate (but legacy) Windows components that normally facilitate the management of client PCs when they're connected to a domain at a company or school. So if an attacker can connect to a Windows machine on TCP port 445 (PsExec) or 135 (WMI) AND has administrative credentials for that PC, they can take complete control of that machine.

These two are probably part of how the ransomware spreads once it gets inside your network. The wcry outbreak a few weeks ago gained access to networks by infecting one or several people via phishing e-mails with malicious files or links to files inside. AFAIK it's currently still unknown/unconfirmed how this outbreak gains its initial foothold, but I'd guess it's either actively being spread by phishing, or it's been present but dormant in these networks for a while after having been installed by phishing over a longer period of time.

If an attacker possesses a 0-day then all bets are probably off, and even step A would not necessarily require any human interaction.

This outbreak is particularly nasty because after it's done encrypting files it supposedly triggers a crash that forces the system to restart (handy for servers, where a user is not normally able to restart the system). Because the system restarts, any artefacts from the encryption process that might have been used to decrypt files without paying or restoring backups are gone.
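Since both vectors above need to reach port 445, the first defensive question is simply whether that port is exposed. A minimal stdlib sketch of that check (the host is a placeholder, and an open port only means SMB is reachable, not that the machine is vulnerable):

```python
import socket

def smb_port_open(host, port=445, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    An answering port only means SMB is exposed; a refused or
    filtered port rules these SMB-based vectors out entirely.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Only probe machines you administer:
# smb_port_open("192.0.2.10")
```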


Actually, I believe phishing / malicious attachment was debunked as the infection vector. Subsequent research found that WC starts scanning hosts and IP's on port 445 to try to find other machines to infect.

Source: https://www.us-cert.gov/ncas/alerts/TA17-132A

"Once the malware starts as a service named mssecsvc2.0, the dropper attempts to create and scan a list of IP ranges on the local network and attempts to connect using UDP ports 137, 138 and TCP ports 139, 445. If a connection to port 445 is successful, it creates an additional thread to propagate by exploiting the SMBv1 vulnerability documented by Microsoft Security bulletin MS17-010."


That only happens after the initial infection into the network. Notice that it says it scans the "local network".


This is minutiae at this point, but it scans the "local" /24. My assumption is that it scans the /24 for any interface available, so if a machine is infected with a public IP, it will start scanning machines on the public Internet. Not to mention other variations may decide to scan more aggressively.
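For illustration (my sketch, not code from the malware), the stdlib `ipaddress` module shows exactly what "the local /24" amounts to for a given interface address:

```python
import ipaddress

def local_slash24(ip):
    """All host addresses in the /24 containing `ip`: the space a
    worm scanning 'the local /24' would probe from that machine."""
    net = ipaddress.ip_interface(f"{ip}/24").network
    return [str(h) for h in net.hosts()]

# e.g. an infected host at 203.0.113.77 would see 254 neighbors,
# 203.0.113.1 through 203.0.113.254:
targets = local_slash24("203.0.113.77")
```

So a machine with a public address yields a public /24 to scan, exactly as described above.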


Thank you for your explanation (also, others below as well). If I had more time I would try and learn about each of these security exploits because I find it fascinating.


A) It's got to find a victim (IP range scans or whatever), then try to infect it. WannaCry used a vulnerability in SMB (CIFS/Windows file sharing) to get the virus payload onto a new machine and get it to run.

B) Once a piece of ransomware is running on your computer, it can generate an encryption key and send that back to its controller machine, then start encrypting files on the computer.

"A" shouldn't be able to happen on its own on a properly firewalled network, I think. So the start of the spread might be someone clicking an e-mail link that they shouldn't, and the infection works to spread on its own once inside a network.


a) No intervention required, although many start that way because people click on everything. The general idea is: get into a computer using any means possible and then spread using any means possible.

b) They encrypt your files and make you pay, usually with a time limit before they just delete the files.


Depends on the ransomware.

Usually if it says "0-day", assume that it can be exploited without human intervention, à la Stuxnet.


> Usually if it says "0-day", assume that it can be exploited without human intervention, à la Stuxnet

That's not at all what a 0-day means; it just means a previously unknown vulnerability. We've never seen a ransomware attack anywhere close to as sophisticated as Stuxnet. This latest attack is nothing new and is only affecting people who haven't kept their systems up to date.


I understand that this is not what it means. But generally speaking when an article says "0-day malware" it usually ends up meaning that no human involvement is needed.

Please don't assume I don't know what 0-day actually means. I chose my words carefully so as not to imply that I was stating the definition of the term.


"0-day" does not mean without human intervention. That just means "previously undisclosed".


I understand that. Which is why I said "usually".

Typically when we see news using the term 0-day it's because there was no human element needed in the infection of machines. Thinking back in recent memory (~17 years) I can't remember a time when 0-day was used when it didn't mean autonomous infection.

Although, I fully understand that the term means it's a previously unknown issue, which is why I chose my words as carefully as I did.


It "usually" means "undisclosed". Everything else is entirely circumstantial and coincidental.

The reason human intervention is generally required now is because Windows has been hardened enough that some idiot user has to click a button to bypass the built-in basic protection. There's still a possibility of a "0-day" exploit remote-owning a machine, though these sorts of exploits are a lot harder to craft due to that attack surface being exposed to more security scrutiny.


Does anyone know if any tools exist on Linux which can be used for early detection of ransomware?

Something that monitors file access, disk activity, etc. for suspicious behavior and can trigger some action or alert?

I think I remember some discussion about using a 'canary file' - some innocent looking file with known contents which should never be modified. If a modification is detected, you know something fishy is going on.


AIDE is a popular utility to monitor for changes to files on Linux systems.

http://aide.sourceforge.net

You could also use the built-in audit subsystem if you wanted to watch a specific canary file, directory, filesystem, etc. https://www.linux.com/learn/customized-file-monitoring-audit...
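For the canary case, the audit-subsystem approach is a one-line rule. A sketch (the watched path is made up; `-w`/`-p`/`-k` are standard auditctl file-watch options):

```
# /etc/audit/rules.d/canary.rules -- hypothetical canary watch
# -w: watch this path; -p wa: log writes and attribute changes;
# -k canary: tag matching log entries so `ausearch -k canary` finds them
-w /var/lib/canary/do-not-touch.docx -p wa -k canary
```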


I'd like to emphasize the canary file. This is a file that you should never access in normal operations. Thus, if the file was in fact accessed, that is a sign that something is scanning your file system.

Depending on the threat, such a scan might be a good reason to pull the cord from the mains socket. You don't want to let a normal shutdown occur, rather pull the cord and mount the disk on another system to recover / analyze.
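A userspace variant of the canary idea is just a polling loop: fingerprint the file and treat any change as the alarm. A minimal sketch (the canary path is hypothetical, and real setups would prefer aide, auditd, or inotify over polling):

```python
import hashlib
import os
import sys
import time

# Hypothetical path: a file nothing legitimate should ever touch
CANARY = "/var/lib/canary/do-not-touch.docx"

def fingerprint(path):
    """Content hash plus mtime; either changes if something rewrites the file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest, os.stat(path).st_mtime

def watch(path, interval=5.0):
    """Poll the canary and bail out loudly on the first change."""
    baseline = fingerprint(path)
    while True:
        time.sleep(interval)
        if fingerprint(path) != baseline:
            sys.exit(f"CANARY MODIFIED: {path} -- isolate this machine now")

if __name__ == "__main__":
    watch(CANARY)
```

In line with the parent comment, the reaction on trigger should be drastic (isolate or power off), not a graceful shutdown.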


What a horrible interface:

  aide 
  Couldn't open file /var/lib/aide/please-dont-call-aide-without-parameters/aide.db for reading

  aide -i
  Couldn't open file /var/lib/aide/please-dont-call-aide-without-parameters/aide.db.new for writing


I suppose, but it's not really made as a one-off run-a-command type tool. It needs to be set up so that you can compare now to then.

Having no parameters do something real is probably not desired, as it would overwrite the DB that your aide cron job is using.

That's why your Linux distro (not aide) picked those funny defaults.


I thought tripwire was the standard.. happy to know another name.


Tripwire got very stale after they split off a non open source commercial version.

Aide filled that gap. I believe most people prefer it to the open source tripwire.


I like running ossec on my linux boxes:

* ossec - https://ossec.github.io/

Also worth looking at:

* chkrootkit - http://www.chkrootkit.org/

* rkhunter - http://rkhunter.sourceforge.net/


To do it properly you would likely be looking at mandatory access control, such as SELinux, so that the ransomware wouldn't be authorized to modify the files and further would make itself obvious in the logs. Not very easy to use (in a way that still provides meaningful security) outside of the server space, though it can be done.


Red Hat's distributions, including Fedora, come with a fairly usable SELinux out of the box. By extension, so does Qubes OS.

I currently run a QEMU setup at home with different VMs, all Fedora, for different domains of use (internet, work, development/art, untrusted, a clean environment for installing OS's, etc) in the spirit of Qubes. Regular backups of everything are made frequently.

In the highly unlikely event of a ransomware infection, it would be limited to a single domain.

I believe this is the way forward for personal computing.


Tripwire is the archetype, it's been around for over 15 years.

https://en.wikipedia.org/wiki/Open_Source_Tripwire


This isn't yet the cyberattack "the world isn't ready for" (https://www.nytimes.com/2017/06/22/technology/ransomware-att...), is it?


no.

you will know when the big one hits because you won't be able to ask this question online and get an immediate answer.


No. This attack is using the EternalBlue vector (MS17-010[0]) and CVE-2017-0199[1]. Updated systems are unaffected.

[0] https://technet.microsoft.com/en-us/library/security/ms17-01...

[1] https://portal.msrc.microsoft.com/en-US/security-guidance/ad...


A friend sent me the bitcoin address; they've already collected $2,600.

[EDIT] Now $3,230

Source: https://blockchain.info/address/1Mz7153HMuxXTuR2R1t78mGSdzaA...


I like one of the more recent transactions:

https://blockchain.info/tx/82698e1f2fbf31914019da738e24515ae...

For 0.0000666 BTC. Sender is theoretically 1FuckYouRJBmXYF29J7dp4mJdKyLyaWXW6



Very interesting read


The only other transaction that account has made was to one of the WannaCry addresses. Also a tiny amount.


Thanks for the address, I made a live counter here: https://franciskim.co/petya-ransomware-live-counter-ransom-d...

Refreshes every 2 seconds.
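A counter like that only needs blockchain.info's plain-text query API, which returns an address balance in satoshis. A minimal sketch (polling loop and error handling left out; the address argument is whatever wallet you're watching):

```python
import urllib.request

SATOSHIS_PER_BTC = 100_000_000

def satoshis_to_btc(sat):
    """Convert an integer satoshi amount to BTC."""
    return sat / SATOSHIS_PER_BTC

def address_balance_btc(addr):
    """Current balance of `addr` in BTC, via blockchain.info's
    plain-text query API (the endpoint returns a bare integer)."""
    url = f"https://blockchain.info/q/addressbalance/{addr}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return satoshis_to_btc(int(resp.read()))
```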


It seems like a trivially avoidable mistake to use a single wallet for all collections, but maybe I shouldn't be giving them ideas...


Why do you think it a mistake?


Because now we can watch those funds and know how much money they made, we can watch them to see if they make a mistake.

If every address was different we'd have no idea how much money they're making and only funds paid by people who also reported them would be tainted by the long eyeball of the law.


It does not seem like something wrong directly. I also think showing off might be intention.


They should have pre-loaded more onto the wallet to give the impression that most people are paying.

Less than $10K gives the impression that nobody is paying.

It is the same psychology as a product only getting a couple of two star reviews - you don't buy it, you go for the product with hundreds of 4-5 star reviews instead.


Probably not viable right now because of the ridiculously high transaction fees.


Transaction fees only need to be high if you are in a hurry. If you can wait a week or two you can go with very small TX fees. As you can see in this graph, even very low-fee (5 to 10 satoshi per byte) transactions are confirmed eventually. https://jochen-hoenicke.de/queue/#24h
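The arithmetic behind "satoshi per byte" is just size times rate. The ~250-byte transaction size and the higher "rush" rate below are illustrative assumptions, not current fee-market figures:

```python
def tx_fee_satoshis(tx_size_bytes, sat_per_byte):
    """Bitcoin fees are paid per byte of transaction data,
    not per amount of BTC moved."""
    return tx_size_bytes * sat_per_byte

# For a typical ~250-byte transaction (size is an assumption):
patient = tx_fee_satoshis(250, 5)    # 1250 sat: confirms eventually
hurried = tx_fee_satoshis(250, 300)  # 75000 sat: competing for the next blocks
```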


Okay, then my theory is trash. They just weren't smart about it, then.


How do you know it's only one wallet?


Every infection is showing the same address.


Update: Over $4000 now


What a clickbait headline. A paltry $3k and yet the article calls this a "MASSIVE ransomware outbreak". I would be curious to see what a "minor" outbreak is.


In Ukraine, banking services nationwide as well as credit card payments on the metro in Kiev, and the airport IT systems, are all down. At what point do we call it massive? When US banks and airports start having trouble?


That's interesting. I've heard a lot about Russia testing out cyberwarfare in Ukraine as a possible proving ground for future targets. Was Ukraine the hardest hit by this latest one?

It'd be interesting if this were actually made to take down infrastructure under the guise of ransomware.


Ukraine seems to be explicitly targeted: the initial distribution had been happening for some time already, but a trigger started the lateral infection (which would be detectable) only on 27th June 10:30 ( https://twitter.com/CyberpoliceUA/status/879825132088426499 ), with the actual ransom attacks only some hours after that; so it was intended to spread locally before attracting global attention.


Apparently the thing initially spread through an update to a popular piece of Ukrainian accounting software [1], infecting the networks of a lot of Ukrainian companies.

[1] me-doc.com.ua


There are reports of hundreds to thousands of machines infected across multiple firms in multiple countries. I'd bet >99% of people are never gonna send the $300 in bitcoin to decrypt their machine, instead they'll just clean and restore as much as they can. The $3k is 11 people desperate to restore all their data now, more may come in the future after people have exhausted other options, but the vast majority will never pay unless their backups were hit too.


Seems like a better approach would be to have the ransom increase after every person that paid, just so you'd have some competition to pay sooner.


Exactly. This is a huge attack. The amounts paid don't mean a thing.


The conversion rate is likely very, very low for these, especially so soon after infection.


You are right. I misjudged.


Amount of money collected != scale of attack. Number of machines + importance of companies/systems = scale of attack.


It's not; my previous company took everything offline after they got infected via connections to their offices in Ukraine. Lots of companies in Ukraine are infected.

The company I work for disabled all work-from-home VPN accounts for the time being, until we do a security audit.


Obviously, that number will grow.
