Maersk IT systems are down
We can confirm that Maersk IT systems are down across multiple sites
and business units due to a cyber attack. We continue to assess the
situation. The safety of our employees, our operations and customers'
business is our top priority. We will update when we have more information.
There's surprisingly little info about this from the actual ports.
Even Twitter output has become so PR-controlled that nobody involved is getting important information out. APM, Maersk, and the Port of Los Angeles all have Twitter feeds, and none of them have any useful info about this. Even the Port of Los Angeles Police have nothing.
The Port Authority of New York and New Jersey has a clue. Their alerts feed has useful info.
6/27/2017 4:30:08 PM
APM closed 6/28 & plan to open 6/29 6:00 am,
gate hours to 7:00 pm (cut off) 6/29 thru 7/7.
Free-time will be extended 2 days due to service impact.
(The free time extension means customers have two extra days to
bring back their empties before being charged.)
6/27/2017 1:14:23 PM
Due to extent of system impact, APM Terminals will not be opening
for the remainder of the day. Updates on tomorrow's status to follow.
6/27/2017 9:12:22 AM
APM is still experiencing system issues. Please delay arrivals.
6/27/2017 8:58:03 AM
APM Terminals is still experiencing system issues. Please
delay arrival until further notice. Updates will follow.
6/27/2017 7:53:09 AM
APM Terminals is experiencing system issues and working to
restore. Please delay arrival.
6/27/2017 7:11:15 PM
As of 6:30 Tues. 6/27, APM Terminals employees are still without email or office telephone services. No emails or voicemails can be accessed or answered. Please standby for PA Alerts or for critical matters please contact Giovanni Antonuccio (908) 966 - 2779.
The Maersk site still has nothing but a statement that they are down. Maersk's Twitter feed has nothing useful. No press releases. The only useful comments are coming from non-Maersk port employees.
I wouldn't be surprised if nobody had access to the password.
* Los Angeles APM container terminal shut down for today according to press report. No mention of this on APM web site.
* Port Elizabeth (NJ) APM container terminal is down for incoming trucks, according to Port Authority of NY and NJ site. No mention of this on APM web site for the port, so apparently APM web site updates have stopped.
* Mobile (AL) APM container terminal is down.
if this catches on.
It's not hard. You don't actually have to change much. You just have to schedule regular pentests, ideally every couple weeks.
Pentests protect everyone because it's our job to worry about all of the security flaws that you can't possibly be aware of in your normal day-to-day development cycle. There's just too much for any organization to know about except security companies. This way you can focus on development and we can focus on pointing out how to fix what's broken.
Security is a mindset. Any "checklist" approach will eventually devolve into ass-covering by an organization that is not internally motivated to run a tight ship. Legitimate variances will be hassled to no end, while actual security vulnerabilities will be ignored.
This is a very effective approach at cutting through ass-covering. Company B has to fix the security problems uncovered in the pentest. There is no other option. And I've seen it take products from "SQL injection by typing an apostrophe" to "It'd be very difficult to exploit this app."
If that's not proof that pentests are effective, then I'm not sure what would be.
We like to say that security is a mindset, but developers have way too much on their mind to be aware of every possible security vector. It's easier and more effective to punt and let us worry about it instead.
No, it is not, you just need skilled people working on it.
Oh, those people want money for it ...
It's exactly the same as physical security. You build fences and buy locks. You pay people to keep an eye on things. You take insurance to cover the rest of the risk.
Nothing hard, no new inventions required. It just takes some attention and cash. It's part of the cost of being in business.
It's not impossible but it requires a somewhat universal attitude change.
It's a positive statement though: it is possible to be constantly secure if you just get a pentest every few weeks. Big companies can even afford to make it a requirement of their release cycle.
Oh man. I have a peer who works for a very large international company. They require pentests in their release cycle. What could go wrong?
Turns out that pentesting isn't in the final portion of their release. They tag a release candidate (e.g. v5.7.0-rc), send that build to the pentesters, then fix other integration and user-acceptance bugs while the pentesters are working. The pentesters may greenlight v5.7.0-rc when it's really v5.7.3-rc that's shipping, and the pentesters are none the wiser.
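A cheap process guard, as a sketch (nothing this company actually does, and the function names are my own): record the hash of the exact artifact handed to the pentesters, and refuse to ship any build whose hash differs.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a build artifact in chunks so large files are handled cheaply."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def ok_to_ship(release_artifact: str, pentested_digest: str) -> bool:
    """Gate the release: only the exact bits the pentesters saw may ship."""
    return sha256_of(release_artifact) == pentested_digest
```

With this gate in place, v5.7.3-rc simply fails the check that v5.7.0-rc passed, and the mismatch surfaces before shipping instead of never.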
Security only works when the culture supports it.
The impression I have is that today's event was the result of a lot of companies allowing insecure-by-design architectures: a zillion apps, each with its own update mechanism (a random Ukrainian enterprise app supplier gets penetrated and the whole world goes down). A pentester might never find that vector until that supplier leaves their door open, or until someone stumbles onto them.
And this also collides with the willingness to do anything to save a couple of dollars, and once that dictate isn't flowing through every ounce of the company's blood, who knows what will happen.
To make secure systems, we need to take the (very) difficult road of working our systems bottom up and proving the absence of vulnerabilities and defining the boundaries of safe operations.
When a new feature is proposed, it's rare to hear someone object on the grounds that it could potentially add new vulnerabilities, but in the long run an approach that recognizes and considers those risks would be beneficial.
At the same time, this is incredibly hard to do - managers celebrate employees who develop things that look cool and awesome, not employees who can mitigate risk and manage security effectively (hopefully this changes, but I can't imagine that many unaffected CEOs are calling up their sysadmins right now and congratulating them on their diligence in making sure all their machines are patched).
Externals and people with a MacBook could continue working.
Some departments asked personnel to stay home tomorrow.
Mail seems to be down as well, although I don't understand why, since it is hosted on outlook.com.
I mean all of IT can access the box once I give them the password for the vault I gave them. That's just the right thing to do. But no one touches or updates my fortress of last hope but me, from a local shell.
They made themselves fragile to this attack. It was completely gratuitous.
They are large enough to chart their own destiny and critical enough to care deeply about it ... and they built on top of cutesy new versions of Windows that everyone knows are garbage.
How does that old saying go?
"Fool me once, shame on you. Fool me a multitude of times, in varying circumstances, over and over and over again for two fucking decades, shame on me."
Something like that ...
small spoiler ahead
This Daemon is an AI that keeps data of big companies hostage - it will destroy all that company's data if the company does not pay protection money, or if the company involves law enforcement.
Because a lot of companies in the novel don't stick to the AI's rules, these companies go down with the exact same symptoms as Maersk is now having:
- unable to do business
- unclear what happened
- declining stock prices
All the competent ransomware authors are probably quite unhappy whenever a defective ransomware strain pops up.
The best way to end ransomware is to get serious about security. In many cases, being attacked by a ransomware, is paying a low price compared to if it was a targeted attack.
edit: Also, I imagine it gets easier after you've written one, i.e. many ransomware strains come from the same author. So he could gain a reputation by signing messages saying that yes, this is our ransomware, we always unlock after receiving the payment.
The idea would be to create "fake" ransomware that looks exactly like the real one
>The best way to end ransomware is to get serious about security
No matter how serious you get, there are always going to be bugs; there isn't a single piece of mass-distributed software in human history without bugs. That said, we should try to improve the security of software, but expecting it to be THE solution is wrong.
>Also, I imagine it gets easier after you've written one, i.e. many ransomware strains come from the same author. So he could gain a reputation by signing messages saying that yes, this is our ransomware, we always unlock after receiving the payment.
Forging a signature is not that hard.
But even without it, there are so many options, e.g. timestamp signed message on the blockchain before the release. After just one confirmed message you don't care about pretenders because people can check if the signature matches with the previous message.
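The "one confirmed message, then pretenders can be rejected" idea can be sketched with nothing fancier than a hash commitment (purely illustrative; not a scheme any real group is known to use, and the names here are mine). The author publishes H(secret) alongside the first message; any later message that reveals the preimage is checkably from the same author:

```python
import hashlib

def commitment(secret: bytes) -> str:
    """Publish this digest with the first message; keep `secret` private."""
    return hashlib.sha256(secret).hexdigest()

def same_author(revealed_secret: bytes, earlier_commitment: str) -> bool:
    """Anyone can check a later message that reveals the secret against the
    digest published earlier; a pretender who never saw the secret cannot
    produce a matching preimage."""
    return hashlib.sha256(revealed_secret).hexdigest() == earlier_commitment
```

In practice each message would reveal one secret and commit to a fresh one for the next round, chaining the identity forward; timestamping the commitments (e.g. on a blockchain, as suggested above) fixes their order.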
I like the idea though.
All blockchain state is public, since it needs to be calculated by and verified by all nodes, so there's nowhere to stash a private key without revealing it.
> by the time Prohibition ended in 1933, the federal poisoning program, by some estimates, had killed at least 10,000 people.
It will cause a major headache for those who pay and will hopefully make people learn to distrust ransomware, in turn making it less lucrative.
On the other hand, that requires a fair number of "acceptable casualties" so to speak.
I personally think both sides of this are valid and don't know what the best option really is. It will be interesting to watch how things evolve at least.
Ransomware will never ever not be lucrative. Preventing people from getting their data back doesn't discourage future campaigns and primarily hurts the victims of the ransomware.
1) Ransomware authors have obvious economic incentive to decrypt, and no reason not to. This makes it a herculean task to convince the general public that they wouldn't do so.
2) By the time your data is encrypted, you'll be researching your specific ransomware strain and will find out if it's legit or not. Googling the onion address is an obvious choice and something the ransomware author can just tell you to do.
3) Most people will need someone more technical to arrange the bitcoin payment anyway, these people will verify if the ransomware seems to be legit or not.
4) People don't magically get smarter, phishing still works if you pass the spam filters.
5) Winlockers were immensely lucrative even before they started using crypto.
6) Unless you're going to run your fake-ransomware campaign at an immense scale you'll never drown out the real, working ransomware.
And then in the end, what was your goal anyway? Good job, now you've deleted millions of people's data on a misguided mission to "stop ransomware". But hey, at least you stopped those evil Russians!!!
There are precisely zero good arguments for preventing people from decrypting their data.
It's irrelevant; this has nothing to do with the fake ransomwares.
>2) By the time your data is encrypted, you'll be researching your specific ransomware strain and will find out if it's legit or not. Googling the onion address is an obvious choice and something the ransomware author can just tell you to do.
The search results of any onion address are just as fake-able.
> 3) Most people will need someone more technical to arrange the bitcoin payment anyway, these people will verify if the ransomware seems to be legit or not.
Sure, with their ransomware-detecting powers
>4) People don't magically get smarter, phishing still works if you pass the spam filters.
What does that have to do with anything?
I got bored of answering; in general your points seem weak, which makes you sound a bit too much like a ransomware creator. You're probably not, given you've had an account here for 3 years, but otherwise you would.
Not a ransomware creator but I understand the economics at play. Ransomware is more profitable than sending spam, unless you're spamming to spread malware.
The value of individual installs has historically averaged at significantly less than a dollar each, ransomware is bringing that way up.
You aren't going to stop ransomware unless you figure out a solution to all other malware, or invent a more profitable scheme. People need to do something with their bots and ransomware is always going to make more money than spamming from bots that haven't been able to inbox anything for 5 years.
There's simply no way you'll stop enough people from paying to make viagra spam beat ransomware.
Sure, you could probably deter ransomware by sending DEVGRU to murder the authors, but I doubt it's worth the political shitstorm that'd follow.
This is a good description of some details: https://medium.com/@thegrugq/pnyetya-yet-another-ransomware-... It rather looks like a targeted attack meant to cause chaos and damage in Ukraine.
Not that I want to give attackers any ideas... :-)
- Make a big target amount of money that any large company can pay, e.g. $10M, and tell people you'll release everyone's keys if the amount is raised.
- Use an online board like this one to control the state of your network.
- Embarrass individual firms by posting pics of their offices from their own webcams.
Etc, etc. I reckon talking about these sorts of things will help find solutions rather than just inspire the bad guys.
That sounds like a good way to get state level actors on your case.
Now there is nothing to track until they rewrite their code and try the attack again with randomized email addrs
All of the affected companies should be considered compromised by the NSA.
Actually, every single Windows PC with an internet connection that has been used before March 14 should be considered irrevocably compromised. Ransomware is much more visible than spyware. Think about all the spyware-infected PCs/networks that nobody knows about.
March 14 of what year?
I would say 2000, but I am open to discussion...
Here's one in the news from just last week. A ransomware where the victim agreed to pay the equivalent of US$1MM in bitcoin.
Apache version 1.3.36 and PHP version 5.1.4
It's not like a brand new Ubuntu installation connected to the open Internet will suddenly be pwned. The owners of this company were beyond inept.
What kind of utter lunatic would use that for their company today?
I think you're getting this backwards. If you say 2017, you and your children-comments' dates will be covered, because they are before March 14 2017.
I'm saying there is no specific implication without confirmation from the author, since the statement can be taken either way; any implication you think you see has more to do with your state of mind than with the statement itself. It's a statement about what we know: we know something to be factually true prior to that date. What came after is open to debate, and is opinion. Making a statement about the period we have facts for does not imply anything about the period we do not have facts for.
In the above comment when using the word implication my intent was "a conclusion that can be drawn despite not being explicitly stated".
To be unambiguous, the explicit statement is that computers prior to a specific date should be considered to be compromised. The conclusion that can be drawn, based on the fact that the writer specified that date, is that later dates did not qualify for the same statement, because the conditions were not sufficient. That is to say, that they were not insecure enough for the writer to include in his comment. That is the implication, despite the writer not saying outright that computers after that date were "secure".
The conclusion assumes the credibility of the writer, and the intellectual honesty of their comment (i.e. they didn't put that date there just to be facetious) but I believe that's a fair assumption given the context of questioning the semantics.
I also note that the actual implication here is not that computers are secure after that date, or even that computers are insecure but not compromised. The implication is, in fact, that while computers might be compromised after that date, the writer doesn't believe it's worth advising people to ASSUME they are compromised.
Yes, that is the same definition. But it is an error to draw that conclusion in question because it requires unsupported assumptions. That's why it's not implied in the original statement.
> The conclusion that can be drawn, based on the fact that the writer specified that date, is that later dates did not qualify for the same statement, because the conditions were not sufficient.
No, the later dates did not qualify because the knowledge is insufficient; or, if you allow that the knowledge was an implicit part of the statement, it's no longer a binary proposition. If there are two propositions that must be true for the original statement (we were insecure, and we know we were insecure), there are multiple alternatives. The problem is you are assuming that a single one of the possible alternatives is implied, when it's not.
For example, I can say "up to this point in life, I haven't committed a felony." That does not imply I plan to commit a felony by itself. With additional context, it may or may not. I could just as easily follow that statement with "I don't see that changing any time soon" as with "I'm not sure if it's likely I'll still be able to say that next year." That additional context combined with the original statement carries the implication. In this case, people are assuming it's along the lines of one of those followups, when there is really no disambiguating context. Assuming one or the other is a problem of the person interpreting the statement, and in my opinion the root cause of quite a few arguments as a result of misunderstanding, which is why I called it out in the first place.
> That is to say, that they were not insecure enough for the writer to include in his comment.
Or they decided for whatever reasons they did not want to mention it. For example, to simplify the message and call attention to what they thought was of greater importance. Don't assume intent without evidence.
> while computers might be compromised after that date, the writer doesn't believe it's worth advising people to ASSUME they are compromised.
Which is a valid stance to have. I don't believe it's useful for the average person that has stayed patched to assume they are compromised. To assume so would mean never logging into any online account in my case. I believe it's useful to assume you are always under some level of attack, whether active or passive, and take precautions, but to assume you are compromised is quite a bit farther than that.
What was that XP exploit app from back then... I can't recall what it was called...
But it was Back Orifice I was thinking of.
More of a "100-day" at this point.
It was based on an SMB exploit released in a Shadow Brokers dump; a previously unreleased exploit thought to have been used by the NSA.
You are correct about this. Patches were released in March, but many seem to have put off security-critical patching.
In fairness to some of the unpatched - the last round of Windows 10 updates refused to install on some machines (well, mine and some others on Twitter), and trapped me in an endless loop of download-install-fail-download. When this happened my landline internet was down, so this was happening over 4G tethering, and burning up $20/day in cellphone data until I just turned off my internet/tethering.
I'm not saying don't patch (you should!), just that even people trying to stay patched and do the right thing can find they're unable to do so.
I hope Microsoft can find a way to earn trust back, this problem is going to get much worse if people do not install security patches ASAP when released.
No, seriously. How is it paranoia to think the NSA was/is surveilling your Windows installation if we already have proof that they have the means and motivation to do it at scale?
If you're talking about "at scale" being "the entire world," then yes. But usually the NSA tends to target their operations regionally, e.g. Iran.
Says who? We have no idea what they're sitting on, even our guesses come from terrible data.
It was fixed in a security patch one month before the Shadow Brokers leak. All computers affected by this ransomware outbreak (and WannaCry) belonged to those who decided not to patch.
To make a long story short: from what anyone can tell, there is no way for consumers to obtain a version of Windows that has security patches and the ability to run with sane privacy settings. There is an acceptable version called Windows LTSB, but you have to pirate it.
This has been discussed ad nauseam on HN and elsewhere.
Are you suggesting that there's a cast iron guaranteed way of saying 'this stuff should be in the OS and nothing else'?
If you are suggesting that, are you suggesting the trust root for that particular stack is something other than the vendor? If so who?
Take the example of Windows. Let's say they agree to put in a backdoor like DoublePulsar. Microsoft release the official OS and say 'we promise this is all good and only stuff that should be in here is in here. Honest.' How do we as third parties detect they've put something in there that shouldn't be?
I see you're CEO of verify.ly and have some background in this, so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.
"Closed-source" certainly does not mean you cannot see the changes, just that far fewer people know how to read assembly/machine code well enough to understand what is going on.
People frequently reverse engineer patches and updates, since the addition of features means more vulnerabilities. Security companies generally get a whole lot of free marketing in the press if they find and disclose major vulnerabilities (along with building detection/prevention into their products), so there is a large incentive there. Of course it requires trusting security companies not to hold back findings like that, a valid concern, but it is at least a step up from completely trusting the vendor to deliver non-backdoored updates.
> Are you suggesting that there's a cast iron guaranteed way of saying 'this stuff should be in the OS and nothing else'?
The security researcher mindset would be along the lines of "How does this new added/changed functionality work, and how could it be abused?" (You are correct that there is no guaranteed manner to find this, otherwise all software would be un-hackable which is not the case).
So to go back to these two points:
> They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates.
> I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless.
It would seem to me that these things are happening. 0days are being added (often to look like simple bugs) and security companies are detecting them and we're talking about them...eventually. So you're both right, but there's a period of sometimes years following the addition of a backdoor to it being discovered. And the NSA doesn't care too much if it's found as you can be sure it's not the only one as the ShadowBrokers showed.
Take the example in this thread: EternalBlue. That particular flaw was introduced in XP, wasn't it? And it survived all this time despite countless security researchers poring over the code for a decade and more. It took a hack to reveal these tools.
Maybe the EternalBlue exploit really did just exploit a bug. Maybe it was a backdoor. It doesn't matter though. If it was a bug, it lay undiscovered for years which means there's plenty of opportunity for an actual backdoor to remain undiscovered too. So we have to deal with the possibility that 'exploitable code' (however it originated) may be around for decades and can be in every system as a result.
Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and it could lay undetected for a decade. It's happened before and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.
What about Heartbleed? That was another piece of 'exploitable code' that was around for years undetected. The examples of this are no doubt many.
It would seem to me then that there are plenty of cases where a 'backdoor' has been placed and plenty where a genuine mistake was made, but we can't ever really know which is which.
I guess that is the problem for us who talk about it as it encourages taking sides, where the reality is paranoid people are sometimes right in certain cases and cynics who think it's just a bug are right in others.
EternalBlue was a vulnerability, not a backdoor, as a backdoor would imply it was intentionally inserted. Again, any proof of malicious code being intentionally inserted would be huge news and would permanently kill trust in the vendor.
> Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and it could lay undetected for a decade. It's happened before and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.
This would be huge news. A negative cannot be proven, but it would not really serve much benefit to theorize about intentional backdoor insertion without proof. Anger at something like that is best saved for a provable case. (Think of it this way: to a non-tech person, it would be great to be able to express outrage, call their reps, etc. when there is definitive proof of this, versus saying "oh, I heard this was already happening, so whatever".)
> I guess that is the problem for us who talk about it as it encourages taking sides, where the reality is paranoid people are sometimes right in certain cases and cynics who think it's just a bug are right in others.
There is nothing wrong with being overcautious. Problems arise when worrisome conclusions are reached, causing some (for example) to be unsure about the safety of automatic updates. The effect of this would be users avoiding a perceived risk of a malicious update, yet allowing them to be more exposed to real known vulnerabilities by not installing important security patches.
From a code perspective, of course.
A trade secret proprietary and obfuscated operating system from an organization known to collude with the government
Code I have read in part, and know others read, and stand to believe that among all of us using those with the money or time would also audit
Granted, we are all on predominantly x86 computers with proprietary, obfuscated control processors that can seize control of the system and do whatever they are told by the manufacturer (or those the manufacturer gives access to), so security in general is shaky anyway.
Or more generally, don't use Linux for a false sense of security, because the security holes go much, much deeper than just the kernel and what's running on top of it, and Linux itself is nothing outstanding from a security architecture standpoint.
Windows is fuzzed, analyzed, traffic analyzed, attacked, and picked apart inside AND outside Microsoft with higher frequency and greater depth than Linux is, regardless of which happens to be open source and theoretically easier to examine. If Microsoft were to inject malicious stuff into Windows it would be found and reported and exploited. There is too much money, too much exploit opportunity, and too much security researcher brand cred available to anyone who discovers even a hint of malicious behavior on Microsoft's part for it to go unnoticed and unreported.
And again, the point of the comment wasn't "Windows is secure" as nothing in tech is secure. The point was that someone who advocates wearing tinfoil hats around Windows to protect against the NSA while thinking Linux somehow gets a pass from those same bogeymen is not making a rational case for how to behave or what to fear.
Please correct me if I am wrong, but I don't think there has ever been a single instance of this actually occurring, only "this could possibly happen" theories. I am definitely interested to hear more if this is not the case.
While I don't know of that specific scenario, Stuxnet used a hardware vendor's key to install infected drivers. There was also a Chinese registrar that allowed a customer to man-in-the-middle Google. Depending on how Windows organizes their driver updates, I could see an adversary doing a man-in-the-middle between Microsoft and their target, and pushing a bad driver update.
I fully agree with you regarding general problems which could occur with PKI.
With that said, you do have individual targets that are suspicious (e.g. https://citizenlab.org/2016/08/million-dollar-dissident-ipho...). There's always risk.
At that point, you'd have to hope the target would not check the hashes of update files. If detected, then there is the same issue: A signed malicious update being detected (and easily verified cryptographically if given to a reporter) would cause a catastrophic media firestorm, eroding trust in the vendor forever.
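Checking update files against an out-of-band manifest is cheap to do; a minimal sketch, assuming a SHA256SUMS-style manifest fetched over a separate channel (this is an illustration, not Windows Update's actual mechanism):

```python
import hashlib

def file_sha256(path: str) -> str:
    """Hash an update file in chunks so large downloads are handled cheaply."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_against_manifest(path: str, filename: str, manifest_text: str) -> bool:
    """Manifest lines look like '<hex digest>  <filename>', as in a
    SHA256SUMS file. An unlisted file is treated as a failure."""
    for line in manifest_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1] == filename:
            return file_sha256(path) == parts[0]
    return False
```

A target who routinely runs this kind of check would catch a man-in-the-middled update, which is exactly why a signed-but-malicious update that passes it would be such a firestorm if ever produced as evidence.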
> With that said, you do have individual targets that are suspicious (e.g. https://citizenlab.org/2016/08/million-dollar-dissident-ipho...). There's always risk.
0-day use against perceived "high value targets" is indeed a possibility and valid concern. No argument at all there.
"We've revoked the signing key that was hacked by blah blah we have the utmost regard for security and adhered to best practices" and everyone would probably gloss over it for one instance.
I think you might underestimate the gravity of such a thing happening, it would not be glossed over.
Forced malicious updates would indeed be a reasonable concern if this was somehow actually the case. It is not, though, and I am not sure how that would even work. Are you saying that when it is detected, the government would somehow become aware of the detection and threaten the finder with an NSL before they could tell anyone?
>only "this could possibly happen" theories
Pre-Snowden a lot of things had been considered "could possibly happen" tinfoil hat theories, turned out a lot of them had not been mere theories.
1. That screenshot clearly shows the certificate is being treated as not valid. I assume it is being shared for IOC purposes.
2. I am referring to a software update, in the context of revmoo's "forced updates + NSL" comment.
> Pre-Snowden a lot of things had been considered "could possibly happen" tinfoil hat theories, turned out a lot of them had not been mere theories.
I could believe that is the case for those outside of the information security community, but nothing novel/tinfoil-hat-worthy was in the leaks, just confirmations of predictable sources/methods used for intelligence gathering and CNE work. Forcing a company to issue a blessed update containing malicious code is very different, and again, I am very interested to hear of any proof of such a thing occurring without detection (It doesn't seem possible for that to happen without it being detected and being discussed very loudly).
Which is ironic seeing as the ransomware, like WannaCry, is using the NSA supplied 'EternalBlue' exploit.
Hey, FWIW we had to do some response for ransomware cases recently.
There was a lack of decent stuff out there for how IT teams should deal with it. So we contributed to putting together this quick checklist:
Would be great if more people wanted to add to it.
This massive outbreak is so widespread that at this stage it appears it was either a very recent 0day or something that was only recently fixed by a patch.
Instead of having loads of countries hoarding security problems I highly encourage a focus on security instead. Seems much better for the economy overall.
It is also true that it uses PsExec to spread.
TL;DR good old Petya ransomware (old as shit) with a copy/pasted EternalBlue-based spreading method. Nothing new.
As for the tools: just IDA Pro, really, if you don't count the standard stuff: a VM to avoid getting the host infected (VirtualBox), Burp (to analyze malware HTTP traffic), etc. Nothing too fancy.
Even if this weren't the case somehow, I could imagine intelligence chiefs and the like defending their 0days as necessary on public safety or national security grounds.
Edit: just to clarify, I believe 0days should be reported and patched to make everybody safer.
It's not a "strange theory"; it's the literal reason: national security is the stated justification given by multiple administrators and officials for why this behavior occurs.
Plus, how much economic damage was mitigated by using zerodays against terrorists and foiling their plots?
What if they used a zero day and prevented a 9/11 size 3000 person, multi-billion-dollar terrorist attack?
To suggest that the needle is at 0 and any negative use makes the entire NatSec angle bad is very naive, because any NatSec use that has succeeded is classified and we're not privy to it.
So we don't know the score, and we certainly can't claim that the score favors one side after any particular event...
But, keep this in mind, Israeli hackers compromised an ISIS computer and were keeping tabs on plots including a plot to weaponize laptop batteries, up until DJT burned the source by outing the Israeli op to Russians.
So the idea that zero days aren't in active use seeing results against terrorists is very naive, I believe.
What if terrorists use a zero day to blow up a nuclear plant?
Also, I provided a precise example of intelligence compromising ISIS for intelligence regarding airplane bombs, so my example isn't that outlandish.
The claim was "any terrorist attack using these proves it's a net loss"
My response was "the classified nature of positive points doesn't invalidate positive points, and you cannot call it a net loss without a full accounting"
Now it's just devolved into a game of hypotheticals where people try to disprove the idea of a full accounting by creating even sillier terrorist scenarios?
Of course I think 0days should be reported and patched immediately.
Where attacker == the ransomware executable:
First is the EternalBlue exploit, developed by and leaked from the NSA. EternalBlue exploits a flaw in Windows systems reachable on TCP port 445 that can be used to take complete control of an unpatched system. So if an attacker can connect to a vulnerable Windows machine on TCP port 445, they can take control of that machine.
There are also indications that this ransomware sample spreads using legitimate administrative tools in Windows systems such as WMI (execute commands on a remote system if you have an administrator account for that PC) and PsExec (mount shares on the remote system if you have an administrator account, and execute commands likewise). These are legitimate (but legacy) Windows components that normally facilitate the management of client PCs when they're connected to a domain at a company or school. So if an attacker can connect to a Windows machine on TCP port 445 (PsExec) or 135 (WMI) AND has administrative credentials for that PC, they can take complete control of that machine.
These two are probably part of how the ransomware spreads once it gets inside your network. The wcry outbreak a few weeks ago gained access to networks by infecting one or several people via phishing e-mails with malicious files or links to files inside. AFAIK it's currently still unknown/unconfirmed precisely how this outbreak spreads, but I'd guess it's either actively being spread by phishing OR it's been present but dormant in these networks for a while after having been installed by phishing over a longer period of time.
If an attacker possesses a 0-day then all bets are probably off, and even step A would not necessarily require any human interaction.
This outbreak is particularly nasty because after it's done encrypting files it supposedly triggers a crash that forces the system to restart (handy for servers, where a user isn't normally able to restart the system). Because the system restarts, any artefacts from the encryption process that might have been used to decrypt files without paying or restoring backups are gone.
"Once the malware starts as a service named mssecsvc2.0, the dropper attempts to create and scan a list of IP ranges on the local network and attempts to connect using UDP ports 137, 138 and TCP ports 139, 445. If a connection to port 445 is successful, it creates an additional thread to propagate by exploiting the SMBv1 vulnerability documented by Microsoft Security bulletin MS17-010."
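The enumerate-and-connect step described in that quote is easy to sketch defensively. This is a hypothetical stdlib-only helper (the function name and defaults are mine, not from any writeup) for auditing which hosts on your own network answer on TCP 445:

```python
import ipaddress
import socket

def hosts_with_open_port(cidr, port=445, timeout=0.5):
    """Return hosts in `cidr` that accept TCP connections on `port`.

    Mirrors, for auditing purposes, the propagation step quoted above:
    enumerate an IP range and try TCP 445 on each host.
    """
    open_hosts = []
    for host in ipaddress.ip_network(cidr).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((str(host), port)) == 0:
                open_hosts.append(str(host))
    return open_hosts
```

Run against something like `hosts_with_open_port("192.168.1.0/24")`, anything that comes back is a machine the worm could have reached the same way; only scan networks you administer.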
B) Once a piece of ransomware is running on your computer, it can generate an encryption key and send that back to its controller machine, then start encrypting files on the computer.
"A" shouldn't be able to happen on its own on a properly firewalled network, I think. So the start of the spread might be someone clicking an e-mail link that they shouldn't, and the infection then spreads on its own once inside a network.
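Step B above, generate a key locally, ship it to the controller, then encrypt, can be sketched with a deliberately toy cipher. This uses SHA-256 in counter mode as a keystream purely for illustration; real ransomware uses AES with RSA-wrapped keys, and nothing here is secure crypto:

```python
import hashlib
import os

def keystream(key, length):
    """Toy keystream: SHA-256 over key||counter. Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data, key):
    # XOR stream cipher: the same operation both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = os.urandom(32)               # generated on the victim machine
# ...real malware would send `key` to (or wrap it for) its controller here,
# so only the attacker can reconstruct it after the local copy is wiped...
ciphertext = xor_crypt(b"quarterly-report.xlsx contents", key)
assert xor_crypt(ciphertext, key) == b"quarterly-report.xlsx contents"
```

The point of the "send it back first" ordering is the whole business model: once the local key material is destroyed, only the controller can undo the XOR/AES step.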
b) They encrypt your files and make you pay, usually with a time limit before they simply delete the files
Usually if it says "0-day", assume that it can be exploited without human intervention, à la Stuxnet
That's not at all what a 0-day means, it just means a previously unknown vulnerability. We've never seen a ransomware attack anywhere close to as sophisticated as Stuxnet. This latest attack is nothing new and is only affecting people who haven't kept their systems up to date.
Please don't assume I don't know what 0-day actually means. I chose my words carefully so as not to imply that I was giving the definition of the term.
Typically when we see news using the term 0-day it's because there was no human element needed in the infection of machines. Thinking back over recent memory (~17 years), I can't remember a time when 0-day was used when it didn't mean autonomous infection.
Although, I fully understand that the term means it's a previously unknown issue, which is why I chose my words as carefully as I did.
The reason human intervention is generally required now is because Windows has been hardened enough that some idiot user has to click a button to bypass the built-in basic protection. There's still a possibility of a "0-day" exploit remote-owning a machine, though these sorts of exploits are a lot harder to craft due to that attack surface being exposed to more security scrutiny.
Something that monitors file access, disk activity, etc. for suspicious behavior and can trigger some action or alert?
I think I remember some discussion about using a 'canary file' - some innocent looking file with known contents which should never be modified. If a modification is detected, you know something fishy is going on.
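A minimal version of that canary check (paths and names are hypothetical; stdlib only): hash the file once as a baseline, then treat any change, or the file vanishing, as a trip worth alerting on:

```python
import hashlib
from pathlib import Path

def file_sha256(path):
    """SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def canary_tripped(path, baseline):
    """True if the canary file changed or disappeared.

    Either condition is fishy: ransomware typically rewrites the file
    (encryption) or renames/deletes it.
    """
    try:
        return file_sha256(path) != baseline
    except FileNotFoundError:
        return True

# Setup, once:   baseline = file_sha256("/srv/shares/passwords.docx")
# Cron/timer:    if canary_tripped(path, baseline): page_someone()
```

Giving the canary a juicy-looking name on a widely shared drive makes it likely to be among the first files an encryptor touches.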
You could also use the built-in audit subsystem if you wanted to watch a specific canary file, directory, filesystem, etc.
Depending on the threat, such a scan might be a good reason to pull the cord from the mains socket. You don't want to let a normal shutdown occur, rather pull the cord and mount the disk on another system to recover / analyze.
Couldn't open file /var/lib/aide/please-dont-call-aide-without-parameters/aide.db for reading
Couldn't open file /var/lib/aide/please-dont-call-aide-without-parameters/aide.db.new for writing
Having aide do something real when called with no parameters is probably not desired, as it would overwrite the DB that your aide cron job relies on.
That's why your Linux distro (not aide) picked those funny defaults.
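Those sentinel paths in the error come straight from the distro's stock config. A working setup points the database directives somewhere real and keeps a separate output DB, so a bare `aide` run can't silently clobber the baseline. An illustrative fragment (Debian-style paths; directive names can vary between aide versions, e.g. `database` vs. `database_in`):

```
# /etc/aide/aide.conf (illustrative fragment)
database=file:/var/lib/aide/aide.db          # baseline read by --check
database_out=file:/var/lib/aide/aide.db.new  # written by --init / --update
```

After `aide --init`, you copy `aide.db.new` over `aide.db` yourself, which is exactly the deliberate manual step those funny defaults are protecting.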
Aide filled that gap. I believe most people prefer it to the open-source Tripwire.
* ossec - https://ossec.github.io/
Also worth looking at:
* chkrootkit - http://www.chkrootkit.org/
* rkhunter - http://rkhunter.sourceforge.net/
I currently run a QEMU setup at home with different VMs, all Fedora, for different domains of use (internet, work, development/art, untrusted, a clean environment for installing OS's, etc) in the spirit of Qubes. Regular backups of everything are made frequently.
In the highly unlikely event of a ransomware infection, it would be limited to a single domain.
I believe this is the way forward for personal computing.
you will know when the big one hits because you won't be able to ask this question online and get an immediate answer.
[EDIT] Now $3,230
For 0.0000666 BTC. Sender is theoretically 1FuckYouRJBmXYF29J7dp4mJdKyLyaWXW6
Refreshes every 2 seconds.
If every address was different we'd have no idea how much money they're making and only funds paid by people who also reported them would be tainted by the long eyeball of the law.
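That single reused address is also why the live counter is trivial to build. A sketch of one poll, assuming blockchain.info's public `/rawaddr` endpoint (which reports amounts in satoshis; the endpoint and field names are my assumption of what such a tracker would use):

```python
import json
import urllib.request

ADDRESS = "1FuckYouRJBmXYF29J7dp4mJdKyLyaWXW6"  # address quoted above

def total_received_btc(rawaddr):
    """Convert blockchain.info's satoshi total to BTC (1 BTC = 1e8 satoshi)."""
    return rawaddr["total_received"] / 1e8

def poll_once(address=ADDRESS):
    # limit=0 skips the per-transaction list; we only want the totals
    url = "https://blockchain.info/rawaddr/%s?limit=0" % address
    with urllib.request.urlopen(url, timeout=10) as resp:
        return total_received_btc(json.load(resp))
```

Loop that with a short sleep and you have the "refreshes every 2 seconds" page; everyone watching the same address sees the same running total.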
Less than $10K gives the impression that nobody is paying.
It is the same psychology as a product only getting a couple of two star reviews - you don't buy it, you go for the product with hundreds of 4-5 star reviews instead.
It'd be interesting if this were actually made to take down infrastructure under the guise of ransomware.
The company I work for has disabled all work-from-home VPN accounts for the time being, until we do a security audit.