
Maersk, Me and NotPetya - omnibrain
https://gvnshtn.com/maersk-me-notpetya/
======
rafaelm
I can fully recommend reading the book "Sandworm" by Andy Greenberg. It
explains NotPetya and all of the surrounding investigation.

~~~
retortio
In the same genre, I recommend "Countdown to Zero Day" which looks at Stuxnet
and Flame and the events surrounding their creation, deployment and aftermath.

~~~
bitexploder
Just an aside, I have met Kim a few times at various conferences. She is an
interesting person, in a good way. She came to the book as an infosec
neophyte. We talked about her process as something of a technologist outsider
and how she decided what to focus on. Ultimately, she decided the storytelling
was far more important than the technical details, though she felt those were
incredibly important as well.

------
hyperman1
It's an interesting article, and I agree with a lot of it, but I never manage
to run a Windows desktop without local admin.

Some examples: Thanks to corona, tons of people started using USB headphones.
The bloody things need local admin for almost weekly firmware updates. No idea
why, but if you don't do them, a Windows update for the driver will soon break
them.

VPN! If you dare to log on with alternate credentials, it ends your
connection. Hence any admin work on a remote machine can only be done by a
local admin.

Banking software pushes an update and immediately refuses to process any
payment until you upgrade. The (nice) people who should package that upgrade
are swamped and need months, at least if you manage to get a budget to let
them package it. After that, an infosec review might take more weeks.

Printer drivers. No admin? No printing! Bonus points if the vendor decides to
publish the driver in the app store, which is blocked by group policy for
everyone, including admins.

Ctrl+Alt+Del needs local admin to kill a task.

As a bonus, infosec is the biggest hurdle: if they take weeks to approve any
kind of admin access and keep asking bureaucratic questions, only big problems
that burn for weeks are worth solving. We have expensive software sitting
unused because no one wants to fight the battle to get access to fix it. I
would love to drop some privileges, if only I trusted I could claim them back
when shit hits the fan.

------
virgulino
Related talk at Black Hat 2019: "Implementing the Lessons Learned From a Major
Cyber Attack", by the Chief Information Security Officer of Maersk:

[https://www.youtube.com/watch?v=wQ8HIjkEe9o](https://www.youtube.com/watch?v=wQ8HIjkEe9o)

------
terom
As someone with a very limited Microsoft/Windows background, I would be
curious to better understand how these lessons would apply to the Linux world.

What are the Linux equivalents of pass-the-hash, TAM/PAM/PAWs etc?

~~~
tialaramex
I'm guessing you know what a password hash is and roughly how password hashing
works?

Microsoft's systems don't like to send your plaintext password over the
network. Rather than either get rid of passwords or at least _secure_ that so
it isn't a problem any more, they do the hashing on your machine and send the
hash to wherever it needs to be authenticated. This behaviour enables Pass-
the-hash.

Since we can authenticate with the hash, not the password, we don't even need
to know the password. If we break into one system that stores the password
hash for JimSmith, we can tell other systems we're JimSmith, present that
password hash, and they'll accept it.

So malware that gets enough rights on a machine with a bunch of people's
password hashes effectively gets the ability to log in as them on other
machines, on which maybe it can ascend to equivalent rights and get more
hashes, which it can use again, recursively.
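
To make the mechanics concrete, here is a toy sketch in Python. This is not
the real NTLM protocol: SHA-256 stands in for the NT hash (which is actually
MD4 over the UTF-16LE password, and MD4 isn't guaranteed to be available in
hashlib), and all the names are made up. The only point it illustrates is
that the stored hash is itself a password-equivalent credential.

```python
import hashlib

def nt_style_hash(password: str) -> str:
    # Stand-in for the NT hash; the real one is MD4 over the
    # UTF-16LE-encoded password.
    return hashlib.sha256(password.encode("utf-16-le")).hexdigest()

# The authenticating system stores only the hash, never the password.
server_db = {"JimSmith": nt_style_hash("hunter2")}

def authenticate(user: str, presented_hash: str) -> bool:
    # Crucially, the client presents the *hash*, not the password,
    # so knowing the hash is as good as knowing the password.
    return server_db.get(user) == presented_hash

# A legitimate client derives the hash from the password it knows...
assert authenticate("JimSmith", nt_style_hash("hunter2"))

# ...but an attacker who dumped the hash from another machine can
# replay it directly, without ever learning the password.
stolen_hash = server_db["JimSmith"]
assert authenticate("JimSmith", stolen_hash)
```

This is why malware that harvests hashes on one box can hop to the next: the
replayed hash is accepted exactly as if the password had been typed.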

Pass-the-hash isn't a thing on Linux itself; it's a consequence of
crypto-illiterate design in Windows. Arguably that design pre-dates modern
Windows, i.e. it isn't the fault of the people who built the versions of
Windows you use today. On the other hand, rather than just outlaw this
behaviour entirely they've chosen to try to mitigate the worst effects, and
whose fault is that?

~~~
jiggawatts
Linux isn't vulnerable by default, because it's missing features by default.
It has no equivalent of Active Directory, and doesn't use Kerberos or anything
like it by default.

However, it _can_, at which point you're back to the same problem. The
vulnerability is with the protocol, not the operating system.

Modern versions of Active Directory enable strong protections for Kerberos
that stop the majority of Pass-the-Hash and Golden Ticket attacks. However,
this isn't on by default, even in Windows Server 2019 running a domain in
2019 mode, for "compatibility" reasons.

I put that in air quotes because it's an excuse, and this is where Microsoft
has consistently dropped the ball. They refuse to change security defaults,
even when it starts getting absurd, and then lay the responsibility (and
blame) at the feet of their customers.

For example, domain trusts between two Windows Server 2019 DCs will use
NT4-era RC4 ciphers by default, downgrading all AES-capable devices across the
trust.

Similarly, newly created accounts will always default to RC4, allowing
downgrade attacks.

SMB is neither signed, nor encrypted by default.

Until very recently, Windows Server shipped with TLS 1.1 and 1.2, but they
were disabled. Now they're enabled, but so is TLS 1.0!

So on, and so forth.

That's the real issue. It's not that Windows is "crypto illiterate"; that
would be like a person who can't read. No, it's like a person who can read
but _refuses_ to.

~~~
omnibrain
> They refuse to change security defaults, even when it starts getting absurd,
> and then lay the responsibility (and blame) at the feet of their customers.

They changed a lot of security defaults with Windows Vista and literally
(figuratively) everybody dumped on them. It got called the worst Windows
ever, unusable, and names I don't want to spell out, by the public and the
press. That made them reluctant to attempt such drastic measures again. But
at least they disable SMBv1 by default nowadays.

~~~
jiggawatts
Crypto wasn't at all the criticism most (any?) people had of Vista.

My criticism is that they didn't implement the Vista-era crypto _enough_.

In 2020, most Microsoft software doesn't support ECC certificates because
their server products are still written to use the 2000/XP/2003 era crypto
APIs instead of the Vista and later crypto APIs.

I remind you that none of those operating systems is supported any longer,
but apparently for "compatibility reasons" SQL Server 2019, AD FS 2019, and
System Center 2019 can't use elliptic-curve certs. Or TPM-hosted certs. Or
anything at all, really, other than RSA 2048-bit certs stored in software.

IIS can, but that's the lone exception, not the rule.

------
zaat
All in all very interesting; however, some of the strongly stated opinions in
the article lack justification, and that's a pity.

I don't mean that the author is wrong, just that he states his opinions as
facts. For instance, regarding ADFS vs SSO with Hash Sync, the claim that the
latter is much better is presented as an obvious fact, without much
explanation or justification.

Since not everyone would agree, for instance some security teams I have the
pleasure of working with/against, more facts and reasons and fewer assertions
would have served better.

------
HenryBemis
On a geographical note: if you get a chance to work/contract in Denmark,
absolutely do so! I worked in Copenhagen in 2014-2015 and I have only great
memories of the country and the capital.

~~~
x86_64Ubuntu
What about the language? I've heard that the best way to learn Danish is to be
born to Danish parents.

~~~
bmn__
Kamelåså [https://youtu.be/ykj3Kpm3O0g](https://youtu.be/ykj3Kpm3O0g)

------
unixhero
Great article and good bottom line advice.

------
betaby
Not sure the advice is actionable, especially given that decisions are
ultimately made by 'tops' and consulting companies.

~~~
detaro
By that logic, nothing is actionable, because someone else might prevent it
happening. Its being derived from the Maersk incident is probably among the
better arguments for pushing it against that sort of resistance, since that
is something business higher-ups have heard of and are scared by.

------
Iwillgetby
DevOps should be done from two systems.

Dev (local administrator access ok, production access not ok)

Ops (local admin access not ok, production access ok)

~~~
threentaway
Nobody should directly have access to production, it should be controlled via
CD flows which are gated on approvals from other team members or metrics.

~~~
mrweasel
I can see that being somewhat impractical in real life, but you're not wrong.

In the ideal setup, NotPetya would have been less of an issue for Mærsk, had
they only allowed whitelisted software to run on computers controlling
critical infrastructure. It's just a solution very few choose to deploy.

~~~
brazzy
How would that have helped? The finance software that started the breach was
legitimately needed and would have been whitelisted.

~~~
mrweasel
One of two things:

Either the malware modifies the finance software and is executed as part of
it, but the checksum of the software is now different, so it can't run.

Or: The executable malware code is separate and only triggered by the finance
software, which will fail to execute it, because the malware isn't a
whitelisted application.

At any rate, the malware would never be able to escape beyond the
finance-software computers. This means that yes, you could have some issues
with invoicing, new orders and so on, but you most likely wouldn't have had
to shut down ports, because the computers there aren't allowed to run the
finance software.
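
The two cases above can be sketched as a minimal checksum allowlist in
Python. All the byte strings here are made-up stand-ins for real binaries;
real allowlisting products (AppLocker, WDAC, etc.) work on files and signing
certificates, but the hash-comparison idea is the same.

```python
import hashlib

# Hypothetical allowlist: SHA-256 digests of the only binaries this
# machine is permitted to execute.
ALLOWLIST = {
    hashlib.sha256(b"finance-app v1.0 contents").hexdigest(),
}

def may_execute(binary: bytes) -> bool:
    # An executable runs only if its digest is on the allowlist.
    return hashlib.sha256(binary).hexdigest() in ALLOWLIST

# The unmodified finance software passes the check...
assert may_execute(b"finance-app v1.0 contents")

# ...a trojaned copy has a different digest and is refused...
assert not may_execute(b"finance-app v1.0 contents + implanted payload")

# ...and a separate malware executable isn't listed at all.
assert not may_execute(b"notpetya payload contents")
```

Note the limitation: if the attacker compromises the vendor and ships the
payload inside a legitimately distributed update, the new digest gets
allowlisted along with the update, which is exactly the supply-chain case
raised below.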

~~~
jojobas
NotPetya authors penetrated the accounting software vendor and planted their
attack code in a regular update.

------
secfirstmd
Very impressive write-up.

------
Veserv
Why would any of the proposals provide any meaningful protection against this
threat model?

Maersk claims NotPetya cost them $250M to $300M [1]. Assuming a criminal
organization could demonstrate to Maersk that it could carry out an attack
with similar effects, it should be able to extort Maersk for a similar amount
of money. If we discount due to unknown information, ROI, etc., I think it is
reasonable to say that an extortion demand for $100M, assuming a credible
demonstration that the criminal organization could pull off such an attack,
would be an economically sound demand and likely to be paid. A criminal
organization, considering its own ROI, would probably be willing to invest
$30M for a $100M return. For $30M an organization could hire 30 full-time
security specialists for 3 years at SV wages to develop an attack with
similar effects.
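
Spelling out the arithmetic in the comment above (a sketch; every figure is
the commenter's assumption, not verified data):

```python
# The extortion economics from the comment above, spelled out.
damage = 300e6         # upper end of Maersk's reported NotPetya cost
demand = 100e6         # discounted extortion demand
attack_budget = 30e6   # assumed cost to develop a comparable attack

attacker_roi = demand / attack_budget   # ~3.3x return for the attacker
victim_saving = damage - demand         # paying "saves" the victim $200M

# Sanity check on the staffing claim: 30 specialists for 3 years at a
# fully-loaded ~$333k/year Silicon Valley rate is roughly $30M.
staff_cost = 30 * 3 * 333_000
```

Under these assumptions the demand is profitable for both sides, which is the
commenter's point; the replies below dispute the assumptions, not the
arithmetic.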

Does anybody here think that their new system could resist such an attack even
assuming they adopted all recommendations proposed?

Does anybody here think that there is any deployed system in the world that
could resist such an attack?

Does anybody here think that, even adopting and correctly practicing all
practically deployed recommendations of the security industry, a system could
resist such an attack?

All of my research points to no on all of those fronts. And, assuming the
answer is no, then adopting all of the recommendations provides no meaningful
protection to Maersk or any other company in a similar position since it would
still be extremely profitable to attack them. Therefore, any company in a
similar circumstance should probably not be deploying connected systems that
allow this level of attack.

If the answer to any of those is yes, could you provide an example and
evidence that supports that claim? I would sorely like to find a credible
deployed case.

[1] [https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/](https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/)

~~~
brazzy
This is ridiculous. You're basically saying that having any security at all is
pointless because _someone_ will always still be able to break into your
system in _some_ way.

Nothing could be further from the truth. Only ignorant amateurs believe that
security is an all-or-nothing game.

By making your system _more difficult_ to break into, you:

* increase the effort and thus cost for the attacker, thereby reducing the number of opponents that can successfully attack you

* reduce the damage they can do before the attack is discovered and stopped

* make yourself a less attractive target compared to others

> Therefore, any company in a similar circumstance should probably not be
> deploying connected systems that allow this level of attack.

That is simply not an option. You'd increase operating costs far more than the
damage caused by this attack, and at the same time lose capabilities that
customers have come to expect. Most of the cost of this attack came from
having to operate without connected systems.

~~~
Veserv
No. I made a very specific statement about the cost of attack relative to the
benefit of attack for _this class_ of attack. The cost of attack is so far
below the benefit of attack that there is no meaningful defense being
offered. To use an analogy, making a tank from paper provides more defense
than making one from tissue paper; that does not, however, mean that either
is a meaningful defense against credible threats. To give a theoretical
example, which should not be misconstrued as my specific belief about
effectiveness: if the existing techniques could only stop an untrained child,
and the best techniques in the world could only stop an untrained teenager, I
do not think anybody would consider that a meaningful defense.

To go through your arguments in order:

Increasing the effort to attack is meaningful if it reduces aggregate harm in
excess of the cost of implementation. In the specific case of the NotPetya
attack on Maersk, and generalizing to similar attacks, there is no evidence
that any of the proposed measures would meaningfully reduce the probability
of attack or raise its cost enough to make it anything other than extremely
profitable (this is my statement that a $30M cost of attack is profitable).
This is because the benefit of attack is so high relative to the cost of
exploit development and deployment. So, in the specific case of large,
valuable attacks, these techniques have no significant impact, since you
would need to raise the cost of such an attack to around the benefit of
carrying it out.

Reducing the damage they can do would be useful. The damage in the Maersk
attack occurred over the course of a few hours at most, so any defense
against such a technique would need to be prepared in advance or completely
automated. From my reading, damage mitigation usually happens far after the
fact and mostly only prevents the marginal long tail, so I will contend, as
in my previous post, that an organization with $30M in funding would be able
to do the same amount of damage to any system, given the element of surprise
and reconnaissance.

Making yourself a less attractive target is only meaningful if nobody wants
to attack you in particular, if it is not easy to wantonly attack all
vulnerable parties, and if the attacker does not have enough resources left
after attacking all the even more profitable targets, since, as stated
before, it is still very beneficial to attack you. For the first, that is a
bad bet when running a large-scale multinational. For the second, that is
literally what software is good at: mass synchronized automated attacks.
NotPetya is literally an example of a wanton attack; the article mentions
that Maersk was not even the target. They were _accidentally_ attacked for
$250M in damages. Being a less attractive target means nothing if somebody
has a weapon that hits all attractive targets at the same time for no extra
effort. And for the third, that is a terrible bet in the long run, because
profitable targets have money after each attack, so attackers will have money
to spare to go after you. The only comfort is that it may take them time to
hockey-stick to the point where they can saturate the market, but saturate
they will. We are already seeing this in the increase in attacks with
meaningful economic upside for the attacking parties, instead of cheesy
little $200/computer attacks.

Not deploying vulnerable systems is always an option; it just depends on the
cost-benefit analysis, as you state. My thesis is that the cost of vulnerable
systems is, in the long run, significantly worse than almost all companies
realize, and there is no effective solution. My justification is that I
firmly believe, as my questions above indicate, that no company can defend
against a $30M attack, that a $30M attack can easily and credibly cause $300M
in damages, and that, even assuming good-faith extortion (so they do not just
extort more money with the same attack), there are enough distinct $30M
attacks capable of causing $300M in damages that any such company will go
bankrupt, either from paying the extortion or from the extortionists
following through on their threats.

As a thought experiment to go with this: if Maersk offered a $30M bounty for
each unique vulnerability discovered that could cause them over $300M in
damages, do you think they would run out of such bugs first or go bankrupt
first? If the answer is "bugs first", why do they not offer such a bounty,
since each such vulnerability is at least a 10:1 ROI for criminals and thus
would be highly attractive to discover?

Just to get ahead of a common response to the above thought experiment: some
people will respond that companies do not need to offer that much to get such
vulnerabilities reported to them. This indicates that the problem is even
worse than I stated, since the cost of discovery is lower, which means the
criminal ROI is even higher. If they offered $30M for all such
vulnerabilities, they would be more likely to remove the highly attractive
10+:1 ROI attacks that can do tremendous damage to them, which is a great ROI
for the company.

~~~
aj3
You are wrong to focus on the $300M. That's the cost of dealing with the
consequences of an attack, not the cost of the measures that would have
prevented it. So you're right to say that attacking some businesses yields
>10x ROI for an attacker (actually much higher ROIs are quite common,
although with lower thresholds), but only assuming that these businesses do
not invest in proper protections ahead of the attack.

The whole point of InfoSec is to find the right balance of investment in
preventive measures and incident-response teams so that the cost/risk/reward
ratios of an attack make it non-viable for an economically motivated
attacker.

~~~
Veserv
I agree with your statement on the point of infosec. I disagree that any
particular infosec organization is equipped to deal with problems of this
class in any meaningful way. In fact, it is so far off as to be mind-boggling
and probably criminally irresponsible.

To this end, I will clarify what I meant.

I believe that an attack funded on the order of $30M would be able to do
$300M in damages to Maersk even if Maersk adopted best-in-class preventative
measures and implemented them as a primary focus with support from management
at all levels. An attack able to do $300M in damages that Maersk cannot
prevent, after we have assumed it already did the best it possibly can,
should be able to support a $100M extortion payment. This is an ROI of about
3 for an attacker with a high threshold and an ROI of 3 for Maersk, so I
think this is a valid assessment.

So, a counterexample would be an organization where an attack funded on the
order of $30M cannot impact operations by more than 1%. I chose 1% because
Maersk has revenue of $39B, so a $300M attack is only a ~1% reduction in
company output. I hope this clarifies my statement.

I also stated in a different response that I believe the number of $30M
attacks that could do $300M in damages probably exceeds Maersk's profit even
if they paid for all of them at $30M, let alone paying extortion at $100M or
having attackers follow through at a cost of $300M each. Therefore, in the
long run, the potential market size is enough to destroy Maersk.

As a mildly related note, if anybody here is a member of the infosec
community, I have a question:

How much do you think it would cost for a targeted attack to breach and cause
significant damage to the best system you have ever personally observed? How
did you verify that number? Three pentests by three different competent
companies, each paid that amount, that found no vulnerabilities of note would
be convincing. I would likely find other things on that general level
convincing, but I cannot declare them off-hand.

~~~
aj3
> I disagree that any particular infosec organization is equipped to deal with
> problems of this class in any meaningful way.

FAANGs are bigger targets and yet none of them suffered anything remotely
similar.

> An attack, able to do $300M in damages that Maersk can not prevent after we
> have assumed it already did the best it possibly can, should be able to
> support a $100M extortion payment.

That's incorrect. I already mentioned it in another reply, but in short, it
does not matter that the attack caused $300M in damages. What matters is how
much Maersk could possibly save by paying the ransom. And that's a fraction
of those $300M, not even including secondary effects such as potential
problems with the tax office, reputational damage, or becoming target #1 for
every other ransomware crew out there, having just proved that they are OK
with paying out for such "unsolicited penetration tests". It also misses that
even if Maersk paid out $100M, the criminals wouldn't have any way to
actually benefit from all of it, as laundering such an amount of BTC isn't
trivial, and historically that's exactly the step with the highest risk for
the criminals.

> How much do you think it would cost for a targeted attack to breach and
> cause significant damage to the best system you have ever personally
> observed?

That's a non-answerable question, as it does not specify the other resources
available to the attacker besides pew-pew internet weapons, the restrictions
they potentially face, the risk they are comfortable with, and basically all
of these questions applied to the defensive side as well (i.e. at the
furthest end of the spectrum there are certain systems, attacking which would
put you above ISIS leaders on a to-be-droned-soon list).

~~~
Veserv
That is not an unanswerable question at all. To clarify, I am literally
asking for a simplified threat model: take an existing threat model, reduce
it to the cost of carrying out those actions, done. Order of magnitude is
fine. If there are parameters, pick a set within the non-totally-stupid range
and state them. Estimate when reasonable. The question is just me looking for
broad-strokes anecdotes.

~~~
aj3
> To clarify, I am literally asking for a simplified threat model.

That's a simplification beyond any usefulness. You wouldn't be able to do as
much damage with a $100k budget in a few months as a well-staffed national
agency could in a week.

> The question is just me looking for broad strokes anecdotes.

Even Jeff Bezos wouldn't be able to orchestrate a cyberattack that crashes
the International Space Station with astronauts aboard.

~~~
Veserv
I do not care which parameter set you choose, if you actually want to answer
the question. Pick one, state the parameters, and then answer for it.

To illustrate:

How much damage could somebody with a $100K budget do in a few months to the
best system you have personally been involved in?

How much damage could a well-staffed national agency do in a week to that
system?

Of all credible adversaries that could cause $X in damage, take the
20th-percentile adversary by cost: how much would that adversary cost?
The question is also specifically limited to systems the answerer has worked
on, to avoid speculation about practices or a "grass is always greener"
mentality. Did you work on ISS software or software security?

Jeff Bezos has over $100B. Therefore, I take your answer to mean: with $100B,
nobody could orchestrate a cyberattack that could crash the ISS?

The cost of developing the Stuxnet attack has been estimated at $1M by the
former director of the NSA, General Hayden. This is likely an underestimate
that accounts only for the cost of the exploit itself. Kaspersky Lab claims
it cost on the order of $100M to develop and deploy. So let's take the high
number and multiply it by 10, leaving the cost of disabling the secret
air-gapped Iranian nuclear weapons program at $1B. Do you think the cost of a
critical attack against the ISS is 100x higher than a critical attack on the
Iranian nuclear weapons program?

Please avoid limiting your imagination to a direct attack on the ISS itself.
There are multiple entities which, when attacked, would likely allow one to
de-orbit the ISS and kill all the astronauts. Please verify that none of
these could be achieved for less than $100B: taking over a rocket to the ISS,
taking over a rocket to LEO, active satellites in the correct orbital plane,
decommissioned satellites that are no longer tracked but have enough fuel to
intercept, scientist laptops that connect to the ISS network, over-drawing
the laptop batteries so they blow up while on the ISS, etc. This also ignores
more clever tactics you could pull off with $100B, such as buying a company
that directly supplies critical needs of the ISS and inserting backdoors into
the software.

