BlackCat ransomware group implodes after apparent payment by Change Healthcare (krebsonsecurity.com)
146 points by todsacerdoti 6 months ago | 162 comments



It was always a bad idea to pay ransomware groups, but I'm surprised they'd eat their own. I guess "There is no honor among thieves" after all.


Paying is a bad idea when you're buying the improbable (them deleting their own copies), but a valid choice for a one-time result (unlocking your data on your device).


That's typically what backups are for. No business should be so negligent as to ever need to pay a ransomware group in order to get their own data back.


As a data point, the Toronto Public Library decided to take the "restore from scratch" approach after being hit by ransomware, and it took four months before books could be borrowed again. Now, I'd expect a library would move slower than an IT-heavy company, but there are substantial costs either way and only one of them is probably covered by their cyber insurance.


So that’s what happened. I was wondering why they were having so much trouble. Their wording was evasive, and until now I thought it was somehow related to Covid closures, without really understanding how Covid would have had that impact.


Four months wtf, they should have offsite backups maybe every 15 days at most, but I can understand how even that can be a headache to restore from...


But if they are that negligent, it may make sense to pay. For the company, that is. Not from a more macro view.


That I can agree with. If the company has already screwed up so badly they're stuck paying the price and hoping for the best.


In this case I would include "getting their systems back online"


There could be times when there's an immediate need to get everything back up and running, but I'd be willing to bet that in at least some situations the time spent going back and forth with the extortionists to arrange the payment, then gather the funds, and then wait for them to verify that they have your money could take longer than it would to just re-image a new server, reset some passwords, and copy over the needed data from backups. It's the same stuff they'd have to do either way.

My guess is that companies that have their shit together enough that they could get back to a "We're totally compromised and vulnerable, but at least we're online for now" state fairly quickly without paying up are a lot less likely to have ransomware problems in the first place.


It may also be about not wanting their data to be made public.


Yeah...


FWIW they have a pretty good reputation for releasing data after payment; there's not much the companies can do ex post if they don't have a backup of their valued data.


Historically it's been a reliably good idea to pay ransomware groups.

This is the worst-case scenario and it's ridiculous.

I hope it prompts reforms in that industry such as using smart contracts as escrow to handle payment to affiliates

a direct payment to an EOA is just sad

Affiliates, moderators, and the ransomed should demand a model smart contract behind every address presented for payment; the cybersecurity industry (and even the feds) could help craft this and give more confidence in the outcome.


Are… you really suggesting that the real problem here is not criminal extortion but that the payment approach isn’t safe enough to ensure their criminal associates get paid, and that this is what the feds should help improve?


I'm not invested enough to do the napkin math, but I wonder what costs more: ignoring security and paying the ransom, or investing in the right strategy to prevent being held ransom. If I consider my own experience in tech for the past few decades, I have to think it could go either way. (This says nothing of the damage releases of private data can do, which of course makes investing in the right strategy the correct thing to do, no matter the cost.)


From a cost perspective it'd be easy to be tempted into thinking you can play the odds and come out ahead. It's not like getting infected with ransomware is something that happens all the time.

Fortunately not being negligent when it comes to security does a lot more than just protect you from ransomware extortionists. It can make it possible to easily recover data after all kinds of incidents (human error, software bugs, hardware failures, fires/floods, etc) and also help keep you protected from other types of viruses/malware, malicious employees, corporate espionage, whistleblowers, and anyone else who would take your data and then actually use it instead of just demanding payment to make it go away. It can also prevent the reputational harm a company can suffer by having a data breach go public.

Good security is one of those things that could easily save a company way more than it costs, but the costs are immediate and non-trivial, and companies seem to love to cut corners even when they know it'll screw them down the road, because they're pathologically short-sighted and it's hard to brag about doing something that didn't have an immediate and obvious impact on the next quarter's bottom line.


No, that is not what you need at all.

You are correct in saying it has historically been the smart choice to pay a ransomware group, but that is only the best decision for that individual company; as a society, we are better off when we make a blanket policy to never negotiate with terrorists (or ransomware groups). The best policy for individual companies is the opposite of the best policy for society as a whole because this is a prime example of a prisoner's dilemma: everyone is better off if no one pays a ransom, but any individual company is best off paying the ransom.

So instead of making ransom payments reliable, what we should do is make it nearly impossible to reliably pay a ransom without the ransomware group being caught in the end.


And in the meantime we can streamline the payment process

it will help many low trust business relationships and entire industries


I generally do not think it is wise to pay out ransomware demands, but in this case I do not blame Change for doing so.

The data involved could severely hurt millions of people if released and not having that data was actively hurting people - potentially putting their lives and health in danger by delaying access to medical care and prescriptions.

(Full disclosure: I am an employee of a competitor to Change Healthcare)


What's the simplest way to prevent this from happening these days? Daily Backups off site?


In this particular case, it isn't an issue of 'data loss' which could be recovered from good backups.

It's a problem of 'data leakage' where the bad guys have a copy of corporate data - supposedly 4TB of personal medical information - and threaten to release it to the public which can cause all sorts of reputational or other damage.

Backups don't help much in this situation; you need to convince the attackers to delete the data they copied from your network, usually via lots of money. But even then there's no guarantee they'll actually delete it, and they may extort you for more money in the future with the same data.

Should one trust a criminal to keep their word?


Should you trust them? Well, sort of. It depends on the group. If the group has a rep, then possibly yes, because they trade on making good on the promises. A group that takes the money and still does whatever bad thing they promised not to do isn't going to get paid when word gets out.

The other component is that these situations are usually negotiated by lawyers that are hired by insurance companies that offer cybersecurity policies. They tend to have working relationships. It's crazy, but these are actual businesses that do, in fact, have rules of engagement and often abide by their guarantees. That is part of the problem, I suppose...


>In this particular case, it isn't an issue of 'data loss' which could be recovered from good backups.

Do we know that they had good backups in this particular case? The fact that their systems are not yet back online makes me wonder if maybe they didn't.


Despite the slant of this article, yes.

These groups work on reputation. If prior ransoms do not result in seemingly perfect dealing from their end, new orgs won’t consider it.

The significance of the rug pull on affiliates confirms this. It shows that fair dealing, even with conspirators, is the norm, and that this example is a major deviation.


Right but isn't that why stuff is encrypted? Is there even a way to guarantee this doesn't happen?


You can't encrypt all your data at all times. It has to be decrypted somewhere along the line to be used, and hackers can gain that access as well unless the security is truly extremely tight.

Security is not anywhere close to "good" in most corporate environments, however, and many things are still stored in plaintext that should not be (e.g. passwords), let alone data that is merely "private".


Encrypting data while at rest (in storage) as well as in-transit is the way to go. The servers on your infrastructure should be considered hostile at all times.

Put on a hat and pretend you’re a bad actor. Give yourself access to the server where your most important data is stored.

Now look around. Is there anything you can do to extort money?

You could encrypt/destroy the data. (A backup solution saves you here).

You could exfiltrate the data (download or upload to a remote server). What can you do with this data if it was encrypted at rest? Not much.

What else could you do on this server while you have access? This is where things get interesting. Can you force the application to decrypt the data or dump the data somehow? Unlikely, if the cert management is done properly.

The thing is, the majority of organisations do not encrypt data at rest. Databases are not encrypted, hard data is not encrypted. If this were not the case, we wouldn't be hearing about these data leaks.
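
As a rough illustration of the field-level approach (a minimal sketch, assuming Python's cryptography package; the field names are made up and real key management belongs in a KMS/HSM, not next to the database):

  from cryptography.fernet import Fernet

  # In practice this key lives in a KMS or vault, never beside the data it protects.
  key = Fernet.generate_key()
  fernet = Fernet(key)

  def encrypt_field(plaintext: str) -> bytes:
      """Encrypt one sensitive column value before it is written to storage."""
      return fernet.encrypt(plaintext.encode("utf-8"))

  def decrypt_field(ciphertext: bytes) -> str:
      """Decrypt a column value only at the point of use."""
      return fernet.decrypt(ciphertext).decode("utf-8")

  # A stolen dump of this table now contains only ciphertext for the sensitive column.
  record = {"patient_name": encrypt_field("Jane Doe")}
  print(decrypt_field(record["patient_name"]))  # Jane Doe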


You are emphatically, logically, ethically, technically, securely, and in all other ways correct.

Yes and:

> Encrypting data while at rest (in storage) as well as in-transit is the way to go.

All PII must be encrypted at rest at the field level.

Just like how passwords are properly stored. This is not rocket science.

The book Translucent Databases demonstrates this technique for common use cases. Highest recommendation.

https://www.amazon.com/Translucent-Databases-Peter-Wayner/dp...


> You could exfiltrate the data (download or upload to a remote server). What can you do with this data if it was encrypted at rest? Not much.

How do I encrypt a database at rest? How does it work?

Say, I run a hospital and I want to write patient data to a database. Do I have to decrypt the whole database before I add new data? Each time? Do I also have to decrypt the database each time I query data? How does that work when two doctors want to access the database at the same time?

I assume that constant decryption and encryption of large amounts of data adds significant overhead. So in practice, while data is encrypted at rest, most of the time the data isn't resting, but actively loaded, used, and unencrypted.

And now, when a bad actor gets access to that live running database, they can exfiltrate the data.


Please see my sibling comment about Translucent Databases.

Additionally, proper protection of medical records will require globally unique identifiers (aka PID, MRN).

As you know, today, medical record PII must be stored as plaintext to allow record linking across heterogeneous orgs. This is bad.


This isn't quite right.

You don't need to decrypt the entire payload at any time.

You decrypt parts of the data.


As others have pointed out, encryption has never worked that way. You can't just encrypt data then throw away the key and somehow have it be useful. You have to store the key. Which if attackers can compromise your data store, they can compromise your key storage infrastructure.


Proper password files store the salted and hashed key, not the key.

The practical downside is that if the user forgets their password, they lose access to their account. So for medical records, the original key (password) requires offline paper (or equiv) backup.


This doesn't seem to be the right way to handle keys for bulk data at rest, even assuming you were doing it to enforce patient approval.

A password wouldn't be appropriate for direct use as an encryption key. So I guess you could hash the password and use that for the key for data encryption and then hash it again and store the second hash for authentication. Salting appropriately of course.

That feels weird to me; I wouldn't trust that I'd seen every angle on it. Regardless, it would drastically lower the entropy of the key space that way unless you have downright draconian password requirements.
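
For concreteness, a minimal sketch of the split derivation described above (parameters and names are illustrative, not from any particular system): derive the data-encryption key from the password with a slow KDF, then derive a separate verifier from that key for authentication, so the stored verifier can't decrypt anything.

  import os, hashlib

  def derive_keys(password: str, salt: bytes) -> tuple[bytes, bytes]:
      # Slow derivation; this output is the data-encryption key and is never stored.
      enc_key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
      # Second, cheap derivation over the first; only this value is stored for login checks.
      verifier = hashlib.pbkdf2_hmac("sha256", enc_key, salt, 1)
      return enc_key, verifier

  salt = os.urandom(16)
  enc_key, stored_verifier = derive_keys("correct horse battery staple", salt)
  # Persist (salt, stored_verifier); keep enc_key only in memory while encrypting/decrypting.
  # The key-space concern above still applies: the key is only as strong as the password.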

How did you implement this in the POC you spoke of?


I think I crossed some wires here, confusing encryption key with database (uuid) key. Let me try again.

Very briefly, a proper password store must not use plaintext. Rather it must store hash(password + salt).

Similarly, PII (name, MRN, etc) must also be salted and hashed.

What Translucent Databases adds is showing how to cleverly use those hashed values for common use cases. Like how to use hashed MRNs as indexes (keys) to other data such as lab results.

It takes some getting used to, but designing schemas this way will start to feel intuitive after a bit.
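
To make that concrete, here is a small sketch of the pattern (an illustration of the idea rather than the book's exact recipe; the schema and the keyed-hash choice are assumptions): the lab-results table is keyed by a one-way hash of the MRN plus an org-wide secret, so the table never holds the MRN itself.

  import hashlib, hmac, sqlite3

  SECRET_SALT = b"org-wide secret kept outside the database"  # hypothetical; keep in a vault

  def opaque_key(mrn: str) -> str:
      """One-way index value derived from the medical record number."""
      return hmac.new(SECRET_SALT, mrn.encode("utf-8"), hashlib.sha256).hexdigest()

  db = sqlite3.connect(":memory:")
  db.execute("CREATE TABLE lab_results (patient_key TEXT, test TEXT, value TEXT)")
  db.execute("INSERT INTO lab_results VALUES (?, ?, ?)",
             (opaque_key("MRN-0012345"), "HbA1c", "5.4%"))

  # A caller who legitimately knows the MRN recomputes the key and finds the results;
  # someone who steals the table sees only opaque hashes next to lab values.
  rows = db.execute("SELECT test, value FROM lab_results WHERE patient_key = ?",
                    (opaque_key("MRN-0012345"),)).fetchall()
  print(rows)  # [('HbA1c', '5.4%')]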


I am trying to figure out how hashes being used as indices and password storage has anything to do with the storage and encryption of bulk data at rest. Can you elaborate?

Edit: I see you aren't actually encrypting it. So you are as the post said, just throwing away the keys (in a sense). You are talking about storing and indexing metadata.


Translucent Databases applies the design of a properly implemented password store to additional use cases. Such as hiding PII.

Sensitive data (passwords, PII) is absolutely encrypted. I apologize for assuming everyone here knows how passwords should be stored. My reference to "hash" is shorthand for "secure one-way hash".

Please refer to the Translucent Databases book for any further questions you may have.


I was also pointing out hashing is not encrypting. They are not the same thing. You don't "encrypt" passwords by hashing. This is probably why your wires were crossed. It's just terminology, I get what you are saying though.

You can read here https://www.google.com/search?q=hashing+vs+encrypting to further understand.


Medical records are not stored for the patient to access. They are stored for the provider to access and in the event of a legal dispute, for the courts to subpoena.


As a patient, do you think that's satisfactory?

Have you seen patient portals like (Epic's?) MyChart?

Aside: During the mid-aughts, I created a patient portal POC, to complement our physician portal product. Customers were very interested. Ditto the clinical (vs diagnostic) quality web-based DICOM viewer I made. (Sadly, the 2008 meltdown /dev/null'd all of our work.)


Authorized members of a patient's care team need to be able to directly share clinical data with each other. Trying to route everything through some sort of patient portal isn't practical, and would lead to delays in care or quality problems. Patients should have access to all of their data but the patient portals are spokes, not hubs.

Patient portals tied to a single vendor like Epic MyChart aren't great, but they work well enough for patients who receive all of their care through a single integrated health system. Attempts to create universal patient portals not tied to a particular payer or provider organization have generally failed because no one wants to pay for them. Apple is trying this again but they only have interfaces with a limited set of data sources.


Yup. We first shipped a Physician's Portal and merely demoed the Patient Portal POC. Our patient facing product was way too early for the market (late-aughts).


as a patient I'd prefer all my records destroyed the minute the doctor is done looking at them.

Almost all of my PII and medical information is in the hands of bad actors because of this needless retention


Ah. In that case, you should employ care givers who accept cash.

Like a shaman. Or a doctor whose license has been revoked.

Good luck!


In reality, good basic practices mitigate this risk to almost nothing.


It's either easy and common to protect data or easy and common to steal it. Reality keeps providing evidence of the latter. Can't have it both ways.


I can't imagine how. Please tell me.


No real way to guarantee it, though you can go to great lengths to reduce the chance it happens.

Because it gets decrypted at some point.


That helps with "we've encrypted your data; pay us for the key" but doesn't help you with "we've made copies of your patient records, leadership's emails; pay us or we publish it all".

The phrase to describe this is double extortion.

As for your question, https://www.cisa.gov/stopransomware is a decent start, but it's a complicated issue. In short, if a pentester can get inside your environment and gain privileges, so can an attacker. You want to slow down attackers enough to buy time for detection and response capabilities.


Hm - is the expectation that this stuff isn't encrypted at rest?


Since the user's eyeballs don't have built-in decryption, there is a window of opportunity to steal information after it leaves encryption at rest and encrypted transport. Hopefully vendors will be able to fix this defect by using Neuralink.


You can also solve this problem by air-gapping any part of the system with physical human interaction.


If it’s at rest encrypted you generally do not manage to get 4TB of data before anyone notices.


> If it’s at rest encrypted you generally do not manage to get 4TB of data before anyone notices.

If you're talking about your own personal hard drive in your home, yes.

But in this case, we're talking about a huge company conducting literally billions of database queries for tens of thousands of clients an hour.

You only have to have a listening post in one small part of the system that can see things in plaintext for a short time in order to accumulate 4TB in a matter of days.


Looks like the issue has to do with claims processing between pharmacies and UHG, effectively data in flight not at rest.

"We estimate more than 90% of the nation’s 70,000+ pharmacies have modified electronic claim processing to mitigate impacts from the Change Healthcare cyber security issue; the remainder have offline processing workarounds," Mason said.

https://www.bleepingcomputer.com/news/security/unitedhealth-...

> The pharmacy network, which connects pharmacies and PBMs, is in final end-to-end testing with our partners. We anticipate that our Change Healthcare Pharmacy network will be back online for the vast majority of submitters as soon as Thursday.

https://www.unitedhealthgroup.com/ns/changehealthcare.html


I think you'd be surprised.

How many organizations have their "encrypted at rest" data in a cloud provider account that's set up to give all developers (or at least all production support engineers) access to decrypt the data, maybe even transparently?

How many have the "encrypted at rest" data on servers that are set up to give all administrators transparent access to the data?

How many only allow application service accounts access to decrypt the data directly, but the credentials for those service accounts are stored as Kubernetes secrets that anyone in IT can read?

Etc.
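
For contrast, a rough sketch of scoping decryption to a single application role rather than "everyone with cloud access" (assumes AWS KMS via boto3; the key ID and role ARNs are hypothetical, and this is one possible policy shape, not a complete hardening guide):

  import json
  import boto3

  key_policy = {
      "Version": "2012-10-17",
      "Statement": [
          {   # Key administration goes to a dedicated admin role -- no Decrypt permission.
              "Sid": "KeyAdministration",
              "Effect": "Allow",
              "Principal": {"AWS": "arn:aws:iam::123456789012:role/kms-admin"},
              "Action": ["kms:Describe*", "kms:Enable*", "kms:Disable*", "kms:Put*",
                         "kms:Update*", "kms:Revoke*", "kms:ScheduleKeyDeletion",
                         "kms:CancelKeyDeletion"],
              "Resource": "*",
          },
          {   # Only the production service role may decrypt or generate data keys.
              "Sid": "AppDecryptOnly",
              "Effect": "Allow",
              "Principal": {"AWS": "arn:aws:iam::123456789012:role/prod-app-service"},
              "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
              "Resource": "*",
          },
      ],
  }

  # Assumes this runs as the kms-admin role, which the new policy still allows to manage the key.
  kms = boto3.client("kms")
  kms.put_key_policy(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
                     PolicyName="default", Policy=json.dumps(key_policy))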


I can guarantee you that UnitedHealth Group (Change Healthcare) doesn't give regular developers the credentials to decrypt production data, or access production environments at all.


probably not "developers," probably "data scientists"

Executives at UnitedHealth Group told workers to mine old medical records for more illnesses, to identify diagnoses of serious diseases that might have never existed, inflating bills paid by the federal government's Medicare Advantage program.

https://en.wikipedia.org/wiki/UnitedHealth_Group


Presumably they did not break into the data center and lift a bunch of hard drives. Instead they compromised a server which had credentials to read the data in a clear format.


The whole problem has historically been that any "simple" solution lacks nuance. I've been across a number of these sorts of environments. Just look at your answer: Daily backups off site.

Sure, so I've seen this. There's a Veeam server with a backup repo in the main datacenter. And it takes backups to a server "offsite". Let's forget for a moment I've got an environment that takes six days to do a full backup.

Then one day the attacker gains a vendor's Teamviewer account and finds themselves on a server console that's been left logged on as a Domain Admin. And they open Windows file explorer, browse to \\offsiteserver\backups, and press "delete". No one cares if it's offsite, it's gone.

OK, so when you said "offsite backups" I'm sure you meant something not contactable from an average network machine right? Maybe ACLs only allow access from the backup server itself. Well fear not, the attacker can still just RDP to that Veeam server and repeat.

OK OK so what you really wanted was a properly isolated network for all the backup content and they can't make a connection from the general network to it right? Once again fear not, there's a Group Policy deploying this ransomware, which means it's going on every domain joined machine including the backup server.

Look, now we're getting somewhere: there should be an entirely separate administration domain for backups and infrastructure. Well, firstly, someone from Microsoft will yell at you because "ESAE" is deprecated, and some overpriced consultant is about to explain to management that you're incompetent because you separated the networks (from personal experience). Fortunately none of that matters, because the one guy with a popped Domain Admin account on the general domain used the same password on the administrative domain against policy, and the attacker spreads anyway.

Yes you've got options here. For example someone might mention "Veeam Immutable Storage", which is pretty effective. But now you'll find the iLO for that server still presents a forgotten entrypoint to wiping it.

There's absolutely ways to do this properly but it's never simple, and the further you go down the hole the more likely you are to hit pricing or political stumbling blocks.


Opsec is expensive and requires discipline. The average for profit organization can at best perform box checking security theater. Any real hindrance to their business goal is gonna get diluted or sidelined into irrelevance.

Security is not a technical problem, it's a sociopolitical problem. That's my main gripe with the business of computer security (even the name cybersecurity rubs me the wrong way... cybernetics is about control systems). All this hard selling of seemingly highly advanced stuff, all these script kiddies showing companies how much vulnerability is peppered all over their systems. And in the end you can call people up and they will cheerily give away credentials against vague verbal assurances.


I'm right there with you. It has been the bane of my existence trying to sell "let's enforce MFA" when you can command far more authority and respect by saying "let's buy Security Copilot".


It's frustrating to say the least. Venting on a forum where I don't get stared at like I've grown a second head for daring to think about more context than my own deliverables is my single outlet...


It's a small part of the whole Disaster Recovery Plan. Having a backup isn't worth a lot if you don't practice the recovery process and can't bring it back in a timely manner.

For the data exfiltration issue, there's not much you can do other than paying and hoping for the best, and it depends on their reputation of keeping their word.

You won't get paid much and last long in the ransomware world if you have a reputation of leaking the data when paid.


The scenario at hand is data leakage; accordingly the prevention strategy is by having a more secure storage system.

One way of addressing that is by using a secure vault for sensitive data storage. This is beyond the skillset of most backend engineers, but new products such as Piiano may bridge the gap. (Full disclosure: I'm a friend of one of the people behind Piiano.)


The city of Hamilton, Ontario is going through this right now in real time. I think they only admitted today that it was a ransomware attack. I'm very curious to hear if they do any kind of public postmortem, as other mid-level cities in the country are just as vulnerable.


Layers and looking at what the business needs to achieve for business continuity and recovery. How quickly do you need to restore services after a major incident (natural disaster to internal user with God access)? What is an acceptable amount of data loss at any point in time? What are your legal obligations? Then finally, compare the financial/social/legal impact of those events against what the company will allow against what it can afford. This same conversation should tie directly into your Service Level agreements/uptime goals.

Simple? Nah, doing this properly is an entire department/role (CISO) plus other groups.

Some quick practical basics.

  - The fewer things, the smaller the potential attack surface.
  - NIST publishes some good starting points https://www.nist.gov/cyberframework
  - Shared credentials are evil.  All admins should be operating off their own accounts
  - MFA and SSO
  - A backup isn't a backup if you haven't validated you can restore it
  - Replication isn't backup.  You need something that is immutable and has history (one possible sketch follows this list).  To start I would suggest a nightly backup, with each backup retained for 45 days at minimum.  GDPR complicates this with the right to be forgotten, but this is a starting point.
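
One possible sketch of the "immutable with history" point above, assuming backups land in S3 (the bucket name is made up; Object Lock has to be enabled when the bucket is created, and it brings versioning with it):

  import boto3

  s3 = boto3.client("s3")

  # Object Lock can only be turned on at bucket creation time.
  s3.create_bucket(Bucket="example-backup-vault", ObjectLockEnabledForBucket=True)

  # Compliance mode + 45-day default retention: backup objects written here cannot be
  # deleted or overwritten during that window, even with full S3 API credentials.
  s3.put_object_lock_configuration(
      Bucket="example-backup-vault",
      ObjectLockConfiguration={
          "ObjectLockEnabled": "Enabled",
          "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 45}},
      },
  )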


It's crazy that Ramp doesn't offer or require escrow.

it is very easy to do this with crypto and much cheaper than payment processor or attorney solutions

and you can do it with any money source lol


It's a funny story, but I don't know that we'd be able to verify the difference between this story and "Russian hackers infect healthcare network, take money, make up a funny story for deniability and then wreak havoc on critical US services anyway, secretly get medals from Putin"

These kinds of organizations sit always somewhere on a blurry line between "state-tolerated" and "state-affiliated" in the first place. Given current geopolitical circumstances it wouldn't surprise me if they've been given a green light to hit more significant targets.


This seems pretty far fetched, there are way too many governments who would like some extra cash, and way too many normal people who would like some extra cash.


Not far-fetched at all. Governments already get individuals into large corporations to steal IP.


So Krebs's comments and a bunch of others so far talk about "don't trust criminals" and "no honor amongst thieves" etc., but putting aside the atrocious moral angle and musing on this a bit, I think there's a broader point that applies to legal communities as well, which might be something along the lines of:

>Beware the change from an ITERATED prisoner's dilemma game to a SINGLE game.

Or perhaps alternatively to remember the difference between a "salary" and an "exit". Particularly when there is a long history of iteration people have gotten used to. The interesting HN discussion that comes most immediately to mind was last year's implosion of Silicon Valley Bank and the Stratechery article [0] about it. Classic game theory points to major differences in any ecosystem where the players are playing iterated vs single games. Iterated games encourage thinking about the longer term health of the overall ecosystem, not burning bridges, etc. The optimal strategy isn't pure defection but more cooperate+punishment.

However, the ransomware ecosystem, like the startup ecosystem, seems to have followed an arc from small to large where the amounts of money start to pass an inflection point where they hit "set for life with a single payday" amounts. Ie, groups can chase "unicorns", hit one, and then at least from a pure economic standpoint potentially burn all bridges and reputation and be done forever. That in turn pushes towards short term thinking and extracting maximum value as fast as possible even at the cost of future returns.

It being a black market certainly accelerates this further, because a lot of the traditional controls that help push back towards iteration (from information symmetry to flat-out physically coercive criminal punishment) are missing. But at least in terms of idle hot-take contemplation, it seems to me there are parallels in a lot of different industries through history.

So yeah don't pay ransomware anyway due to it funding a host of evils, encouraging more, governments should punish companies that do etc. But from a pure cold realpolitik standpoint it's perhaps also worth thinking about what the person on the other side can do afterwards. If a company is effectively paying them a "salary" class money, as if it was a $1000/hour pen tester, so a 100 hours of work attack is $100k, they may be more likely to be treating it as a "job" where they'll be doing it again and again. Which is bad here, but also perhaps more reliable, they have reputational skin in the game and an interest in the "health of the ransom ecosystem". But if the company is paying them "founder exit" class money, tens of millions of dollars, the odds of someone being ready to take the money and run, and having the amounts needed to make that possibly work, are probably going to be higher?

Anyway just interesting to think about a bit.

----

0: https://stratechery.com/2023/the-death-of-silicon-valley-ban...


> Dmitry Smilyanets, a researcher for the security firm Recorded Future, said BlackCat’s exit scam was especially dangerous because the affiliate still has all the stolen data, and could still demand additional payment or leak the information on his own.

  It is wrong to put temptation in the path of any nation,
      For fear they should succumb and go astray;
  So when you are requested to pay up or be molested,
      You will find it better policy to say:—

  "We never pay any-one Dane-geld,
      No matter how trifling the cost;
  For the end of that game is oppression and shame,
      And the nation that plays it is lost!"
https://en.wikipedia.org/wiki/Dane-geld_(poem) & https://en.wikipedia.org/wiki/Danegeld

… now imagine if they'd put that $22M — an amount that would fund my entire team for the remainder of our lives — into engineers.


It should be against the law to pay a ransom for data.


Ransomware would become a death sentence for the business if this applied, which the US has no appetite for. We even let critical infra off the hook on improving its cybersecurity [1] [2] [3], because it is expensive and hard. The asymmetry of cybersecurity makes effective defense challenging for even the most resourced orgs [4]. You have to win every single day, against social, phishing, auth/identity, and vulnerability attacks throughout the stack. They only need to win once.

(head of infosec, holds tabletop exercises with legal counsel on a cadence as part of ransomware insurance requirements)

[1] https://www.cybersecuritydive.com/news/epa-rescinds-cybersec...

[2] https://www.epa.gov/system/files/documents/2023-10/action-me...

[3] https://www.epa.gov/system/files/documents/2023-08/2023.08.0...

[4] https://arstechnica.com/security/2023/09/hack-of-a-microsoft...


Doesn’t the existence of a ransom “out” put a cap on how much money/seriousness a company willingly puts into infosec? Why would a company invest $22M into security if they can just pay criminals when they get owned?

If ransom was off the table, maybe they’d be motivated to actually secure their data? I don’t know—I’m not in infosec. It’s probably not that simple.


Correct. You calibrate your budget to your risk appetite (board/C-level tolerance, industry specific compliance requirements, civil considerations, etc). Every company puts a budget on how much they're willing to spend, as resources are finite. Even the US DoD has a budget, there are limits. We risk accept what we deem within our risk tolerance, or too expensive to derisk.

I think on HN, there is this belief that you can use incentives to force organizations to have perfect security, which does not exist. Employees are human, people make mistakes, budgets constrain staffing as well as control implementations and operations; there are simply limits to what you can do. You can use policy and incentives to encourage good/best behavior, but failures will still occur. The goal is attempts at desired outcomes, measuring those outcomes, and iterating; not 100% success (as that is impossible).


> how much money/seriousness a company willingly puts into infosec? Why would a company invest $22M into security if they can just pay criminals when they get owned?

Because it's not a one-time cost. If attackers know you have weak security and deep pockets they will persist.


“We do not negotiate with terrorists.” - Richard Nixon

Does it work? Depends on who you ask. https://www.chathamhouse.org/2022/01/we-do-not-negotiate-ter... says that individuals (in the case of corporate ransomware - corporate entities) end up paying and not reporting the kidnapping:

“Historical evidence from Colombia and Italy shows that outlawing ransom payment has various adverse consequences.

Where ransom payments are illegal, victims’ families have no state support, while reporting of the kidnapping goes down and understanding of its prevalence is diminished.”


It's a crime in Japan to pay protection money to Yakuza. It seems to be working. They are a shadow of their former selves.

You can mitigate adverse consequences. Punishments for child kidnapping used to be severe, but then abductors would just kill the hostage since they had little more to lose. Today's sentences are next to nothing to encourage surrender.


Or simply make exchanging bitcoin for anything of value illegal. It makes extortion of all kinds too easy, and company data is just the tip of the iceberg.

I was in Italy recently, and saw articles about the epidemic of kidnappings there in the 70s. It won't be long before organised crime figures out how to use crypto to bring back the glory days.

Killing bitcoin would shut down an enormous illegal economy overnight. And stop the crazy electricity consumption at the same time. Maybe you can help me here, but I'm having difficulty thinking of a single real downside.


> shut down an enormous illegal economy overnight.

Despite not owning any Bitcoin, I find it quite comforting to know that there is a currency that exists outside of the purview of a central bank or a government that can devalue or outright take the accruement of my labor on a whim.


Then what's stopping the criminals from going back to good ol' wire fraud like in the 90s and 2000s?

PS. All of the smart ransomware groups are not demanding payments with Bitcoin anymore, they are using another cryptocurrency called Monero. It turns out that Bitcoin is actually traceable by governments via its public ledger, but Monero is a private currency that can't be traced, hence why the IRS posted bounties some time back to encourage people to break Monero's obfuscation.

The only gangs that are still demanding Bitcoin are the less-educated and savvy ones.


Can't they receive the money in Bitcoin and then run it through Monero to "clean" it?


Monero can be de-anonymized relatively easily.


source?


Oh yeah... there was no ransom business before Bitcoin.


Policy is quite far from that: ransoms are even tax deductible.


Are there no legal consequences for knowingly paying money to a known criminal group based in Russia? What about the existing OFAC sanctions?


I don't really think companies do KYC on ransomware groups. The government just does not prosecute it.


Hiring bounty hunters to hunt down the perpetrators should also be tax deductible then.


The stories I've read about these ransomware companies are wild. They have whole customer service departments to help you easily pay your ransom. They operate like a legit business.


I'll make an exception for payments with traceable money made on behalf of the FBI.


Or better yet, pay it to Ukraine, which is at war with the governments allowing this.


I would agree, except I don’t think it would keep people from paying regardless.


You'd end up with a bunch of shady "data recovery" firms that may or may not be related to the ransomware crews.


War-gaming this, what if it were legal to pay out bounties with the ransom amount as a war-chest to collect scalps of hacking groups, or damage their reputation or operation in some way?

This tit for tat type response would seem to be more consistent with how governments respond to terrorism, so I'm assuming it would be better to deter future hacks.


Attribution is easily deflected. You really don't want to recruit mercenary vigilantes to respond to a false flag operation.

> This tit for tat type response would seem to be more consistent with how governments respond to terrorism

Lol. Not a selling point these days.

The US has always had a very strange policy of criminalizing hacking, regardless of intent.

Places like Russia and Israel look the other way as long as the target is foreign, and we outsource our own phone forensics to the latter (Cellebrite). Thus, Israel has a better understanding of our own vulnerabilities than we do.

So you never know who you're up against given some ambiguous heuristics. As retribution, you might end up inadvertently attacking an "ally." It's safest to keep us disadvantaged.


Paying ransoms to ransomware groups needs to be made illegal and prosecuted under the RICO regime.


Try visualising that on an individual level: someone points a gun and a person gives up his wallet. The police come and arrest the victim under RICO.


There is no threat of violence involved?

At the very least it is participation in tax fraud to buy protection from a criminal IT-gang. I guess they don't pay VAT?


[flagged]


In 2020, ransomware was $29 million of $4.3 billion in online scams. Digital currencies were involved in $246 million of that $4.3 billion. The rest was wire transfers, gift cards, checks, ACH, and cash. Perhaps we should ban those, too.

Blockchain analysis shows less than 1% of transactions are considered illicit.

Unless this data is wrong or misleading, then it stands to reason the approach you suggest won't make a difference.

https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pd...

https://www.chainalysis.com/blog/2024-crypto-crime-report-in...


2020 is ancient, and the second link doesn't look impartial. Denying that cryptocurrency facilitates criminal activity is like denying the sky is blue.


So do cash, gift cards, etc. Should we ban everything? You can use regular banking for big operations as well. Should we ban that too, then? But then nothing is left :)


Try going to your bank (even as a business with a lot of liquid assets or even cash on hand) and tell them you want to send 22 million dollars to an account.

Try going out and buying, then sending, $22 million worth of gift cards.

Crypto clearly makes the whole process far quicker and more anonymous.


I agree that gift cards are also overrepresented in frauds, bring little value to society and should be banned.

Surely we can all agree that the benefits cash brings still outweigh its wrongful uses. When that's no longer the case, I imagine it will be phased out too.


Well, the whole idea is that crypto is digital cash, and that we should strive to keep the state where positives outweigh negatives. Attempt to, at least


I would interpret the data as agreeing, not denying, that digital currencies are used in crime. I don't believe that was being debated.

The open question is whether legitimate uses of digital currencies outweigh their illegitimate uses. The supplied data, which you're welcome to rebut, strongly suggests they do.

More to the point, the data suggests that GP's simple solution to stopping ransomware crime -- banning an entire class of media of exchange -- won't be effective. Not only has ransomware existed at least since the 1980s, well before the 2009 introduction of Bitcoin, but it isn't even dependent on digital currencies. (My personal and anecdotal intersection with it shows 0% usage; one in the 1990s demanded a mailed money order, and the other in 2023 provided a phone number to call and read gift-card scratch-off codes.)

Banning money doesn't stop crime.


> ransomware existed at least since the 1980s

Did you just go to the Wikipedia article for ransomware, see the 1989 AIDS Trojan mentioned, and now claim that ransomware has existed since the 1980s? The AIDS Trojan didn't really work and didn't provide any kind of anonymity for its creator. It was created by a mentally ill person. There were no instances of successful ransomware before Bitcoin. It really started with CryptoLocker. CryptoLocker gave victims the option to pay with either a cash card or bitcoin. Cash cards were converted to bitcoin, which was outsourced to third-party low-level criminals who took the risk of being caught. So bitcoin was an absolutely critical piece here. And that has been true for all successful ransomware schemes since. I stand behind my claim that ransomware wouldn't be possible without crypto. It's especially true of extortion attacks against corporations, like what the article talks about.


Apple II floppies in the 1980s that contained a bad version of DOS 3.3, circulated on BBSes. I was never personally affected, but I heard (at the time, not after the fact) that the point was to drive traffic to a certain pirate BBS that charged for membership. I don't know how such memberships were paid for, but I do remember that Sprint long-distance codes were being traded at the time, so it would make sense to use that method. I also saw a Mac virus that encrypted word-processing files. This was in either late 1980s or early 1990s (I wasn't sure in my earlier post, so I erred on the side of recency). That was the one that promised recovery by postal mail. I don't know anything about the "AIDS Trojan" that you mention.

It seems very important to you that you successfully draw a bright line between digital-currency-funded ransomware and the other ransomware that preceded it. You might strengthen your argument by taking a position closer to reality, which is that digital currencies have facilitated online transfers of value, and thus have become more popular. From that point of view, one would hope that digital currencies would come to dominate, including in the realm of ransomware. But again, that gets us back to the original topic: whether the beneficial uses of such currencies outweigh their negative uses.


Feel free to provide more recent and impartial data.


The onus is not on me here. I was only pointing out this is old data and the date is all I needed.


There is a newer report: https://www.ic3.gov/Media/PDF/AnnualReport/2023_IC3Report.pd.... Figures are based on the FBI's tally of $12.5 billion in 2023 online scams.

Ransomware has declined (0.674% in 2020, 0.4768% in 2023): $59.6 million. Digital currency usage has increased (5.7% in 2020, 30.4% in 2023): $3.8 billion.

Which means we're still in agreement.


Barely 4 years old is not particularly old.


Specifically, the money is just as gone if the instructions are "wire 22 million to this Russian bank account" instead of "send 22 million to this Bitcoin address".


The bitcoin option is far easier to implement for most of us (and there are more anonymous cryptos).


...did you just admit to being a ransomware author? My impression is that ransomware is run by organized crime with ties to Russia, and isn't the work of a small team, but, well, I don't know much about running a criminal ransomware organization. If you do, I'd love to hear more!


What frightens me is that of all places, this is the most common opinion of crypto on HN. You know, like the angry parents that in the 90s called for the Internet to be banned because it contains porn.

Some ideas are so inane they should not rise to the top of a forum for technologists and software engineers, yet somehow here we are.


There was always obvious utility to the Internet. None of the ideas behind why Bitcoin is useful are panning out, besides it being the best way for criminal gangs to pay for illegal activity internationally.

The fees are more expensive than banks and credit cards.

Its value is extremely volatile, so legitimate businesses immediately convert to cash if they accept it.

It's not anonymous like cash. Funds are actually easily traceable. So it is pretty much a guarantee that any scam involving Bitcoin crosses an international border with the blessing of the state.

Mining is a massive waste of energy at this point and if it continues will likely become a major contributing factor to climate change.

I would say Bitcoin was a nice idea, but currently it is a failure in every way except to the people who got in early and to scammers.

Maybe a future crypto could solve some or all of these problems, so I'm not in favor of banning the idea, but there's a legitimate argument to be made for banning Bitcoin. But the people who made a bunch of money from it will fight that extremely hard.


Agreed, those ideas should not rise to the top.

It’d help if more people asked themselves a question more often:

“Am I being a useful idiot right now?”

Or they could be usefully malicious, hard to say.


The argument that crypto bros are useful idiots to criminals, sanctioned states and rich early adopters is much more reasonable, no?


Dividing the world into crypto bros and "reasonable" people is not a good way to deal with the complexities of life.

One should try to avoid black-and-white thinking. Unless they are collecting likes on social media.


fyi most of HN would ban porn too if they could.


People these days do not understand the concept of personal freedom. What is good for them must be enforced upon everybody.

I think porn is a terrible drug, yet prohibition is even worse. Same for real drugs. Same for crypto. Where have the Californian techno-utopians disappeared to, that all we are left with are the left-leaning socialist posers paid by Big Tech but hating a whole technological niche?

I like Bitcoin because it is the realisation of the crypto-anarchist dream. A digital thorn in the side of the State. At least us anarchist posers still take personal freedom seriously.


"I don't agree with huffing glue but I'll defend your right to do what you want with your short time on this earth to my death." Voltaire [paraphrased]

It's almost like the "think of the children" kids grew up and we are at "think of the adults" now.



Do you really believe that banning cryptocurrency would be "easy"?


It would kill the ETFs and all the exchanges. It would make paying a ransom impossible. The value would plummet. It would continue to exist of course, but it would be functionally dead. Sounds easy, really.


Ransom was paid before and will be paid long after crypto. It's just means to an end.


Banning math worked so great in the 90's with the cryptomunitions legislation.


Googling "cryptomunitions" just directs to your comment...

What is this "cryptomunitions" you're talking about from the 90s...?


Basically any kind of encryption. It was considered a munition by the US government and fell under that category of export controls.


Yeah, for real. Isn’t the entire point of cryptocurrency (or at least a key principle) to be resistant to censorship / centralized attempts to “kill” it? I don’t doubt that there is some damage that could be done to crypto as a whole by governments taking steps to “ban” it, but I don’t think we can put the toothpaste back in the tube here.

I suppose the highest-leverage card that legislators could play to try to hurt cryptocurrency would be to put a stranglehold on the fiat currency on/off ramps (e.g. making businesses like Coinbase flat-out illegal), but even after doing that, peer-to-peer markets would survive (albeit likely after taking some damage to the user base).


If there was no easy way to exchange large amounts of fiat for crypto and vice versa (other than peer trading with a person), that would effectively make crypto like any other form of tradeable unofficial currency: gold bars or diamonds or Krugerrands. Insofar as crypto is "just" digital cash, it'd still work just fine, but it would make it trickier to move large sums around for criminal enterprises - much like how you'd need to jump through laundering hoops to translate your suitcase of undocumented diamonds back into fiat, if you didn't have a reasonable answer for how you came to have them.

It'd also put a dent in crypto as a perennial pump-and-dump investment scheme, which in the long run would probably be quite good for its ability to actually work for daily, legitimate use.


You ban the on ramps/off ramps


this way of thinking is outright evil. literally everything can be used for crime, but what do we do when we get to the point where we're all naked and living outside and we realise that we can't ban rocks?


The problem with crypto is that it's overwhelmingly used for crime and almost nothing else.


Nah. Unenforceable for one. Bad actors can literally just use a VPN.


How do you suggest we do this in practice? Decent countries already make it illegal to buy cryptocurrency for real money, but it's difficult to enforce when it's a mutually profitable transaction for somewhat desperate people (just as in many cases paying a ransom to a kidnapper is illegal, but that doesn't completely stop it happening).


> Decent countries already make it illegal to buy cryptocurrency for real money,

I don’t know if English is your first language or not. The use of the word “decent” here seems inappropriate and totally lacking justification or nuance.


I don't know if English is your first language or not. The use of a condescending tone here seems inappropriate and totally lacking justification or nuance.


If you can shrink the marketplace of US crypto holders with a simple ban from tens of millions of users down to a five-figure number, it becomes considerably easier to police.


You're right, but "if" is doing a lot of work. We all know how well banning cocaine has worked, after all.


...why is this the top comment? What happened to HN, did it get taken over by prudish parental figures like those in the 90s who called for the internet and computers to be banned?

People have killed other people with forks. Do with that info what you will, but do apply it to your analogy.

Also let's not pretend the receiving side is innocent; choosing to receive money in crypto is already a huge red flag.


hn is full of old people yelling at clouds


A better, more practical approach would be to make paying any sort of ransom or extortion a federal crime. Then no sane corporate manager will risk a felony conviction and prison by authorizing payment, regardless of whether it's done in cryptocurrency or some other store of value.

I understand that this could lead to business bankruptcies, job losses, and severe impacts on customers. That is acceptable collateral damage to prevent additional funds from reaching organized crime and terrorist groups.


You're being naive. Cybercrime and ransoms happened before cryptocurrency was a thing. Criminals do real life criminal things, and that goes especially for intelligence agencies, mobsters and ransomware agencies. If they couldn't sell it for crypto, this data would end up in the hands of such people. Sounds like someone trying to make an argument for shutting down the future of finance, which is far less criminal than the existing system of finance, on the basis of "cybercrimes occur"

No thanks.


"Ban cryptocurrency", lmao. Ok. Some of the outrageous things people say here that should know better.


Banning crypto won’t solve the problem. The existence of crypto forces software vendors to increase their security. It’s like trying to outlaw alcohol to fix the problem of drunk driving.


[flagged]


I think there's a point you could make here without the "caveman mode".

I agree that attacking healthcare infra is serious, and that human lives could suffer as a result.

But, it seems like people believe the hacking groups here are based in Russia[1][2]. "send[ing] SEAL teams to hunt them down" sounds like a quick ticket to WWIII.

I'd rather lean on organizations to secure their infra.

[1]: https://en.wikipedia.org/wiki/FIN7 [2]: https://en.wikipedia.org/wiki/DarkSide_(hacker_group)


Well, sending SEAL teams might be extreme, but I'm sure in Russia there are ways to let criminal elements know that they could receive a hefty payday if certain individuals are found dead. And the CIA is capable of determining these ways and using them.


The US is doing enough extrajudicial killings as it is.


Something about not caring about sick people getting hurt sets me off too. Same response for scammers targeting old people.


Also, based on this Krebs bit, sounds like attacking a medical organization wasn't just a fluke -- they'd declared open season:

> The ransomware group also declared it was formally removing any restrictions or discouragement against targeting hospitals and healthcare providers.


> Ransomware attacks and their perpetrators need to be officially classified as terrorists.

I understand where the sentiment is coming from, but don’t do that. There are a couple reasons why you shouldn’t do that.

First is that these kind of blunt categorizations end up becoming a powerful tool in exactly the wrong hands. If punishments are more severe because of some categorization, then the lawmakers and politicians in charge of defining these categories, legally, end up wielding that power against their enemies. It is simply too easy. Label one group terrorists, label another group child abusers, etc. It’s an old playbook and unfortunately it works all too well.

Second is that if you mete out capital punishment for some crime, then anybody who commits that crime would be more ready to murder to cover their tracks. Like, imagine if you gave the death penalty to horse thieves—well, if you do that, any horse thief with an ounce of sense will fight to the death to avoid getting caught. This is why only the most serious crimes can be capital crimes.


Incidentally, horse theft was a capital offense in many places in the Old West (19th century). I know that’s not your point.


That is exactly what I was talking about, but it wasn’t actually punishable by death in the Old West (that’s a myth).


Indeed. And there is probably also something to be said about Change Healthcare, their systems and their suppliers. Are they running Windows XP?


Doesn't the definition of terrorism imply political motives? I don't think it should be applied to blackmail/ransomware.

Totally agree with being very aggressive towards them, though. No need to label them terrorists to swat them. However, I doubt that they operate from US soil.


I agree. Actually everyone I don't like should be classified as a terrorist and denied any sort of due process or civil rights. It's fine as long as we only do it to really really bad people who make us very very mad.


First time I hear of something like this happening... quite surprising.




