Hacker News
CryptoLocker's crimewave: A trail of millions in laundered Bitcoin (zdnet.com)
77 points by hseldon15 on Dec 22, 2013 | hide | past | favorite | 50 comments



I'm actually pretty happy CryptoLocker's around. No, wait, don't hang me yet, put down your pitchforks, let me explain.

Let's face it, computer security has been pretty bad through pretty much the entire personal computing era. I don't need to point any fingers; the guilty know who they are.

Now unlike some others, Apple and Linux do try to do a few things about that, in different ways, with varying degrees of success. But it's better, by far, than the teeming mess on that other platform we won't be mentioning.

I believe that if we want to keep using IT in the future, this stuff has got to become pretty much bulletproof and foolproof. And without a credible threat it won't get there. One of the things that held back the development of a credible threat was the limited number of ways security holes could be monetized. "Fortunately," that's no longer a problem.

Now I think we have a credible threat. Will we now, please, with sugar on top, get computer security right? Isn't it, like, kinda time?


So getting fucked by criminals now is good, because it prevents us from getting fucked by criminals in the future? I fail to see how either of those options is better than the other. Reminds me of the database crackers who claimed that publishing a bunch of innocent people's passwords was actually helping those people out by letting them know their passwords were vulnerable.

Look, if I think I'm safe from being punched in the face, and someone says "we'll see about that!" and punches me in the face, they haven't done me a favor.


Yes, because more will be at stake in the future.

If we didn't have immune systems the species would have long been extinct in the primordial soup. Strong systems are built from living in a hostile and competitive environment, not from living in a utopia like the first academic computer systems lived in. We have to go through these growing pains at some point.


Yes and no. Well, mainly no. Crime can leech so much, do so much damage, that it actually holds back development (see pretty much any dictatorship, and many mafias).

It's pretty likely that the major reaction to CryptoLocker will not be "darn it, let's break out a can of OpenBSD on our systems." It will be: let's buy more Norton, let's disconnect from the internet except on Wednesdays, let's print out our important emails.

Anyway, I still really, really struggle with how anyone can get stolen/hot money off the blockchain and into their hands cleanly. There is a limited supply of fools who will accept $10M from a Nigerian in return for laundering it.


While I would be glad if people would stop using that unnamed OS, what I meant isn't that everybody should just hop to some *nix. What I meant is that pretty much every OS out there does security completely wrong, for no discernible reason.


Yes, let's. In the meantime: if your data is not in at least two places, it doesn't exist. I'm a fan of three.


That's one of the real headaches of Cryptolocker, apparently. If you have automatic backups set up, the encrypted files will most likely be backed up there and overwrite the clean ones. To really be safe, you need either secure versioned backups - default Windows and MacOS backups may not work, since the drive is connected and fully accessible to the computer - or to do backups manually to media not normally connected to the computer, and notice that something has gone wrong sometime between when the infection starts and when you connect the drive. Did I mention yuck?
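The versioned-backup idea above can be sketched roughly like this: each backup run writes into a fresh timestamped snapshot directory instead of overwriting the last copy, so a later (possibly already-encrypted) backup never destroys an earlier clean one. This is an illustrative sketch, not any particular backup tool's behavior; real tools would deduplicate rather than make full copies.

```python
# Minimal versioned-backup sketch: every run produces a new snapshot
# directory named by timestamp, so older clean copies survive even if
# the source files have since been encrypted by malware.
import shutil
import time
from pathlib import Path

def snapshot_backup(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a new timestamped subdirectory of `backup_root`."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / stamp
    shutil.copytree(source, dest)  # full copy; real tools deduplicate/hard-link
    return dest
```

Restoring from the snapshot made before the infection date then recovers the clean files.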


>do backups manually to media not normally connected to the computer

Yup. This is really the only 100% way to have control over your stuff: local, physically separated storage.

> notice that something has gone wrong sometime between when the infection starts and when you connect the drive.

This assumes a chronological/incremental scheme for complete backups, which in practice is a lazy default and bears no relation to the value of your data. Your only enemy should be the hardware. Dealing with even the ugliest virus/worm/trojan turns into a mere nuisance when you have that kind of headroom.


Eh, cryptolocker will get those duplicates too.


He meant physical locations. Of course, keeping your data on external backups has been good practice since forever, but many lack the discipline.


A real backup is both offsite and offline. If your backup doesn't have both attributes, then it isn't a backup.


Computer science solved this problem (the "Confused Deputy" [1]) decades ago [2] [3] with capability-based security [4].

I don't have the time to find the exact reference I'm thinking of right now, but consider taking a look at one of the papers that gives background on CapDesk [5], which addresses, among other things:

> "All Windows and Unix operating systems (referred to as “Winix” hereafter) utterly disregard the concept of POLA [Principle of Least Authority]. When you launch any application—be it a $5000 version of AutoCAD fresh from the box or the Elf Bowling game downloaded from an unknown site on the Web—that application is immediately and automatically endowed with all the authority you yourself hold. Such applications can plant Trojans as part of your startup profile, read all your email, transmit themselves to everyone in your address book using your name, and can connect via TCP/IP to their remote masters for further instruction. This is, candidly, madness." [6]

Since then, things have changed slightly: UAC under Windows, for instance, means applications now only have the ability to steal your highly valuable personal documents and hold them for ransom, but hey, at least these sneaky trojans don't have admin rights! Which is of course the exact scenario that CryptoLocker happily exploits.

There's really no reason a piece of junk attached to your email should exercise any more authority than you explicitly grant it. (And no, that doesn't require clicking a bunch of "Allow" buttons -- intelligent UI design can make much of this completely transparent, provided the host platform is capability-based.)
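The object-capability idea from the quoted paper can be illustrated with a toy sketch (this is illustrative Python, not a real OS API): instead of ambient authority over the whole filesystem, an untrusted program receives only an unforgeable handle to the one resource the user granted, and it simply has no reference through which to reach anything else.

```python
# Toy object-capability sketch: authority flows only through explicitly
# passed object references, so the untrusted code's reachable authority
# is exactly the set of capabilities it was handed.
class ReadCap:
    """Capability granting read access to exactly one file's contents."""
    def __init__(self, contents: str):
        self._contents = contents

    def read(self) -> str:
        return self._contents

def untrusted_app(cap: ReadCap) -> str:
    # The app can use the capability it holds, but there is no global
    # "open any file" operation available to it in this model.
    return cap.read().upper()
```

The "Elf Bowling" scenario from the quote is impossible here: the game was never handed capabilities to your address book or the network, so it cannot reach them, with no "Allow" dialogs required.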

It's not that companies like Microsoft aren't well aware of capability-based security [7]; it just seems that the appetite isn't there to really solve users' problems (breaking stuff like the Start Menu appears to be more important), despite the valiant efforts of some really smart people [8]. To be fair, shifting to a capability-based system would be a significant engineering effort, but definitely well within the realms of Microsoft's or Apple's capabilities.

(Interestingly, some of the ideas on erights.org were influenced by Nick Szabo, who created "Bit gold" and who a few people think might be Nakamoto himself [though he denies it] [9])

[1a] http://www.cis.upenn.edu/~KeyKOS/ConfusedDeputy.html

[1b] http://erights.org/elib/capability/deputy.html

[2] http://www.cis.upenn.edu/~KeyKOS/Gnosis/Gnosis.html

[3] http://www.cis.upenn.edu/~KeyKOS/Key370/Key370.html

[4a] http://www.skyhunter.com/marcs/capabilityIntro/index.html

[4b] http://erights.org/elib/capability/3parts.html

[5] http://www.combex.com/papers/index.html

[6] http://www.combex.com/tech/edesk.html

[7] http://research.microsoft.com/en-us/projects/singularity/

[8] http://en.wikipedia.org/wiki/Capability-based_security

[9a] http://erights.org/related.html [9b] http://unenumerated.blogspot.com/2011/05/bitcoin-what-took-y...


>(And no that doesn't require clicking a bunch of buttons to "Allow" access -- intelligent UI design can make much of this completely transparent, provided the host platform is capability-based.)

I think that's actually a bigger problem than you make it out to be. Probably because people involved in this particular field tend to be dismissive of "lusers" and "marketing", it seems to me that almost every security tool out there has terrible UI. Just think of GPG, something we know we should be using since, uh, the 90s? and still nobody really does, because even just managing keys is a total pain in the neck. UAC? "Oh, I'll just click Yes all the time". Mobile apps asking for extremely-granular permission to marry your firstborn child? "Yeah, I'm sure they don't really need it, whatever".

Building secure systems is hard, but building secure systems that users will use in a secure way without having to think about it is much harder, IMHO.


This sounds a lot like how Android does security: each app requests a specific list of permissions at install time, and you can either accept or reject it. That, plus apps being signed to prevent tampering and, by default, only installable from the Play Store.

I think Android security has proven to be pretty solid. There have been a few spyware apps, mostly killed quickly, but nothing that was able to spread on its own like the big desktop viruses.

One thing that I'd like to see added that Android doesn't seem to have right now is a more limited Internet permission. Right now, apps have to either request full permission to send and receive anything on the internet, or no connection at all. Why not a permission to communicate only with specific domain names? Like Evernote can request permission to only communicate with addresses resolved from evernote.com, instead of anything on the internet. It might also have the effect of pushing apps to use the Android ad API instead of their own.
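The proposed domain-scoped permission could work roughly like the following sketch: before the app opens a connection, the platform checks the target address against the set of addresses currently resolved from the whitelisted domain. Everything here is hypothetical illustration; Android has no such API, and the domain/function names are made up.

```python
# Hypothetical domain-scoped network permission check: allow a
# connection only if the target IP is among the addresses that the
# whitelisted domain currently resolves to.
import socket

def connection_allowed(target_ip: str, allowed_domain: str) -> bool:
    try:
        infos = socket.getaddrinfo(allowed_domain, None)
    except socket.gaierror:
        # Domain doesn't resolve: deny by default.
        return False
    allowed_ips = {info[4][0] for info in infos}
    return target_ip in allowed_ips
```

Note this inherits DNS's weaknesses (short TTLs, rebinding), which is part of why such a permission is harder than it looks.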


The only problem is that I don't know of anyone who examines that list of permissions and will reject an app if those permissions seem too broad. How is a regular user to know if it is bad for an app to access your phone book, or your internet connection? Depending on the app, this may be necessary for it to function. Or for it to serve ads.

What is needed is for trusted third parties to verify whether a given list of permissions is necessary, and to give (via a security software add-on) a popup with an assessment of how safe it is to install that app -- green, yellow, or red, for example. That is about the only thing that most end users (and even busy geeks) can really comprehend.


There's at least some truth to that, but that's the point that I was trying to get to. GP claims that "Computer science solved this problem decades ago" in reference to capability-based security. Android is the only OS that I'm aware of implementing anything like that on a mass scale, and while security there has proven to be at least decent, I'd hardly call it a solved problem.

Capability-based security is ultimately just another buzzword, no more of a perfect solution than any of the others. I don't think you can call any type of security problem solved by your pet technology until it is deployed at scale in the real world and proven to work. Until you have had tens of millions of users and hundreds of thousands of developers bashing away at it for years, you just don't know if you've really solved the problem.

Android's capability-based security is good, but IMHO the more important security innovation is secure app-specific data stores. You might request more permissions than you should, and some clueless users might install it anyways, but you still can't ever get access to the data or credentials stored by the banking app, the social media app, etc.


Ah, so instead of asking the user for permission to access something, an app also would need another app's permission to access that data. Of course, this has already been pioneered by companies implementing DRM, but still, I wonder how this can be implemented in a general sense.


Er, what I meant is that Android already does this. All apps' data stores are private to that app by default. The only way for any other app to access them is for the owning app to specifically set up an interface and permissions to access it, and the accessing app to request those permissions at install time.

I don't have much experience with iOS, but I expect it does something similar.
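The Android behavior described above, private per-app stores with explicit cross-app grants, can be sketched in miniature like this (names and structure are illustrative, not Android's actual API):

```python
# Minimal sketch of per-app private data stores: only the owning app
# can touch its store unless it has explicitly granted another app id.
class DataStore:
    def __init__(self, owner: str):
        self.owner = owner
        self._data = {}
        self._granted = set()  # app ids the owner has granted access to

    def grant(self, app: str) -> None:
        self._granted.add(app)

    def put(self, app: str, key: str, value: str) -> None:
        self._check(app)
        self._data[key] = value

    def get(self, app: str, key: str) -> str:
        self._check(app)
        return self._data[key]

    def _check(self, app: str) -> None:
        if app != self.owner and app not in self._granted:
            raise PermissionError(f"{app} may not access {self.owner}'s store")
```

Even a malicious app that tricked the user into installing it gets a PermissionError when it probes another app's store, which is the point being made about the banking app's credentials.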


Symbian and iOS had/have data caging; apps can't see each others data.


I'm not consistent with this, but I do reject games from things like "Developer file system access" - WTF does a game need with that?

The challenge is more widely used apps that ask for permissions that make no sense to me as a developer. App developers should be able to provide a justification for each permission.

Ideally I'd like to be able to reject specific permissions, and the app should just tell me what functions are disabled... or I'll take my own risks.

Basically, IMHO, Android permissions are almost as broken as UAC in Windows, with all but a very small percentage of people actually reading and bothering to understand application permission requests.


Windows 8 does an even better job IMO. When you start a program and it requests permissions, then Windows allows you to approve/deny. So launching a bad flashlight app will pop up a contacts request.

Starting a Disney game has Windows ask if you'd like to hand over your personal identification (age, gender, location, some other stuff) or not.

In my limited experience, it seems apps work if you deny the permission (I'd hope MS enforces that via the Store approval process). And having an in-your-face dialog that you can dismiss without penalty is far better than "click yes to continue" installers like Android.


> Like Evernote can request permission to only communicate with addresses resolved from evernote.com, instead of anything on the internet.

That would seem to be a bit meaningless, since the same people that control Evernote also control evernote.com, so if they want Evernote to connect to some other place they can easily add a name for that place under evernote.com.


Really? Windows Vista tried this, and got ridiculed for UAC elevation prompts and dialed it back. But even so, downloaded apps get flagged ("blocked") and run in a lesser integrity mode requiring an extra dialog.

The problem isn't technological, it's how to pitch programming-level permissions to end-users without overwhelming them. And then hoping to get them to make informed choices.

I've seen Linux users blindly take patches from a mailing list, apply them, and then recompile a piece of software -- considerably more effort to run arbitrary code than the average user will put up with. I don't believe there's any real solution to get users to make intelligent decisions about execution policy.

So far, the only thing that appears to work is to simply limit what users can execute.


Linux has SELinux, though this is hideous to configure (last I checked) and most people just switch it off.


I'm surprised that the NSA hasn't looked into it yet (or have they?...), with what they've been up to over the past decade.


I am more worried about cryptolocker than the Silk Road or its derivatives.

These people must be getting the money out somehow via the visible blockchain.


The article talks about mixing services. Destroy those and what little anonymity that can be had in virtual currency ceases. As for getting money out, why bother, if you take the (perhaps not that) long view that BTC itself will be generally useful.


Wouldn't it be the FBI's job?


I had a similar reaction to the Target credit card number situation. Not happy about it, but the thought did cross my mind "maybe this will finally make banks/card companies/retailers take security more seriously."

Not really holding out much hope though.


Why would they? Most of the damage (time wasted by their customers dealing with the fallout) is an externality.


There are situations in which "there is no bad publicity"; a massive security breach isn't one of them. Even if customers are protected by their credit card provider, it's still a massive inconvenience at best to have your card data stolen. I guarantee Target takes a non-trivial sales hit from the incident, particularly online.


In a (more) perfect world, the NSA would put considerable resources towards stopping this activity as opposed to spying on their own citizens and allies. A guy can dream...


In a more perfect world, the people would understand that for the NSA to be able to find/track/stop this type of activity, they would have to have access to a large amount of global traffic, mine it, throw away what they don't need, keep the rest.

You can't have your cake and eat it too.


No, they wouldn't need that; there is plenty of evidence to investigate here without the need to buy half of the world's production of hard drives in perpetuity. What's needed is law enforcement's cooperation to bring the perpetrators to justice. That might be a little harder to do, actually, but as the scale of this sort of thing increases, even the most crooked of eastern bloc countries are going to start getting a few offers they can't refuse.


If you imagine the NSA minus the espionage, isn't that basically the FBI computer crimes division?


But with a real budget and people who are actually trained for the job.


That is not the only approach. False dichotomy.


Except they're not doing that. Quite the opposite, in fact: they've invented a lot of the malware themselves, maybe this too. If I can't have my cake, at least I can choose not to fucking pay for someone else's (taxes). Smartass.


Hell, that's nothing. Here's something truly criminal: http://www.rollingstone.com/politics/blogs/taibblog/outrageo...


There is no evidence anyone at HSBC was willfully assisting or complicit in money laundering. The fines are for weak controls that failed to detect money laundering.


Oh please! The fines are for disabling the controls that would have automatically flagged money laundering transactions. Source: the link OP posted.

Still, it is a mere slap on the wrist that is guaranteed to have no permanent repercussions to deter such behavior. It's a clusterfuck and I don't see a solution emerging without both the war on drugs ending and multi-year incarceration becoming mandatory for financial crimes.


Oh please! Matt Taibbi is in no way a credible source.

> Disabling

Or alternatively, just not enabling tougher controls. There's no evidence, or indeed suggestion by prosecutors, that controls were deliberately set low to deliberately facilitate money laundering.


Accepting regular daily cash deposits of hundreds of thousands of dollars at HSBC branches in Mexico without question shows assistance and complicity.

The only people making cash deposits of this size, in Mexico especially, are drug cartels. The HSBC employees at the branch knew what this money was.


I remember reading about that earlier this year. Not surprised by the outcome.


This is completely and totally off topic.


Why does Windows still let arbitrary applications downloaded via email run? I'm sure the vast majority of cryptolocker victims have no desire to share binary executables via email.

I wish there was an "only run Microsoft approved applications" option I could enable for my parents. Kind of like OS X's Gatekeeper.


Is the file with instructions on how to decrypt your files named "REAMDE"?


That novel has too much Russian mafia inside.


Could the bitcoin network reverse spends into the attacker's address(es) if everyone came together and agreed to do so? Or at least prevent spending out of it?


If every miner joined in, preventing spending would be possible. Reversing the spending is near impossible (it would take cooperation not just from the miners, but also from every address involved in a transaction since the offending transaction).
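The "prevent spending" half could, in principle, look like the following toy sketch: cooperating miners simply refuse to include any transaction whose inputs spend from a blacklisted address when assembling a block. Transaction structure here is heavily simplified for illustration; this is not real Bitcoin node code.

```python
# Toy miner-side blacklist: drop mempool transactions that spend from
# any blacklisted address when selecting transactions for the next block.
BLACKLIST = {"1AttackerAddr"}

def select_for_block(mempool):
    """Return only transactions whose inputs avoid blacklisted addresses."""
    return [
        tx for tx in mempool
        if not any(addr in BLACKLIST for addr in tx["inputs"])
    ]
```

Of course, this only works while essentially all hash power cooperates; any miner ignoring the blacklist can still mine the transaction into a block.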



