I really don't understand what they are talking about. It's as if someone showed me a photo of my child and said, "pay me or I'll burn this photograph".
What am I missing?
You could never trust that the attacker actually deleted their copy of the repo, but then, the whole cryptolocking business model falls down if the attacker isn't at least moderately honest, so I can see why people would respond to that threat.
Nitpick: it only requires most attackers to be somewhat honest. Having a few unscrupulous ones may make life harder for the “honest” ones, but the unscrupulous ones themselves can be better off, e.g. by demanding more money after receiving payment.
In the same way that it's worse to shoot someone with an actual gun than to threaten to shoot them with a Nerf gun.
The negative network effects on other scammers are also nice.
An "honest cryptolocker" helps support more cryptolocker use, as people trust that if they pay the criminal they'll get their stuff back.
If dishonest ones were the norm, then maybe cryptolocking would cannibalize itself, as nobody would pay since they know it's useless. So in a sense the dishonest one, while having less ethical intent, has more ethical results. But only at scale. Hmmm.
This is a classic prisoners' dilemma: if no one paid, everyone would be better off, but it is very hard to be that one guy or company who loses all their files for the "greater good".
Society works on mutual cooperation, and hackers seem to understand that better than the "technically cooperating in this context" reading that the legal field would employ.
(This is probably not true, but society would benefit from "cryptolockers are usually fake" being in the zeitgeist)
That seems unwise. I don't have many local repos on this 128GB MacBook Air, but as well as BitBucket, all the projects I have ever worked on are on several other machines and/or hard drives I have locally, and also zipped up in S3 buckets and on tarsnap.
Like they say, there are two kinds of people: people who've lost important data because they didn't back it up properly, and people who haven't yet.
Part of your backup strategy depends on external services. Not necessarily in your case, but people who only have their backups externally on a service could be affected.
> all the projects I have ever worked on are on other several machines and/or hard drives
And depending on your strategy, since they're so distributed it could mean they're outdated repos. If not, and they pull automatically, they could be affected.
Local backups also have issues. The disk might die, the data might be corrupted or any other myriad of things could happen.
> People who've lost important data because they didn't back it up properly, and people who haven't yet.
Is there such a thing as a perfect backup strategy?
At work, there's "Can meet contractually agreed RPO and RTO with 99.99% certainty". Automate the standard setup, and sleep well at night. Perfect.
At home, there's "I've done enough that I think the next improvement is an unfeasibly large amount of extra time&money for an unreasonably small improvement".
For myself at home, I've settled on Apple's Time Machine backing up my Macs (and their phone/iPad iTunes backups) to a RAID 10 set; that RAID 10 set rsynced to another one at the opposite end of the house; and a weekly backup of that stored on a single drive that only powers up for 6 hours every Sunday night, then powers back down again. So if my whole network gets breached and cryptolockered (for example), I'll still have at most 7-day-old data at home. I also push that weekly backup out to S3 and tarsnap for off-site, in-case-my-house-burns-down (or I've-set-it-all-on-fire-and-moved-to-Belize) scenarios...
I've been running most of that for ~8 years now. I've called it "done"; while not "perfect", it's certainly good enough against "not-Mossad threat models". If Mossad or the NSA want to delete my backups, so be it - I'll go be a carpenter or a gardener or something.
You'd be surprised how often you'll be rolling up to your competition to compare the virus files you pulled from each of your networks and reverse engineering on VMs together... then next week you have to pretend you hate them until something else goes wrong again.
Not saying they shouldn't have issued their analysis, of course they should have, it mostly looks on target. But...it will all happen again.
2. Never store your password in .git/config. Why are you doing that? It shouldn't be stored there.
As in https://example.com/mlindner/project1/821372asd1786d21das or something?
If you use this approach to manage access to a repository, then that URL gets stored in .git/config, so there's no need to store a password separately.
(Then again, maybe I didn't understand the explanation correctly?)
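For illustration, here is roughly how such a secret URL ends up verbatim in .git/config (hypothetical host and remote settings, reusing the example URL from above):

```ini
# .git/config (hypothetical example)
[remote "origin"]
	# Whatever secret is in the URL - a token, a password, or a secret
	# path segment - is written here in plain text when the remote is set.
	url = https://example.com/mlindner/project1/821372asd1786d21das
	fetch = +refs/heads/*:refs/remotes/origin/*
```

Anyone who can read the working copy can read that secret, which is why leaking .git/config is equivalent to leaking the credential itself.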
Criminals can also trade BTC for physical cash.
They could also, by some means (for example, permissionless decentralized exchanges), convert it to a cryptocurrency with private-transaction properties such as ZCash, Monero, Beam or GRIN, and then back again.
The "laundering services" you refer to (generally called "mixers") are still around but most of them have been shown reversible with high degrees of confidence. CoinJoin is the state of the art here, with the most well-known implementations being JoinMarket and Wasabi Wallet.
But in general law enforcement and investigators are definitely wising up to cryptocurrencies and to be fully untraceable one has to go through a lot of hoops and not make a single mistake in the process. Even the above mentioned approaches can leak information that can be used to tie an individual to the transactions if not executed properly.
Most likely they will use the same tried-and-true approach they would have used for stolen fiat funds: identity theft. I personally know people who have been drawn into a criminal investigation for money laundering because they had initiated a transaction selling BTC on LocalBitcoins via bank transfer (unwise unless you know and trust the other party). It turned out that already-stolen BTC had been converted into fiat on a compromised bank account, which was then supposed to be converted into "clean" BTC again. Fortunately the investigation was already underway when the transaction happened; my friend's bank account was blocked as a result when the transfer was initiated, and the whole thing was sorted out in the end.
As for spending, there used to be pre-loaded credit cards that you could fund with bitcoin.
The fact that one time passwords expire and change is what makes them a different factor than a static password.
If you're getting your 2FA code by SMS message or the like, this can be true.
If you're using TOTP (e.g. Google Authenticator), the secret is just as static as your other passwords. The TOTP secret never expires nor changes; what changes is the code you're supposed to send over the wire.
(and you are not likely to leak it anyway -- with something that changes that often, you are not going to have an incentive to write it to files)
This makes it identical to a password from a theoretical perspective. There's really no difference between a TOTP secret that you keep in a TOTP app and haven't memorized, and a password you keep in your password manager and also haven't memorized. Both are "something you know", and nothing else.
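To make the point concrete, here is a minimal sketch of the RFC 6238 TOTP computation (simplified: SHA-1, 6 digits, 30-second steps). The stored secret is a fixed byte string; only the time-derived code rotates.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive the one-time code from a *static* secret plus the current time."""
    counter = struct.pack(">Q", unix_time // step)       # moving factor: 30s windows
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The secret never changes; only the displayed code does.
secret = b"12345678901234567890"        # RFC 6238 test-vector secret
print(totp(secret, 59))                 # -> 287082
print(totp(secret, 1111111109))         # -> 081804
```

The two sample outputs match the published RFC 6238 test vectors (truncated to 6 digits), which is a handy sanity check if you adapt this sketch.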
You're correct that leaking a temporary code from a single login attempt doesn't compromise the TOTP secret. That is an artifact of the login process, not of whether the mechanism is labeled "2FA" or "password". You can do the same thing while calling the secret a password: https://en.wikipedia.org/wiki/Secure_Remote_Password_protoco...
Ultimately, everything is a "permanent, unchangeable secret", including private keys and biometric data. Where the data is stored, and how it is accessed, makes all the difference.
I could not find the original definition of "something you have", but modern standards like PCI actually give OTP auth as an example of "something you have" (p. 4 of )
(I am not looking at the degenerate case of running TOTP app on the same device / same security domain -- it does not describe most cases, and there are some fairly straightforward technical measures to defeat this)
But none of these things are true. For example, my most recent job involved sharing a 2FA-protected online account. We all had the code.
I think the analogy to physical house keys is very helpful. What did your work do?
Did you show the enrollment QR code, and multiple people scanned it? That's like duplicating a house key.
Did you put the key into a password manager? That's like a combination lockbox that releases the house key if you enter the right combination.
People do all sorts of unusual things, this does not change the properties of intended usage.
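On the QR-code case specifically: the enrollment QR code is nothing more than an otpauth:// URI carrying the static secret in base32, so everyone who scans it walks away with a full copy of the key. A hypothetical example (made-up account and secret):

```text
otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example&digits=6&period=30
```

That `secret=` parameter is the whole credential; sharing the QR code is sharing the key.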
Well, no. Everyone who uses TOTP, without exception, has their secret stored in a password manager. That's what the TOTP app or device is.
The password manager returns passwords directly. They can be viewed, memorized, passed to another person, copied to another device, or checked into git.
With TOTP, there is a private key inside, but it is not accessible to the user. You cannot view it or memorize it, nor can you pass it to another person or check it into git. It is purely an implementation detail which is not exposed in any way.
Disclaimer: this is the case with classical TOTP devices, like an RSA SecurID hardware token, or an un-rooted Android phone running Google Authenticator. I have those, and everyone I know has them as well.
There are exceptions, like people using LastPass 2FA or people who store the TOTP secret on their PC. That is not the intended usage, and it does not matter for most users.
Some 2FA apps also allow you to back up your codes to a cloud service.
For example, you can put your spare house key under the doormat. This effectively makes the lock on your house door require "something you know" (you need to know where the key is hidden).
However, that does not mean we can say that all keys are "something you know". The fact that many people have decided to compromise their security does not change the intended use of locks and keys.
I don't understand: when I clone a repo, I get a copy of all the branches/tags, the commits they point to, and the trees/blobs from those commits. If the repo is wiped, I get a single master branch with a single commit, a single tree and a single blob - and no reflog, because the reflog is local to a repo, and I (as a fresh cloner) haven't updated any refs.
Perhaps they are thinking about a mirror clone? That still won't include the reflog, but you can at least find dangling commits and guess which one was master.