I can't blame them for this. A surprising number of apps ask for root (including Adobe installers and Chrome). As far as I know, it's to make updates more reliable when an admin installs a program for a day-to-day user who can't write to /Applications and /Library.
We're long overdue for better sandboxing on desktop (outside of app stores).
I only use root for administration tasks: filesystem stuff, hardware, server config. All the goodies are in my homedir. Exfiltration is as easy as that. Running bad binaries is as easy as running them under my username.
In the end, there are no protections on what my username can do to files owned by my user. And that's all a nasty tool needs to:
1. generate a private/public keypair using gpg
2. email the private key elsewhere and delete the local copy
3. encrypt everything it can grab in ~
4. pop up a nasty message demanding money
The only things I know of that can thwart attacks like this are Qubes, or a well-set-up SELinux... but SELinux then impedes usage (down the rabbit hole we go).
Edit: Honestly, I'm waiting for a command-and-control that lives exclusively in Tor, emails keys only through a Tor gateway, and also has the victim serve as a slave node to control and use. I could certainly see an "If you agree to keep this application on here, we will give you your files back over the course of X duration".
There are plenty more nefarious ways this can all be used to cause more damage and "reward" the user with their files back, by serving as a slave node for more infection. IIRC, there was one of these malware tools that granted access to your files if you screwed over your friends and they paid.
As I see it, the problem on the Mac boils down to:
1. Sandboxing your app is often a less-than-fun experience for the developer, so few bother with it unless they're forced to (because they want to sell in the App Store).
2. Apple doesn't put much effort into non-App-Store distribution, so there's no automatic checking or verification that sandboxing is enabled for a freshly-downloaded app. You have to put in some non-trivial effort to see if an app is sandboxed, and essentially nobody does.
I think these two feed on each other, too. Developers don't sandbox, so there's little point in checking. Users don't check, so there's little point in sandboxing. If Apple made the tooling better and we could convince users to check and developers to sandbox whenever practical, it would go a long way toward improving this.
I think most apps don't sandbox not because it's especially hard, but just because it never occurs to the developers.
If these issues were fixed I believe that sandboxing would quickly become the norm. Many of us want to use the sandbox but don't want to waste too much effort fighting it.
Worst case, you can see exactly what is being blocked in Console and then add word-for-word exceptions via the com.apple.security.temporary-exception.sbpl entitlement. You can also switch to an allow by default model by using sandbox_init manually.
Even if the sandbox doesn't work for your entire app, you can use XPC to isolate more privileged components in either direction (i.e. your service can be more or less privileged than your main app). What specific abilities are not provided that you think would help?
Using sandbox_init manually sounds like it should be possible in theory, but it is way too complicated in practice. There is barely any documentation on it, and you'd need to be familiar with macOS at a very low level to effectively use it -- which is highly unlikely for application software developers.
(allow network-outbound (remote unix-socket (path-literal "/private/var/run/syslog")))
For examples where (at least to my knowledge) the macOS sandbox isn't flexible enough, consider trying to write a reasonably capable file manager or terminal that works within the sandbox's bounds. Or even a simple music player capable of opening playlist files which could point to music files sitting anywhere – not just the user's home directory or the boot volume but anywhere on the local file system.
(allow file-read* (regex #"\.mp3"))
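For what it's worth, you can prototype profiles like these from the command line with sandbox-exec before ever touching sandbox_init; it's the same barely documented SBPL, but the iteration loop is fast. A minimal sketch (profile and paths invented for illustration):

cat > deny-tmp-writes.sb <<'EOF'
(version 1)
(allow default)
; start permissive, then carve out what you want to forbid
(deny file-write* (subpath "/private/tmp"))
EOF
sandbox-exec -f deny-tmp-writes.sb touch /tmp/blocked
# should fail with "Operation not permitted" (/tmp resolves to /private/tmp)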
True story. My files were fine (although my heart jumped a bit)
Even most applications they use that did come with the system, such as web browsers, have a quite limited set of files they should be writing to. Browsers, for example, will need to write in the user's downloads directory, anywhere the user explicitly asks to save something, in their cache directory, in their settings file, and in a temporary files directory.
It's also similar for most third party applications they will use, such as word processors and spreadsheets.
It seems it should be possible to design a system that takes advantage of this to make it hard for ransomware and other malware that relies on overwriting your files, yet without being intrusive or impeding usage.
Nowadays there's Sandstorm with a similar model for networked apps. https://sandstorm.io/how-it-works
Or the easier method.
rdiff-backup + cron job. Or Duplicity. Or Tarsnap. Or CrashPlan. Or...
That is to say, backups with multiple stored versions, on another system where the (infected) client does not have direct write access. Ransomware can infect my home directory if it wants to. A fire can burn down my house. Zaphod Beeblebrox can cause my hard drive to experience a spontaneous existence failure. But I've got off-site automatic backups, so I'll never pay the ransom. (I will pay more over time for the off-site storage, but given that I'd pay for that anyway to guard against natural disasters, disk failure, etc., it's not really an added cost.)
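To make that concrete: a pull-based setup means the backup box logs into the client, not the other way around, so ransomware on the client can't touch the archive. A sketch with invented hostnames and paths:

# in the backup server's crontab: pull the workstation's home nightly
0 3 * * * rdiff-backup me@workstation.example.com::/home/me /backups/me
# restore a file as it existed ten days ago
rdiff-backup -r 10D /backups/me/notes.txt /tmp/notes-10-days-ago.txt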
That's irrelevant, though, if they can also get all your credentials, the stuff in the Keychain, et al. -- as they apparently did with the HandBrake malware.
- Backup files are encrypted with gpg.
- Pull from local backup server with a backup account that only has read-only access to the directories you need to backup.
- Push to the remote backup server with versioning (I'm using rclone with S3, sketched below; if you need to back up large amounts this could potentially get too expensive).
You can restrict the s3 credentials so that the user pushing from your server isn't able to permanently delete any files.
There are plenty of other options out there; the key takeaway is a staging server for offsite backups and the principle of least privilege.
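For the push step, a rough sketch with made-up bucket and remote names, assuming an rclone remote called "s3" is already configured:

# one-time: turn on bucket versioning, so overwrites and deletes keep old versions
aws s3api put-bucket-versioning --bucket my-backup-bucket \
    --versioning-configuration Status=Enabled
# nightly: push the gpg-encrypted staging directory
rclone sync /srv/backup-staging s3:my-backup-bucket/backups

Give the pushing account credentials without s3:DeleteObjectVersion and it can't permanently destroy anything, only add new versions.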
Tarsnap can be configured so that a special key is needed to delete backups.
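If I remember the tooling right, it's tarsnap-keymgmt; file names here are illustrative:

# derive a key that can write and read archives but not delete them;
# keep the full master key offline
tarsnap-keymgmt --outkeyfile /root/tarsnap-rw.key -r -w /root/tarsnap.key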
I used to be in the boat where my first instinct was to disable SELinux, but I must say it wasn't that hard.
You might consider writing a custom SELinux policy such that only the git executable can access the .git directory. This would be a much more useful mitigation against this attack, but it would also raise the difficulty barrier significantly.
This was on a server, so there was no popup - you also need to know where to look (/var/log/audit/audit.log) to actually work out what is causing the 'Bad Gateway' error in nginx.
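For anyone hitting the same wall: once you know about audit.log, the usual workflow is something like this (module name invented):

# show recent AVC denials in readable form
ausearch -m avc -ts recent
# turn the denials into a candidate policy module and load it
ausearch -m avc -ts recent | audit2allow -M mynginx
semodule -i mynginx.pp

Do review the generated mynginx.te before loading it, though; audit2allow happily emits rules that allow far more than you intended.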
At work, I have written an SELinux module for our Java application servers. It properly reduces the permissions from the system domain for the Tomcat startup procedure, and then drops further permissions once the startup procedure actually executes a Java binary. This two-step process is mostly necessary because the Tomcat startup executes a bunch of shell scripts to set up variables, but I don't want to give the application server itself any execute rights.
Conceptually, it's not hard to build such a module with some careful consideration of the files and permissions the process needs at different stages - I was surprised by this. But getting the module to work properly was a real hassle, because there's very little practical documentation on this.
Quite sad, actually. I want to be as smug as Red Hat about SELinux stopping pentesters cold.
Correct me if I'm wrong, but most ransomware is operated almost completely through Tor. Doing email this way may be a problem (for obvious reasons), but for anonymity and uptime's sake most rely on it pretty heavily.
Or worse yet, I can see a daemon sitting around, snarfing juicy details and exfiltrating them. Along with that, it could contribute to a booter network. And as a near-last resort, it encrypts everything to extract more out of the user. It can then monetize even this by being an infector and staying on the network (not reformatting).
Another thing that goes along with this infector idea is using OnionBalance and a load-balanced onion site to promote and speed up various "things". Since we're dealing with the illegal here, well, there's plenty that could be leveraged to host.
Yes, I do a lot of things in Tor onionland. All of my network exists in there, as does control of much of my services, MQTT, database, and more. This is how I use it: https://hackaday.io/project/12985-multisite-homeofficehacker...
It's cool for a variety of technical reasons, but if you just want to run a booter, you're better off using reflection attacks today than a botnet. Things like proxying web traffic to random home machines, performing layer 7 attacks on webapps, etc are pretty nice from a technical perspective and I think a lot of tech people can appreciate them in that aspect.
But that's pretty much where it ends. They don't make easy money like ransomware does. Ransomware produces customers, doesn't require hard business-side work to acquire them, doesn't have competition, etc. From a business perspective, ransomware is just better.
EDIT: Your Tor automation solution seems pretty cool - do you use a VPN to authenticate things or are you relying on the privacy of your .onion names?
Thank you. Nope, no VPN. I run two types of onion sites. One side is for services like Mosquitto, the DB, and Node-RED. The other side is an "onion with password", i.e. HiddenServiceAuthorizeClient in the torrc file. I use that for the SSH backend. That means you need to know the onion site, key, username, password, and root password in order to escalate and gain control of the machine.
I'm also experimenting with things like GUN for the kind of database that can live between them. Once I have a stable distributed database between my nodes, I can start building webapps whose endpoints start and end in Tor.
On a side note, I thought about using OnionBalance, a DB, and Boulder to make my own OnionCA, and talking with the EFF about funding assistance. Frankly, having no CA just stinks, and I want to do something about it. I do know that the onion hash is the last 15 characters of the hidden site's public key... but there has to be a better way than this.
Unlike regular HTTP vs. SSL, Tor provides confidentiality, integrity, and host authentication simply by connecting to the right name.
I've got three separate encrypted copies of my homedir spread across two different locations and a fourth snapshot taken once a week on a drive that's physically powered down when not in the middle of a backup - and I've regularly tested restoring from each of them.
How does any of that help when malware grabs my .git .aws .gnupg .keypassx, etc directories from my running system - and unknown 3rd parties start downloading (or worse, backdooring) my "private" stuff from github?
All I can tell my customers is to restart their Macs.
I have ~/Applications and ~/Library, which is where anything I install should go.
Edit - added detail.
It's Adobe's updater app that needs to run and checks for that; it's not the individual apps themselves.
Maybe given the limited scale of this one and the obvious interest the attacker has in producing trojaned versions of popular software, this is actually what they were hoping for in the first place.
Frankly I can only think of a small number of processes that need to automatically access the file: backupd, sshd, and Carbon Copy Cloner. Everything else should require my attention.
Essential OSX software.
Looks like F-Secure just bought it in the last month or two. :(
The other thing I find interesting is this comment:
> We’re working on the assumption that there’s no point in paying — the attacker has no reason to keep their end of the bargain.
If you really want to be successful in exploiting people through cyber attacks, I guess you will need some kind of system to provide guaranteed contracts, i.e. proof that if a victim pays the ransom, then the other end of the bargain will be held.
It might seem that there's some incentive for ransom holders to hold up their end of the bargain for the majority of cases if they want their attacks to be profitable.
You're describing a legal system and the rule of law. I'm not sure there's a way to guarantee anything like you describe when there is some illegality in the nature of the process.
Trade only works when you can trust either the parties involved or the system as a whole to uphold their promises (for the system, that means parties that don't uphold their ends will be punished).
Legal systems aren't the only way to give confidence that both ends of a bargain will be held. As one example, some darknet markets have escrow systems for this purpose. It's not too hard to imagine a way to do this with ransomed code. Reputation-based systems also provide incentives for sellers to deliver on their promises.
Those only function because the darknet functions as the system, and the punishment for not following through is that the party loses access to, or prestige in, that market. What entity exists that is trusted and has leverage with both the people doing the ransoming (criminals) and average citizens (ostensibly law-abiding)? Should I trust a darknet broker not to screw me? No. They have no incentive not to, as long as their actual client, the ransomer, doesn't care. For the same reason, the ransomer should not trust any legal entity, because it could withhold the money and give it back to the victim (since the victim is its client).
There may exist a way for this to work, but I certainly can't think of one, and what you described doesn't work either. Trust is the integral factor as I see it, and while you can have trust within a criminal community, and within a law-abiding community, I'm not sure how you get that trust to cross that boundary.
Again, how do you trust a criminal person or organization? By their nature, they don't follow the same rules.
You don’t need an authority vouching for you to become a ‘trusted’ criminal. You just need proof of identity, and a reputation established over time. Drug dealers do this all the time, even though they’re criminals. Hell, it’s even how legitimate businesses work - the FBI isn’t going to shut down Bic for selling shoddy pens, so they build a reputation on “we’re Bic and we did right by you last time”.
An example: a malware group sends every target an RSA-signed demand (with public key disclosed on Pastebin or something). The few people who pay up find that they follow through, so they grow a reputation as sincere. They could even kick things off with a round of freebies - “Here’s your data, here’s our sig, we deleted/unlocked/whatever it for free this time to prove ourselves.” I suppose they’d have to publish demands and outcomes since most targets won’t disclose on their own.
There’s likely a flaw in my specifics (probably around disclosing attacks and proving followthrough), but I only put five minutes into it. As long as you can prove identity, you ought to be able to build ‘trust’.
Drug dealers and those buying from them are both committing illegal acts. That changes the dynamic. Neither party can rely on the legal system to enforce misconduct. That allows an entirely criminal system to work. For example, if you don't pay the drug dealer, they'll just hurt you. If the drug dealer doesn't give you the drugs, or gives you crappy/cut drugs, you just won't use them next time. It's important to note that this transactional relationship does not begin with one party accosting the other, as in the ransomware case.
The ransomware scenario is the equivalent of being mugged in an alleyway, but only of your smartphone, and the mugger offering to give your phone back if you go to an ATM and come back with $100. The whole interaction began with a crime perpetrated by one party on the other.
> As long as you can prove identity, you ought to be able to build ‘trust’.
One problem is that the identity, because it is anonymous, is worth fundamentally less for this purpose than any real identity. The ransomer could decide law enforcement is getting too close and stop responding to all payments, or abandon the system and someone else could take it over. For any identity used just for this scam, the loss of reputation is irrelevant, and if they are using the same identity for multiple scams they are inviting more law enforcement response. There are no future consequences of mention to screwing people over, since the identity can be changed at any time.
The only thing that really protects you in any of these situations is the incentives of the criminals, but those incentives, be they economic or liberty-based, are subject to very different constraints than a legally operating entity's. The bottom line is that the person or people involved have started the whole relationship by showing they are willing to screw you over. Establishing trust is not impossible (some people will trust), but it's very hard to do, a large percentage of people will never actually trust you, and they likely shouldn't, because you don't have the same incentives or punishments they do.
It's not a requirement that the authority be legal. Note that a person's name isn't required to establish authority; pseudonymous reputation provides assurance as well. Darknet markets have reputation systems and have already figured this out.
> And how do you ensure you are dealing with the same person from one transaction to the next?
The same way we do it with pseudonymous systems now: by having an authoritative identity somewhere that can verify their actions. @shittywatercolour could make a new account on HN, do an AMA, and post on his Twitter that he's doing an AMA with <name> for proof. Banksy can claim work by posting it on his website. In the same way, a reputable seller on any marketplace (such as a darknet marketplace) could do the same thing.
But again, why should I trust a darknet? What makes a group of criminals trustworthy when a single one isn't?
You haven't really addressed the fundamental problem of trust, just kicked it down the road to a new point. Any legitimate entity being used to authenticate a criminal will likely be seeing subpoenas for access information. If it resists those subpoenas, then it is helping the criminals and acting illegally. Both states have severe negatives for one of the parties.
Only a small fraction of trust among non-criminals is backed by force of law. The rest is backed by past record. If you don't have one, you put up collateral, get someone else to stake you (e.g. loan co-signers), or start small until people get to know you.
The only real question here is how you verify who you're dealing with. That's doable, and once it's done everything else is a pretty established process.
It's not just about how reliable they are; it's about what incentives they have to follow through, and what recourse you have when they do not. Entities acting illegally have very different incentives than legal ones, and your recourse if they do not follow through is very limited, especially if you are acting legally.
> Only a small fraction of trust among non-criminals is backed by force of law. The rest is backed by past record.
Past record accounts for some of it; the ability to exact your own punishments accounts for some of it. Any drug dealer that screws over a client needs to account for that person taking the matter into their own hands.
> The only real question here is how you verify who you're dealing with.
That's not the only question. I believe I've outlined many more in my other responses in these threads (one of which was to you).
This isn't true; think Yelp. Why couldn't a Yelp for ransomers exist?
Even so, Yelp is renowned for extorting restaurant owners for money (whether or not it's illegal or officially counts as extortion). That's in a market where all participants are supposedly acting legally. Why am I to believe that illegal, anonymous entities won't be willing to burn their reputation (which may exist only for this scam) when they decide to stop?
Returning digital goods (or more general "knowledge") works either based on trust or through enforcement. The latter is the rule of law.
Just brainstorming, but:
1. Trusted third party creates a service that (a) provides a one-time-use encryption key (b) provides an endpoint to upload an encrypted blob of information along with an email (or a passcode) and a date after which the decrypted content will be made available to that email (or via that passcode), (c) provides a UI that allows a user to pay $x (redeemable via email/passcode) to wipe the encrypted content from their server, if paid before the ransom date.
2. Malware author compromises system, encrypts content using (a), uploads encrypted content with their email/passcode to (b), sends user a link to (c).
3. Malware author provides some evidence that they haven't also uploaded non-encrypted content elsewhere to give confidence that once the user pays, the content will not exist elsewhere. Some ideas: system/network logs, malware analysis that shows that it only uploads to trusted third-party, providing proof in decompiled source that malware only uploads to trusted third-party, and/or a reputation/review system. Note that this doesn't need to be airtight proof, it just needs to give the victim enough confidence that they think it's worth the risk to hand over some money.
Would this work well, in practice? Who knows. But I think it's a proof-of-concept that shows that there are potentially other ways to escrow ransomed content.
Any amount of information that could show this would invariably give away the identity of the hacker. Even then, since the information comes from them, it can't be trusted.
> But I think it's a proof-of-concept that shows that there are potentially other ways to escrow ransomed content.
There's a difference between keeping the owner from their own materials and threatening to spread those materials to others. In the first, you at least know whether you got the files back (for the most part; it might be hard to notice small changes/omissions). In the second, not only do you not necessarily know whether it's been shared, the blackmailer retains the ability to spread it in perpetuity (whether it still retains value or not).
Of course, you can never verify that they will not release the code or keep using it maliciously.
Also, I'm not familiar enough with Ethereum to know whether there are downsides to using it, such as leaving a trail until laundered (like Bitcoin).
Harder to achieve online but not impossible, though plenty of criminals make enough without essentially having to place themselves at risk of physical attack from organised crime.
Could a smart contract system work here? In this example, the smart contract would assure you the hash of the repo sent to you corresponds to the one you already had locally. You'd add automatic payment when the conditions are fulfilled...
Is that feasible?
Unless someone fancies setting up a trusted hacker escrow that acts as an intermediary between compromised servers and hackers? That sounds incredibly complicated, highly illegal, and unlikely to be trusted by either hacker or hacked, though.
There's also the fact that they don't care about who you are or what you do, their only consideration is financial.
Most computers are always connected to the internet when they're on, even if they don't necessarily need to be. Airgapping isn't really used outside of very sensitive networks, but I'm starting to think we need to head towards a model of connecting machines only when really needed.
Of course the cloud based world doesn't allow for that, and perhaps I'm a luddite, but I increasingly find myself disabling the network connection when I'm working on my PC. Kind of like the dial-up days.
As a good corporate drone, this arrangement is kind of forced on me, but a lot of small company / startup folks totally mix the two. Might be a good thing to not do.
Sure, it doesn't protect you from e.g. a tool you need for work being compromised, but it reduces the attack surface - this guy probably wouldn't have installed HandBrake on his work machine.
Another thing we do, specifically because medical data is involved: a lot of the time I'm forced to work inside a non-internet-connected network that I VPN and then remote-desktop into. Firewall rules mean the only thing getting in from my laptop is VNC. Some systems also require plugging into a specific physical network. Overkill for most uses, but it makes losing laptops far less scary if you can keep a lot of your stuff on a more secure remote system.
Try out Qubes: http://qubes-os.org
Something like this could be good if you wanted to rapidly switch between different compartments on a single device. It would be great for, e.g., keeping a 'sensitive data' compartment separate from an 'emails and paperwork' compartment on a work laptop.
Doing something like this is certainly better than using a single device with no separation, or just user accounts.
Psychologically, I still think that training people to use different devices for different things is more likely to stick than account separation on steroids. This extends to physical security - not leaving a work laptop in your backpack in a nightclub cloakroom like you might a personal device. But in the end, for that reason, at a small company where you can avoid hiring idiots, it's up to each person to decide what psychological tricks they need to get themselves to do things.
I wouldn't trust something like this to keep high-security information separate. When some exploit escapes Xen, or (for a corp) accesses otherwise securely configured Windows systems, there is nothing like isolated networks to keep your blood pressure low. For most software-as-a-service dev-type people you already have this - your data lives in a data center on carefully configured production servers. But for data-science-type users, you see a lot of people (especially in academia) doing work with potentially scary datasets on local laptops they probably also watch pirated TV on at home, which is a bit concerning. I guess at least if they were using Qubes it would be a bit better.
Because we can always not care about those others in the context of what we should do.
Technology can only do so much to "protect" users from themselves and from miscreants. Couple this with an indifference to privacy among most of the connected population, and you've got a recipe for a world where nothing is safe.
I would both prefer and hate this setup. I use my personal laptop for work, and having all my apps, data, settings, etc. available in one place is amazing. I could get past using different computers, but the sad reality is my provided work computer is underpowered compared to my 3.5-year-old MacBook. I can run circles around my coworkers' machines on the simple fact that I have an SSD. IDEA opens in seconds for me while they go get a cup of coffee. Our desktops haven't been updated in probably 4+ years, and I strongly believe they'd be more productive on macOS than on whatever flavor of Linux they are using (most use Linux because they'd rather die than use Windows, and they can eke a little more performance out of it). A number of them have older MacBooks they use for meetings, but those aren't powerful enough to actually develop on.
"Do not install unsigned software" is a good start. Does that dialog need a secondary "Are you really, really sure?" Absolutely. But the basic defence in this specific case was in place.
pacman -S foo    # install foo from the official, signed repos
yaourt -S foo    # build foo from the AUR via its PKGBUILD
See for example:
* Kivy: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=pytho...
* Chrome: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=googl...
* Vivaldi: https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=vival...
* Plymouth (over HTTP too!): https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=plymo...
* Oracle JDK (also plaintext HTTP!): https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=jdk
Note the included hashes — if the file on the server gets replaced, the building process will complain. (Sure the package maintainer will probably just replace the hash :D But if the file changed but the version number didn't change, or there was no release announcement, that's suspicious…)
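The relevant PKGBUILD fields look roughly like this (URL and hash invented for illustration):

source=("https://example.com/releases/foo-1.2.3.tar.gz")
sha256sums=('d1a9f049617a0e2ca32cb11c5a9dcf5ddc9aba3041a6f4ca246ba9a679b54e57')
# makepkg verifies the download against sha256sums before building anything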
I don't remember how Arch does it, but in FreeBSD Ports you need to actively replace the hash in the text file; there's no easy ignore option. (FreeBSD also mirrors the distfiles on its own servers, which is pretty cool.)
It isn't on the Windows store either as far as I can tell.
Why, I don't know - maybe nobody involved wants to pay the fees to become an Authorized Developer, maybe there's a Free Software religious argument going on, maybe Apple doesn't want a program whose original function was "ripping DVDs" to be on there because of the many deals they have with the entertainment industry.
tl;dr: the program in question ain't in the operating system's package manager.
cough Arch cough
In the end it's all about trust. If you trust some web domain, you can also trust their software. If that software is compromised, you're out of luck. No package manager or walled Apple garden can help you with that.
The solution, of course, was to add a fourth question...
1) Don't install random crap off of the internet: only use the Mac App Store, with sandboxed apps and "System integrity protection" turned on.
2) If you absolutely need to have some non-MAS app, download the DMG and check the checksum, but let it rest, and only install it a month or so later if no news of a breach, malware, etc. has been announced (see the sketch after this list).
3) Don't give a third party program root privileges -- don't give your credentials when a random program you've downloaded asks for them.
4) Have any sensitive data (e.g. work stuff) on an encrypted .DMG volume or similar, that you only decrypt when you need to check something (also sketched below). Even if your Mac is infected, they'll either get just an encrypted image of it, or won't be able to read it at all.
5) Install an application firewall, like Little Snitch.
6) Keep backups.
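For points 2 and 4, the mechanics are a couple of shell commands; the file names here are just examples:

# 2) verify a download against the published checksum
shasum -a 256 SomeApp.dmg              # compare the output by eye, or:
shasum -a 256 -c SomeApp.dmg.sha256    # if a checksum file is published
# 4) create an encrypted disk image for sensitive files
hdiutil create -size 500m -fs HFS+J -encryption AES-256 \
    -stdinpass -volname Sensitive sensitive.dmg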
> 99% of the time these hash changes are innocent
That's actually not very good at all, and it proves they shouldn't just trust hash changes! Very odd.
EDIT: But in this case, the software in question is signed, so the (fallback) technique described above is not necessary. The download page contains a GPG signature along with a link to the author's GPG public key. Checking the signature would have prevented the attack.
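Concretely, assuming you've fetched the key and signature the page links to (file names here are placeholders):

gpg --import developer-public-key.asc
gpg --verify HandBrake.dmg.sig HandBrake.dmg
# gpg reports "Good signature from ..." only if the DMG matches the author's key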
Yes, it will take a lot of effort to set up, and some effort to maintain, but it helps.
We are looking at four or five Macs of differing types, all running the latest OS, a number of iPhones and iPads, more Raspberry Pis than I'm going to admit to, and a number of other IoT devices.
Also, I really wish more companies would be this forthcoming when they get pwned. I think it's really good when a large company comes out with this type of mea culpa, mea maxima culpa. If professionals can get totally pwned, I really do think it tends to make ordinary users think about their security a little more. Or maybe I'm just hopelessly optimistic!
Thanks for the answer, greatly appreciated. :-)
Apple really needs to fix this. In particular, open-source applications don't sign for whatever reason, and it's clear that, barring some change, they aren't going to start now.
Most open-source applications are signed, just not via Apple's App Store. Instead, most OSS downloads provide a GPG signature. You should not execute downloaded code before checking signatures - whether via the App Store or a package manager, or manually.
The main difference is that Apple provides a management tool for revoking a developer key, whereas OSS projects must run their own trusted server where they publish their public keys (and issue a revocation in the case of GPG).
I realize that this often is not the /easy/ way. But IMO /no/ software is worth running without verifying its integrity. The good news: the more you verify your downloads, the easier it'll get :)
The better approach is to turn that question around: what extra security do you gain by not checking the signatures? None? Then it's best to check. Worst case, it doesn't help you at all. Best case, it saves your ass. A win on average, I'd say.
There's no such thing as a free lunch.
You can get a similar ssh 2fa setup with Google Authenticator's PAM module (https://github.com/google/google-authenticator-libpam), and maintain full control over your infrastructure.
Of course, all you're doing with any of this is preventing your key from leaking. A sufficiently motivated attacker could just backdoor your ssh/git binary and access things through that instead, but it's still a good defense-in-depth mechanism, IMO.
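The setup is roughly this, assuming the module is installed from your package manager (paths are the usual Linux ones):

# per-user enrollment: writes ~/.google_authenticator and prints the TOTP secret/QR
google-authenticator
# /etc/pam.d/sshd -- add:
#   auth required pam_google_authenticator.so
# /etc/ssh/sshd_config -- ensure:
#   ChallengeResponseAuthentication yes
# ...then restart sshd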
Can it do full disk encryption?
Works with GPG agent?
>arguably better UX
>The key never leaves your mobile device.
Can't backup the key? I need to buy two iPhones to have a backup in case one is lost?
My understanding is that the only use-case is SSH right now, but that could change, I guess.
> Requires battery. Not waterproof. Not crushproof.
Those are all valid. OTOH, it doesn't require yet another device that can be forgotten or lost, it's far easier to set up (compared to YubiKeys or other smartcards), and it's free.
> Can't backup the key? I need to buy two iPhones to have a backup in case one is lost?
That's correct. They plan to add paper backups and syncing via QR, IIRC. A second (regular) key that's kept offline would do as well.
(FWIW, I use YubiKeys for GPG/SSH/OTP/U2F as well, but I'd definitely recommend this to anyone looking for a cheaper or more usable alternative.)
iPhones can be lost too.
>it's far easier to set up
Yubikey can do U2F and OTP out of the package with no setup required.
While it can be easier (I wrote a nice shell script for myself), I don't consider setting up a YubiKey for SSH hard for the type of person who uses SSH.
It's literally copy/pasting a few lines out of Yubico's docs into a shell.
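From memory, the PIV flavor of those lines is roughly this (the PKCS#11 library path varies by platform):

yubico-piv-tool -s 9a -a generate -o public.pem      # private key is generated on the YubiKey
yubico-piv-tool -a verify-pin -a selfsign-certificate -s 9a \
    -S '/CN=ssh/' -i public.pem -o cert.pem
yubico-piv-tool -a import-certificate -s 9a -i cert.pem
ssh-keygen -D /usr/local/lib/libykcs11.dylib         # print the public key for authorized_keys
ssh -I /usr/local/lib/libykcs11.dylib user@host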
So is the Yubikey software.
>They plan to add paper backups and syncing via QR, IIRC.
I would consider that a fatal flaw. Once the private key is on the device, it should not be easy to recover it from the device.
emphasis on yet another. One more thing to keep track of. Not everyone likes to have their (door) keys hanging off their notebooks all day.
> Yubikey can do U2F and OTP out of the package with no setup required.
Yes, but we're talking about git/ssh here.
> While it can be easier (I wrote a nice shell script for myself), I don't consider setting up a Yubikey for SSH hard for the type of person who uses SSH,
That's not my experience, but I suppose it might depend on who you know/work with, or what area you work in.
> So is the Yubikey software.
I was talking about the free-beer kind of free, but it's not accurate to say YubiKey is all free (libre) either; it depends on the product. There was quite a controversy a while back. Personally, I'm fine with closed-source security products. Ideological reasons aside, I don't think making decisions based on whether the code is open source makes sense.
> I would consider that a fatal flaw. Once the private key is on the device, it should not be easy to recover it from the device.
I wouldn't say it's a fatal flaw. You rely on your phone's security to manage access to signing operations anyway, so if an attacker has access to the app, you're pretty much screwed either way. Again, there are trade-offs, but it's a step up from keeping keys on disk.
As if a whole phone is better... That's not a requirement with a YubiKey anyway. Mine hangs on a lanyard on a dust plug. Many people have YubiKey Nanos and basically leave them plugged into their laptop all the time.
>I was talking about the free beer kind of free, but it's not accurate to say YubiKey is all free (libre)
Kryptonite is not libre. It's not even free. It's all rights reserved.
Yubico piv tool is 2 clause BSD, which is GPL compatible.
>Personally, I'm fine with closed-source security products.
So is RMS, assuming the device's software is fixed. See his comments on microwave ovens. The YubiKey fits this description: its firmware is not updatable, for better security. Anyone trying to make a controversy out of what Yubico is doing is either more extreme than RMS, dumb, or a competitor.
And, although I'm not the commenter, it doesn't read anywhere near trying to imply Kryptonite is a replacement for a YubiKey. They were replying to someone comparing it to the 2FA PAM module, and explaining that it's more like a smartcard or YubiKey in that it stores the key, rather than adding a second verification outside the key.
I looked at the source repo, and at a glance, it looks like the app stores the key pair in the iOS keychain. My guess is that means the key can be removed from the device if the user chooses to do so, or if the user gives another application access to the keychain. Perhaps I'm wrong about that. I keep hearing "the key never leaves the device" repeated, and I'd mainly like to know how that guarantee is made.
> After the first unlock, the data remains accessible until the next restart. This is recommended for items that need to be accessed by background applications. Items with this attribute do not migrate to a new device. Thus, after restoring from a backup of a different device, these items will not be present.
Seems like it's completely random whether an app needs admin or not. Blender3D? No admin. Unity3D? Admin. Etc.
This is probably true. I'm surprised we don't get after companies for unnecessarily requiring admin with their apps.
I've participated in, and run, exercises where such damage is inflicted on purpose to surface gaps in the response processes and to fix them. I was inspired by the Google DiRT (disaster recovery) and Netflix Chaos Monkey exercises. Both of these create not simply review processes but simulation by action - actually doing the damage to see the process work. Setting up your systems so that you can do that is a really powerful tool.
ssh-keygen -p -f keyfile    # change the passphrase on an existing private key
- Do not install personal software on your work computer
Read: The attacker could have accessed all that data but didn't send me an e-mail telling me that he did.
The "stolen" part bugs me — even though it would be incredibly shitty to distribute cracked-from-source versions of Panic apps, I hope that Apple wouldn't prevent users from running them. I appreciate the malware protection built into macOS, but this might be an abuse of it.
Look at an example of one way the word "steal" is used in speech. If I say "Good artists copy; great artists steal", am I saying that great artists break into a building and illegally remove a physical artifact, or am I saying that they copy something for their own benefit? If one can "steal" an idea, then isn't that a "stolen idea"? And if that stolen idea is directly used to create some salable product, then isn't that a "stolen product", in that sense?
edit: The comment I responded to made the claim that source code couldn't be stolen, only copied (similar to the standard argument "it's copyright infringement, not theft", often applied to copied media). There was more, but I don't remember the wording, and I don't want to misrepresent the position.
It's not necessarily true that part of the value of source code is its secrecy, though. We'd like to believe that, but it's difficult to come up with evidence to support it. Most instances where source code is leaked result in no damage to the owner, for example.
Pretty sure the same could be said of KFC's secret spices recipe.
FBI, seriously? Calling the cops, over malware, as a cool independent software company?! I mean, sure, fuck malware, but what happened to "fuck the police"? :D
You use the same machine for development of commercial, closed source software and video transcoding for most probably private use.
Your postmortem can be summarized as "[advertisement]".
I get that real security is too hard for most people. But even a few precautions can make a big difference. In order of effectiveness (least effective first):
* Don't have sensitive data mounted automatically (yes, ubuntu, your encrypted home directory is a joke).
* Don't have sensitive data on the OS-drive. Even if you are limited by archaic USB2, RAM is cheap and so is a virtual memory backed disk. Pushing your closed source into it won't take more than 30s.
* Work hard and party hard. But keep that separated. One computer for fun, one for work. The one for work should not even think about talking to external devices until it's sure the environment is friendly.
PS: I do drink my own kool-aid - I always carry 2 laptops that run 4 operating systems. My development and sysop environment is not even capable of playing a movie.
Edit: "too hard for most people" may sound harsh, but it is not meant like this. I teach OPSEC to activists in developing countries, work for a non-profit with real privacy concerns in a first world country and make real money doing audits for rather large companies. When I say "for most people" it should probably have been "in most circumstances".
Folks, don't mix your personal & professional lives. The benefit is not worth the cost!
Hard advice for the co-founder of the company to follow, I'd expect.
If my co-founder used his personal machine for work, or his work machine for personal use, we'd have words.
I honestly can't believe this got downvoted to -4. Is it really so insane to say 'don't mix your personal and business computing'? Is it really so crazy to impose a small amount of discipline that prevents a personal breach from endangering your entire business and all your customers?
edit: seriously, will one of the 8 people who've downvoted these comments post a substantive comment? I honestly don't understand the anger.
What if it was another piece of software?
Who says he used it only for private use?
And please don't ask why you got voted down; it derails the discussion (and is pretty much always yet another downvote magnet).
It's a chore, but it prevents the sort of cross-domain spillage we see in this article. I think it's worth some minor inconvenience to me in order to prevent major damage to my company and my customers.
Because IMHO that's the most likely reason for a developer to have Handbrake installed. It's not the only reason, as I noted, but I believe it's the most likely one.
As I already said, you provided a valid business case, at which point the rest of your holier-than-thou comment goes out the window, because it could very well have been that, and the OP has no incentive to claim otherwise.