* The disk is encrypted by default, and the OS is aggressive about keeping data encrypted even while the phone is running; a booted-up computer, by contrast, is almost always effectively decrypted.
* The language runtimes on the phone are hardened.
* The phone's kernel, in addition to being more important attack surface than the Linux kernel (because of the jailbreak market, among other things), is auto-updated.
* The phone's users, at least on secure phones, are never superuser.
* Virtually all of the shell command attack surface is gone, since you can only install apps through the one app-install interface approved by the phone vendor.
I could go on, and I'm probably missing a big one. Phones are more secure than "computers". They kind of have to be: their OS design benefits from 2 decades of security architecture lessons learned.
I use a Y4 for some of my SSH keys, but it's a gigantic pain in the ass, and Egor Homakov is not entirely wrong to call it a marginal bit of security theater (whoever owned up your computer also owned up your ssh binary). It's true that the phone software token doesn't fix that, but it also doesn't cost anything.
Most Android phones are effectively not updatable, since their manufacturers never publish updates.
That, alone, should be enough to put phones among the most vulnerable devices on the planet.
If your assumption is that Android + yubikey is as good as iOS, you need to state that. Tptacek disagrees with that elsewhere in this thread, anyway.
Personally, I think Android itself was not designed with security as strong as iOS's, because it was designed for openness, and that has turned out to be a problem. One which iOS has to a much lesser extent.
Openness and security are orthogonal - it is possible for software to be both open and secure: see OpenBSD.
Of course, Android has always sandboxed apps and prevented them from talking to each other directly, but the restrictions on iOS have also always been much tighter. iOS has never had the issues Android has had with SD cards. As someone who became an Android developer almost right after it came out, I was attracted by exactly this freedom.
As you can see from this thread, some people are bored and just want the distraction and will go out of their way to revive the slapfight.
Despite having friends working hard to deliver a secure Android phone for Google, my advice to generalists is the same as yours: avoid Android.
One might think that anyone savvy enough to want an app to manage his ssh private keys would also think about his phone's security. In practice, from my observation, a lot of people who have to use ssh keys don't really understand them and would not think twice about loading an app like this onto their Android phones, alongside all manner of ad-laden apps of unknown quality or provenance.
I myself use an Android phone. But I assume it's not secure.
Note that you are trusting this app with your private key. While other apps are sandboxed away from having access, all it takes is one update to this app to sneak your private key off to a remote server. You have to trust the publisher of this app, including their entire chain of source-code repository management and app build/release process. It takes a single instance of a malicious person inserting code that steals all users' private keys (a disgruntled employee at this company, a social-engineering attack that gains commit access to their GitHub, etc.).
It's the same reason I am horrified that anyone trusts password managers from for-profit companies. Those companies have unfiltered potential access to your entire online identity, including critical logins like your bank and government services. The app is sandboxed, and the company promises that your (unencrypted) credentials never leave the local device. But it is impossible for them to offer an absolute guarantee that their app will never be the target of an attacker slipping in code that steals information.
In this particular case, you're giving a single app access that could potentially expose your key to theft. That's less risky than storing your key in the default ~/.ssh/id_rsa, where any app running on your desktop machine can read it. It's also likely to be a less devastating loss than the compromise of a password manager's contents.
On Android, the Kryptonite code uses the AndroidKeyStore to store the private key, which means that the app does not have access to it. At a minimum (on old devices), AndroidKeyStore keeps the private key material in a separate process, so it never exists in the app's process space. On newer devices (launched with M or later), the private key material is kept in the Trusted Execution Environment, so nothing in Android user or even kernel space has access to it.
EDIT: Actually, there's one small flaw in the Kryptonite code that may make the private key accessible to a sophisticated attacker who compromises the app. The key allows signing without using a hash function. Signing a sequence of carefully-chosen plaintexts can reveal the private key. I filed an issue and sent a pull request.
Fair point, though not really applicable in this case. It looks like you can find everything you'll need on their GitHub (https://github.com/kryptco) yourself.
Which means you could check/skim the code for strange phoning-home calls and get rid of them in your own fork. It would be nice to have some security experts do a code review :)
> Also likely to be less devastating of a loss compared to the compromise of the contents of a password manager.
I often compare this situation to losing your actual keys/keychain. The person who finds your keys needs to figure out where to use them (if you don't store the address with your keys, you're kind of fine).
If someone finds my private keys, GitHub is what I'd worry about most. The good thing is you can add a passphrase to your SSH keys (something you can't do with your "real"/physical keys ;) )
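Adding or changing one is a one-liner, for anyone following along (the path assumes the default key location):
$ ssh-keygen -p -f ~/.ssh/id_rsa   # prompts for the old passphrase, then the new one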
The fact remains that app installs are bundles whose base source could come from anywhere. There's no guarantee that what you install from an app store was built from their GitHub. On a non-jailbroken iOS device, you're installing a closed-source binary with no inspection possible. I believe you could build your own Android apk from their GitHub, but how many users would ever do that (let alone rebuild it on every update)?
As I mentioned, I suspect that the first attack against one of the major password managers will result from a commit to the official repository (ex: GitHub). Users are not looking at every commit to GitHub before updating an iOS app to make sure that the latest version doesn't have a backdoor.
At the end of the day, you must trust the app publisher with your unencrypted data within the app. Anyway, for this ssh product, I wouldn't be all that worried. Password managers though... I'm waiting for the first attack to happen, at which point I can point to my HN history to show I saw it coming. ;)
It's a real pain, but I can buy a laptop/desktop that doesn't have Intel ME (or its AMD equivalent) running on it. But there's no such thing as a working phone without a baseband OS. And AFAICT there isn't a single available baseband OS running FLOSS, even including the zerophone.
In fact, there isn't even a phone that has a physical hardware switch that allows the user to turn off the baseband.
edit: clarification of "Intel ME"
Are there cases where a similar situation has been described with a USB 1.1 audio device using an open source driver? Or any other class of USB device used by tens of millions of people?
As for vulnerabilities, I'm just assuming here, but it's a pretty reasonable assumption. Radio firmwares are old legacy code, written by hardware people, likely in C or even assembly, closed source, and highly guarded secrets. We already know that they, you know, have a radio in them, and that LE can use devices like Stingrays to spoof towers. This sounds like a disastrous combination of factors for security. I'm 99% certain that most phones have an RCE waiting to happen, and most of them will have DMA on the primary CPU.
For what it's worth, at least for Apple phones, I think he's wrong about there being any component that matches the description he's provided that is a memory corruption bug away from being able to overwrite AP memory (and I'm even more sure there's no component that can do that to the SE).
But I'm not sure what exactly they're talking about, so it would be weird to rebut at this point.
Both Google and Apple design against the baseband chips as adversarial components. Before we play the "but HSIC is complicated and there's DMA in it" card, we're talking about security teams that have designed and implemented encrypted memory buses; I think they can handle bog-standard interconnects. I can't speak to random Xiaomi phones, but the baseband security risk story on modern flagship phones is pretty much a myth.
"Phones are insecure because baseband DMA" is a pretty reliable way to spot people who, for lack of any better way to put it, tend to get their security news from Boing Boing.
I've done some platform work on phones, but I'm not close to a leading expert on this stuff (I'm an old-school bug hunter with a focus on crypto bugs, which drags me into some odd corners sometimes). I try hard not to assume that my exposure to this stuff means I know more than other people, not least because that habit has gotten me pantsed in debates with people who know a lot more than me. Modern phones are super complicated and I do not have their block diagrams available to recall from memory. Maybe there is some component that can DMA over arbitrary AP memory that I just wasn't aware of. Are you aware of one? Have you worked on these systems?
However, I'll admit that you seem more knowledgeable than me on the subject, so I won't spout FUD if you say so.
For those of you playing along at home, it's this "there are a lot of crappy Android phones" thing that dominates our concerns about Android, and is the reason we recommend to laypeople (lawyers, reporters, NGOs) to get iPhones and avoid Android phones.
The theater is that you're protecting a key but not the account the key protects. Yes, you'll never lose the actual RSA key bits to an attacker. But that'll be cold comfort after an attacker uses a hijacked SSH process to compromise every account you log into.
There's some value to the Y4 SSH key, don't get me wrong. It's just much less value than you'd expect from all the ceremony.
An added bonus is that not only does it work with SSH, but you can also use those certificates for TLS mutual authentication in web browsers. Works out of the box on Mac/Windows.
But if you enable (and require) the touch feature, it'll be pretty obvious if someone's trying to use your agent to authenticate somewhere else, no? I'd certainly notice if I had to press the button twice while I'm logging into a machine that should only request auth once (i.e. I'm not proxying through another box).
Happily, this won't work if you only ever use SSH with a non-shell command, e.g. as a fancy encrypted netcat(1).
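e.g. the kind of non-shell usage I mean (hostnames and paths are just placeholders):
$ ssh user@host 'cat /var/log/app.log' > app.log   # runs a single command, never opens a shell
$ ssh -N -L 8080:localhost:80 user@host            # pure tunnel, no remote command at all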
i think part of the "theatre" is the whole convenience-for-safety thing. a few years ago it was gospel to change your passwords all the time. then some data started to suggest this might actually be bad, because it forced users into using easy-to-remember things like sequences, or writing passwords down (often in something like a notes app or under the keyboard on a sticky).
while this may not be good for military, gov, etc., it's probably a lot better for average joes. by making it easier to have a pretty good security method, i can reasonably expect my users to stop doing stupid shit like keeping their keys on google drive (because "weren't you the guy who made us use drive because it's more secure than ftp?!" - yes, many users are petty like that).
How is this different from your phone?
("Traditional" meaning, "to admit to": http://dictionary.cambridge.org/us/dictionary/english/own-up)
Don't ever get your security news from Boing Boing.
I'm not privy to HSIC details for phones, but most high-speed chip interconnects I've seen are proprietary, vendor-unique solutions with little global commonality, and some of them do feature direct memory access too. So, though I'm speaking with a fair amount of unknowns on my part, it still seems like a fairly high risk - even if one removes the DMA feature from the interconnect, it still ends up running some sort of lesser-reviewed proprietary protocol stack.
In modern flagship phone designs, the baseband is an adversarial component. It isn't trusted with access to AP memory. It's accessed like a peripheral.
Nobody doubts that there are vulnerabilities in baseband processors. There certainly are. What I object to is the accompanying story that an exploit in the baseband gives you unfettered access to memory on the phone. I don't know how far back you have to go for that to be the truth, but it isn't true today for flagship phones.
Nobody should minimize the importance of baseband flaws. They're very bad for the security of the phone in its cellular phone capacity. But there's a reason why an Apple phone has separate silicon for the Secure Enclave and for the encrypted memory bus it uses to access memory: it's because the components of the phone don't trust each other. Should you trust GSM? Fuck no. Are your SSH keys more secure on an iPhone than they would be on your laptop? Fuck yes.
If you must use Android, don't use random Android phones. Stick to Google's phones.
Security Problems: Android-specific
-- Android 5 was supposed to make encryption mandatory, but it killed performance, so they rolled that back. Android 6 was supposed to do it again, but if your phone isn't fast enough, or you are upgrading, again it isn't mandatory. My Android 6 phone didn't use encryption until I enabled a PIN and biometrics. (You can check your own device's state; see the snippet after this list.)
-- Many of the old versions of Android that will never get patched are still in use, meaning many phones will never be secure. New malware attacks come out regularly because there's such a large market of old Android devices.
-- Only certain things are auto-updated, and holes which require a firmware patch seldom make it to the phone due to forking.
-- Vulnerabilities in chipsets happen every other year and make hundreds of millions of devices vulnerable at once, and require patches to travel the entire supply chain.
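Since the first item above hinges on whether your particular device is actually encrypted, it's worth checking rather than assuming. A quick way, assuming you have adb installed and USB debugging enabled:
$ adb shell getprop ro.crypto.state   # prints "encrypted" or "unencrypted"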
Security Problems: Both iPhone and Android
-- Every year, more application-specific and privesc vulns are found in core phone software, compromising credentials and providing new opportunities to compound exploits. Browser exploits are notoriously effective.
-- Anyone who wants to use a useful feature of their phone on a carrier-branded device has to become superuser.
-- Ransomware isn't just for desktops anymore! Ransomware exists for both platforms that will lock your device and demand money to unlock it, or it will attempt to destroy data. How does it get in? The app store.
-- Don't want to ransom the user's data? Just silently steal their credentials, take over accounts, then sneak backdoors into the phone.
-- It's been shown that vendors and government organizations have their own service code that can run without user intervention. Also, Cheinah.
-- Call and SMS hijacking using SS7 are carrier-independent, and GSM providers that run vulnerable older encryption protocols allow calls and SMS to be intercepted. The NSA can process A5/1-encrypted GSM connections - thanks for the heads up, Snowden - and in general it's possible to force a weakened or unencrypted network connection. (If you have to use SMS two-factor, you should use a dedicated SIM + number that no one knows about, so hopefully your phone is dual-SIM to make that easy.) And to add to the embarrassment, both iPhone and Android users have suffered DoS attacks just from being sent a single SMS.
-- If you do lose your phone, PINs, swipes, and biometric data can be recovered to unlock the phone. Studies have shown that credentials can be recovered from some phones even after a remote wipe procedure.
What moving all your secure tokens to a phone costs you is a genuine sense of security. There's a wild wild west of an app market, phishing's easy, vulns are plentiful, there's tons of varying hardware and software platforms to find holes in, and everyone now thinks they're secure, but nobody actually secures them. If I'm a malware writer I'm definitely going to target phones.
I mean, consider that if they were really bulletproof secure, it wouldn't take years to certify an old, functionally handicapped version of a phone just for POTUS to use it. It's so difficult to make a secured smartphone that Trump was allowed to keep his Samsung Galaxy to tweet with on an unsecured wifi network.
Your iPhone list has virtually nothing to do with the difference in security between phones and computers. It's just a bunch of things you don't like about phones. What on earth does A5/1 have to do with any of these issues? Cracking A5/1 keys doesn't break your IP sessions.
If you break the encryption you can intercept 2fa at the least. Any time encryption is broken security assumptions have to be re-evaluated. Your HTTPS session might be secure but that doesn't mean there aren't new holes somewhere else in the stack.
First argument: If your token generator or private keys are on a device connected to a network, they will eventually get compromised, given enough time and resources. The more attack vectors there are, the more likely a compromise is. Phones have more attack vectors than non-phone computers, so they are more likely to get compromised.
Second argument: Of the computing devices available today, phones tend to have at least as bad a security track record as modern non-phone computers. The average user can generally harden a non-phone computer's security* and pick more secure hardware, to improve their security position. But the user cannot generally do the same with a phone.**
Non-phone computers are not only more secure due to having fewer attack vectors; the fact that they can be hardened further than a phone makes them even more trustworthy. If you had to pick a place to keep keys, a non-phone device is more secure than a phone.
* by extending existing OS security features or using third-party software
** or at least, I'm not aware of comparable methods for phones; there may be some software hardening tools out there for phones, but the hardware is much more limited
> We are currently working on a new license for Kryptonite. For now, the code is released under All Rights Reserved.
Soon as I see that, I've got to close the tab, as does anyone who cares about IP.
(disclaimer: i'm a co-founder of ScaleFT)
> If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality. You may grant further rights if you adopt a license.
To anyone more versed in US law than me, what usage specifically does "use, display and perform your Content" permit?
Seems like they're planning to change that in the future, but in the meantime the existing license seems pretty clear to me.
First time I've seen them paired together in the wild like this, very cool!
Nice idea, but only if the phone was a platform you could trust.
* Can you have multiple private keys (site specific?)?
* Does the Android client support ed25519 keys or is it iOS only?
* Please modify your curl statement to be something like how Honeycomb does it (that is, include a checksum test). https://honeycomb.io/docs/connect/nginx/
2) Android doesn't yet support ed25519 but it is coming.
3) We would consider something like this but the question is how that checksum test itself is served. We'll definitely look into improving this. The curl script itself does check the hashes of the installed binaries downloaded from github.
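For reference, the usual checksum-verified pattern looks something like this (the URL and hash here are placeholders, not Kryptonite's real ones; use shasum -a 256 on macOS):
$ curl -fsSL https://example.com/kr-install.sh -o kr-install.sh
$ echo "EXPECTED_SHA256  kr-install.sh" | sha256sum -c - \
    && sh kr-install.sh                                # only runs if the hash matches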
EDIT: I'd love to talk about this, if someone has a differing opinion. Once set up, it was dirt simple to use.
this exact thing is on my dev team's backlog. if you figure out a good license it would be AMAZING for us.
The MuPDF app is available under AGPLv3+. https://itunes.apple.com/us/app/mupdf/id482941798?mt=8
You will catch some flak for discouraging free commercial re-use.
If you see your app or portions of it as more of a library for others to use then LGPL is an option that requires reciprocity. In an unusual case, a "lax" license was recommended for the reference Ogg/Vorbis audio codec implementation: https://lwn.net/2001/0301/a/rms-ov-license.php3
Fixed that for you.
Why'd I even want to remove id_rsa? What's the problem being solved here?
This is why we move it off the computer and onto a phone. The security is comparable to using a Yubikey. I'm not sure why you say your phone is less secure than your laptop. On the phone, apps are sandboxed and the private key never leaves the Kryptonite sandbox.
Furthermore, it is nowadays largely trivial to set up sandboxing within a single user (using SELinux, AppArmor, or whatever else) or to use multiple users and classical privilege separation to achieve the same effect.
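A minimal sketch of the multi-user variant (the user name and key type are just examples):
$ sudo useradd -m sshkeys                  # dedicated key-owning user
$ sudo -Hu sshkeys ssh-keygen -t ed25519   # key lives under /home/sshkeys/.ssh
$ sudo -Hu sshkeys ssh admin@server        # your main account never reads the key file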
It is also telling that your "Threat Models" in the link above do not discuss attacks against the phone at all.
Edit to add: You currently also do not have the ability to use my keys. If I were to install the app (and set it to auto-update, as suggested so vigorously elsewhere), all it takes is a tiny little update by you, with no public oversight, to own every server I have access to. How is that possibly improving security?!
If you have a passphrase encrypted key, you can see this for yourself:
$ eval `ssh-agent` # make sure an empty agent is running
$ ssh user@server # enter passphrase on first login
$ ssh user@server # passphrase no longer needed
Forgetting something like enabled config options is nothing uncommon for a user, but it doesn't exactly speak well for a company making an ssh-agent alternative as a product. I also tend to agree with the poster adjacent to me who emphasized the inadequate threat-model analysis in your blog post, including entirely ignoring or failing to address critical points.
> At the core, phone operating systems are built with better sandboxing than their desktop counterparts. This is why security experts like Matt Green recommend phones for your most sensitive data.
Having better sandboxing is not the same as having safe sandboxing. How secure is the application once an attacker manages to compromise the sandbox?
IMHO the rest of the threat model "deep dive" has no value once we take that attack scenario into account.
What about a non-dictionary, 20-character passphrase protecting your private key, or storing your SSH key on an OpenPGP smartcard in a USB token, a Yubikey, or a Nitrokey (www.nitrokey.com)?
I believe that would be much more secure than an application whose security model rests on sandboxing alone.
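For the smartcard route, the usual gpg-agent wiring is short. A sketch, assuming GnuPG 2.1+ and a card already provisioned with an authentication key:
$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf
$ gpgconf --kill gpg-agent                                # restart so the option takes effect
$ export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
$ ssh-add -L            # should list the public key held on the card
$ ssh user@server       # signing happens on the card; the private key never leaves it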
A phone is much more prone to theft, and I don't password-protect mine, because if I had to input a 10+ character random password every time... I might as well not have a smartphone and just pull out my laptop every time. Since I carry it around all the time, there is little opportunity for unauthorized access... until it gets stolen. And then I don't want to lose all access to my infrastructure.
How is that more secure than letting me backup the private key in the first place?
The easiest way to do that probably results in a callback to Big Daddy G every time I access anything sensitive and I'm cool with that.
I haven't investigated the security of cloud copy-paste, though...
For many, many threat models this pairing is a more competent defense than just about anything a single person could come up with.
Although the question remains: is this more secure than just storing your key on your computer? If you're assuming your machine to be compromised, then as soon as you login to another server you've basically given your attacker potential access there as well.
If the main objective is to prevent other apps in user space from reading unlocked private keys, why not just ssh/sudo into a secondary account where the default shell is set to an ssh client?
Personally, I think one key per client is a good approach when not using a hardware security module (e.g. a Yubikey), as the public key then identifies a unique client machine (e.g. your work laptop). This would help identify which client was breached in an eventual attack. I do think, however, that I would prefer the Kryptonite solution or a Yubikey going forward.
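The convention is cheap to set up; a sketch (the file names and comments are just my own convention):
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_worklaptop -C worklaptop     # one key per client
$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_homedesktop -C homedesktop
The -C comment ends up in authorized_keys, so a breached client is easy to pinpoint and revoke.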
One suggestion regarding PCI DSS: you should probably make a page/whitepaper that outlines the compliance story of Kryptonite. ScaleFT has a great one: https://www.scaleft.com/use-cases/pci-dss/. By the way, have you checked that you do not need to be compliant yourself?
Beyond that, there isn't much else this does regarding PCI. SSH does the rest.
I submitted a ticket on the repo.
But I wish people would be aware of smartcards more, they are all around us, but sort of invisible and unnoticed.
1. Buy cheap blank "Java" smartcards, more or less disposable
2. Install this applet on it https://github.com/philipWendland/IsoApplet
3. Works with OpenSC
I guess it's enough information to concoct some kind of 2-factor auth, but what is stopping you from promoting your real smartcard into "always-with-me" status? Or one of your smartcards, since you can have many.
NFC-capable phones can act as a card reader for contactless smartcards AFAIK, so that's something you can look into also.
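Once OpenSC sees the card, wiring it into ssh-agent is short (the module path varies by distro):
$ pkcs11-tool --list-slots               # confirm the reader and card are visible
$ ssh-add -s /usr/lib/opensc-pkcs11.so   # hand the PKCS#11 module to the agent; prompts for the card PIN
$ ssh-add -L                             # the card-held key should now be listed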
Thanks much for the pointer!
i'm reminded of theo de raadt's answer to a slashdot question back in the day about making a bootable openbsd firewall on a floppy. his response was along the lines of, "firewalls are supposed to be among the most reliable things. floppy drives are among the least reliable things."
On the other hand, I can see an immediate use-case for this for me. I use mosh to log into my cloud dev environment, and since mosh doesn't support ssh-agent forwarding (and is unlikely to ever support it)... this seems like a much better alternative.
You can use the built-in signature digest support and still add support for whatever you want in the future.
EDIT: Seems it's just as if you re-did the cipher negotiation, so no asymmetric interaction.
1. So I have to update all my servers to use my Kryptonite SSH key from the current Private Keys that I have?
2. This solution still doesn't allow me to SSH into my servers from another machine that doesn't have my private keys on it (such as a colleague's Mac), does it?
2. It does, actually. All you need to do is pair with your colleague's Mac: run `kr pair` on their machine. Kryptonite can be paired with unlimited computers, such as your work and home computers. You'll be able to ssh to all your servers using the Kryptonite key.
e.g. on iOS there is the open-source Blink client.
Not sure how the protocol would change, but it'd be nice if Kryptonite could store the keys in one place focused on securely storing the keys and then ssh clients can use them as needed. (Also for things like an iPad using a key on a phone)
For some things I trust a paper backup in a fireproof safe over some nebulous cloud thingy on my phone.
At least everything is open source; I'll favorite it and check back some time in the future.
1. iCloud backups are encrypted.
2. If you're not comfortable with #1, you can manually exclude an individual app from iCloud backups through Settings > Storage & iCloud Usage > Manage Storage (in the iCloud section). Tap your device in the Backups section, then turn off Kryptonite in the "Choose Data to Back Up" section.
This would mean that the phone needs to be physically close in order to accomplish auth, and I would think that's a good requirement based on the premise of this app.
GitHub user agreement allows other users to view and make copies of your content on github but not for "free use" in general.
If your checklist for being "secure" involves being up-to-date, there is no amount of 'getting a better phone' that will put it on par with a desktop OS that receives daily patches.
Updated software does not instantly mean more secure software, as I am sure you're aware.
As you stated, for the generalists, yes, an out-of-the-box iPhone has more hardening than an OOTB Linux desktop, but for the type of person on HN, this is not necessarily true at all. There are too many variables.
I also have to wonder what kinds of backdoor access the NSA has to my phone.
I agree with another commentator, I trust my PC more than I trust my phone. The latter is probably already compromised.
If so, how does the server contact my phone? Through your server, right?
What software do I install on the server for that?
> Our system consists of three components:
> (1) the Kryptonite phone app for iOS and Android,
> (2) the krd daemon that runs in the background on a macOS or Linux computer, and
> (3) the kr command line utility that manages krd.
Sounds like `krd` is why I likely won't be using this.
Try implementing it as a PAM module or something.
So my suggestion re: PAM is irrelevant because you aren't changing the server, you're changing the client.
Ok, I'm interested ...maybe... I'll wait until people more familiar with ssh-agent chime in. :)
Will be keeping an eye on it though.
We detached this comment from https://news.ycombinator.com/item?id=14241907 and marked it off-topic.