Show HN: Kryptonite – a new home for your SSH private key (krypt.co)
338 points by 4kevinking 266 days ago | 219 comments



It sounds very hipster and all, but how is a phone more trustworthy than a Linux PC? Cool, we don't need to trust a PC; now we have to trust a phone and pretend that malware for smartphones doesn't exist at all. A hardware USB token looks much better, as its attack surface is so much smaller than iOS/Android's.


* Every application on the phone is sandboxed.

* The disk is encrypted by default, and the OS is aggressive about keeping it encrypted; a booted-up computer is almost always decrypted.

* The language runtimes on the phone are hardened.

* The phone's kernel, in addition to being more important attack surface than the Linux kernel (because of the jailbreak market, among other things), is auto-updated.

* The phone's users, at least on secure phones, are never superuser.

* Virtually all of the shell command attack surface is gone, since you can only install apps through the one app-install interface approved by the phone vendor.

I could go on, and I'm probably missing a big one. Phones are more secure than "computers". They kind of have to be: their OS design benefits from 2 decades of security architecture lessons learned.

I use a Y4 for some of my SSH keys, but it's a gigantic pain in the ass, and one that Egor Homakov is not entirely wrong about calling a marginal bit of security theater (whoever owned up your computer also owned up your ssh binary). It's true that the phone software token doesn't fix that, but it also doesn't cost anything.


>>> The phone's kernel, in addition to being more important attack surface than the Linux kernel (because of the jailbreak market, among other things), is auto-updated.

Most Android phones are not updatable at all, since manufacturers don't publish any updates.

That, alone, should be enough to put phones among the most vulnerable devices on the planet.


It puts those Android phones among the more vulnerable devices. Don't use those Android phones. I recommend iPhones to anyone concerned about security, but you can substitute the Google phone of your choice; I don't want the argument today.


Thomas, I'm sure I'm speaking on behalf of many on HN - your ability to tirelessly battle through incorrect information on HN revolving around security-related posts is commendable and always appreciated. As they say, patience is a virtue.


In a decent phone it puts the key in the TEE/SE (Trusted Execution Environment / Secure Element).


What's cheaper - an iOS phone, or an Android phone + a Yubikey?


Tptacek is telling you that the most secure phone is iOS. If what's most important to you is price (and therefore not security), yeah, you might find something else more compelling. The fact that your different value system leads to a different choice has no relevance to the discussion.

If your assumption is that Android + yubikey is as good as iOS, you need to state that. Tptacek disagrees with that elsewhere in this thread, anyway.


My read is that tptacek claims that not all Android phones are insecure. You should avoid those Android phones whose vendors don't do serious security updates. Among the vendors that are well known to care about updating their phones is Google. So he seems to say that an Android phone branded directly by Google (e.g. the Nexus and Pixel lines) is a valid alternative to iPhones as far as this topic is concerned.


He did not say that, but it's natural to assume that. All Android phones run Android (naturally), so if for whatever reason you believe iOS to be more secure, and you also know most Android phones do not receive updates correctly, then it's probably easier to avoid all Android phones than to try to accurately predict the development roadmaps of every manufacturer other than Google and its flagship phones.

Personally, I think Android itself was not designed with as strong security as iOS because it was designed for openness and this has turned out to be a problem. One which iOS has to a much lesser extent.


> Personally, I think Android itself was not designed with as strong security as iOS because it was designed for openness and this has turned out to be a problem.

Openness and security are orthogonal - it is possible for software to be both open and secure: see OpenBSD.


I completely agree, by "openness" I meant the lack of ACL/permissions for different data. This is improving on Android, but I don't keep up with it anymore so I'm not sure how much they've caught up or if what is released/planned is marketing hype.

Of course, Android has always sandboxed apps and prevented them from talking to each other directly, but the restrictions on iOS have also always been much tighter. iOS has never had the issues Android has had with SD cards. As someone who became an Android developer almost right after it came out, it was this freedom that attracted me.


I am in fact implying that there are some Android phones that experts might be able to configure to a point of security comparable to iOS.


Cost is not the only factor leading users to Android. And cost certainly isn't driving users to Google's Pixel phones - they are priced similarly to iPhones.


Not to mention the vast majority of android devices have vulnerable wifi firmware, that will never be updated. Somehow fans of phone security think there is nothing strange about replacing a phone every 12 months to keep receiving updates...


What's really happening is that fans of phone security want to say "iPhone" rather than "phone", but when they do that on HN, Linux-on-the-desktop Android supporters come out of the woodwork to yell at them, and it's a winnable argument on a thread that is only about whether iPhones are more secure than Android phones, but not on broader threads where it's just an enormous distraction.

As you can see from this thread, some people are bored and just want the distraction and will go out of their way to revive the slapfight.

Despite having friends working hard to deliver a secure Android phone for Google, my advice to generalists is the same as yours: avoid Android.


Since Kryptonite is available for Android phones, it's a point at least worth highlighting.

One might think that anyone savvy enough to want an app to manage his ssh private keys would also think about his phone's security. In practice, from my observation, a lot of people who have to use ssh keys don't really understand them and would not think twice about loading an app like this onto their Android phones, alongside all manner of ad-laden apps of unknown quality or provenance.

I myself use an Android phone. But I assume it's not secure.


Edit: misread the message. Removed.


Agreed. Until the last year or so there was not a single Android OS+hardware combo that had basically the same full file-based encryption capabilities as iOS 4.0. 4 .... dot .... zero. It took some super small subset of Android to reach the security iOS had 5 years ago. I think (think!) the Pixel comes close to having most of the features of the Secure Enclave -- but you can hardly wave one of the most recent and expensive Android devices around and say "Android is as secure as iOS"... Odds are almost certain that any Android phone a consumer buys today will have far less platform security than iOS.


No. There are hundreds of models of Android devices. Only a handful of Samsung, HTC, Motorola, and some lesser known manufacturers used the BCM4339 chipset.


>> Every application on the phone is sandboxed

Note that you are trusting this app with your private key. While other apps are sandboxed away from having access, all it takes is one update to the app to sneak your private key away to any remote server. You have to trust the publisher of this app, including their entire chain of source code repository management and app build/release process. It takes a single instance of a malicious person inserting code that steals all users' private keys (a disgruntled employee at this company, a social engineering attack that gains commit access to their GitHub, etc.).

It's the same reason I am horrified that anyone trusts password managers from for-profit companies. Those companies have unfiltered potential access to your entire online identity, including critical logins like your bank and government services. The app is sandboxed, and the company promises that your (unencrypted) credentials never leave the local device. But it is impossible for them to offer an absolute guarantee that their app will never be the target of an attacker slipping in code that steals information.

In this particular case, you're giving a single app access that could potentially expose your key to theft. It's less risky than storing your key in the default ~/.ssh/id_rsa, where any app running on your desktop machine has access to read it. Also likely to be less devastating of a loss compared to the compromise of the contents of a password manager.
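To make the ~/.ssh/id_rsa point concrete, here's a small Python sketch (using a temp file as a stand-in for the real key path, with placeholder contents): the recommended 0600 mode protects against other users on the machine, not against other programs running as you.

```python
import os
import stat
import tempfile

# Simulate the default setup: a key file with ssh's recommended 0600 mode.
# (A temp file stands in for ~/.ssh/id_rsa; the content is a placeholder.)
with tempfile.NamedTemporaryFile("w", delete=False, suffix="_id_rsa") as f:
    f.write("-----BEGIN OPENSSH PRIVATE KEY-----\nplaceholder\n")
    path = f.name
os.chmod(path, 0o600)

mode = stat.S_IMODE(os.stat(path).st_mode)
readable = os.access(path, os.R_OK)  # any process running as this user
os.unlink(path)

assert mode == 0o600  # locked down against *other users*...
assert readable       # ...but not against other programs you run
```

This is why an unencrypted on-disk key is readable by every app on the desktop, whereas a sandboxed phone app (or a hardware token) at least narrows who can touch the key material.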


>> Note that you are trusting this app with your private key.

On Android, the Kryptonite code uses the AndroidKeyStore to store the private key, which means that the app does not have access to it. At a minimum (on old devices), AndroidKeyStore keeps the private key material in a separate process, so it never exists in the app's process space. On newer devices (launched with M or later), the private key material is kept in the Trusted Execution Environment, so nothing in Android user or even kernel space has access to it.

EDIT: Actually, there's one small flaw in the Kryptonite code that may make the private key accessible to a sophisticated attacker who compromises the app. The key allows signing without using a hash function. Signing a sequence of carefully-chosen plaintexts can reveal the private key. I filed an issue and sent a pull request.
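To illustrate why signing raw, attacker-chosen values (with no hash) is dangerous, here is a toy with textbook RSA: raw RSA signatures are multiplicatively homomorphic, so an attacker can forge signatures on new messages. This shows a related weakness, not the specific key-recovery issue described above, and uses tiny parameters purely for demonstration:

```python
# Toy textbook RSA (no padding, no hash) with tiny primes; illustration only.
p, q = 61, 53
n = p * q                        # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse; needs Python 3.8+

def sign(m):
    # Raw signature on the bare integer m: m^d mod n. No hashing, no padding.
    return pow(m, d, n)

def verify(m, s):
    return pow(s, e, n) == m

# Ask the signer for raw signatures on two chosen values...
s1, s2 = sign(2), sign(3)
# ...and combine them into a valid signature on 6, never requested from the signer.
forged = (s1 * s2) % n
assert verify(6, forged)
```

Hashing the message first (as standard signature schemes require) breaks this structure, which is why a key that permits unhashed signing is a liability even inside a keystore.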


> Note that you are trusting this app with your private key. While other apps are sandboxed away from having access, all it takes is one update to the app to sneak your private key away to any remote server. You have to trust the publisher of this app, including their entire chain of source code repository management and app build/release process. It takes a single instance of a malicious person inserting code that steals all users' private keys (a disgruntled employee at this company, a social engineering attack that gains commit access to their GitHub, etc.).

Fair point, not really applicable in this case though. It looks like everything you'll need you can find on their GitHub (https://github.com/kryptco) yourself.

Which means you could check/skim the code for strange "phoning home" calls and get rid of them in your own fork. It would be nice to have some security experts do a code review :)

> Also likely to be less devastating of a loss compared to the compromise of the contents of a password manager.

I often compare this situation to losing your actual keys/keychain. The person who finds your keys needs to figure out where to use them (if you don't store your address with your keys, you're kind of fine). If someone found my private keys, GitHub is what I would worry about the most. The good thing is you can add a passphrase to your SSH keys (something you can't do with your "real"/physical keys ;) )


>> everything you'll need you can find on their GitHub

The fact remains that app installs are bundles whose base source could come from anywhere. There's no guarantee that what you install from an app store was built from their GitHub. On a non-jailbroken iOS device, you're installing a closed-source binary with no inspection possible. I believe you could build your own Android apk from their GitHub, but how many users would ever do that (let alone rebuild it on every update)?

As I mentioned, I suspect that the first attack against one of the major password managers will result from a commit to the official repository (ex: GitHub). Users are not looking at every commit to GitHub before updating an iOS app to make sure that the latest version doesn't have a backdoor.

At the end of the day, you must trust the app publisher with your unencrypted data within the app. Anyway, for this ssh product, I wouldn't be all that worried. Password managers though... I'm waiting for the first attack to happen, at which point I can point to my HN history to show I saw it coming. ;)


So if I buy such a phone, what is the relationship between the baseband chip/OS and all the bullet points you listed above?

https://www.fsf.org/blogs/community/replicant-developers-fin...

It's a real pain, but I can buy a laptop/desktop that doesn't have Intel ME (or its AMD equivalent) running on it. But there's no such thing as a working phone without a baseband OS. And AFAICT there isn't a single available baseband OS running FLOSS, even including the zerophone.

In fact, there isn't even a phone that has a physical hardware switch that allows the user to turn off the baseband.

edit: clarification of "Intel ME"


Use a Google or Apple phone and stop worrying about their basebands, which are effectively USB peripherals.


> While working on Replicant, a fully free/libre version of Android, we discovered that the proprietary program running on the applications processor in charge of handling the communication protocol with the modem actually implements a backdoor that lets the modem perform remote file I/O operations on the file system.

Are there cases where a similar situation has been described with a USB 1.1 audio device using an open source driver? Or any other class of USB device used by tens of millions of people?


You have to be very specific about what Android phones you're talking about, and I'm the wrong person to debate Android security with. I have friends who work on Android platform security and I am very hesitant to cast aspersions on the Android security effort as a whole. But my advice for people who really care about security is: get an iPhone.


The radio typically has DMA on the primary CPU, and remotely exploitable vulnerabilities.


Which phone are you talking about, and which vulnerability?


A specific list requires a lot of research to produce, and I don't have one handy. It's generally understood that the answer is "most" phones.

As for vulnerabilities, I'm just assuming here, but it's a pretty reasonable assumption. Radio firmware is old legacy code, written by hardware people, likely in C or even assembly, closed source, and a highly guarded secret. We already know that they, you know, have a radio in them, and that LE can use devices like stingrays to spoof towers. This sounds like a disastrous combination of factors for security. I'm 99% certain that most phones have an RCE waiting to happen, and most of them will have DMA on the primary CPU.


I don't think this is accurate for modern Android or iOS devices. AIUI basebands are connected via HSIC (a chip-to-chip variant of USB) nowadays, which doesn't give the baseband DMA.


My assumption is that they're talking about a component that is not the baseband. The baseband itself definitely isn't connected via DMA. I'm trying to make sure we're using the same terminology before I respond.

For what it's worth, at least for Apple phones, I think he's wrong about there being any component that matches the description he's provided that is a memory corruption bug away from being able to overwrite AP memory (and I'm even more sure there's no component that can do that to the SE).

But I'm not sure what exactly they're talking about, so it would be weird to rebut at this point.


HSIC (and USB in general) is a very complicated stack. Host-controlled DMA is probably in use, which is fragile at best. Historically, evil has definitely been possible with USB, and we all know how good phones are about getting updates.


Wait, so you were talking about the baseband chips?

Both Google and Apple design against the baseband chips as adversarial components. Before we play the "but HSIC is complicated and there's DMA in it", we're talking about security teams that have designed and implemented encrypted memory busses; I think they can handle bog-standard interconnects. I can't speak to random Xiaomi phones, but the baseband security risk story on modern flagship phones is pretty much a myth.

"Phones are insecure because baseband DMA" is a pretty reliable way to spot people who, for lack of any better way to put it, tend to get their security news from Boing Boing.

I've done some platform work on phones, but I'm not close to a leading expert on this stuff (I'm an old-school bug hunter with a focus on crypto bugs, which drags me into some odd corners sometimes). I try hard not to assume that my exposure to this stuff means I know more than other people, not least because that habit has gotten me pantsed in debates with people who know a lot more than me. Modern phones are super complicated and I do not have their block diagrams available to recall from memory. Maybe there is some component that can DMA over arbitrary AP memory that I just wasn't aware of. Are you aware of one? Have you worked on these systems?


I trust Google, but I don't trust vendors. I expect vendors to make compromises left and right to deal with shitty hardware (oh, the radio doesn't work with this security feature enabled? Meh, we don't need it).

However, I'll admit that you seem more knowledgable than me on the subject, so I won't spout FUD if you say so.


This is a valid concern for off-brand Android phones. Google's Android security team is on top of this stuff with their hardware vendors. Nobody in the industry does hardware security better right now than Apple. In neither case is the Boing Boing narrative of "bug on the baseband means you own the whole phone" true. I'm sure there's an Android phone on the market where it is true, but there are a lot of crappy Android phones.

For those of you playing along at home, it's this "there are a lot of crappy Android phones" thing that dominates our concerns about Android, and is the reason we recommend to laypeople (lawyers, reporters, NGOs) to get iPhones and avoid Android phones.


I'm not sure if "Evil/BadUSB"-style vulnerabilities really apply to HSIC, where the code would know what kind of device to expect. It's possible there are ways to make this work - I'm not aware of any past vulnerabilities of this nature, but I'm not familiar enough with this topic to really make an educated guess. Either way, it doesn't seem like a matter of "just find a zero-day in the baseband and you get DMA", it's more along the lines of a vulnerability in, say, Safari, which then needs all kinds of sandbox escapes and privilege escalations to be useful for an attacker, and it still does nothing to get past Secure Enclave (and possibly Android's variant of that).


If the threat was solely limited to an attack on the baseband that could be valid.


What specifically do you consider a gigantic pain in the ass about using SSH keys on a Yubikey? Also, how is keeping the private key ensconced in a (presumably) secure HSM a 'bit of security theatre'? Genuinely interested, here.


The agent process is tricky to set up and persnickety, especially if you use the token for anything else besides SSH. At one point, before I tableflipped and switched back to software keys, I had to switch agent processes any time I used my Y4 to sign into a VPN.

The theater is that you're protecting a key but not the account the key protects. Yes, you'll never lose the actual RSA key bits to an attacker. But that'll be cold comfort after an attacker uses a hijacked SSH process to compromise every account you log into.

There's some value to the Y4 SSH key, don't get me wrong. It's just much less value than you'd expect from all the ceremony.


I switched my SSH keys on my Yubikey from using the OpenPGP applet and all the funkiness of gpg-agent over to using X.509 certificates with the PIV applet.

Added bonus is that not only does it work with SSH, but you can use those certificates for TLS mutual authentication in web browsers. Works out of the box on Mac/Windows.


Can you provide a link or reference to this path? I can't find much with basic searches and have given up several times on the gpg-connect-agent path while using Windows or Ubuntu. The path on Mac seems okay but on other platforms is way too difficult.


>Yes, you'll never lose the actual RSA key bits to an attacker. But that'll be cold comfort after an attacker uses a hijacked SSH process to compromise every account you log into.

But if you enable (and require) the touch feature, it'll be pretty obvious if someone's trying to use your agent to authenticate somewhere else, no? I'd certainly notice if I had to press the button twice while I'm logging into a machine that should only request auth once (i.e. I'm not proxying through another box).


They don't have to use your agent to authenticate again. They just feed commands to your SSH client binary that it then sends over the wire in your existing SSH session-context, with the SSH pty temporarily set to not echo input and discard output (or return it over the network to the attacker, but not render it in the local PTY character-buffer.) A hijacked SSH binary can even detect an elevation-requiring command you provided in the terminal and wrap it in a "sudo -c" along with a payload, such that the whole thing happens as one elevated "transaction", only hitting your agent once.

Happily, this won't work if you only ever use SSH with a non-shell command, e.g. as a fancy encrypted netcat(1).
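The flow described above can be modeled in a few lines — a purely hypothetical sketch of the control flow (the payload and names are invented), not working malware:

```python
# Hypothetical sketch of the attack flow described above: a trojaned ssh
# client piggybacks a payload on the user's already-authenticated session.
transcript = []  # what actually goes over the wire, inside the session
terminal = []    # what the user sees echoed locally

def send(cmd, echo=True):
    transcript.append(cmd)
    if echo:
        terminal.append(cmd)

def hijacked_ssh(user_cmd):
    send(user_cmd)                          # the command the user typed
    send("id > /tmp/.x", echo=False)        # payload; tty echo suppressed

hijacked_ssh("ls -la")
assert transcript == ["ls -la", "id > /tmp/.x"]
assert terminal == ["ls -la"]               # nothing suspicious on screen
```

No second authentication ever happens, so a touch-to-sign token never fires again; the attacker rides the session the user already approved.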


Right, and that's still a problem, but not as much of a problem as if they were able to then use your key to authenticate to every other machine you have access to. The problem is at least contained to a single system at a time.


FWIW, I never had that issue with my NEO on OSX/macOS. I was using my NEO for git signing, SSH, as well as U2F.


tptacek's VPN probably uses the PIV applet instead of the GPG one, and you can't switch them easily (it requires stopping and starting agents). If you stay in GPG+U2F land then you don't have these problems.


totally agreed on the agent. also on provisioning the keys. my windows users' eyes, even the more technical of the bunch, glaze over when teaching "intro to your YubiKey"


y4 is one more thing to worry about. to lose. to have stolen. to drop in the toilet. etc.

i think part of the "theatre" is the whole convenience-for-safety thing. a few years ago it was gospel to change your passwords all the time. then some data started to point to this actually being bad, because it forced users into using easy-to-remember things like sequences, or writing passwords down (often in something like a notes app, or on a sticky under the keyboard).

while this may not be good for military, gov, etc., it's probably a lot better for average joes. by making it easier to have a pretty good security method, i can reasonably expect my users to stop doing stupid shit like keeping their keys on google drive (because "weren't you the guy who made us use drive because it's more secure than ftp?!" - yes, many users are petty like that).


> y4 is one more thing to worry about. to lose. to have stolen. to drop in the toilet. etc.

How is this different from your phone?


Presumably most of the people in this thread already have a smartphone. So it isn't really one _more_ thing.


Most of us have a keyring too - and losing that is a major pain in the ass. For me, I'd say my wallet/keys/phone are the objects I own that are "most secure".


i'm keyless and loving it. true, most aren't


How did you do that? I am genuinely curious.


no car, and a hide-a-key for home

:)


So I have 4 motorcycles, each with its own key/keyring, plus my house keys and my work keys. I've almost _always_ got three keyrings in my pocket.


"owned up" -- Our language has evolved again! (I only comment because I find this genuinely interesting, this mutation evidently coming from the security subset of the tech world.)

("Traditional" meaning, "to admit to": http://dictionary.cambridge.org/us/dictionary/english/own-up)


I thought the big fundamental vulnerability was that many baseband processors operating the phone's radio have DMA to the memory of the entire device.


This is a myth.


I don't understand; I'm going off of information such as this [1][2]. Do you mean that the shared memory vulnerabilities described in [2] have since been closed? Or maybe I'm being imprecise in terming it "DMA" access?

[1] http://boingboing.net/2016/07/20/baseband-vulnerability-coul...

[2] https://www.usenix.org/system/files/conference/woot12/woot12...


On modern phones --- for the last $many years --- the baseband processors are connected to the AP via things like HSIC, not DMA.

Don't ever get your security news from Boing Boing.


That Boing Boing article did reference actual CVEs. Though I like him as an author, Doctorow's writing style in articles rubs me the wrong way too.

I'm not privy to HSIC details for phones, but most high-speed chip interconnects I've seen are proprietary, vendor-unique solutions with little global commonality, and some of them do feature direct memory access too. So, though I'm speaking with a fair amount of unknowns on my part, it still seems like a fairly high risk - even if one removes the DMA feature from the interconnect, it still ends up running some sort of lesser-reviewed proprietary protocol stack.


HSIC is USB. It's an industry standard.

In modern flagship phone designs, the baseband is an adversarial component. It isn't trusted with access to AP memory. It's accessed like a peripheral.

Nobody doubts that there are vulnerabilities in baseband processors. There certainly are. What I object to is the accompanying story that an exploit in the baseband gives you unfettered access to memory on the phone. I don't know how far back you have to go for that to be the truth, but it isn't true today for flagship phones.

Nobody should minimize the importance of baseband flaws. They're very bad for the security of the phone in its cellular phone capacity. But there's a reason why an Apple phone has separate silicon for the Secure Enclave and for the encrypted memory bus it uses to access memory: it's because the components of the phone don't trust each other. Should you trust GSM? Fuck no. Are your SSH keys more secure on an iPhone than they would be on your laptop? Fuck yes.

If you must use Android, don't use random Android phones. Stick to Google's phones.


Ah, thanks! It was my misunderstanding confusing HSIC with a more generic chip interconnect.


pls explain


You are completely right up to the point that people are able to update their phones.


If you want to add the proviso that this is true only for iPhones, that's fine, but I wasn't looking for that particular message board slapfight.


Was that satire?

Security Problems: Android-specific

-- Android 5 was supposed to make encryption mandatory, but it killed performance, so they rolled that back. Android 6 was supposed to do it again, but if your phone isn't fast enough, or you are upgrading, again it isn't mandatory. My Android 6 phone didn't use encryption until I enabled a PIN and biometrics.

-- Many of the old versions of Android that will never get patched are still in use, meaning many phones will never be secure. New malware attacks come out regularly because there's such a large market of old Android devices.

-- Only certain things are auto-updated, and holes which require a firmware patch seldom make it to the phone due to forking.

-- Vulnerabilities in chipsets happen every other year and make hundreds of millions of devices vulnerable at once, and require patches to travel the entire supply chain.

.

Security Problems: Both iPhone and Android

-- Every year, more application-specific and privesc vulns are found in core phone software, compromising credentials and providing new opportunities to compound exploits. Browser exploits are notoriously effective.

-- Anyone who wants to use a useful feature of their phone on a carrier-branded device has to become superuser.

-- Ransomware isn't just for desktops anymore! Ransomware exists for both platforms that will lock your device and demand money to unlock it, or it will attempt to destroy data. How does it get in? The app store.

-- Don't want to ransom the user's data? Just silently steal their credentials, take over accounts, then sneak backdoors into the phone.

-- It's been shown that vendors and government organizations have their own service code that can run without user intervention. Also, Cheinah.

-- Call and SMS hijacking using SS7 are carrier-independent, and GSM providers that run vulnerable older encryption protocols allow calls and SMS to be intercepted. The NSA can process A5/1 encrypted GSM connections - thanks for the heads up, Snowden - and in general it's possible to force a weakened or unencrypted network connection. (If you have to use SMS two-factor, you should use a dedicated sim+number that no one knows about, so hopefully your phone is dual-sim to make that easy) And to add to the embarrassment, both iPhone and Android users have suffered DoS attacks just from being sent a single SMS.

-- If you do lose your phone, PINs, swipes, and biometric data can be recovered to unlock the phone. Studies have shown that credentials can be recovered from some phones even after a remote wipe procedure.

.

What moving all your secure tokens to a phone costs you is a genuine sense of security. There's a wild wild west of an app market, phishing's easy, vulns are plentiful, there's tons of varying hardware and software platforms to find holes in, and everyone now thinks they're secure, but nobody actually secures them. If I'm a malware writer I'm definitely going to target phones.

I mean, consider that if they were really bulletproof secure, it wouldn't take years to certify an old, functionally handicapped version of a phone just for POTUS to use it. It's so difficult to make a secured smartphone that Trump was allowed to keep his Samsung Galaxy to tweet with on an unsecured wifi network.


I didn't bother reading the Android list. If you're concerned about Android security, as I am, don't use Android phones.

Your iPhone list has virtually nothing to do with the difference in security between phones and computers. It's just a bunch of things you don't like about phones. What on earth does A5/1 have to do with any of these issues? Cracking A5/1 keys doesn't break your IP sessions.


There are more ways to exploit a phone than a computer, and you can't control how it works. A computer you can pretty much completely control. Moreover, it is easier to surreptitiously own a phone. It's this false sense of security that's dangerous.

If you break the encryption you can intercept 2fa at the least. Any time encryption is broken security assumptions have to be re-evaluated. Your HTTPS session might be secure but that doesn't mean there aren't new holes somewhere else in the stack.


No, this is in fact why everyone says not to use SMS as an authentication factor. When you use TOTP applications like Google Authenticator on your phone, you are no longer relying on the security of the GSM network. I don't think your arguments here are very coherent.


I will try to simplify; if it doesn't make any more sense, I give up.

First argument: If your token generator or private keys are on a device connected to a network, they will eventually get compromised, given enough time and resources. The more attack vectors there are, the more likely a compromise is. Phones have more attack vectors than non-phone computers, so they are more likely to get compromised.

Second argument: Of the computing devices available today, phones tend to have at least as bad a security track record as modern non-phone computers. The average user can generally harden a non-phone computer's security* and pick more secure hardware, to improve their security position. But the user cannot generally do the same with a phone.**

Summary:

Non-phone computers are not only more secure due to having fewer attack vectors; their ability to be hardened further than a phone makes them even more trustworthy. If you had to pick a place to keep keys, a non-phone device is more secure than a phone.

* by extending existing os security features or using 3rd party software

* * or at least, I'm not aware of comparable methods for phones; there may be some software hardening tools out there for phones, but the hardware is much more limited


The attack surface may seem smaller for USB tokens, but storing the PIN for, say, a Yubikey allows malware to use the key without restriction. Also, these USB devices don't have a UI, so you never know what you are actually approving, i.e. which username or server you are logging into.


Erm, not true - changing the Yubikey setting to require a touch for key use (S/C/E) is trivial. Malware can't use it without restriction if it requires you to physically touch it every time you want to approve use.


Point taken. Unfortunately when you touch the key you still can't verify exactly what you are approving.


There's a GPG/SSH applet for the ledger Nano S now that has a tiny screen and buttons...


So, capture the auth and use it for the malware, show the user some failure and allow their retry to pass. Stupid dodgy Yubikey fails half the time.


Hey HN! We've built a way to generate an SSH key on your phone and use it from your computer such that the private key never leaves the phone. We were inspired by the threat model of USB HSMs like the Yubikey and set out to build a free, public source, and easier to use BYOD alternative. Looking forward to your questions!


I love the transparency of having your source on Github, but the license ambiguity isn't ideal when revealing this to the world:

  We are currently working on a new 
  license for Kryptonite. For now, 
  the code is released under All 
  Rights Reserved.
https://github.com/KryptCo/kr#license

Soon as I see that, I've got to close the tab, so does anyone who cares about IP.

(disclaimer: i'm a co-founder of ScaleFT)


Is proprietary code even allowed on GitHub?


You grant some rights to other users of GitHub, but I'm not seeing anything that prevents proprietary code:

> If you set your pages and repositories to be viewed publicly, you grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality. You may grant further rights if you adopt a license.

https://help.github.com/articles/github-terms-of-service/

To anyone more versed in US law than me, what usage specifically does "use, display and perform your Content" permit?


Yes


How's that ambiguous? The code is proprietary; all rights reserved.

Seems like they're planning to change that in the future, but in the meantime the existing license seems pretty clear to me.


For one of the main repos:

https://github.com/KryptCo/kr

57% Go

40% Rust

First time I've seen them paired together in the wild like this, very cool!


Thanks! We love Go for its SSH support and Rust for its dylib / crypto libraries.


https://news.ycombinator.com/item?id=14121780 on Go SSH security.

And what is the use of this when it runs on a platform that is an active target for every intelligence agency in the world, with irregular updates, regular security problems, and other problems like preinstalled spyware?

Nice idea, but only if the phone were a platform you could trust.


Those two languages lower the attack surface by a huge factor! :) What an amazing combination.


Some questions & comments:

* Can you have multiple private keys (site specific?)?

* Does the Android client support ed25519 keys or is it iOS only?

* Please modify your curl statement to be something like how Honeycomb does it (that is include a checksum test). https://honeycomb.io/docs/connect/nginx/
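For illustration, a checksum-gated install along the lines the parent suggests might look like this (the URL and the expected digest are placeholders, not Kryptonite's real ones):

```shell
# Hypothetical checksum-verified install; download first, verify the
# SHA-256 digest against a known-good value, and only then execute.
curl -fsSL https://example.com/install.sh -o install.sh
echo "EXPECTED_SHA256_DIGEST  install.sh" | sha256sum -c - && sh install.sh
```

This only helps if the expected digest is served over a channel the installer script itself can't tamper with (e.g. pinned in the docs page).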


1) We are planning to support multiple private keys, but currently only support a single key pair.

2) Android doesn't yet support ed25519 but it is coming.

3) We would consider something like this but the question is how that checksum test itself is served. We'll definitely look into improving this. The curl script itself does check the hashes of the installed binaries downloaded from github.


Multiple private keys are more important than you might think, because Github won't let you use the same key in more than one account, so everyone who has more than one Github account and wants hardware tokens has to have multiple hardware tokens.


As you seem to use multiple GitHub accounts with SSH, you might be interested by the ssh_config tricks that I implemented in github-keygen https://github.com/dolmen/github-keygen/


I didn't realize that, thanks for letting us know! We're actively working on this


The Yubikey (as in generate and store SSH keys on the Yubikey) is not exactly a hard to use option. It's not as nicely packaged, but quite functional.

EDIT: I'd love to talk about this, if someone has a differing opinion. Once set up, it was dirt simple to use.


I am of the same opinion. I added the library in my SSH config and now SSH will just use the yubikey, if it's plugged in. The only problem is that I haven't been able to find an agent supporting all of (gnome keyring for persistent passwords, ed25519 keys, the yubikey).


Any plans to support FreeBSD?


license :(

this exact thing is on my devs teams backlog. if you figure out a good license it would be AMAZING for us.


GPLv3 should be enough for an app; AGPL adds additional protection for server-side/self-hosted projects. https://softwareengineering.stackexchange.com/questions/1420...

The MuPDF app is available under AGPLv3+. https://itunes.apple.com/us/app/mupdf/id482941798?mt=8

You will catch some flak for discouraging free commercial re-use.

If you see your app or portions of it as more of a library for others to use then LGPL is an option that requires reciprocity. In an unusual case, a "lax" license was recommended for the reference Ogg/Vorbis audio codec implementation: https://lwn.net/2001/0301/a/rms-ov-license.php3


> AGPL adds additional restrictions for server-side/self-hosted projects

Fixed that for you.


Can't seem to create both an RSA and an Ed25519 key.


Correct, we currently only support one key, but multiple key support is in the works!


Any chance of a UWP/Windows Phone app in the future?


In terms of possible compromise, I rate the possibility that my phone is compromised way higher than my laptop. Adding a factor is a good idea in terms of security (not in terms of availability and ease of use, but definitely in security), but replacing it entirely... No.

Why'd I even want to remove id_rsa? What's the problem being solved here?


The problem is that your private key stored in ~/.ssh/id_rsa can be read by any user-level application. The private key is even vulnerable if you passphrase encrypt it. See our deep dive into the threat model: https://blog.krypt.co/why-store-an-ssh-key-with-kryptonite-9...

This is why we move it off the computer and onto a phone. The security is comparable to using a Yubikey. I'm not sure why you say your phone is less secure than your laptop. On the phone, apps are sandboxed and the private key never leaves the Kryptonite sandbox.


Neither Google nor Apple have root access on a Yubikey, nor does that key have some sort of wireless transmitter included which would allow for unnoticed data transfer to or from the key.

Furthermore, it is nowadays largely trivial to set up sandboxing within a single user (using SELinux, Apparmor or whatever else) or to use multiple users and classical privilege separation to achieve the same effect.

It is also telling that your "Threat Models" in the link above do not discuss attacks against the phone at all.

Edit to add: You currently also do not have the ability to use my keys. If I were to install the app (and set it to auto-update, as suggested so vigorously elsewhere), all it takes is a tiny little update by you, with no public oversight, to own every server I have access to. How is that possibly improving security?!


Thanks for the one sane post in this thread, as opposed to the iPhone is more secure than your Linux desktop FUD that tptacek seems to be spreading here.


From your blog:

  If you have a passphrase encrypted key, you can see this for yourself:

  $ eval `ssh-agent` # make sure an empty agent is running
  $ ssh user@server  # enter passphrase on first login
  $ ssh user@server  # passphrase no longer needed
This is wrong. Keys not added explicitly with `ssh-add` to ssh-agent will not be available unless you explicitly enabled AddKeysToAgent in ssh_config. [0]
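For reference, the option in question is a one-liner in the client config (a minimal sketch):

```
# ~/.ssh/config -- without this, a plain `ssh` login does not hand the
# decrypted key to a running ssh-agent; only `ssh-add` does.
Host *
    AddKeysToAgent yes
```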

Forgetting something like enabled config options is nothing uncommon for a user, but it doesn't exactly speak well for a company making an ssh-agent alternative as a product. Also I tend to agree with the poster adjacent to me which emphasized the inadequate threat model analysis in your blog post, including entirely ignoring or failing to address critical points.

[0] https://www.freebsd.org/cgi/man.cgi?query=ssh_config&sektion...


Some distros do this automatically for you. I know macOS did until the recent major update.


I also share the opinion that the threat model is flawed and biased. You call it a "deep dive" but you didn't even scratch the surface of the issue.

> At the core, phone operating systems are built with better sandboxing than their desktop counterparts. This is why security experts like Matt Green recommend phones for your most sensitive data.

Having better sandboxing is not the same as having safe sandboxing. How secure is the application once an attacker is able to compromise the sandbox?

IMHO the rest of the threat model "deep dive" has no value once we take that attack scenario into account.

What about a non-dictionary-based 20-character password protecting your private key, or storing your SSH key on an OpenPGP smartcard in a USB token, a Yubikey, or a Nitrokey (www.nitrokey.com)?

I believe that would be much more secure than an application whose security model is based on sandboxing alone.


My phone has been on so many more networks, does not get properly updated, and is just a personal device that I use for lots of stuff and trying out lots of apps. My laptop consists of 98% software from repositories, has disk encryption, is updated very frequently, and has perhaps 2 or 3 applications from outside of the repos.

A phone is much more prone to theft and I don't password protect it because if I'd have to input a 10+ character random password every time... I might as well not have a smartphone but just pull out my laptop every time. Since I carry it around all the time, there is little opportunity for unauthorized access...until it gets stolen. And then I don't want to lose all access to my infrastructure.


Unrooted Android and iOS are much more secure than Windows or common UNIX variants. Mac OS X has similar sandboxing, but few people only use sandboxed apps. I guess the same can be true for Windows with store apps


I don't think OS X's sandboxing has seen nearly as much scrutiny as its iOS/Android counterparts. An OS X sandbox escape buys you barely anything since the vast majority of apps don't come from the Mac App Store and don't bother enabling it. I wouldn't put a whole lot of faith in it.


Vast majority? Well, for the Hacker News audience, sure :) But the App Store is popular enough for Apple to require a checkbox in the settings for installing outside apps, like on Android.


You're confusing the App Store and Gatekeeper. If you register for Apple's developer program they'll issue you a certificate that you can use to sign and distribute your applications outside the App Store, with no input or restrictions from Apple, while bypassing that checkbox. These applications are the majority that aren't required to be sandboxed, and rarely are.


IIRC (haven't used modern Macs in a while) Gatekeeper has three modes — allow App Store only, allow App Store + signed outside, allow all (unsigned) apps. Didn't they switch to the first one by default??


The most recent OSX version (Sierra) made the change of hiding the option that totally disabled Gatekeeper, but the default hasn't changed. I have a Sierra VM I set up about a month ago, and I just checked the setting and it's at "App Store and identified developers".


The faq says there is intentionally no way to extract the private key due to security. But this means I need a second account in case my phone gets lost - the key of which I once again need to secure.

How is that more secure than letting me backup the private key in the first place?


The current version is designed for hosted services like GitHub, redeployable infrastructure, and servers to which multiple people have access. We totally understand your use case and are actively working on implementing transferring a key to another device or printing out a paper backup.


That makes sense. Looking forward to the development of your product then!


Not everything needs a key backup. If you create a backup key, you can create it on another device (ideally the one it's going to be used from) and don't have to move it off the phone in a way where it could be exposed. (As much as one would think people using stuff like this should know better, I wouldn't be surprised if at least some got the backup off their phone by e-mailing it to themselves or something along those lines.)


I haven't reviewed the implementation, but this is a really good idea. I want one.


I will likely end up using this in personal capacity, and would also appreciate if the UX of using Google Authenticator were more similar to this, rather than requiring me to screenscrape my phone with my eyeball and then type information into another device (or, more painfully, another window on the same phone).

The easiest way to do that probably results in a callback to Big Daddy G every time I access anything sensitive and I'm cool with that.


The Google App can do this on iOS, rather than Google Authenticator, for some reason.

https://support.google.com/accounts/answer/6361026?hl=en


With Apple's devices, you can copy-paste your 2nd factor token between devices, which makes things much more convenient. Although cloud copy-paste is still significantly slower than tapping "Allow" in a notification, it's lower friction than manually typing those numbers.

Although the security of cloud copy-paste I haven't investigated...


I've not checked how it works, but I don't think Universal Clipboard pushes the data that's being copy-pasted via the cloud. I've certainly used it to copy Wifi access codes from my phone to laptop when neither device had internet access, so I think it's implemented using peer-to-peer Bluetooth/Wifi.


> The easiest way to do that probably results in a callback to Big Daddy G every time I access anything sensitive and I'm cool with that.

Why?!


Because they have one of the most competent security teams in the world and enough money to mount legal defenses against governmental adversaries.

For many many threat models this pairing is a more competent defense than just about any thing a single person could come up with.


Although I'm a bit skeptical of its security, you might want to check out Authy.


Clever that it uses the client instead of the server! I've dabbled with phone-based authentication via a server-side PAM module before, but you generally don't have full control over the servers you SSH into.

Although the question remains: is this more secure than just storing your key on your computer? If you're assuming your machine to be compromised, then as soon as you login to another server you've basically given your attacker potential access there as well.


You've hit the nail on the head. If your computer's ssh binary can be compromised so can krd.

If the main objective is to prevent other apps in user space from reading unlocked private keys, why not just ssh/sudo into a secondary account where the default shell is set to an ssh client?


The point is, even when krd is compromised, the malicious party cannot gain access to your private key. They key is only stored on your phone and you have to physically confirm the login from your phone.


Is the private key still worth something if the attacker has access to the server?


Yes, because the key is still private. Any other machine to which you can log in with that private key is still off limits.


Ah, I was under the assumption that it is standard practice to have a different key for each machine that you log into.


I think there is no real standard practice or consensus regarding SSH keys and where you need different keys. Some people use one key for each SSH client machine (the machine logged in from), some one key for each SSH server, some use a Yubikey to store a single SSH key, etc.

Personally I think one key per client is a good approach when not using a hardware security module (e.g. a Yubikey), as the public key then identifies a unique client machine (e.g. your work laptop). This would help identify which client was breached in the event of an attack. I do think, however, that I would prefer the Kryptonite solution or a Yubikey going forward.
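A sketch of the one-key-per-client approach (the file name and the `-C` label are just examples; the empty passphrase is for brevity only):

```shell
# Generate a distinct ed25519 key for this client machine. The -C comment
# ends up in the server's authorized_keys, so a breach can be traced back
# to one specific client. Use a real passphrase in practice.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -C "work-laptop" -f ~/.ssh/id_ed25519_work -N ""

# Then pin the key to hosts in ~/.ssh/config:
#   Host myserver
#       IdentityFile ~/.ssh/id_ed25519_work
```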


Hi, that looks good! I'll probably try it out soon.

One suggestion regarding PCI DSS: you should probably make a page/whitepaper that outlines the compliance story of Kryptonite. ScaleFT has a great one: https://www.scaleft.com/use-cases/pci-dss/. By the way, have you checked that you do not need to be compliant yourself?


Theoretically this solution could be classified as multi-factor authentication, which covers PCI DSS Requirement 8.2. See also https://www.pcisecuritystandards.org/pdfs/Multi-Factor-Authe...

Beyond that, there isn't much else this does regarding PCI. SSH does the rest.


Looks interesting. However, it seems to maybe assume you're using bash? Using the fish-shell it seems to have simply broken ssh and git operations with incorrect syntax. I had to run `kr uninstall` to get things back to normal (it fails when it's running fish or if I drop into bash so it's somehow looking at my default shell.)

I submitted a ticket on the repo.


Thanks, will fix this asap. We have tested on bash, zsh, and fish on macOS but it seems we missed an edge case. I'll follow up with your ticket.


came here to make note of this as well. thanks for the prompt fix, i'll check it out again in a couple of days :)


fish shell should be working now, let us know if you have any other issues!


It's working great in my fish shell now. Thanks for the speedy update!


great to hear, thanks!


works perfect -- this is really slick!


Looks like you're using pkcs11 instead of inventing your own stuff, so kudos for that at least.

But I wish people would be aware of smartcards more, they are all around us, but sort of invisible and unnoticed.

1. Buy cheap blank "Java" smartcards, more or less disposable

2. Install this applet on it https://github.com/philipWendland/IsoApplet

3. Works with OpenSC


Is it possible to use a chip/EMV credit card as an X.509 certificate? Let the credit card company know your private key (paranoid assumption; not necessarily true) & skip straight to step 3!


Look, I'm not an expert, I just dabble a bit. In theory there's no need for anyone to know your private key, it is generated on the card and kept there, unextractable. As I understand it there's nothing stopping credit card companies from allowing you generate your own keys on it (on a technical side that is), it just wasn't done AFAIK.


I have a smart card so I have the reader, but when I put in my credit card it doesn't even appear as though it can read it. I would love to use my "always-with-me" credit card for home PC sign-on and whatever else but there's nothing out there on the integration. Any pointers would be appreciated!


To read a bit of info about your credit card you can use https://github.com/martinpaljak/GlobalPlatformPro; it will output something like:

  Card CPLC:
  ICFabricator: 4790
  ICType: 5049
  OperatingSystemID: 8241
  OperatingSystemReleaseDate: 2218
  OperatingSystemReleaseLevel: 1520
  ICFabricationDate: 3086
  ICSerialNumber: 06575696
  ICBatchIdentifier: 6664
  ICModuleFabricator: 4810
  ICModulePackagingDate: 3086
  ICCManufacturer: 1180
  ICEmbeddingDate: 3086
  etc.

I guess it's enough information to concoct some kind of 2-factor auth, but what is stopping you from promoting your real smart card into "always-with-me"? Or one of smartcards, since you can have many.

NFC-capable phones can act as a card reader for contactless smartcards AFAIK, so that's something you can look into also.


I don't spend with my smart card, so it's not "always-with-me".

Thanks much for the pointer!


> The private key is stored on your phone

i'm reminded of theo de raadt's answer to a slashdot question back in the day about making a bootable openbsd firewall on a floppy. his response was along the lines of, "firewalls are supposed to be among the most reliable things. floppy drives are among the least reliable things."


Your FAQ says you cannot backup your private key. So does that mean if your service gets attacked by DDoS or has unexpected downtime, you will not be able to SSH into your server?


Kryptonite works over bluetooth too, so even if AWS SQS is down, you'll still be able to use your private key


Obvious question - What happens when the phone containing the private key is lost?



First make sure you remove the old SSH public key from any of your accounts. Once you have Kryptonite installed on your new phone, add the new public key to the accounts you were using SSH with before.


Sounds like you ultimately need a backup method for logging into your server -- probably a second, non-Kryptonite key (or another admin user). Is that correct?


That is correct. We will also soon release a way to copy a key from one device to another by scanning a QR code.


I have read through the FAQ and many of the comments on the thread. They seem rational to me. I have mixed feelings about this, mostly around the emotional inertia for changing something.

On the other hand, I can see an immediate use case for this for me. I use mosh to log into my cloud dev environment, and since mosh doesn't support ssh-agent forwarding (and is unlikely ever to support it), this seems like a much better alternative.


Why are you doing this instead of SHA[N]withRSA? https://github.com/KryptCo/kryptonite-android/blob/master/ap...


We do the hashing ourselves so that it's easy to use any hash function in the future. If you create a Keystore key but decide to use a hash function you didn't specify at generation time, it will be rejected by the API.


I don't know if you've done it correctly.

You can use the built-in signature digest support and still add support for whatever you want in the future.


How does this handle SSH session re-keying, does that need further authentication from the device? openssh does this pretty infrequently, I can't immediately remember if that needs participation with the asymmetric key or not.

ED: Seems it's just as if you re-did the cipher negotiation, so no asymmetric interaction.


Seems pretty cool - I've just installed it and having a play with it. A couple of questions:

1. So I have to update all my servers to use my Kryptonite SSH key from the current Private Keys that I have?

2. This solution still doesn't allow me to SSH into my servers from another machine that doesn't have my private keys on it (such as a colleague's Mac), does it?


1. You have to upload your Kryptonite public key to ~/.ssh/authorized_keys on all the servers you want to access with your Kryptonite key. Take a look at the unlisted command `kr add` to help with this: `kr add user@server` automatically adds your Kryptonite public key to the authorized_keys file for account `user` on `server`.

2. It does, actually. All you need to do is pair Kryptonite with your colleague's Mac by running `kr pair` on their machine. Kryptonite can be paired with unlimited computers, such as your work and home machines, and you'll be able to SSH to all your servers using the Kryptonite key.
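The manual equivalent of what `kr add` automates is roughly the following (a sketch; `user@server` is a placeholder and the exact behavior of `kr add` may differ):

```shell
# Append the Kryptonite public key to the remote authorized_keys by hand.
# `kr me` prints the public key; the command after the pipe runs remotely.
kr me | ssh user@server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```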


Ah! Great - thank you. Overall, I am amazed at the simplicity of managing keys using this platform.


Well as of now it apparently manages one key.


It'd would be nice if Kryptonite supported ssh clients on mobile devices too.

i.e. On iOS there is the opensource Blink client.

Not sure how the protocol would change, but it'd be nice if Kryptonite could store the keys in one place focused on securely storing the keys and then ssh clients can use them as needed. (Also for things like an iPad using a key on a phone)


This looks interesting but until Ed25519 on Android, importing existing keys and Paper Backups are supported it's a no-go for me.

For some things I trust a paper backup in a fireproof safe over some nebulous cloud thingy on my phone.

At least everything is open source; I'll favorite it and check back some time in the future.


Would iTunes Sync or iCloud Backup include app data for Kryptonite (possibly including the private key)?


Assuming Kryptonite does back up data to iCloud, there are two things to note:

1. iCloud backups are encrypted.

2. If you're not comfortable with #1, you can manually exclude an individual app from iCloud backups through Settings > Storage & iCloud Usage > Manage Storage (in the iCloud section). Tap your device in the Backups section, then turn off Kryptonite in the "Choose Data to Back Up" section.


Are the communications between the phone and the computer going via the kryptonite servers?


No -- we treat every communication channel as untrusted. All communication between the phone and computer is encrypted with session keys established when you pair by scanning the QR code in the terminal. Check out our architecture post for more details: https://blog.krypt.co/the-kryptonite-architecture-a385e7aaa3...
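As a toy illustration only (this is NOT Kryptonite's actual protocol, and all names are made up): the workstation can generate a fresh secret, display it in the QR code, and both sides can then authenticate traffic carried over the untrusted relay with it.

```shell
# Toy pairing sketch. The workstation generates a fresh 256-bit secret
# (this is what the QR code would carry); after the phone scans it, both
# sides can MAC messages sent over the untrusted channel. A real design
# would also encrypt (e.g. an authenticated-encryption construction).
secret=$(openssl rand -hex 32)
mac() { printf '%s' "$1" | openssl dgst -sha256 -hmac "$secret" -r | cut -d' ' -f1; }

tag=$(mac "sign-request")                 # computer tags an outgoing request
[ "$tag" = "$(mac 'sign-request')" ] && echo "tag verifies"   # phone side
```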


If I'm reading this, the answer is actually ~Yes? The requests pass via SQS/SNS run by Kryptonite, or via Bluetooth not run by kryptonite?


Indeed, (encrypted) requests pass through SQS/SNS with credentials owned by us. We can see the amount of traffic, but not any of its contents or who sent it.


Is there an option to configure this to run across only BT?

This would mean that the phone needs to be physically close in order to complete auth, and I would think that's a good requirement based on the premise of this app.


Follow-up: Since it seems all the code is open-source, is it possible for me to run this service on my own server (or at least in my own AWS setup)?


The lack of a license leaves this very unclear.


GitHub's user agreement allows for the free use of any code posted in a public repo.


No, it doesn't. Standard copyright applies if there is no licence.

GitHub user agreement allows other users to view and make copies of your content on github but not for "free use" in general.

https://help.github.com/articles/github-terms-of-service/#5-...


I love the concept, but I think the added complexity is hard to trust, at least for now.


Why do I want my private key on my phone instead of the computer where I am using it?


Any user-level application on your computer can read the SSH key -- you'll never know if it's used or sent off somewhere. Even passphrase-encrypted keys are vulnerable. Check out this blog post for a deep dive on our threat model and why you should store your SSH key on your phone: https://blog.krypt.co/why-store-an-ssh-key-with-kryptonite-9...
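To make the claim concrete (assuming a default key at the usual path):

```shell
# Mode 600 on ~/.ssh/id_rsa stops *other* users -- but any process
# running as you (an editor plugin, a game, a curl|sh install) can
# read it with no prompt and no notification:
ls -l ~/.ssh/id_rsa       # -rw------- you you ...
head -n 1 ~/.ssh/id_rsa
```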


Well the phone is at least equally insecure, given it's often way more opaque than a computer.


No, the opposite is true. See upthread.


I'd like to know this too. My computer is patched daily, my phone less often.


Respectfully, this is easy: get a better phone and turn autoupdate on.


Also respectfully, he stated patched daily. Generally, security updates for both iOS and Android devices happen in a monthly roll-up.

If your checklist for being "secure" involves being up-to-date, there is no amount of 'getting a better phone' that will put it on par with a desktop OS that receives daily patches.


This is simply not true. No matter how often you patch your Linux desktop, it is less secure than an iPhone.


Conflating secure with up-to-date is wrong in such a broad and sweeping statement such as 'your Linux desktop is always less secure than an iPhone'.

Updates software does not instantly mean more secure, which I am sure you are aware of.

As you stated, for the generalists, yes, an out-of-the-box iPhone has more hardening than an OOTB Linux desktop, but for the type of person on HN, this is not necessarily true at all. There are too many variables.


I disagree, but that's fine.


I guess it's like 2-factor authentication - I can let a friend use my computer & not worry about them SSHing in to my server. Or I could just create a 2nd account on the computer instead of using Kryptonite (easier in my opinion).

I also have to wonder what kinds of backdoor access the NSA has to my phone.


Even if your computer is 100% compromised, SSH can't be used without your explicit permission and knowledge.


I think you're saying that if you use this app, and remove other ssh keys from your computer, then "SSH can't be used without your explicit permission and knowledge.".

I agree with another commentator, I trust my PC more than I trust my phone. The latter is probably already compromised.


Set this up earlier for GitHub and a few servers. It's very convenient, and I like having the kr commands for adding the public key wherever. As others commented, the licensing is important to get worked out.


What, if any, integration points do you have with Keybase.io? One of the things that I think Keybase got right is the sharing of public key information and it would be awesome if you guys could work together.


We'd love to do an integration with keybase.io: i.e. add your Kryptonite SSH public key to your Keybase profile.


Does this allow me to ssh into my server, for example, a shell server on the internet?

If so, how does the server contact my phone? Through your server, right?

What software do I install on the server for that?


I found the answer on your blog:

    > Our system consists of three components:
    >   (1) the Kryptonite phone app for iOS and Android,
    >   (2) the krd daemon that runs in the background on a macOS or Linux computer, and
    >   (3) the kr command line utility that manages krd.
...from https://blog.krypt.co/the-kryptonite-architecture-a385e7aaa3...

Sounds like `krd` is why I likely won't be using this.

Try implementing it as a PAM module or something.

[edit: formatting]


Ok, `krd` is an alternative `ssh-agent`, I see.

So my suggestion re: PAM is irrelevant because you aren't changing the server, you're changing the client.

Ok, I'm interested ...maybe... I'll wait until people more familiar with ssh-agent chime in. :)


An SSH agent is the correct way to provide access to private SSH keys, just like OpenSSH's ssh-agent does. I could be wrong but I'm not aware of any way to implement this as a PAM module, unless it was something that was installed on all the servers you SSH'd into, which would be super annoying to setup.
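For context, the standard agent flow that krd replaces looks like this (the key path is an example):

```shell
# An agent holds private keys and answers signing requests over a unix
# socket; ssh locates it through the SSH_AUTH_SOCK environment variable.
eval "$(ssh-agent -s)"         # start an agent, export SSH_AUTH_SOCK
ssh-add ~/.ssh/id_ed25519      # hand it a key (prompts for the passphrase)
ssh-add -l                     # list the fingerprints the agent now holds
```

A replacement agent like krd simply serves the same socket protocol but forwards signing requests to the phone instead of holding keys locally.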


Kryptonite allows you to use SSH on your computer, but authenticate with a private key protected on your phone. It doesn't require any changes to the server, just the Kryptonite app and our workstation software.


Seems quite similar to using a Yubikey to house your SSH key, just with bluetooth and your phone. A bit less secure, but still quite the interesting tool.


They certainly seem to have a marketable product for easy ssh key management. They'll make buckets of money on this I bet.


Great stuff! Could there be an option to disable the notification for certain "less security critical remotes"?


Looks cool, but I don't believe it's mature enough at the moment for me to use it.

Will be keeping an eye on it though.


Is one of these repositories the server?


That looks pretty cool. You do have to make sure to back up your phone.


Nice idea - played a bit. Any plans to add Linode.com ?


Can this work with Userify to distribute the public key?


Yep! Just type 'kr me' to print your public key, or 'kr copy' to copy it right to your clipboard.


I don't have root access on my phone, and Samsung only releases patches if their phones catch on fire. No thanks. Good idea though, but flawed execution.


That sounds more like a flawed phone.


... Exactly my point?


There are better phones out there, and nobody is stopping you from using them. You can't say this app/service is flawed just because some people might use a flawed phone.


Very cool!


[flagged]


Please be civil.

We detached this comment from https://news.ycombinator.com/item?id=14241907 and marked it off-topic.


So the new way to secure your private keys is in a location you DON'T control and validation by text message? It's May 1st not April 1st.


I think you may have misunderstood our architecture. SMS doesn't play any role in Kryptonite and all communication between the phone and computer is encrypted and authenticated. Check out a full explanation of the architecture here: https://blog.krypt.co/the-kryptonite-architecture-a385e7aaa3...



