Show HN: SeKey: An SSH Agent for OS X, Using Secure Enclave and TouchID, in Rust (github.com)
169 points by ntrippar 7 months ago | 49 comments



See also the really excellent Kryptonite, which I found linked here a few months ago and have been positively delighted with:

https://krypt.co/

I'm not a security expert, and I have a lot more trust in my iPhone for managing secrets than in my personally configured Linux PCs.


I have a slightly different point of view:

On Linux, I can easily monitor how much and what kind of data is transferred to which remote IPs, and disable it on a per-app basis anytime I want.
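(For concreteness, one common way to get the blocking half of this on Linux is netfilter's owner match: run each network-facing app under its own user, then drop that user's outbound traffic. A sketch in iptables-restore format; the user name is a placeholder, and a cgroup match would be the per-app alternative on newer kernels:)

```
*filter
# Reject all outbound traffic from processes running as user "untrusted-app"
-A OUTPUT -m owner --uid-owner untrusted-app -j REJECT
COMMIT
```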

In IOS, I can't do any of that. At most, I can disable an app from using cell data. If anyone knows how to monitor/block network connections on a per-app basis in IOS, I would love to hear about it.

For me, it is the difference between "trust" and "trust and verify".

I use IOS and have some level of trust in Apple/IOS. But I don't trust the majority of IOS apps for security/privacy.


Random iOS apps can't just pull all your secrets out of SE and send them off somewhere. That's what makes recent iOS devices a better place to put a secret than your typical Linux box. 'Not being able to get a hold of your secrets' is a much better line of defense than 'I think I can make it hard for something to exfiltrate my secrets that they've pilfered'. Especially since the latter is probably not really true.


>'Not being able to get a hold of your secrets' is a much better line of defense than 'I think I can make it hard for something to exfiltrate my secrets that they've pilfered'. Especially since the latter is probably not really true.

reminds me of this:

> Air-gapped networks are isolated, separated both logically and physically from public networks. Although the feasibility of invading such systems has been demonstrated in recent years, exfiltration of data from air-gapped networks is still a challenging task. In this paper we present GSMem, a malware that can exfiltrate data through an air-gap over cellular frequencies. Rogue software on an infected target computer modulates and transmits electromagnetic signals at cellular frequencies by invoking specific memory-related instructions and utilizing the multi-channel memory architecture to amplify the transmission. Furthermore, we show that the transmitted signals can be received and demodulated by a rootkit placed in the baseband firmware of a nearby cellular phone. We present crucial design issues such as signal generation and reception, data modulation, and transmission detection. We implement a prototype of GSMem consisting of a transmitter and a receiver and evaluate its performance and limitations. Our current results demonstrate its efficacy and feasibility, achieving an effective transmission distance of 1 - 5.5 meters with a standard mobile phone. When using a dedicated, yet affordable hardware receiver, the effective distance reached over 30 meters.

https://www.usenix.org/system/files/conference/usenixsecurit...


    Virtual Keyboard Developer Leaked 31M Client Records (mackeepersecurity.com)

    Apple is sharing your facial wireframe with apps (washingtonpost.com)


None of these are secrets stored in SE, which is what this entire thread is about.


FYI, IOS is the operating system for routers by Cisco.

iOS is the operating system for mobile devices by Apple.


The funny part is the parent's comment applies perfectly well to either.


Linking to my comment downthread in case you're using email notifications:

https://news.ycombinator.com/item?id=15857589

It should be possible to write an app that does what you want; I'm not sure whether or not such apps already exist.


  If anyone else know how to do monitor/block network connection on per APP base in IOS, I would love to hear about it.

I've been seeking a Little Snitch-style connection monitor/firewall for iOS for some time. I believe Apple purposely makes this difficult/impossible on iPhones due to the constant metrics/analytics sent to Apple servers.

Look at what they have done with turning off Bluetooth and WiFi from the Control Center. It's really eye-opening to see how often Apple products ping home.


> I believe Apple purposely makes this difficult/impossible on iPhones due to the constant metrics/analytics sent to Apple servers.

I don't think that follows. Allowing applications to intercept and mess with other applications' network traffic would be an obvious security issue; that's far more likely to be the reason than some kind of vague "we want to track your data" thing.

It's certainly possible that Apple could construct an appropriate API for allowing users to configure apps in such a fashion that they could monitor network traffic, much the same way as similar APIs exist for accessing e.g. photos. But since it's a niche application at best, I'm hardly surprised they haven't done so.


Such an API already exists: the NetworkExtension APIs, specifically NEPacketTunnelProvider and NEAppProxyProvider. They're meant for VPNs, but it should be possible to use the same APIs to monitor traffic and send it on, rather than tunneling it through a VPN. (There is also NEFilterProvider, but that API is designed to run the filter in a tight sandbox that doesn't allow it any outbound communication, so that it can filter packets but not exfiltrate them.)

…though if you're concerned about privacy, perhaps you should be using a real VPN anyway, in which case you could handle traffic monitoring and filtering on the server side.

Edit: I guess the server-side approach wouldn't allow identifying which app is making the connection. The NetworkExtension APIs, however, should allow that: you get a flow of NEPackets, each of which has a 'metadata' property containing a 'sourceAppUniqueIdentifier' and 'sourceAppSigningIdentifier'. I don't have personal experience using these APIs though.


  Allowing applications to intercept and mess with other applications' network traffic

Look at your cellular data usage in "system services". Apple is already intercepting and sending home a lot of your information, without your ability to stop it.


I block all that via firewall, authoritative DNS and gateway on home network. It works very well. Trying to accomplish this via an app on an un-rooted iPhone would be a headache, IMO, and ultimately is subject to defeat by Apple if they so choose. As seems to be the computing paradigm du jour, Apple more or less has "remote control" over these devices, whether through automatic updates, their control over an AppStore or some other mechanism.
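(A minimal sketch of the DNS half of this setup with dnsmasq. The domains are placeholders for illustration, not a real blocklist; `address=/domain/0.0.0.0` answers 0.0.0.0 for the domain and everything under it:)

```
# /etc/dnsmasq.d/blocklist.conf
# Sinkhole every name under the blocked domains (placeholder domains)
address=/telemetry.example.com/0.0.0.0
address=/metrics.example.net/0.0.0.0
```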

When someone buys an iPhone, there is an expectation that an ongoing relationship with the company is created. It is assumed every purchaser wants to use Apple's time servers, Apple's messaging service, Apple's cloud storage, Apple's software review process, etc., and there is no opt-out. Consequently the purchaser is expected to establish a means to identify themselves to the company (AppleID) in the future. Fingerprints may be collected, facial recognition, etc. Apple has the means to know its hardware customers very well. It does not really feel like we own the hardware. More like a lease or rental. Feels like we are being used as a source of further revenue generation. A massive user base tethered to the company that it can use as a bargaining chip to make deals with other companies.

Here is a different approach. Imagine you have two mobile devices. 1. An iPhone. 2. A portable computer running an open source OS that can act as firewall, authoritative DNS server and/or gateway. Apple has no control over #2. #1 can only access the internet through #2. #2 belongs solely to the user and it is controlled by the user, not any company. Perhaps one day we will see Apple controlling the user's routing table and any network settings entered by the user will be subservient to Apple's.


If you use VoLTE or VoWiFi, your network settings are ignored for those; there are tunnels back to the carrier for voice/RCS.


As you say, they are already controlling the routing table on the user's device for voice calls. Does the user opt-in to that or is it automatically turned on?


Is there a way to turn off cellular data for "system services" in iOS?


Not sure about that; but with past iPhones, one could remove the SIM card and still use the device as a portable WiFi-enabled computer.


> Look at what they have done with turning off Bluetooth and WiFi from the Control Center. It's really eye-opening to see how often Apple products ping home.

Are you trying to imply that iOS devices ‘ping home’ when you disconnect from WiFi or Bluetooth? Or are you just complaining about the previous behavior (updated in 11.2 to be more obvious) that disconnected WiFi instead of disabling it?


A YubiKey can hold your private SSH key, and the key can't be extracted from it.


Very, very cool.

To use a public key with something like Userify (https://userify.com, plug ssh key management) or a service like Github, use --export-key to export the public key in OpenSSH format:

    sekey --export-key <key-id>
(See also in the readme that you can use sekey --list-keys to get key IDs.)

This is seriously such an awesome project that I might have to get a new MBP just for this.


For TPMs on non-Apple computers: https://github.com/ThomasHabets/simple-tpm-pk11

But with TPM there's no external unlock mechanism like Touch ID, the TPM unlock happens from the operating system.


This is a really cool project. I just tried to compile it myself, and code-signed it with my mac dev key. I changed the assets/sekey.entitlements file by replacing 5E8NNEEMLP with my own hex id for my developer key.

However, when I run ./bundle/Sekey.App/.../sekey, I keep getting a "Killed 9" message.

When I run the unsigned version, the binary at least runs (shows the -h message). Any hints on how to fix this?


That's because the code signing is invalid. Are you signing the whole app, or only the binary? It could be many reasons; one is the lack of a provisioning profile on your computer (for your key).


This looks useful. I'd love to use this with gopass (https://github.com/justwatchcom/gopass). It would have to support preexisting keys for that to work for me.

I wonder if the limitation around elliptic curve keys originates with the Secure Enclave, or is that just the one type of key this tool supports?


I would love to see this usable as a U2F device too.


Very cool project. I’ve done similar things with yubikeys before, having it built into the hardware is great.

Can it also support a pin to go with the biometric auth?


As far as I read in the documentation, no. I will do more research on the implementation. Also, your device has to be unlocked for the enclave to work.

https://developer.apple.com/documentation/security/ksecattra...


> Can’t import preexisting key

That makes me sad :( Does anyone know if that's a SE limitation, or the app's?


It's an SE limitation :(. It also makes me sad; it would be awesome if you could import a key you generated yourself, so if you reinstall you can import it into the enclave again.


It's common for hardware security tokens like this to limit themselves to self-generated private keys. The intent is that the device provides a guarantee that the private key is not otherwise accessible, whereas it could be if the user generated it elsewhere, perhaps wrote it to disk, and provided it to the SE, and perhaps to SEs on other devices.


YubiKeys allow you to load your own private key, which turned out to be a great feature. [0]

[0]: https://www.yubico.com/support/security-advisories/ysa-2017-...


I was thinking more in terms of: I trust openssl to generate a non-broken key. (Yeah, even with all its faults...) But if the SE turns out to generate bad keys, like the recent high-profile issue, I can't use a better one instead.


You can wrap your private keys with the SE key, though! The same way you might wrap your SSH keys with a password.
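(A toy sketch of that wrapping idea in Python. This is illustration-only crypto, not something to use for real: the SHA-256 counter-mode keystream stands in for a proper AEAD like AES-GCM, and `device_key` stands in for a key that never leaves the enclave — with a real SE you would call into it rather than hold the key in memory:)

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a toy stream cipher (illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def wrap(device_key: bytes, private_key: bytes) -> bytes:
    # "Wrapping": encrypt the portable private key under the non-exportable one.
    nonce = secrets.token_bytes(16)
    ks = keystream(device_key, nonce, len(private_key))
    return nonce + bytes(a ^ b for a, b in zip(private_key, ks))

def unwrap(device_key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(device_key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))

device_key = secrets.token_bytes(32)  # stands in for the SE-resident key
ssh_key = b"-----BEGIN OPENSSH PRIVATE KEY----- ..."
blob = wrap(device_key, ssh_key)
assert unwrap(device_key, blob) == ssh_key
```

The wrapped blob is safe to back up anywhere; only a device holding `device_key` (i.e., the enclave) can recover the SSH key.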


Is there a way to back up your keys? If not it could be a problem to lose your laptop.


This shouldn't be a problem. Backing up a securely stored key is not a great solution. Instead, you can generate a backup key that you also provision everywhere (the public part) but store completely offline.

Basically treat this the same as you would a physical 2fa token.
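(Concretely, the server-side ~/.ssh/authorized_keys would just carry both public keys; the backup key's private half lives offline. Key bodies and comments below are placeholders:)

```
# Day-to-day key; private half lives in the Secure Enclave
ecdsa-sha2-nistp256 AAAA... sekey@laptop
# Backup key; private half stored offline (e.g. on a USB stick in a safe)
ssh-ed25519 AAAA... backup@offline
```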


I'm thinking more about whether this could be used to store cryptocurrency keys, in place of a hardware wallet. It's a different elliptic curve (according to another HN commenter recently) but that's not necessarily a deal killer forever.

Without an export it could maybe be one key in a multisig.


I may be talking shit, but from what I understand of SE, it wouldn't make sense to lose the data if you reinstall macOS.

I tried to look up the info, but the only thing I found was this: "But because its backing storage is physically part of the Secure Enclave, you can never inspect the key’s data."

https://developer.apple.com/documentation/security/certifica...

That means that it gets stored in the SE instead of on your computer's hard drive. Also, Apple has instructions to clear the Secure Enclave if you're going to sell your MacBook Pro with Touch ID.


It's a design detail. You can't have a "secure enclave" if it accepts external private keys.

That's also why it only generates one kind of key. It's a black box that spits out public keys.


Secure enclave is just Apple's name for a secure cryptoprocessor implementation. Some other implementations are happy to accept your private key generated outside of the system.


Awesome work, this is a really cool idea and well executed.


I'm amazed there's a market for this kind of thing. Since the 1990s it's been known how to implement an undetectable backdoor in hardware-based crypto which leaks your keys to an attacker.

E.g. http://paper.ijcsns.org/07_book/201006/20100623.pdf details how to do it for elliptic curves. But it's been studied since https://link.springer.com/content/pdf/10.1007%2F3-540-69053-...

The tl;dr is that any device whose implementation you can't inspect and whose private key you can't extract (i.e. Secure Enclave, TPM, etc.) can leak your key to a passive attacker in a way that you provably can't detect.
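(The linked papers describe genuinely covert channels; a much simpler cousin still illustrates how opaque signing hardware can leak a key through its nonces: if a device ever reuses an ECDSA nonce, two signatures suffice to recover the private key. A toy of that algebra in Python, using bare modular arithmetic with made-up constants — real ECDSA derives r from the curve point k*G, but the key-recovery math only needs n, r, the hashes, and the signatures:)

```python
# ECDSA signing equation: s = k^-1 * (h + r*x) mod n
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 group order
x = 0x1234567890ABCDEF  # the "secret" signing key (toy value)
k = 0x0F0E0D0C0B0A0908  # a nonce the device (maliciously or buggily) reuses
r = 0xCAFEBABE          # toy stand-in; real ECDSA computes r from k*G

def sign(h: int) -> int:
    return (pow(k, -1, n) * (h + r * x)) % n

h1, h2 = 111, 222
s1, s2 = sign(h1), sign(h2)

# Attacker sees (h1, r, s1) and (h2, r, s2) with the same r:
# s1 - s2 = k^-1 * (h1 - h2), so k = (h1 - h2) / (s1 - s2)
k_rec = ((h1 - h2) * pow(s1 - s2, -1, n)) % n
# s1 * k = h1 + r*x, so x = (s1*k - h1) / r
x_rec = ((s1 * k_rec - h1) * pow(r, -1, n)) % n
assert x_rec == x
```

The point being: with a black-box signer you can't audit the nonce generation at all, and subtler variants (biased or covertly structured nonces) leak the key slowly without ever producing an obviously repeated r.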


On the flip side, you can also demonstrably prove that private keys stored on disk can be exposed by a minimal intrusion into your user account, also with no indication that they were leaked.

Another layer of security is never bad.


Additional layers of security can be bad when they reduce usability, reduce accessibility/availability, or increase complexity too much.

As usual it's trade-offs all the way down.


Because this research exists, there can't possibly be a market for a security product whose threat model might not include a malicious hardware manufacturer?


Admittedly it's less of a risk with your SSH keys than with, say, your bitcoin wallet. (There's a clear economic incentive there, and most bitcoin hardware wallets have pretty low levels of assurance.)

But nonetheless, the number of people who are security conscious enough to lock their keys into their hardware, but not worried about malicious hardware seems quite limited.


>But nonetheless, the number of people who are security conscious enough to lock their keys into their hardware, but not worried about malicious hardware seems quite limited.

Maybe I'm wrong, but it seems like you're misinterpreting these people. TouchID is an ease-of-use feature that you feel good about because you also get to improve your security (save for malignant hardware manufacturing). It's very easy and it improves your security; you don't have to be excessively security-conscious to be interested in that. I like TouchID but I'm not a security-obsessed person (although I'm not quite on the same level as your average Joe), and I'm pretty sure it's easy to sell this, and anything TouchID, to anyone regardless of how security-conscious they are, on the basis that using TouchID is even safer.

I just don't like your view that people who like TouchID must be obsessive about security and understand it inside and out. Most people do things regardless of how much they understand; you won't be an expert in everything.


That market exists because the market for perfectly secure but completely impractical products is much smaller, roughly zero.


My threat model excludes hardware compromise because it's dead easy for an attacker in the position of sticking malicious hardware into my laptop to e.g. run a malicious BIOS that notices when I'm doing crypto operations and wakes up and steals secrets off the CPU. If I worried about that I wouldn't use my computer at all.

I think the market for people whose threat model includes hardware compromise is extremely tiny. It should include purchasers of external Bitcoin hardware wallets, as you suggest, but it probably doesn't include the average SSH user deciding whether to trust hardware built into their laptop.

Also, in the specific case of macOS, your hardware manufacturer can more easily just ship you a malicious ssh binary or ssh-agent.



