I'm not a security expert, and I have a lot more trust in my iPhone for managing secrets than in my personally configured Linux PCs.
In Linux, I can easily monitor how much and what kind of data is transferred to which remote IPs, and block it on a per-app basis anytime I want.
In iOS, I can't do any of that. At most, I can disable an app from using cellular data. If anyone knows how to monitor/block network connections on a per-app basis in iOS, I would love to hear about it.
For me, it is the difference between "trust" and "trust and verify".
I use iOS and have some level of trust in Apple/iOS. But I don't trust the majority of iOS apps on security/privacy.
reminds me of this:
> Air-gapped networks are isolated, separated both logically and physically from public networks. Although the feasibility of invading such systems has been demonstrated in recent years, exfiltration of data from air-gapped networks is still a challenging task. In this paper we present GSMem, a malware that can exfiltrate data through an air-gap over cellular frequencies. Rogue software on an infected target computer modulates and transmits electromagnetic signals at cellular frequencies by invoking specific memory-related instructions and utilizing the multi-channel memory architecture to amplify the transmission. Furthermore, we show that the transmitted signals can be received and demodulated by a rootkit placed in the baseband firmware of a nearby cellular phone. We present crucial design issues such as signal generation and reception, data modulation, and transmission detection. We implement a prototype of GSMem consisting of a transmitter and a receiver and evaluate its performance and limitations. Our current results demonstrate its efficacy and feasibility, achieving an effective transmission distance of 1 - 5.5 meters with a standard mobile phone. When using a dedicated, yet affordable hardware receiver, the effective distance reached over 30 meters.
Virtual Keyboard Developer Leaked 31M Client Records (mackeepersecurity.com)
Apple is sharing your facial wireframe with apps (washingtonpost.com)
iOS is an operating system for mobile devices by Apple.
It should be possible to write an app that does what you want; I'm not sure whether or not such apps already exist.
> If anyone knows how to monitor/block network connections on a per-app basis in iOS, I would love to hear about it.
Look at what they have done with turning off Bluetooth and Wi-Fi from Control Center. It's really eye-opening to see how often Apple products ping home.
I don't think that follows. Allowing applications to intercept and mess with other applications' network traffic would be an obvious security issue; that's far more likely to be the reason than some kind of vague "we want to track your data" thing.
It's certainly possible that Apple could construct an appropriate API for allowing users to configure apps in such a fashion that they could monitor network traffic, much the same way as similar APIs exist for accessing e.g. photos. But since it's a niche application at best, I'm hardly surprised they haven't done so.
…though if you're concerned about privacy, perhaps you should be using a real VPN anyway, in which case you could handle traffic monitoring and filtering on the server side.
Edit: I guess the server-side approach wouldn't allow identifying which app is making the connection. The NetworkExtension APIs, however, should allow that: you get a flow of NEPackets, each of which has a 'metadata' property containing a 'sourceAppUniqueIdentifier' and a 'sourceAppSigningIdentifier'. I don't have personal experience using these APIs, though.
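For anyone curious, here's a rough sketch of what that might look like inside a packet tunnel provider. To be clear, this is illustrative only: the class name and logging are made up, it needs the packet-tunnel entitlement, and I haven't verified how reliably the metadata gets populated in practice.

    import NetworkExtension

    // Illustrative only: a tunnel provider that logs which app each packet came from.
    class InspectingTunnelProvider: NEPacketTunnelProvider {

        private func readLoop() {
            packetFlow.readPacketObjects { [weak self] packets in
                guard let self = self else { return }
                for packet in packets {
                    // NEPacket.metadata (when present) identifies the source app.
                    if let meta = packet.metadata {
                        NSLog("%@", "\(packet.data.count) bytes from \(meta.sourceAppSigningIdentifier)")
                    }
                    // A real provider would forward or drop the packet here.
                }
                self.readLoop() // keep reading
            }
        }

        override func startTunnel(options: [String: NSObject]?,
                                  completionHandler: @escaping (Error?) -> Void) {
            // A real provider must apply NEPacketTunnelNetworkSettings before
            // traffic flows; omitted here for brevity.
            completionHandler(nil)
            readLoop()
        }
    }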
> Allowing applications to intercept and mess with other applications' network traffic
When someone buys an iPhone, there is an expectation that an ongoing relationship with the company is created. It is assumed every purchaser wants to use Apple's time servers, Apple's messaging service, Apple's cloud storage, Apple's software review process, etc., and there is no opt-out. Consequently, the purchaser is expected to establish a means of identifying themselves to the company (an Apple ID) in the future. Fingerprints may be collected, facial recognition data, etc. Apple has the means to know its hardware customers very well. It does not really feel like we own the hardware; it's more like a lease or rental. It feels like we are being used as a source of further revenue generation: a massive user base tethered to the company, which it can use as a bargaining chip to make deals with other companies.
Here is a different approach. Imagine you have two mobile devices. 1. An iPhone. 2. A portable computer running an open source OS that can act as firewall, authoritative DNS server and/or gateway. Apple has no control over #2. #1 can only access the internet through #2. #2 belongs solely to the user and it is controlled by the user, not any company.
Perhaps one day we will see Apple controlling the user's routing table, and any network settings entered by the user will be subservient to Apple's.
Are you trying to imply that iOS devices ‘ping home’ when you disconnect from WiFi or Bluetooth? Or are you just complaining about the previous behavior (updated in 11.2 to be more obvious) that disconnected WiFi instead of disabling it?
To use a public key with something like Userify (https://userify.com; plug: SSH key management) or a service like GitHub, use --export-key to export the public key in OpenSSH format:
sekey --export-key <key-id>
This is seriously such an awesome project that I might have to get a new MBP just for this.
But with a TPM there's no external unlock mechanism like Touch ID; the TPM unlock happens from the operating system.
However, when I run ./bundle/Sekey.App/.../sekey, I keep getting a "Killed 9" message.
When I run the unsigned version, the binary at least runs (it shows the -h message). Any hints on how to fix this?
I wonder if the limitation to elliptic curve keys originates with the Secure Enclave, or if that's just the one type of key this tool supports.
Can it also support a PIN to go with the biometric auth?
That makes me sad :( Does anyone know if that's a SE limitation, or the app's?
Basically treat this the same as you would a physical 2fa token.
Without an export it could maybe be one key in a multisig.
I tried to look up the info, but the only thing I found was this: "But because its backing storage is physically part of the Secure Enclave, you can never inspect the key’s data."
That means that it gets stored in the SE instead of on your computer's hard drive.
Also, Apple has instructions for clearing the Secure Enclave if you're going to sell your MacBook Pro with Touch ID.
That's also why it only generates one kind of key. It's a black box that spits out public keys.
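To illustrate, here's a rough sketch of asking for a key via the public Keychain API; the access-control flags and application tag are just example choices, and error handling is force-unwrapped for brevity. The only thing the enclave will generate is a 256-bit EC key, and only the public half can be copied out.

    import Foundation
    import Security

    var error: Unmanaged<CFError>?

    // Example policy: the key is only usable after biometric auth on this device.
    let access = SecAccessControlCreateWithFlags(
        kCFAllocatorDefault,
        kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
        [.privateKeyUsage, .biometryCurrentSet],
        &error
    )!

    let attributes: [String: Any] = [
        kSecAttrKeyType as String:       kSecAttrKeyTypeECSECPrimeRandom, // P-256 is all the SE accepts
        kSecAttrKeySizeInBits as String: 256,
        kSecAttrTokenID as String:       kSecAttrTokenIDSecureEnclave,    // generate inside the enclave
        kSecPrivateKeyAttrs as String: [
            kSecAttrIsPermanent as String:    true,
            kSecAttrApplicationTag as String: "com.example.se-demo".data(using: .utf8)!, // made-up tag
            kSecAttrAccessControl as String:  access
        ]
    ]

    // The returned handle refers to key material that never leaves the enclave;
    // only the public key is ever exposed.
    let privateKey = SecKeyCreateRandomKey(attributes as CFDictionary, &error)!
    let publicKey  = SecKeyCopyPublicKey(privateKey)!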
E.g. http://paper.ijcsns.org/07_book/201006/20100623.pdf details how to do it for elliptic curves. But it's been studied since https://link.springer.com/content/pdf/10.1007%2F3-540-69053-...
The tl;dr is that any device in which you can't check the implementation or get the private key out (e.g. Secure Enclave, TPM, etc.) can leak your key to a passive attacker in a way that you provably can't detect.
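To make that concrete, here's a rough sketch of the classic kleptographic construction for ECDSA (the general idea, not necessarily the exact scheme in the linked paper): the manufacturer embeds its own public point A = aG in the black box, which picks the first nonce k1 at random but derives the next one as k2 = H(k1*A). From the first signature the attacker sees r1 = x(k1*G), reconstructs the point k1*G (trying both possible y values), and computes a*(k1*G) = k1*A, hence k2. Knowing k2, the second signature (r2, s2) over hash h2 gives the private key d = r2^-1 * (s2*k2 - h2) mod n. Both signatures verify normally, and without knowing a they are indistinguishable from honest ones.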
Another layer of security is never bad.
As usual it's trade-offs all the way down.
But nonetheless, the number of people who are security conscious enough to lock their keys into their hardware, but not worried about malicious hardware, seems quite limited.
Maybe I'm wrong, but it seems like you're misinterpreting these people. Touch ID is an ease-of-use feature that you feel good about because you also get to improve your security (save for malicious hardware manufacturing). It's very easy and it improves your security; you don't have to be excessively security conscious to be interested in that. I like Touch ID, but I'm not a security-obsessed person (although I'm not quite on the same level as your average Joe), and I'm pretty sure it's easy to sell this, and Touch ID in general, to anyone regardless of how security conscious they are, on the basis that using it is even safer.
I just don't like your view that people who like Touch ID must be obsessive about security and understand it inside and out. Most people do things regardless of how much they understand; you won't be an expert in everything.
I think the market for people whose threat model includes hardware compromise is extremely tiny. It should include purchasers of external Bitcoin hardware wallets, as you suggest, but it probably doesn't include the average SSH user deciding whether to trust hardware built into their laptop.
Also, in the specific case of macOS, your hardware manufacturer can more easily just ship you a malicious ssh binary or ssh-agent...