What's really disappointing is that there seems to be an all-or-nothing security model here. If I pair my phone with a computer, then suddenly it has complete access to spy on me, install monitoring tools that can continue to run, etc. Why can't there be a way to transfer music/photos to/from my phone without granting that full device access?
You'd be pretty annoyed if the front door to your house, when you opened it, also opened up your document safe, emptied your wallet onto the floor and invited visitors to leave bugging devices to spy on you later.
Also, the defence of "just don't agree to pair your phone with an unknown USB device" can actually be tricky. On a flight, I plugged my phone into the USB port on the seatback to charge it. The phone kept asking if I wanted to pair it with something (who knows what it was? the entertainment system, maybe?). If I had accidentally hit the wrong button only once (on a prompt that randomly appeared), my phone could have been owned, and there's no easy way to un-pair.
Why can something even repeatedly ask for permission? (My iPhone was asking me every 5 seconds the other day due to a faulty cable.) Even if there's a reason for that, why isn't there a "don't ask again" button?
Why doesn't such a thing require a PIN code or iCloud password entry?
Why aren't services like file_relay or pcap settings buried deep in the Advanced section of Settings.app, requiring password entry to enable and carrying a warning message?
These are advanced features. Few people use them, so why not make them a little harder to reach?
Why can't I opt out of Apple's access to my files? Sure, I can say "No" when they ask at the store, but I should be able to say no in the Settings app as well.
It's easier to enable packet capture than to complete the Provisioning Profile process when releasing an app update to the App Store.
These are the things Apple should be answering instead of the vague support note entry they published today.
I think my iPhone was repeatedly asking about it for similar reasons to yours, as the connector is worn and it can take several plug attempts to make it even charge. However, a malicious USB socket could cause repeated prompts by just briefly dropping the power from time to time to trigger it. (n.b. I'm sure my experience wasn't a sinister attack on my phone, just stating that this is possible to do)
The "Don't ask again" is a little tricky - is there enough information present for the phone to tell if it is plugged into the same or different computer?
A better UI would be for the phone to always default to not pairing. No popup choice would be shown at all. After all, how often do you need to pair to a new computer?
Not really. The USB charging protocol uses the data pins so that a charger can tell the device how much current it may draw. If the data pins are left open, a compliant device can only draw <100mA, which is not even enough power to keep the device from draining its battery. Even if the data pins are shorted, I don't think that works with most Apple devices, which use a different, non-standard signalling scheme.
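To make that negotiation concrete, here's a toy Python sketch of how a device might map the data-pin state it observes to an allowed draw. The state names and current figures are my own illustrative assumptions (Apple's scheme signals with resistor dividers on D+/D-), not a real charging driver:

    # Toy sketch, not a real driver: map the observed data-pin state to the
    # current (in mA) the device may assume it can draw. The state names and
    # figures are illustrative assumptions, not spec-exact values.
    CHARGE_LIMITS_MA = {
        "pins_open": 100,            # unconfigured USB 2.0 device
        "pins_shorted": 1500,        # USB BC 1.2 dedicated charging port
        "apple_low_divider": 500,    # Apple-style voltage signature on D+/D-
        "apple_high_divider": 2100,  # higher-power Apple signature
    }

    def allowed_current_ma(pin_state: str) -> int:
        """Return the assumed draw limit, falling back to the safe default."""
        return CHARGE_LIMITS_MA.get(pin_state, 100)

    for state in CHARGE_LIMITS_MA:
        print(state, allowed_current_ma(state), "mA")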
Google shows some uses beforehand[1], but the Wayback Machine does not corroborate. I think Google is using the self-reported page date, and the pages dynamically update with "latest news" content.
1> Buy a shit Nokia, use the 2 pin charger. TBH if it goes flat it won't charge off USB anyway. Then use a microSD for transferring music/photos.
2> Buy an Android handset with USB OTG. Transfer files on and off via a USB stick that you control. Charge it with a USB cable with the data pins shorted (you can buy these off Amazon).
Both of these ignore the main attack vector that I'd be concerned about which is over-the-air (via GSM/packet data) and a fuck load more scary than plugging your USB thing into a dirty USB hole and getting some USB STD.
There are also solutions for iPhones: send via AirDrop, email, iMessage, iCloud sharing, Dropbox, copy.com, etc. AirDrop works even without an Internet connection and will work that way with Macs too.
It's not like it's 2003 again and the only way to share a picture or an album is with a cable.
His work on security in iOS is quite interesting, but he seems determined to spin everything for maximum publicity rather than, well, accuracy or truth, which is a shame. For example, on that blog post he writes about pcapd and developers:
"Lets start with pcapd; I mentioned in my talk that pcapd has many legitimate uses such as these"
Yet in the slides for his talk[1] under theories he writes:
"Maybe for Developers for Debugging? No."
There are many examples of things like this in his writing, where relevant facts are left unsaid in order to give a particular statement maximum melodrama.
On top of that, he seems to continually avoid the point that to enable these you need physical access to the device (for the pairing process to have a machine marked as trusted). If you have physical access, enabling debug[2] features is probably the least of your worries.
Anyway, rant over. It just annoys me that genuinely interesting information often seems to be spun by personalities to give it artificial gloss these days, making it all feel a bit slimy and self-serving.
>Yet in the slides for his talk[1] under theories he writes:
>"Maybe for Developers for Debugging? No."
Followed by 6 bullet point reasons why this isn't a general excuse for all of the backdoors - it's mentioned in reference to all of his findings and not specifically pcapd (which is only mentioned on 2 consecutive slides out of 60, separated from this statement about debugging by 15 slides.)
Your comment is far more misleading than what he's written.
I was only providing an example for pcapd rather than all of the items he is classing as backdoors. The entire slide is:
Maybe for Developers for Debugging? No.
- Actual developer tools live on the developer image, and are only available when Developer Mode is enabled
- Xcode does not provide a packet sniffing interface for developers
- Developers don’t need to bypass backup encryption
- Developers don’t need access to such sensitive content
- Apple wants developers to use the SDK APIs to get data
- There are no docs to tell developers about these “features”
To me all those points seem to be provided to systematically deny legitimate uses for pcapd, which is contrary to the blog entry where he states, "I mentioned in my talk that pcapd has many legitimate uses". However it's entirely possible I'm reading it wrong.
As I mentioned, there is good information in there. Adding extra, potentially misleading fluff is unnecessary and counter-productive to my mind. That's just my opinion though.
Since I no longer work for the company, I'll mention that I've worked with the team at Apple that used pcapd on the iPhone. It was an extremely valuable tool for finding/testing issues.
The fact that the other major mobile OS gets 98% of the mobile malware (according to studies) makes this point about the "nonchalant attitude" rather weak...
> The fact that the other major mobile OS gets 98% of the mobile malware (according to studies) makes this point about the "nonchalant attitude" rather weak...
No it does not.
It's not acceptable when any company is nonchalant about any security problem on their device, product or service.
And, taking Android as an example, Google is very open about the malware and malicious app problem[1] -- and takes steps to help mitigate said problem.
Apple is just straight-up telling users it's not a problem. There is a key difference here.
> I thought this required me to unlock my phone and say 'I trust this computer'
It does, but only once, and then it's almost impossible to "un-trust" the computer without wiping your phone. Besides, you probably have already "trusted" your home computer -- which could be exploited between then and now and then used as a vehicle for attack.
And, as some commenters have mentioned, when you plug into a new device (an airplane USB charging port, for example), the phone may repeatedly ask you to accept a pairing with the other side (the USB charger/device), and an accidental tap on the wrong button can leave the door open.
That 98% figure is caused primarily by users disabling the default-enabled restrictions on third-party non-Play-Store apps, and particularly in the pirated app space.
The attitude you exhibit towards iOS doesn't fly when Linux advocates mention that Windows is the target of 98% of PC malware.
And maybe it's true: 98% of the PC malware is targeted at Windows and 98% of the mobile malware is targeted at Android. I certainly take advantage of the Windows malware situation by running Linux, and not running any malware checkers.
Apple is a respectable company who cares about their brand image, so they're obviously allowing only high-profile adversaries' malware on your device! </s>
I think that buying an Apple device is implicitly consenting to have all of your communications monitored and not to have access to your own data. In other words, I don't think this is a big deal, but there's also no need to spread FUD against the people who are specifically pointing it out.
edit: It's in the EULAs. I didn't think that I was saying something controversial:) I always underestimate people's level of denial...
I don't know what the situation is with Windows Mobile or any of the other mobile OS's, but with the big two it goes like this:
iOS: You get updates for your device for a decently long time after you buy it, even after new devices come out.
Android: You stop getting updates (all updates: security fixes are NOT backported) somewhere between 6 months to 2 years after getting your device, assuming it's a brand new product line. If it's not the latest and greatest there's a chance it's already out of date, and won't be receiving any updates at all. Even if you do get updates, depending on the device they may be months or even years after they are released.
So, ignoring any other mobile OS's, your choice is "a few known issues and an attitude problem" with iOS or "walking around with well publicised wide-open security vulnerabilities in your pocket" with Android.
Only if you compare certain aspects of it to certain (other!) aspects of other manufacturers.
That was the point. Saying "oh but other manufacturers do other things badly" needlessly polarizes the argument and detracts from Apple's fuck-up, which is the topic of discussion here.
Of course not providing security updates after a relatively short time is a bad thing to do, but how is that even relevant to the backdoors in this article?
(edit: changed "Android" to read "manufacturers", as em3rgent0rdr rightly pointed out your complaint doesn't have anything to do with the OS, but the manufacturers providing locked devices with Android OS on it. with Apple/iOS this just happens to be the same party)
Actually, if you use an Android open-source-based distro like CyanogenMod or OmniROM, you will likely be able to get nightly updates continuously. Android is a code ecosystem... you should really be directing criticism about the lack of updates at the individual manufacturers such as Motorola, Samsung, or HTC if you are comparing to Apple.
You just described significant portions of the security industry, which runs on maximizing the fear and FUD factor.
It's not just true of computer security. It's really true globally of the entire "security" sector, from infosec to police to the global "national security" defense/intelligence industry and so forth. Step 1: frighten, step 2: sell protection, step 3: profit.
Not saying there aren't risks out there, just that the industry markets itself through bombast and sometimes exaggerates them.
Back to the infosec realm, the simple truth is that the only absolutely secure system is one that is off and the only absolute privacy is in your own head (maybe). Everything else is a matter of degrees of risk, and the curve is hockey stick shaped. It's relatively easy to mitigate the big risks, but that leaves a long tail of small risks and small vulnerabilities that require an exponentially increasing amount of effort and inconvenience to deal with.
Every industry plays up the usefulness of its product; it's called marketing. It's on the consumer to cut through the marketing-speak and understand what they actually need to pay for.
Marketing can degrade into con artistry. It isn't always, but it certainly can.
That being said, I think in the long run people appreciate it if your marketing isn't scummy.
(Not directly aimed at the original article, though I do think there's a lot of deceptive, confusing, and overly hyperbolic FUD in the security sector.)
It's also on each industry to inform truthfully and honestly. As the work to "cut through the marketing speak" can obstruct business activity there are already various laws in place to punish a too liberal interpretation of the word "marketing".
Well, yes, but without this huge marketing effort on the side of the infosec industry, we'd still be running the web on http instead of https, and we'd still have to convince people that XSS is bad and not just a "neat trick".
Those are just the first two things from the top of my head that IMO rightly took great effort to be taken seriously by the mainstream (developers and consumers alike).
Also I disagree with the "Uncertainty & Doubt" part of FUD. Security researchers are generally extremely clear about what exactly the issues and risks are, with few exceptions when required for responsible disclosure.
No, definitely not. I was partially ranting :) I couldn't find a recording of any talks he gave, only the slides, so he may have covered that in those.
So if I understand correctly, anyone who steals or temporarily has physical access to your iPhone can access your data, including account passwords (email, etc.), which makes the encryption on the device completely useless.
If that's not a serious back door, I don't know what is... the melodrama seems in order.
I'm a little conflicted about this. On one hand it's good to learn about the security of your device, on the other hand he's far too partial and sensationalist about these iOS features. Yes, features.
• It's good to know packet capture can be remotely enabled on your device from data collected on a computer the device has trusted.
• It's good to know Apple has the power to look through your encrypted files given physical access (file relay).
• It's good to know one can extract files from his phone using a trusted computer (house arrest).
However, that's it. There's no "back door". There's no (implied or otherwise) NSA conspiracy. There's a reason why the media "misunderstood" his talk: it was full of hyperbole.
> It's good to know Apple has the power to look through your encrypted files given physical access (file relay).
So the requisites are: "be Apple" and "have physical access"? That's awfully little for what's supposed to be encrypted files.
It (seems to be) no secret that law enforcement sends devices to Apple when they can't handle them themselves. So if I'm understanding it right: you can't protect yourself with an iDevice, consider all your data compromised?
What is the threat model that concerns you? If you don't trust Apple then don't use iOS full stop.
I trust Apple engineers, but not Apple Store employees. So, for me the question to ask is what effort is required to circumvent the system. If any employee at an Apple Store can circumvent your encryption then that is an issue. If circumventing requires your passcode, then that's far less concerning.
Are you joking? Remote access for a specific party will virtually always become remote access for malicious actors, whether that was the original intent or not.
> It's good to know packet capture can be remotely enabled on your device from data collected on a computer the device has trusted or been tricked into trusting
> It's good to know Apple or anyone who can spoof Apple has the power to look through your encrypted files given physical access (file relay).
> It's good to know one can extract files from his phone using a trusted or seemingly trusted computer (house arrest).
> Apple has the power to look through your encrypted files given physical access
> However, that's it. There's no "back door".
What? How is that not a backdoor?
Edit: Also, from the article, "Apple apparently has admitted to the mechanics behind file relay, which skip around backup encryption, to get to much the same data. In addition to this, it can be dumped wirelessly, without the user’s knowledge."
So not even physical access, just proximity/same wireless network?
>As usual, the media has completely derailed the intention of my talk.
Lol. The connotations in his presentation and his retweeting of all the press it got were pretty clear. Seems to me like this guy is looking for his next gig.
Ah, thank you. It was a 500 earlier, and now it just says,
> Checking your browser before accessing zdziarski.com.
> This process is automatic. Your browser will redirect to your requested content shortly.
> Please allow up to 5 seconds…
> DDoS protection by CloudFlare
So in short: Apple has back doors that they claim aren't really back doors since only Apple apps can use them. If the NSA hasn't been using them already, it is only a matter of time.
As I understand it, there is more than just figuring out how it works; one also needs to have physical access to the phone and be able to imitate Apple cryptographically. Not out of reach for the NSA maybe, but not exactly typical hacker stuff.
No, SSL certs are used to sign packages and software too. And Apple would not have a root cert; their cert would be signed by a root CA, which could be used to sign other certs if it's tricked into thinking it's Apple requesting them (like in the recent Google cert example).
So one could impersonate a company if they have a cert that says they are that company.
You are conflating the CA system traditionally used on the Web with SSL itself. Apple does not depend on other certificate authorities to sign its software. Anybody can create their own root CA — it just won't be trusted by browsers out of the box.
No, it's about the browser CA system. I don't know exactly how Apple implemented their signing for iDevices, but it's a reasonable assumption that the certs need to be signed by Apple, and they didn't effectively hand the keys over to every registrar in the world.
They are likely using a plain-old SSL Cert signed by a plain-old public CA, which is how your computer would know if the executable appears to come from Apple or not.
First of all, code signing certificates are not "plain-old SSL certs". They're for code signing, not SSL.
Second, Apple includes their own root certificates in their own operating systems just like everybody else. I've personally implemented a code signing mechanism for a platform that had no root certificates except for those I personally generated (and still control).
The public CA system is just irrelevant here. It has nothing to do with anything.
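For illustration, here's a minimal sketch of CA-free signature verification along the lines described above, simplified to a single pinned public key rather than a full chain to a private root. It assumes Python's cryptography package, an RSA signing key, and a made-up file name for the pinned vendor key; the point is simply that the verifier trusts only a key it ships with, so the public CA system never enters the picture:

    # Sketch only: accept a package iff it was signed by the pinned vendor key.
    # "pinned_vendor_root.pem" is a hypothetical file baked in at build time.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    with open("pinned_vendor_root.pem", "rb") as f:
        PINNED_KEY = serialization.load_pem_public_key(f.read())  # assumes RSA

    def is_genuine(package_bytes: bytes, signature: bytes) -> bool:
        """True only if the signature verifies against the pinned key."""
        try:
            PINNED_KEY.verify(signature, package_bytes,
                              padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False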
Backdoors that require the user to unlock their device and have paired with a PC in the past.
If a paired PC is compromised (a trivial task for a sophisticated hacker or the NSA, if the millions of Windows PC botnets are any evidence), and Wi-Fi sync is enabled, and the device is unlocked and in use, then the compromised PC could theoretically harvest personal information from the device without any warning or notification to the user.
IMO it seems like the only real requirement to compromise someone's iDevice is that they use it with a compromised PC, and Wi-Fi sync seems to make it all much easier.
Well, true, but copying personal data to the PC is exactly what sync is supposed to do - a good chunk of that data is actually synced with the computer, and the rest needs to be included in backups (which need to be able to be restored on other devices, so they can't be encrypted with a device-specific key). If Wi-Fi sync is enabled, all that needs to happen over Wi-Fi. So I'm not sure what Apple could do about it, other than make it harder to compromise Macs.
The part of this story I think deserves more attention is security against a sophisticated adversary who does not have the passcode or access to a paired computer. In this case, data protection should be effective (the data is encrypted with a key that requires going through the hardware AES engine to derive from the passcode, i.e. slow), but for some reason most data is apparently not protected. This doesn't seem hard to improve to me, and I'd like Apple to do so.
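To illustrate the "slow" part: below is a rough Python sketch of passcode-based key derivation using PBKDF2. On the actual device the derivation is additionally entangled with a per-device UID key locked inside the hardware AES engine, which software can't reproduce, so this toy version only models the per-guess cost; the iteration count is an illustrative guess:

    # Sketch of why passcode-derived keys resist offline guessing: every
    # guess must pay for an expensive derivation. Parameters are illustrative.
    import hashlib
    import os

    SALT = os.urandom(16)     # stored alongside the ciphertext
    ITERATIONS = 200_000      # tuned so one derivation costs noticeable time

    def derive_wrapping_key(passcode: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), SALT, ITERATIONS)

    key = derive_wrapping_key("1234")   # an attacker repeats this per guess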
In other words, news flash: physical access allows an attacker in the know to compromise computer security. You laid it out: a minimum of 4 circumstances need to line up for these exploits to work.
If we're going to have "lawful intercept" legal requirement, I'd rather have the mechanisms require this type of intrusive action that require a warrant in most cases.
I was really hooked by this talk until he characterized supervision/enterprise enrollment as a "backdoor", and the more I read about it, the more bullshitty it really is.
Guess what. If you hacked into my Mac, then yes, you can see my pictures in iPhoto. As my iPhone keeps syncing to the Mac, yes, you'll be able to see the pictures from my phone. But you'll also be able to view my emails (from Mail on the Mac), all my documents, and install a key logger on my Mac and steal my bank credentials.
I get a notification asking if I want to trust the computer I've just connected to every time I connect, regardless of whether I've trusted it in the past or not.
This sounds a bit like the same sort of customer-experience-related hacks that MSFT used to put (and maybe still puts) in all their software, and that caused so many holes in security. Poor attention to security won't just let in US government intrusion, it'll also let in other governments and hackers. Seriously, letting a 'trusted computer' enable that data syncing? They're playing with fire. (feel free to let me know if I'm missing anything here)
That's an easy thing to say. But by the same token, if Apple required individual authentication to access the filesystem remotely, critics would scream about the privacy issues associated with linking a file transfer to a human identity.
The process of establishing "trust" between computer and iOS device probably needs a little work, but the concept itself isn't inherently insecure.
Why isn't the security press screaming about the scary inclusion of a massive black hole of security risk on OS X... OpenSSH? All I need to do is get physical access, click a checkbox in a preference pane and copy a public key to a user's home directory, and "poof", I own the box!
No, what's really disappointing is FUD directed at debugging tools. These sorts of "presentations" and "research" are pointless. Anyone who does iOS development knows about these tools; they're not secret. Apple has Tech Notes and documentation in Xcode on them going back years. Let's please try to focus our ire where it's needed.
And to whoever said that Google is "very open" about their malicious app problems, well, gosh, where to start...
Google's Android is the cause of the malicious app problem. By not allowing users to have fine-grained access control on the various entitlements in Android, Google is forcing users to adopt an all-or-nothing approach to every app they download. Don't like that this app wants access to your Contacts? Fine, then don't install it. The root problem here is not allowing the user to determine, after-the-fact, what privileges an app should have. Apple gets this right, Google fails miserably.
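As a sketch of the after-the-fact control being argued for here (purely hypothetical, not how either OS implements permissions): install-time grants become per-permission toggles that the user can flip later, and runtime checks honor the current state rather than an all-or-nothing install decision:

    # Hypothetical model of post-install, per-permission revocation.
    granted = {
        "com.example.chatapp": {"contacts": True, "location": True},
    }

    def revoke(app_id: str, permission: str) -> None:
        granted.setdefault(app_id, {})[permission] = False

    def check(app_id: str, permission: str) -> bool:
        # The app keeps running; only this one capability is denied.
        return granted.get(app_id, {}).get(permission, False)

    revoke("com.example.chatapp", "contacts")
    assert check("com.example.chatapp", "location")
    assert not check("com.example.chatapp", "contacts")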
Of course there's also no one Android. You know that, right? There's a bunch of different Androids from a bunch of different carriers all of which run different hacked-up versions littered with a bunch of crap code from carriers that almost no one wants. Code which I imagine is also littered with security bugs because it's written by carriers who barely give a damn if this junk even works and wouldn't know "secure" if it hit them in the head.
And on top of all that, depending on your phone and depending on your carrier, that brand new phone you just bought might even be running an Android that's years out of date and full of known vulnerabilities. There's no comparison between iOS and Android when it comes to timely security updates. The Android ecosystem is a complete fail on the security front at the moment. Period.
Google can play dumb if they want. Plausible deniability is oftentimes quite useful after all...
What's really funny is that you turned an article completely about an Apple security problem that they should probably fix, into a completely off topic rant about Android.
These types of stories are upvoted without being read because, despite intimations that iOS users are hipsters, the real hipsters are people who use Android because of some imaginary freedom. Most Android users don't care about fake hipster stances and just want a cheap phone. They are not willing to pay for security and they have none, as any moderately talented hacker can own an Android easily, even if the user only uses apps from Google's walled garden, due to carrier foolishness. Pretty sad to see HN so taken in by this nonsense.
How could a cracker own my Android? I bought an unsubsidised Moto E. It has no carrier bloatware, and prompts me when there is a new version of firmware available. If it's not rooted, and I only install apps from Google Play, what are the known attack vectors?
The definition of malware in that article is quite broad, and includes apps which are just 'collecting and sending GPS coordinates'.
Even if one of the apps I've installed from Google Play is malware (by the definition in the article) it doesn't mean my phone is 'owned'. An attacker can't run arbitrary code on the device or get copies of my data or send texts pretending to be me.
Seems like iOS 8 should offer a settings screen to allow you to revoke sync keys and/or see a list of computers you've trusted in the past. Perhaps it should default to deleting the keys if you haven't sync'd with a specific computer in some timeout period (30 days?).
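A hypothetical sketch of the housekeeping such a settings screen could do: list the trusted hosts, let the user revoke one, and expire any record that hasn't synced within the timeout. None of this reflects how iOS actually stores pairing records:

    # Hypothetical pairing-record store with revocation and a 30-day expiry.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    PAIRING_TIMEOUT = timedelta(days=30)

    @dataclass
    class PairingRecord:
        host_name: str
        key_fingerprint: str
        last_synced: datetime

    class TrustedHosts:
        def __init__(self) -> None:
            self.records: list[PairingRecord] = []

        def revoke(self, fingerprint: str) -> None:
            self.records = [r for r in self.records
                            if r.key_fingerprint != fingerprint]

        def expire_stale(self, now: datetime) -> None:
            self.records = [r for r in self.records
                            if now - r.last_synced <= PAIRING_TIMEOUT]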
A few of the services should be locked down a bit further regardless of anything else.
I also don't see this as a valid bypass of encrypted files - you need the device to be on and have its passcode entered. That's a far cry from taking a cold device, booting it, then connecting with a stolen sync key. Besides the fact that we've known you were unsafe if the device was unlocked for some time - some police even carry Faraday bags and portable chargers to keep them accessible probably for this very reason.
I remember when it was trivial to examine artifacts from iTunes backups, until backup encryption with a passphrase was implemented. (v6 I think?)
Something that still has the capability to bypass backup encryption sounds incredibly dangerous from my perspective.
There are plenty of legitimate concerns mentioned in his talk. I agree there's no cause for panic, but what about the fact that there are obviously services not disclosed to us: developers, users, enterprise executives relying on this as a trusted platform, etc…
The potential risk this poses (or implies) makes the lack of initial disclosure criminally ignorant, at the very least.
If Apple wants to balance the scale, they will need to do more than address and resolve these issues. They need to extend their transparency a smidgen. :)
Resolving the hyperbole debate: asking a user "May I connect to some device?", then installing permanent remote access to the device, and never prompting the user again nor giving them further information, is a plain and simple backdoor.
The difference between this and malware is malware authors create web pages explaining to users to "Just click OK and don't ask what this is" before they deliver you a backdoored application.
If the prompt said "May we install remote access tools that allow us to remotely control and remove data from your device forever?", then it wouldn't be a backdoor. It would be a front door.
Kind of glad Apple just confirmed the services are there and ignored him otherwise. I'm sure he has a nice career ahead of him of complaining next that the cp command in the adb shell on Android isn't hard-coded to ignore any path containing DCIM (user pictures), and other such nonsense. Honestly, he isn't helping anything and he is just making it harder for Apple to fix broken phones and provide better customer service in general.
Wonder what he thinks of Amazon Mayday showing your screen to customer support remotely. Users love it since customer support can now guide you to exactly the right settings and other things, but I think privacy nuts like this will have seizures.
I'm more surprised at the fact that Apple decided to actually confirm the existence of a back door in their product (even though they are "misleading" (as stated in the article) about what really is at risk here).
The fact that Apple was downplaying this tells me they haven't realized that a product, especially operating systems and computers, depends a lot on the userbase; if the userbase is kept ignorant then Apple will keep itself in its 'comfort zone', since it's not being pushed by the users to improve.
Nonetheless, it's still pretty good that Apple has confirmed this. Baby steps, I guess.