Our Approach to Privacy (apple.com)
237 points by fjk on Sept 17, 2017 | 179 comments



Apple sells hardware differentiated by integrated software for premium pricing. They want to build better and more expensive products through superior design and quality control. Collecting and analyzing personal data isn't the most important thing for their business. They might see more benefit by eschewing personal data collection and marketing a privacy-focused message, which they seem to be doing.

In contrast, Google and Facebook are companies that sell advertising. The value they can offer to publishers and advertisers completely relies on how well they know their users. Their competitive moat involves collection of proprietary data to continuously improve their products. These are clear incentives to be sticky and greedy with your information, but also to keep it private and proprietary for their own sake.

With these incentives in mind, I more readily trust Apple when it says that it will not collect my data compared to data conglomerates, and Apple hasn't done anything to aggressively betray that idea in its history (to my knowledge). Combined with the technical security advantages of iOS, I'm inclined to believe Apple products to be the least-bad option for the security-minded today.


There's only one way to fight back.

First: Move out of Google and Facebook, in particular for email. It's a good idea because they can lock you out for nothing, without explanation, and they've been doing this a lot recently. You are not famous, so a Twitter mob won't help you.

Second: Since most pages out there have trackers from these companies, install the AdNauseam extension to not just block tracking but also click on everything in the background. This drowns the information they gather about you in noise.

https://adnauseam.io/
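To see why this degrades their profile of you, consider the signal-to-noise effect: if everything gets clicked, genuine interest clicks become a small fraction of the stream. A toy sketch (the interest names and numbers here are invented for illustration):

```python
def profile_precision(true_interests: set, clicks: list) -> float:
    """Fraction of observed clicks that reflect a genuine interest --
    the usable signal an ad network can extract from the click stream."""
    if not clicks:
        return 0.0
    return sum(c in true_interests for c in clicks) / len(clicks)

# Organic browsing: every click is signal.
organic = ["hiking", "cameras", "hiking"]
# With background clicks on every ad, genuine signal drowns in noise.
noisy = organic + ["casinos", "crypto", "diets", "loans", "vpns", "sofas"]
```

With only organic clicks the precision is 1.0; after AdNauseam-style blanket clicking it drops to 1/3 in this toy stream, and in practice much lower, since the extension clicks far more ads than you ever would.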


AdNauseam is a curious recommendation. While it devalues your data individually, it also consumes bandwidth locally and remotely. It feels like accelerating the arms race.


I do like Apple's approach to privacy. A lot. And I love their phones. While their PC hardware is still pretty, it's getting more and more outdated and overpriced. But that's no longer their focus, so hey.

And indeed, Apple seems to handle privacy well, in practice. I've managed to create pseudonymous accounts at Apple's online store, and fund them with anonymized Bitcoin. And I've managed to make digital purchases.

However, I haven't tested buying physical devices, for pickup in meatspace. I wonder if they'd require official ID, or just proof of purchase. Anyone know?


Hardware-wise, they are top! Privacy-wise, they've done plenty, but there are still significant improvements to be made!

I appreciate the alert when an app tries to grab my contacts/calendar/location.

On the other hand, I DO NOT appreciate it when an app (e.g. British Airways) informs Facebook, Google Analytics, and tens of other trackers every time I use it, leaving me at the mercy of those folks, with Apple doing nothing to help protect my privacy.

This is why an iPhone is only good when it's jailbroken (replace the hosts file, install ProtectMyPrivacy, install Firewall IP; yes, it works on 10.2).


The last time I went to pick up a product (AirPods), it required government-issued photo ID (an Australian driver's license), even though I had the Apple Store app that made the purchase installed on my phone.


That's an issue with Australian law, not Apple. You can also buy such devices overseas.


Ah, thanks. What sorts of purchases in Australia require ID?


Anything over 10,000 AUD (about 8,000 USD) purchased with cash. AirPods are expensive but not _that_ expensive.

My guess is that they paid online for in-store pickup, and Apple asked for ID to confirm they're the one that made the purchase. In which case that's an Apple thing, not a government thing.


Bought my AirPods from Apple in the UK: no ID required.

Bought my iPhone 6S from Apple in Australia: ID required under Australian law.

Very likely the person also bought a phone with a sim/service, hence being asked for ID (under Australian law.)


None (well, some medicine, firearms probably, that sort of thing, not headphones). This is entirely on Apple.


Well, that sucks. So much for privacy, in Australia anyway.

In Australia, do you need to show ID when you buy random electronics for cash? If you pay cash in an Apple Store, do you need to show ID?

Buying Apple hardware used, at swap meets etc, seems the best option. As long as it's not been stolen, anyway. But you can test for that, before paying.


No, there is no requirement to show ID when buying electronics in Australia. I suspect GP's experience was due to them collecting a pre-paid purchase, and Apple Stores having some sort of policy of verifying the purchaser in order to prevent theft.


I am not sure one can trust any 'don't be evil' pledges. With Apple products one is often required to use an Apple ID, and this can be used for tracking purposes. Apple is also a media company (iTunes, among other things), so it's not just a pure hardware vendor; they too sit on a big heap of user data. Beware. They are also required to comply with local regulations, and these may not be compatible with your high expectations on matters of privacy and confidentiality.


I agree with this reasoning (although I'm not particularly privacy minded, and prefer Android and most Google products to iOS and iCloud).

The older tech companies like Apple and Microsoft have built business models with a different set of incentives around user data. Google, and particularly Facebook's, business models entirely rely on monetising user data.

My personal 'ranking' of companies' incentives to respect privacy is something like:

1. Microsoft 2. Apple 3. Amazon 4. Google 5. Facebook

Obviously this ranking is very debatable, but I agree business model incentives are a pretty good heuristic.


I'm curious why you would say that Microsoft has the most respect for privacy given Windows 10 telemetry. Is there a MacOS or iOS equivalent?


nb: ms employee

I would flip MS/Apple personally, but I understand the argument from the perspective of business incentives. Microsoft's incentives don't really care about collection of telemetry, but they have a strong incentive to keep customer data secure from outside entities (e.g., businesses on MS products losing corporate data to outsiders).


I wasn't ranking the company's practices so much as their incentives, based on their business model.

Microsoft derives most of its revenue from enterprise software, the others from consumers. Enterprises are generally more concerned with privacy of their data than consumers.

Arguably Apple have shown better practices of late. My ranking is more what one would expect 'in theory'.


Microsoft puts ads into Windows 10. With ads comes tracking.


Nope. Apple even goes to great lengths to preserve user privacy in the data they do collect (see Apple's use of differential privacy).
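For context, differential privacy adds calibrated noise before data leaves the device, so any single report is deniable while aggregates stay useful. A minimal sketch of the classic randomized-response mechanism (an illustration of the general idea only; Apple's actual local-DP algorithms are more elaborate, and the probability here is made up):

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """Report the true bit with probability p_truth, otherwise a coin flip.
    Any single report is deniable, yet aggregates remain accurate."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_rate(reports: list, p_truth: float = 0.75) -> float:
    """Invert the noise: observed = p_truth * true_rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth
```

Run over many users, the estimated population rate converges on the true one even though no individual's report can be trusted.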


Pre-Win10, pre-telemetry Microsoft was pretty much "hands-off", i.e. you bought the software and then they mostly left you alone.


Maybe hands-off compared to now, but not compared to any other OS of the same era. If it's not activation licences (it's always amused me that performing a hardware upgrade on a cracked copy of XP was less troublesome than on a legally purchased copy), it's frequent notifications: "you're not running our firewall", "you don't seem to be running our preferred antivirus", etc. And worse yet, Windows Update often sneaks in new software you didn't request and often don't want: Bing search bars, Microsoft's shitty AV, MSN Messenger (back in the 90s), Silverlight, Windows 10(!!!), etc.

Compare that to Linux, OS X, BeOS, any of the BSDs, etc and the difference is night and day. Ok, Ubuntu had a stint of including Amazon results in their search but they were relatively transparent about that and it was quickly disabled by default after the backlash they, deservingly imo, received.

Windows has always been very "hands on" because it's designed for users who are very "hands off".


Apple is building the tools needed to better invade people's privacy, and those tools will proliferate, regardless of how responsibly Apple uses those tools themselves.

The pattern has repeated itself so many times throughout history. Technology developed with the best of intentions ends up getting used for bad.


I'm happy Apple is at least trying hard to deal with privacy but honestly I don't think they are doing enough, at least for me.

For example, I don't really want to give most apps constant access to my photos, my camera, or my mic but I really don't have a whole lot of choice if I still want to use popular apps and services like Facebook, Instagram, Messenger, Hangouts, Line, WhatsApp etc..

I really wish that every time they wanted a photo they had to get it through some OS-level UX and only got to see the photos I selected. As it is, they get to see all my photos the moment I want to give them a single one. Similarly, if I take a photo in one of them, it requires permission to read all my photos when all I want it to do is save that photo.

As for the Camera I don't give any of those apps access to the camera because I don't trust them not to spy on me in some way (I assume camera = mic access so they could be doing the subsonic listening for ads things etc...). Instead I take the picture with the built in app then access that picture from the app I want to use the picture in. Of course that leads to the problem above.

Mic access is more problematic. I don't want any of them to have mic access when I'm not using the mic function directly but I can't talk to my friends who use all those services to call me if I don't give the apps mic access.

I feel like if Apple were more serious about privacy they'd handle these issues in some way. The photo one seems mostly straightforward for most use cases: don't let the app access photos at all, only the OS. The camera one is less straightforward. I get that there are innovative things apps can do with the cameras by using them directly. On the other hand, most current use cases could be handled by letting only the OS access the camera, not the app, and then just giving the result to the app.


They actually are changing the Photos privacy defaults in iOS 11: https://developer.apple.com/videos/play/wwdc2017/505/

Apps won’t need to ask for permission to your entire library - the Photo picker will just appear, and only user selected photos will be available to the app.

On the iOS 11 GM at the moment, Facebook still seems to need access to my entire library. So perhaps it is only for apps that link against iOS 11. Guess we’ll see soon.


I agree with this, but just so you know, mic and camera access are two different things, and iOS users get two different prompts if an app wants camera AND mic access.


Thinking about it some more: for camera and mic they could have the option to "ask the user" every time and revoke that permission if the app is no longer the frontmost app after a minute or so. That would at least prevent using the camera and mic when the user has not recently given permission. Whether people would use it or not, I don't know. I would.


> Whether people would use it or not I don't know. I would.

They could allow an app in their app store to provide this functionality to users who want it.


That's actually quite a sensible proposal. I wish they'd implement it.

I often asked myself what happens when I give an app access to my photos. Can that app upload all my photos to their servers? The same question for things like calendar, address book, health data etc.


> they require permission to read all my photos when all I want them to be able to do is save the photo

I don't think this is true, though. From the code I've seen, you only get returned the single UIImage the user chose.

also remember you can revoke permissions at any time via Settings.


The apps are presenting their own UIs showing all the photos. They also have options to upload all the photos in the background. They clearly get permission to access all the photos.

Going to settings doesn't help

1) Giving an app permission to access the photos and then revoking it when I'm done doesn't prevent the app from accessing as many photos as it pleases until I revoke access. In the time it takes me to grant access, select a photo, and then revoke, the app could easily have looked at tens or hundreds of photos.

Add some ML in there for their ad targeting

2) With the Mic off I can't receive calls in any of the messaging apps. If iOS asked each time then that issue would go away but as it is I need to leave on mic support or miss calls. I suppose I could miss the first call, turn on mic support, return the call, turn off mic support.

Still, I'd argue if Apple wanted to be even more serious about privacy they wouldn't allow apps to have unrestricted access to the mic as they do because it makes the burden of trying to prevent spying pretty cumbersome.


You are correct, thanks for educating me!


I think you misunderstand how iOS privacy controls work. An app doesn't get to 'see all your photos' just because you grant photo access; you still have to select which photos to put in the app. Same goes for the camera: that just lets the app pull up the camera interface, not access it 24/7 for whatever purpose it wants. Same with the mic.


The apps do get permission to see all photos. How else would Facebook or Google Photos automatically upload your whole library? Both apps have this option. It requires no extra per-photo permissions and you don't have to select anything; it just requires granting permission to "Photos" once.


I don't think this is correct. The Facebook app regularly shows me all the photos I have taken today and asks if I want to post any of them.


The Facebook mobile website can't do any of this, is just as functional, and easier on your battery life. I highly recommend it!


I can't tell you how many times I've definitely had a photo show up in the "do you want to post this" preview that I most certainly would never want to post.

I feel like I'm having a heart attack every time thinking I posted it already. That feature sucks.


uninstalls app and uses website


Honestly that sounds really creepy.


Of the 6 apps under discussion by greggman, 4 are Facebook properties. Facebook is creepy.


Whenever an app asks for access to Photos to save an image or to read a photo, it gets access to the entire camera roll (all photos). So it can certainly upload your entire photo library someplace if it wants to, subject to foreground usage time, network speed and background activity (limited by iOS).

Compared to that workflow, if you open the stock/official photos app and use the share sheet to share one or more photos with a specific app, then it would get only the selected photo(s).


No offense, but I guarantee nobody wants to spy on you.

Mark is definitely not sitting around thinking "ayyyyy what's Gregg up to?"


True but incomplete. FB and similar make money on bulk collection. They do care but mostly in the aggregate. More data is directly related to more ad revenue.


I don't really follow how that's something that Apple needs to change. You either want to use the apps or you don't. If you don't, just don't download them or don't give them access. If your friends absolutely have to use the apps, then they clearly don't care about the same things you do and you need to convince them to use something else that you are ok with using.

I guess I just don't understand why Apple needs to put things in place to allow you to work with companies you don't want to work with.


Scenario: I install a photo sharing app. I use it to only post photos that I take while going on hikes or of my food. Those are always in an album called "foodpics" or similar, and every time I open their app, I select the "foodpics" folder, then the image inside of it, and then go through the app's upload flow.

Why should the photo app get access to the rest of my library of photos, like the ones in "business receipts" or "my son" that I would never upload to said photo sharing site?

Sometimes you want to use the app, but you don't want it to have a world-view of your phone. In a perfect world the photo sharing app should only get access to the individual photos that I choose to share with it (either through the Photos app or an OS-level file selector)


Sure... but you don't have to use the app. That's the point. If that's a deal-breaker for you, then don't use the app.


>I guess I just don't understand why Apple needs to put things in place to allow you to work with companies you don't want to work with.

What's not to understand? Apple is selling you the phone. It's in Apple's interest to put as many features on the phone as potential buyers might be interested in. Allowing the owners of a device, rather than third-party software, to control access to different parts of that device is not just good business practice, it's common sense.


Apple chose to let the Chinese government into their Chinese data center, while Google chose to leave the country instead. So it's not so simple whom you trust. Personally I trust Google more, as they are just a lot better at keeping things secure, imo. Plus, governments getting into data is a bigger deal to me than a targeted ad. But it is a personal decision.

"A Local Chinese Government Will Oversee Apple’s New iCloud Data Center"

http://fortune.com/2017/08/14/apple-china-icloud-data-center...


Why was this downvoted? It's a relevant point to the discussion.


At least one company is "trying" to keep my photos private. The other day Google Photos told my wife it had prepared an album for our trip to SFO. We were surprised because she had already disabled geotagging, but whaddaya know... Google still figured it out!


If you have location enabled on your phone, Google appears to cross reference your location with your photo timestamps to work out where you were. I know this because when I import my DSLR images to Google Photos it estimates the location, usually very accurately.
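The cross-referencing described above is simple to implement: given a timestamped location history, pick the location fix nearest each photo's timestamp. A hypothetical sketch (function and data shapes invented for illustration, not Google's actual pipeline):

```python
import bisect

def geotag_photo(photo_ts: int, history: list):
    """Estimate a photo's location from a time-sorted (timestamp, lat, lon)
    history by choosing the fix nearest in time to the photo."""
    times = [t for t, _, _ in history]
    i = bisect.bisect_left(times, photo_ts)
    # Compare the fixes on either side of the photo's timestamp.
    candidates = history[max(i - 1, 0):i + 1]
    nearest = min(candidates, key=lambda fix: abs(fix[0] - photo_ts))
    return nearest[1], nearest[2]
```

Even a camera with no GPS gets an accurate tag as long as your phone was in your pocket at the time.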


I've also seen it geotag things based upon the content of the image alone.

I went to the photo store not long ago and had dozens of rolls of negatives scanned, from a vacation to Europe 20 years ago. I uploaded them into Google Photos and it geotagged many of them automatically.

They have a public API that does the same, it detects landmarks in images and can give you a lat/long position for it as well as a confidence score.

https://cloud.google.com/vision/docs/detecting-landmarks


Apple's cloud did leak a lot of celeb pics over the internet... let's not forget that.


That was a phishing attack, nothing to do with Apple other than that the people attacked by it were iPhone users.


I discovered the same last week. I made a trip to Amsterdam some 11 years ago, and since then uploaded my entire photo library to Google Photos. Last week I was playing around and searched "Amsterdam" in Google Photos, whaddaya know, (most of my) photos from Amsterdam showed up, at least those with discernible features/buildings/landmarks.


Google Photos uses ML to tag photos; this feature was released amidst much fanfare at Google I/O a couple of years ago. They don't need location data to geotag photos anymore.


https://www.technologyreview.com/s/600889/google-unveils-neu...

They took 126 million geotagged images from the web, binned them into 26,000 squares, and trained a neural net to predict which square on the earth an image was taken in. That's very poor resolution, but if you see that 20 images taken around the same time are all tagged with the square that contains San Francisco, you can be pretty sure they were taken during a trip to San Francisco.

Reminds me of https://geoguessr.com
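The binning-and-majority-vote idea above can be sketched roughly like this (the cell size is made up; the real model learns its 26,000 squares rather than using a fixed grid):

```python
from collections import Counter

def grid_cell(lat: float, lon: float, cell_deg: float = 2.5):
    """Snap a coordinate to a coarse grid cell -- low resolution, like the
    output of a ~26,000-square classifier."""
    return int(lat // cell_deg), int(lon // cell_deg)

def infer_trip_cell(estimates: list):
    """Photos from the same outing should mostly land in one cell; a
    majority vote smooths over individual misclassifications."""
    votes = Counter(grid_cell(lat, lon) for lat, lon in estimates)
    return votes.most_common(1)[0][0]
```

Three noisy per-photo guesses around the Bay Area plus one stray misclassification in Paris still yield the Bay Area cell, which is how coarse per-image predictions become a confident trip label.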


Thanks for explaining one possible method. It could still mean Google is tracking location "without consent". I suppose it depends on whether they use that location data for anything other than Google Photos.


"Could"? Sounds like they 'did' with OP's SFO album.


Did you book a hotel/airbnb/flight and received a confirmation mail in your Google email account?


That is true as well. I will buy something at Costco and then see ads related to it. Book a hotel and see ads on Facebook (let alone Google). The problem is not the fact that I am looking for these things but the amount of information they steal without my consent.


Privacy is one of those things where I can't tangibly describe why I like it; it just feels good to know that nothing is being saved, even in contrast to merely being targeted for ads and nothing else.


I like it because of this: imagine you're having an intimate conversation with your best friend. In the room silently sit representatives from Google, Facebook, and your government. Before you speak, you (sub)consciously transform your thoughts and phrasing, taking into account both the theory of mind of your friend and those of the other organizations. This necessarily imposes constant low-level overhead and results in a less intimate conversation.

Privacy means you only have to care about communicating with the person with whom you're communicating. Organizations with incomprehensible motives are cut out of the loop. You can be yourself, not a self-censoring, corporate-approved robot.


There is a very simple reason to demand privacy.

We generally speak of organizations, companies, and governments seeking to penetrate the veil of individual privacy, and I think that leads people to forget that these entities are made up of people. And people can, and do, do bad things.

A soft example is Snowden stating that NSA operators regularly pass around intercepted nude photo/video from people who had their privacy unjustly compromised: "These are seen as the fringe benefits of surveillance positions.." But the particularly disconcerting issue is that this information can be used against individuals. This article from the EFF highlights the FBI's actions against MLK:

https://www.eff.org/deeplinks/2014/11/fbis-suicide-letter-dr...

That letter was the FBI, posing as a disillusioned black supporter, detailing various embarrassing information that had been collected on MLK and encouraging him to commit suicide -- stating that it would all be published otherwise. And these couple of examples are only stuff that's being done by the "good guys." Information can, and will, be leaked, stolen, traded, and so on. One can only imagine the sort of things the "bad guys" could cook up.


We have a track record of things going south, fast, for societies that sacrifice privacy. It makes sense to tread warily with it.


Can you give a few examples?


Pretty much every authoritarian overthrow that became genocidal relied on a previous bureaucracy's meticulous record keeping.


Can you please give a few examples?


The classic ones are Pol Pot and Hitler. With the latter, the Danish example is useful.


In your original comment, you were implying that societies that lose privacy go south very quickly; as in, that the loss of privacy is causal.

I can't speak for Pol Pot, since I'm not very familiar with his regime, but with Hitler and the Danes, my understanding is that record keeping aided the German occupation. It's not that Denmark went downhill after it started doing meticulous data gathering.

Can you give causal examples?


Also include the Stasi in East Germany.


There's an analogy with side-effect-free functions: things are easier to reason about when you don't have to worry about peripheral state changes.
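To unpack the analogy: a pure function's behavior is fully described by its inputs, just as a private conversation is fully described by its participants. A toy sketch (names invented for illustration):

```python
audit_log = []  # peripheral state: a third party silently recording

def send_tracked(recipient: str, text: str) -> str:
    # Impure: every call also mutates state outside the conversation,
    # so reasoning about it requires knowing who else is watching.
    audit_log.append((recipient, text))
    return f"to {recipient}: {text}"

def send_private(recipient: str, text: str) -> str:
    # Pure: fully described by its inputs and return value.
    return f"to {recipient}: {text}"
```

Both return the same message, but only the first changes the world behind your back, which is exactly the extra thing you have to keep in your head.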


A few things have struck me really strongly about Apple's stance on, and implementation of, protecting users' privacy:

1. Though it's understandable that Apple earns money primarily by selling hardware, it's sort of amusing and alarming at the same time that a proprietary almost-closed-source software company is focusing on protecting and preserving privacy whereas partially open source platforms competing with Apple seem to be nowhere close on this aspect. Do any of the Android forks try to do as much as Apple does for privacy right out of the box (something a non-technical lay person could get)?

2. It's abundantly clear that a lot of thought has gone into the foundations of a design focused on protecting privacy and in creating silos of information in/with different SDKs and features.

3. It's a bit unclear to me as to why Health data is stored encrypted in iCloud whereas messages aren't. Is there a distinction here between iCloud sync and iCloud backups? The documentation on messages suggests turning off iCloud backup as a protective measure.

4. To me, the weakest link in the ecosystem seems to be third party apps, where Apple relies more on them adhering to the developer guidelines and on publishing a privacy policy.


Regarding your first point, it's difficult to implement some security schemes at the operating system level alone. With full vertical control of the product, you can have nice things like secure enclaves and de-facto hardware cryptography acceleration.

See here for details: https://www.apple.com/business/docs/iOS_Security_Guide.pdf

A lot of the features would be very difficult to implement in Android without cooperating hardware, and hardware is notoriously expensive to get right and scale up. Projects like neo900 and Purism regularly encounter delays, unexpected costs, and pricing issues. It's really tough.

On a broader note, people are spending more and more time in data-hungry apps anyway, which can send almost anything they want to the network. This is sure to chip at any device-level security, pushing it towards irrelevance. I wish I had a log entry every time an app used the location service on my phone along with a database containing a history of Internet-transacted data.


Thanks for the explanation on the Android side. It still seems weird that nobody wants to take this up as a USP for their devices (referring to non-Google entities).

> On a broader note, people are spending more and more time in data-hungry apps anyway, which can send almost anything they want to the network. This is sure to chip at any device-level security, pushing it towards irrelevance. I wish I had a log entry every time an app used the location service on my phone along with a database containing a history of Internet-transacted data.

I've long wished for network access permission on iOS, allowing the user to decide which apps can never connect to any networks. To reduce the total attack surface, I'd want to keep many apps (especially games) running only within their sandboxes and having access to only the data they create/generate on-device and no other external resource/server.

AFAIK, Android has had this even in the days of permission requests at app install time. I don't know if granular control is available on this from Android 6 onwards.


I know that Samsung is trying to push its Knox product to enterprises. I'm unsure about their technical or financial success. I don't know of anyone trying this in the consumer space either, but I'd definitely love to see more activity in this long-neglected sector.

Knox: https://www.samsungknox.com/en


For point 1, I'd like to note that Copperhead provides Pixel devices that come with a privacy and security focused OS.

https://copperhead.co/android/store


Where are you getting the info about iMessage not being stored in the cloud encrypted? Wasn't that a major feature point of iOS 11?


It seems like Apple could read them in iCloud. I have to admit this is a guess based on the wording on the privacy page. The privacy page still has the same text as two days ago (quoting an excerpt here):

> Apple has no way to decrypt iMessage and FaceTime data when it’s in transit between devices. So unlike other companies’ messaging services, Apple doesn’t scan your communications, and we wouldn’t be able to comply with a wiretap order even if we wanted to. While we do back up iMessage and SMS messages for your convenience using iCloud Backup, you can turn it off whenever you want.

Compare that with what it says under the Health section of the same page (quoted excerpt):

> And any Health data backed up to iCloud is encrypted both in transit and on our servers.


Apple's approach to privacy also includes being a partner in PRISM, a fact which they vigorously denied as false allegations until it was proven true.

Every story about Apple and privacy chooses to omit this huge piece of info.

Why should anybody trust them now? What has changed to make anybody believe they aren't still lying about privacy?


PRISM is a system for dispatching FISA 702 directives, which are the documents containing "selectors" (search queries) pursuant to a court-approved FISA 702 certification. It is to search warrants what Stripe is to credit card authorizations.

"Not" being a partner in PRISM doesn't mean much; it just means you're legally obligated to handle that paperwork by hand. Like every other company in the country, you're still required to comply with a valid 702 directive.


That has nothing to do with the questions that I posed. Apple being part of PRISM is what I referred to. Not the nature of the program. The intentional lying about it. Why should anybody trust them?

“We have never heard of PRISM. We do not provide any government agency with direct access to our servers, and any government agency requesting customer data must get a court order,” stated Apple spokesperson Steve Dowling.

http://www.cheatsheet.com/technology/is-apple-lying-about-it...


This article doesn't say anything at all, other than that all the major tech companies denied the original Greenwald claim that the NSA had direct access to servers (claims that every major outlet who published them have subsequently walked back), and that they had anything to do with "PRISM".

Other investigative reporters have worked out what PRISM was. Now we're all pretty sure we know. But that doesn't mean the tech companies at the time had any clue. It is somewhat unlikely that the thing that sent them 702 directives was, to them, referred to as "PRISM".

Generally, I find, when PRISM gets introduced into a discussion about Internet privacy, the discussion gets rapidly dumber. I'm not saying that's happening here, but it's a risk. If you're going to talk about it, you should have better sourcing than whatever the hell "The Cheat Sheet" is.


If their technology is built such that even they themselves cannot peer into the inner workings of your content, what good is their association with PRISM?


How do you know Apple is telling the truth?

One thing we're certain of, however, is that Apple has the signing keys. They also encrypt their firmware and even other apps to hide how they work.


I believe if they weren't telling the truth, _somebody_ would've caught on by now. We have lots of people in sec watching court cases involving police trying to break into iPhones. If somehow a department was able to access a phone without a user complying and no known backdoor was exposed, I'm sure we would've heard of it by now.

I hope you're not expecting them to open source the secure enclave.


But it's not. It says "encrypted when sent and, in most cases, when stored on our servers". Even if encrypted at rest, that doesn't mean they can't decrypt it.

The calls and messages should be end-to-end encrypted but they are in control of the PKI so they could probably eavesdrop if they wanted.


Reread Apple's security papers. Much of what's "encrypted on their servers" is encrypted in ways deliberately designed to make it untenable for Apple to decrypt. For instance, material in iCloud is encrypted with a key derived from your device PIN. So concerned is Apple with maintaining their inability to decrypt, even against a brute-force search of the PIN space, that they've contrived an elaborate quorum scheme of HSMs to manage the key space and count failed attempts.

Nobody else does this, and Apple could have stopped here and rightly claimed the most secure large-scale cloud architecture of any mainstream tech company. But they didn't. They used programmable HSMs to implement the system and were concerned that a serious power bent on coercing Apple would target the HSMs. So, once they got the systems deployed, the admins met and ran the programming keys for the HSMs through a "physical one-way hash function".

On stage at Black Hat, Ivan Krstic claims that hash function to have been a Vitamix blender.

Almost always, when companies claim to encrypt things serverside, they're doing something shady. And it is always preferable that secrets be kept on devices and never touch servers. But Apple is taking the problem seriously.
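The basic idea of deriving an escrow key from a short PIN can be sketched with a standard KDF. This is only an illustration using PBKDF2; Apple's real scheme entangles device secrets and relies on HSM-enforced attempt counters precisely because, as the code comment notes, key stretching alone cannot save a tiny PIN space:

```python
import hashlib
import os

def derive_escrow_key(pin: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Stretch a short PIN into a 256-bit key with PBKDF2-HMAC-SHA256.
    Stretching slows each guess down, but a 4-6 digit PIN space (10^4-10^6)
    is still brute-forceable offline -- which is why the attempt counting
    has to live in hardware (the HSM quorum) rather than in the KDF."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, iterations, dklen=32)

salt = os.urandom(16)
key = derive_escrow_key("123456", salt)

assert len(key) == 32
assert derive_escrow_key("123456", salt) == key   # same PIN + salt -> same key
assert derive_escrow_key("654321", salt) != key   # different PIN -> different key
```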


I definitely take your suggestions and prose seriously Mr. Ptacek. Do you use iCloud? How about macOS computers? Do you have an article or blog on what you do for backup/cloud?

Thank you for this kind of information. Very grateful to have this stuff at hand.


I wish the EFF still made those "Who has your back" infographics, they were really helpful if you wanted to find out which companies respected your data both online and in the courts. But IMO Apple is the only large multinational corporation that is actually taking steps to protect my privacy so I'm way more inclined to trust them with my personal information. Can't say the same for Google or others.


How much of this can be independently verified? I suppose we just need to take their word for it?


It all boils down to what you see in public. The FBI was trying to get Apple to create an iOS variant to extract data from a device, to the point where Tim Cook wrote a letter refusing to do so and was willing to fight it in court. In a similar fashion, Apple has given talks and published white papers on iOS security. Despite these well-documented white papers and talks discussing the internal workings of iOS, we haven't seen any well-known or trivial exploits surface that demonstrate flaws in them.

In the case of the FBI fiasco, for example, we know that the FBI later paid Cellebrite for an exploit that allowed them to unlock the device. We don't know the details of how that happened, but the FBI had to reach out to an independent company and exploit an older device to do it.

To be clear: if they were lying about the lengths they go to with encryption, on device security, and user trust, this story would have been different. The FBI would have already had a backdoor or been able to trivially break the device, or they would have complied and created an iOS variant to break in. All we know is what they stood their ground on, and the effort that was required for the FBI to get in.

iOS 10 security white paper: https://www.apple.com/business/docs/iOS_Security_Guide.pdf

Talk about iOS security at Black Hat: https://www.youtube.com/watch?v=BLGFriOKz6U


We know that they lied to customers claiming they couldn't help law enforcement get data off customers' devices.

"Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it's not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8."[1]

After it became clear that it is technically feasible for Apple to assist with those data requests, they quietly removed that claim from their website.[2]

[1] https://arstechnica.com/gadgets/2014/09/apple-expands-data-e...

[2] https://www.apple.com/privacy/government-information-request...


By "technically feasible" are you referring to Apple's ability to build a software update that allows for quicker brute-forcing of the passcode?


Yes. Brute forcing PIN codes can be done very quickly. Apple realized it was technically feasible too, which is why they no longer make that claim. That they didn't apologize to their users and offer refunds demonstrates the disdain they have for their customers.


Perhaps both the FBI and Apple had a chance at good PR for their causes, and the FBI had a chance to set legal precedent, so neither admitted that the FBI could break in if needed.

The idea that iPhones were inaccessible to highly-resourced attackers like the U.S. government seems unbelievable. Everything has security holes, even if they are expensive to discover or exploit; the U.S. government's resources are immense and the payoff - access to iPhones when needed - is clearly worthwhile. And in the end, when the PR debate had run its course, the FBI found a way in.


Yes. And we can.

There is no upside and potentially massive downside/fallout for Apple to just lie about something like this.

Maybe they will 'screw up' - but it would be completely stupid for them to say one thing, and then intentionally do another.

A 0.5% drop in sales would be a disaster.

Twitter can cause quite a storm these days, companies are generally afraid of it.

If it were Google, or FB, it would be another story, because their business is your personal information.

But Apple - not so much. They want to hustle hardware. For them 'your data' is mostly a hassle they'd rather wash their hands of.


After reading this, I have two questions:

1. Are there any statistics showing the ratio of security issues between Android and iOS?

2. What's the most common phone among security researchers?

To be honest, I'm wary of believing this, but I own a Nexus and if I had to give the benefit of the doubt to either Google or Apple, it would most certainly be Apple because they don't make business out of my data (or, not as much as Google, at least).


FWIW tptacek and a few others recommend iPhones for journalists:

https://news.ycombinator.com/item?id=13798321


Sounds about what I was looking for. Thanks!


> 2. What's the most common phone among security researchers?

Not sure about which phone Schneier uses, but he argues that having a smartphone is a logical choice in today's society. However he also brings a nuance to that: that we have the pragmatic choice of leaving our smartphone at home instead of taking it with us, and that we should consider that.

Keep in mind a dumbphone also has big brother capabilities. Just likely less precise/severe (assuming no remote holes).


It is very hard to get this information in depth because researchers publish on Android much more frequently (more users, more open platform).

You can look at the CVEs reported on each platform though and the number of critical vulns is comparable at this point.


I'm an Android user, but I feel the same way.

I sometimes wonder what would happen if I were to create a new Google identity and start fresh. Would they try to re-establish my old email's profile and data to the new one without my consent?


I tried this as an exercise when they were pushing Google+. Created a new account, and was very careful to read through all the settings and ensure that nothing from my old account could be traced back to the new one. I'm not joking when I say nothing from my old account and tracking:

1. I timed my move (to a different city inside the Bay Area)

2. Changed ISP from Comcast to Wave

3. Bought a new computer

4. Closed my old phone account and only started using my work phone. Never used the hotspot on my work phone

5. Even changed my coffee-drinking habit + free wifi access locations from Starbucks to Peets

The only things I couldn't change were my name + SSN. Serious paranoid-level stuff for my online life, but they still populated my digital life with the same info and recommendations. After about 10-15 days of doing all this, I enabled Google+ (because at that time, to use a YouTube account, I had to enable it). You know who the first contact recommendation was for me under "family" - my brother, who lives in a different country. I just gave up at that point.

Maybe I did something wrong during the setup, or slipped and used the accounts for some period on the same computer or browser, or the companies that sell my offline data really do have rich data about me. Whatever the goof-up was, it was disheartening to learn that after all that work my online privacy lasted 15 days at most. I really wish there were better ways. Considering the number of security/data breaches (e.g. the recent Equifax one), I feel like I have little to no control over things.


That is pretty frightening.... thanks for sharing your story.


Yes, definitely.


> 2. What's the most common phone among security researchers?

Probably a dumbphone, so neither Android nor iOS...


Security researchers are people as well. Most would have smartphones too, but possibly try to use them differently than the rest of us.


> While we do back up iMessage and SMS messages for your convenience using iCloud Backup, you can turn it off whenever you want.

Wouldn't they be able to hold that privacy promise much better if they actually allowed people to keep iCloud backup on let's say for pictures, but still be able to disable iMessage messages? I think many people would like to use iCloud but without it backing up personal conversations, too.

Also, iMessage's end-to-end encryption was rather flawed last Matthew Green checked, compared to other end-to-end messaging apps.

https://blog.cryptographyengineering.com/2016/03/21/attack-o...

As for their use of differential privacy: when they introduced it, it was essentially a way of gathering more of your data than before, not less, while still being able to say "hey, we may gather more data than ever on you starting with the new iOS, but it's pretty private, so it's cool, don't worry about it".

All of that said, I know Apple is still miles ahead of Google on privacy. If anything, over the last 1-2 years, Google has become increasingly bolder and more shameless about tracking users without them realizing (except in the EU, where they are forced to make it a little easier for users to understand how they are being tracked, and even that happened because of the anti-trust lawsuit).

Here's just one example of Google's increasingly privacy-hostile behavior:

https://www.extremetech.com/internet/238093-google-quietly-c...


I agree it's not perfect. I kind of get around it by disabling iCloud backup (which includes iMessage, etc.). I just use iCloud to back up photos, and I back up my phone locally.


Differential privacy is obviously worse for privacy than absolute privacy. But the latter is simply too restrictive and would preclude Apple from applying machine learning to their users' benefit. I think, for example, what Apple is doing in health has some awesome potential. Also animated shit emojis!

But "differential privacy" has a specific meaning that isn't just "sharing less data". It's trying to find provable methods to get the aggregate data needed for ML without exposing any single user's data. Kinda like the big data version of having one random soldier with a blank in his rifle in any firing squad.


> Apple has no way to decrypt iMessage and FaceTime data when it’s in transit between devices

What about when it's not in transit between devices?


Well, they decrypt it on your behalf when it's on your device. The point is that they couldn't do that without your device/account.


They then go on to say:

> While we do back up iMessage and SMS messages for your convenience using iCloud Backup, you can turn it off whenever you want.

Sure, I can stop backing my data up - but presuming I don't (like the vast majority of people), does that mean if they got a wiretap order they could just read the data straight from their backup servers?

I'm not sure if this was just phrased poorly, or if backing up your iMessages effectively undoes any protection your messages once had.


"The iCloud Loophole" (article from March-2016): https://www.macrumors.com/2016/03/02/icloud-backups-less-sec...

Although in a link in this HN article Apple says backups _are_ encrypted: https://support.apple.com/en-us/HT202303

So, a little confusing...

EDIT

bah, found this:

"Right now, although iCloud backups are encrypted, the keys for the encryption are also stored with Apple. "

https://9to5mac.com/2016/02/25/apple-working-on-stronger-icl...


Good question. If you backup via iTunes you have the option of encrypting with a password. I've never been prompted to enter a backup password for iCloud...


On the subject of privacy, I'm interested in migrating away from Google services, but I have years' worth of data tied up there (email, photos, music, movies, etc). I know Google offers ways to download your data, so it's technically possible to migrate, but it would be tedious and time consuming, to say the least.

Does anyone have suggestions for making that process easier?


Apple does so many things well, actually. I would return to them for some things and even recommend them (versus exclusively GNU/Linux and LineageOS) if they'd only change the stupid iOS terms that prohibit GPL software.


Isn't the issue that the App Store doesn't have a mechanism to redistribute the source in a way that is compliant with the GPL? Is it fair to say Apple prohibits the GPL? Anyone that tried to sell GPL-licensed software on the App Store would be "breaking the law", so Apple prohibits it? Or is there more to it?


http://www.fsf.org/blogs/licensing/more-about-the-app-store-... goes into more detail, but the summary is:

> Apple's Terms of Service impose restrictive limits on use and distribution for any software distributed through the App Store, and the GPL doesn't allow that.


Specifically, the issue is that I cannot replace your binary with one I create myself from your source.

1. I can’t use your app ID

2. I can’t sign it with your cert

3. I can’t tie it to your website for deep links (because of #1)

It used to be the case that I couldn’t even get the binary to the device without paying $99/year, but that’s not necessary any more.

The dev program fee was definitely anti-GPL. I don’t know if the other things I mentioned are.


As others clarified, this is not about source. I don't even care about that really, I mean I do in principle — but I just want to use and recommend to others and have GPL software within the iOS ecosystem, on these devices. I'm fine if the source were available through just a link in the software to a website where you separately download it.

No, here's the deal:

GPL: You CAN modify and share this software, and if you share, you have to keep these terms for others so they have freedoms too.

iOS terms: You MAY NOT modify or share ANY software you get as a normal consumer through the app store OR install any software from outside the app store.

The problem is that Apple's unwillingness to ALLOW a user to exercise these basic freedoms means that there's no way to distribute in the App Store any software that requires these freedoms be maintained. It's incompatible terms. And the fault for the incompatibility lies 100% on Apple for their terms that are the ones that demand that freedoms not be allowed.


> Isn't the issue that the App Store doesn't have a mechanism to redistribute the source in a way that is compliant with the GPL?

Is it? I know nothing of this subject and am speaking out of my ass, but that seems.... trivial to do. I'm curious as to why that is hard for them to do?


[flagged]


And yet, the GPL has been absolutely critical in establishing free software as a credible player, so I'd disagree with you on the license being stupid.


No more so than the BSD/MIT/Apache licenses. The BSD license predates the GPL. And has none of the dead weight or baggage.


> The BSD license predates the GPL

And the BSDs predate Linux. Yet Linux distros are the most popular free software OSes out there.

I did not say the GPL invented open-source, but I don't think you can deny that it was the force to move it from obscurity, at least at the beginning.


Do they just block the GPLs or any FLOSS license?


Just GPLv3


That's the one with the anti-tivo clause. Presumably, they'd have to support jail breaking, etc if they distributed gplv3 software. (They could get around this by supporting side loading, of course...)


I'm pretty sure all copyleft terms of any sort are prohibited, including GPLv2; Apple requires that all software fit their terms, so only sublicensable free-software licenses are compatible.


Wake me up when they let one sync contacts without giving icloud cleartext access in the process


Can you say more about why this is important to you?

I mean, more stuff encrypted without provider escrow keys is a generally good thing, but are your contacts specifically more important than, say, you calendar events?

What threat model are you worried about with your personal, curated contacts?


all data is important, indeed. contacts are just an easy to access example. I'm worried about countries creating laws, secret and public, that force Apple to hand over my data. Specifically those governments that I spend time in and that I also spend time criticising or protesting within as an activist. I'm worried about regime changes. I'm worried about 3rd party data breaches. I'm not worried about how my data exposure would effect me now but how it would effect me in the future when mining and correlating data to use against activists will be much easier.


On Android you can sideload VPN apps; on iOS, on the other hand, they were banned in China. Apple mounted the most successful attack on general computing with its walled garden. The whole "Apple is good for privacy" thing is marketing.


Android and its community are sorta schizophrenic towards security.

On one hand, users are told that they should never ever, ever install applications from untrusted sources. They should always use the play store because applications are scanned for vulnerabilities and whatnot.

On the other hand, we have people telling us that one of the great advantages of Android is that you can sideload apps - bypassing the store security model completely.

On one hand, a VPN needs root access for transparent proxying (or at least Tor does). Changing the hosts file needs root access. Changing gps.conf requires root. Lots of useful operations need root access, so if you are concerned about privacy and security, or just want more control, you probably want root.

On the other hand, root is stupidly hard to obtain, and users are strongly discouraged from doing it anyway because it opens all sorts of attack vectors. Unless the manufacturer provides a legitimate method to do it, the operation of obtaining root itself, like on iOS, often relies on an unpatched local privilege escalation vulnerability. Note that any application can exploit this, not just the rooting app.

I don't support Apple's walled garden approach, but we can't deny that they have a much clearer picture with regards to security, and the results speak for themselves. Malware on Apple devices is rare, whereas on Android it is rapidly becoming routine. Unfortunately, you sacrifice flexibility for enhanced security.


My perception is also that malware ends up in the Google Play Store with much higher frequency than the App Store. Just do a news search for "Google Play Store malware" and "App Store malware" and compare.

Also, one can sideload apps, if you have a Mac, onto iOS. Obviously, that's not anywhere as integrated but maybe that's a good thing. Heck, maybe Apple even added that so people in China could sideload VPNs. Maybe iOS VPNs are good enough (no root, no TOR?)?


Add up all the malicious app installs on Google Play Store, and it doesn't even come close to the 500 million[1] (conservative estimate) users affected by XCodeGhost. It looks worse when you consider that the 500 million is on an order of magnitude smaller total iOS userbase vs. Play Store userbase and when you consider that Google allows third party security researchers to investigate and publish research on the Play Store while Apple does not, so XCodeGhost is likely to be the tip of the iceberg.[2]

[1] https://www.google.com/amp/s/www.macrumors.com/2015/09/20/xc...

[2] https://www.google.com/amp/s/www.cultofmac.com/128577/apple-...


LOL, let's start with StageFright (1 billion+ pwned with just a text message), move on to StageFright2 (because patching is hard...), and then just keep running down the list of malware in the Play store that is still there months after being discovered. XCodeGhost OTOH seemed to have hit around 40 apps, so that would probably not even get it into the top-100 list of Google Play malware families. Malware families. The Play store is such a shitshow that you can actually have different strains of malware running around in fake apps, like some sort of digital syphilis spreading through the brothel that Google forces everyone to visit if they want the shiny apps...


The number of people actually affected by StageFright malware from Google Play Store (which is what we were discussing) is very likely to be zero. It simply blocks the payload. We know for a fact that 500 million+ were actually affected by XCodeGhost, an order of magnitude more than the sum total from all malware seen in the Play Store.

Don't conflate unpatched Linux systems with the Play Store. Anybody who uses Android and cares about the security of their device (like anybody who uses a Linux-based router and cares about the security of their network) uses vendors who deploy timely security updates.


Root is not that hard to obtain on most devices. The instructions are straightforward[1] and easy to follow, especially for anyone reading this thread.

But lately Google has decided that rooted phones are a security issue. So if you do choose to install some sort of su utility, some functions like Android Pay may cease to work, not because of technical reasons but because Google deliberately disables them on rooted phones.

[1] for example https://wiki.lineageos.org/devices/lux/install


Of course, once you're root you can also prevent those apps (which don't run as root) from finding out that you have root, but it's an ongoing cat-and-mouse game.


Of course. But Google is currently ahead from what I've read.


but we can't argue that they have a much clearer picture with regards to security, and the results speak for themselves

What they've done could be said to be more authoritarian, and indeed if you do the analogue of throwing everyone in jail by default because they could be guilty, then you will basically have no crime. The question is whether that's actually a good idea... it's the old "freedom vs. security" argument.


Yep as someone who is in a cybersecurity job, it's almost trivial to hack Android.


Most Android devices at least, unless you have the latest Nex... Pixel or Samsung phone.


Considering this is HN, you're probably fine with any device as long as you run a patched OS on it. Which is fairly easy with LineageOS.

And they support a lot of devices. I dug out an old Moto G (1st generation, falcon) last week and it runs Lineage 14 (based on Android 7/Nougat) smoothly.


You don't need to sideload VPN apps on Android. They're right in the store, and there's an API for them to run without root, from what I understand.

Here's OpenVPN.

https://play.google.com/store/apps/details?id=net.openvpn.op...

There's also Tor with Orbot and Orfox.

https://play.google.com/store/apps/details?id=org.torproject...

https://play.google.com/store/apps/details?id=info.guardianp...

But only explicitly configured apps will use this proxy. Proxying all traffic does require a rooted phone.

In general, Android gives you more options but iOS seems to offer a more privacy-minded configuration OOTB.


The parent is talking about the lesser-known (for obvious reasons) VPNs, not the well-known ones which are pretty much useless in China for "jumping over the wall".


Do Chinese authorities block OpenVPN based on traffic characteristics? Because you can use this app to connect to any OpenVPN server you choose, including your own.

Same with Tor. You can use an unlisted bridge to get on the network.


They do deep packet inspection. This item, especially the top comment, is well worth reading:

https://news.ycombinator.com/item?id=10101469


Sorry for the silly question. I don't own a mac or an iPhone anymore. My understanding is you can sideload apps on your own iPhone from your own mac running xcode. Are there limitations to what kind of apps you can side load using xcode?


Nothing installed that way can receive push notifications, though not all apps need or use them.


Not strictly true. I regularly test push notifications on sideloaded builds at client sites. Getting provisioning profiles and certificates set up correctly (especially on all extensions) is a bit cumbersome. Getting those certificates may require an Apple enterprise developer account. And the builds signed this way are not debuggable.


I'm not certain but I think push notifications can also be used "invisibly" to trigger data sync so this could mean limited functionality beyond just receiving "pop-ups".


That's not true. You can receive push notifications on side loaded apps.


You can (used to be able to?) also sideload without a Mac, that's how some pirate app stores work without jailbreak.


I don't trust Apple. It's a matter of trust, so I have nothing else to say, just I can't trust Apple.


I wouldn't trust the likes of Apple on issues of privacy. Who can audit their software? It's not FOSS.


You can still contemplate their incentives, their business model, and their reputation/track record.

It's not as good as auditing the code (and making sure the code you see is what actually runs), but it's a lot less effort, and depending on the extent to which you are ready to dedicate resources to improving your privacy, may be an acceptable tradeoff.


You only need less than half of Freedom 2 ("freedom to study how the program works") to audit the software. Not even the source code - because you can't trust the source to match the binary anyway (unless Apple has reproducible builds, of course, but I'm not sure they care about this).

But, yes, it's mostly based on trust rather than knowledge obtained through some audit. (I'd even say that trusting is sort of the opposite of knowing)

Disclaimer: never owned any Apple device, ever. But they're doing quite well (or, should I say, others are doing so damn terrible in regards to user security and privacy, Apple starts to look not so bad) so I may consider them as an option.


So a cop can put your phone in front of your face while you're tied up and unable to do a thing. Boom, phone unlocked.

Marvelous.


You always have a choice to not buy an iPhone X, or if you do, not use Face ID.


AFAIK if you keep your eyes closed, it doesn't unlock.

Also, it's still an improvement over when cops borrow your thumb. At least this way, you have less chance of getting hurt.


I'd rather lose my finger than my head, thank you.


You are the first person to ever think of this! Congratulations. This place is becoming reddit.


And they would also have access to your fingerprint then. What difference would it make??


In either case, they would have to get a warrant first.

Once they have that warrant, it doesn't matter what biometric locking method you use.


Couldn't the same be said for a fingerprint too?


iOS 11 allows you to disable touch or face ID by pressing the power button 5 times.


They changed this for the iPhone X: you press the lock and one of the volume buttons at the same time.


That's awesome. I wonder if cops could construe that into an "obstruction of justice" (or similar) offense?


Only if they have a warrant for the search at the time you do it. Else, the device is not part of an active investigation.


If you think it's a security risk, then don't enable. It's pretty simple. It's very transparent about how it works.


You can disable Face ID by pressing the power and volume buttons at the same time. They are on either side of the phone. Faster than the multiple-press method and more discreet.


Only on the iPhone X and possibly the 8.


Face ID only exists on the iPhone X, so you don't need a way to turn it off on other devices. :)


Touch ID though. (edit: I see what you mean. I didn't see parent didn't mention Touch ID)


All the anti-constitutionalist people at the NSA are getting heartburn as we speak.



