macOS has been designed to keep users and their data safe while respecting their privacy.
Gatekeeper performs online checks to verify if an app contains known malware and whether the developer’s signing certificate is revoked. We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are launching or running on their devices.
Notarization checks if the app contains known malware using an encrypted connection that is resilient to server failures.
These security checks have never included the user’s Apple ID or the identity of their device. To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs.
In addition, over the next year we will introduce several changes to our security checks:
* A new encrypted protocol for Developer ID certificate revocation checks
* Strong protections against server failure
* A new preference for users to opt out of these security protections
Apple has a really strong brand. If this were from Google or Facebook, there'd be an angry mob asking how they dared log IPs in the first place.
Also, incidents like these would be a thermonuclear event for a search company, given the intimacy and inherent 'Internet-ness' of the data. Apple, by contrast, can get away with things like introducing Intelligent Tracking Prevention, which two years later turned out to be a globally unique identifier that also leaked your web history. Yet because Apple had no perceived incentive to leak this information, people see it as a 'mere' competency issue and move on.
It’s a different value exchange, and Apple doesn’t really have a lot to gain by becoming a “bad actor” with data. See also, for example, their planned advertising-tracking protections coming in 2021: they have little to lose in that area and a lot to gain.
Google and Facebook already do that. And with them, of course, there's no way to stop it for most of their properties, as they're 100% web-based.
Plus, in the excerpt you're quoting, they say they stopped logging IP addresses for the Developer ID certificate checks. Logging those (and more) is par for the course in all kinds of debug environments (from MS, Oracle, etc.); that Apple no longer does so is arguably impressive.
Checking on their site shows an empty list on my account
A few years or months ago this would have been considered a conspiracy theory; today it's just fine for most people because "think about my mother". I agree with secure-by-default, but don't forget that when Apple decides, or is "forced", to remove an application, you have no workaround on iOS. This "feature" will come to the laptop and desktop too if users don't demand otherwise and keep bringing up the mom argument.
Apple's online verification scheme still seems to be the wrong approach both for privacy (since it leaks information) and for security (since apps still need to keep working offline and during service outages). Encrypted queries can still leak information to observers, and we apparently still have to trust Apple to "remove" information from their logs (rather than simply not logging to begin with).
Dev certificate revocations are rare enough that they can be handled by periodic updates to an on-device revocation list. This is similar to what Chrome does with its CRLSet.
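For illustration, here is a minimal sketch of what such a local check could look like, assuming the revoked Developer ID serials are shipped to the machine as a periodically updated file; the path and file format are hypothetical, and a real implementation would also verify the list's signature and freshness:

```python
# Hypothetical sketch of a CRLSet-style local revocation check.
# Path and file format are made up for illustration.
import json
from pathlib import Path

REVOCATION_LIST = Path("/var/db/devid-revocations.json")  # hypothetical location

def load_revoked_serials() -> set[str]:
    """Load the periodically updated list of revoked certificate serials."""
    if not REVOCATION_LIST.exists():
        return set()  # no list yet: fail open, like the current soft-fail behavior
    with REVOCATION_LIST.open() as f:
        return set(json.load(f)["revoked_serials"])

def is_certificate_revoked(serial: str) -> bool:
    """Purely local check: nothing leaves the machine at app launch."""
    return serial in load_revoked_serials()
```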
I am still waiting for someone to provide a plausible explanation of why this has to be an online check rather than working like antivirus, where signatures are pushed to the client.
Instead, everything was derailed into "Apple data collection".
Remember that, kids. I’m as surprised as you are.
Now I have an even bigger and more difficult writing task ahead of me: rms cold emailed me today to ask me, point blank (and presumably non-rhetorically), why I am still running macOS.
That’s going to be a doozy, because he’s damn well right.
As for RMS… well, the issue is that the way he lives is just exceedingly difficult to keep up with in modern society. That said, you could instead be taking steps to improve yourself by using Linux; but perhaps there is a place in this world for people who slowly propose changes to macOS and make it better for millions of people, alongside those who shout for radical change immediately. Maybe you should probe for what he thinks you should do.
I’m just thrilled that so many people care about privacy. I mostly assumed it was a lost cause, given the status quo on mobile (where basically every app launch notifies 3+ companies and data brokers, with no way to turn it off).
Oh, what I wouldn’t give for Apple to sherlock Little Snitch and port it to iOS! I’m going to assume that their Safari ad privacy backtracking for Facebook means that this probably won’t happen, though.
* An option to notify the user that a certificate was revoked: a short, tweet-length description of why, a link to details, and a choice to quarantine the app or keep using it. This is similar in style to what antivirus software does when it tells you: 'this kind of malware was potentially found, but if we are wrong, feel free to remove it from quarantine'.
* The revocation list is checked locally on the machine twice a day, and remotely (the current way) twice a week, always on the same days, to strike a good balance as a default. Allow the user to adjust the frequency. (A rough sketch of this flow follows below.)
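A rough sketch of what that could look like; every name, default, and the prompt callback are hypothetical, just there to make the proposal concrete:

```python
# Hypothetical sketch of the proposed policy and quarantine prompt.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RevocationCheckPolicy:
    local_checks_per_day: int = 2     # check the on-device list twice a day
    remote_checks_per_week: int = 2   # current online check, twice a week, same weekdays
    # Both values would be user-adjustable in a settings pane.

def handle_revoked_certificate(app_name: str, reason: str,
                               ask_user: Callable[[str, list[str]], str]) -> bool:
    """Tell the user what happened and let them decide, antivirus-quarantine style.

    Returns True if the app should be quarantined."""
    prompt = (f"The developer certificate for {app_name} was revoked ({reason}). "
              "Quarantine the app, or keep using it?")
    return ask_user(prompt, ["Quarantine", "Keep using it"]) == "Quarantine"
```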
The general taboo around even asking users to disable any of macOS's security protections needs to disappear, in my honest opinion. With Big Sur and Apple Silicon, I don't think it's hyperbole anymore to speak of a locked-up macOS tailored toward casual users, dragging developers along in the process.
Will they change course on that issue? Or will you have to carry a Raspberry Pi around as a hardware firewall if you want an actually private network connection? (Or, alternatively, not use a Mac.)
It also increases the attack surface, as malicious programs may find ways to hijack their traffic, or attach their traffic to OS traffic. All you need is an Apple service that relays the information somehow -- for example, a hypothetical Apple service that requests information/metadata from an application specified URL.
Believe me, I won’t ever again use a Mac except through an external VPN router. I already needed to take that step for iPad/iPhone and was putting it off. This sort of seals the deal, since all of the devices can use the same Wi-Fi device.
You can do this with any existing software firewall on any OS; here it just happens via a secondary whitelist (as opposed to the firewall configuration).
The "malicious firewall bypass" (that the same mechanism can be used to add an exception for another app), as in the Twitter example, is not, though.
If you have root, you can also remove Apple’s exceptions from that list. I haven’t checked, but that should fix the issue of Apple’s services bypassing firewalls.
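For anyone curious what is actually on that list, here is a quick sketch that just prints it; the path and the ContentFilterExclusionList key are what third-party researchers reported at the time, not anything Apple documents, and on Big Sur the sealed system volume means actually editing the file takes more than root:

```python
# Print the bundle IDs reported to bypass NEFilter-based firewalls on Big Sur.
# Path and plist key are as reported by researchers, not documented by Apple.
import plistlib

INFO_PLIST = ("/System/Library/Frameworks/NetworkExtension.framework/"
              "Versions/Current/Resources/Info.plist")

with open(INFO_PLIST, "rb") as f:
    info = plistlib.load(f)

for entry in info.get("ContentFilterExclusionList", []):
    print(entry)
```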
Aren't they? They're signed, notarized, sandboxed, have ASLR and other protections, and in Big Sur they even reside on a read-only partition, IIRC...
Even unprivileged user apps can’t read and write every file their POSIX permissions say they should be able to; this is a defense against things like ransomware. When an app wants to read or write certain directories, an additional permission dialog is displayed.
I am not aware of any other desktop OS that implements these sorts of protections. The behavior you describe is certainly how it works on Linux and Windows, though.
System-level VPNs apparently continue to work as always. It's the new app-level APIs used by programs like Little Snitch that are bypassed by Apple services.
Journalists, activists, and really anyone using something they would rightly consider a VPN are fine, because system-level VPNs aren't bypassed.
The VPN leak only concerns per-application VPNs and filters, which is the mechanism used by firewalls like Little Snitch and LuLu. These cannot be applied to Apple's apps.
System-wide VPNs thankfully don't have the same problem.
Can anyone weigh in on why Apple would prevent this? It's not as though redirecting the traffic from Apple's software will let you impersonate Apple (unless you've also been able to load your own fake certificates, in which case the client is already hosed). At worst you can selectively block them, no? And you can do that anyway with a hosts file.
In my post (https://blog.cryptohack.org/macos-ocsp-disaster) I argued that OCSP was inherently a poor way to perform certificate revocation in this scenario, and that an approach based on Certificate Revocation Lists (CRLs) could be preferable. Regardless, it looks like Apple might be doubling down on OCSP but encrypting the requests, or possibly adding a new protocol altogether.
To me this was the most curious part of the entire situation. The post briefly mentions CRLite and Bloom filters; they rely on a list of all (or at least most) valid certificates, which was impossible before Certificate Transparency, so it's understandable that they are not yet widely deployed. But Apple surely does know the list of all developer certificates and could simply publish a (probably compressed) list of serial IDs of revoked developer certificates that would otherwise be valid. I don't see a good reason to use big moving parts like OCSP here, especially given the soft-fail behavior.
This is known as CRLite, and Firefox is implementing it for HTTPS certificates. https://blog.mozilla.org/security/2020/01/09/crlite-part-1-a...
Is this in fact the trade-off we’re considering, or am I misunderstanding?
The other idea is that because Apple knows all existing certificates, they could conceivably construct filters that have no false positives for those existing certificates... sort of like constructing a perfect hash.
Filter positives still get normal checks.
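To make that concrete, here is a toy sketch of the "filter first, confirm on a hit" flow: a negative answer from the local filter is definitive and needs no network traffic, while a positive falls back to a normal check. The filter parameters and the online_check callback are illustrative stand-ins, not what Apple or CRLite actually use:

```python
# Toy Bloom filter over revoked serials; positives fall back to a normal check.
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

def is_revoked(serial: str, local_filter: BloomFilter, online_check) -> bool:
    if not local_filter.might_contain(serial):
        return False             # definitive: not revoked, no network request
    return online_check(serial)  # possible false positive: confirm the normal way
```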
I think it's fear (useful telemetry) or greed (valuable telemetry) or vanity (look how many people launch x).
Really, just do it. So some customers shoot themselves in the foot. That is how they learn. Others who are perfectly capable of managing things will also be happy with actual privacy. That is how trust is built up, and with trust comes unfettered trade.
Right now power users are alienated, regular customers feel deceived, and Apple could simply do better.
Where Apple went wrong, IMHO, is:
1) not giving people a way to opt out (not just for OCSP, but for so many of the power-user-hostile decisions they're making recently), and
2) implementing it poorly, so that slow responses grind the machine to a halt (mine ground to a halt in the middle of the work day, stopping my work completely for 30-45 minutes).
I don't think malice (i.e. greed) was involved in how this whole thing turned out. As Hanlon's Razor states:
Never attribute to malice that which is adequately explained by incompetence.
Cut them some slack, but not an infinite amount. They can and should know better, and pledging to delete the logs, encrypt the checks, and add a knob to turn them off is as close to a “we fucked up” as you’ll ever get from them.
I appreciate their response.
Apple has a different, unrelated telemetry feature that is clearly labeled.
I've always understood this to be a marketing victory on Apple's side. From what I've seen, using Mac/iOS isn't any easier or more difficult than Windows/Android.
Is there even a single example that anyone can point me to of this kind of borking happening to Macs?
My relatives recently bought a Mac because they liked how it looked (big on interior design). They couldn't figure out how to navigate up a directory in Finder or access files from an external hard drive.
As soon as I gave them all iPad minis and iPhones, they found it easier to FaceTime.
Although, in my opinion, the chat problem was solved by WhatsApp: requiring phone numbers reduced the need for passwords and cut down on spam, greatly simplifying chat for the family members who aren't literate in tech or English.
The developer experience on Apple tech is pretty bad and getting worse -- but for the other 99% of the people, Apple seems to be the best tool for them.
No they haven't. Proof: the lack of tactile feedback on the iPhone 7+ is a usability impairment for seniors.
I get "never attribute to malice what can be explained by incompetence", but this is Apple. Are we to believe that this public, unencrypted endpoint was set up and is being called tens of millions of times a day because Apple engineers were too incompetent to come up with a better solution for something so fundamental (to Apple) as the security of the software running on their devices? And flying so blatantly in the face of their claim to protect user privacy?
This whole incident is completely bonkers. People should be getting fired over this and there should be an apology and a massive step back from this horrible, horrible approach.
Also in that article, the OCSP protocol is supposed to go over HTTP and not HTTPS: "If you used HTTPS for checking a certificate with OCSP then you would need to also check the certificate for the HTTPS connection using OCSP."
Furthermore, the returned information apparently includes a validity period during which the result may be cached by the client, and according to Jeff Johnson, Apple has raised that period in the wake of Thursday's incident from 5 minutes to 12 hours: https://lapcatsoftware.com/articles/ocsp.html
There's certainly room to argue about Apple's approach, but let's make sure we're arguing about the actual behavior.
Checking OCSP is a standard feature of many SSL libraries.
Doing what you suggest would be an implementation outside of what standard libraries provide, and it was probably completely out of scope until now, when stuff started burning.
Don't call relying on standard features of an underlying library a "batshit implementation". In the vast majority of cases, going with what's already there is a far better solution than NIHing your own custom thing.
More likely it’s the server side: CRLs have fallen out of popularity because they don’t work well for the scope of most CAs. On the web OCSP doesn’t have the drawbacks it does in this situation because servers themselves can staple OCSP responses instead of forcing the client to fetch them itself. Obviously the same doesn’t quite work for binaries sitting on a user’s disk.
Yes, if of course it's their library. But I assume the caller was just calling some (imaginary) function along these lines:
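Something like this, purely hypothetical; none of these names are a real Apple or OpenSSL API, they just stand in for whatever the internal library exposes:

```python
# Purely illustrative stand-ins, not a real API.
ENABLE_ALL_CHECKS = 0xFFFF  # hypothetical "run every validation you know of" flag

def verify_code_signature(path: str, flags: int) -> bool:
    """Stand-in for the library routine; with ENABLE_ALL_CHECKS it might
    quietly include an online OCSP lookup among its validations."""
    return True  # placeholder result

result = verify_code_signature("/Applications/SomeApp.app", flags=ENABLE_ALL_CHECKS)
```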
That ENABLE_ALL_CHECKS would then go out to check the OCSP server was probably not immediately known or was just seen as something that's part of, well, enabling all checks without much additional thought.
Whether the certificate also lists a CRL endpoint and whether that imaginary ENABLE_ALL_CHECKS also checks that I don't know, but possibly neither did the person who called that (imaginary) function.
Heck, the function might even default to ENABLE_ALL_CHECKS internally at which point the author of this code in question might not even have been aware of the network calls.
How is that remotely good? It makes it easy to bully or blackmail a developer into abiding by Apple's every whim, and to revoke a certificate in pure retaliation; cf. the Epic debacle. A central authority can't be trusted.
What about kernel extensions? Is it hard to tell which are certified or not, and how do you go about safely checking whether you can remove/delete any?