While other posts on this topic are too alarmist, this one is far too Apple-apologetic for my taste.
* There is no information on how often the validation happens. All this investigation concludes is that it doesn't happen when closing and immediately re-opening an app. Is it every week? Every reboot? Every hour? If the interval is much shorter than that, it's essentially the same as doing it on every launch.
* There is no justification for sending this information in cleartext. I don't follow the "browsers and loops" argument. This is a system service that only has to trust a special Apple certificate, which can be distributed via other side channels.
* Many developers only publish a single app or a certain type of app, so this is still a significant information leak. It's really not much different from sending an app-specific hash. Think: remote therapy/healthcare apps, pornographic games, or Tor - any of which alone could get you into big trouble or on a watchlist in certain regions.
I assume they will push a fix with better timeouts and availability detection.
But Apple simply has to find a more privacy-aware system design for this problem, one that does not leak this kind of data without an opt-in and does not impact application startup times (revocation lists?).
I imagine this data might just be too attractive not to have. Such a "lazy" design is hard to imagine coming out of Apple otherwise.
Most "alarmist" articles have two points you cannot really ignore, not if you don't want to end up living in interesting times one day.
1) Even plain access logs — basically what an HTTP request or a TCP connection can tell you — are a lot. Gather those for a couple of days, and you have a good map of the user. Even more so if you have a machine ID and the actual executable hash.
2) "But we are the good guys" is a non-defense. Good guys can turn bad, they can be coerced by the bad guys, and
3) since the requests fly out in plain text, there is an unknown number of questionably-aligned guys in between capable of sniffing your data. You only need one bad enough guy to get into serious trouble, if that's what they want.
This is not alarmist. It's just common sense. The same common sense that you use to avoid certain neighborhoods at certain times of night.
If you have #1 and the ability to collect #3, then you’re already an intermediary between the user and Apple.
At that point, what’s to prevent you from providing unacceptably slow service for the certs of those apps you don’t like and soft-locking the user out of particular apps on their own device?
The fact that this slows down devices boils down to a rushed or simply incompetent implementation.
It's sensible to require waiting for a certificate check the first time an app is launched, but after that, the cache validity should be indefinite, and updates should occur asynchronously in batches.
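A minimal sketch of that idea (made-up names and numbers, not Apple's actual implementation): the launch path always answers from the local cache, and stale entries are refreshed in the background so no launch ever waits on the network.

```python
import threading
import time

# Hypothetical sketch; cache structure and refresh interval are made up.
CACHE = {}                   # developer_cert_hash -> (verdict, fetched_at)
REFRESH_AFTER = 12 * 3600    # entries older than half a day get refreshed, asynchronously

def check_developer_cert(cert_hash, fetch_verdict):
    """Return a cached verdict immediately; never block an app launch on the network."""
    entry = CACHE.get(cert_hash)
    if entry is None:
        # First ever launch: one blocking check (or you could default to "allow").
        verdict = fetch_verdict(cert_hash)
        CACHE[cert_hash] = (verdict, time.time())
        return verdict

    verdict, fetched_at = entry
    if time.time() - fetched_at > REFRESH_AFTER:
        def refresh():
            CACHE[cert_hash] = (fetch_verdict(cert_hash), time.time())
        threading.Thread(target=refresh, daemon=True).start()  # refresh in the background
    return verdict
```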
The timeout settings were also excessive.
Can't forget the blatant lack of encryption. They either forgot or thought it would be too much effort to set up.
When you have good broadband, it gets so easy to assume that internets grow on them trees, latency is negligible, and servers are fast and always up.
Yes it is ridiculous that an internet query is in the path of starting a local app for the first time in X hours. If it has to be done, it could be done in a daily batch for all apps when the connection is idle, and on install. Using bloom filters to check for recent invalidations would be even better.
A positive on the bloom filter is just an indicator that you should do the bigger, more expensive (and privacy-reducing) check, like an encrypted OCSP query for that specific certificate. It's not the final verdict, specifically because of the risk of false positives. Bloom filters are a way of making it so that you don't have to do that bigger, privacy-leaking query every time.
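Roughly, with a toy bloom filter standing in for one the vendor would build and ship with invalidation data, the flow looks like this; only a hit in the filter triggers the expensive, privacy-leaking per-certificate query.

```python
import hashlib

class BloomFilter:
    """Toy bloom filter; a real one would be built and shipped by the vendor."""
    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def is_cert_revoked(cert_hash: bytes, recent_revocations: BloomFilter, ocsp_check) -> bool:
    # A negative is definitive: no network traffic, nothing leaked.
    if not recent_revocations.might_contain(cert_hash):
        return False
    # Possible false positive: only now fall back to the expensive per-cert query.
    return ocsp_check(cert_hash)
```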
IMO especially when stockholders wanting a monetary return on investment are involved. I give my money to the FSF every month, because they provide value to me, but not because I expect them to surreptitiously extract it from others and give it to me as cash dividends.
I think that's one of the big problems with public companies, especially those that have "regular people" as their main money maker (the "consumers") - invariably, the company's needs (duty) to make money for their real customers (the shareholders) will take precedence over what would be "the best thing" for consumers.
I wish we could do away with the whole "public company" thing - just imagine how much better Facebook, Google, and countless other companies (yes, Apple too) would be if they were private, and more accountable to their users.
Privately owned companies are not accountable to their users, they are accountable to their owners, just like publicly traded ones. It's just that they have fewer owners, and you sometimes get owners with really nice ideas. Other times, you get even more tyrannical owners.
Instead, what would be really nice is imagining how those companies would fare as worker-owned companies. Especially with these big internet behemoths, where the entire families of all the workers are users, the standard of user care would easily sky-rocket.
The Soviet Union didn't have a single worker-owned factory, unless you're talking about the time before Lenin ever came to power. The factories were owned by the state, which in turn was owned by a dictator and his political apparatus - workers had less freedom to control the factory than Amazon warehouse workers.
Yes, they were a despicable regime, and unfortunately their name still mars the idea of socialism. They also claimed they were democratic, and surely many believed that as well, but we haven't let that ruin democracy, so we shouldn't let their laughable claims to socialism ruin socialism.
You had like 120 years to create a version of socialism that didn't suck, or didn't degenerate into a form of oligarchy under whatever guise du jour you please. I'd call it a failed experiment by now. You know why?
Because "true socialism" (like your true Scotsman) requires ideal übermenschen on all levels everywhere. This is not how the humankind works. Humankind is full of flawed, sometimes outright malicious people, and you have to deal with that.
Most versions of socialism at some point came up with a need to breed ideal happy socialist people who won't keep breaking their paradise all the time. And until this Übermensch is born, they chose to break and bend the rest into behaving, like bonsai trees. Of course, Dear Leader and their team are exempted from being broken or bent, and many others aspire to become like them. This is how every socialist regime to date grew into a totalitarian oligarchy.
Thank you but no thank you. I'd better choose a form of government that adapts to and deals with people as they are, and doesn't try to force them into some better version according to their understanding.
Private companies are still accountable to their shareholders. But I do think that the very public number of a share price encourages slightly different behavior than a private, illiquid, and probably out-of-date number.
Yeah, not sure what the poster is trying to say here. Both forms almost inevitably result in doing anything that is legal to maximize profits (and oftentimes illegal or gray at best). However, I haven't seen anyone propose a decent alternative to corporation status for such large entities. The other option is state owned, and that is almost always an utter failure. Even China allows their "state owned" businesses a lot of leeway to account for the ups and downs of capitalism and market forces.
> "But we are the good guys" is a non-defense. Good guys can turn bad, they can be coerced by the bad guys,
That’s true, but not very useful, since if Apple turns bad or is coerced by the bad guys, they could just issue an OS update that begins doing new bad things anyway.
- This gives Apple access to data right now. If they turn evil in the future, they have access to data from the past, which gives them more leverage.
- The security industry (overall) pays attention to Apple updates. If Apple turned evil in the future by issuing an OS update, someone might notice it happening. But if they start organizing this data and handing it off to the government, they don't need to change anything public or issue an update. They can do it all serverside without anybody noticing.
- One of the ways we tell whether a company is trending evil is that we pay attention to how its willingness to invade people's privacy evolves over time. This is a more subtle point.
Imagine that I was administering your phone. There's trust involved in that kind of relationship; if I turned evil, I could install some tracking software or viruses and violate your privacy. So imagine that one day you find out I have installed tracking software on your phone, but when you ask me about it, I say, "it doesn't matter whether or not the tracking software is installed on the phone. If you trust me not to invade your privacy, then you might as well trust me not to look at the data the software is collecting. As long as you trust me, it makes no difference what I install on your phone, since you can trust me not to use that software to violate your privacy."
You probably wouldn't be satisfied by that excuse. In reality, seeing that I am now the type of person who is willing to install tracking software on your phone should raise the suspicion that I have either already turned evil or that I am on my way to turning evil.
So similarly with Apple, it's true that trusting Apple means putting them in a position where they could start collecting people's private data. The fact that we have now seen them start collecting private data means that we should be more suspicious that Apple either is already evil, or at least that it is more willing now to play with evil ideas than it used to be.
It seems like several people are assuming that Apple is storing the data now and that it is personally identifiable. My assumption was that, of course they would not do that. But of course I could be wrong.
I think the bigger point here is, if Apple started to store the data and make it personally identifiable, you would have no way of knowing that they had.
They wouldn't need to install anything new on your computer to start tracking you in more detail or building a user profile on you, they could just start doing it invisibly behind the scenes on a server someplace. That's a big deal, because even though you're trusting them to administer your device, if they did start pushing out spyware, there's a good chance a security researcher would notice it. But there's no way for us to know what Apple does with this data once it leaves our devices.
I just don’t think that’s a very big deal. Did anyone notice when Apple shipped this update? Maybe so, but it certainly wasn’t a huge ongoing issue in the community. It seems pretty clear that they could get away with a minor evil update if they decided to turn evil.
> There is no information on how often the validation happens.
I wrote a blog post about this. My analysis indicates that Developer ID OCSP responses were previously cached for 5 minutes, but Apple changed it to half a day after Thursday's outage, probably to reduce traffic:
Pure speculation from me, but my guess is that the intention is to check an app on every launch, and the 5 minutes is there just to lower the chances of DoS from an app getting repeatedly launched for some reason.
Let's not forget the inherent elitism of thinking everyone has gigabit internet at their disposal. I happened to live in a developing country for years, and this "waiting up to half a minute until the program opens" has been my daily experience for a long, long time.
If it's slow in developing countries, it is gonna be slow in smaller towns as well as rural areas in the US and Canada, and depending on where the closest Apple server is, perhaps all of Australia and New Zealand.
If you catch malware in the wild you don’t want to wait half a day for the cache to expire.
Negative responses are typically cached for short periods of time. Can you imagine if people cached NXDOMAIN for half a day and someone creating a record had to wait 12 hours for it to go live because someone queried it?
If you care about user privacy, you don't upload stuff from the user side, you download the list of trusted&untrusted certificates to the user's machine and take the decision there.
This is how antiviruses have always worked, without affecting user privacy (of course, most antiviruses also did other things that DID affect user privacy, but malware detection at least worked perfectly fine without it).
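A sketch of that model, with a made-up URL and file format: the client syncs the revocation list periodically, and every launch-time decision is made locally, so nothing about individual apps or launch times ever leaves the machine.

```python
import urllib.request

# Hypothetical endpoint and format (one hex-encoded revoked-cert hash per line).
REVOCATION_LIST_URL = "https://example.com/developer-cert-revocations.txt"
revoked = set()

def sync_revocation_list():
    """Run once a day (or on a push notification); this is the only network traffic."""
    with urllib.request.urlopen(REVOCATION_LIST_URL) as resp:
        lines = resp.read().decode().splitlines()
    revoked.clear()
    revoked.update(line.strip() for line in lines if line.strip())

def may_launch(developer_cert_hash: str) -> bool:
    """Purely local decision at launch time; leaks nothing about which app is opened."""
    return developer_cert_hash not in revoked
```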
> It’s relevant because you argue that there is no value to having the ability to do this.
No, I did not. We haven't talked about that other mechanism, so I've said nothing about it here either positively or negatively.
> Something you have dismissed as a non problem.
I said "Zoom had a serious uninstaller bug". So no, I did not dismiss it as a non problem. It just has nothing to do with Developer ID certificate OCSP.
Please stop putting words in my mouth or completely warping the words that I do say.
You said “But if you have a cached OCSP response for the cert of a malware author, then you've already launched their app, so it's probably too late.”
I.e. once you have launched the app, the damage is done.
This is not the case, and the Zoom situation is a clear counterexample. Even if a problematic app has been launched one or more times, it is still worth preventing subsequent launches if you can.
It doesn’t matter what mechanism is used to prevent the subsequent launch. This applies to any mechanism including OCSP. The Zoom example is a refutation of the particular point you made, a point which dismisses a real security concern.
It demonstrates that there is value in Apple having the ability to prevent harmful software from running, no matter how many times it has already been run.
> This is not the case, and the Zoom situation is a clear counterexample.
I was talking about MALWARE. As I said before, Zoom is not malware, so no, it's not a counterexample.
This is my last reply to you. You're clearly not interested in having a good faith conversation, you continue to misinterpret me and want to score "internet points" or something. I'm done.
Accusations of bad faith are unhelpful, especially in a technical discussion like this.
Zoom is not malware in that as far as we know it isn’t Zoom’s intent to cause harm.
However in this instance it exhibited a behavior which many forms of malware exhibit - opening an insecure or exploitable port. It was shut down because it was behaving the way some malware behaves.
It’s a perfectly reasonable example of using these types of mechanism to mitigate a real security issue.
You can’t seriously be claiming that malware never opens ports, or that malware always does all of its harm on the first run.
Therefore the use of the distinction ‘malware’ is arbitrary and irrelevant.
The mechanism is useful to protect against vulnerabilities, regardless of whether the vulnerabilities were intentional or not.
This was a good read, as was the HP OCSP incident you linked to in the post. With regard to the HP cert being revoked, I'm amazed that this didn't get more attention. You would think there would be checks in place to calculate something like "well, there's been 100 million OCSP checks for this printer driver in the last 24 hours, so we might not want to revoke its cert."
Any idea how they changed the cache time remotely?
If the OS is honouring the cache-control headers of a plain-text response, this has its own security implications.
The response is signed by Apple, and presumably (!) your Mac is validating that signature correctly. I haven't checked if they are using stapling, but that would be the sensible way to do it, in which case it is a server-side parameter (though possibly with client-side limits too, but you'd need to disassemble the binary).
> [article] editing your /etc/hosts file. Personally, I wouldn’t suggest doing that as it prevents an important security feature from working.
Exactly the apologetic that you are talking about. Everyone has a different security update cadence (e.g. patch Tuesday for Microsoft), but each application launch is not a reasonable one. Given Apple's recent propensity for banning developers who stand against them (whether you agree with those developers or not), this is aimed squarely at dissent.
I don’t see how you can so confidently reach that conclusion. It seems perfectly plausible that Apple wants a way to quickly quash malware, worms, etc.
> I don’t see how you can so confidently reach that conclusion.
I'm not going to 100% say that control is the reason Apple is doing this. I'm sure that they do genuinely want a way to quickly quash malware, worms, etc...
But we've also seen that Apple is clearly willing to use security features to ban developers that stand against them, so I don't understand how people can be so confident that they wouldn't be willing to use this feature in the same way, even if they did internally think of it as primarily a security tool. It would be very consistent to how we've seen app signing evolve from a pure security feature into a contract-enforcement tool.
Security features should not be used for contract enforcement.
My point stands, Apple introduced a security feature then used it for contract enforcement against a company that opposed them. There is no reason to believe that they wouldn't do the same thing here. Whether or not you believe that Epic was the villain in that story is irrelevant to the current conversation.
> Oh, Epic broke their contract and therefore I think can be seen as bad for security.
> If they are willing to break their contract for money what is to stop them from harvesting my data for money?
This argument was weak enough that a judge specifically rejected it after Apple failed to prove any kind of immediate threat was being presented by the Unreal Engine.
> what is to stop them from harvesting my data for money?
The fact that the contract dispute in question had nothing to do with data harvesting in the first place.
> I bought a Mac because of that
That's fine. And if Apple wants to try and tie all of this to security, then honestly, whatever. But when this signing feature came out, people made fun of critics for suggesting Apple would do the exact thing you're now saying they're justified in doing. Try to lump it under the banner of security, try to lump it under the banner of whatever you want. When avalys says:
> I don’t see how you can so confidently reach that conclusion. It seems perfectly plausible that Apple wants a way to quickly quash malware, worms, etc.
they're expressing doubt that Apple would do any of the things that you're praising Apple for doing with app signing. And the fact remains, it's very plausible that they would use this as a tool to enforce contracts. You're in the comments, right now, saying that they should use this feature as a tool to enforce contracts.
So what exactly do you disagree with me on? It still seems pretty reasonable to believe that Apple will be willing to use app logging as a contract enforcement tool, and that when they do people will jump on HN to defend them, given that you are currently defending them for doing so right now.
The argument over whether preemptively blocking app updates based on a vague sense of 'distrust' falls into the category of security is a semantic argument, and I don't really care about digging into it. The point stands, people are worried that Apple will use this feature to target apps beyond normal malware, trojans, or worms, and they are right to be worried about that.
Apple didn't ban them for standing against them. Apple banned them for breaching their contract.
It's not on each application launch. It's from time to time. It's per application, since any app might be found to contain malware in the future. Also, if the app isn't signed, there is no check.
They have used security features of their OSs to ban developers who were simply in breach of contract with Apple, but not distributing malware or any other kind of content harmful to users.
Sure, Apple was completely in the right to stop distributing Epic software after they breached their contract with Apple. But Epic didn't breach any contract with their users, so there was no reason to remove Epic's software from user devices, or affect companies redistributing Epic software. Those are obvious overreach.
Epic lied about the content of their software. If Apple doesn’t remove software from suppliers who lie about the contents, people will continue to exploit this.
There was no overreach. This was the consequence of Epic intentionally lying about the content of a software update.
It’s also worth pointing out that Epic expected this result, and caused it on purpose. Both Apple, and the court gave them the chance to rectify the situation which they refused.
That makes Epic responsible for the outcome. No one else.
Didn't Epic actually create an entire presentation video advertising the contents of their update?
Again, I fully agree that Epic was knowingly in breach of their contract with Apple, and wanted to use the public as leverage. But that doesn't, in any way, make their update malicious for the end user.
The presentation video was released after the update was submitted to the store with the contents hidden and activated later.
As for whether the update was malicious for the end user, we could say we trust Epic to operate a payment method, and therefore the update was not malicious.
But there are many actors who could use this exact same methodology, and then the update would be malicious.
Such Trojans exist on Android.
Security policies always prevent behaviors that could be used for non-malicious purposes.
If the argument is that the end users should be the ones to decide, it’s really just another way of saying that Apple shouldn’t be allowed to enforce any security policy.
Of course there are those who believe that Apple shouldn’t be able to enforce security policies, but there is no overreach here.
It is nevertheless the case that some users are VERY LOUD on particular topics, essentially repeating themselves on many leaves of the discussion. I find this very tiresome. It isn't an ad hom to point this out.
This is true. I’d be totally up for a ‘no repetition’ rule, however that’s completely impractical.
I find myself repeating certain points, usually because I am responding to repeated points.
Having said this, I do it because sometimes the person I am responding to says something new. It sounds like their point is a repeat, but they turn out to have a point of view that is different when you challenge them about it.
The loop argument makes no sense at all. HTTP is being used as a transport for a base64-encoded payload; the actual process of verifying the validity of the developer certificate is done by the service behind that Apple URL - not by the HTTP stack.
There is no justification not to switch to HTTPS here.
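For reference, the base64 payload in question is just a DER-encoded OCSP request. A rough sketch using the Python cryptography package (with local PEM files standing in for the Developer ID cert and its issuing CA) shows everything a passive observer gets to see over plain HTTP:

```python
import base64
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

# Stand-ins: the app's developer certificate and the CA certificate that issued it.
with open("developer_id_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuing_ca.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

request = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(cert, issuer, hashes.SHA1())   # CertID: issuer hashes + serial number
    .build()
)

# This is the entire payload that travels in the clear (sometimes base64-encoded in
# the URL): issuer name hash, issuer key hash, and the certificate serial number.
print(base64.b64encode(request.public_bytes(serialization.Encoding.DER)).decode())
```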
It's convention. With browsers, you wouldn't want to introduce a recursion point in TLS (we already have certificate chains, and now we'd get OCSP check chains and where does that terminate?). Apple just did what everyone else does for OCSP, in a way which is accepted practice for good reasons.
Now in this specific instance, OCSP is being used in quite a different use case. For one, the plaintext issue is not a problem when browsing, as attackers can see what sites/certs you're accessing in the clear anyway (certificates are plaintext in TLS sessions), while app launch is an otherwise offline activity. So in this instance it makes sense for Apple to switch to HTTPS (and if they have OCSP on the server cert for that, that should go via HTTP to avoid loops or further issues).
But what Apple did here is just standard practice, it's just that there happen to be good reasons to diverge from the standard here.
How do we know the certificate presented by the OCSP server has not been revoked? We can't ask the OCSP server, because that's what we're trying to handshake with!
The loop is very real and non-trivial to solve. I'd expect something similar to what ESNI/ECH does, leveraging DNSSEC + DoH, may be possible now, but that's a recent development.
Well, the problem is that OCSP is leaking which applications you open (and when you open them), which is the big deal IMO. One solution would be for the OCSP client to check the HTTPS certificate in cleartext once at startup (and maybe once every day or so thereafter), and to use HTTPS for all subsequent application checks.
I don't really see how that could cause a loop here. This way, an attacker can only see:
- When you boot your Mac, because it verifies the HTTPS certificate once.
- When the OCSP daemon makes a cleartext request to check that the HTTPS cert is still OK.
- That you have just opened an application (but not which application).
IMO that still leaks an unacceptable amount of metadata, but it is miles better than using cleartext. Maybe a bloom filter would be a much better solution here, plus having the daemon regularly fetch bad signatures that are not yet in the filter. Sure, the filter may hit false positives sometimes, but in that case the OCSP server could be checked, and Apple could see if a certificate has a high rate of false positives and adjust the bloom filter accordingly.
Even if there was some wrinkle about the loop argument that I didn't understand, and HTTPS is out: Apple could encrypt the base64 payload, and the sniffable info is reduced to which computer is phoning home, which is something that someone with the ability to middle comms probably knows already.
"roll your own encryption and send it over HTTP" is a bad idea in general but... this is Apple, they can and do implement encryption. Why not here?
Isn’t OCSP an open standard for handling certificate revocations? The standard specifies plaintext, because the standard can’t assume that the client has a way to form an encrypted connection to the revocation list.
A network intermediary blocking or altering the TLS is an active attack. Plain HTTP is also vulnerable to that, so unauthenticated TLS is no worse than the current situation.
TLS encrypts the payload just fine if you want that. That’s what TLS is for.
PS: You don’t encrypt something to someone else using your own public key.
I'm talking about when they block just the OCSP host's TLS port. Heaps of places whitelist HTTPS for particular sites and inspect the content to block TLS elsewhere. Appliances that block TLS via packet inspection are a dime a dozen. But the query/response fields can be an opaque encrypted blob, and it would get through. Every Apple device obviously has the Apple pub key, and hence they can send encrypted messages back to Apple without needing any further PKI.
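A sketch of what that could look like, with a freshly generated RSA key standing in for the vendor key that would ship with the OS: wrap a one-time AES-GCM key for that public key and send only opaque ciphertext, so even a plain-HTTP path reveals nothing about the query.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Placeholder for the vendor public key that would be baked into the OS.
vendor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_public = vendor_private.public_key()

def seal_for_vendor(payload: bytes):
    """Hybrid encryption: random AES-GCM key, wrapped with the vendor's RSA public key."""
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, payload, None)
    wrapped_key = vendor_public.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # (wrapped_key, nonce, ciphertext) can travel over plain HTTP as an opaque blob.
    return wrapped_key, nonce, ciphertext
```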
Wouldn't an anonymity scheme such as [1] work in this context? Send only part of the hash of the app's certificate, and have the server send you all possible revoked certificates?
I assumed there were too many revoked certificates for something like this to be viable, but I'm not surprised it is.
You probably can't update the whole list that often though, compared to Apple's current OCSP revalidate time of 5 min. [edit: seems "delta patches" are supported by crlite so maybe that can work too]
> I assumed there were too many revoked certificates for something like this to be viable, but I'm not surprised it is.
Given that Apple currently doesn't even encrypt the requests in transit, I think they just didn't pay much attention to the problem, which I think is the main reason they haven't adopted it yet. As for the number of revoked certificates, I'm not sure it's larger than the number of revoked TLS certificates, given that there are way more websites out there than there are registered Apple developers.
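A sketch of the k-anonymity variant (hypothetical endpoint, modeled on HaveIBeenPwned's range API): only a short hash prefix leaves the machine, many certificates share that prefix, and the final comparison happens locally.

```python
import hashlib
import urllib.request

# Hypothetical range endpoint: returns the hash suffixes of all revoked certs
# whose SHA-256 starts with the given 5-character prefix, one per line.
RANGE_URL = "https://example.com/revoked/range/{prefix}"

def is_revoked(developer_cert_der: bytes) -> bool:
    digest = hashlib.sha256(developer_cert_der).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character prefix is revealed to the server.
    with urllib.request.urlopen(RANGE_URL.format(prefix=prefix)) as resp:
        revoked_suffixes = {line.strip() for line in resp.read().decode().splitlines()}
    return suffix in revoked_suffixes
```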
> Such a "lazy" design is hard to imagine coming out of Apple otherwise.
That's my biggest issue personally. There's a bit of an information leak, but most wouldn't care and would just do the standard thing and be done with it. Firefox still uses OCSP in some cases...
My issue is that a company like Apple, which currently markets itself as a company that cares about the privacy of its users, would have let this come out of that same process that's supposed to care... and still hasn't said that it was a mistake and that they are correcting it.
They could easily use k-anonymity like HaveIBeenPwned, or even push the revocations, which would mean no cache, which is even better for their security argument.
There's nothing alarmist here, it's all alright; it would just mean that this is the same false advertising that so many companies do, but still, it is important to be aware of.
Call home features can be spoofed by a poisoning type of attack upstream in various forms.
This is not bulletproof, and it's a cop-out with a poor solution for security.
You know who has effective call home features? Vendors that sell to major enterprises. It is a natural progression and a particularly nasty environment to live within.
If they are legitimately trying to protect the brand through force or merely forcefully controlling the app ecosystem... it's an abusive relationship to be in.
The fact that this is not configurable without dead-lettering the route shows that tethering is something they consider a viable security measure.
Apple has done a fantastic PR job regarding privacy. I am more skeptical about the status of actual privacy given their iMessage situation and now this.
> Apple: "We have never heard of PRISM"[115] "We do not provide any government agency with direct access to our servers, and any government agency requesting customer data must get a court order."[115]
Certainly American companies are subject to warrants and NSLs, but Google (to give one example) had its dark fibre connections between data centres tapped by the NSA. Is that the "participation" that was referred to by the Snowden documents?
As to Apple's claims that they didn't participate in PRISM, I think they were just lying. Clapper lied to Congress as well, so this isn't unheard of. They would likely have breached their government contract by telling the truth. That being said, them having never heard of the program name might be true, because it might not have been known to them under that name, but that's just a detail.
Apple was not lying because “PRISM” was an internal source identifier at the NSA for the process of acquiring data through the FISA warrant process. Apple never heard the word PRISM; they got FISA warrants and replied to them as required by law.
This is clearly indicated on the PRISM Wikipedia page that was linked above.
> PRISM is a code name for a program under which the United States National Security Agency (NSA) collects internet communications from various U.S. internet companies.[1][2][3] The program is also known by the SIGAD US-984XN.[4][5] PRISM collects stored internet communications based on demands made to internet companies such as Google LLC under Section 702 of the FISA Amendments Act of 2008 to turn over any data that match court-approved search terms.
> Apple was not lying because “PRISM” was an internal source identifier at the NSA for the process of acquiring data through the FISA warrant process. Apple never heard the word PRISM
As I've said, that's a detail and splitting hairs. If a sentence has multiple interpretations and one of them is true, but you phrase it in a way that most people interpret the sentence in the wrong way, you are intentionally deceiving people. They should have said "we have never heard the name PRISM" or something like this.
I thought you just ended up in PRISM, you don't "join" it? Just like Google found out from the Snowden leaks and then encrypted all their DC-to-DC fiber.
I think there were aspects of PRISM that required cooperation from providers like Google. Like the NSA would send queries to them and they would return emails or what have you that match those queries. Though of course this “cooperation” is required by law.
They back up the private key to iCloud unless you manually disable backups. So even though iMessage is advertised as E2E encrypted, for the vast majority of users, Apple can read each and every message.
(And even if you disable backups, Apple can still read most if not all of your messages, because the persons on the other side of the conversations have not disabled backups)
This stance undermines the point of E2E. The messaging system is still E2E even if people back up their plaintext messages or their key on non-E2E storage.
Having your messages deleted because you forgot your iCloud password is good security but a terrible default.
A better privacy solution would be to sync revocation lists every so often (and, if you must, right before opening a new app). Is there any privacy-preserving reason not to go this direction? How often would you expect certificates to be rescinded? You could also use a bloom filter to significantly reduce the size of the synced data, at the cost of a small false-positive rate.
With OCSP Stapling the remote web server whose identity you want to assure yourself of periodically gets an up-to-date OCSP answer about its own certificate. When you connect to that server, it gives you the certificate, and the OCSP answer, which assures you that the certificate is still good, and is signed by the Issuer of the certificate.
So, you visit Porn Hub, Porn Hub knows you visited and can reasonably guess it's because you like porn (duh). Porn Hub talks to their CA. The CA knows Porn Hub are Porn Hub and could reasonably guess it's a porn site (duh) but this way the CA doesn't learn that you visited Porn Hub. That's Privacy preserving. Nobody learns anything you'd reasonably expect they shouldn't know.
But how can we apply that to an application on your Mac? If every app reaches out from your Mac to Apple to get OCSP responses, they learn what you have installed, albeit I guess you can avoid telling them when exactly you ran it. This is enormously more costly and not very privacy preserving.
CRL-based ideas are much better for your privacy, although they might cost you some network traffic when the CRL is updated.
Of course one reason for Apple not to want to do CRLs is that they're transparent and Apple is not a very transparent type of company. With OCSP you've got no way to know if and when Apple revoked the certificate for "Obvious Malware II the sequel" or equally for "Very Popular App that Apple says violated an obscure sub-clause of a developer agreement".
But with CRLs it'd be easier for any researcher to monitor periodically for revocations, giving insights that Apple might not like. Do revocations happen only 9-5 Mon-Fri Cupertino time? Are there dozens per hour? Per day? Per Year?
That's assuming that the OCSP responder is hosted by Apple, which doesn't have to be the case. It sounds shitty from an app developer perspective, but app developers would have an incentive to host endpoints that make their apps runnable on macOS. This improves privacy by distributing OCSP traffic across organizations, but also puts the burden of verification on app developers. Not sure if this would harm or help the app ecosystem.
Wouldn't a properly diffed CRL be much smaller than a hash payload on every app launch? Say, a request like "give me all the revoked certificates since I last asked."
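A sketch of what that diffed sync could look like (endpoint and parameters are made up): the client only asks for revocations added since its last sync, so steady-state traffic is tiny and launch-time checks stay entirely local.

```python
import time
import urllib.request

# Hypothetical delta endpoint: one revoked-cert hash per line, added since `since`.
DELTA_URL = "https://example.com/revocations?since={since}"
revoked = set()
last_sync = 0

def sync_delta():
    """Merge only the revocations published since the previous sync."""
    global last_sync
    with urllib.request.urlopen(DELTA_URL.format(since=last_sync)) as resp:
        lines = resp.read().decode().splitlines()
    revoked.update(line.strip() for line in lines if line.strip())
    last_sync = int(time.time())
```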
> But Apple simply has to find a more privacy-aware system designs for this problem which does not leak this kind of data without an opt-in and also does not impact application startup times. (revocation lists?)
The idea that you need Apple to certify the developer of the software you run on your phone is nonsense, though. You don't do that on your computer, so why do you need to be nannied on your phone?
THANK YOU. I also see no reason that OCSP checks cannot support both HTTP and HTTPS. If there is some reason then the protocol should be split into two, one for unencrypted checks for things like SSL certs, and another for all other/ dev cert checks over HTTPS.
> Apple fanatics routinely deny evidence to support their sorta-religion.
As do anti-Apple fanatics. That’s what being a “fanatic” means. You can say the same about gun fanatics, or meat fanatics, or vegetarian fanatics, or Android fanatics. It’s staggering how often people who are anti something fail to perceive the irony in behaving exactly in the manner they are decrying. Someone having a contrary opinion doesn’t make them a fanatic.
Going back to the original topic, Apple hardware/software: I've used Apple hardware and software (company-issued MacBook Pro and iPhone 7/8).
The software is great as long as you want to stay within Apple-defined boundaries. If you want to go outside that, it's an experience similar to, if not worse than, using GNU/Linux.
The hardware is great when the machine is brand new but decays very quickly, and it's not designed to be serviced by end-users, specialized users, or specialized shops -- you're supposed to return it to an Apple store and pay an expensive price for basic maintenance. As an example, cleaning the fans of dust is very important in those machines, but you have to buy special hardware to take off the screws, and in general you risk breaking something. Keyboards failed spectacularly in the last gen, and Apple waited something like two years before fixing it. Audio is great, until it breaks. My MacBook Pro (15", top of the line) couldn't sustain full volume, and distorted audio after ~30 seconds at full volume (imagine that during a conference call in a meeting with other people). The screen is great, but the glass panel retained ALL of the fingerprints and was a PITA to clean; I had to buy special glass-cleaning liquids. WTF.
All the above issues appeared shortly after the first year of the laptop's life. Call me an anti-Apple fanatic, I don't care, but I expected more from a 3500+€ machine.
At the new job I've been given a 13" Dell Latitude 7390. It works flawlessly, it rarely skips a beat, and it has none of the problems stated above. Fuck Apple.
You’ve missed my point, which is that you could remove the word “Apple” from your original comment and it would have made no difference. One kind of fanatic does not excuse another, nor have I claimed it does.
There’s no need to list Apple’s faults. I’m aware of them and support a large part of Apple criticism in the Tim Cook era (and not just technical[1]), including most of yours.
Where we disagree is in the insinuation the author is a fanatic simply for defending Apple. They’ve written a technical post and gave their conclusions, which may indeed sound apologetic but are far from rabid fanaticism.
> Fuck Apple.
In sum, it’s fine to decry the company but I disagree that people who like it and accept its tradeoffs should be immediately labeled as extremists.
> In sum, it’s fine to decry the company but I disagree that people who like it and accept its tradeoffs should be immediately labeled as extremists.
Well, Apple is notoriously abusive of the developers on its platform. Two things in particular are decried across most of the ecosystem: the 30% cut they take off pretty much everything, and the vague terms that you have to comply with and that they enforce in a mostly random way (app gets pulled from the App Store, they won't tell you why, won't tell you what you did wrong).
Now add the exorbitant prices for their low-specced, low-quality hardware.
Now add the continual rip-off of their users.
Now add the subject of the original linked page.
At this point I think that yes, defending Apple is extremism.
It's fine to accept the tradeoffs, it's not fine to pretend they do not exist:
- "Yeah, this stuff is unreasonably expensive but we have to use it"
that is honest
- "the Apple ecosystem is the best for creatives and developers and what Apple does across the whole spectrum is fine"
> I find that you're the kind of person that only find what they're looking for.
I expressed an opinion on a belief you seem to hold, not a value judgement on you. I don't presume to know which "kind of person" you are from a short text-based interaction pertaining to a single subject matter. I'll ask you to extend me the same courtesy.
> I don't follow the "browsers and loops" argument.
To log in to my banking account, I need the correct password. No problem, I keep it in a password manager. To open the password manager, I need the correct password. No problem, I keep it in a password manager. To open the password manager, I need the correct password. No problem, I keep it in a password manager. To open the password manager, I need the correct password. No problem, I keep it in a password manager. And so on.
Imagine that, but for “verifying the HTTPS connection”.
But there’s an easy fix. I use it with my password manager. To log in to my bank account, I need the correct password. No problem, I keep it in a password manager. To open the password manager, I need the correct password. No problem, I already know it. If I don’t know it, I look it up from a less secure source.
Technically what I'm describing is that you can vary the behaviour of OCSP lookups: when the certificate you're checking belongs to the OCSP server itself, downgrade and check over plain HTTP. Yes, it would mean one more TLS connection. Yes, it would mean an extra OCSP lookup. But just one, and just for the OCSP server itself. Which means privacy is preserved with regard to which developer certificate you're checking. Only Apple's OCSP server certificate would be checked in the clear, and that could easily be cached.
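In pseudocode terms (the host name is a placeholder), the policy described above is a single special case in the lookup path, which terminates the recursion after exactly one hop:

```python
OCSP_HOST = "ocsp.example.com"   # placeholder for the responder's own host name

def ocsp_lookup_url(cert_subject_host: str) -> str:
    """Check everything over HTTPS, except the responder's own cert (to avoid the loop)."""
    if cert_subject_host == OCSP_HOST:
        # Checking the OCSP server's own certificate: fall back to plain HTTP so we
        # don't recurse into another TLS handshake that itself needs an OCSP check.
        return f"http://{OCSP_HOST}/status"
    return f"https://{OCSP_HOST}/status"
```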
TLS involves both cert checking (server is truly who they say they are and not a MITM) and Diffie-Hellman key exchange to set up session keys (messages are end-to-end encrypted).
You can DH with an untrusted cert. It might be interceptable.
HTTP is always interceptable.
But there should be zero reason not to set this connection up with a full proper cert. HTTP is just mega sloppy.
As others mentioned, you can bootstrap TLS by first checking OCSP (in the open) on your cert auth service, then use that opaque, freshly-checked connection to check the rest.
Wait. Is it not common knowledge that Android and iOS log every application you open down to the exact millisecond you open and close them?
Is it not common knowledge how telemetry works for the operating systems? They generally batch up a bunch of logs like this, encrypt them, compress them, and then send them to the mothership (hopefully when you're on WiFi).
Logging and telemetry are completely separate use cases. For example to do some kind of battery use accounting you need some record of when exactly which app was active.
And no, it's not widely known or documented - there is no good description of what telemetry exists or contains on iOS that I know of.
Yeah, because encrypted data should be incompressible, as it should be indistinguishable from random data, which is also incompressible.
Reality is a little different of course, and compression can cause problems for encryption because compressed data tends to be highly predictable (especially things like compression headers and compression dictionaries). This allows for potential “known/chosen plaintext” attacks on the encryption.
Some classic examples of this type of attack are breaking Enigma (known plaintext, no compression) by assuming the content of some messages[0] and the more recent CRIME[1] attacks against TLS using compression to help produce a chosen plaintext.
The simple solution in these scenarios is to avoid using compression completely.
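The premise that good ciphertext doesn't compress is easy to demonstrate; in this sketch, random bytes stand in for well-encrypted data:

```python
import os
import zlib

plaintext = b"GET /apps HTTP/1.1\r\nHost: example.com\r\n" * 100   # highly repetitive
ciphertext_like = os.urandom(len(plaintext))                        # stands in for good ciphertext

print(len(plaintext), len(zlib.compress(plaintext)))            # shrinks dramatically
print(len(ciphertext_like), len(zlib.compress(ciphertext_like)))  # barely changes, usually grows a little
```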
> macOS does actually send out some opaque information about the developer certificate of those apps, and that’s quite an important difference on a privacy perspective.
Yes, and no. If you're using software that the state deems to be subversive or "dangerous", a developer certificate would make the nature of the software you are running pretty clear. They don't have to know exactly which program you're running, but just enough information to put you on a list.
> You shouldn’t probably block ocsp.apple.com with Little Snitch or in your hosts file.
I never asked them to do that in the first place, so I'll be blocking it from now on.
> I never asked them to do that in the first place, so I'll be blocking it from now on.
Apple's working on making sure you can't block it. They already keep you from blocking their own traffic with Little Snitch and similar tools: https://news.ycombinator.com/item?id=24838816
I use adblockios and haven't upgraded because they unblocked the blocking. I keep hearing about Charles; I wonder if it is special or if it doesn't really block everything.
Until they front it via cloudflare or aws. I got hit by AWS blocking when setting up a network in Russia for the 2018 World Cup - my unifi controller was on an ec2 instance that was blocked due to telegram shenanigans. Worked around the problem but shows that blocking an AS can lead towards an unusable computer.
You could still block it externally by running a DNS sinkhole (a la PiHole) on the same network, provided that you can still configure the DNS resolver.
I don’t think it’s actually just in Big Sur. At the bottom of this post describing how to stop them from hiding traffic, they mention someone did a test on Catalina and ran into an issue with the Messages app:
Privacy concerns aren’t the only reason to block it. It also makes software way more responsive. I was experiencing daily freezes that would disconnect my keyboard and mouse (particularly when waking the computer or connecting to an external display) on my 2020 MacBook Air before adding the entry to my hosts file which fixed the issue entirely. It was so pronounced and irreparable by Apple support technicians that I nearly ended up getting rid of the computer.
* Your Mac periodically sends plain text information about the developer of all apps you open, which in most cases makes it trivial for anyone able to listen to your traffic to figure out what apps you open. Better not use a Mac if you're a journalist working out of an oppressive country.
* Because of this Macs can be sluggish opening random applications.
* A Mac is not a general purpose computing device anymore. It's a device meant for running Apple sanctioned applications, much like a smartphone. Which may be fine, depends on the use case.
> You should be aware that macOS might transmit some opaque information about the developer certificate of the apps you run. This information is sent out in clear text on your network.
Wow, that is bad from a privacy perspective!
Since certificate revocation is rare, it makes more sense to simply periodically update a list of revoked certificates instead of repeatedly checking each certificate. That would solve the privacy issue while still allowing certificates to be revoked.
OCSP seems like a bad idea for web browsing for similar reasons.
Maybe there is a bizarre reason why they don't use HTTPS on their OCSP endpoint. Perhaps they want to avoid a situation where the OCSP server's certificate itself is revoked, or they anticipated that the OCSP server would still be in use 10 years later, when the currently used crypto could have been marked as insecure and removed, thus preventing older clients from working. Or it could be laziness, but come on...
Precisely. This would require more work, but it would only leak the OCSP server’s revocation request, and would make OCSP both more secure (caching OCSP server validity rather than the original certificates) and more private (due to SSL).
You can do unauthenticated TLS, which is no worse than plaintext HTTP, and foils passive listeners by providing privacy. You could also trust your existing trusted certs (prior to OCSP update) when doing the OCSP update, which, again, is no worse than plaintext HTTP.
Apple knows this. They have cryptography experts.
Taken in context with their backdooring of their e2e messenger and collaboration with military intelligence on FISA 702, I tend not to give them the benefit of the doubt any longer. Apple knows how to take pcaps.
There are only so many times the OS design gets to leak either keys or plaintext remotely before you need to stop assuming ignorance over malice.
I don’t know how many times that is, but it’s less than ten, probably less than 5, and because it’s a count of legitimate “assume ignorance”, then “goto fail”[2] also counts in the tally.
Between this OCSP plaintext telemetry leak, iMessage's default key escrow, the scrapping of the plan for E2E backups (which would have fixed the key escrow backdoor) at the behest of the FBI[3], and "goto fail" not authenticating TLS, we're at 4.
I’m not even counting the recent story about Apple’s history of willing collaboration with intelligence agencies to make a custom classified firmware for the iPod to aid in espionage.[1]
As Goldfinger’s famous saying goes: “Once is happenstance. Twice is coincidence. The third time it’s enemy action.”
I was initially shocked by this as well so I did some more reading on OCSP and it seems this is being addressed through OCSP stapling.
According to Wikipedia "[OCSP stapling] allows the presenter of a certificate to bear the resource cost involved in providing Online Certificate Status Protocol (OCSP) responses by appending ("stapling") a time-stamped OCSP response signed by the CA to the initial TLS handshake, eliminating the need for clients to contact the CA, with the aim of improving both security and performance."
I'm not aware how widely deployed OCSP stapling is in reality. I looked at my Firefox settings which seemed to be the default for OCSP and it looked like this:
So I assume OCSP stapling is enabled but direct OCSP is disabled in Firefox by default, and a positive OCSP response is not required in general. I tried to check what was really happening with Wireshark, but regardless of the configuration and sites I visited, I couldn't get Firefox to emit an OCSP query.
I also don't know what other TLS implementations (like OpenSSL) do and how users of such libraries usually configure them.
Addendum: Oh and of course, OCSP stapling is useless when you weren't about to open a TLS connection (like in this case when checking software signing certificates). I'm also curious if and how this works for other applications of X.509 certificates such as mutual TLS authentication.
The SLA for being made aware of revocations should be configurable from the client side. OCSP here would be fine if (a) it was sent over an encrypted connection using a preinstalled Apple root CA, and (b) the user could set the TTL for caching the response. Larger developers (with more resources) could also feasibly implement something similar to OCSP stapling, which has several desirable properties.
When it comes to these articles, you should really apply the following "smell" test:
Replace "Apple" with "Google", "Facebook", "Verizon". Re-read the article. If it sounds horrifying, then it's also horrifying if Apple does it. There's no such thing as "trust" in a single corporation - especially one which just argued that you not paying 30% to them is "theft".
Applying this test helps weed out the marketing bias these corpos constantly try to push at you.
Better replace "Apple" with "TikTok", "Zoom" otherwise people might think about "Google", "Facebook" that (paraphrasing) «they may be sons of bitches, but they are our sons of bitches» (regardless the reality).
The tech industry is rife with hypocrisy when it comes to matters of privacy and online tracking. It's something rotten at the very heart of this profession. Developers are more likely to rush to defend companies - rather than scrutinise them. We'd all be better off if we stopped defending these companies. You can like - even love - a company product without feeling you owe the company any loyalty or defence. And we'd all be better off for it.
OCSP doesn't seem like the right protocol for this. Apple should probably just ship you a list of hashes of revoked certificates once a day, and should do the check locally. (Obviously, the global certificate database is too big to send to every user, but Apple should be able to determine the subset of certificates they trust, and the even smaller subset of those that are revoked or compromised.)
To me, it sounds like they decided to take the quick-and-easy path of reusing an existing protocol for the use case of stopping malware, but it doesn't really fit. The latency, privacy, and availability guarantees of OCSP just don't match with the requirements for "run a local application".
This does seem like a situation where a CRL would be a better fit than OCSP. On the other hand, CRLs have been pretty thoroughly deprecated for browser usage, so Apple probably just reached for the first tool that was already available to them.
Going back to a CRL (certificate revocation list) for code-signing certs makes more sense. And, really, there shouldn't be a huge number of developer certs being revoked.
If that's happening, they need to put more work up front into certifying them in the first place.
Can someone explain to me why this is significantly less problematic than sending out app hashes? If we accept that most developers don't have many similarly popular apps, then isn't this enough to infer what apps users are running?
In the example from the article: if Mozilla's certificate is sent, then it's very likely that the app that has been opened is Firefox, as the a priori likelihood of using Firefox is way higher than, e.g., Thunderbird.
If the developer is Telegram LLC, then ... and so on.
There will be a day when all apps on a mac will only be installable from the app store. Developers will be forced to buy macs and subscribe to Apple’s developer program to support it. Customers will be trained to not care. And HN Apple fanboys and fangirls will try to justify why this is a Good Thing(TM).
Apple has programmed macOS to make it appear to users as if un-Notarized apps either don't work or are malicious.
This is bad for users who download apps to solve problems or to get work done, because then they can't use those apps without having an expert tell them what the magic ritual to run un-Notarized apps is. If they don't have an expert around to show them how to perform the magic ritual, then they just think the apps are broken.
I don't think anyone trying to 'solve problems' or 'get work done' has encountered a notarization issue, since the types of software they use are always notarized (they still are only running software distributed by million-dollar corporations).
I maintain a few open source Mac apps that I'm not paying to Notarize.
Users frequently comment that the apps are now "broken" because they don't understand the changes Apple made to macOS to treat un-Notarized apps as if they're radioactive.
If you can’t confidently change a system preference back and forth, maybe you are very vulnerable to being hacked in general? So maybe it’s ok for Apple’s defaults, at least, to be restrictive?
I just want a preference that allows me to turn all of this off.
Another commenter added more color, but what I was trying to say is that the scenario the parent poster described isn't a likely one for most Mac users.
As a maintainer of a few open source Mac apps that are un-Notarized, this doesn't help users who download the apps to solve their problems but can't use them because of the roadblocks baked into macOS.
Notarization was IMO the first big step towards this. To this day I have not heard of anyone, neither devs nor users, wanting this feature. And for devs it has only cost misery and money.
Customers, for the most part, don't even know or care it exists. But customers will find value in it when Apple is able to quickly disable malware if it proves necessary.
As for developers... I mean, how much of a big deal is it, really? I looked at the documentation and it didn't seem like a huge hassle. It even looks like it is automatable in your CI/CD processes via `altool` and `stapler`.
Up until now, you could change the settings to something dev friendly, while leaving them strict for people like my father, who clicks on things he shouldn't click on. It is not short sighted, it's a useful protection against malware. The only time he got in trouble was when he ran an installer, entered the admin password and installed one of these "protect your mac" apps that don't protect, but only pester you into paying a subscription. I had to remove that file by file. OTOH, the amount of shit my in-laws' PC went through is unbelievable. They no longer use it: they've got an iPad.
You do realize that there are millions of satisfied Mac, iPhone, and iPad customers out there, right? The profits speak for themselves: clearly there is value both for freedom and for security. And it never was a binary question anyway.
Besides, you can still run non-notarized binaries if you want to. The UI does make it difficult, but not impossible.
If you want a totally open computer, that's fine (to the extent you don't spread malware through negligence), but everything has tradeoffs. If you're comfortable with the risk of malware, that's also fine; but not everyone is -- and certainly not the business world.
You're making the assumption the "average buyer" knows anything about this issue, which is incredibly unlikely. Therefore, their buying decisions are not made based on it.
I think a good goal would be to scream it as loud as possible and make sure people are buying it based on this dimension as well.
The water is warm but it isn't boiling yet. All the infra is in place though, and the economic incentives are inevitably going to push Apple in this direction.
It keeps inching closer. You now have a unified arch between mobile and desktop. You were never officially able to cross compile before and now there’s yet another barrier.
Phoned-home signing verification is another thing that is a precursor to Apple-only distribution.
What platforms don’t force devs to buy developer kits or use their hardware? PlayStation and Xbox used to force devs to buy exotic hardware. Consoles are similar to phones and they lack simulators or emulators for the newer stuff.
We still use devkits because you cannot possibly develop a console game without one. They provide detailed hardware accelerated instrumentation, and also have specialized hardware that you can't emulate without a dramatic perf hit.
The difference in my mind is that no console markets itself as a general computing device, and the user understands they can't use it as such (you can't install whatever you want on an xbox).
This take is stale. Some people will pay for less freedom on their machines and some developers will gladly take their money. That’s not force, that’s capitalism.
Yeah, what's up with that, having to buy a Mac just to run XCode! And having to register as a developer to get a certificate.
Apple should bring back Lisas and the UCSD Pascal/Clascal for Mac development like it was in the 1980s. And they should also bring back 4-letter developer signatures. ;-)
You’re being sarcastic... but your premise is wrong. I can build for windows and android from linux without issue. I can build for linux and android from windows without issue. I pay nothing in any direction.
Clearly this article doesn't reveal every truth. Certificate authorities should have been decentralized, but is that happening?
And just by looking at the IP address, app usage, and the other data they receive, they can connect the dots and identify that it's me. And what security has Apple actually provided so far?
"You shouldn’t probably block ocsp.apple.com with Little Snitch or in your hosts file."
That's far better than a frozen computer that doesn't work and doesn't run any apps. If I don't need Apple's mercy and protection, please don't force it on me.
Yeesh, "It's not THAT bad, it ONLY leaks the developer of every app you open, via cleartext. Oh, and it cripples your offline software when someone spills coffee over Apple's servers"
Being able to identify the developer of any app I run on my own machine is already too far. You have to assume all these requests are logged and available for state actors on legal demand.
I wonder how big a local revocation list would be. I would support an on-by-default local check.
Has anyone used a pi-hole to block apple privileged servers, like the OCSP one, while running Big Sur? I'm thinking of setting one up---not necessarily to block OCSP, because the points in this post about actually wanting to know when a certificate has been revoked are sensible---but to at least have the option in case of another disaster...
Relatedly, does anyone know if Big Sur allows one to use a custom DNS server on the device level with those privileged destinations? (He says, mulling the complexities of getting a pi-hole working with his mesh system.)
But how are those apps finding the DoH server's IP then?
If they use public DoH servers you could just block those at the network level. And if they're running their own DoH service on a fixed IP, they could simply run the app itself over that IP and avoid the whole DNS lookup altogether.
> But how are those apps finding the DoH server's IP then?
I don't know, I haven't dug deep enough to find the answer for myself.
However, after blocking Google's DNS servers on my network and designating my own DNS servers via DHCP, my Chromecast ceased to function, and certain Android apps that serve ads had functionality that ceased to work correctly. That leads me to believe that apps and systems with DoH baked in are actively hostile to mitigations against their DoH implementations.
Custom DNS servers are available on Big Sur. My home network uses pfSense as the gateway for the LAN. This gives more options for blocking outbound connections or routing them through a VPN based on certain conditions.
Not sure whether the non-privacy related aspect about OCSP is less worrying. Officially Apple does this to protect innocent users from malware, but as we've seen it also allows them to remotely disable any developers' software. Not really something that I'd want on my machine.
I guess a super obvious question is, why do they do this instead of having a robust antivirus ecosystem?
I mean I guess I already know the answer, "marketing". "Look, macOS doesn't require antivirus!"
Personally I don't want Apple verifying or revoking anything. I bought the computer, it's mine. You don't get to tell me what I can run, period. Inform me, sure, give me links to go learn why you don't want me to run something, sure. Don't prevent me from choosing to do with my machine what I want.
> I guess a super obvious question is, why do they do this instead of having a robust antivirus ecosystem?
Enumerating “all possible badness” is basically impossible, which is why AV software really doesn’t work. Every ransomware attack you read about in the news bypassed up-to-date AV software.
Enumerating “known-good” entities is actually a tractable problem... this is what vendor-signing does. Even Google and Microsoft understand this and have had code-signing infrastructure in place for decades.
OCSP also allows CAs to revoke random websites’ certificates, yet nobody is making a big fuss about that (presumably because no OCSP server has encountered what Apple’s did and prevented websites from opening).
Yeah but the thing is that there are many CAs. The main problem is (IMHO) when you have a single party with conflicting commercial interests that controls all certificates for a given platform.
Other than Internet Explorer (and maybe Edge? I honestly have no idea) browsers don't do OCSP. This is because it's a huge privacy problem (as we saw here for Apple) and because the OCSP servers have too often been unreliable.
Firefox has OCSP Must Staple, but in that scenario the remote web server is responsible for periodically ensuring it has a sufficiently up-to-date OCSP response about its own certificate which it then "staples" to the certificate to prove its identity. So if the OCSP server fails for an hour a good quality stapling implementation just keeps using older responses until it comes back. Also it's optional, most people haven't chosen to set Must Staple anyway.
Everybody else has various CRL-based strategies, so your browser learns about certain important revocations, eventually, but it doesn't pro-actively check for them on every connection and thus destroy your privacy.
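For anyone unfamiliar with how stapling works mechanically, here's a minimal sketch in Go (file names like cert.pem and ocsp.der are placeholders, not anything Apple-specific): the server attaches a previously fetched OCSP response to its own certificate, so clients never have to contact the CA during the handshake.

```go
// Sketch: a TLS server that staples a pre-fetched OCSP response.
// "cert.pem", "key.pem", and "ocsp.der" are hypothetical local files;
// a real deployment would refresh ocsp.der in the background before
// NextUpdate expires and keep serving the old one if the CA is down.
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"os"
)

func main() {
	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		log.Fatal(err)
	}
	staple, err := os.ReadFile("ocsp.der") // DER-encoded OCSP response from the CA
	if err != nil {
		log.Fatal(err)
	}
	cert.OCSPStaple = staple // sent to clients during the TLS handshake

	srv := &http.Server{
		Addr:      ":8443",
		TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
	}
	log.Fatal(srv.ListenAndServeTLS("", ""))
}
```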
Are there any statistics on how many innocent users have actually become victims? Clearly Apple just wants control. As the old saying goes: more truth, less trust is needed.
Now just waiting for the trolls to write software that spoofs the response so it always comes back invalid. With a wee bit of ARP magic, you could make a bunch of Mac users very unhappy at the café.
“It doesn’t send a hash of the app, it sends a thing that is a encoded hash that uniquely identifies the app! Totally different!”
It wasn’t a misunderstanding, it was a simplification so that people could understand the issue without me explaining OCSP and app signing and x509 and the PKI. Dozens of people wrote me to thank me for explaining it in a way that they could understand.
It is indeed a hash, and it does indeed uniquely identify most apps, and it is indeed sent in plaintext, when you launch the app (and is cached for a half day IIRC). I very deliberately didn’t claim it is a hash of the content of the app file.
It also doesn’t send a unique identifier, but I would be willing to wager that the set of apps that you launch in 48h is probably enough to uniquely identify your machine in the vast majority of cases.
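To make the "it's a serial number, not an app hash, but still identifying" point concrete, here's a small sketch using Go's golang.org/x/crypto/ocsp package; request.der is a hypothetical file holding a raw request body pulled from a packet capture of the plaintext traffic:

```go
// Sketch: decode a captured plaintext OCSP request (e.g. extracted from a
// packet capture of traffic to ocsp.apple.com). "request.der" is a
// hypothetical file containing the raw request body.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ocsp"
)

func main() {
	raw, err := os.ReadFile("request.der")
	if err != nil {
		log.Fatal(err)
	}
	req, err := ocsp.ParseRequest(raw)
	if err != nil {
		log.Fatal(err)
	}
	// The serial number identifies one specific certificate -- for a
	// Developer ID leaf, that effectively names the developer.
	fmt.Printf("serial:           %v\n", req.SerialNumber)
	fmt.Printf("issuer name hash: %x\n", req.IssuerNameHash)
	fmt.Printf("issuer key hash:  %x\n", req.IssuerKeyHash)
}
```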
By default, Android logs every app you use. You have to disable - bafflingly - features including saving locations in Google Maps and fully-functional voice recognition to (supposedly) disable that behavior. What I'm saying is: don't look so surprised.
Turn off usage and diagnostics and try to save a location in Google Maps. Alternatively, open up the usage and diagnostics stats and see just how much they harvest. It's, frankly, a ridiculous non-sequitur on the part of Google.
Web & App Activity
Saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services. https://support.google.com/websearch/answer/54068
> As you probably have already learned during Apple’s OCSP responder outage, you can block OCSP requests in several ways, the most popular ones being Little Snitch
Uninformed advice - Apple prevents Little Snitch from blocking this traffic in Big Sur.
I get that a dev cert isn't the same as identifying the software itself... but that only applies for developers that have multiple apps, and I suspect most do not.
Then unencrypted requests are also a Bad Thing, because anyone has access to the same info - it may require a lot of work to get general knowledge of what apps someone is using, but if you were looking for a specific one then I don't see any real difficulty identifying that.
e.g. if I wanted to know whether someone was using Signal, I'd just look for the Signal cert being queried. That's a much easier problem, and can be dangerous to the end user.
I write a lot of Go on my Mac at home. The first run is _always_ slow, but I've never measured it or bothered to find out why. This is a real "lightbulb moment" for me.
I just built a Go executable and timed it: 0.194 s for the first run, and ~0.018 s for subsequent runs. I haven't signed code on Mac platforms before, so I figured I'd give it a go using the Apple code signing guide [0]. So, I created a self-signed certificate using Keychain, changed and rebuilt a Go project, signed the executable [1], and ran it: ~0.400 s for the first run, and ~0.018 s for subsequent runs. It... doubled? Will this happen on every first run still? Is there a way to exclude executables?
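If anyone wants to reproduce that measurement, a rough harness looks something like this (./hello is a placeholder for whatever binary you just built and signed); the first-run/second-run gap is the part the signing check would account for:

```go
// Sketch: time the first and second launch of a freshly built binary.
// "./hello" is a placeholder for whatever you just compiled and signed.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func run(path string) time.Duration {
	start := time.Now()
	if err := exec.Command(path).Run(); err != nil {
		log.Fatal(err)
	}
	return time.Since(start)
}

func main() {
	fmt.Println("first run: ", run("./hello"))
	fmt.Println("second run:", run("./hello"))
}
```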
Worked at a major antivirus company. This was the same basic technique: we would download a list of MD5 hashes, and all executables had to be matched against it.
Periodically there would be an issue downloading the updates, which would result in similar problems.
Managing the size of the updates was a big issue. Just checking against an online server is certainly a more up-to-date approach.
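For contrast, that offline model is simple to sketch: download a list of hashes periodically, then check each executable locally with no network call in the launch path. A toy version in Go (SHA-256 instead of MD5; the list file name is made up):

```go
// Sketch: the "download a list, check locally" model the AV products used.
// "revoked-hashes.txt" is a hypothetical file with one hex SHA-256 per line.
package main

import (
	"bufio"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"log"
	"os"
)

func loadList(path string) (map[string]bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	list := make(map[string]bool)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		list[sc.Text()] = true
	}
	return list, sc.Err()
}

func main() {
	blocked, err := loadList("revoked-hashes.txt")
	if err != nil {
		log.Fatal(err)
	}
	exe, err := os.ReadFile(os.Args[1]) // executable to check
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(exe)
	if blocked[hex.EncodeToString(sum[:])] {
		fmt.Println("blocked: hash is on the local list")
	} else {
		fmt.Println("allowed: no network request needed")
	}
}
```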
In my opinion, this seems like Apple, once a computer company that catered to computer users and their expectations, is now a mobile phone company catering and responsive to the lower expectations of phone users. Engineering these plaintext surveillance communications over the public internet, between a user's private computer and the company responsible for building that computer, is like my home informing the company that built it every time I start any unique activity while inhabiting it, as long as I hadn't been engaged in that activity for some amount of time. It's extremely disrespectful to Apple's users, who are also Apple's customers, who are also mostly all of us on this message board. My goal is to one day grow a backbone and stop putting up with this.
Apple has always been a gated community, but now there’s a guard at the gate checking everything that goes in and out. This is something most users probably don’t want. It has me personally considering what a future without Apple would look like.
They’re not mutually exclusive. I have several macs and several linux machines. One of the linux machines (my router) even keeps the macs safe and (relatively) trustworthy.
It’s a good thing to do regardless. I also know way more than I want to about Windows, too, and could make do if given only that for a workweek.
Learn languages, learn OSes, learn architectures. Get more computers. :)
This still allows Apple (and ISPs, employers, etc.) to correlate very sensitive information: developer certificates and IP addresses. Plenty of developers only create one application,
and most Macs will be used most frequently on a small number of (ranges of) IP addresses. In essence, that still lets Apple see way more than a self-proclaimed “privacy conscious” company should.
Why not take a more privacy-centric approach? Antivirus companies have been working with “virus definitions” for ages. Ad blockers use the same model, but for locally stored blacklists. Why can Apple not regularly download a list of revoked certificates and maintain it locally?
I have always been annoyed by OCSP being HTTP. It is really the fault of the standard that this is the way we revoke certificates. I basically agree that Apple should just be downloading revoked certificates and checking them locally. This is what we are doing at various SaaS companies that have to check these in order to avoid downtime. We have also mistakenly failed-closed. We now default to fail-open but customers have the option to change that if they are paranoid.
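A fail-open check with a hard timeout, roughly the policy described above, might look like the following sketch (the responder URL and request bytes are placeholders; this is not how Apple's implementation actually works):

```go
// Sketch: a fail-open revocation check with a hard timeout. Any error or
// slow answer lets the operation proceed instead of blocking it; only a
// definitive "revoked" answer denies.
package main

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
	"time"

	"golang.org/x/crypto/ocsp"
)

func allowed(ctx context.Context, responderURL string, ocspReq []byte) bool {
	ctx, cancel := context.WithTimeout(ctx, 500*time.Millisecond)
	defer cancel()

	httpReq, err := http.NewRequestWithContext(ctx, http.MethodPost,
		responderURL, bytes.NewReader(ocspReq))
	if err != nil {
		return true // fail open
	}
	httpReq.Header.Set("Content-Type", "application/ocsp-request")

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		return true // responder down or slow: fail open
	}
	defer resp.Body.Close()

	var buf bytes.Buffer
	if _, err := buf.ReadFrom(resp.Body); err != nil {
		return true
	}
	parsed, err := ocsp.ParseResponse(buf.Bytes(), nil)
	if err != nil {
		return true
	}
	return parsed.Status != ocsp.Revoked
}

func main() {
	fmt.Println(allowed(context.Background(),
		"https://ocsp.example.com", []byte("placeholder request")))
}
```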
Whatever happened to letting the user decide which applications they want to run? Now the mothership has to give its blessing before they let you run anything... sounds insane.
The request obviously sends lots more information than just the serial number of the developer certificate. Is it "harmless" data or could they have more info about the executable in there?
Why doesn't the author post the OCSP request for Thunderbird too? And how about another request for Firefox so we can compare the data?
This article really doesn't clear anything up for me...
> Maybe the hash is computed only once (e.g. the first time you run the app) and it is stored somewhere.
This would explain why some games take minutes to launch the first time you run them. I've experienced this many times with Steam. You install a game, you launch it, and nothing happens for up to several minutes, and then the game runs. No delays in launching after that.
This behavior drives me crazy. The only way to figure out what's going on is to open the Activity Monitor. On my 2015 iMac (top-of-the-line, at the time) initial launch of some large games has taken tens of minutes, and it happens whenever the game is updated, not just after it is initially installed.
I see no reason why OCSP checks on developer certificates cannot be encrypted. This whole "oh no there could be a loop for a SSL cert check" argument seems like gaslighting. Why can't the client know if it wants to access an OCSP server using HTTP or HTTPS, and default to HTTPS when possible?
The idea that sending information about the cert is somehow not exposing the app is crazy. An attacker could easily download apps and sniff the network traffic to correlate cert info with an app.
Also I don't get the argument for using HTTP. Aren't these two separate systems?
If this is just a matter of revoked certificates, Apple could very easily set up a subscription for developer certificates on the machine when an app is installed. Why wait until an app is launched to check whether a certificate has been revoked?
Because it allows them to revoke certificates at a later time. E.g., Epic Games' certificate was revocable after they noticed that Epic had built in something that was not supposed to be allowed.
Are there any pi-hole settings to prevent Apple from phoning home? And the same for Windows 10? I can't trust my computers any longer so I need to rely on external enforcement.
To be generous, Apple has unwittingly created an app use surveillance possibility. All from the idea of developer certificates and diligent revocation checks.
Historically speaking, OCSP was invented in a world where almost all DNS requests were also in cleartext. So if an attacker can observe DNS requests, then it's already "game over", and the cleartext of the OCSP request is almost redundant at that point.
It's worth noting a couple differences between HTTPS OCSP and Developer ID OCSP. First, with Developer ID, the only DNS request is for ocsp.apple.com, so the DNS request by itself doesn't expose any information about the Mac app being launched, unlike with HTTPS.
Second, the caching of Developer ID OCSP responses tends to be much much shorter than for HTTPS. Prior to Thursday's outage, the standard cache length for Developer ID OCSP responses seemed to be 5 minutes. (Apple seems to have raised it to 12 hours now.) In contrast, I just checked the latest response in my OCSP cache, which was for http://ocsp.digicert.com, and its validity is 7 days. So the rate at which Developer ID OCSP requests are made seems to be much higher than for HTTPS, and thus there's greater chance of exposure.
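If you want to check a response's validity window yourself, it lives in the ThisUpdate/NextUpdate fields; here's a minimal sketch in Go (the cached-response.der file name is hypothetical, e.g. extracted from a packet capture):

```go
// Sketch: print the validity window of a saved OCSP response.
// "cached-response.der" is a placeholder for wherever you extracted the
// raw DER response.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ocsp"
)

func main() {
	raw, err := os.ReadFile("cached-response.der")
	if err != nil {
		log.Fatal(err)
	}
	resp, err := ocsp.ParseResponse(raw, nil) // nil issuer: parse without verifying
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("thisUpdate:", resp.ThisUpdate)
	fmt.Println("nextUpdate:", resp.NextUpdate)
	fmt.Println("valid for: ", resp.NextUpdate.Sub(resp.ThisUpdate))
}
```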
Which is still sufficient information to narrow down to the set of applications developed by a single entity. And because this is being done over HTTP, anyone along the network chain has visibility as well.
Agreed, this should be sent encrypted, obviously. My point was that the intent here might not be to "snoop" on users, as even the author points out by comparing his analysis with what Jeffrey Paul's article reported ("[...] that’s quite an important difference on a privacy perspective") but likely to efficiently handle certificate revocation. Hopefully they will find a better way.
It clearly shows that Apple is getting fed the dev certificate info for each application being launched.
For developers with multiple applications, then sure, that's not going to be as clear as individually identifying the application.
But there are plenty of developers around with just one popular application. Sending the dev certificate for them is effectively the same as sending the application hash itself.
They already know the apps exist (they sign them), most of them are downloaded via the App Store (they run that), and people tend to log in using iCloud (which they own).
I get it, we're all supposed to trust nobody and have 7 billion independent islands where you don't have to trust anyone or work with anyone.
I have not seen any solution, just people piling on. Having PKI and signatures using a central authority is the least-worst solution we have right now, and until something better is created we don't really have a lot of places to go (unless we accept downgrading common user's security and usability).
"They already know they exist ..." doesn't really seem to match up? Like, of course they do.
Anyway, I was just pointing out that the communication still seems pretty close to sending Apple the list of applications being run. At least, for applications created by devs with only one major program on their certificate.
A solution would be allowing the user to turn this off. And more importantly, to allow firewall apps to manage all network traffic instead of excepting apple's.
Doubly bad, because the actual post is about how Apple doesn't actually do the "tracks your every use of an app" peeping that the original post that made all the fuss says they do.
Quick question on that.
Has anyone tried disabling SIP (csrutil disable), allowing the Little Snitch kext (spctl kext-consent add MLZF7K7B5R), and just using Little Snitch 4.6 in Big Sur?
That doesn't matter. The total signature is still different from factory default, so if any change is made to any other file it will be different and need to be updated :)
What's being signed is a hash of all the individual file hashes, so any file being different from stock will mean a difference, whether it was changed in the latest update or not.
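The "hash of all the individual file hashes" idea is easy to sketch: walk the tree, hash each file, and fold those hashes into one top-level digest, so changing any single file changes the seal. A toy version in Go (not Apple's actual sealing format):

```go
// Sketch: a "hash of hashes" over a directory tree -- the same shape of idea
// as a sealed/signed volume. Change any one file and the top-level digest
// changes. WalkDir visits paths in lexical order, so the result is
// deterministic.
package main

import (
	"crypto/sha256"
	"fmt"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

func main() {
	top := sha256.New()
	root := os.Args[1] // directory to "seal"
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		leaf := sha256.Sum256(data)
		top.Write(leaf[:]) // fold each file's hash into the top-level digest
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%x\n", top.Sum(nil))
}
```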
Seems to work fine here, nothing bypasses my VPN nor does it bypass my firewall. Perhaps that's because my VPN and my firewall aren't running inside the computer but external to it, as it should.
It logs app certificate requests which in real life is pretty much equivalent to logging app runs. And that line about only calling the server from time to time is bullcrap. I have years of experience on this issue because my internet is pretty shitty. And that "from time to time" is every couple of hours.
There are two issues here. One is the privacy problem which I agree is not quite as bad as some think. The second is the stupid fact that if some server goes down you can’t launch apps. That is just awful.
I think it would help if someone could quote or reference Apple's official position / explanation on this (if there is one).
You know, before declaring the end of the world, is there any information from the source (Apple)? Discussions here seem to have had several thousand comments without obtaining this basic info. It would be good to know, I would think?