Does Apple really log every app you run? A technical look (jacopo.io)
621 points by jacopoj on Nov 14, 2020 | 344 comments



While other posts on this topic are too alarmist, this one is way too Apple apologetic for my taste.

* There is no information on how often the validation happens. All this investigation concludes is that it doesn't happen when closing and immediately re-opening an app. Is it every week? Every reboot? Every hour? If the interval is short enough, that's essentially the same as doing it on every launch.

* There is no justification for sending this information in cleartext. I don't follow the "browsers and loops" argument. This is a system service that only has to trust a special Apple certificate, which can be distributed via other side channels.

* Many developers only publish a single app or a certain type of app. So it still is a significant information leak. It's really not much different from sending an app-specific hash. Think: remote therapy/healthcare apps, pornographic games, or Tor - which alone could get you into big trouble or on a watchlist in certain regions.

I assume they will push a fix with better timeouts and availability detection.

But Apple simply has to find a more privacy-aware system design for this problem, one which does not leak this kind of data without an opt-in and also does not impact application startup times. (Revocation lists?)

I imagine this data might just be too attractive not to have. Such a "lazy" design is hard to imagine coming out of Apple otherwise.


Most "alarmist" articles have two points you cannot really ignore, not if you don't want to end up living in interesting times one day.

1) Even plain access logs — basically what an HTTP request or a TCP connection can tell you — are a lot. Gather those for a couple of days, and you have a good map of the user. More so if you have an ID of the machine and the actual executable hash.

2) "But we are the good guys" is a non-defense. Good guys can turn bad, they can be coerced by the bad guys, and

3) since the requests fly out in plain text, there is an unknown number of questionably-aligned guys in between capable of sniffing your data. You only need one bad enough guy to get into serious trouble, if that's what they want.

This is not alarmist. It's just common sense. The same common sense that you use to avoid certain neighborhoods at certain times of night.


If you have #1 and the ability to collect #3, then you’re already an intermediary between the user and Apple.

At that point, what’s to prevent you from providing unacceptably slow service for the certs of those apps you don’t like and soft-locking the user out of particular apps on their own device?


The fact that this slows down devices boils down to a rushed or simply incompetent implementation.

It's sensible to require waiting for a certificate check the first time an app is launched, but after that, the cache validity should be indefinite, and updates should occur asynchronously in batches.

The timeout settings were also excessive.

Can't forget the blatant lack of encryption. They either forgot or thought it would be too much effort to set up.


When you have good broadband, it gets so easy to assume that internets grow on them trees, latency is negligible, and servers are fast and always up.


Yes it is ridiculous that an internet query is in the path of starting a local app for the first time in X hours. If it has to be done, it could be done in a daily batch for all apps when the connection is idle, and on install. Using bloom filters to check for recent invalidations would be even better.


How many false positives are possible with bloom filters? In the described use case, you don’t want even one.


A positive on the bloom filter is just an indicator that you do the bigger, more expensive (and privacy-reducing) check, like an encrypted OCSP query for that specific certificate. It's not the final verdict, specifically because of the risk of false positives. Bloom filters are a way of making it so that you don't have to do that bigger, privacy-leaking query every time.
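
For illustration, a minimal Bloom filter sketch of that design in Python. The filter itself is real working code; full_ocsp_check is a hypothetical stand-in for the expensive online query that only runs on the rare positive:

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1 << 20, num_hashes=7):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: bytes):
            # Derive k independent bit positions from salted SHA-256 digests.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item: bytes):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item: bytes) -> bool:
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    def full_ocsp_check(cert_hash: bytes) -> bool:
        raise NotImplementedError("stand-in for the real per-cert OCSP query")

    def is_revoked(cert_hash: bytes, revoked: BloomFilter) -> bool:
        if not revoked.might_contain(cert_hash):
            return False  # definitive negative: no network traffic at all
        # Rare path: a positive may be false, so confirm with the full check.
        return full_ocsp_check(cert_hash)

The key property: negatives (the overwhelmingly common case) are answered locally and definitively, so the server never learns which certs you checked.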


>"But we are the good guys"

Also, this is what every bad guy believed him or herself to be throughout the history of humanity.


@2) + good guys can be hacked.


Or, simply assume that there really are no “good guys”.


Especially where money is involved.


IMO especially when stock holders wanting a monetary return on investment are involved. I give my money to the FSF every month, because they provide value to me, but not because I expect them to surreptitiously extract it from others and give it to me as cash dividends.


I think that's one of the big problems with public companies, especially those that have "regular people" as their main money maker (the "consumers") - invariably, the company's needs (duty) to make money for their real customers (the shareholders) will take precedence over what would be "the best thing" for consumers.

I wish we could do away with the whole "public company" thing - just imagine how much better Facebook, Google, and countless other companies (yes, Apple too) would be if they were private, and more accountable to their users.


Privately owned companies are not accountable to their users, they are accountable to their owners, just like publicly traded ones. It's just that they have fewer owners, and you sometimes get owners with really nice ideas. Other times, you get even more tyrannical owners.

Instead, what would be really nice is imagining how those companies would fare as worker-owned companies. Especially with these big internet behemoths, where the entire families of all the workers are users, the standard of user care would easily sky-rocket.


Or not, as the Soviet Union with its worker-owned factories can tell. Ever rode a Soviet car that wasn’t copypasted from Fiat?


The Soviet Union didn't have even 1 worker owned factory, unless you're talking about the time before Lenin ever came to power. The factories were owned by the state, which in turn was owned by a dictator and his political apparatus - workers had less freedom to control the factory than Amazon warehouse workers.


But they told people they were worker-owned, and many actually believed it!


Yes, they were a despicable regime, and unfortunately their name still mars the idea of socialism. They also claimed they were democratic, and surely many believed that as well, but we haven't let that ruin democracy, so we shouldn't let their laughable claims to socialism ruin socialism.


You had like 120 years to create a version of socialism that didn't suck, or didn't degenerate into a form of oligarchy under whatever guise du jour you please. I'd call it a failed experiment by now. You know why?

Because "true socialism" (like your true Scotsman) requires ideal übermenschen on all levels everywhere. This is not how the humankind works. Humankind is full of flawed, sometimes outright malicious people, and you have to deal with that.

Most versions of socialism at some point came up with a need to breed ideal happy socialist people who won't keep breaking their paradise all the time. And until this Übermensch is born, they choose to break and bend the rest into behaving, like bonsai trees. Of course, Dear Leader and their team are exempted from being broken or bent, and many others aspire to become like them. This is how every socialist rule to date grew into a totalitarian oligarchy.

Thank you but no thank you. I'd better choose a form of government that adapts to and deals with people as they are, and doesn't try to force them into some better version according to their understanding.


Wikipedia says:

> The Niva was described by its designers as a "Renault 5 put on a Land Rover chassis"

So I guess that's one example of a car that was not copypasted from Fiat?


> Ever rode a Soviet car that wasn’t copypasted from Fiat?

Isn't the Volga a copy of a Mercedes? ;)


Private companies are still accountable to their shareholders. But I do think that the very public number of a share price encourages slightly different behavior than a private, illiquid, and probably out-of-date number.


Yeah, not sure what the poster is trying to say here. Both distinctions almost inevitably result in doing anything that is legal to maximize profits (and oftentimes illegal, or gray at best). However, I haven't seen anyone propose a decent alternative to corporation status for such large entities. The other option is state owned, and that is almost always an utter failure. Even China allows their "state owned" businesses a lot of leeway to account for the ups and downs of capitalism and market forces.


There is no executable hash in the request, so I don't understand why you bring it up.


Developer certificate IDs are almost a 1:1 match with which app you’re running.


To use one of the original examples, how many different applications are signed with the developer key of the Tor browser project?


I don’t know the answer to that, but I would assume they are all Tor related so that tells me everything I need to know about a user anyway.


There are also notarization requests, and those transmit more than enough information about your executable.


> "But we are the good guys" is a non-defense. Good guys can turn bad, they can be coerced by the bad guys,

That’s true, but not very useful, since if Apple turns bad or is coerced by the bad guys, they could just issue an OS update that begins doing new bad things anyway.


A couple of problems:

- This gives Apple access to data right now. If they turn evil in the future, they have access to data from the past, which gives them more leverage.

- The security industry (overall) pays attention to Apple updates. If Apple turned evil in the future by issuing an OS update, someone might notice it happening. But if they start organizing this data and handing it off to the government, they don't need to change anything public or issue an update. They can do it all serverside without anybody noticing.

- One of the ways we tell whether a company is trending evil is that we pay attention to how its willingness to invade people's privacy evolves over time. This is a more subtle point.

Imagine that I was administering your phone. There's trust involved in that kind of relationship; if I turned evil, I could install some tracking software or viruses and violate your privacy. So imagine that one day you find out I have installed tracking software on your phone, but when you ask me about it, I say, "it doesn't matter whether or not the tracking software is installed on the phone. If you trust me not to invade your privacy, then you might as well trust me not to look at the data the software is collecting. As long as you trust me, it makes no difference what I install on your phone, since you can trust me not to use that software to violate your privacy."

You probably wouldn't be satisfied by that excuse. In reality, seeing that I am now the type of person who is willing to install tracking software on your phone should raise the suspicion that I have either already turned evil or that I am on my way to turning evil.

So similarly with Apple, it's true that trusting Apple means putting them in a position where they could start collecting people's private data. The fact that we have now seen them start collecting private data means that we should be more suspicious that Apple either is already evil, or at least that it is more willing now to play with evil ideas than it used to be.


It seems like several people are assuming that Apple is storing the data now and that it is personally identifiable. My assumption was that, of course they would not do that. But of course I could be wrong.


I think the bigger point here is, if Apple started to store the data and make it personally identifiable, you would have no way of knowing that they had.

They wouldn't need to install anything new on your computer to start tracking you in more detail or building a user profile on you, they could just start doing it invisibly behind the scenes on a server someplace. That's a big deal, because even though you're trusting them to administer your device, if they did start pushing out spyware, there's a good chance a security researcher would notice it. But there's no way for us to know what Apple does with this data once it leaves our devices.


I just don’t think that’s a very big deal. Did anyone notice when Apple shipped this update? Maybe so, but it certainly wasn’t a huge ongoing issue in the community. It seems pretty clear that they could get away with a minor evil update if they decided to turn evil.


But they can’t retroactively gain data. So it’s not the same. Besides, security is something you apply in layers.


> There is no information on how often the validation happens.

I wrote a blog post about this. My analysis indicates that Developer ID OCSP responses were previously cached for 5 minutes, but Apple changed it to half a day after Thursday's outage, probably to reduce traffic:

https://lapcatsoftware.com/articles/ocsp.html


5 minutes is an absurdly short cache time…


Pure speculation from me, but my guess is that the intention is to check an app on every launch, and the 5 minutes is there just to lower the chances of a DoS from an app getting repeatedly launched for some reason.


Let's not forget the inherent elitism of thinking everyone has gigabit internet at their disposal. I happened to live in a developing country for years, and this "waiting up to half a minute until the program opens" has been my daily experience for a long, long time.


If it's slow in developing countries, it is going to be slow in smaller towns and rural areas in the US and Canada as well, and, depending on where the closest Apple server is, perhaps all of Australia and New Zealand.


If you catch malware in the wild you don’t want to wait half a day for the cache to expire.

Negative responses are typically cached for short periods of time. Can you imagine if people cached NXDOMAIN for half a day and someone creating a record had to wait 12 hours for it to go live because someone queried it?


If you care about user privacy, you don't upload stuff from the user side; you download the list of trusted and untrusted certificates to the user's machine and make the decision there.

This is how antiviruses have always worked, without affecting user privacy (of course, most antiviruses also did other things that DID affect user privacy, but malware detection at least worked perfectly fine without it).
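
For illustration, a minimal sketch of that download-and-decide-locally approach in Python; the feed URL, cache path, and JSON format are all hypothetical:

    import json, urllib.request

    REVOCATION_URL = "https://example.com/revoked-developer-ids.json"  # hypothetical feed
    LOCAL_CACHE = "/tmp/revoked-developer-ids.json"                    # hypothetical path

    def refresh_revocation_list():
        # One bulk download per day/week reveals nothing about which apps you run.
        with urllib.request.urlopen(REVOCATION_URL) as resp:
            with open(LOCAL_CACHE, "wb") as f:
                f.write(resp.read())

    def developer_id_revoked(cert_serial: str) -> bool:
        # Purely local decision at launch time: no per-app network traffic.
        with open(LOCAL_CACHE) as f:
            revoked = set(json.load(f))
        return cert_serial in revoked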


> If you catch malware in the wild you don’t want to wait half a day for the cache to expire.

But if you have a cached OCSP response for the cert of a malware author, then you've already launched their app, so it's probably too late.


Plenty of kinds of malware are harmful each time they are launched, not just once.


The risk of launching malware a second+ time seems substantially less than the privacy leak caused by more frequent checks.


https://www.zdnet.com/article/apple-update-kills-off-zoom-we...

This was a seriously exploitable issue that was a problem every time it was run.

I agree that this certificate mechanism is absurdly problematic.

That doesn’t justify dismissing the security risks it was intended to prevent.


Zoom isn't malware, Apple did not revoke Zoom's Developer ID certificate, and indeed Zoom still exists on the Mac.

Zoom had a serious uninstaller bug, but that's all it was, and it's not relevant to the current discussion.


Incorrect - Zoom exposed a serious vulnerability, and Apple shut it down using another mechanism, but with the same effect nonetheless.

It’s relevant because you argue that there is no value to having the ability to do this.

It is also a problem which occurred every time the app was launched. Something you have dismissed as a non problem.

https://www.theverge.com/2019/7/10/20689644/apple-zoom-web-s...


> using another mechanism

> It’s relevant because you argue that there is no value to having the ability to do this.

No, I did not. We haven't talked about that other mechanism, so I've said nothing about it here either positively or negatively.

> Something you have dismissed as a non problem.

I said "Zoom had a serious uninstaller bug". So no, I did not dismiss it as a non problem. It just has nothing to do with Developer ID certificate OCSP.

Please stop putting words in my mouth or completely warping the words that I do say.


No warping going on.

You said “But if you have a cached OCSP response for the cert of a malware author, then you've already launched their app, so it's probably too late.”

I.e. once you have launched the app, the damage is done.

This is not the case, and the Zoom situation is a clear counterexample. Even if a problematic app has been launched one or more times, it is still worth preventing subsequent launches if you can.

It doesn’t matter what mechanism is used to prevent the subsequent launch. This applies to any mechanism including OCSP. The Zoom example is a refutation of the particular point you made, a point which dismisses a real security concern.

It demonstrates that there is value in Apple having the ability to prevent harmful software from running, no matter how many times it has already been run.


> This is not the case, and the Zoom situation is a clear counterexample.

I was talking about MALWARE. As I said before, Zoom is not malware, so no, it's not a counterexample.

This is my last reply to you. You're clearly not interested in having a good faith conversation, you continue to misinterpret me and want to score "internet points" or something. I'm done.


Accusations of bad faith are unhelpful, especially in a technical discussion like this.

Zoom is not malware in that as far as we know it isn’t Zoom’s intent to cause harm.

However in this instance it exhibited a behavior which many forms of malware exhibit - opening an insecure or exploitable port. It was shut down because it was behaving the way some malware behaves.

It’s a perfectly reasonable example of using these types of mechanism to mitigate a real security issue.

You can’t seriously be claiming that malware never opens ports, or that malware always does all of its harm on the first run.

Therefore the use of the distinction ‘malware’ is arbitrary and irrelevant.

The mechanism is useful to protect against vulnerabilities, regardless of whether the vulnerabilities were intentional or not.


This was a good read, as was the HP OCSP incident you linked to in the post. With regard to the HP cert being revoked, I'm amazed that this didn't get more attention. You would think there would be checks in place to calculate something like "well, there have been 100 million OCSP checks for this printer driver in the last 24 hours, so we might not want to revoke its cert."


Any idea how they changed the cache time remotely? If the OS is honouring the cache-control headers of a plaintext response, this has its own security implications.


The OCSP response has a nextUpdate field: https://www.ietf.org/rfc/rfc2560.txt
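
For anyone who wants to check, the relevant fields can be inspected with the Python cryptography package; "response.der" here stands for a previously captured DER-encoded OCSP response:

    from cryptography.x509 import ocsp

    with open("response.der", "rb") as f:
        resp = ocsp.load_der_ocsp_response(f.read())

    print(resp.response_status)  # OCSPResponseStatus.SUCCESSFUL, if all went well
    print(resp.this_update)      # when this answer was produced
    print(resp.next_update)      # when the client should consider it stale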


The response is signed by Apple, and presumably (!) your Mac is validating that signature correctly. I haven't checked if they are using stapling, but that would be the sensible way to do it, in which case it is a server-side parameter (though possibly with client-side limits too, but you'd need to disassemble the binary).


> [article] editing your /etc/hosts file. Personally, I wouldn’t suggest doing that as it prevents an important security feature from working.

Exactly the apologetic that you are talking about. Everyone has a different security update cadence (e.g. patch Tuesday for Microsoft), but each application launch is not a reasonable one. Given Apple's recent propensity for banning developers who stand against them (whether you agree with those developers or not), this is aimed squarely at dissent.


I don’t see how you can so confidently reach that conclusion. It seems perfectly plausible that Apple wants a way to quickly quash malware, worms, etc.


> I don’t see how you can so confidently reach that conclusion.

I'm not going to 100% say that control is the reason Apple is doing this. I'm sure that they do genuinely want a way to quickly quash malware, worms, etc...

But we've also seen that Apple is clearly willing to use security features to ban developers that stand against them, so I don't understand how people can be so confident that they wouldn't be willing to use this feature in the same way, even if they did internally think of it as primarily a security tool. It would be very consistent with how we've seen app signing evolve from a pure security feature into a contract-enforcement tool.


Can you remind me of which developers have been banned for standing against Apple AND haven’t broken their contract with Apple?


Security features should not be used for contract enforcement.

My point stands, Apple introduced a security feature then used it for contract enforcement against a company that opposed them. There is no reason to believe that they wouldn't do the same thing here. Whether or not you believe that Epic was the villain in that story is irrelevant to the current conversation.


Oh, Epic broke their contract and therefore I think can be seen as bad for security.

If they are willing to break their contract for money what is to stop them from harvesting my data for money?

The security feature is a part of the Apple ecosystem. I bought a Mac because of that, not in spite of it.


> Oh, Epic broke their contract and therefore I think can be seen as bad for security.

> If they are willing to break their contract for money what is to stop them from harvesting my data for money?

This argument was weak enough that a judge specifically rejected it after Apple failed to prove that any kind of immediate threat was presented by the Unreal Engine.

> what is to stop them from harvesting my data for money?

The fact that the contract dispute in question had nothing to do with data harvesting in the first place.

> I bought a Mac because of that

That's fine. And if Apple wants to try and tie all of this to security, then honestly whatever. But when this signing feature came out, people made fun of critics for suggesting Apple would do the exact thing you're now saying they're justified in doing. Try to lump it under the banner of security, try to lump it under the banner of whatever you want. When avalys says:

> I don’t see how you can so confidently reach that conclusion. It seems perfectly plausible that Apple wants a way to quickly quash malware, worms, etc.

they're expressing doubt that Apple would do any of the things that you're praising Apple for doing with app signing. And the fact remains, it's very plausible that they would use this as a tool to enforce contracts. You're in the comments, right now, saying that they should use this feature as a tool to enforce contracts.

So what exactly do you disagree with me on? It still seems pretty reasonable to believe that Apple will be willing to use app logging as a contract enforcement tool, and that when they do people will jump on HN to defend them, given that you are currently defending them for doing so right now.

The argument over whether preemptively blocking app updates based on a vague sense of 'distrust' falls into the category of security is a semantic argument, and I don't really care about digging into it. The point stands, people are worried that Apple will use this feature to target apps beyond normal malware, trojans, or worms, and they are right to be worried about that.


Apple did not ban them for standing against them. Apple banned them for breaching their contract.

It’s not on each application launch; it’s from time to time, for each application, as it might be found to contain malware in the future. Also, if the app isn’t signed, there is no check.


Apple hasn’t banned any developers who stand against them.


They have used security features of their OSs to ban developers who were simply in breach of contract with Apple, but not distributing malware or any other kind of content harmful to users.

Sure, Apple was completely in the right to stop distributing Epic software after they breached their contract with Apple. But Epic didn't breach any contract with their users, so there was no reason to remove Epic's software from user devices, or affect companies redistributing Epic software. Those are obvious overreach.


“Simply in breach of contract with Apple”

Epic lied about the content of their software. If Apple doesn’t remove software from suppliers who lie about the contents, people will continue to exploit this.

There was no overreach. This was the consequence of Epic intentionally lying about the content of a software update.

It’s also worth pointing out that Epic expected this result, and caused it on purpose. Both Apple, and the court gave them the chance to rectify the situation which they refused.

That makes Epic responsible for the outcome. No one else.


Didn't Epic actually create an entire presentation video advertising the contents of their update?

Again, I fully agree that Epic was knowingly in breach of their contract with Apple, and wanted to use the public as leverage. But that doesn't, in any way, make their update malicious for the end user.


The presentation video was released after the update was submitted to the store with the contents hidden and activated later.

As for whether the update was malicious for the end user, we could say we trust Epic to operate a payment method, and therefore the update was not malicious.

But there are many actors who would use this exact same methodology where the update is malicious. Such Trojans exist on Android.

Security policies always prevent behaviors that could be used for non-malicious purposes.

If the argument is that the end users should be the ones to decide, it’s really just another way of saying that Apple shouldn’t be allowed to enforce any security policy.

Of course there are those who believe that Apple shouldn’t be able to enforce security policies, but there is no overreach here.


[flagged]


You'd be more aligned with HN values by refuting parent's point with examples than making ad hom attacks.


It is nevertheless the case that some users are VERY LOUD on particular topics, essentially repeating themselves on many leaves of the discussion. I find this very tiresome. It isn't an ad hom to point this out.


This is true. I’d be totally up for a ‘no repetition’ rule, however that’s completely impractical.

I find myself repeating certain points, usually because I am responding to repeated points.

Having said this, I do it because sometimes the person I am responding to says something new. It sounds like their point is a repeat, but they turn out to have a point of view that is different when you challenge them about it.


The accuser should also be held to the same standard. Without evidence those are just empty words.


Just look at our comment history, it's pretty easy lol


The loop argument makes no sense at all. HTTP is being used as a transport for a base64-encoded payload; the actual process of verifying the validity of the developer certificate is done by the service behind that Apple URL - not by the HTTP stack.

There is no justification not to switch to HTTPS here.


It's convention. With browsers, you wouldn't want to introduce a recursion point in TLS (we already have certificate chains, and now we'd get OCSP check chains and where does that terminate?). Apple just did what everyone else does for OCSP, in a way which is accepted practice for good reasons.

Now in this specific instance, OCSP is being used in quite a different use case. For one, the plaintext issue is not a problem when browsing, as attackers can see what sites/certs you're accessing in the clear anyway (certificates are plaintext in TLS sessions), while app launch is an otherwise offline activity. So in this instance it makes sense for Apple to switch to HTTPS (and if they have OCSP on the server cert for that, that should go via HTTP to avoid loops or further issues).

But what Apple did here is just standard practice, it's just that there happen to be good reasons to diverge from the standard here.


Correct.

Want to point out that certs are encrypted with TLS1.3, and DNSSEC+DoT/DoH makes ESNI/ECH possible by putting keys in the DNS.

Ultimately maybe OCSP could do something similar, or fall back to DANE or some alternate validation method that wouldn’t cause a “loop.”


No browser supports DANE, or has any plan to do so; in fact, Chrome tried supporting DANE, and stopped.


Ok, say we switch OCSP to HTTPS.

How do we know the certificate presented by the OCSP server has not been revoked? We can’t ask the OCSP server, cos that’s what we’re trying to handshake with!

The loop is very real and non-trivial to solve. I’d expect something similar to what ESNI/ECH does leveraging DNSSEC + DoH may be possible NOW, but that’s a recent development.


Well, the problem is that OCSP is leaking which applications you open (and when you open them), which is the big deal IMO. One solution would be for the OCSP client to check the HTTPS certificate in cleartext once upon startup (and maybe once every day or so thereafter), and to use HTTPS for all subsequent application requests.

I don't really see how that could cause a loop. This way, an attacker can only see:

- When you boot your Mac, because it verifies the HTTPS certificate once.

- When the OCSP daemon makes a cleartext request to check that the HTTPS cert is still ok

- That you have just opened an application (but not which application)

IMO that still leaks an unacceptable amount of metadata, but it is miles better than using cleartext. Maybe a bloom filter here would be a much better solution, plus making the daemon regularly fetch bad signatures that are not yet added to the filter instead of polling. Sure, the filter may hit false positives sometimes, but in that case the OCSP server could be checked, and Apple could see if a certificate has a high rate of false positives and adjust the bloom filter accordingly.


Why can't we use a combo of HTTP and HTTPS?


Yeah, that confused me as well.

Even if there was some wrinkle about the loop argument that I didn't understand, and HTTPS is out: Apple could encrypt the base64 payload, and the sniffable info is reduced to which computer is phoning home, which is something that someone with the ability to intercept comms probably knows already.

"roll your own encryption and send it over HTTP" is a bad idea in general but... this is Apple, they can and do implement encryption. Why not here?


The OCSP RFC[1] specifies that if requests are made using HTTP, they MAY be protected via TLS or "some other lower-layer protocol".

[1] https://tools.ietf.org/html/rfc6960#appendix-A.1


Isn’t OCSP an open standard for handling certificate revocations? The standard specifies plaintext, because the standard can’t assume that the client has a way to form an encrypted connection to the revocation list.


The standard does not specify plaintext. It says the client may use encryption.

Even doing unauthenticated TLS is better than what they do now, because the current situation allows for full passive monitoring.


The problem with 'may' is that a network intermediary might block TLS connections to ocsp.apple.com knowing it would fall back to plaintext.

Apple could encrypt the payload though, using the Apple public key, which would solve the snooping by intermediaries problem.


A network intermediary blocking or altering the TLS is an active attack. Plain HTTP is also vulnerable to that, so unauthenticated TLS is no worse than the current situation.

TLS encrypts the payload just fine if you want that. That’s what TLS is for.

PS: You don’t encrypt something to someone else using your own public key.


I'm talking about when they block just the OCSP host's TLS port. Heaps of places whitelist HTTPS for particular sites and inspect the content to prevent TLS. Appliances that block TLS via packet inspection are a dime a dozen. But the query/response fields can be an opaque encrypted blob, and it would get through. Every Apple device obviously has the Apple pub key, and hence they can send encrypted messages back to Apple without needing any further PKI.
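
A rough sketch of that idea with the Python cryptography package. Here "apple_ocsp.pem" is a hypothetical pinned RSA public key shipped with the OS; a real design would presumably use hybrid encryption, but RSA-OAEP alone already fits a small payload like an OCSP query:

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Hypothetical pinned public key shipped with the OS.
    with open("apple_ocsp.pem", "rb") as f:
        pub = serialization.load_pem_public_key(f.read())

    request = b"...DER-encoded OCSP request..."  # small payload, fits in OAEP
    blob = pub.encrypt(
        request,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # `blob` is opaque to every on-path observer; only the holder of the
    # private key can decrypt it, even when carried over plain HTTP.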


Wouldn't an anonymity scheme such as [1] work in this context? Send only part of the hash of the app's certificate, and have the server send you all possible revoked certificates?

[1]: https://blog.cloudflare.com/validating-leaked-passwords-with...
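
In the spirit of that scheme, a sketch of an HIBP-style range query in Python; the endpoint is hypothetical, and the 5-hex-digit prefix length mirrors the k-anonymity API described in the linked post:

    import hashlib, urllib.request

    def check_revoked(cert_der: bytes) -> bool:
        digest = hashlib.sha1(cert_der).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        # Hypothetical endpoint: returns every revoked-cert hash sharing the prefix.
        url = f"https://example.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:
            candidates = resp.read().decode().splitlines()
        # The server learns only a short prefix shared by many certificates;
        # the final match is made locally.
        return suffix in candidates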


As the set of certificates is bounded and known by Apple, they could also just push all the CRLs they have to all users, using CRLite.

https://github.com/mozilla/crlite


I assumed there were too many revoked certificates for something like this to be viable, but I'm not surprised it is.

You probably can't update the whole list that often though, compared to Apple's current OCSP revalidation time of 5 min. [edit: it seems "delta patches" are supported by CRLite, so maybe that can work too]


> I assumed there were too many revoked certificates for something like this to be viable, but I'm not surprised it is.

Given that Apple currently doesn't even encrypt the requests in transit, I think they just didn't pay much attention to the problem, which I think is the main reason why they haven't adopted it yet. As for the number of revoked certificates, I'm not sure it's larger than the number of revoked TLS certificates, given that there are way more websites out there than there are registered Apple developers.


This is pretty much how Chrome's Safe Browsing feature works for screening URLs without leaking the full details.

There is no valid reason that the full information needs to be sent to the server to implement this kind of protection, IMO.


Why is Apple limited by open standards? It's not like any other servers are going to be receiving these messages.


> Such a "lazy" design is hard to imagine coming out of Apple otherwise.

That's my biggest issue personally. There's a bit of an information leak, but most wouldn't care and would just do the standard thing and be done with it. Firefox still uses OCSP in some cases...

My issue is that a company like Apple, which currently markets itself as a company that cares about the privacy of its users, would have let this come out of that same process that's supposed to care... and still hasn't said that this was a mistake in their process and that they are correcting it.

They could easily use k-anonymity like HaveIBeenPwned, or even push, which would mean no cache, which is even better for their argument of security.

There's nothing alarmist here, it's all alright; it would just mean that this is the same false advertising that so many companies do, but still, it is important to be aware of.


Agreed.

Call home features can be spoofed by a poisoning type of attack upstream in various forms.

This is not bullet proof and a cop-out with a poor solution for security.

You know who has effective call home features? Vendors that sell to major enterprises. It is a natural progression and a particularly nasty environment to live within.

If they are legitimately trying to protect the brand through force or merely forcefully controlling the app ecosystem... it's an abusive relationship to be in.

The fact that this is not configurable without dead-lettering the route is all they need to show that tethering is something they consider a viable security measure.

I'll pass.


I feel Apple has done privacy well in so many cases, that the way this works is really disappointing :-/


Apple has done a fantastic PR job regarding privacy. I am more skeptical about the status of actual privacy given their iMessage situation and now this.



What does "participation" in PRISM mean?

> Apple: "We have never heard of PRISM"[115] "We do not provide any government agency with direct access to our servers, and any government agency requesting customer data must get a court order."[115]

* https://en.wikipedia.org/wiki/PRISM_%28surveillance_program%...

Certainly American companies are subjects to warrants and NSLs, but Google (to give one example) had its dark fibre connections between data centres tapped by the NSA. Is that the "participation" that was referred to by the Snowden documents?

* https://arstechnica.com/tech-policy/2013/10/new-docs-show-ns...

* https://www.theguardian.com/technology/2013/oct/30/google-re...

* https://venturebeat.com/2013/11/25/level-3-google-yahoo/

* https://www.washingtonpost.com/world/national-security/nsa-i...


> had its dark fibre connections between data centres tapped by the NSA. Is that the "participation" that was referred to by the Snowden documents?

No, that's a separate thing. They do both. See the "you should use both" slide.

https://github.com/iamcryptoki/snowden-archive/blob/master/d...

As to Apple's claims that they didn't participate in PRISM, I think they were just lying. Clapper lied to Congress as well, so this isn't unheard of. They would likely have breached their government contract by telling the truth. That being said, them having never heard of the program name might be true, because it might not have been known to them under that name, but that's just a detail.


Apple was not lying because “PRISM” was an internal source identifier at the NSA for the process of acquiring data through the FISA warrant process. Apple never heard the word PRISM; they got FISA warrants and replied to them as required by law.

This is clearly indicated on the PRISM Wikipedia page that was linked above.

> PRISM is a code name for a program under which the United States National Security Agency (NSA) collects internet communications from various U.S. internet companies.[1][2][3] The program is also known by the SIGAD US-984XN.[4][5] PRISM collects stored internet communications based on demands made to internet companies such as Google LLC under Section 702 of the FISA Amendments Act of 2008 to turn over any data that match court-approved search terms.


> Apple was not lying because “PRISM” was an internal source identifier at the NSA for the process of acquiring data through the FISA warrant process. Apple never heard the word PRISM

As I've said, that's a detail and splitting hairs. If a sentence has multiple interpretations and one of them is true, but you phrase it in a way that most people interpret the sentence in the wrong way, you are intentionally deceiving people. They should have said "we have never heard the name PRISM" or something like this.


I thought you just ended up in PRISM, you don't "join" it? Just like Google found out from the Snowden leaks, and then encrypted all their DC-to-DC fiber.


I think there were aspects of PRISM that required cooperation from providers like Google. Like the NSA would send queries to them and they would return emails or what have you that match those queries. Though of course this “cooperation” is required by law.


If there's a court order (FISA: https://en.wikipedia.org/wiki/Foreign_Intelligence_Surveilla..., or otherwise), companies have to comply. So I don't really see how one can blame any company for that.


You can absolutely blame companies, specifically Apple, because many things are not E2EE when they could be.



See: the last sentence of my post


Their iMessage situation?


They backup the private key to iCloud unless you manually disable backups. So even though iMessage is advertised as E2E encrypted, for the vast majority of users, Apple can read each and every message.

(And even if you disable backups, Apple can still read most if not all of your messages, because the persons on the other side of the conversations have not disabled backups)


It's worth noting that Google, the big bad guys of privacy, uses a proper E2E encryption scheme.


iMessage is E2E encrypted, so I’m not sure what you’re saying here


Can Apple read your iCloud storage? I’m not saying that it is, but shouldn’t that be encrypted at rest with a customer-specific key?


Yeah, they can read everything:

https://sneak.berlin/20200604/if-zoom-is-wrong-so-is-apple/

They were going to actually encrypt it, but suddenly had a change of heart after the FBI had a chat with them:

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


Ugh


I don't have any detailed knowledge of it, but I've seen various similar comments based on this article and similar ones:

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


This stance undermines the point of E2E. The messaging system is still E2E even if people back up their plaintext messages or their key on non-E2E storage.

Having your messages deleted because you forgot your iCloud password is good security but a terrible default.



Calling Apple's privacy stance PR is extremely misleading.

It's been ingrained in them since the 80s, and with the growth of Google, it became fun to vilify Apple because of it.


A better privacy solution would be to sync revocation lists every so often (and, if you must, right before opening a new app). Is there any privacy-preserving reason not to go this direction? How often would you expect certificates to be rescinded? You could also use a bloom filter to significantly reduce the size of the synced list.


CRLs are how we dealt with the OCSP problem in the browser, and I feel like those must surely involve more insanity than the Developer ID certs.


Or something akin to OCSP stapling, which has been mentioned in a few places?


Stapling makes sense for the Web but not here.

With OCSP Stapling the remote web server whose identity you want to assure yourself of periodically gets an up-to-date OCSP answer about its own certificate. When you connect to that server, it gives you the certificate, and the OCSP answer, which assures you that the certificate is still good, and is signed by the Issuer of the certificate.

So, you visit Porn Hub, Porn Hub knows you visited and can reasonably guess it's because you like porn (duh). Porn Hub talks to their CA. The CA knows Porn Hub are Porn Hub and could reasonably guess it's a porn site (duh) but this way the CA doesn't learn that you visited Porn Hub. That's Privacy preserving. Nobody learns anything you'd reasonably expect they shouldn't know.

But how can we apply that to an application on your Mac? If every app reaches out from your Mac to Apple to get OCSP responses, they learn what you have installed, albeit I guess you can avoid telling them when exactly you ran it. This is enormously more costly and not very privacy preserving.

CRL-based ideas are much better for your privacy, although they might cost you some network traffic when the CRL is updated.

Of course one reason for Apple not to want to do CRLs is that they're transparent and Apple is not a very transparent type of company. With OCSP you've got no way to know if and when Apple revoked the certificate for "Obvious Malware II the sequel" or equally for "Very Popular App that Apple says violated an obscure sub-clause of a developer agreement".

But with CRLs it'd be easier for any researcher to monitor periodically for revocations, giving insights that Apple might not like. Do revocations happen only 9-5 Mon-Fri Cupertino time? Are there dozens per hour? Per day? Per Year?


That’s assuming that the OCSP responder is hosted by Apple, which doesn’t have to be the case. It sounds shitty from an app developer perspective, but app developers would have an incentive to host endpoints that make their apps runnable on macOS. This improves privacy by distributing OCSP traffic across organizations, but also puts the burden of verification on app developers. Not sure if this would harm or help the app ecosystem.


Wouldn’t a properly diffed CRL list be much smaller than a hash payload on every app launch? Say, a request like “give me all the revoked certificates since I last asked.”
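
It would. A sketch of such a delta sync in Python, with a hypothetical endpoint:

    import urllib.request

    def fetch_revocations_since(timestamp: int) -> list[str]:
        # Hypothetical delta endpoint: "everything revoked since I last asked".
        url = f"https://example.com/crl/delta?since={timestamp}"
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode().splitlines()

    # The response size is proportional to how many certs were revoked since
    # the last sync (usually tiny), not to how many apps the user launches.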


> But Apple simply has to find a more privacy-aware system designs for this problem which does not leak this kind of data without an opt-in and also does not impact application startup times. (revocation lists?)

The idea that you need Apple to certify the developer of the software you run on your phone is nonsense though. You don't do that on your computer, so why do you need to be nannied on your phone?


Cleartext is part of the OCSP mechanics. This has nothing to do with Apple or macOS.

Potentially it could now be tackled with DNSSEC + DoH, similar to the records ESNI/ECH puts in the DNS to encrypt initial HTTPS client hellos.

But the loop issue is quite real. How can you validate that the certificate the OCSP server gives you has not been revoked, using OCSP???


THANK YOU. I also see no reason that OCSP checks cannot support both HTTP and HTTPS. If there is some reason, then the protocol should be split into two: one for unencrypted checks for things like SSL certs, and another for all other/dev cert checks over HTTPS.


> this one is way too Apple apologetic for my taste.

I'm not surprised. Apple fanatics routinely deny evidence to support their sorta-religion.


> Apple fanatics routinely deny evidence to support their sorta-religion.

As do anti-Apple fanatics. That’s what being a “fanatic” means. You can say the same about gun fanatics, or meat fanatics, or vegetarian fanatics, or Android fanatics. It’s staggering how often people who are anti something fail to perceive the irony in behaving exactly in the manner they are decrying. Someone having a contrary opinion doesn’t make them a fanatic.


that's crude whataboutism.

Going back to the original topic, Apple hardware/software: I've used Apple hardware and software (company-issued MacBook Pro and iPhone 7/8).

The software is great as long as you want to stay within Apple-defined boundaries. If you want to go outside that, it's an experience similar to, if not worse than, using GNU/Linux.

The hardware is great when the machine is brand new but decays very quickly. It's not designed to be serviced by end users, specialized users, or specialized shops -- you're supposed to return it to an Apple store and pay an expensive price for basic maintenance. As an example, cleaning the fans of dust is very important in those machines, but you have to buy special hardware to take off the screws, and in general you risk breaking something. Keyboards failed spectacularly in the last gen, and Apple waited like two years before fixing it. Audio is great, until it breaks. My MacBook Pro (15" top of the line) couldn't sustain full volume, and distorted the audio after ~30 sec of full-volume playback (imagine that during a conference call in a meeting with other people). The screen is great, but the glass panel retained ALL of the fingerprints, and it was a PITA to clean; I had to buy special glass-cleaning liquids. WTF.

All the above issues appeared shortly after the first year of the laptop's life. Call me an anti-Apple fanatic, I don't care, but I expected more from a 3500+€ machine.

At the new job I've been given a 13" Dell Latitude 7390. It works flawlessly, it rarely skips a beat, and it has none of the problems stated above. Fuck Apple.


> that's crude whataboutism.

You’ve missed my point, which is that you could remove the word “Apple” from your original comment and it would have made no difference. One kind of fanatic does not excuse another, nor have I claimed it does.

There’s no need to list Apple’s faults. I’m aware of them and support a large part of Apple criticism in the Tim Cook era (and not just technical[1]), including most of yours.

Where we disagree is in the insinuation the author is a fanatic simply for defending Apple. They’ve written a technical post and gave their conclusions, which may indeed sound apologetic but are far from rabid fanaticism.

> Fuck Apple.

In sum, it’s fine to decry the company but I disagree that people who like it and accept its tradeoffs should be immediately labeled as extremists.

[1]: https://news.ycombinator.com/item?id=24738345


> In sum, it’s fine to decry the company but I disagree that people who like it and accept its tradeoffs should be immediately labeled as extremists.

Well Apple is notoriously abusive of the developers on its platform. Two things are particularly cried about across most of the ecosystem: the 30% cut they take off pretty much everything and the vague terms that you have to comply with, and that they enforce in a mostly random way (app gets pulled out of the app store, won't tell you why, won't tell you what you did wrong).

Now add the exorbitant prices for their low-specced, low-quality hardware.

Now add the continual rip-off of their users.

Now add the subject of the original linked page.

At this point I think that yes, defending Apple is extremism.

It's fine to accept the tradeoffs; it's not fine to pretend they do not exist:

- "Yeah this stuff is unreasonably expensive but we have to use it"

that is honest

- "the apple ecosystem is the best for creative and developers and what apple does across all the spectrum is fine"

that is dishonest.


I find your position on what classifies someone an extremist to be itself extremist.

That is the crux of our disagreement, which I doubt we’ll resolve over an internet text interaction.

Thank you for the conversation thus far. Maybe we’ll resume it if we happen to ever meet.


> I find your position on what classifies someone an extremist to be itself extremist.

I find that you're the kind of person who only finds what they're looking for.

> Thank you for the conversation thus far. Maybe we’ll resume it if we happen to ever meet.

thank you too and have a nice day.


> I find that you're the kind of person who only finds what they're looking for.

I expressed an opinion on a belief you seem to hold, not a value judgement on yourself. I don’t presume to know which “kind of person” you are from a short text-based interaction pertaining to a single subject matter. I’ll ask you to extend me the same courtesy.


> I don't follow the "browsers and loops" argument.

To log in to my banking account, I need the correct password. No problem, I keep it in a password manager. To open the password manager, I need the correct password. No problem, I keep it in a password manager. To open the password manager, I need the correct password. No problem, I keep it in a password manager. To open the password manager, I need the correct password. No problem, I keep it in a password manager. And so on.

Imagine that, but for “verifying the HTTPS connection”.


But there’s an easy fix. I use it with my password manager. To log in to my bank account, I need the correct password. No problem, I keep it in a password manager. To open the password manager, I need the correct password. No problem, I already know it. If I don’t know it, I look it up from a less secure source.

Technically what I’m describing is that you can vary the behaviour of OCSP lookups such that if you’re already looking up an OCSP certificate to establish an SSL connection to an OCSP server, downgrade and check over HTTP only when trying to connect to the OCSP server itself. Yes, it would mean one more TLS connection to a random server. Yes, it would mean an extra OCSP lookup. But just one, and just for the OCSP server itself. Which means privacy is preserved in regards to which developer certificate you’re checking. It would be only checking Apple’s OCSP server certificate in the clear, which it could equally cache easily.
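
A minimal sketch of that lookup policy in Python, assuming a single responder host (the name here is a stand-in, e.g. for ocsp.apple.com):

    OCSP_HOST = "ocsp.example.com"  # hypothetical stand-in for the responder

    def ocsp_transport_for(subject_host: str) -> str:
        # Base case: checking the OCSP responder's own certificate must not
        # recurse through the responder, so it alone goes over plain HTTP.
        if subject_host == OCSP_HOST:
            return "http"
        # Every other status check rides inside TLS, hiding which developer
        # certificate is being queried from passive observers.
        return "https"

The only thing ever visible in the clear is the (cacheable) check on the responder's own certificate, not the per-app queries.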


TLS involves both cert checking (the server is truly who they say they are and not a MITM) and a Diffie-Hellman key exchange to set up session keys (messages are end-to-end encrypted).

You can DH with an untrusted cert. It might be interceptable.

HTTP is always interceptable.

But there should be zero reason not to set this connection up with a full proper cert. HTTP is just mega sloppy.

As others mentioned, you can bootstrap TLS by first checking OCSP (in the open) on your cert auth service, then use that opaque, freshly-checked connection to check the rest.


Wait. Is it not common knowledge that Android and iOS log every application you open down to the exact millisecond you open and close them?

Is it not common knowledge how telemetry works for the operating systems? They generally batch up a bunch of logs like this, encrypt them, compress them, and then send them to the mothership (hopefully when you're on WiFi).


Logging and telemetry are completely separate use cases. For example to do some kind of battery use accounting you need some record of when exactly which app was active.

And no, it's not widely known or documented - there is no good description of what telemetry exists or contains on iOS that I know of.


don’t you need to enable analytics?


First compressed and then encrypted. Good encryption is indistinguishable from random data.


That's why it's compressed before encryption?


Yeah, because encrypted data should be incompressible, as it should be indistinguishable from random data, which is also incompressible.

Reality is a little different of course, and compression can cause problems for encryption because compressed data tends to be highly predictable (especially things like compression headers and compression dictionaries). This allows for potential “known/chosen plaintext” attacks on the encryption.

Some classic examples of this type of attack are breaking Enigma (known plaintext, no compression) by assuming the content of some messages[0] and the more recent CRIME[1] attacks against TLS using compression to help produce a chosen plaintext.

The simple solution in these scenarios is to avoid using compression completely.

[0] https://www.quora.com/Did-the-inclusion-of-Heil-Hitler-at-th... [1] https://en.m.wikipedia.org/wiki/CRIME


> macOS does actually send out some opaque information about the developer certificate of those apps, and that’s quite an important difference on a privacy perspective.

Yes, and no. If you're using software that the state deems to be subversive or "dangerous", a developer certificate would make the nature of the software you are running pretty clear. They don't have to know exactly which program you're running; just enough information to put you on a list.

> You shouldn’t probably block ocsp.apple.com with Little Snitch or in your hosts file.

I never asked them to do that in the first place, so I'll be blocking it from now on.


> I never asked them to do that in the first place, so I'll be blocking it from now on.

Apple's working on making sure you can't block it. They already keep you from blocking their own traffic with Little Snitch and similar tools: https://news.ycombinator.com/item?id=24838816


It's worth noting that on iOS you can never block anything - you just have to put up with it.


You can still block access by host by using an HTTP proxy like Fiddler or Charles.

Settings > WIFI > Proxy


I use adblockios and haven't upgraded because they unblocked the blocking. I keep hearing about Charles; I wonder if it is special or if it doesn't really block everything.


... and that Apple wants to merge its operating systems


No they don’t. They keep adding new ones.

They want to provide a consistent user experience across their ecosystem. Not the same thing.


Unfortunately the consistency is moving in the direction of iOS rather than macOS.


The hosts file works for now. Use 127.0.0.1 and ::1 on two separate lines. Used tcpdump to verify.
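
For reference, the two lines described would look like this (assuming ocsp.apple.com is the host being blocked, per the article):

    127.0.0.1 ocsp.apple.com
    ::1       ocsp.apple.com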


If they keep doing this, I will block their entire ASN.


Or stop buying stuff that is broken-by-design in the first place.


Apple is old enough that you need only block 17.0.0.0/8: they have a class A(!).


They already are fronting much of their stuff via Akamai, so good luck doing that...


Do you think you are going to win the war against "your own" hardware?


Until they front it via Cloudflare or AWS. I got hit by AWS blocking when setting up a network in Russia for the 2018 World Cup - my UniFi controller was on an EC2 instance that was blocked due to Telegram shenanigans. Worked around the problem, but it shows that blocking an AS can lead towards an unusable computer.


You could still block it externally by running a DNS sinkhole (a la PiHole) on the same network, provided that you can still configure the DNS resolver.


That assumes the hostname will stay the same and not get overloaded by other essential services.


Yep. That seems to be within the realm of possibility though, no?


Maybe downgrading macOS would work. As I'm on an old version of the OS, Little Snitch works quite well.


Isn't that just with Big Sur? Also, I'm using the hosts file method.


I don’t think it’s actually just in Big Sur. At the bottom of this post describing how to stop them from hiding traffic, they mention someone did a test on Catalina and ran into an issue with the Messages app:

https://tinyapps.org/blog/202010210700_whose_computer_is_it....


Apple deprecated kernel extensions like Little Snitch in Catalina, so if I had to guess it probably applies there as well.


The OP is about Big Sur.


[flagged]


The clock is where Notification Center lives, so you can’t get rid of it. (Of course that makes sense.)


Privacy concerns aren’t the only reason to block it. It also makes software way more responsive. I was experiencing daily freezes that would disconnect my keyboard and mouse (particularly when waking the computer or connecting to an external display) on my 2020 MacBook Air; adding the entry to my hosts file fixed the issue entirely. The problem was so pronounced and irreparable by Apple support technicians that I nearly ended up getting rid of the computer.


Besides blocking from the hosts file, you can try:

    # System-wide preference file:
    sudo defaults write /Library/Preferences/com.apple.security.revocation.plist OCSPStyle None

    # Or the same key via the defaults domain:
    sudo defaults write com.apple.security.revocation.plist OCSPStyle None


So the takeaways are:

* Your Mac periodically sends plain text information about the developer of all apps you open, which in most cases makes it trivial for anyone able to listen to your traffic to figure out what apps you open. Better not use a Mac if you're a journalist working out of an oppressive country.

* Because of this Macs can be sluggish opening random applications.

* A Mac is not a general purpose computing device anymore. It's a device meant for running Apple sanctioned applications, much like a smartphone. Which may be fine, depends on the use case.

Yeah... No Mac for me anytime soon then.


> You should be aware that macOS might transmit some opaque information about the developer certificate of the apps you run. This information is sent out in clear text on your network.

Wow, that is bad from a privacy perspective!

Since certificate revocation is rare, it makes more sense to simply periodically update a list of revoked certificates instead of repeatedly checking each certificate. That would solve the privacy issue while still allowing certificates to be revoked.

OCSP seems like a bad idea for web browsing for similar reasons.
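
As a rough sketch of that model (the refresh interval and the fetch callback are made up): the per-launch check becomes a purely local set lookup, and only a bulk, app-agnostic download ever touches the network.

    import time

    REFRESH_SECONDS = 24 * 60 * 60   # hypothetical daily refresh window

    _revoked = set()                 # serials of revoked developer certs
    _last_sync = 0.0

    def _refresh(fetch_revocation_list):
        # Bulk download of the vendor's revocation list; the request is
        # identical for every user, so it reveals nothing app-specific.
        global _revoked, _last_sync
        _revoked = set(fetch_revocation_list())
        _last_sync = time.time()

    def is_revoked(cert_serial, fetch_revocation_list):
        if time.time() - _last_sync > REFRESH_SECONDS:
            _refresh(fetch_revocation_list)
        return cert_serial in _revoked   # per-launch check stays local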


I don't quite understand why anyone would send data in clear text anymore, let alone Apple.


Maybe there is a bizarre reason why they don't use HTTPS on their OCSP endpoint. Perhaps they want to avoid the situation where the OCSP server's certificate itself is revoked, or they anticipated that the OCSP server would still be in use 10 years later, when the crypto currently in use could have been marked as insecure and removed, preventing older clients from working. Or it could be laziness, but come on...


It's explained in the article: there's a loop if you want to verify a certificate but need a verified certificate to verify it.


Check the OCSP server's own certificate via OCSP first, then send subsequent queries via SSL.


Precisely. This would require more work, but it would only leak the OCSP server’s revocation request, and would make OCSP both more secure (caching OCSP server validity rather than the original certificates) and more private (due to SSL).


You can do unauthenticated TLS, which is no worse than plaintext HTTP, and foils passive listeners by providing privacy. You could also trust your existing trusted certs (prior to OCSP update) when doing the OCSP update, which, again, is no worse than plaintext HTTP.
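
For illustration, a rough sketch of that first option in Python (the host is a placeholder and the request body is elided): validation is deliberately skipped, so there is no bootstrap loop and an active MITM remains possible (no worse than plain HTTP), but a passive listener sees only ciphertext.

    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False        # deliberately unauthenticated:
    ctx.verify_mode = ssl.CERT_NONE   # no trust chain, so no bootstrap loop

    with socket.create_connection(("ocsp.example.com", 443), timeout=5) as raw:
        with ctx.wrap_socket(raw) as tls:
            tls.sendall(b"...")       # the OCSP request bytes would go here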

Apple knows this. They have cryptography experts.

Taken in context with their backdooring of their e2e messenger and collaboration with military intelligence on FISA 702, I tend not to give them the benefit of the doubt any longer. Apple knows how to take pcaps.

There are only so many times the OS design gets to leak either keys or plaintext remotely before you need to stop assuming ignorance over malice.

I don’t know how many times that is, but it’s less than ten, probably less than 5, and because it’s a count of legitimate “assume ignorance”, then “goto fail”[2] also counts in the tally.

Between this OCSP plaintext telemetry leak, and iMessage default key escrow, scrapping their plan for e2e backups at the behest of the FBI that fixes the key escrow backdoor[3], and “goto fail” not authenticating TLS, we’re at 4.

I’m not even counting the recent story about Apple’s history of willing collaboration with intelligence agencies to make a custom classified firmware for the iPod to aid in espionage.[1]

As Goldfinger’s famous saying goes: “Once is happenstance. Twice is coincidence. The third time it’s enemy action.”

[1]: https://news.ycombinator.com/item?id=24212520

[2]: https://www.zdnet.com/article/apples-goto-fail-tells-us-noth...

[3]: https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


“ I’m not even counting the recent story about Apple’s history of willing collaboration with intelligence agencies to make a custom classified firmware for the iPod to aid in espionage.”

What would this ‘count’ as?


Has anyone tried a revocation list over Bloom filters? Similar to how Google's Safe Browsing filter works?

If there's a hit, a subsequent request can be sent to Apple to verify it, reducing the impact.
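
A minimal sketch of the idea (filter size, hash count, and serials are arbitrary): a Bloom filter has no false negatives, so a miss means "definitely not revoked" with no network traffic, and only a rare false positive triggers the confirming request.

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1 << 20, num_hashes=7):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, item):
            # False => definitely not revoked (no network needed).
            # True  => possibly revoked; confirm with one online query.
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    revoked = BloomFilter()
    revoked.add("serial:0123abcd")              # hypothetical revoked cert
    if revoked.might_contain("serial:feed42"):  # hypothetical launch check
        pass  # only now would the client contact the OCSP responder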


I was initially shocked by this as well, so I did some more reading on OCSP, and it seems this is being addressed through OCSP stapling.

According to Wikipedia "[OCSP stapling] allows the presenter of a certificate to bear the resource cost involved in providing Online Certificate Status Protocol (OCSP) responses by appending ("stapling") a time-stamped OCSP response signed by the CA to the initial TLS handshake, eliminating the need for clients to contact the CA, with the aim of improving both security and performance."

I'm not aware how widely deployed OCSP stapling is in reality. I looked at my Firefox settings which seemed to be the default for OCSP and it looked like this:

  security.OCSP.enabled                     1
  security.OCSP.require                     false
  security.OCSP.timeoutMilliseconds.hard    10000
  security.OCSP.timeoutMilliseconds.soft    2000
  security.ssl.enable_ocsp_must_staple      true
  security.ssl.enable_ocsp_stapling         true

So I assume OCSP stapling is enabled but direct OCSP is disabled in Firefox by default, and a positive OCSP response is not required in general. I tried to check what was really happening with Wireshark, but regardless of the configuration and sites I visited, I couldn't get Firefox to emit an OCSP query.

I also don't know what other TLS implementations (like OpenSSL) do and how users of such libraries usually configure them.

Addendum: Oh and of course, OCSP stapling is useless when you weren't about to open a TLS connection (like in this case when checking software signing certificates). I'm also curious if and how this works for other applications of X.509 certificates such as mutual TLS authentication.


The SLA for being made aware of revocations should be configurable from the client side. OCSP here would be fine if (a) it was sent over an encrypted connection using a preinstalled Apple root CA, and (b) the user could set the TTL for caching the response. Larger developers (with more resources) could also feasibly implement something similar to OCSP stapling, which has several desirable properties.


When it comes to these articles, you should really apply the following "smell" test:

Replace "Apple" with "Google", "Facebook", "Verizon". Re-read the article. If it sounds horrifying, then it's also horrifying if Apple does it. There's no such thing as "trust" into a single corporation - especially the one which just argued that you not paying 30% to them is "theft".

Applying this test helps weed out the marketing bias these corpos constantly try to push at you.


Better to replace "Apple" with "TikTok" or "Zoom"; otherwise people might think about "Google" and "Facebook" that (paraphrasing) «they may be sons of bitches, but they are our sons of bitches» (regardless of the reality).


Good point there.


The tech industry is rife with hypocrisy when it comes to matters of privacy and online tracking. It's something rotten at the very heart of this profession. Developers are more likely to rush to defend companies - rather than scrutinise them. We'd all be better off if we stopped defending these companies. You can like - even love - a company product without feeling you owe the company any loyalty or defence. And we'd all be better off for it.


OCSP doesn't seem like the right protocol for this. Apple should probably just ship you a list of hashes of revoked certificates once a day, and should do the check locally. (Obviously, the global certificate database is too big to send to every user, but Apple should be able to determine the subset of certificates they trust, and the even smaller subset of those that are revoked or compromised.)

To me, it sounds like they decided to take the quick-and-easy path of reusing an existing protocol for the use case of stopping malware, but it doesn't really fit. The latency, privacy, and availability guarantees of OCSP just don't match with the requirements for "run a local application".


This does seem like a situation where a CRL would be a better fit than OCSP. On the other hand, CRLs have been pretty thoroughly deprecated for browser usage, so Apple probably just reached for the first tool that was already available to them.


It's actually the other way around. Active OCSP checks were removed some time ago: https://www.computerworld.com/article/2501274/google-chrome-...

Stapling and CRLs shipped with the browser still work.


There might also be usage data they can conveniently collect.


Going back to a CRL (certificate revocation list) for code-signing certs makes more sense. And, really, there shouldn't be a huge number of developer certs being revoked.

If that's happening, they need to put more work up front into certifying them in the first place.


I agree, how is sending a list of revoked certs not the best idea?


Wild (probably wrong) theory: Apple doesn't want us to know how many developer certs they've had to revoke, and who owned them.


Can someone explain to me why this is significantly less problematic than sending out app hashes? If we accept that most developers don't have many similarly popular apps, then isn't this enough to infer which apps users are running?

In the example from the article: if Mozilla's certificate is sent, then it's very likely that the app that has been opened is Firefox, as the a priori likelihood of using Firefox is way higher than e.g. using Thunderbird.

If the developer is Telegram LLC, then ... and so on.


It is only very very slightly less concerning than sending the app hashes. Coming to the conclusion that this is all great and fine is really absurd.


It’s not.


There will be a day when all apps on a mac will only be installable from the app store. Developers will be forced to buy macs and subscribe to Apple’s developer program to support it. Customers will be trained to not care. And HN Apple fanboys and fangirls will try to justify why this is a Good Thing(TM).


We’ve been hearing that for years, yet it hasn’t happened. Apple seems to recognize the value of the Mac as a general computing platform.


Apple has programmed macOS to make it appear to users as if un-Notarized apps either don't work or are malicious.

This is bad for users that download apps to solve problems, or to get work done, because then they can't use those apps without having an expert tell them the magic ritual for running un-Notarized apps. If they don't have an expert around to show them how to perform the magic ritual, they just think the apps are broken.


I don't think anyone trying to 'solve problems' or 'get work done' has encountered a notarization issue, since the types of software they use are always notarized (they're still only running software distributed by million-dollar corporations).


I maintain a few open source Mac apps that I'm not paying to Notarize.

Users frequently comment that the apps are now "broken" because they don't understand the changes Apple made to macOS to treat un-Notarized apps as if they're radioactive.


I dunno.

If you can’t confidently change a system preference back and forth, maybe you are very vulnerable to being hacked in general? So maybe it’s ok for Apple’s defaults, at least, to be restrictive?

I just want a preference that allows me to turn all of this off.


Extensively discussed here: https://news.ycombinator.com/item?id=24217116.

Tl;dr: no.


Most mainstream apps are notarized already.


I think you're answering the wrong question?


Another commenter added more color, but what I was trying to say is that the scenario the parent poster described isn't a likely one for most Mac users.


As a maintainer of a few open source Mac apps that are un-Notarized, this doesn't help users who download the apps to solve their problems but can't use them because of the roadblocks baked into macOS.


Notarization was IMO the first big step towards this. To this day I have not heard of anyone, neither devs nor users, wanting this feature. And for devs it has only cost misery and money.


Customers, for the most part, don't even know or care it exists. But customers will find value in it when Apple is able to quickly disable malware if it proves necessary.

As for developers... I mean, how much of a big deal is it, really? I looked at the documentation and it didn't seem like a huge hassle. It even looks like it is automatable in your CI/CD processes via `altool` and `stapler`.


Ah yes, the fear angle. "We need to restrict what you can do with your computer in order to keep you safe!" No thanks, I'll pass.

I do imagine that some people would go for that bargain, but it strikes me as short-sighted.


Up until now, you could change the settings to something dev-friendly, while leaving them strict for people like my father, who clicks on things he shouldn't click on. It is not short-sighted; it's a useful protection against malware. The only time he got in trouble was when he ran an installer, entered the admin password, and installed one of these "protect your Mac" apps that don't protect but only pester you into paying a subscription. I had to remove that file by file. OTOH, the amount of shit my in-laws' PC went through is unbelievable. They no longer use it: they've got an iPad.


You do realize that there are millions of satisfied Mac, iPhone, and iPad customers out there, right? The profits speak for themselves: clearly there is value both for freedom and for security. And it never was a binary question anyway.

Besides, you can still run non-notarized binaries if you want to. The UI does make it difficult, but not impossible.

If you want a totally open computer, that's fine (to the extent you don't spread malware via negligence), but everything has tradeoffs. If you're comfortable with the risk of malware, that's also fine; but not everyone is -- and certainly not the business world.


That is fine. And because of this, many of us are now considering not having Apple in our tech future. And that is fine too.


> Besides, you can still run non-notarized binaries if you want to. The UI does make it difficult, but not impossible.

On iOS and iPadOS (from your first sentence) you cannot.


You're making the assumption the "average buyer" knows anything about this issue, which is incredibly unlikely. Therefore, their buying decisions are not made based on it.

I think a good goal would be to scream it as loud as possible and make sure people are buying it based on this dimension as well.


The water is warm but it isn't boiling yet. All the infra is in place though, and the economic incentives are inevitably going to push Apple in this direction.


Something like that would be happening very gradually over time.


They’ll never do it.

They just keep making stuff not private, so you have to choose between security and privacy.

A well thought system would be able to provide both.


They'll do it; it's only a matter of time.


If that was true, they'd allow "Apps from Unknown Sources" on iOS like Android does.


It keeps inching closer. You now have a unified architecture between mobile and desktop. You were never officially able to cross-compile before, and now there's yet another barrier.

Phoning home for signing verification is another precursor to Apple-only distribution.


Tell me more about the future?


devs aren't already forced to buy the hardware for the platform they're targeting?


For Apple they are 100% forced to. Not sure what you're getting at, but have you ever seen an iPhone simulator on Windows or Linux?


What platforms don’t force devs to buy developer kits or use their hardware? PlayStation and Xbox used to force devs to buy exotic hardware. Consoles are similar to phones and they lack simulators or emulators for the newer stuff.


We still use devkits because you cannot possibly develop a console game without one. They provide detailed hardware accelerated instrumentation, and also have specialized hardware that you can't emulate without a dramatic perf hit.

The difference in my mind is that no console markets itself as a general computing device, and the user understands they can't use it as such (you can't install whatever you want on an xbox).


This take is stale. Some people will pay for less freedom on their machines and some developers will gladly take their money. That’s not force, that’s capitalism.


> Developers will be forced to buy macs

Yeah, what's up with that, having to buy a Mac just to run Xcode! And having to register as a developer to get a certificate.

Apple should bring back Lisas and the UCSD Pascal/Clascal for Mac development like it was in the 1980s. And they should also bring back 4-letter developer signatures. ;-)


You’re being sarcastic... but your premise is wrong. I can build for Windows and Android from Linux without issue. I can build for Linux and Android from Windows without issue. I pay nothing in any direction.


Clearly this article doesn't reveal every truth. Certificate authorities should have been decentralized, but is that happening?

And just by looking at the IP address, app usage, and other data they receive, they can connect the data and identify that it's me. And what security has Apple provided till now?

"You shouldn’t probably block ocsp.apple.com with Little Snitch or in your hosts file."

That's far better than a frozen computer that doesn't work and doesn't run any apps. If I don't need Apple's mercy and protection, please don't force them on me.

Already installed Linux, and it's a start.


Yeesh, "It's not THAT bad, it ONLY leaks the developer of every app you open, via cleartext. Oh, and it cripples your offline software when someone spills coffee over Apple's servers"

This is the reason people laugh at this website.


So, not only Apple, but pretty much everyone, can eavesdrop on the HTTP request and find out which developer I'm running apps from?


Being able to identify the developer of any app I run on my own machine is already too far. You have to assume all these requests are logged and available to state actors on legal demand.

I wonder how big a local revocation list would be. I would support an on-by-default local check.


A caveat to blocking ocsp.apple.com is that I discovered Apple is running more than one service on that domain.

http://ocsp.apple.com/ocsp-devid01 is Developer ID, but http://ocsp.apple.com/ocsp03-apevsrsa2g101 is something else, which if blocked can prevent the Mac App Store from loading.


So Apple sends an app-developer identifier in clear text each time you open an app? That sounds really bad.


Has anyone used a pi-hole to block apple privileged servers, like the OCSP one, while running Big Sur? I'm thinking of setting one up---not necessarily to block OCSP, because the points in this post about actually wanting to know when a certificate has been revoked are sensible---but to at least have the option in case of another disaster...

Relatedly, does anyone know if Big Sur allows one to use a custom DNS server on the device level with those privileged destinations? (He says, mulling the complexities of getting a pi-hole working with his mesh system.)


Vendors are already baking DoH into their apps and systems, and that bypasses your DNS servers altogether.

I went from blocking about 45% of my entire network's traffic at the DNS level two years ago, to only blocking 10% of the traffic today.


But how are those apps finding the DoH server's IP then?

If they use public DoH servers you could just block those at the network level. And if they're running their own DoH service on a fixed IP, they could simply run the app itself over that IP and avoid the whole DNS lookup altogether.


> But how are those apps finding the DoH server's IP then?

I don't know, I haven't dug deep enough to find the answer for myself.

However, after blocking Google's DNS servers on my network and designating my own DNS servers via DHCP, my Chromecast ceased to function, and certain Android apps that serve ads had functionality that ceased to work correctly. That leads me to believe that apps and systems with DoH baked in are actively hostile to mitigations against their DoH implementations.


Custom DNS servers are available on Big Sur. My home network uses pfSense as a gateway for the LAN, which gives more options for blocking outbound connections or routing them through a VPN based on certain conditions.

https://www.pfsense.org


Not sure whether the non-privacy-related aspect of OCSP is less worrying. Officially Apple does this to protect innocent users from malware, but as we've seen, it also allows them to remotely disable any developer's software. Not really something that I'd want on my machine.


I guess a super obvious question is, why do they do this instead of having a robust antivirus ecosystem?

I mean I guess I already know the answer, "marketing". "Look, macOS doesn't require antivirus!"

Personally I don't want Apple verifying or revoking anything. I bought the computer, it's mine. You don't get to tell me what I can run, period. Inform me, sure, give me links to go learn why you don't want me to run something, sure. Don't prevent me from choosing to do with my machine what I want.


> I guess a super obvious question is, why do they do this instead of having a robust antivirus ecosystem?

Enumerating “all possible badness” is basically impossible, which is why AV software really doesn’t work. Every ransomware attack you read about in the news bypassed up-to-date AV software.

Enumerating “known-good” entities is actually a tractable problem... this is what vendor-signing does. Even Google and Microsoft understand this and have had code-signing infrastructure in place for decades.


OCSP also allows CAs to revoke random websites’ certificates, yet nobody is making a big fuss about that (presumably because no OCSP server has encountered what Apple’s did and prevented websites from opening).


Yeah but the thing is that there are many CAs. The main problem is (IMHO) when you have a single party with conflicting commercial interests that controls all certificates for a given platform.


Your reasoning is wrong.

Other than Internet Explorer (and maybe Edge? I honestly have no idea) browsers don't do OCSP. This is because it's a huge privacy problem (as we saw here for Apple) and because the OCSP servers have too often been unreliable.

Firefox has OCSP Must Staple, but in that scenario the remote web server is responsible for periodically ensuring it has a sufficiently up-to-date OCSP response about its own certificate which it then "staples" to the certificate to prove its identity. So if the OCSP server fails for an hour a good quality stapling implementation just keeps using older responses until it comes back. Also it's optional, most people haven't chosen to set Must Staple anyway.

Everybody else has various CRL-based strategies, so your browser learns about certain important revocations, eventually, but it doesn't pro-actively check for them on every connection and thus destroy your privacy.


Are there any statistics on how many innocent users have become victims? Clearly Apple just wants control. As the old saying goes, more truth and less trust is needed.


Software certificates make sense in general I think, but there shouldn't be just a single party that can grant and validate them.


ITT: Arguing what semantics to use when whitewashing a massive breach of trust, privacy and security with no officially solicited opt out.


Now just waiting for the trolls to write some software that always makes the response come back invalid. With a wee bit of ARP magic, you could make a bunch of Mac users very unhappy at the cafés.


No joke, this would get a fix published very quickly. I’m embarrassed I didn’t think of it myself.


OCSP responses are signed, so no, that doesn't work.


Evidently it does work if you simply suppress the response.


“It doesn’t send a hash of the app, it sends a thing that is an encoded hash that uniquely identifies the app! Totally different!”

It wasn’t a misunderstanding, it was a simplification so that people could understand the issue without me explaining OCSP and app signing and x509 and the PKI. Dozens of people wrote me to thank me for explaining it in a way that they could understand.

It is indeed a hash, and it does indeed uniquely identify most apps, and it is indeed sent in plaintext, when you launch the app (and is cached for a half day IIRC). I very deliberately didn’t claim it is a hash of the content of the app file.

It also doesn’t send a unique identifier, but I would be willing to wager that the set of apps that you launch in 48h is probably enough to uniquely identify your machine in the vast majority of cases.
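
For the skeptical, a short sketch (assuming the third-party cryptography package and a request body captured off the wire, e.g. with tcpdump; the file name is made up) that parses one of these plaintext requests and prints the serial it identifies:

    from cryptography.x509 import ocsp

    # Hypothetical capture of the HTTP body sent to ocsp.apple.com.
    with open("captured_request.der", "rb") as f:
        req = ocsp.load_der_ocsp_request(f.read())

    # The serial number pins down the developer certificate; map it to a
    # known app once, and every later sighting identifies that developer.
    print(hex(req.serial_number))
    print(req.issuer_name_hash.hex(), req.issuer_key_hash.hex())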


Your text was understood that way because of something in the words you chose; "hash of the application", for example.


By default, Android logs every app you use. You have to disable - bafflingly - features including saving locations in Google Maps and fully-functional voice recognition to (supposedly) disable that behavior. What I'm saying is: don't look so surprised.


True but Apple markets itself as a privacy-first company. Google doesn't.


"By default" is key here. Apple doesn't allow to change this at all, unless you do a hosts file hack.


Why compare a phone OS which is much more tightly controlled to a desktop OS?


You seem to be confusing sharing "usage and diagnostics" with enabling location history.


Turn off usage and diagnostics and try to save a location in Google Maps. Alternatively, open up the usage and diagnostics stats and see just how much they harvest. It's, frankly, a ridiculous non-sequitur on the part of Google.


It is already off. I've always had it off. I have multiple locations saved in Google Maps.


Web & App Activity: "Saves your activity on Google sites and apps, including associated info like location, to give you faster searches, better recommendations, and more personalized experiences in Maps, Search, and other Google services." https://support.google.com/websearch/answer/54068


> As you probably have already learned during Apple’s OCSP responder outage, you can block OCSP requests in several ways, the most popular ones being Little Snitch

Uninformed advice: Apple prevents Little Snitch from blocking this traffic in Big Sur.


Keep reading.

>If you use macOS Big Sur, blocking OCSP might not be as trivial.


>> you can block OCSP requests in several ways, the most popular ones being Little Snitch

> Uninformed advice: Apple prevents Little Snitch from blocking this traffic in Big Sur.

You can prevent Apple from preventing Little Snitch from blocking that traffic: https://tinyapps.org/blog/202010210700_whose_computer_is_it....


Wouldn't it be hilarious to MITM these requests at an open hotspot and basically cripple everyone's Macs while connected?


I get that a dev cert isn't the same as identifying the software itself... but that only applies for developers that have multiple apps, and I suspect most do not.

Then unencrypted requests are also a Bad Thing, because anyone has access to the same info - it may require a lot of work to get general knowledge of what apps someone is using, but if you were looking for a specific one then I don't see any real difficulty identifying that.

e.g. if I wanted to know whether someone was using Signal, I'd just look for the Signal cert being queried. That's a much easier problem, and can be dangerous to the end user.


The fact that this was included in the OS without raising alarms is quite revealing in terms of privacy concerns.

Question: is there a good justification for not using hierarchical certificates like web browsers or other OSes do?


Apple's decisions about OCSP reveal two indisputable facts that contradict Apple's marketing:

Apple does not prioritize privacy. Apple does not prioritize availability.


Good write-up.

I write a lot of Go on my Mac at home. The first run is _always_ slow, but I've never measured it or bothered to find out why. This is a real "lightbulb moment" for me.

I just built a Go executable and timed it: 0.194s for the first run, and ~0.018s for subsequent ones. I haven't signed code on Mac platforms before, so I figured I'd give it a go using the Apple code signing guide [0]. So, I created a self-signed certificate using Keychain, changed and built a Go project, signed the executable [1], and ran it: ~0.400s for the first run, and ~0.018s for subsequent ones. It... doubled? Will this happen on every first run still? Is there a way to exclude executables?

[0] https://developer.apple.com/library/archive/documentation/Se...

[1] codesign -s <cert_id> <path>


I worked at a major antivirus company. This was the same basic technique: we would download a list of all MD5 hashes, and all executables would have to match against it.

Periodically there would be an issue downloading the updates, which would result in similar problems.

Managing the size of updates was a big issue. Just checking against an online server is certainly a more up-to-date approach.


In my opinion, this seems like Apple, once a computer company that catered to computer users and their expectations, is now a mobile phone company catering and responsive to the lower expectations of phone users. Engineering these plaintext surveillance communications over the public internet, between a user's private computer and the company responsible for building that computer, is like my home informing the company that built it every time I start any unique activity while inhabiting it, as long as I hadn't been engaged in that activity for some amount of time. It's extremely disrespectful to Apple's users, who are also Apple's customers, who are also mostly all of us on this message board. My goal is to one day grow a backbone and stop putting up with this.


Apple has always been a gated community, but now there’s a guard at the gate checking everything that goes in and out. This is something most users probably don’t want. It has me personally considering what a future without Apple would look like.


I’m more and more convinced that I’ve got to learn and find a way to make Linux work for me.


They’re not mutually exclusive. I have several macs and several linux machines. One of the linux machines (my router) even keeps the macs safe and (relatively) trustworthy.

It’s a good thing to do regardless. I also know way more than I want to about Windows, too, and could make do if given only that for a workweek.

Learn languages, learn OSes, learn architectures. Get more computers. :)


Check out these posts I wrote about transitioning to Linux from macOS and feeling at home[1].

I made the switch and I'm not looking back.

[1] https://news.ycombinator.com/item?id=23607374


The guard is also shouting brand names of things you carry in your bag.


It's literally called Gatekeeper lol


This still allows Apple (and ISPs, employers, etc.) to correlate very sensitive information: developer certificates and IP addresses. Plenty of developers only create one application, and most Macs will be used most frequently on a small number of (ranges of) IP addresses. In essence that still lets Apple see way more than a self-proclaimed “privacy conscious” company should.

Why not take a more privacy-centric approach? Antivirus companies have been working with “virus definitions” for ages. Ad blockers use the same model, but for locally stored blacklists. Why can Apple not regularly download a list of revoked certificates and maintain it locally?


I have always been annoyed by OCSP being HTTP. It is really the fault of the standard that this is the way we revoke certificates. I basically agree that Apple should just be downloading revoked certificates and checking them locally. This is what we are doing at various SaaS companies that have to check these in order to avoid downtime. We have also mistakenly failed closed. We now default to fail-open, but customers have the option to change that if they are paranoid.


Very curious: what's opaque about a unique developer ID of an app you start? Sure looks like gaslighting.

"You should be aware that macOS might transmit some opaque information..."


Whatever happened to letting the user decide which application they wanted to run? Now the mothership has to give its blessing before letting you run it... sounds insane.


Out of the loop: How does this compare to MS Windows telemetry?


The request obviously sends lots more information than just the serial number of the developer certificate. Is it "harmless" data or could they have more info about the executable in there?

Why doesn't the author post the OCSP request for Thunderbird too? And how about another request for Firefox so we can compare the data? This article really doesn't clear anything up for me...


> Maybe the hash is computed only once (e.g. the first time you run the app) and it is stored somewhere.

This would explain why some games take minutes to launch the first time you run them. I've experienced this many times with Steam. You install a game, you launch it, and nothing happens for up to several minutes, and then the game runs. No delays in launching after that.


This behavior drives me crazy. The only way to figure out what's going on is to open the Activity Monitor. On my 2015 iMac (top-of-the-line, at the time) initial launch of some large games has taken tens of minutes, and it happens whenever the game is updated, not just after it is initially installed.


Technically AFAIK, the revocation list could be turned into a Bloom filter (or one of its alternatives) and updated from the servers periodically.

edit: on 2nd thought just a list of hashed cert ids could suffice because it is hard to imagine there ever being thousands of revocations.

That way the provider would have no knowledge of which certs are being verified.


I see no reason why OCSP checks on developer certificates cannot be encrypted. This whole "oh no, there could be a loop for an SSL cert check" argument seems like gaslighting. Why can't the client know whether it wants to access an OCSP server using HTTP or HTTPS, and default to HTTPS when possible?


The idea that sending information about the cert is somehow not exposing the app is crazy. An attacker could easily download apps and sniff the network traffic to correlate cert info with an app.

Also I don't get the argument for using HTTP. Aren't these two separate systems?


There is a local record that manages your app Screen Time (App Settings -> Screen Time), but I did not imagine hashes were being sent.

How does one go about setting up an (easy) server of some sort to see which servers are being connected to, say, when investigating a different area?


If this is just a matter of revoked certificates, Apple could very easily set up a subscription for developer certificates on the machine when an app is installed. Why wait until an app is launched to check whether a certificate is revoked?


Because it allows them to revoke certificates at a later time. E.g. Epic Games' certificate was revocable after it was noticed that they had built in something that was not supposed to be allowed.


It feels like this is the same type of response we saw with Windows when they were forcing updates, etc.

They backpedaled a little when people semi-rioted over being forced to log in through a Microsoft account, but they came back pretty strong.


Are there any pi-hole settings to prevent Apple from phoning home? And the same for Windows 10? I can't trust my computers any longer so I need to rely on external enforcement.


To be generous, Apple has unwittingly created an app use surveillance possibility. All from the idea of developer certificates and diligent revocation checks.


We need a version of Little Snitch that allows these reports to reach Apple, modified so the app appears to always be "Go fuck yourself".


Seems like the solution is to just put a short timeout on the OCSP call and fail positive? Nets the same behavior as when you’re offline.
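
A minimal sketch of that behavior (the URL, timeout, and parse helper are all hypothetical): any timeout or network error is treated the same as being offline, i.e. not revoked.

    import urllib.request

    def parse_says_revoked(der_bytes):
        # Hypothetical parse step; real code would decode the OCSP response.
        return False

    def cert_revoked(ocsp_url, request_body):
        try:
            req = urllib.request.Request(
                ocsp_url, data=request_body,
                headers={"Content-Type": "application/ocsp-request"})
            with urllib.request.urlopen(req, timeout=2) as resp:
                return parse_says_revoked(resp.read())
        except Exception:
            return False  # fail open: outages behave like being offline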


That’s what they do.


Do they bypass/circumvent firewalls doing that? That's the question you have to ask, not what they send (now).


I feel this is irrelevant. Apple offers app analytics to developers, which means app usage data is going to Apple anyway.

https://developer.apple.com/app-store-connect/analytics/


App analytics is opt-in, and requires explicit advance consent.


I would be surprised if Apple doesn't make changes to this system after this incident.


If anyone is concerned with OCSP activity and verifications being requested all over the web, then oh boy, stay away from HTTPS.

OCSP is a good thing, and the web - and your signed applications - are better off with it.


Everybody knows that when they request a website, their action can be logged. The opposite is true of desktop apps.


OCSP which fails open combines pointlessness with terrible privacy. It's why Mozilla is moving to CRLite for privacy-friendly revocation.


I guess you haven't heard of OCSP stapling? https://en.wikipedia.org/wiki/OCSP_stapling

Active OCSP is far from being considered a good thing universally.


Yeah, I feel like I'm taking crazy pills; did everyone just not know about OCSP until Apple did it?

Spoiler alert, you've probably already used OCSP on the web.


Historically speaking, OCSP was invented in a world where almost all DNS requests were also in cleartext. So if an attacker can observe DNS requests, then it's already "game over", and the cleartext of the OCSP request is almost redundant at that point.

It's worth noting a couple differences between HTTPS OCSP and Developer ID OCSP. First, with Developer ID, the only DNS request is for ocsp.apple.com, so the DNS request by itself doesn't expose any information about the Mac app being launched, unlike with HTTPS.

Second, the caching of Developer ID OCSP responses tends to be much much shorter than for HTTPS. Prior to Thursday's outage, the standard cache length for Developer ID OCSP responses seemed to be 5 minutes. (Apple seems to have raised it to 12 hours now.) In contrast, I just checked the latest response in my OCSP cache, which was for http://ocsp.digicert.com, and its validity is 7 days. So the rate at which Developer ID OCSP requests are made seems to be much higher than for HTTPS, and thus there's greater chance of exposure.


Most of the people affected by the issue have no idea what OCSP is.


Most browsers are dropping OCSP because of the privacy issues and the triviality of blocking it. Did Chrome ever do it?

That’s why CT came around.

Some background for those unfamiliar.

https://scotthelme.co.uk/revocation-is-broken/


Chrome uses its own CRL, which pulls from OCSP

https://medium.com/@alexeysamoshkin/how-ssl-certificate-revo...

Although OCSP stapling is used more now IIRC.


Chrome uses CRLSet, which generates a cut-down CRL when the browser is updated; I don’t see any interaction with OCSP.

HN doesn’t set OCSP Must-Staple, so we’re still a while away from being able to trust it.


Yes.


[flagged]


People in this thread get downvoted for just paraphrasing what the article is talking about, which is unfortunately defending Apple.


[flagged]


Which is still sufficient information to narrow down to the set of applications developed by a single entity. And because this is being done over HTTP, anyone along the network chain has visibility as well.


Agreed, this should be sent encrypted, obviously. My point was that the intent here might not be to "snoop" on users, as even the author points out by comparing his analysis with what Jeffrey Paul's article reported ("[...] that’s quite an important difference on a privacy perspective") but likely to efficiently handle certificate revocation. Hopefully they will find a better way.


It's called plausible deniability, and it's how the frog is being boiled slowly.


It clearly shows that Apple is getting fed the dev certificate info for each application being launched.

For developers with multiple applications, then sure, that's not going to be as clear as individually identifying the application.

But there are plenty of developers around with just one popular application. Sending the dev certificate for them is effectively the same as sending the application hash itself.


They already know they exist (they sign them), most of those are downloaded via the App Store (they run that), and people tend to log in using iCloud (which they own).

I get it, we're all supposed to trust nobody and have 7 billion independent islands where you don't have to trust anyone or work with anyone.

I have not seen any solution, just people piling on. Having PKI and signatures using a central authority is the least-worst solution we have right now, and until something better is created we don't really have a lot of places to go (unless we accept downgrading common user's security and usability).


I'm not sure what you're getting at. ;)

"They already know they exist ..." doesn't really seem to match up? Like, of course they do.

Anyway, I was just pointing out that the communication still seems pretty close to sending Apple the list of applications being run. At least, for applications created by dev's with only one major program for their certificate.


A solution would be allowing the user to turn this off. And more importantly, to allow firewall apps to manage all network traffic instead of excepting apple's.


So opaque that this journalist figured out what it is and what it stands for in a few hours.


A low-effort comment.

Doubly bad, because the actual post is about how Apple doesn't actually do the "tracks your every use of an app" peeping the original post that made all the fuss says they do.


Learn about Big Sur(veillance). You can't block telemetry and it bypasses any VPN.


I've learnt about it. I've also commented about it in other threads here.


Well, you can block it. You need to disable SIP and edit a plist.


Quick question on that. Has anyone tried disabling SIP (csrutil disable), allowing the Little Snitch kext (spctl kext-consent add MLZF7K7B5R), and just using Little Snitch 4.6 in Big Sur?


And then you need to do it again for every update you receive and create a new signature etc.


I guess we’ll have to see if the file gets reverted in updates. I don’t think that’s a given?

I too am not happy about the root snapshot thing...


That doesn't matter. The total signature is still different from factory default, so if any change is made to any other file it will be different and need to be updated :)

What's being signed is a hash of all the individual file hashes, so any file being different from stock will mean a difference, whether it was changed in the latest update or not.


I didn't realize that still mattered once you'd disabled authenticated root...


Not really the question I asked. Also keep in mind that when you disabled SIP in 10.15, none of the (at least minor) upgrades reverted the change.


GP didn’t reply to your comment, they replied to mine. :)


Seems to work fine here; nothing bypasses my VPN, nor does it bypass my firewall. Perhaps that's because my VPN and my firewall aren't running inside the computer but external to it, as it should be.


The direction we are going is built-in cell comms for all devices; good luck firewalling that.


tin foil -- not just for hats anymore


tldr: no, but yes.

It logs app certificate requests, which in real life is pretty much equivalent to logging app runs. And that line about only calling the server from time to time is bullcrap. I have years of experience with this issue because my internet is pretty shitty. And that "from time to time" is every couple of hours.


We have been warned and ignored the warning: https://prism-break.org/en/


There are two issues here. One is the privacy problem which I agree is not quite as bad as some think. The second is the stupid fact that if some server goes down you can’t launch apps. That is just awful.


I think it would help if someone could quote or reference Apple's official position / explanation on this (if there is one).

You know, before declaring the end of the world, is there any information from the source (Apple)? Discussions here seem to have had several thousand comments without obtaining this basic info. It would be good to know, I would think?


Apple rarely posts their official positions on things.



