macOS has checked app signatures online for over 2 years (eclecticlight.co)
614 points by giuliomagnifico on Nov 25, 2020 | 430 comments



A common refrain in arguments that we don't need laws to protect privacy is that the market will take care of it. The market can't act against what it can't see. Privacy loss is often irreversible.

A common refrain in arguments that we don't need to reject closed source software to protect privacy is that being closed source doesn't hide the behaviour, and people will still notice backdoors and privacy leaks. Sometimes they do, sometimes they don't.

Parts of the US Government's unlawful massive domestic surveillance apparatus were described in the open in IETF drafts and patent documents for years. But people largely didn't notice and regarded reports as conspiracy theories for a long time.

Information being available doesn't necessarily make any difference and making a difference is what matters.


The market only acts fairly when the product is a commodity. The time for the market to react for a product with the complexity of a mac is decades.

As the ecosystem grows, the cost of switching increases. Therefore the market starts acting more and more inefficiently.

This is why countries have state intervention in such cases. And why antitrust exists.

If the option was a mac with privacy vs a mac without privacy but $10 cheaper, I'd think the market would pick the mac with privacy. No such choice within the budget exists, and if a company has a monopoly on the ecosystem, then the market cannot react when held hostage. The cost to transition to a different ecosystem is thousands of dollars when you've been using a mac your entire life. And you don't exactly want to relearn everything as you get older; call it the sunk cost fallacy if you like.

It's a bundled deal; you can't pick the parts you like and throw away the ones you don't, the way you could in an efficient market.

If even someone like me cannot be bothered to move to more privacy-friendly platforms, then I don't have hope for other people. It's just not worth it. Until the majority face the consequences of their apathy, any action you take is fruitless. Let them taste the pain, then offer the solution.

I know I can be tracked. So what? I can't verify the silicon in my device, can I? Gotta trust someone, maybe blockchain will solve the problem of trust among humans, in a verifiable way.


>maybe blockchain will solve the problem of trust among humans

Absolutely not.

https://www.schneier.com/blog/archives/2019/02/blockchain_an...


There's so much wrong with this post I'm not even sure where to start. Literally almost every paragraph starts with something untrue. The whole article is written from a false understanding.


If you’re going to claim Schneier is wrong on crypto stuff, you’ll want to bring a suitcase of evidence along if you want people to take your claim seriously.


Well, he does think that one is unable to establish the integrity and authenticity of a message by using DKIM, so...

He does seem to have a weird cult of personality around him but I can't really understand why. He has been irrelevant for quite a while now.


If you’re going to claim Schneier is wrong on crypto stuff, you’ll want to bring a suitcase of evidence along…

How about the $348 billion that says he's wrong about his take on Bitcoin?

Look, I get it. I respect Schneier's knowledge on encryption but he is wrong about blockchains and about Bitcoin in particular.

But he wouldn't be the first establishment technologist/economist/politician to be wrong about Bitcoin.

As I write this, the market cap of Bitcoin is a little over $348 billion [1]; there's no way it would have reached this valuation if its distributed trust model didn't work.

He's making social and process arguments, not technical ones.

[1]: https://bitbo.io


> How about the $348 billion that says he's wrong about his take on Bitcoin?

Market bubbles are a thing. For a while there everyone was convinced that small plush toys were going to help them retire. The market also convinced itself for nearly a decade that "housing prices never go down". Lots of people can be wrong for a surprisingly long time.


Market bubbles are a thing. For a while there everyone was convinced that small plush toys were going to help them retire.

Yes, bubbles are a thing, but Bitcoin appears to be something different. It's passed every test and attack. It's now being taken seriously by mainstream financial professionals, CEOs of publicly traded companies and Wall Street.

These facts by themselves can't prove that Bitcoin is not a bubble, but it does mean that if you buy into that line of thinking, then encryption and math have to be suspect as well, since Bitcoin's functioning relies on them.

Also, Bitcoin has been the best performing asset of the past 10 years in which most people didn't take it seriously: https://www.bloomberg.com/news/articles/2019-12-31/bitcoin-s...


> Also, Bitcoin has been the best performing asset of the past 10 years in which most people didn't take it seriously

You're literally describing the bubble. An asset that is worth $20k one day, and worth $6k 6 months later, is worthless to anyone except speculators, speculating on... a bubble.

Disclaimer: I am long BTC.


You're literally describing the bubble. An asset that is worth $20k one day, and worth $6k 6 months later, is worthless to anyone except speculators, speculating on... a bubble.

This is short term thinking and a general mischaracterization.

First, we've never seen a new form of money created in realtime, so it's hard to say how it's supposed to perform.

However, nobody should expect something that will fairly soon have the same market cap as gold (about $10 trillion) not to have a lot of volatility as it grows. The tech darlings of today—Apple, Google, Twitter, etc. were also quite volatile as they grew.

There were a lot of ups and downs for Apple as it went from darling startup to nearly going out of business in the mid-90s to a $2 trillion market cap today.

As you may know, the mantra in the bitcoin community is to HODL—hold on for dear life, not to time the market. For long term investors, the ups and downs don't matter.

A publicly traded company that puts its treasury of $425 million into Bitcoin isn't speculating: https://www.microstrategy.com/en/bitcoin.


> First, we've never seen a new form of money created in realtime, so it's hard to say how it's supposed to perform.

This hasn't changed with Bitcoin. BTC is money like a gold bar, beanie baby, or block of IPv4 addresses is money.

> However, nobody should expect something that will fairly soon have the same market cap as gold (about $10 trillion) not to have a lot of volatility as it grows. The tech darlings of today—Apple, Google, Twitter, etc. were also quite volatile as they grew.

Nobody describes the tech darlings as money, or as an alternative form of currency. They're equities you can invest in, and they come with the expected volatility.

> As you may know, the mantra in the bitcoin community is to HODL—hold on for dear life, not to time the market. For long term investors, the ups and downs don't matter.

It's nice that there's a backronym that's been created from a typo. I still don't invest in money, I use money to invest in assets.

Disclaimer: I remain long bitcoin (since 2012)


> First, we've never seen a new form of money created in realtime.

Call me when I can buy my latte and groceries with bitcoin, or pay my mortgage. Until then you've made an investment vehicle, not money.

> The tech darlings of today—Apple, Google, Twitter, etc. were also quite volatile as they grew.

Survivorship bias. Pointing to a handful of random successful companies and trying to draw conclusions is literally meaningless.

> As you may know, the mantra in the bitcoin community is to HODL—hold on for dear life, not to time the market. For long term investors, the ups and downs don't matter.

Yeah, this isn't something you actually say about money. Honestly, it would be a kind of cultish thing to say about an investment too. I don't need a mantra for how I invest in my 401k, why does Bitcoin need one?

> A publicly traded company that puts its treasury of $425 million into Bitcoin isn't speculating

That's the literal definition of speculation, a risky one at that.


Schneier is good at times, but he doesn't always do as much due diligence as he should, and he never bothers to interact with his comment threads, answer questions, or publish retractions.

As a result, I've had more than one experience of arguing with people who take his old blog posts as gospel without ever thinking critically about them.

His article denouncing the XKCD password scheme is a gem: he doesn't bother to calculate the entropy (if it were anyone else I'd be less charitable and say he doesn't know how), and then he proposes an alternate scheme he invented that almost certainly provides less entropy and is more vulnerable to dictionary attacks.
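
Back-of-the-envelope, the entropy is trivial to check yourself. A minimal sketch, assuming the comic's setup of four words drawn uniformly at random from a 2048-word list (the roughly 44 bits the comic cites), plus a Diceware-style list for comparison:

    import math

    # k words drawn uniformly at random from an n-word list give k * log2(n) bits,
    # even if the attacker knows the word list and the scheme.
    def passphrase_entropy_bits(words_in_list, words_chosen):
        return words_chosen * math.log2(words_in_list)

    print(passphrase_entropy_bits(2048, 4))   # 44.0 bits, the comic's figure
    print(passphrase_entropy_bits(7776, 5))   # ~64.6 bits for a 5-word Diceware phrase

The entropy comes from the random draw, not from the obscurity of the scheme, which is exactly the point the comic was making.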


I’m actually curious what’s wrong about it? I read it from an outsider perspective and it’s full of very convincing arguments against “blockchains.” You’re absolutely correct that he’s writing from his understanding, but Schneier’s been in the field for decades (longer than many Bitcoin proponents have been alive), so I’m more inclined to believe he knows what he’s talking about than some other random person on the internet.


I think the main reason for the dissonance is that Schneier talks about the trust that happens (and maybe has to happen in real-world scenarios) while the bitcoin community likes to talk about the minimum amount of trust necessary.

You don't have to trust the software, you can verify it or implement your own. You don't have to trust your internet uplink, the protocol would work over carrier pigeons or with dead drops. You don't have to trust exchanges, just exchange bitcoin for local currency with your neighbor. And even if you use an exchange you shouldn't store money there anyways. The minimum required trust is tiny (basically you yourself), but of course as Schneier points out the amount of trust involved in practice isn't nearly as low, and for many people the failure cases are much worse.


> You don't have to trust the software, you can verify it or implement your own.

This is a common refrain among software people, but in reality approximately 0% of the market actually rewrites such software on their own. Most people can't code, and the percentage of those who have the time, interest, and specialized programming skills to rewrite their own financial software is an incredibly tiny pool. In practice, you have to trust someone's code.

And auditing doesn't get you very far either. That takes time too, and auditing secure software is really hard. I doubt I could do it, personally.

> You don't have to trust your internet uplink, the protocol would work over carrier pigeons or with dead drops.

Be serious. Nobody does this, aside from maybe as a joke.

> You don't have to trust exchanges, just exchange bitcoin for local currency with your neighbor.

You don't have to, yet everyone seems to. Convenience matters, pretending to the contrary is a fool's game.

> And even if you use an exchange you shouldn't store money there anyways.

Crypto fans say this all the damn time, and yet nobody seems to listen. Maybe because it's foolish advice to give? While this advice is technically strong, it fundamentally fails to understand what people using any financial system want and what motivates them. Nobody wants to turn the process of moving money around into an elaborate song and dance; they want to click a few buttons and have it work. If your system counts on telling people to avoid the most convenient option to accomplish the task, your advice will continually fail.


I just want to check to see if you read the GP all the way through to the final sentence. You're tearing it apart piece by piece as though the author was making those claims, but you neglected to quote the part of the post that revealed their true opinion.

> The minimum required trust is tiny (basically you yourself), but of course as Schneier points out the amount of trust involved in practice isn't nearly as low, and for many people the failure cases are much worse


My mistake.


> If your system counts on telling people to avoid the most convenient option to accomplish the task, your advice will continually fail.

Louder please - this is applicable far beyond cryptocurrency.


This is pretty much how I interpret it as well, in regards to trust.

One detail Schneier misses though is:

> Honestly, cryptocurrencies are useless. They’re only used by speculators looking for quick riches, people who don’t like government-backed currencies, and criminals who want a black-market way to exchange money.

The second statement contradicts the first. People who don't like/trust government-backed currencies aren't fanatics. The way the big banks handle money is quite reckless and dangerous, as we've seen time and time again. And buying drugs for your own personal use should be legal (IMO) but is not. Cryptocurrencies are useful. Maybe not to most people, or to Bruce Schneier, but that (blanket) statement is simply false.


This just feels like moving the goalposts, though. Bitcoin has been pushed as this revolutionary thing that's going to fundamentally change currency and payments for everyday people. It hasn't. It likely won't. Maybe some other blockchain-based currency will at some point, but I'm skeptical of that claim.

Beyond that, Bitcoin as a simple store of value is an incredibly risky proposition. Someone upthread posted a link to an article claiming that Bitcoin has been the best performing asset over the last 10 years, but that ignores the gut-wrenching volatility it's gone through.

And even if you stretch to suggest that Bitcoin has been a good investment, a good investment vehicle is generally not a great currency. If I had a $100 bill that, a week later, was worth $50, I'd be pissed. If another week later it was worth $200, I'd be pleased, but would feel very uncomfortable thinking I can rely on that "currency" to continue to pay my bills over the long term.

I'm not saying Bitcoin has failed or that it's useless in general, but as a generic currency replacement I'd say it's pretty weak and unreliable, mainly only suitable for use by people at the margins, people who often can't bear the risk that Bitcoin foists upon them.


Not arguing any of that. Bitcoin is not very useful as a currency. But I trust bitcoin as a concept more than I trust USD, which is currently being devalued/inflated in favor of the stock market, solely for the gain of people who are more well off than most. I'm not a doomsday prophet nor an economist but the future of the world economy is not looking great.

Bitcoin was created as something that governments/banks/corporations couldn't control or manipulate and that's something that's worth a lot in itself, I think.

And as I mentioned Monero is quite convenient for shopping on the dark web, without government intervention.

If you trust your government, banks, politicians, and agree with all the laws and taxes, then yeah, you might truthfully state that cryptocurrencies are useless. But some believe and argue that governments shouldn't have that kind of control and people should have a higher degree of freedom and privacy.

This went a bit off topic perhaps.


Can you please explain what those untrue statements are and what his false understanding is?


The “false understanding” is most likely how he starts at the conclusion of “Bitcoin == bad” and works from that instead. But both types of essays are ok. The former is just an “argumentative” or “persuasive” essay, while the latter is akin to a “compare and contrast” one.


> If the option was a mac with privacy vs a mac without privacy but $10 cheaper, I'd think the market would pick the mac with privacy.

I'm curious about this, and there are some ways we could test it if we had access to sales data.

Amazon sells a Kindle that displays ads on the screen while the device is asleep, and a more expensive model without ads. How many people buy the one without ads? (I'm assuming the device sends back a record as to what ads were seen or interacted with, which has privacy implications.)

Hulu has a plan that shows adverts when you watch a show, and a more expensive plan that doesn't. How many people pay for the more expensive plan?

More generally, what's the price spread that would cause most people to flip from buying one version or the other? To use your example, I'm sure most people would pay for a "mac with privacy" if the difference (on a multi-hundred-dollar or $1k+ product) was only $10. But what if the difference was $50? $100?

And that's not even the exact same example, since a "mac with privacy" might look, to the average user, to be identical in operation to the "mac without privacy". Paying more to avoid advertisements has a clear impact on the experience of using a device/service. But knowing that your $X-more-expensive mac isn't sending application launch data to Apple doesn't change your day-to-day experience much.


Apple buyers are in the premium price segment. The Kindle Fire or whatever is for the most price-sensitive segment.

Companies pay lots of money per user to secure their computer work. They could save a lot on licensing by running obsolete software. Companies that do that are hacked, and sometimes mortally wounded. I'm not so sure users price privacy so cheaply.


> The time for the market to react for a product with the complexity of a mac is decades.

This makes me think of the 2 slit experiment as applied to basketballs. There is a period of time that decisions need to be properly considered, presumably simple decisions need little time, and complex decisions need more time. There is also a period of time that is required to make a decision, and a level at which the decision is made.

There are lowly software engineers making decisions like this, some of which may not even percolate to daily stand-up, yet will require the Supreme Court to unpack 10 years from now. And no one really knows.

It's like trying to pass a basketball through a slit. Yes, there's a theoretical interference pattern, and you can calculate it, but you can't do the experiment because you can't pass the basketball through a slit that small (in the case of basketballs, if I recall, the largest slit is angstroms if not smaller).

So in software development, you've got huge uncertainty in the societal implications of some decisions about what information you're going to pass over the network, but you can't even get them all through daily stand-up, let alone to Congress or the Supreme Court. Somewhere along the way, some of them end up on Hacker News with people misquoting Eric Hoffer, “Every great security decision starts off as a movement, turns into a business, and ends up as a racket.”


> It's like trying to pass a basketball through a slit. Yes, there's a theoretical interference pattern, and you can calculate it, but you can't do the experiment because you can't pass the basketball through a slit that small (in the case of basketballs, if I recall, the largest slit is angstroms if not smaller).

I know this doesn't have too much to do with the core of your post, but I want to mention it nevertheless: QM does not imply that there is an interference pattern for basketballs. There might or might not exist one, but we would need an answer to the question of measurement to predict one way or the other.


> The market only acts fairly when the product is a commodity.

The market only acts fairly in the window after a product is commoditized and before regulatory capture happens.

And in some cases that window is closed before it opens.


Whilst I agree with the sentiment, it does occur to me just how many kindles I see with ads.

Is there any data released on ads Vs no ads versions?

That's the closest comparator I can think of.


I've never had a non-eInk Kindle, so maybe it is different for them, but I've had eInk Kindles both with and without ads. My first was without ads. Since you can add the "without ads" option to a "with ads" Kindle later by paying the difference, I bought my second with ads to see how bad it was.

Here are the only differences I've noticed:

• With ads, the sleep screen displays some artwork from a book Amazon is selling [1] and some text suggesting you come to the store and buy books,

• The home screen has an ad at the bottom.

Since when I'm not using the Kindle the sleep screen is hidden behind the cover, and when I am using it I'm somewhere other than the home screen 99.9% of the time, I never saw any reason to add the "without ads" option, and when it was time to replace that Kindle I again went with ads.

I suspect that the vast majority of people that buy the "no ads" option up front do so because when they see "with ads" they are envisioning a web-like experience where the ads are all over the place and intrusive and animated and distracting.

[1] Usually a romance novel for me, even though I've never bought any romance novels from Amazon, nor anything even remotely like a romance novel. In fact, I don't think any book I've bought from them even had any romantic relations between any of its characters.


On the eink Kindle, I paid for ad removal just on general principle. I probably could have lived with the lock screen ads, but the home screen ad was unacceptable. The vast majority of my book purchases over the past ten years have been ebooks. I’ve gone a bit sentimental in thinking of the book list as my library. And I didn’t want a billboard of any sort in my library. Real or virtual. Have a feeling there are at least dozens of us who are that allergic to ads.

I have also purchased a Fire during a Black Friday sale. Got rid of the ads on that one too, but for free since some nice person over at xda made an automated process for that. On a side note, with Termux it isn’t entirely horrible as a 7 inch laptop when paired with a hinged keyboard case. A portable system at a similar price to a Pi.


Heh. With ads, this was always one of the ways I could embarrass my Bride. You are reading '50 Shades of Grey' or whatever trashy novel shows up on the cover of her gadget. Always makes her blush.


> Whilst I agree with the sentiment, it does occur to me just how many kindles I see with ads.

> Is there any data released on ads Vs no ads versions?

Do they offer a tracking vs no tracking option too? The absence of adverts does not mean the absence of tracking.


The tracking is somewhat inherent to the software — syncing what page you've read up to in a book between devices (a feature many people find crucial!) cannot really be divorced from having the raw data to create server-side metrics about people's reading habits.

Even if you E2E-encrypt each user's data for cloud storage and have devices join a P2P keybag, a la iMessage, consider the ad-tech department of your same company as if they were an external adversary for a moment. What would an external adversary do in that situation? Traffic analysis of updates to the cloud-side E2E-encrypted bundle. That alone would still be enough to create a useful advertising profile of the customer's reading habits, since your app is single-purpose — the only reason for that encrypted bundle to be updated is if the user is flipping pages!

And, together with the fact that your ad-tech department also knows which books the customer owns (because your device can only be used to read books you sell, and thus books you have transaction records for), this department can probably guess what the user is reading anyway. No matter how much your hardware-product department tries to hide it.
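
To make the traffic-analysis point concrete, here's a toy sketch with made-up timestamps (not any real product's telemetry): even if every sync blob is opaque, the upload times alone cluster into reading sessions.

    # Toy traffic analysis: given only the times at which an encrypted sync blob was
    # updated, recover reading sessions, assuming roughly one update per page flip.
    def sessions(update_times, gap=300):
        """Group Unix timestamps into sessions separated by more than `gap` seconds."""
        times = sorted(update_times)
        groups, current = [], [times[0]]
        for t in times[1:]:
            if t - current[-1] > gap:
                groups.append(current)
                current = [t]
            else:
                current.append(t)
        groups.append(current)
        # (session start, session end, number of observed page flips)
        return [(g[0], g[-1], len(g)) for g in groups]

    print(sessions([0, 40, 95, 150, 7200, 7260, 7320]))
    # [(0, 150, 4), (7200, 7320, 3)]  -> two sessions, with rough reading speed for free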


My position on data privacy is slightly different from other people on this site—unlike some, I don’t care all that much if Amazon knows my reading habits, but I do not want it to show ads, provide recommendations, or otherwise tailor my experience based on what it knows. I’m concerned that this “tailoring” puts me in a filter bubble, and locks me in to a narrow set of preferences for the rest of my life.

In this regard, Amazon is among the worst of the big tech companies that I interact with. Google lets me turn off personalization—when I go to Youtube, I see an extremely generic set of recommended videos. But I can’t do it on Amazon.

(I don’t use Facebook, not sure what can be switched off.)


> syncing what page you've read up to in a book (a feature many people find crucial!) cannot really be divorced from having the raw data to create server-side metrics about people's reading habits.

Sure it can. You encrypt the data so the client can read it and the server can't. When you add a new device, you e.g. scan a QR code on your old device so that it can add the decryption key to the new device, and the server never knows what it is.
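
A minimal sketch of that, using the Python cryptography library's Fernet purely as a stand-in for whatever cipher the devices would actually use; the key is what the QR code would carry, and the server only ever stores the opaque token:

    from cryptography.fernet import Fernet

    # Generated on the first device and shared with new devices out-of-band
    # (e.g. via the QR code above); the sync server never sees it.
    key = Fernet.generate_key()

    # Device A encrypts the reading position before uploading it.
    token = Fernet(key).encrypt(b'{"book": "some-book-id", "page": 217}')

    # The server stores and relays `token` but cannot read it.
    # Device B decrypts locally after downloading.
    print(Fernet(key).decrypt(token))   # b'{"book": "some-book-id", "page": 217}'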


That scenario is addressed in the next few paragraphs.


The next few paragraphs were not originally in the post. And you obviously get less information from knowing the user has updated some encrypted data than having the specific page of the specific book. It is also not inherently necessary for the server to have even that information; it could be sent directly from one device to the other(s) or via an independent third party relay (e.g. Tor).


This. Distributed consensus is easy and cheap to implement when all participants can be trusted (as opposed to the Byzantine problem that e.g. blockchain tries to solve). It’s even easier if you’re ok with approximately eventual consistency with a reasonably high probability of convergence. Page state seems like it fits into this category.
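
A rough sketch of what that could look like for page state, assuming devices just exchange (timestamp, page) pairs and do a last-writer-wins merge; with trusted participants this converges without any consensus protocol:

    # Last-writer-wins register for "current page": each device keeps a
    # (unix_timestamp, page) pair and merges whatever it hears from its peers.
    # All replicas converge once they have seen the same updates, in any order;
    # no Byzantine fault tolerance is needed among devices you already trust.
    def merge(local, remote):
        return max(local, remote)   # tuple comparison: newer timestamp wins

    device_a = (1606300000, 210)
    device_b = (1606303600, 217)    # read a bit further, a bit later

    # Merge order doesn't matter (commutative and idempotent):
    assert merge(device_a, device_b) == merge(device_b, device_a) == (1606303600, 217)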


Of course it can be divorced: keep the data client-side, as the Kindle does?


For the non-tablet Kindle, I'm not sure if the differentiation between tracking and ads makes a lot of sense. You (well, I at least) already use Amazon to buy the books that are on my Kindle. I guess they could track how fast I read them. But that seems trivial compared to already knowing what I'm reading.


> Whilst I agree with the sentiment, it does occur to me just how many kindles I see with ads.

True, although the price difference for the Kindle is about 20%. If the discount on a Macbook Air was similar, I'm sure it would be well subscribed.


The discount for what? There are no ads on the Mac.


> If the option was a mac with privacy vs a mac without privacy but $10 cheaper

OP was comparing the Kindle with ads discount to an imaginary Mac without privacy discount.


It sounds like one of the weirder "freedom" arguments. Who is going to pay extra for a Mac without key security features?


They sold a lot of the ad-enabled tablets one Black Friday for like $80, which seemed like a good deal at the time despite not running Google apps (which most people want), having ads on the lock screen, and having the world's shittiest home screen app for Android. I bought one for my wife. If you are lazy or not inclined you can actually pay after the fact to remove ads.

Alternatively you can deliberately break the ad functionality, install Google apps, and change your home screen, since Android lets you do that.

The first two worked, but they broke the ability to set your own home screen, so the fix is a hacky app that hijacks the home button to show the home screen of your choosing. This worked for over a year, then they repeatedly blacklisted such apps; after that it worked but with a 2-second delay, which is basically horrible, and extensions started crashing the Gmail app.

Worst piece of shit ever.


> having the world's shittiest home screen app for Android...Worst piece of shit ever.

> I bought one for my wife.

Aww, that's so sweet! ;)


Prior experience had led me to believe that crappy default experience was relatively common in Android but easy to rectify.

Mea culpa but her replacement is so nice I got one too. Moto g7 powers.


I bought a kindle. It didn't have ads. I did a factory reset. Now it does. This is the first time I've heard about a choice.


I wish I could pay for no ads on everything. If there are only like three ad networks for 99 percent of the internet, and they can track me easily, and they know how much I'm worth to them, can't they just email me every month and say, "if you want a no-Doubleclick ad experience next month, it will cost you $4.65, click here to pay." Not like they aren't already doing the tracking, and this would increase the value to their ad customers since they wouldn't pay for views from people who don't want to see ads.


I think it's worthwhile trying to test the hypothesis, but I don't think anyone takes privacy on a book reader with the same passion as they do on a mobile phone.


I imagine it more as a tragedy-of-the-commons situation. I don't care that much about my privacy right now to sacrifice a lot of convenience. However, if enough consumers rejected products that encroach on privacy, we’d get privacy and keep most of the convenience.


"The market only acts fairly when the product is a commodity."

Based on context (rest of your comment), did you mean "substitute good" instead of "commodity"?


> I'd think the market would pick the mac with privacy.

What "privacy" means here is a huge issue. I would not agree that this would be the case. I think you'll find the majority looking to save 10$, which is almost nothing for many people but a non-trivial amount for most in the US.


> A common refrain in arguments that we don't need laws to protect privacy is that the market will take care of it.

Stronger privacy laws hurt Google, Facebook, and Amazon far more than Apple. Most of Apple's privacy gaffes are just bonehead moves like this one, which shouldn't happen but also don't drive revenue.


I've always thought Apple's focus on privacy (putting aside the current incident) is rather clever as Google have no way to respond. Any improvements to privacy undermine their business model.


You'd think that'd be how it worked out, but funnily enough, it's a net negative even when Google actively tries to enhance privacy! Using Apple's privacy-sensitive ad blocker standard made people protest the limitations of a fixed count of sites, and competitors spun it into a way for Google to advantage themselves (details on that somewhat murky, i.e. nonexistent).

We have a long way to go as an industry on keeping users _well_ informed on their privacy, and I'm afraid Apple's set us back years by morphing it into an advertising one-liner over actually helping users along.


I don't look at it that way. I think privacy is secondary to Apple, and that they use privacy as a selling point when it suits them. Their overall goal is tight control over the entire computing experience, and in places where that degrades privacy, so be it.


Their goal is making money.

On the iPhone, that means keeping the system's reputation for security and ease of use trumps more or less everything else. On the Mac? Not so much.

For Apple, privacy is a cheap giveaway because unlike Google or Facebook, the way Apple makes money doesn't require they know when my last BM was.


This isn’t that. I’ve been aware of this for some time, pretty sure it was in the security white paper and talked about as a feature.

People forget about CRLs because browsers mostly ignore them.

People just go crazy for any Apple story because it attracts attention. People have been paying to send all sorts of app launch analytics to AV companies for example since the 90s.


> I’ve been aware of this for some time, pretty sure it was in the security white paper and talked about as a feature.

That doesn't invalidate what the parent said. The only way awareness helps is if it's general knowledge. I don't believe it was, and you personally having known about it doesn't make it so.


Lying to the customer about what your product does, or having secret functionality, should be a criminal offence in the same way as breaking and entering or stalking are.

Then, we would find out very quickly what people value.

I firmly believe this ecosystem (as in the privacy-violating ad and data-selling business model) is only dominant because companies are able to mislead with impunity, so it's basically a form of fraud.


I agree with this, to some degree.

But I've also known that macOS verifies signatures for as long as it's been doing it. This was no secret, it was advertised as a feature.

I assumed it wasn't being done in plaintext, because who would be so foolish as to code it that way? And I'm still plenty mad about that. Anyone could have checked this at any time, presumably people did, and the only reason it became a story is because the server got really slow and we noticed.

Apple says there will be a fix next year, which... eh better than nothing, not even 10% as good as shipping the feature correctly to begin with.

But of the many things about this episode which are worthy of criticism, Apple being deceitful is nowhere among them. Never happened.


It was mentioned in a developer presentation with very few details as to how it worked. Apple did not go into details; the information presented here was mostly reverse-engineered by app developers and security engineers.


What details were missing from the WWDC talk that you would've liked to have seen?


Was it widely known that this feature was implemented as a privacy-leaking phone-home feature? Simply saying "macOS verifies application signatures before running them" doesn't necessarily imply that it's shipping execution data to Apple. All of that could have happened offline, had Apple chosen to implement it that way.


> This was no secret, it was advertised as a feature.

I wish you could prove this.


https://developer.apple.com/videos/play/wwdc2019/703/ The second half of the talk is about the "hardened runtime"

And in the wider tech press

https://appleinsider.com/articles/19/06/03/apples-macos-cata...

"Mac apps, installer packages, and kernel extensions that are signed with Developer ID must also be notarized by Apple in order to run on macOS Catalina"

Even on hacker news https://news.ycombinator.com/item?id=21179970


Firstly, this is not 'advertised' - this is not material the average Apple consumer reads.

Secondly, it does not actually say anything about the OS phoning home and preventing the user from launching an app. The AppleInsider article talks vaguely of 'notarisation', something that can be implemented in a variety of ways, like signing the application with a certificate.


Here is a Forbes article (do “normal” people read Forbes?) that says to run apps in Catalina one will need permission from Apple. It even anticipated this problem with the server. Sure, it doesn’t mention sockets but remember this is for normal people and I think people are smart enough to assume that Apple is probably not giving permission to launch an App by carrier pigeon.

There are 100s of articles like this. Gatekeeper was a keynote feature.

https://www.forbes.com/sites/ewanspence/2019/12/28/apple-mac...


The act of breaching privacy is technically difficult to prohibit in a way many of us would find palatable.

What should be targeted is the product of said breaches. Something like the blood diamond approach.

If your company has PII, then you by law must be able to produce a consented attestation chain all the way back to the source.

If you do not, then you're charged a fine for every piece of unattested PII on every individual.
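
As a sketch of what such an attestation chain could look like (all names here are made up for illustration): every party that receives PII records where it got the data and under which consent grant, hash-linked so the current holder can produce the whole chain back to the original grant on demand.

    import hashlib, json, time

    def attest(pii_field, source, consent_ref, parent_hash=None):
        """One link in a hypothetical consent-attestation chain."""
        record = {
            "field": pii_field,           # e.g. "email"
            "source": source,             # who handed the data over
            "consent_ref": consent_ref,   # pointer to the consent grant / legal basis
            "parent": parent_hash,        # hash of the previous link, None at the origin
            "ts": time.time(),
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        return record

    origin = attest("email", "data subject", "signup-consent-2020-11-25")
    resold = attest("email", "acme-analytics", "resale-contract-1234",
                    parent_hash=origin["hash"])
    # A regulator asks the current holder to produce the chain: [resold, origin].
    # Any PII whose chain doesn't terminate in a consent grant is unattested and fined.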


- Can you demonstrate the provenance of every phone number, email address and other contact mode on your phone? Note people's birthdays? Sure, you only want to target companies. Make sure you choose your acquaintances wisely, I guess, or make sure you record their grant of permission to email them, because people abuse laws like this to harass each other every single day.

- This also punishes possession, not use. If you think about that for a minute, it should become clear both how this doesn't attack the right problem, and how companies would evade it.

- Finally... how are you going to audit Ford or Geico? Honest question. Who pays for the audit of "every piece of unattested PII on every individual"? How often, what is the dispute mechanism, and who administers that? Seriously - this sounds like a job for a new agency combining significant portions of the IRS, the FBI and PwC.


> This also punishes possession, not use.

If companies were immune to data breaches or leaks, then maybe this wouldn't be such a big deal. But I don't trust most companies to securely hold my data, even if they don't use it at all.

And besides, by the time a company uses the data in a privacy-destroying way, it's too late. The cat is out of the bag. Sure, the law against use could serve as a deterrent, but companies will push the law past the breaking point all the time. If you make possession trigger legal action, you can mop these things up before they get to use your data. Sure, you still have the problem of finding out about that possession. But also consider that if you have laws against possession and "bad" use, and a company does something, you can charge them for both things and hurt them more. That's a larger deterrent.


> Sure, you only want to target companies

Yes. There are many laws (e.g. accounting) that only apply to companies, when it's scale that amplifies harm.

> possession, not use

How are they different, in this context? The latter requires the former, and the former is unprofitable without the latter.

> how are you going to audit Ford or Geico?

As you note, similarly to how we audit now, albeit hopefully more proactively. If the law requires a signed off third-party PII audit, and holds an auditor legally liable for signing off on one... I expect the problem would (mostly) take care of itself.

PII is always going to be a game of edge cases, but we've managed to make it work with PCI and PHI in similarly messy domains.

Right now, companies have GDPR & CCPA to nudge them in data architecture. National laws would just further that. I can attest to major companies retooling how they handle and track consumer data just due to the CCPA.


By the time someone is charged with a crime, the damage is already done.

And, there will be many scammers who are simply out of reach of meaningful legal remedies.


That's true of virtually all criminal laws, though. If you're beat up and your assailant is charged with assault, you've already been beat up.


Yes, which is why you need both laws, and prevention methods.


I think we are both arguing that clear consent must be present, and the customer must have clearly agreed to whatever you are doing with the data - that appears similar to GDPR.

However, how do you prove John Doe has actually agreed to this? What if John says he did not click accept button? Do we require digital signature with certificates, given that most people don't have them or know how to use them?

I think the problem is more tractable for physical products running firmware - there you have real proof of purchase, and, at present, firmware that does whatever it wants.


It's analogous to the credit card fraud problem, no? E.g. disputing charges and chargebacks?

I don't work in that space, but my understanding is that the card processors essentially serve as dispute mediators in those instances.

So it would seem unavoidable (although not great) to have some sort of trusted, third-party middle person between collectors and end users, who can handle disputes and vouch for consent.

Blockchain doesn't seem like a solution, given that the problem is precisely in the digital-physical gap. E.g. I have proof of consent (digital) but no way to tie it to a (disputed) act of consent (physical).


This is civil law. You find out by asking employees in court. They aren’t going to risk perjury, a criminal offense, to spare their employer.


Or perhaps change the system that incentivizes companies to violate privacy over and over again


> If your company has PII, then you by law must be able to produce a consented attestation chain all the way back to the source.

So...basically the GDPR? Well, not quite, since the GDPR doesn't require consent attestation, "merely" a legal basis. Of which consent is just one (the most useless one to use as a company).


Yes because there are never unwanted side effects from more laws.


That's human nature. As soon as something beneficial to a few and detrimental to others is banned, those who benefit seek to find other ways to continue benefitting, again to the detriment of others.

This doesn't mean we shouldn't continue trying to stop them.

And we stop them through laws.

Common sense is not that common and human decency doesn't scale.


In the US, the typical citizen commits an average of a felony a day. The legal code and associated regulations are so lengthy no one can read all of them. The tax code alone is 2,600 pages and associated rulings 70,000 pages.

When you have so many laws, they can be applied selectively depending on your political status, or to benefit the regulators or their friends. We just caught the sheriff of Santa Clara extorting citizens for tens of thousands of dollars to get concealed carry permits, which is why many people carry illegally, just like criminals.

Creating a law where non-disclosure of the smallest feature opens you up to harassment by government regulators is another avenue for graft and corruption. Like when the EU selectively prosecutes US companies, or US regulators selectively harass companies not in lock step with the current administration.

I think if you really need a nanny state to protect you from your own decisions you should be required to give up all your decisions to the state.


For your argument to make sense you have to demonstrate that this amounts to a nanny state.

I don't think it does; I think it is fraud. For it to be a consequence of my choices, what choice do I make to select a mobile phone carrier that does not sell my location data? Such a choice does not exist.

You can't 'non-disclose' some 'small feature' of a mortgage contract, of a loan, etc. Personal data deserves similar respect.

Lastly, we can and do have different laws for individuals and multi-billion dollar corporations - you can't use this as an argument when we are discussing securities fraud and banking regulations.


Laws can always be applied selectively. They have always been applied selectively.

The point of laws is statistics: you can't discourage everything, you need to discourage enough to have order.

And there is a middle ground between no laws and giving everything up.

Also, I give up my liberties and obey laws in exchange for protection from many nasty things people do when there are no laws.

Based on your examples and vocabulary ("nanny state" is a clear giveaway), you're American. Go live for 5-10 years in a country with lax or non-existent laws and law enforcement. We call those countries bad names for a solid reason.


> In the US, the typical citizen commits an average of a felony

Sorry but that’s just obvious BS. If it were true, you’d include examples and far more people a certain world leader doesn’t like would be “locked up”.


> In the US, the typical citizen commits an average of a felony a day

This is surprising to me. Could you provide examples of such common felonies US citizens commit in ignorance?


I think felony is the more serious one? Misdemeanor being stuff like parking violation? I'd definitely want to see some examples for felonies, too :-)


Sure, but that's not an argument to give up.


Perhaps this is why Apple did not ask for permission from the user to add some delay every time the user launches an application. If the user were presented with the choice, what would she choose?

If we look at the example of OCSP in the website certificate context, the notion of the delay added by OCSP (not to mention privacy concerns) being objectionable has already been acknowledged. As a result we have OCSP stapling. For some reason, in the Apple developer certificate context, OCSP is deemed acceptable by default.
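
To illustrate the difference stapling makes, here's a self-contained toy (it uses an HMAC only so the sketch runs on its own; real OCSP responses are signed with the responder's private key and verified against its certificate): the responder signs a short-lived "good" statement once, and the client verifies it locally at launch time instead of phoning home.

    import hashlib, hmac, json, time

    RESPONDER_KEY = b"toy-responder-key"   # stand-in for the responder's signing key

    def make_stapled_response(cert_serial, ttl=3600):
        """Responder produces a signed, time-limited revocation status, once."""
        body = json.dumps({"serial": cert_serial, "status": "good",
                           "expires": time.time() + ttl}).encode()
        sig = hmac.new(RESPONDER_KEY, body, hashlib.sha256).hexdigest()
        return body, sig

    def verify_at_launch(body, sig):
        """Client checks signature and freshness locally: no network request, no
        added latency, and nothing about the launch leaves the machine."""
        ok = hmac.compare_digest(
            sig, hmac.new(RESPONDER_KEY, body, hashlib.sha256).hexdigest())
        return ok and json.loads(body)["expires"] > time.time()

    body, sig = make_stapled_response("DEADBEEF")
    print(verify_at_launch(body, sig))   # True, with no round trip to the responder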


> A common refrain in arguments that we don't need to reject closed source software to protect privacy is that being closed source doesn't hide the behaviour, and people will still notice backdoors and privacy leaks. Sometimes they do, sometimes they don't.

And there are cases where it's not practical for "people to notice." For instance: a privacy leak that only uses the cell network connection of a phone, which would avoid easily-sniffed connections.


Even with traffic monitoring, how do you distinguish bad TLS traffic from a machine you don't control to a Cloudflare IP from good TLS traffic?


> The market can't act against what it can't see. Privacy loss is often irreversible.

You're not wrong, but on the other hand has "the market" shown any serious signal that it cares about privacy? From what I can see people seem more than glad to trade privacy and personal information for free services and cheaper hardware. Take Samsung putting ads on their "smart" TV's UI and screenshotting what people are watching for profiling, that's been known for a while now. The market seems fine with it.

And I mean, at this point I could just gesture broadly at all of Facebook.


I hear this argument a lot but I think it is exactly OP's point when he says, “The market can’t act against what it can’t see”.

Your average consumer doesn’t know the extent of what they’re trading. Take Facebook, even with high profile stories and documentaries it’s reasonable for your average consumer to assume that what Facebook tracks about them is what they actively give to Facebook themselves.

I’ve had conversations with people that say, “I rarely even post on Facebook” and “If they want to monitor pictures of my food/dog/etc whatever who cares”, without any solid understanding of what even having the app installed alone is giving Facebook.


It's ok, once they're educated they will care. Just like they care now about not using single-use plastics, buying the biggest and most gas-guzzling SUV, or flying on holidays across the globe.

They won't care even when they know. And they might not ever know.


Yes exactly. I don't think an abstract understanding of the costs is enough. If the cost isn't physically or viscerally felt, it just doesn't factor into people's decision making.

This is where the pricing system really comes into great effect. People buy and drive fewer SUVs when gas is more expensive. If we want people to buy fewer SUVs, increase the fuel tax.

Education is not enough, and in fact might not even be necessary at all. Just introduce real costs to capture the "abstract" costs (externalities) and the problem will likely correct itself.


> has "the market" shown any serious signal that it cares about privacy?

Depends on what you consider a serious signal of care. If 'voting with your wallet' is the measure, increasing levels of income inequality, stagnant wages, weakening employee rights through the gig economy, etc. are effectively taking away that choice, as most market participants cannot afford to make it.

Also, what is the paid alternative to Apple or Google photos that allows me to have the same end user experience, without giving up my privacy? "the market" doesn't even have such an offer that I can see. The closest I can find (and that's through here) is photostructure.com, and even that's lacking all the local-ML-foo that makes Apple/Google photos a compelling option over anything else.

> Take Samsung putting ads on their "smart" TV's UI and screenshotting what people are watching for profiling, that's been known for a while now.

Known by who? I'd wager a year's salary that >50% of Samsung TV owners (and bump that number up to >75% of Samsung TV users) do not know this is happening.


Income inequality and stagnant wages etc are not the consequences of Apple and Google.


No one suggested they are. But they still make it much less likely for the average Joe to put money and effort to protect their privacy, and this is known to those that make major pricing decisions.


We’re talking about Apple here. People who can afford a Mac over a cheap PC/Chromebook already have disposable income and are making trade-offs with it.


That is not really true. Many people of lower income will buy a Mac after their cheap laptops break because they perceive them, correctly or not, to be the most reliable laptops and thus less expensive in the long term. Sometimes even second hand.

Also, Chromebooks are not viable alternatives for many people especially of lower income. Having to rely on being always online is an issue when you can't always guarantee having internet. Been there, done that.


> That is not really true. Many people of lower income will buy a Mac after their cheap laptops break because they perceive them, correctly or not, to be the most reliable laptops and thus less expensive in the long term.

This is not true. The tiny user-base of OSX compared to Windows is already evidence that most people are not buying MacBooks.


> From what I can see people seem more than glad to trade privacy and personal information for free services and cheaper hardware

I'd argue that's more of what parent was talking about with regards to visibility.

"Privacy" isn't something anyone can see. What you see are the effects of a lack of privacy.

Given the dark market around personal data (albeit less in the EU), how are consumers to attribute effects to specific privacy breaches?

If Apple sells my app history privately to a credit score bureau, and I'm denied a credit card despite having a stellar FICO, how am I supposed to connect those dots?


> You're not wrong, but on the other hand has "the market" shown any serious signal that it cares about privacy?

Is there a pro-privacy Google out there whose products languished while Google's succeeded?

The Silicon Valley VC network did not fund nor support companies that promoted privacy. I cannot think of a single example.

Not a single major venture from the SV VC network even attempted to innovate on the "business model". We can make machines reason now but, alas, a business model that does not depend on eradicating privacy is beyond the reach of the geniuses involved. Is it "impossible"? I think "undesirable" is more likely. No one is even seriously trying. Point: the money behind SV tech giants is not motivated at all to fund the anti-panopticon.

The salient, sobering fact is that all these companies are sitting on an SV foundation that was and remains solidly "national security", "military", and "intelligence". The euphemism used is to mention SV's "old boy network".

https://steveblank.com/secret-history/


>that's been known for a while now

Ask a representative sample and I wager only a very small percentage of people are actually aware of (1) the breaches of privacy that are happening (e.g. your TV sending mic dumps and screenshots of what you're watching), and (2) the hard consequences of those invasions (that is, beyond the immediate fact that you're being snooped upon), like higher insurance premiums on auto and health, being targeted for your opinions, etc.


To be honest I would expect if you told people "your TV will report what you're watching" they will think "wait, doesn't my cable company already know what I'm watching???"

If you tell them it's the manufacturer this time in addition to the cable company, I'm not sure how many would freak out over the extra entity.


Very few.

I'm not sure how you'd measure it, but there seems to be a huge disconnect between techie privacy advocates and the rest of the world. The former keeps claiming the latter just doesn't understand, or needs to be informed, but I just don't think that's realistic.

I think it's pretty common knowledge that these companies are harvesting all imaginable data to serve users more / better advertisements and to keep them on the site, yet usage continues to grow despite all the scandals. I think advocates need to make more convincing claims to everyone else that they're being harmed.


In my anecdotal experience, that's not remotely true. Yes there is a disconnect between techies and regular people, in that for us it's obvious that these companies are harvesting all this data and processing it in all these ways and sharing it with all these people, but for the majority of people it's not. Even for you, do you think you fully understand how your data is being utilised and what the consequences are?


I think we've been repeatedly shown that people don't care. From Cambridge Analytica, to what Snowden shared, to voice-based assistants (like Amazon Alexa), every instance is met with feigned surprise and then a collective shoulder shrug.

> Even for you, do you think you fully understand how your data is being utilised and what the consequences are?

I don't think anyone can say with certainty, but I read these threads so I'm quite aware they're collecting and monetizing every imaginable thing they can. It's difficult to articulate any measurable / real negative consequences to me, personally.


> You're not wrong, but on the other hand has "the market" shown any serious signal that it cares about privacy? From what I can see people seem more than glad to trade privacy and personal information for free services and cheaper hardware. Take Samsung putting ads on their "smart" TV's UI and screenshotting what people are watching for profiling, that's been known for a while now. The market seems fine with it.

I'd guess that most of that's due to information asymmetry. Privacy losses aren't advertised (and are often hidden) and are more difficult to understand, but price is advertised and is understood by everyone.

Take that Samsung example: it's not like they had a bullet point on the feature list saying "Our Smart TVs will let us spy on what you're watching, so we can monetize that information."


One argument to make is that the market only cares about the majority of its customers, not all of them. If 0.5% of customers have their privacy busted by a backdoor, or 0.01% of Google users have their accounts arbitrarily deleted, that percentage of users is screwed like no one else and the company doesn't suffer any damage.


Yes, unfortunately there is no safe harbor.

Companies can make mistakes or add backdoors and we won't know. See this clusterfuck.

Open Source, likewise, can make mistakes and (much more rarely) add backdoors, and we could know, but few have the resources to do so. See Heartbleed.


Probably no one cares because Apple’s OCSP checks don’t reduce your privacy.


They should care. The checks are sent unencrypted over HTTP to Apple's OCSP responder.


They don’t identify the specific apps you use, so what’s your point?


As far as I understand it, most vendors ship a single-digit number of apps. If you start the Tor browser, everyone on your network will know. If you start Firefox, everyone on your network will know you started a Mozilla product, most likely Firefox. If you start the Zoom client, everyone on your network knows you started the Zoom client.

I don't think the "it's only the vendor" defense of Apple is any good.


On macOS, developer certificate requests are NOT done for every application launch. Responses are cached for a period of time before a new check is done.

FYI -- Both Firefox and Safari use OCSP to check server certificates. Anybody sniffing your network could figure out which websites you visit. Chrome uses its own aggregated CRLSets instead; it trades precision for performance.


That period of time was 5 minutes.


HTTP is specified in the RFC. Only the developer certificate is checked. OCSP is also used by web browsers to check the revocation status of certificates used for HTTPS connections. Apple leveraged OCSP for its Gatekeeper functionality. This is not the same thing as notarization, which is checked over HTTPS.

https://blog.jacopo.io/en/post/apple-ocsp/

Perhaps you should learn about OCSP before complaining about its use of HTTP.
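
For a concrete picture, here's roughly what a browser-style OCSP check looks like with the Python cryptography and requests libraries. "leaf.pem", "issuer.pem" and the responder URL are placeholders; in practice the URL comes from the certificate's AIA extension. Note the exchange is plain HTTP, so an on-path observer sees exactly which certificate's status was queried.

    import requests
    from cryptography import x509
    from cryptography.x509 import ocsp
    from cryptography.hazmat.primitives import hashes, serialization

    cert = x509.load_pem_x509_certificate(open("leaf.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    # Build the OCSP request identifying the certificate whose status we want.
    request = ocsp.OCSPRequestBuilder().add_certificate(
        cert, issuer, hashes.SHA1()).build()

    # Sent over plain HTTP, as is typical for OCSP responders.
    resp = requests.post("http://ocsp.example.com",
                         data=request.public_bytes(serialization.Encoding.DER),
                         headers={"Content-Type": "application/ocsp-request"})

    print(ocsp.load_der_ocsp_response(resp.content).certificate_status)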


Vendors MAY use TLS, and Apple didn't (though they say they'll start).

You might want to read the RFC, rather than a blog post about it, before making such confident pronouncements.


> The market can't act against what it can't see. Privacy loss is often irreversible.

I would add that the market can't act to prevent risks that are outside the market and not taken into account by the market.

The big risks from widespread privacy loss are the exploitation of private data by criminals, foreign unconventional warfare by terrorists or hostile states, and the rise of a totalitarian government here in the USA.

Criminal action can to some extent be priced into a market, but the other two really can't be.


> Parts of the US Governments unlawful massive domestic surveillance apparatus were described in the open in IETF drafs and patent documents for years.

References?


Great points.

Related tangent: for those interested in this topic of "unlawful massive domestic surveillance", I heartily recommend Cory Doctorow's "Little Brother" novels -- esp the most recent, "Attack Surface". They get the technical details right while remaining accessible and engaging regardless of the reader's geek acumen.


> arguments that we don't need laws to protect privacy is that the market will take care of it.

Since discussion of Apple's behavior in particular has, somehow, been completely de-railed anyway (a frequent happening on HN) ...

Can't help but observe that the market takes the best care of those who control it. Freely.


> A common refrain in arguments that we don't need laws to protect privacy is that the market will take care of it

Seriously, who has ever been successful at defending that idea ?


Many lobbyists and lawmakers, unfortunately.


It doesn't count as winning the argument if you paid them to agree with you


It does if you convince people to vote for you though


which IETF drafts and patents? Can you point me to some links?

tia


Sure, lemme give you an example:

https://tools.ietf.org/html/draft-cavuto-dtcp-00

This protocol was created so that monitoring infrastructure could reprogram asic-based packet filters on collection routers (optical taps feed routers with half-duplex-mode interfaces), which grab sampled netflow plus specific targets selected by downstream analysis in realtime. It has to be extremely fast so that it can race TCP handshakes.

I don't think it's much of an exaggeration to say that the technical components of almost all the mass surveillance infrastructure is described in open sources. Yes, they don't put "THIS IS FOR SPYING ON ALL THE PEOPLE" on it, but they also don't even bother reliably scrubbing sigint terms like "tasking". Sometimes the functionality is described under the color of "lawful intercept", though not always.

One of the arguments that people made against the existence of widescale internet surveillance -- back before it was proved to exist-- was that it would require so much technology that it would be impossible to keep secret: the conspiracy would have to be too big. But it wasn't kept secret, not really-- we just weren't paying attention to the evidence around us.

For a related patent example: https://patents.google.com/patent/US8031715B1 which has fairly explicit language on the applications:

> The techniques are described herein by way of example to dynamic flow capture (DFC) service cards that can monitor and distribute targeted network communications to content destinations under high traffic rates, even core traffic rates of the Internet, including OC-3, OC-12, OC-48, OC-192, and higher rates. Moreover, the techniques described herein allow control sources (such as Internet service providers, customers, or law enforcement agencies) to tap new or current packet flows within an extremely small period of time after specifying flow capture information, e.g., within 50 milliseconds, even under high-volume networks.

> Further, the techniques can readily be applied in large networks that may have one or more million of concurrent packet flows, and where control sources may define hundreds of thousands of filter criteria entries in order to target specific communications.


So, we should take away the market's incentive to infringe upon our privacy: by making user-tracking illegal.


The laws are already there. If you care about this and have some free time you may try to file a complaint with the Irish data protection commission.

I'm not sure if it's infringing though. If Apple says that they do not collect personal data and that the information is thrown away and this is done for a legitimate business purpose or for the customers (i.e. protecting customers), it may well be fine according to the GDPR.


Ya no thanks. Top down regulation will just make startups less likely to enter new disruptive tech. The solution is choice: stop using Apple products and all their shadiness stops being an issue.


> The solution is choice

Correct. Among Apple, MS and Google you have no choice. That is why regulation is necessary.


> Ya no thanks. Top down regulation will just make startups less likely to enter new disruptive tech. The solution is choice: stop using Apple products and all their shadiness stops being an issue.

And since people have demonstrated that they're not appropriately incentivized to stop, that's where the regulations snap in, which brings us back to where we started: there's a need for it.

Solution could just be to regulate based on a tightly managed definition of age. Enable younger companies to have a bit more flexibility in determining their business model as controls slowly snap in as the company ages. There are some pretty clear loopholes that immediately come to mind (e.g. re-chartering the company every few years and transferring assets) that'll need to somehow be managed, but it should be enough to give companies runway to figure out how to disrupt and monetize while coming into compliance with consumer protections.


How do you provide choice? How do you commoditize an entire hardware and software ecosystem, vertically integrated?

Just like Standard Oil or AT&T were displaced, right? By customers going to their competitors? I'm being sarcastic, obviously :-)


Nothing shady about this. It isn’t logged and it protects customers from malware. That was the purpose. Most customers want that.


> Top down regulation will just make startups less likely to enter new disruptive tech

I'm ok with this


> Those who consider that Apple’s current online certificate checks are unnecessary, invasive or controlling should familiarise themselves with how they have come about, and their importance to macOS security. They should also explain how, having enjoyed their benefits for a couple of years, they’ve suddenly decided they were such a bad idea after all, and what should replace them.

I agree that anyone critiquing Apple's OCSP design should understand it, and the critique should be more nuanced than "just turn that feature off." Computers are now skeleton keys to our lives and we have to go forward rather than back in figuring out how to design them so they can safely do everything we need them to do.

But it's not hard to justify the sudden criticism here -- it happened after Apple's bad design of the OCSP feature broke local applications, drawing a lot more attention to how it worked. It's reasonable to then ask whether other parts of the design were also poor, as Apple itself evidently is doing, judging from the changes it's already announced.

To take the author up on what should replace OCSP checks -- how about using something like bloom filters for offline checks, and something like haveibeenpwned's k-anonymity for online checks, to remove the possibility that either Apple or a third party could use OCSP for surveillance?
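Roughly, the online half of that could look like the following Python sketch, modeled loosely on haveibeenpwned's range queries. The endpoint, the 5-character prefix length, and the response format are all invented for illustration; nothing like this exists in Apple's infrastructure today.

    import hashlib
    import urllib.request

    # Hypothetical endpoint and format; Apple serves no such API today.
    RANGE_URL = "https://example.invalid/revoked-certs/range/{prefix}"

    def cert_is_revoked(cert_der: bytes) -> bool:
        """k-anonymity style lookup: only a short hash prefix leaves the machine.

        The server returns every known-revoked hash suffix in that bucket, and
        the exact match is decided locally, so the responder never learns which
        developer certificate this client actually holds.
        """
        digest = hashlib.sha256(cert_der).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        with urllib.request.urlopen(RANGE_URL.format(prefix=prefix)) as resp:
            revoked_suffixes = set(resp.read().decode().splitlines())
        return suffix in revoked_suffixes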


The browser vendors have been looking at this problem for a long time. See https://blog.mozilla.org/security/2020/01/09/crlite-part-1-a... for example (bloom filters included).
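For the offline half, a toy Bloom filter shows the idea CRLite builds on; real CRLite uses cascades of filters and clever encodings, so this is only a sketch of the membership test, with made-up parameters.

    import hashlib

    class BloomFilter:
        """Tiny Bloom filter: no false negatives, tunable false-positive rate."""

        def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item: bytes):
            for i in range(self.num_hashes):
                h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item: bytes) -> None:
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item: bytes) -> bool:
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    # Build the filter from the (hypothetical) set of revoked cert hashes, ship it
    # to clients, and test locally; only probable hits need an online confirmation.
    revoked = BloomFilter()
    revoked.add(b"example revoked developer cert fingerprint")
    print(b"example revoked developer cert fingerprint" in revoked)  # True
    print(b"some other fingerprint" in revoked)  # False (with high probability)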


Why can't Apple download fingerprints of all bad apps locally instead of monitoring every single invocation of apps? Is the second execution of an app the same security risk as the first one? That's the design flaw.


You mean bad certificates rather than applications.

OCSP can be locally cached, and Apple's implementation does exactly that. But eventually you'll have to refresh the cache and then the implementation needs to be fault tolerant (Apple's wasn't).

OCSP leaks what vendors your installed applications are from. The list of leaked certificates changes daily, so any good implementation is going to check again at least several times a week. If you download the entire database, you're just consuming hundreds of megabytes of bandwidth/storage but aren't removing the need to refresh/expire the cache.

I'd argue the two biggest flaws in Apple's system are bad fault tolerance and no user-accessible opt out (even if just for emergencies).
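To make "cached and fault tolerant" concrete, here is a minimal sketch; the 12-hour TTL and the check_with_server callback are placeholders for illustration, not Apple's actual implementation.

    import time

    CACHE_TTL = 12 * 60 * 60          # assumed TTL, for illustration only
    _cache: dict = {}                 # serial -> (is_revoked, checked_at)

    def is_cert_revoked(serial: str, check_with_server) -> bool:
        """Cached revocation check that fails open if the responder is unreachable."""
        now = time.time()
        cached = _cache.get(serial)
        if cached and now - cached[1] < CACHE_TTL:
            return cached[0]
        try:
            revoked = bool(check_with_server(serial))   # placeholder network call
        except OSError:
            # Soft-fail: an unreachable or slow responder must not block launches.
            return cached[0] if cached else False
        _cache[serial] = (revoked, now)
        return revoked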


I wonder how hard it'd be to serve this data via DNS, like "dig -t TXT 0xdeadbeef.ocsp.apple.com". Then you get a nice, distributed architecture with lots of built-in cache handling, and since the data is currently served via HTTP, it wouldn't expose any more data to your ISP than already is today. It would also mean that if you have 100 people in the office and a local DNS cache, then each OCSP query would be made exactly once and then its answer shared among everyone else in the office.
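With dnspython, the client side of that idea might look like this; the 0x<hash>.ocsp.apple.com naming scheme is just the hypothetical from this comment, and no such records actually exist.

    # pip install dnspython
    import dns.resolver

    def check_via_dns(cert_hash_hex: str) -> str:
        # Hypothetical naming scheme; these records do not actually exist.
        name = "0x{}.ocsp.apple.com".format(cert_hash_hex)
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            # Absence of a record could itself be defined to mean "not revoked".
            return "unknown"
        for rdata in answers:
            # e.g. a TXT value of "good" or "revoked"; the format is invented here.
            return rdata.strings[0].decode()
        return "unknown"

    print(check_via_dns("deadbeef"))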


That still tells people what you’re running, though.


Imagine you have a shared office DNS resolver (which is pretty common). That resolver would aggregate all of the requests into one shared, cached stream. Then the question becomes "hey Apple, one person of however many thousand are behind me would like to know if Adobe's certificate is still valid". That's reasonably anonymized, I think.


Then the question is, "how much do I trust my ISP/DNS provider?"

Those DNS lookups tell your ISP 1) that you use a mac and 2) that you have an application from a specific developer installed.

I think I trust my ISP less than I trust Apple, here. Am I wrong to do so?


Well, that takes us back to the current state, where your ISP can see your plaintext HTTP packets if they want to, so it wouldn't be any worse than the situation today. I guess you could get much the same effect by configuring your company Macs to point at a shared Squid server to cache the GET requests from the OCSP server, but in practice almost no one does that.


Apple says they're going to move to an HTTPS based system, so the relevant comparison is between HTTPS and DNS, not HTTP and DNS.


That only helps if you have a shared resolver.


Pretty much everyone has a shared resolver at some point. Almost no one’s running a local resolver on their laptop these days.


I doubt the full list of hashes of all revoked certs is 100s of MB, and even if it is, the daily update file surely isn't that big.


Right, and given that those certificates expire after some finite amount of time, this wouldn't be a forever-growing CRL, as expired certs could be dropped from the file periodically.


Differential downloads are a solved problem : https://docs.microsoft.com/en-us/windows/deployment/update/p...


> OCSP can be locally cached, and Apple's implementation does exactly that.

In the earlier HN thread when the server was offline, it was said that Apple only cached OCSP results for 5 minutes. Is that not true? If it is true, I don't think that's what GP is asking for as far as local caching.


It was changed after the event and is now 12 hours. Which is probably more appropriate considering that most people don't restart their applications every few minutes.


Just so that I understand this correctly, by caching do you mean the results of a specific check, or, as the comment above was implying, downloading the list of all bad signatures and doing the check 100% locally?

The issue, from my understanding, was half the breakage and half the fact that Apple was sending back telemetry about what apps you launched.


In answer to the first: because the information is unreliable without more invasive technologies ensuring that the local file is up to date. To the second: perhaps not, but if the information on bad actors (app distributors in this instance) is out of date, you'll continue running a compromised app.

Are you familiar with OCSP conceptually? I have done a reasonable amount of work with signatures and certificates, including OCSP. All my experience is in a commercial, enterprise context but I think these technologies need to start filtering down to the consumer before the capability for security evaporates.

I think it's a consumer-positive direction for Apple to provide this service. I would be interested to hear from someone who holds the view that this is not a service, or disagrees in other ways, but I think this is the right direction for consumers. The alternative, as I see it, is that every person installing an app needs to start searching for CVE notices and headlines in trade papers declaring a compromise.

Apple have applied an enterprise middleware to their infrastructure. I think perhaps they could have been more transparent in the delivery. A lot of the outrage now is driven by people only finding out about the underlying process for the first time. I stand by the right of these companies to choose their business model to disallow (or restrict) execution of apps they believe to be compromised. I also firmly believe in a varied and free market for software, hardware, and infrastructure.

In essence: You can choose to use Apple and do it the Apple way. Equally you can choose to build your computer from components sourced from anywhere, install any free OS, and any apps. Personally I do choose to do it the Apple way, and I am inconvenienced by that from time to time. I curse my computer and its creators on a daily basis. It's part of the relationship we all build with our tools.

got a bit off track towards the end...


The blacklist of malware is called XProtect and dates back to 2009. This check for revoked certificates is a different security layer.

The second check of an app is necessary to check for revocation: for a developer that decides they've been compromised and wants to stop execution of their software. The alternative would be to use certificate revocation lists instead of OCSP. CRLs can get long, so OCSP is often preferred.


Probably it's not necessary to check for revocation that often though. It could even be argued that it's only needed when the binary is updated.

And it's absolutely not normal to fail if that revocation check doesn't succeed anyway.


I've been wondering why CRL couldn't be used if the OCSP goes down or no reply is received. That way you get the benefits of both. Any reason why this would be a bad idea?

Also are CRLs really that bad in practice? I know it would be a bad idea on a smartphone but is it really an issue on a laptop?


The other main issue is bandwidth. For a CRL, Apple has to serve the full list of revoked serial numbers (or some shard of it), even if 90% of users don’t have 90% of the revoked apps installed. I can’t remember where I read this, but I recall that after the Heartbleed disclosure one CA saw their CRL traffic grow by some number in the gigabits per second. Bandwidth is relatively cheap, but still not free (without even considering the argument of “do I need to know that the cert for an app that I will never use is revoked?”).


They have the bandwidth for some pretty big firmware/OS updates, though. This would be a fraction of that size. They download blacklists already for their inbuilt AV. Also if they use something like Git then only the changes will be downloaded rather than the entire CRL.

I can’t help but feel that the issue is something else, not bandwidth.


How many revocations do they do? How about just downloading the whole list to the clients which they can check offline.


It sounds like that's how it used to work, and then it changed for some reason, maybe to do with the size of the list and a need for faster updates.

Actually that unknown exposes the problem with the original article's demand that critics explain what they want to replace Apple's system. No one outside of Apple is in a position to design a system that addresses all of the design constraints -- we don't even know what they all are. But we are in a position to assert some additional design constraints, such as requiring that the system not leak developer certs to eavesdroppers every time an application is run, and expect Apple to figure out a solution that takes them into account.


> It sounds like that's how it used to work, and then it changed for some reason, maybe to do with the size of the list and a need for faster updates.

If that's true, that's a lazy excuse on Apple's part. Differential updates have been a solved problem for many, many years. Hourly or even daily diffs would be tiny (likely much less traffic than the OCSP checks that occur now), and expired certs could be dropped from the local store, so it wouldn't grow without bound. (Sure, ok, people could turn their clocks back and defeat that last bit, but doing that would break other things, too, like TLS to any website with a reasonably recent cert.)
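As a sketch of how cheap that could be, consider a local store of revoked serials updated by a small daily diff; the JSON diff format (added serials with their expiry, removed serials) is invented here purely for illustration.

    import json
    import time

    def apply_daily_diff(local: dict, diff_json: str) -> dict:
        """Apply a hypothetical daily diff of revoked cert serials.

        `local` maps revoked serial -> certificate expiry timestamp, so revocations
        for long-expired certificates can be pruned and the store stays bounded.
        """
        diff = json.loads(diff_json)
        for serial, expiry in diff.get("added", {}).items():
            local[serial] = expiry
        for serial in diff.get("removed", []):
            local.pop(serial, None)
        now = time.time()
        return {s: exp for s, exp in local.items() if exp > now}

    # Fetched once a day (format invented for this example):
    store = apply_daily_diff({}, '{"added": {"0A1B2C": 1924905600}, "removed": []}')
    print("0A1B2C" in store)  # True -> treat as revoked, entirely offline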


It kind of feels like there's a bit too much noise around this topic.

I'm getting the same feeling I did years ago when it was discovered that the iPhone had a historical database of all the locations you'd been to. There were rather a lot of articles about how Apple were "tracking you everywhere you went" and so on.

The reason it's similar – they are both dumb, technically bad, and privacy-compromising decisions, and in both cases much of the public discussion about it has been a little hysterical and off-base.

Apple should 100% be criticised for this particular failure. It's obviously a bad implementation from a technical and usability point of view; the privacy implications are bad, and this feature should not have been able to make it out as-is.

But I've legitimately seen people describe this as "Apple's telemetry" which is just obvious nonsense and distracts from the actual problem – how did such a bad implementation of a useful feature end up in a major commercial product, and how are they going to make sure it doesn't happen again?


It’s actually not at all obvious how a local list of locations, used to power suggestions in Maps or Siri, is in any way a compromise of privacy or technically bad.

The only thing that made it sound bad were people saying things like “Apple stores your location history”, knowing that it would create the false impression that Apple was uploading location data to their servers.

This situation is similar in that there are people posting misleading innuendo about Apple having some hidden agenda, but the difference is that there do seem to be real design problems with the mechanism this time.


Every iOS device connects to Apple's push service and stays connected. The client certificate it uses is tied to the serial number of the device itself, when it registers for the push service.

Apple sees the client IP of the push connection, naturally.

Therefore, based on IP geolocation, Apple really does have coarse location history for every single iOS device by serial number.

Apple is indeed storing your (coarse) location history.


> Therefore, based on IP geolocation, Apple really does have coarse location history for every single iOS device by serial number.

This conflates technical possibility with an implemented system which stores that data and the insinuation that this is used for purposes other than what the user enabled. Do you have an evidence that Apple stores this data and uses it in violation of their privacy policy? You're apparently in Europe so you should be able to file a GDPR request to see exactly what they're storing.


Everything I described is implemented today, and is required for APNS to work. Whether Apple does or does not mine the data in some way that you personally find offensive is not relevant; the fact is that they are presently logging IPs for APNS connections, which have unique identifiers that are related directly to hardware serial numbers in their database. Because IPs generally equal location, they are in possession of location history for each iOS device serial number.

I'm not sure why these (plainly factual) statements are controversial.


There’s no question that they need connections to operate the service but they do not need to retain that information, however, and while you have repeatedly asserted that they do, you have been unable to support that claim. This would be covered by privacy laws in many places so it should be easy to point to their privacy disclosures or the result of an inquiry showing that they do in fact retain connection logs for more than a short period of time.


It stands to reason that a user of the service who has agreed to the TOS that governs APNS and the App Store sending unique device hardware serial numbers to Apple has also (legally) consented to IP address collection. IP addresses are less unique identifiers than globally unique device serials, so I assume Apple has already secured what passes for "consent" under the relevant privacy laws.


Again, nobody is questioning their access to that information. Where you typically go wrong is by asserting without evidence that they are performing additional activities without disclosing that. Surely you understand that having IPs be visible does not automatically mean retaining those records, much less building a searchable database?


I’m with you on this. If Apple genuinely views privacy as a human right (and FWIW I believe that the right people at Apple do), then they need to learn about privacy by design.


I think people who have this viewpoint are largely ignorant of the amount of tracking data that comes out of a mac or iphone, even in first party apps, that you cannot turn off at all.

This is not an isolated incident, and their own OS services are explicitly whitelisted to bypass firewalls and VPNs in Big Sur.

There’s telemetry in most Apple apps, now, and you can’t disable it or opt out, or even block or VPN it in some cases. I encourage you to read their disclosures when you first launch Maps or TV or App Store. Every keystroke in the systemwide search box hits the network by default, too.


This is a good example of what I mean - it’s a rant about telemetry that has nothing to do with the issue in question and as a result conflates a whole mess of different issues.

I am pretty well-informed about most of the data that is being generated and transmitted by my machine. This is an issue where it is totally reasonable to pressure Apple and all other companies to design for privacy as a priority and ensure that any of this data collection can be disabled. I don’t think an effective means to do that is to deliberately conflate and mislead about issues like the one under discussion.


System services bypassing VPN and firewall because they're on a vendor whitelist (that can't be modified due to OS cryptographic protections) is a) fact not rant and b) nothing to do with telemetry.


This is a basic by-the-RFC implementation. The developer who was assigned this just used existing libraries and followed the protocol. This was a rational move on their part. Especially when mucking with x509 has been historically fraught with vulnerabilities.

OCSP has since been improved to increase privacy and security, but the extensions to enable that only considered OCSP in the context of TLS.


Just to correct a slightly incorrect perception: there is nothing inherently insecure or vulnerable about X.500/ASN.1/BER/DER parsing; in fact it is probably a saner format to parse than JSON. The perception that it is somehow fraught with parser vulnerabilities comes from various implementations that tried to build a BER/DER parser by transforming something more or less equivalent to the ASN.1 grammar into actual parser code by means of C preprocessor macros, which is a somewhat obviously wrong approach to the problem, at least in a security context.
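To make that concrete, a minimal DER TLV reader fits in a few lines; this handles only single-byte tags and simple lengths, and is meant as a sketch of why the format is straightforward to parse defensively, not as a production parser.

    def parse_der_tlv(data: bytes, offset: int = 0):
        """Parse one DER element; returns (tag, value_bytes, next_offset)."""
        tag = data[offset]
        length = data[offset + 1]
        offset += 2
        if length & 0x80:                       # long form: next N bytes hold the length
            num_len_bytes = length & 0x7F
            length = int.from_bytes(data[offset:offset + num_len_bytes], "big")
            offset += num_len_bytes
        value = data[offset:offset + length]
        if len(value) != length:                # reject truncated input instead of guessing
            raise ValueError("truncated DER element")
        return tag, value, offset + length

    # A SEQUENCE (0x30) containing one INTEGER (0x02) with value 5:
    tag, value, _ = parse_der_tlv(bytes([0x30, 0x03, 0x02, 0x01, 0x05]))
    print(hex(tag), value.hex())  # 0x30 020105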


Another fun fact about this system: something changed in how binaries are evaluated, and one of the VST plugins I downloaded months ago was marked as malware. The plugin is quite popular in the community, so I think it's unlikely it contains actual malicious code (in fact I contacted the developer and he said he has done some fixes for Apple's security policies recently). Imagine my shock when I open an old project in Ableton and suddenly some sounds just don't work. This really sucks; I don't want to worry about whether my music will work five or ten years from now (I can imagine I may want to remix some old piece). I suppose I can err on the side of safety and export all tracks to wav.

However, it's not an isolated problem. It feels like every other week something happens that undermines my confidence in the MacBook as a good device for making music.


It's been good practice for a long time to "freeze" or "render" the tracks out after the song is finished so that the song can be loaded without the plugins.


True, but this shouldn't be necessary in response to anti-consumer behavior.


This is not anti-consumer behavior. Consumers are, overall, protected when they can verify the source of an application or extension on their computers. Their freedom may be limited but it's not a black-and-white "this is anti-consumer".


Some signatures are invalidated due to business disputes on entirely different platforms (in the Epic dispute on iOS, signatures on OS X were invalidated, or threatened with invalidation before a court order prevented it, for no security reason).


Epic violated the Terms of Use for their developer agreement which applies to all platforms. They knew that and they violated it willingly. The court order only prevented it temporarily to reduce the damages that may be incurred and until a determination was made in the initial case.

That is not anti-consumer.


Well even if Epic are the bad guys by violating the ToU willingly, it still impacts the user. As a user I don't want my apps (which I depend on) to stop working, because of a business disagreement.

Revoking signatures and disabling the apps on user devices to protect your business model is definitely anti-consumer in my book.

You could easily see Apple revoking signatures because of DMCA claims. Even faulty ones, like the claim RIAA made against youtube-dl on GitHub.


Of course it impacts the user... And if Epic was found doing something illegal and was shut down or bankrupted, that would also impact the user. Your over-simplification that it's a "business disagreement" is disingenuous and incomplete. The signature revocation system you're claiming is simply to "protect their business model" is the same system that allows Apple to immediately shut down any malware that makes its way into the App Store inadvertently. It's the same system that's been used in the past to protect users from private key leaks.

The only anti-consumer behavior in your situation came from Epic who knowingly violated the rules as a PR stunt.


> The only anti-consumer behavior in your situation came from Epic who knowingly violated the rules as a PR stunt.

Exactly. Never forget, it was Epic who threw their users under the bus, not Apple.

Epic expected you to be a soldier in their fight. They expected you to make a sacrifice you were not willing to make.

That's entirely on Epic.


I was not claiming the revocation system only has the purpose of protecting Apple's business model, but it's one of the purposes.

Even though I agree that in the Epic case, most of the blame lies with Epic, I still have a problem with Apple: the signature revocation system is used for more things than removing malware. I think it is user hostile and anti consumer to disable installed apps on other grounds, because the users might be dependent on them.

I'd like to be able to run programs and apps on my machine that are not Apple-approved.


> I'd like to be able to run programs and apps on my machine that are not Apple-approved.

I thought you could anyway. You would right-click the app in Finder and choose Open — from then on, it would continue to open.

Or is that a different mechanism?


Apple promised it would only be used for security related stuff on desktop. I wouldn't want my desktop audio project to break because the VSTs were unsigned due to an iOS app business dispute. That's anti-consumer.


I agree that it's anti-consumer. I disagree that it's Apple that's being anti-consumer. The company developing the app would be anti-consumer for knowingly violating the Terms of Use to try and pull a PR stunt.


That's often done for producing a static performance and mix for distribution.

But when returning to a digital musical work months or years later, oftentimes the idea is to improve or otherwise rework it ... just as live bands do constantly. A non-working essential plug-in (filter, synth, VCO, whatever) might make that much more difficult.

One of the biggest headaches in computer music-making is how much time fighting the tech takes away from the creative process. No one needs their OS to be adding to their distress. Let alone switching serial-port designs every few years (obsoleting trusted and often expensive equipment).


I assume you upgraded OS, in which case it's annoying but not unusual that plugins stop working.

A machine that's used for making professional music should not be upgraded or connected to the internet. If it's for a hobby... I think we will have to live with the compromise if we want to have the latest security fixes and connect to the internet.


>should not be upgraded

Most DAW-makers are constantly upgrading their software. And they often obsolete their older versions to get in sync with new OS's. 'Keeping the old stuff' sometimes isn't an option.

Physical instruments keep working for decades ... but thanks to OS upgrades, valued digital hardware and/or software instruments (say by Opcode or Native) can be lost to stupid or cavalier changes. Anyone who's been making 'professional music' for long has been bitten many times.


Yes they are obviously upgrading their software because they need to make money and adding features and fixing bugs is a great way to do that.

The only solution to not having a broken music workstation is to never connect that machine to the internet and never update it. Physical instruments keep working for decades because...they are never connected to the internet and never updated.


Completely agree, except I'd say physical electronic instruments have a lifetime of about 20-30 years now. Several synths I own are now non- or half-working because of fading displays, broken floppy drives, power supplies and even chips going bad. The relentless churn of music computer software and hardware setups has also existed since the 80s. I guess it is probably worse for photo & video production.


>I don't want to worry about whether my music will work five or ten years from now

This is exactly what Apple has already done to the iTunes world: music you had a decade ago is suddenly inaccessible


> [...] music you had a decade ago is suddenly inaccessible

Music bought via iTunes doesn't have any DRM since 2009.


But even without DRM, if you go with the default settings and don't download your music locally, you lose access to it if they decide to remove it from their catalog. Same goes for movies/TV shows; I've had both disappear from my iTunes library at various points.


Do you have an example of this?


What is this referring to? Mine seems to work fine.


Interesting, but reading the conclusion I'm fascinated by how, in this affair, technically knowledgeable people lose common sense to defend their favorite brand:
- Per-launch verification is terrible for privacy, vis-a-vis Apple and the whole network, when it happens in plain text.
- "They should also explain how, having enjoyed their benefits for a couple of years, they’ve suddenly decided they were such a bad idea after all" raises another key issue: user information, consent and control.
- Additionally, the public was only made aware of this because it malfunctioned, which is also a security issue.
- Considering the current corporate culture, there are legitimate concerns about what those choices might lead towards.


No Logo by Naomi Klein outlined how brands work.

One factor in the irrational defence could be a kind of psychological protection of investment. Apple isn't just another company, it's an entire lifestyle ecosystem. Those invested in Apple have the watch, TV, laptop, iTunes etc. And together they really do "just work" - the user experience is great!

So to admit that Apple is flawed, that their investment was a bad idea is to admit they were wrong and that their time and money was wasted. No-one wants to be a sucker. Far better therefore to protect your investment. Apple really are genius to pull this off. Apple is part of people's identity.


The article points out that while there are drawbacks to checking app signatures, there have also been documented benefits in terms of uncovering vulnerabilities and making systems more secure, which also has direct privacy benefits to the users whose systems don't become compromised by malware.

The balancing act between freedom and security is never going to not be a debate. Engaging in it in good faith as in the linked article is a reasonable approach (that you don't usually see represented in Klein's oeuvre): consider tradeoffs, counterarguments, and historical context from different perspectives. Apple is flawed sure, because all complex solutions are inherently flawed. They have a responsibility to be more open and transparent, and I'd prefer to see more details and updates to their otherwise laudable security whitepaper [1], and clearer more accessible user-definable toggles. But your or my preferred solution probably isn't the ideal default for most users, or for the ecosystem as a whole.

[1] https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...


Signature checking is not what people are mad about; the only way to find anything not-bad in this situation is to retreat to the most abstract view, that signature checking in itself is not bad.

That's the problem with people who dismiss Klein and her "oeuvre": you are so desperate to stay in the middle that you refuse to see the evidence right in front of your face.


Sometimes people just draw different conclusions because they have different values and priorities or willingness to accept incremental progress.

I’m not dismissing Klein, I referred to her “oeuvre” out of my respect for her as an artist.


This is a good explanation about why people defend brands, but in case of Apple, my experience is that the anti-Apple crowd is more emotionally invested in being anti-Apple than the fans are for it. It used to be the other way around when Apple was the underdog, but after the iPhone took over and became a sort of default for certain regions and social circles, the most loyal brand warriors are the anything-but-Apple fans.


The most vocal, emotionally invested anti-apple crowd are those who were once emotionally invested in Apple. Similar to an anti-smoker, ex-cult member, ex-mormon or vegan convert - those who have been burnt are often the loudest. The same psychological effects are in play - people will try to defend the hurt to their ego.

There is no brand of "anti-Apple" - that doesn't really exist as brands don't work like that BUT groups of people do though, and people can have identities of being in a group. (I can't really say I have noticed an anti-apple subreddit either)

One way groups defend themselves is to define the opposition - by defining the opposition to Apple fans as anti-apple-groups, it helps bolster the coherence of the Apple fan group and defend the brand and investment for the individual. We are under attack, close ranks and protect each other.


But they're not wrong at all and their time and money was well invested.

The ecosystem does just work and the user experience is great compared to the alternatives. It's also possible to use only specific products and switch off various cloud or telemetry options. There's still a long way to go to reach a private OS, but Apple has by far the best privacy stance when compared to Google or Microsoft and there is nobody who offers such an OS right now. The best one can hope for is build their own Linux-based distro or use BSD and then one has to be prepared to invest a significant amount of time.

Apple didn't get challenged through the GDPR yet because everyone's busy with Google and Facebook still. But, if you or anyone else would like to lodge a complaint, maybe this will be decided for the customers (I think it's borderline) and we'll get an option to switch it off.


"One factor in the irrational defence could be a kind of psychological protection of investment." The Author of this publication is clearly invested deeply in Apple ecosystem as a developer. One of the reasons that I consider using Mac OS behind hardware firewall in the future is clear realisation of this process. This telemetry malpractice clearly must be prevented by legislative measures. Trust is earned by transparency, not by some kind of Security slogans.


> I'm fascinated by how, in this affair, technically knowledgeable people lose common sense to defend their favorite brand

What I find weird is regardless of what the discussion is involving Apple, someone needs to pop in with one of these theories about Apple tribalism.

Very very few people are in fact "defending" Apple here. Even among those few, the sentiment is largely that this is bad and Apple is fixing it.


I presume the comment is about the article, which it quotes, so it doesn't matter how few such people exist.


It takes some serious mental gymnastics to see how that part of his comment relates to the article.


> how technically knowledgeable people lose common sense to defend their favorite brand

Someone who says this usually holds an opposing position and simply has their own tribal allegiance. Perhaps assume good faith on the part of those who do not make the same choices you do.


I think there's an element of people desperately wanting to believe that apple is their tribe, rather than just another company.

I can believe Apple do care about privacy, but ultimately they're just another company. For example, I'm sure apple would love the Epic lawsuit to be decided based on a poll of HN users - "I would rather not have the freedom to run whatever I want, because [insert bizarre anecdote]".

Don't project your own beliefs onto apple, vote with your wallet if they annoy you - it's just a trackpad.


Could the people that vote with their wallet please also stop caring about Apple so much, to the extent that they have to rescue those that are still Apple customers and nitpick anything Apple-related to bits? Take a clean break, it's healthier that way.


>I'm fascinated by how, in this affair, technically knowledgeable people lose common sense to defend their favorite brand...when it happens in plain text

I'm fascinated how technically knowledgeable people don't understand OCSP.

Checking the revocation status of certificates is why OCSP was created. It happens via HTTP. Why? Because you cannot check a certificate used for the HTTPS connection when you are using HTTPS for the connection. Apple leveraged OCSP for Gatekeeper since it does the same thing, checking certificates, in this case a developer certificate. That is all it does.
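For anyone curious what that traffic actually looks like, here is a rough sketch of a generic OCSP check using the Python cryptography package; the certificate files and responder URL are placeholders, and this illustrates plain OCSP-over-HTTP in general rather than Apple's specific Gatekeeper exchange.

    # pip install cryptography
    import urllib.request
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    # Placeholder inputs: the certificate to check and its issuing CA certificate.
    with open("leaf.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("issuer.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
    request_der = builder.build().public_bytes(serialization.Encoding.DER)

    # Request and response are DER blobs carried over a plain HTTP POST.
    req = urllib.request.Request(
        "http://ocsp.example-ca.test",              # placeholder responder URL
        data=request_der,
        headers={"Content-Type": "application/ocsp-request"},
    )
    with urllib.request.urlopen(req) as resp:
        ocsp_resp = ocsp.load_der_ocsp_response(resp.read())

    print(ocsp_resp.response_status)
    # If SUCCESSFUL, ocsp_resp.certificate_status is GOOD, REVOKED or UNKNOWN.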


It's also easy to imagine what the blog posts would look like if they did the same thing except over TLS--in a way that the harmlessness / purpose of the request was not immediately apparent.

I agree with you, though--it seems like they solved a valid problem with the most obvious, commonly-used solution. The real debate is probably just over whether or not the problem is a sufficiently large threat to justify the downsides.


How is certificate checking a terrible idea?

It doesn’t leak the application name or any personal information, and Apple doesn’t store it permanently.


It initially logged IP address and the associated developer ID which was a genuinely bad idea. They've stopped logging IP address now.

The concept here is fine, they just screwed the pooch a bit on implementation. And as usual, HN blew it out of proportion.


Developer ID is an extremely good proxy for application name.


Yeah, the article's last paragraph irks me... to reformulate it in the context of domestic spying, it'd be like saying "the NSA's communication monitoring has kept you safe for years; now that you've heard of it, you decide it's a bad idea?".

Most people probably never noticed this phone-home feature existed, just like they never knew that NSA was recording everything. (Obviously anyone who bothered to look under the hood could've seen it, but hey, how many people do that).


The Apple thing was not designed for explicit mass-surveillance; NSA’s programs are. That’s kind of a big difference.


> "Those who consider that Apple’s current online certificate checks are unnecessary, invasive or controlling should familiarise themselves with how they have come about, and their importance to macOS security. They should also explain how, having enjoyed their benefits for a couple of years, they’ve suddenly decided they were such a bad idea after all, and what should replace them."

A simple opt-out toggle, for privacy reasons, would be a good start... people should stay in control of their own data and be able to choose themselves whether or not they are willing to trade in their privacy (for security in this case).


Apple has said they plan to do this, and also encrypt the checking payload. Sounds good to me, though definitely a privacy failure that they didn't do this in the first place.

The other thing I'd like to see is the app open immediately, w/ the check happening asynchronously in the background. (This seems like super-basic good engineering to me.) No idea if they're planning to fix that or not.


> The other thing I'd like to see is the app open immediately, w/ the check happening asynchronously in the background. (This seems like super-basic good engineering to me.) No idea if they're planning to fix that or not.

How would this work? The point of the check is to block malware from running, and opening without the check would, by definition, negate the entire system. If malware authors get wise to the async scheme, they can write programs that deliver their payload in the opening milliseconds of an app’s execution, while the network call is running (even the fastest pings would leave 1 or 2 ms worth of window).


Fair question. My thinking was that the system is already not designed to be failproof, just mitigative (it turns off entirely if there's no internet), and that malware would be pretty limited in what it could do in just a few hundred ms.

Waiting to open an app based on a network request is basically just guaranteed to give you a terrible experience some % of the time.

Maybe fancier solutions like a local blacklist are needed. (Which weirdly it looks like Apple had and then moved away from?)


Surely this check could be done on install/first run, then cached?

If you want rapid blacklisting, a frequent call to Apple to say "anything new blacklisted?" would suffice. Same as push notification.


But that would defeat the purpose. Apple can be thought of as the equivalent of the NSA: they "care" about your privacy in the sense that they don't want anybody but themselves to have access to it.

Unfortunately we don't have an Apple competitor that cares enough about your privacy to not want anybody, including themselves, to have access to it.


And yet, this claim about Apple’s intent isn’t made with a shred of evidence.


[flagged]


That article contains no evidence that Apple has a hidden agenda.


[flagged]


Gesturing broadly doesn’t work because it’s not evidence. It is only innuendo.

If you had evidence you’d be able to be specific.


"Apple dropped plan for encrypting backups after FBI complained" doesn't sound privacy oriented to me.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


Navigating the complexities of dealing with different governments is not the same as having their own anti-privacy agenda.

Of course they should encrypt the backups, but perhaps the alternative was going to be some kind of legislation that would be even worse.


> navigating the complexities of dealing with different governments

Is that what kids call "handing over all cloud customer data to the FBI" these days?

Holy whitewashing batman, that's a lot of speculative mental gymnastics.

To put it simply you have no evidence that supports Apple here other than a "but perhaps the alternative was... whatever I just came up with" .

To paraphrase yourself in another comment: "that's intellectual dishonesty about Apple."

If Apple cared a single bit about privacy it would have encrypted customer data from the beginning instead of planning to eventually do it one day, only to give up upon FBI request.


This is turning out to be a bit of a similar case as the iPhone battery degradation performance throttling issue. Instead of clearly messaging what they were doing to your phone, they did things behind the scenes because they knew better, and decided not to give the user the choice to run the phone at full performance.


You clearly need to familiarize yourself with the actual facts of this case. Also, I'm pretty sure that for 99% of users, "full performance" is not characterized by "randomly crashing due to lack of intelligent power management".


The throttled performance is full performance. Without throttling the phone would simply crash.


I sometimes wonder if the mods won't end up banning "political" talk on HN. Because these days everything becomes political, even if it really is a technical issue.

Case in point: online signature check was a technical decision, to fight malware. It was implemented similarly by other OS vendors (Microsoft) and it's been this way for years.

Now we discover that it has the unfortunate side-effect that it lessens privacy. Apple (and probably other OS vendors) are working to improve that in the future. Also, a technical issue.

It was never about privacy. It was never a political issue. Can we please just discuss it from a technological standpoint?


The technical issue is "can we provide these features without weakening privacy?"

The political issue is "if we can't provide these features without weakening privacy, should we still provide them?"

Aren't they both important points to discuss?


They are, but the difference is that we can fix technical issues, or at least improve them. We can (mostly) agree about what's right and wrong and what's better or worse.

Political issues on the other hand, just end up antagonizing us ever more. We argue endlessly, go on countless tangents and nobody agrees on anything because we see the very issues under different lights, experiences, values and cultures.

I am tired of politics. I just want to get stuff done. Hopefully good stuff, but I’d settle for slightly better.


Politics is about where we go. Tech is about how we arrive there. If you disagree on the direction we are headed, discussing different ways to arrive there seems pointless.

You may agree with the status-quo politics, but other people don't. For them it's not about the tech, it's about the overall direction. For those people the important discussion to have is political.

For instance, I personally am against the current direction macOS and Windows are headed. I have no problem with these kind of security measures as long as there is a button to opt out. Currently, Apple decides for everyone, and doesn't provide ways to opt out for power users. I dislike this, and I feel like discussing better cryptography, security protocols, etc, doesn't address my priorities.


I think this is a terrible attitude. We can improve political issues and we do it by defending our opinions in the public sphere, which has the power to change others' opinions. If you don't work to present your ideas to the world in the best possible light then you will just allow other, weaker ideas to become more convincing in comparison, thus doing a disservice to anyone who could have been convinced otherwise.

I am sorry that politics is tiring but I think that is just a reality of politics being a manifestation of natural forces. The natural world is competitive, it's competitive on the cellular level, the food chain is competitive, and human societies are competitive too. Looking the other way doesn't change the reality, it just guarantees that the opinions of others will become more significant to the world than your own. Just like how our cells compete to form the best possible body, I think we are obligated to compete with our ideas to form the best possible society.


People tire of politics when (they feel) the conversation becomes unconvincing, pedantic, or monotonous. That doesn't mean those people have the wrong attitude; it means the argument is not compelling.


People are "tired of politics" until it affects something they care about.


It's possible that the technical discussion about online signature checking was subsumed by the political discussion--the two are interrelated and if we don't get into these discussions here I don't really see another place for them to happen.

It's easy to shrug it off as simply a technical issue, and it's very convenient for the PR department as well.


When a giant like Apple makes a decision to disallow installing apps that were not downloaded from the macOS App Store (and by preventing me from opening the app, what Apple does is basically disallowing the app; my retired father does not know how to circumvent this), then this is a political decision that affects the lives of thousands of Mac users.

Computers have become a gateway to the digital world, which makes up a large part of our lives. And Apple is a huge player that has the power to shape the future of computing. Apple's vision of computing is on a trajectory where the end game is clear: Users have no control over their devices and will only be doing what Apple allows them to do. This is the case on iOS already, and macOS will be there in a few years.

Some say: Decide with your purse. If you don't like Apple's vision, then don't buy their products. But this argument is like saying "This is how we handle things in country X, if you don't like it, move somewhere else." Some of us have invested lots in tools and software in Apple land, so changing platforms will take some time. But even if we can change, Apple is shaping the future of this industry, in a way that it restricts freedom and limits options. And this is not a future we should accept. Instead we should be opposing and fighting it.


I've recently downloaded quite a number of Mac programs directly from the web, as well as from Steam. They work just fine, as long as they've been signed.

What makes you say that Apple has made a "decision to disallow installing apps that were not downloaded from the macOS-app store"? That seems kind of obviously untrue.

I mean, I myself sell Mac software outside the Mac App Store and have never had a complaint.

Maybe you were thinking of the iOS app store?


> everything becomes political, even if it really is a technical issue

I'm being reminded of an old song by Tom Lehrer [https://www.youtube.com/watch?v=QEJ9HrZq7Ro]


I think informing the vendor of an OS about every program you run has necessarily a political component and is of relevance for discussion about security. It is the first step to define the threat model and there certainly are additional threats you are exposed to. This data is highly valuable to anyone that might want to infiltrate computer systems.

The technical discussion of signatures is well documented.


> It was never about privacy. It was never a political issue. Can we please just discuss it from a technological standpoint?

No. Unintended consequences are important. There are privacy issues, therefore we need to discuss privacy.


> Case to the point: online signature check was a technical decision, to fight malware.

This is an oversimplification. It also helps protect Apple's business model: you must pay Apple a fee for services (and show ID) to be able to sign your apps for distribution on this platform. Imagine if you had to show ID to get a TLS certificate for your website.

Don't conflate the issue - this is also a move to protect certain streams of Apple services revenue, in addition to protecting users from malware, and it always has been.


I see you asserting this over and over.

What I don't see is you providing any real evidence that this is a core part of the decision-making process.

Apple isn't particularly incentivized to find a different way that avoids the tools they already have that already make it harder and costlier for parties to get around their security mechanisms. That is not the same as making decisions because they protect the business model.

Which is to say, it appears that you're the one oversimplifying and conflating (btw, I do not think that word means what you think it means) some very different motivations.


I agree that there is no direct evidence that this decision was part of their formal decision-making process. But there is still something to be said for designing systems where it's not possible for those negative incentives to exist, whether or not there is any current intention of taking advantage of them.

Of the tens of thousands of people who had a hand in shaping macOS today, it's impossible to say what their collective intentions were in all the decisions they made. So I think it's useless to talk only about the intentions you can prove just by looking at their formal decision-making process. That is why we need to be working to protect privacy at every level with a "defense in depth" approach.

And of course it goes without saying that all major vendors have issues like this and could be working harder to make sure that these incentives don't get created.


> designing systems where it's not possible for those negative incentives to exist

No doubt, but even in simple systems this is considerably more difficult than it sounds. Incentive systems are not easy, and any incentive system is often twisted into a game that produces unexpected poor behaviors. Try to achieve that at a 100+k employee company and you're guaranteed to end up with misaligned or counterintuitive incentives.

The reason something stronger than simple assertion matters here is because Apple actually has added an incentive system: trumpeting privacy as a core feature means they tie their brand to their ability to deliver on privacy. That means that being called out for privacy issues has greater potential harm for the company, and thus its bottom line, in a way that's far more direct and monetarily impactful than $100 annual developer fees ever will be.

Even cynically, you can see the same mechanic at work with the recent reduction of app store percentages for small businesses. Apple has made it a core part of its developer outreach that the app store is a good thing for developers: they've made it part of their brand. If they're called out for something that makes many of those developers disagree, or even for something that makes users perceive that part of the brand as incorrect, it has implications to the whole company's bottom line (not just the developer id sliver of it).

> it's useless to talk only about the intentions you can prove just by looking at their formal decision-making process

For what it's worth, my point isn't tied to formal decision-making, it's tied to the informal parts as well. What I'm saying is that this can be a completely technical decision that relies on the existing business structure without ever taking into account whether it will raise revenue from developer ids. The developer id's existence is a fact. The “DoS resistance” characteristics, if you will, of having the developer ids cost money are a fact. As a system architect, leveraging those facts for system security seems perfectly reasonable. Yes, absolutely they should have taken privacy into account as well. “We haven't used it” and “we won't use it” aren't the same as “we can't use it”.

But here's the thing: to me, the strongest indicator of whether a company is committed to an approach is whether they react positively when they are called out, or whether they double down on their mistakes. Apple was called out here, and they've committed to doing just about all of the things they should be doing. You could use this same framing to say that Apple isn't as committed to developers thriving on their platform as they are to living up to their privacy commitment, of course.

Tl;dr: customers are part of the incentive system, tying your brand to a commitment aggressively as Apple has tied theirs to privacy has an impact on customers, and this is a great way to introduce an additional external forcing function to your internal teams.


Apple charges 10x the market rate for credit card processing on the purchase of mobile apps on iOS. Why do you think this is possible?

Take it from dhh if you don't believe me:

https://mobile.twitter.com/dhh/status/1328339591389175808


Sorry, you seem to have deviated into an unrelated axe you're grinding. Try again, this time with the axe you were originally grinding, which I'll help with:

> [Online signature check] is also a move to protect certain streams of Apple services revenue, in addition to protecting users from malware, and it always has been.

To restate and avoid drifting into another non sequitur, this ascribes intent; that is, it suggests that part of the reason online signature check was added, and part of how it has been evolved, is to protect certain streams of Apple services revenue. That would be your argument, which has no evidence to support it, but you suggest is backed by “facts”[1], which appear to be nowhere to be found.

Are these facts somewhere to be found? Or are you stating hypotheses as facts?

[1] https://news.ycombinator.com/item?id=25210475


I have no axe to grind with Apple. I'm a happy Apple customer and have been for most of 30 years.

The same code that keeps malware from running on a mac (or iphone) keeps non-app-store apps from running on an iphone, or prompts you to move non-notarized apps to the trash on a mac.

It's not some separate thing: the exact same code path that protects the consumer store revenue and developer notarization service revenue also protects users against malware.

EDIT, for clarity: I am speaking of Apple-developed, Apple-owned platform security code, where root keys are not held by anyone other than Apple, not generic crypto primitives or the concept of code signing in general (where we have a P-as-in-public PKI).


Which is the same code that keeps unsigned bootloaders from running on PCs which is the same code that keeps unsigned packages from being installed on Linux systems which is the same code that keeps unsigned browser extensions from running on Firefox which is the same code that shows the scary warning on Windows.

Everyone seems to like code signing.


Lol, you have never had to deal with Apple's overcomplicated code signing as a developer.

It throws a lot of wrenches when you're just trying to do basic stuff like codesign and push test builds onto a USB-connected device from a bash script, and it is flaky and undocumented as fuck.
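
For concreteness, a rough sketch of the kind of script I mean (the bundle path and signing identity are placeholders, and it assumes the third-party ios-deploy tool, since the first-party tooling for this keeps changing):

  #!/bin/bash
  set -euo pipefail

  APP="build/MyApp.app"                                # hypothetical build output
  IDENTITY="Apple Development: Jane Doe (TEAMID1234)"  # placeholder signing identity

  # Re-sign the bundle; --force replaces any existing signature.
  codesign --force --deep --sign "$IDENTITY" "$APP"

  # Sanity-check the signature before wasting time on a device install.
  codesign --verify --verbose=2 "$APP"

  # Push to the USB-connected device (ios-deploy is third-party: brew install ios-deploy).
  # On Android the equivalent is basically just `adb install`.
  ios-deploy --bundle "$APP"

Each of those steps can fail in its own opaque way (expired provisioning profiles, keychain prompts, device trust dialogs), which is exactly the flakiness I mean.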

I am honestly jealous of my android counterparts with their far simpler system and first class command line support via adb.


[flagged]


I've not posted any theories, only widely-accepted and recognized facts.


> this is also a move to protect certain streams of Apple services revenue

That's a theory, not a fact, nor is it widely recognized.

By any reasonable accounting of costs, the $99 Apple Developer Program fee isn't meant to be a profit center that Apple is trying to "protect". It mainly helps prevent spam accounts and helps offset the cost of reviewing and distributing free apps in the Mac and iOS app stores.


“ this is also a move to protect certain streams of Apple services revenue”

Citation needed please, Mr. Giuliani.


> What has been puzzling me ever since is that these OCSP checks have been well-known for a couple of years, and only now have attracted attention.

It's not much of a puzzle. Everyone's Mac essentially froze simultaneously, and that's what drew all the initial attention. Then people were digging into why, and the cause of it got a lot of play. Privacy advocates took the opportunity to advocate privacy and had a large and annoyed audience.


Don’t forget to account for selection bias: some people had apps delay launching and they started telling everyone about it as if it affected every app and every user. Most of us had no idea there was a problem and thus didn’t feel obligated to start talking about it on social media.


Honest question, I'm not an expert: the initial comments in this thread are painting it as a severe privacy violation. (The actual OP article author does not necessarily share this perspective.) How is what is being done with OCSP different, in a more concerning way for privacy (if it is), from Firefox or Chrome's use of OCSP?


Because when you browse the Internet you know you are browsing it. When you run software, you do not expect "unexpected" Internet use.

When you go for a walk, you carry an umbrella, or at least get dressed. When you are at home, you do not expect it to "rain" or for someone to "watch you".


That seems incredibly naive. I can’t think of a single program off the top of my head that doesn’t use the Internet to some extent while running. Even many CLI tools I use for development do update checks (and sometimes analytics) in the background.


> I can’t think of a single program off the top of my head that doesn’t use the Internet to some extent while running.

What OS are you using? Purely off the top of my head, Linux programs I don't expect to be regularly connecting to the overall Internet in the background (at least unless I set an explicit opt-in setting or use a user-initiated action):

- Keepassxc

- Krita

- Blender

- Nearly all of my CLI tools

- digiKam

- VLC

- Audacity

- etc...

Programs I have that do check for updates or otherwise send activity to the Internet on boot:

- Firefox

- Calibre

- Emacs (Spacemacs)

- (Probably) something somewhere else that I'm forgetting?

Admittedly, I'm kind of cheating by using Linux instead of Windows/Mac. Those are OSes that don't have good update managers, so many apps manage updates themselves.

But even with those apps, there's a difference between an app itself initiating an Internet connection opportunistically in the background, and an app initiating an Internet connection that actually interrupts the app from launching. Very few of my native apps (side-eyes Emacs irritably) will stop working or freeze on boot if their update server is down or responding slowly.

Remember that the reason people found out about this in the first place is that they went to start launching apps on their Macs and found out none of them would start. That's the kind of behavior I would expect from a web browser if some global server was getting hammered (even if I would still be irritated to see it), but that I basically never expect for a native app.


> Admittedly, I'm kind of cheating by using Linux instead of Windows/Mac.

I don’t think that is cheating. I was primarily thinking of my own work development environment in macOS but comparing that to Linux is perfectly valid. Again though, I didn’t mean to say, “I bet you can’t name a single program that doesn’t use the Internet!” I just meant to point out that programs using the Internet are probably a fairly considerable majority for most regular users.

I use KeepassXC on macOS and it definitely does check for updates. Does it not do that on Linux?


Almost none of my programs on Linux do that by default, because they're handled by my package manager.

Programs like Spacemacs (updating ELPA repos on boot, which I actually kind of think is a mistake) and Calibre (just kind of doing its own thing) are the exception to that rule, but they're pretty rare in my personal experience. Even Firefox doesn't update itself on my Linux box.

That's kind of why I was thinking of Linux as cheating on some level. Windows/Mac programs basically can't do the same thing, since they don't have the same infrastructure.

> I just meant to point out that programs using the Internet are probably a fairly considerable majority for most regular users.

I would push back a tiny bit on this -- I don't think regular users would be surprised by a native program contacting the Internet, but I do think they would be surprised if that rest request failing meant that the program couldn't launch.


> I do think they would be surprised if that rest request failing meant that the program couldn't launch.

I fully agree. The only point I was aiming to make with my original comment was that the mere act of a program connecting to the Internet “unexpectedly” is by no means abnormal.

> That's kind of why I was thinking of Linux as cheating on some level. Windows/Mac programs basically can't do the same thing, since they don't have the same infrastructure.

MacOS, at least, is definitely headed in that direction with its App Store and the move to Apple silicon.


> That seems incredibly naive. I can’t think of a single program off the top of my head that doesn’t use the Internet to some extent while running.

It is not.

Imagine that I don't have that great wifi coverage in all places around the house. But I take my laptop there.

You know what happens when you have a poor wifi connection and you wake up the laptop? Even the keyboard+mouse are unresponsive. I was wondering why on earth my keyboard stopped working on a brand new laptop.

Everything suddenly started to work when I moved to a different room.

I don't have issues with notarization if and only if it behaves normally when the internet connection is spotty - it is a laptop, for god's sake, not a desktop with ethernet.


The macOS OCSP check has been documented to be skipped when there is no internet connection; it does not interrupt or slow anything down in that case.


Just like the sibling poster replied, it is not the issue of "no Internet".

You have three cases here: 1. Working internet connection 2. Slow internet connection 3. No internet connection

Everything works fine in 1 and 3, but breaks the OS when in 2 (and it is very common, just walk further from your router and you'll notice it - e.g. when waking up the mac from sleep).

What is strange to me is that I had an old Macbook Air with latest pre-BigSur macos, bought a new Macbook Air (early 2020) with the same OS and I saw the problem only on it - at first I thought it was broken.


That does sound like a bug, and if that's what's going on it makes me question Apple's technical excellence if they forget to put a reasonable timeout on a network call in such a high-impact place. This bug is not really about privacy though.


Slow Internet and no Internet are different things though. I have experienced this issue as well — sometimes after boot my regular apps will just bounce and bounce (in the macOS dock) and never start. Then when I plug in to ethernet and shut off my wifi everything all of a sudden fires up and starts working.


If there isn't a reasonable timeout set, that does sound like a bug. More than 2 seconds sounds pretty unreasonable to me (it possibly should be even less), for a service that is willing to give up and no-op when there is no network. Someone would have to do some reverse engineering/debugging, maybe by observing/manipulating network traffic, to be sure what is going on there, unless Apple wants to tell us - though I suspect the suspicious wouldn't believe them.

A missing or too-high timeout should be fixed, but I don't think that'd be enough to satisfy the critics in this thread. Would it satisfy you?

[Not setting a timeout on a network request is a common bug in, say, web development. It does make me lose some confidence in Apple's technical abilities if they make that bug in a place with such high consequences. But that's different than ill-intent or a privacy violation]
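
To illustrate, the fix being discussed is just a short timeout with a fail-open fallback - a minimal sketch, not Apple's code; the endpoint and parameter names are made up for illustration:

  import requests

  def check_revocation(cert_serial: str) -> bool:
      """Best-effort revocation check: never block the caller for long."""
      try:
          # Hypothetical endpoint; the point is the 2-second cap.
          resp = requests.get(
              "https://ocsp.example.com/check",
              params={"serial": cert_serial},
              timeout=2,
          )
          return resp.ok
      except requests.exceptions.RequestException:
          # Slow or absent network: treat it like the documented offline case
          # and fail open rather than freezing the app launch.
          return True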

People seem to object to the basic idea of OCSP, which I think means objecting to the basic idea of app signing.

App signing seems reasonable to me (although it is important to me there be a way for users to choose to launch un-signed apps; there still is in MacOS). And OCSP seems important part of a app signing implementation. Improvements to the particular OCSP implementation for both privacy and performance may be advisable though.


>People seem to object to the basic idea of OCSP, which I think means objecting to the basic idea of app signing.

I am. It's one of the reasons I ditched OS X when 10.7 came out despite using Mac OS since 7.6. It's nobody else's business what I run on my machine.


> Missing or too-high timeout should be fixed, but I don't think that'd be enough to to satisfy critics in this thread? Would it you?

A fix in /etc/hosts is all I needed, but if there were a timeout of 2 seconds I wouldn't even notice the problem - so I wouldn't be blocking notarization.
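
For anyone curious, the widely circulated workaround during the outage was a one-line hosts entry - a blunt instrument, not a recommendation, since it disables the revocation check entirely:

  # Point the OCSP responder at a dead address so trustd's lookups fail fast.
  echo "0.0.0.0 ocsp.apple.com" | sudo tee -a /etc/hosts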


I absolutely agree with that. And I have experienced a similar issue with my laptop from time to time. It is pretty dumb and seems like poor implementation. I don’t think that is relevant to OPs argument though.


Your choice, not mine.

Like: Houdini, Emacs, LaTeX, a file manager, Audacity, a terminal... Off the top of my head, used almost daily.


I didn’t mean to imply that one could not name programs that don’t communicate. Of course they exist. I’m not arguing for this type of behavior, just pointing out that it is pretty much the norm. On macOS I use Little Snitch to find and shut down most of this extra traffic.


> I can’t think of a single program off the top of my head that doesn’t use the Internet to some extent while running

> I didn’t mean to imply that one could not name programs that don’t communicate.

Can you understand how people would read your first comment that way, though?


Yes, definitely! But in my defense, it is simply not what I tapped out (:

I was trying to think of a regular program I use on my macOS (work) laptop that doesn’t connect to the Internet “unexpectedly” and I came up blank.


Others have already pointed out this is an exaggeration but still, there is truth to it.

I wonder how the field of software development has changed so radically that this is somehow considered acceptable and even perhaps normal?

Back in the 90s to mid 00s even, it was considered a clear violation of user expectations and not even remotely acceptable. Back then at Sun it was a big rule to never initiate opaque network activity that wasn't directly related to user action and the purpose of the code. Requests for exceptions to that would have to be escalated pretty high up and were generally rejected. It was something you Just Don't Do.

Even running half a dozen machines at home, my networks connection was entirely silent except for the occasional NTP packets, unless I was actively doing some user-initiated network activity.

These days, well, the outgoing pipe is always active even when no machine is doing anything. Most of that traffic is just variants of spyware, reporting back to HQ on what the user is doing at all times. This should not be considered normal, and it's on us as the software industry to try to claw this back.


>I can’t think of a single program off the top of my head that doesn’t use the Internet to some extent while running.

Ok but will those programs magically break if there is no internet? I would then argue nearly all applications work offline just fine. It's the exception that a program REQUIRES an internet connection.


Sure. I totally agree. That statement was in response to this from OP though:

> When you run software, you do not expect "unexpected" Internet use.

When I run software, I _do_ expect unanticipated Internet use. That’s why I use and love Little Snitch on macOS.


All of that is of course bad. Ask around whether people know their CLI tool is phoning home; most people aren't even aware. Let me control if I want to update something. Let me control what information goes out, and when.

There is just no way to defend an underhand tactic that you didn't know about. If it was so necessary and so good and so pure, why does it have to be revealed like that?


  [X] Query OCSP responder servers to confirm the current validity of certificates
You can uncheck this box in firefox.

You cannot uncheck anything in macos.

Arguably firefox won't let you opt-out of automatic updates and a bunch of other annoying stuff, but apple is significantly worse.
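
(For what it's worth, the same Firefox switch is also exposed as an about:config pref - at least in the versions I've used; verify the name on yours:)

  security.OCSP.enabled = 0

where 0 disables the query, 1 enables it for all certificates, and 2 restricts it to EV certificates.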


>Arguably firefox won't let you opt-out of automatic updates

Yes it will, but you need to create a policy and be using version 60 or later, which includes the Enterprise Policy Engine.

https://support.mozilla.org/en-US/products/firefox-enterpris...

The Enterprise Policy Generator add-on will help create the policy file.
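
On platforms where you manage it with a file rather than macOS defaults, my understanding is that the same thing is a small policies.json dropped into Firefox's distribution directory:

  {
    "policies": {
      "DisableAppUpdate": true
    }
  }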


Yeah, but removing it from settings was not something I liked.

I actually did the enterprise thing when I found out about it:

  sudo defaults write /Library/Preferences/org.mozilla.firefox EnterprisePoliciesEnabled -bool TRUE
  sudo defaults write /Library/Preferences/org.mozilla.firefox DisableAppUpdate -bool TRUE
But I could still not prevent firefox from trying to phone home:

  shavar.services.mozilla.com
  firefox.settings.services.mozilla.com


Chrome doesn't use online OCSP checking, and it's optional in Firefox. Both browsers support OCSP stapling, which doesn't violate privacy and has better failure modes, as well as certificate revocation lists.

Because online OCSP checks damage privacy and don't help much with security, browsers are moving away from them:

https://www.ssl.com/article/how-do-browsers-handle-revoked-s...


The browsers are moving away from OCSP and using other protocols specifically because it's terrible for privacy. Also, the whole online/offline thing as someone else pointed out: it is a violation of the principle of least surprise for launching a local app to make a network request.


> explain how, having enjoyed their benefits for a couple of years, they’ve suddenly decided they were such a bad idea after all

I didn't "enjoy their benefits" - I hated this change when I switched from Mojave to Catalina, and it severely impacted my workflow.

Catalina's change to OCSP and online validation adds little if any value, compromises privacy, reduces performance, and introduces unnecessary new failure modes. It's simply a bad idea whose negatives greatly outweigh any minimal positives.

> what should replace them

Very obviously a Certificate Revocation List, like we had in Mojave.

This is the right approach and should not have changed in the first place.


A fantastically bad take. "You're more secure because Apple has been careless with your privacy for a long time and you didn't complain before -- be grateful."


I don't agree with the article's statement that this is necessary.

I'm sure it serves a purpose. But it should be more transparent to the user what's going on, and it should be possible to switch it off if the user decides they don't want this.

And really, the article also mentions Apple used to do this with a local cache but stopped doing this in Catalina. The question should be asked why. A local cache arguably offers better protection as it will work even without a network connection whereas the OCSP has no alternative other than failing open or stopping the system from working.


> And really, the article also mentions Apple used to do this with a local cache but stopped doing this in Catalina

This exactly. Local cache works fine, certificate revocation is rare, and a marginal to nonexistent improvement in security is not worth the slowdown, denial of service, and privacy invasion.

Chrome uses a certificate revocation list for basically the entire internet; certainly macOS can (and indeed should) go back to using such a list for developer certificates, as they did in Mojave.


Most things serve a purpose, even if it's a convoluted, dumb, wrong or evil one.


How does Windows check executables? I hope they don't do the same. Does it come with a master list of public keys from manufacturers to check the signature against?

How does that work for new vendors?


Microsoft SmartScreen is very similar to Apple's approach, although I believe you can still disable SmartScreen on Windows, which, afaik, you cannot do on macOS without resorting to "hacks" such as editing the hosts file to point the Apple OCSP server at loopback.

https://en.wikipedia.org/wiki/Microsoft_SmartScreen


Good thing too because it is easy to screw up signing.

For example, even the dotnet team has trouble with it https://github.com/dotnet/core/issues/5202


Exactly. The policy applies to everyone. Unsigned new binaries (executables) on the internet are not to be trusted by the general public.


Unfortunately Microsoft is so predictable that it's safe to say they're already thinking about removing the option to disable it. Another reason not to use an MS account. I think this is a plain data leak, and arguably worse than what a trojan can do to your system in the context of modern banking security measures. Also, I doubt most users are aware that MS gets info on every app you run.


>Does it come with a master list of public keys from manufacturers to check the signature against?

That won't handle revocations.


It will if you update it frequently.


...Do people not know AV's randomly upload files to the cloud to be analyzed?


Security and privacy are not parallel concerns, they’re orthogonal. Strong security absolutely does not imply utmost privacy. I find this to be the most dangerous misconception of the late privacy trend. You can’t just turn security and privacy dials to 11. They’re actually two ends of the same dial, or opposing poles of the same sphere. To increase privacy you must move away from perfect security.

Why? Because security is all about who you trust (and who you don’t). Privacy is about concealing things from people you trust (and especially from those you don’t). Security is best served with strong identity and periodic integrity checks/monitoring. Privacy is best served via anonymity and opacity. If something is private, by definition there lacks the transparency to audit its integrity.

So what I lament is not that a company is trying to achieve both, but rather that as a consumer I’m not educated on the topic and able to make a choice as to where I want to set the dial.

You will continue to see “headlines” like this so long as socially we’re obsessed with trying to implement both security and privacy and fall subject to marketing suggesting some service provides the maximum of both.

If you trust Apple to verify the integrity of apps on your devices and secure your system from unwanted software, then you trust Apple to maintain the privacy around the data needed to achieve such. That’s the whole value prop of their platform and ecosystem. It’s a walled garden with abundant privacy koolaid fountains.

The only reason this is news is because people don’t understand the privacy vs security dichotomy. And because Apple does not provide a way for consumers to choose just how much security they’re comfortable with.

If you don’t trust Apple then stop pretending you do by using their hardware/ecosystem.


While security and privacy are not parallel, they are not orthogonal either.

You can have both. Integrity checks can be done anonymously (e.g.: you could have a p2p network of devices sharing a signed database of certificate revocations).

Encrypting something gives me privacy. Signing something gives me security. Encrypting a signed package gives me both.
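
As a toy illustration of that last point (a minimal sketch using the Python cryptography package, not a production design): sign first for integrity, then encrypt the signed blob for confidentiality.

  from cryptography.fernet import Fernet
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # Publisher: sign the package (integrity/authenticity)...
  signing_key = Ed25519PrivateKey.generate()
  package = b"app-bundle-bytes"
  signature = signing_key.sign(package)  # Ed25519 signatures are 64 bytes

  # ...then encrypt signature + package together (confidentiality).
  symmetric_key = Fernet.generate_key()
  ciphertext = Fernet(symmetric_key).encrypt(signature + package)

  # Recipient: decrypt, then verify; verify() raises InvalidSignature on tampering.
  blob = Fernet(symmetric_key).decrypt(ciphertext)
  signing_key.public_key().verify(blob[:64], blob[64:])

Neither property gets in the way of the other, which is why I don't buy the "one dial" framing.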


They _are_ orthogonal, you’re just saying that you can have some of both, which is exactly my point about them existing on a spectrum.

And it’s not as simple as encrypting data. You have to trust somebody to determine what good integrity looks like and to then verify the integrity information is fresh. The same privacy concern exists if you run OCSP against ciphertext as it does against plaintext. You still have a stream of all the things people do. Bad for privacy.

Running a decentralized system means you have to trust all the nodes not to store data or collude. Same problem in a different way. You simply cannot achieve integrity verification if you don't trust anyone to do it. And this is why they are orthogonal. Trust is not compatible with doubt.

The issue here is that Apple took off-the-shelf OCSP and applied it in a way it was not designed for. So there _are_ actual problems with their implementation. They should be fixed. And personally I think OCSP is kinda dumb because it mechanically defeats the advantage of certs (you don’t need a cert if you’re going to phone home for every invocation, just check a hash), but meh.


You're using the extreme cases of each to argue they are "orthogonal". Perhaps at some extreme no one actually lives in, they are orthogonal. Most people don't need extreme privacy or extreme security, so in the cases that matter, this observation (which seems to be a major crux in your argument) is not important.


The point is that the more private you make something the less ability you have to audit its integrity. If sending a 3rd party a list of hashes is a privacy problem, then security is what takes a hit in order to preserve privacy. That’s not an “extreme case”, it’s what’s being discussed in the essay and in this thread.

Similar examples include DNS over TLS vs DNS filtering for content security, and client certs for mutual TLS vs exposing personal information in said cert, and secure neighbor discovery, and IPv6 (can’t have a global IP because someone might track it), the list goes on.

I’m not saying we should pursue security at all costs or privacy at all costs, far from it. I am saying exactly that there’s a balance between the two and moreover that the balancing point may be different for individual people which leads to arguments like we’re seeing here between people who calibrate more on the security end vs people who prefer extreme privacy. And in my experience people very often conflate the two, which makes it hard to have a productive discussion.

Finally, I’d venture to say that the privacy push of late is having impacts on the ability to deploy strong identity, because much of the privacy wave lacks the nuance to distinguish between entities you trust, and hence with which it’s okay to maintain a stable secure identity, and those you don't. Instead the trend lately has been to remove stable identifiers (e.g. Apple’s move to randomized MAC addresses, and GDPR treating IPs as PII) and conceal everything no matter what (TLS 1.3 and DoT/DoH, although props to Mozilla for making it possible to configure at the network level via DNS).


One major security concern is identity theft, so your claim that privacy is orthogonal to security is very likely not generally true, because the number of parties you share information with increases the risk.

So we need to look at every scenario in particular because a general rule might not exist. I think the mechanism of signature checking and its consequences should be more transparent.

> If you don’t trust Apple then stop pretending you do by using their hardware/ecosystem.

Fair point, but the trust I put in Apple is that I know their intentions, which are quite obvious. They want to sell their products. They want their users to be safe and content. I trust them for that. Can this trust be extended to not creating barriers that might increase their revenue? Or to not sharing user info for profit? Certainly not. I think Apple is better here than its competitors, but it is a common error not to attach restricted scopes to security tokens.

edit: In this case there are trivial solutions for signatures that solve the problem without any privacy violation. Maybe Apple's process does preserve privacy because they don't cross-reference any data. Maybe they just use it for statistical purposes, to determine which apps are used. I would be fine with that. But your statement suggests there is no privacy-preserving alternative, and I think that is technically wrong.


"Privacy is not a feature".


Privacy is a currency you pay hidden charges with.


Privacy is a thing you only get to choose once per line of events. Once you commit to the non-private option, you are out of luck. No amount of postcautions will fix that.


Directly contradicting current Apple Marketing.

I loathe Apple (and other unethical companies) for lying in their ads.

Any benefits of macOS are instantly gone because you cannot trust Apple to tell the truth. It's as unreliable as Google keeping a service around.


Apple's marketing basically boils down to "please trust us that we respect your privacy because we tell you so". You can at least reverse engineer the hardware you own and the software on it, but what about cloud services they're pushing so hard? You don't, and can't, know what happens to your data in someone else's infrastructure.


Well, we do know they dropped iCloud end-to-end encryption after the FBI complained.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

We also know that Apple handed all Chinese cloud data to the government by moving hosting to a state firm.

https://www.theverge.com/2018/2/28/17055088/apple-chinese-ic...


You don’t have any evidence to support the claim that they are lying.


First time on HN? There are like 2 new Apple security or privacy issues every week.

You know about PRISM/edward snowden?

You can verify all of this with almost no effort. Any links I post you won't believe. It's up to you.


You must be new yourself not to have seen how quickly those claims are debunked or shown to be far less exciting than claimed. For example, you’re repeating Greenwald’s misunderstanding of the PRISM report seemingly unaware of the last decade of discussion.


Not one of those security or privacy issues substantiates that Apple has a hidden agenda to collect data on you.

They do substantiate that Apple has a long way to go in terms of technically solving privacy problems.

Yes, the NSA has an agenda to track you.


Ahh, my book doesn't actually make you a millionaire in a day, but that's just because I failed technically. Anyway -

"Buy my book, make 1 million dollars in a day".

It's not a hidden agenda, I just suck.


Apple is actually checking certificates. That is the agenda.

Claiming they are doing more requires evidence.


I first started getting an idea that something was amiss with apple years and years ago.

"Apple machines don't get viruses"

Everyone "knew" that. Everyone knew that apple machines were secure.

Turns out there were plenty of things going on with apple just like other companies. The difference was that apple would actively persecute (even prosecute) people who explored or discovered these things. Basically apple marketing was able to keep a lot of vulnerabilities out of the public eye. Probably marketing 101: protect your brand.


Please cite evidence for these extraordinary claims. Who, precisely, did Apple "prosecute" for "exploring" security issues?

Please don't post made-up things in public.

As an aside, no, everyone did not "know" that Macs don't get viruses. But yes, most informed people were aware that Macs were somewhat less susceptible to malware for a while. Please don't exaggerate wildly.


I cannot find the specific articles, but there were security researchers who would find some sort of apple problem, be contacted by apple, and go dark. This was probably the era when osx was getting off the ground. I assume they were protecting their brand, but I always thought it was a little disingenuous (on the other hand, security researchers are not all college professors with good intentions).

I think apple finally had to cave when there were problems too large to ignore.

https://www.ibtimes.com/apple-yanks-mac-virus-immunity-claim...


I know this because if my internet drops out - the router is still up, but packets are going nowhere - opening an app, especially one of mine, takes north of 10 seconds, which is the time for this thing to give up on talking to Apple.


> They should also explain how, having enjoyed their benefits for a couple of years, they’ve suddenly decided they were such a bad idea after all, and what should replace them.

I mean, this is very very presumptuous... the people I know who were "in the know" on this--including myself--never upgraded to Catalina (in addition to doing a SIP disable), in no small part to avoid this intrusive behavior.

(Further, there actually was outcry about this a year ago, when a similar issue happened: just, instead of the system not running software at all, it ran all new software super slowly.)


The online component of ocsp is only necessary for revocation lists. There is absolutely no reason to open a network connection every time an app is opened.

The state of OSX is unacceptable.


What I find a lot scarier is that macOS seems to store your local user's password as a hash with Apple.

A few months ago I was signing in on my MacBook, and it asked me (I assume because I did not have a mobile number attached) for my Hackintosh local user's password as 2FA.

May be buried in the depths of the EULA, but I most definitely never agreed for my LOCAL account password to be uploaded to Apple.

At least signature checks can be blocked (for now) on a DNS level. What about my passwords?


If you’re using iCloud then that’s coming from keychain. That’s how you can reset the local password with your Apple ID.


I've never used iCloud, on any of my Apple devices.


> What has been puzzling me ever since is that these OCSP checks have been well-known for a couple of years,

This is where you’re wrong. It was known, but not well-known. Users do not expect an HTTP request to block their application launches.

Personally, I don’t see why CRL is not sufficient. Yes I want malicious signatures blacklisted, but can’t I just get a list instead? Some of the reasons CRL is no longer used do not apply to code signatures.


How long they've done it has no bearing on whether it is an outrageously bad idea. It was bad in Catalina and it's bad in Big Sur.

I found it particularly annoying in Catalina to have these slow, unnecessary and intrusive online checks every time you run a Python script and it drove me up a wall until I figured out you could turn off the idiotic behavior in preferences.


The only charitable understanding of this program is that Apple has no actual table connecting software to hashes, but that they could use the information to understand outbreaks of botnets/spyware, which they could then help ISPs/global law enforcement stop.

Is this even reasonable?


The requests contain only app hashes. They do not contain the unique hardware identifier that Apple computers have. They do not contain your Apple ID, identifying you as a user.

Why would you not interpret this charitably as them not actually trying to spy on you? If they wanted to spy on you, why on Earth would they not send the actual valuable information?


I believe you are trying to think about this rationally, but this is not the only thing going on.

When a mac boots up or changes network location, a long list of processes on your machine (like AppleIdAuthAgent, identityservicesd, and maybe 10 or 20 more) connect to various apple servers, associating your actual identity with the IP address. They will continue to do these kinds of things while you are online. And all this is interleaved with OCSP requests.


Do you have sources for this? I don’t doubt you, I’d just be interested to read some details about what exactly those services are doing/sending.


this is from observations of my own machines.

I would suggest you install Little Snitch, turn off everything, and then boot your machine. Maybe switch from ethernet to wifi and observe the kinds of popups you get.

Another method, though hard to associate with specific processes, would be to run tcpdump -i <interface>.

Sometimes it's more useful to run tcpdump and do a capture, then play it back later, because DNS lookups by tcpdump itself make it chattier than it should be. You could also run it with -n, but you get less information.
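
Something along these lines (a sketch; the interface name will vary):

  # Capture quietly to a file (no live DNS lookups adding their own chatter)...
  sudo tcpdump -i en0 -w boot-capture.pcap

  # ...then read it back later, with (-n) or without name resolution.
  tcpdump -n -r boot-capture.pcap
  tcpdump -r boot-capture.pcap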


It is not possible to accurately connect an identity with an IP address. Many computers share IP addresses, and many others jump IP addresses frequently.


Let's say you're one out of 10,000 users in a large network sharing a single public IP address.

Anyone trying to identify you just needs to narrow that down from 10,000 to one. This can be done many different ways by combining data sources. You could automate it with algorithms and maybe some machine learning, but it'd also be pretty trivial for a dedicated human to do it.

Browser fingerprint, MAC address, software versions, browsing behavior (HTTPS still leaks hostnames via SNI), access times, and application hashes will all help narrow down that search.

Maybe your university requires you to install some special software (like one of those locked down browsers for exams), and the attacker knows an approximate range of time when you installed it based on your exam schedule. They could narrow their search by intercepting OCSP requests and filtering for application hashes that match the specific version of the software you most likely downloaded, based on the time you most likely downloaded it.


But why? Why go to these ridiculous lengths to try to extract information from an unreliable source?

Apple already has the device identifier and Apple ID. These are reliable. They do not need combining different data sources and algorithm and machine learning. They are the accurate data already. If they wanted it, they could just send it.

They don't. Why not? If they wanted this information, why on god's green earth would they not just send it?


It's quite obvious. That way you can't get nailed for breaching privacy.

It's exactly the same concept as the NSA saying that they are only collecting metadata and not doing any spying.


It would do absolutely nothing whatsoever for the legality of any collection they might want to do.


Of course it would. IPs are gathered as a matter of course with any request. They can almost always be correlated with real identities using other requests, but under many data-protection laws they wouldn't be allowed to send personally identifying information that isn't necessary.


GDPR would not agree with you there, I gather.


How does the GDPR agree with the collection of personally identifying data that is not necessary to operation?


I didn't mention Apple. Anyone intercepting the OCSP traffic (which isn't encrypted) could use it to track you.


Shoot, apparently those Tor devs have been completely wasting their time.

Somebody in the security industry should let them know that their work on the network level is useless and unnecessary because leaking IP addresses isn't a real privacy threat.


Accurately. The key word in there was "accurately".


The real key word is "legally". IP addresses + other metadata (browser fingerprinting and the like) can be enough to sufficiently identify an individual, or at least a household, for some purposes, such as making a more effective advertising profile.


Collecting the data based on IP would basically have the exact same legal status as doing it based on other identifiers. No browser is involved here, so no fingerprinting. And finally, Apple is not in the advertising business and has no need for advertising profiles.


Coulda woulda shoulda isn’t evidence.


I'm having trouble figuring out your meaning in this context. Care to explain?


There is no evidence Apple is storing any information collected via OCSP for advertising, or collecting metadata along with it. Effectively all they know is an IP address and a hash identifying a developer certificate that can correspond to any of dozens of apps.


How does "coulda woulda shoulda" mean that?


If IP addresses couldn't in some cases accurately track users, then it wouldn't be a priority to build a network that obscured them.


If they can only do it "in some cases", then they can't do it accurately, is the entire point. It can do it SOMETIMES.

Apple has more accurate information. They are not sending it. Why?


I think you might be confusing accuracy and reliability. But I'm not here to argue about semantics, you can use whatever definition of accuracy you want. IP addresses are a large attack vector for deanonymization/tracking, and people should be thinking more about how IP addresses get leaked and in what contexts. Whatever definition of "accuracy" you want to use, I don't think that changes the overall point that IP addresses matter.

To your earlier comment:

> Why would you not interpret this charitably as them not actually trying to spy on you?

I fully agree with this. I think Apple's intention is not to spy on users, it's to A) stop malware, and B) exert more control over their ecosystem in general.

I have a problem with point B, but that's a separate conversation.

To your later point:

> It is not possible to accurately connect an identity with an IP address.

This is just plain wrong; it is possible to accurately connect identities to IP addresses, people do it all the time. Reliability is a separate conversation. If you're working off of a different definition of "accurate", then, whatever, I don't care. But I stand by the point that IP addresses are a privacy risk and that they can be used to track/identify users in the real world, I don't think that's a disputable fact.

The privacy worry here is twofold. First that these requests are (currently) sent in plaintext. To their credit, I think Apple is fixing that issue. Which is good, because plaintext payloads allow adversaries on the same network to potentially sniff which applications you're using.

The second privacy worry is that regardless of whether or not Apple is trying to spy on users, they still might be storing that data, and we only have their word to go on that it'll be stored in a protected way or deleted regularly. If a court order comes down asking Apple to reveal that information, or if it gets hacked, that data increases the threat model for Apple users.

An adversary with access to Apple's data (whether that adversary is a hacker or a government) could theoretically tie app usage to real-world identities if the IP addresses are logged, especially if they can match IP addresses with logging of other data being sent to Apple's servers. That's the point m463 is making in his original comment -- the data here doesn't just exist in isolation, it's IP addresses that an attacker could correlate with other real-world identifying information.


Many computers don't share IP addresses too. And there are times when there is only one macOS under an IP. This is not something to just wave away.


The point is, this information is UNRELIABLE.

Apple has access to information that is ACTUALLY RELIABLE.

However, they choose to not use the reliable information.

Why would you see this, and assume that they, on purpose, decided to NOT use the reliable information, but instead use unreliable information to spy on you?

Why would they do something so bone-headedly stupid?


Do you reject the possibility that they combine the reliable information with the unreliable information to strengthen the latter? I mean, if you have logs stating that a specific Apple ID connected from a specific IP address at a specific time, and they know the identity tied to that Apple ID, it seems reasonable that it would strengthen the claim of "the user at this IP address is most likely this person" quite a bit, especially if the same IP shows up regularly with that Apple ID? I'm not saying they do this currently, but I'm just drawing attention to the fine line between privacy and security. Vigilance isn't necessarily a bad thing, as long as it's grounded in the full reality. It's always possible that one day, Apple's focus in this area may change. When it does, it helps if part of that discussion is already being had.


I reject that, yes, because they could just collect the reliable information. The unreliable information is no value.


>The requests contain only app hashes.

No, application hashes are never sent. People are confusing OCSP and notarization. The requests sent via OCSP check the developer certificate. Notarization checks are done over HTTPS.


The request also reveals your IP address. The charitable approach would be distributing a Bloom filter and checking matches locally.
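
A minimal sketch of what that might look like (hypothetical, not Apple's design): the vendor periodically ships a compact bit array built from revoked certificates, clients check it locally, and only a rare (possibly false-positive) hit triggers any network traffic.

  import hashlib

  class BloomFilter:
      def __init__(self, size_bits=1_000_000, num_hashes=7):
          self.size = size_bits
          self.num_hashes = num_hashes
          self.bits = bytearray(size_bits // 8 + 1)

      def _positions(self, item: bytes):
          for i in range(self.num_hashes):
              digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
              yield int.from_bytes(digest[:8], "big") % self.size

      def add(self, item: bytes):
          for pos in self._positions(item):
              self.bits[pos // 8] |= 1 << (pos % 8)

      def might_contain(self, item: bytes) -> bool:
          return all(self.bits[pos // 8] & (1 << (pos % 8))
                     for pos in self._positions(item))

  # Vendor side: build from revoked cert serials, sign it, ship it with updates.
  revoked = BloomFilter()
  revoked.add(b"serial-of-revoked-developer-cert")

  # Client side: a miss is definitive and never leaves the machine; a hit may
  # be a false positive, so only then do a targeted online check.
  if revoked.might_contain(b"serial-of-cert-being-launched"):
      print("possible revocation - confirm online")
  else:
      print("definitely not revoked - no network request")

Chrome's CRLSets work on a similar "push a compact summary to the client" principle, so it isn't an exotic idea.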


Your IP address is a very weak and unreliable way to identify you. If they actually wanted to identify you, they have much better ways to do it, and those ways are not used.


Is it even app hashes, or developer certificate hashes?


The larger the program, the longer it takes to verify the first time you run it. I can only interpret that as meaning they are hashing the whole program.

Source: me, downloading and running various apps


Apparently the initial launch delay is Gatekeeper, which is something different.


Your IP address can be pretty accurately matched to an identity due to PRISM.

Has a third party verified they don’t keep track of IP addresses?


An Apple ID is tied to an identity right now. They don't use it. Why?


What does that even mean? Of course they can identify you; you are knocking on their door with the same IP as your iCloud account. Maybe the file you are giving them does not have your uid, but as long as you have connected your Mac to your Apple account you are uniquely identified.


Plenty of computers share a single public IP address. Plenty of computers jump IP addresses constantly. It is not at all reliable trying to tie IP addresses together that way.

If they wanted the information, they would need to send it.


Maybe you have a funny network situation, but my network situation is solid, and my IP address rarely changes. IP addresses are absolutely identifiable information.


They are SOMETIMES identifiable information. They are not RELIABLE for identification.

If you are going to be collecting information, you are not going to choose unreliable information when you have the option to collect reliable information. That would not make sense.


I can right now open my vpn, change my ip address, go to my apple account and it will report that my Mac is "Online", meaning that apple knows that my particular machine is on with a given IP address. It would be completely useless (redundant at the very least) to also include the uid in the signature file.


There might be ten, or a hundred, other people with the same public IP visible to Apple, if your ISP is running you through a NAT.

The fact that you have connected from an IP recently in no way guarantees that the next connection from that IP will be coming from you. That is exactly why you need to send an identifier for any data being collected.


Every time you change network locations, all of these daemons contact apple again. There are plenty of push services on your machine and apple keeps track of where you are.

also ipv6


And Apple updates your IP continuously on their servers, as evidenced by the "Find My" service.


>They do not contain the unique hardware identifier that Apple computers have

A different part of macOS can send it, and you would have no idea until it malfunctions, like in this case.


They would not be accurately tied together, though, since these requests are not sent with any kind of identifier.


Who knows, maybe they can start to send an ID too, by a special request from the server, or send those hashes through other channels in addition to this one, but much later. And the server can make such a request based on some heuristics or on some 'black list' of hashes. How can you know for sure? We can only guess to a certain degree without looking at the sources.


Why would they NOT want the data now but would want it in some unspecified future?


To avoid being caught could be one reason. Why would they hide the feature, without an option in the GUI to turn it off? We do not know. Possibly they are hiding something else, something bigger - who knows.


OCSP requests don’t identify software, they identify certificates. Each certificate can be used for dozens of different apps.


That is not correct. It does a live check when presented with a certificate, to make sure that certificate has not been revoked for signing malware. It doesn’t store anything. Apple are not saving information. It’s just an online blacklist check. That’s how OCSP works everywhere, it isn’t an Apple thing. They are using the standard protocol as documented in the RFC.


There is nothing in plain OCSP that prevents the responder server from logging the request along with the originating IP. Any claim that a particular server doesn't do so is either just an assumption or based on trust alone. This is why OCSP stapling is preferred over plain OCSP in browsers, and also why plain OCSP can be disabled. In this particular case, trustd and other system daemons are known to bypass VPN and firewall blocks - so it's a mandatory information leak.


> It doesn’t store anything. Apple are not saving information.

How do you know?


Is there any UI indication that OCSP checks have been consistently failing for some period of time?

My concern is less local malware (if something malicious has gained the privileges to filter OCSP, it's probably already game over) but rather networks filtering ocsp.apple.com (for whatever reason).


Does it matter? Would anyone (but the most paranoid) care? My guess is that OCSP is supposed to be part of a defense-in-depth strategy, so it doesn't have to be 100% airtight to achieve its goals.


I care that Apple or Microsoft don't get to know what executables I run. That is information highly relevant to security.

And what do you mean by paranoid? I think computing is in danger of getting locked down, and that would be a large overall detriment for everybody. It is a matter of having a broad perspective.

It is also political because you create information asymmetry. To be honest, dismissing it as paranoia might not be a really valuable assessment. Subjective, but I find it fairly lacking.


> I care that Apple or Microsoft don't get to know what executables I run. That is information highly relevant to security.

I think you missed the context of this comment thread. The "Does it matter" and "Would anyone care?" questions were directed at the question of what happens (presumably from a security point of view) if OCSP requests were being blocked. It's not related to the discussion about whether OCSP queries violate privacy or not.


Certainly explains why having terribly slow internet makes my mac insanely slow. Shutting off iCloud helped immensely, but this is probably the deeper root cause.

Sucks to have cellphone-only internet.


Did people miss how Apple has already said they've removed all the logs retroactively for this and they're adding a feature to opt-out of this?


yes. with no personally identifying details included. and it was publicly documented.


IP + timestamp is a personally identifying detail. Otherwise we wouldn't have Tor.

https://www.theatlantic.com/technology/archive/2014/02/every...


So, can you use macOS offline for extended periods of time?


Yes, it only checks when you're online


It seems that many tend to overlook the main issue with this.

Because of this feature, there are people on this lovely planet of ours who may be in actual physical danger at this very moment.


Citation needed.


Citation to what?

The fact that if you're a journalist in Belarus with a MacBook right now, the kinds of apps you open can point you out to the authorities controlling the local internet infrastructure in no time?

Or do you expect repressive governments to make press releases explaining in detail how they came to rounding up someone?


OCSP doesn’t send app signatures.


Developer certificates are enough to figure out that you're opening, say, Tor or Telegram.

See this thread for people discussing this: https://news.ycombinator.com/item?id=25095438

When they implement a toggle for switching this off, along with data encryption, then this particular concern becomes a non-issue.


"Apple doesn't have ads and tracking," they said... Anyway, that was never worth the premium for me when I can put ad blocking on a Lenovo and save a few hundred €.


Surprised this is news to some.


I wonder how long until macOS checks hashes of all media played on devices.


[flagged]


It has been widely known that these checks were happening.

Not only that, this isn’t the first server problem that impacted launch performance. It’s just the most severe.

The main difference is that this time around there are people who are claiming that Apple is using the OCSP checks for some kind of nefarious tracking purposes.

These people have no evidence.


"The main difference is that this time around there are people who are claiming that Apple is using the OCSP checks for some kind of nefarious tracking purposes."

What proof is there that we should trust Apple? They could be tracking for nefarious purposes for all you know. That's the problem.


That’s true of every single organization and every single individual.

You can always justify a conspiracy theory on the basis that you can’t prove a negative like this.

Let’s consider another conspiracy theory:

“A state actor wants to install spyware, and Apple’s OCSP is a barrier to their goal. They are running an influence campaign to get users to opt out of security protections.”

There is no evidence for this theory.

But “for all you know” certain people posting here have been paid to spread disinformation as part of this conspiracy.

(Just to be clear - there is no evidence for this, and I don’t think it is likely)

In the absence of evidence, it is not rational to completely dismiss either or both possibilities (that Apple has a hidden agenda or that there is a conspiracy to weaken Apple’s security).

What is irrational is to use the absence of evidence to the contrary to convince yourself that something is obviously true.

However on the broader point - I agree that we should not be reliant on trusting Apple for our privacy and security, and cannot afford to be as we move into the future.

We need a public domain infrastructure that produces similar or better security and privacy outcomes to the ones Apple is claiming to provide.


> You can always justify a conspiracy theory on the basis that you can’t prove a negative like this.

It's not about definitively claiming they are being nefarious, it's about the fact that they CAN be, and Apple isn't transparent enough for us to know if they're not. So it's about risk. People can use Apple products, I don't really care, but they risk their privacy when they do, and that's not a risk people should have to take when using an OS.


Any software vendor CAN be nefarious.

It is just innuendo to claim it about a particular one without evidence.

People don’t risk their privacy by trusting Apple any more than they do by trusting anyone else. Almost certainly less so than by trusting a company that makes money out of personal information.

Singling Apple out without evidence is misleading innuendo.

If we want people to have the option not to trust private corporations, we need to create infrastructure that currently doesn’t exist.


> Any software vendor CAN be nefarious.

I'm glad you finally realized this! And that's why people should use FOSS.


I think people should use FOSS, but lying about Apple doesn’t help with that, nor does it solve the problems that FOSS has when it comes to creating a trustworthy ecosystem for end user software delivery.


So you guys know AVs randomly upload your files to the cloud, right?


The irony of arguing that the rapid rate of certificate revocations is proof of the system being necessary and secure. No, it's proof that the system is useless. Code signing is a dead end, and we have known that latest with Stuxnet.


With a system checking for certificate validity, Stuxnet would have stopped shortly after its certificate was revoked.

Regardless, the Stuxnet example is way off the mark. It was designed to work in an air-gapped network and defeat a particular set of obstacles.


Any opt-out protection racket should be illegal. Even if it is "anonymous", someone could be identified from their behavioural patterns, which are unique to each human. I hope Apple gets at least a hundred-billion fine for such a brazen violation of privacy so they learn their lesson, and they should be ordered to delete all personal data they don't have a legitimate business need for.


Why would they try to identify you from your "behavioural patterns", when they already have the device identifier and your Apple ID, which identify you and your computer uniquely?

Which, to be very specific, they _do not send_. They COULD identify you extremely easily, and they specifically chose not to do that.


Maybe they have other channels which could, or already do, send the device identifier and Apple ID. How do you know what _else_ they do?

Who knew they were sending hashes until the recent malfunction? What _else_ do we not know now?


If they wanted that information tied together, they would send it together. Otherwise they have to keep guessing about what goes together with what. That would make no sense.

And it was known this data was being sent.


>That would make no sense.

Hiding such a feature without an option to turn it off would also make no sense to me, but they did it.

>If they wanted that information tied together, they would send it together.

Sure, unless they wanted to hide the fact that they wanted the information tied together. In that case they can always offer the line of argument you provide once they're caught and say, "Oh, if we wanted to spy we would do it openly and brutally. There is no sense for us to make it complicated." But something tells me that someone who wishes to spy would do it in a sophisticated manner. The bottom line is we do not know, and we _can_not_ know since we did not see _all_ the sources.

You also cannot know what they wanted, so your guess is as good as any other? Or somehow you know something we don't?

The bottom line here is the same: who knows.

But I think if they wanted to make it open and secure, they would simply put an option in the GUI, with clear text explaining what it does and how, and with the ability to turn this thing off. And they didn't do it this way. To me that can possibly mean "they are hiding something". What else they hide, and why, we simply do not know.


It's sent unencrypted. Anyone else could use said patterns to track and identify you - and they would have good reason to, precisely because they do not have the ID.


Apple also makes apps themselves, and, just like AmazonBasics, has a tendency to clone popular apps (such a tendency that it has a name: to be "sherlocked").

To ignore the fact that Apple receives global platform app popularity data, which provides them a competitive advantage over every other app developer, is somewhat foolish.

Gatekeeper has security benefits, yeah. But this data, just like third-party sales stats to Amazon, has commercial value, even anonymized/aggregated, that gives them an edge over everyone else they are competing with in the app market.

Let's not conflate things that are good for users with things that are good for Apple, and be careful when assigning decisions exclusively to one bucket or the other.

I spoke yesterday on the societal dangers of giving Apple such a pass on privacy violations:

https://www.youtube.com/watch?v=iG-7FpHvv-8



