macOS has checked app signatures online for over 2 years (eclecticlight.co)
614 points by giuliomagnifico 60 days ago | 430 comments



A common refrain in arguments that we don't need laws to protect privacy is that the market will take care of it. The market can't act against what it can't see. Privacy loss is often irreversible.

A common refrain in arguments that we don't need to reject closed source software to protect privacy is that being closed source doesn't hide the behaviour, and people will still notice backdoors and privacy leaks. Sometimes they do, sometimes they don't.

Parts of the US Government's unlawful massive domestic surveillance apparatus were described in the open in IETF drafts and patent documents for years. But people largely didn't notice and regarded reports as conspiracy theories for a long time.

Information being available doesn't necessarily make any difference, and making a difference is what matters.


The market only acts fairly when the product is a commodity. The time for the market to react to a product with the complexity of a Mac is decades.

As the ecosystem grows, the cost of switching increases. Therefore the market starts acting more and more inefficiently.

This is why countries have state intervention in such cases. And why antitrust law exists.

If the option were a Mac with privacy vs. a Mac without privacy but $10 cheaper, I'd think the market would pick the Mac with privacy. No such choice exists within the budget, and if a company has a monopoly on the ecosystem, then the market cannot react when held hostage. The cost to transition to a different ecosystem is thousands of dollars when you've been using a Mac your entire life. It's not like you'd want to relearn many things when you get older; perhaps it's the sunk cost fallacy, as many would define it.

It's a bundled deal; it's not like you can pick the parts you like and throw away the ones you don't, as you would in an efficient market.

If even someone like me cannot be bothered to move to more privacy-friendly platforms, then I don't have hope for other people. It's just not worth it. Until the majority face the consequences of their apathy, any action you take is fruitless. Let them taste the pain, then offer the solution.

I know I can be tracked. So what? I can't verify the silicon in my device, can I? Gotta trust someone, maybe blockchain will solve the problem of trust among humans, in a verifiable way.


> maybe blockchain will solve the problem of trust among humans

Absolutely not.

https://www.schneier.com/blog/archives/2019/02/blockchain_an...


There's so much wrong with this post I'm not even sure where to start. Literally almost every paragraph starts with something untrue. The whole article is written from a false understanding.


If you’re going to claim Schneier is wrong on crypto stuff, you’ll want to bring a suitcase of evidence along if you want people to take your claim seriously.


Well, he does think that one is unable to establish the integrity and authenticity of a message by using DKIM, so...

He does seem to have a weird cult of personality around him but I can't really understand why. He has been irrelevant for quite a while now.


> If you’re going to claim Schneier is wrong on crypto stuff, you’ll want to bring a suitcase of evidence along…

How about the $348 billion that says he's wrong about his take on Bitcoin?

Look, I get it. I respect Schneier's knowledge on encryption but he is wrong about blockchains and about Bitcoin in particular.

But he wouldn't be the first establishment technologist/economist/politician to be wrong about Bitcoin.

As I write this, the market cap of Bitcoin is a little over $348 billion [1]; there's no way it reaches this valuation if its distributed trust model didn't work.

He's making social and process arguments, not technical ones.

[1]: https://bitbo.io


> How about the $348 billion that says he's wrong about his take on Bitcoin?

Market bubbles are a thing. For a while there everyone was convinced that small plush toys were going to help them retire. The market also convinced itself for nearly a decade that "housing prices never go down". Lots of people can be wrong for a surprisingly long time.


> Market bubbles are a thing. For a while there everyone was convinced that small plush toys were going to help them retire.

Yes, bubbles are a thing, but Bitcoin appears to be something different. It's passed every test and attack. It's now being taken seriously by mainstream financial professionals, CEOs of publicly traded companies, and Wall Street.

These facts themselves can't prove that Bitcoin is not a bubble, but they do mean that if you buy into that line of thinking, then encryption and math have to be suspect as well, since Bitcoin's functioning relies on them.

Also, Bitcoin has been the best performing asset of the past 10 years, a period in which most people didn't take it seriously: https://www.bloomberg.com/news/articles/2019-12-31/bitcoin-s...


> Also, Bitcoin has been the best performing asset of the past 10 years, a period in which most people didn't take it seriously

You're literally describing the bubble. An asset that is worth $20k one day, and worth $6k 6 months later, is worthless to anyone except speculators, speculating on... a bubble.

Disclaimer: I am long BTC.


> You're literally describing the bubble. An asset that is worth $20k one day, and worth $6k 6 months later, is worthless to anyone except speculators, speculating on... a bubble.

This is short term thinking and a general mischaracterization.

First, we've never seen a new form of money created in realtime, so it's hard to say how it's supposed to perform.

However, nobody should expect something that will fairly soon have the same market cap as gold (about $10 trillion) not to have a lot of volatility as it grows. The tech darlings of today (Apple, Google, Twitter, etc.) were also quite volatile as they grew.

There were a lot of ups and downs for Apple as it went from darling startup, to nearly going out of business in the mid-'90s, to a $2 trillion market cap today.

As you may know, the mantra in the bitcoin community is to HODL (hold on for dear life) and not to time the market. For long-term investors, the ups and downs don't matter.

A publicly traded company that puts its treasury of $425 million into Bitcoin isn't speculating: https://www.microstrategy.com/en/bitcoin.


> First, we've never seen a new form of money created in realtime, so it's hard to say how it's supposed to perform.

This hasn't changed with Bitcoin. BTC is money in the way that a gold bar, a beanie baby, or a block of IPv4 addresses is money.

> However, nobody should expect something that will fairly soon have the same market cap as gold (about $10 trillion) not to have a lot of volatility as it grows. The tech darlings of today (Apple, Google, Twitter, etc.) were also quite volatile as they grew.

Nobody describes the tech darlings as money, or as an alternative form of currency. They're equities you can invest in, and they come with the expected volatility.

> As you may know, the mantra in the bitcoin community is to HODL (hold on for dear life) and not to time the market. For long-term investors, the ups and downs don't matter.

It's nice that there's a backronym that's been created from a typo. I still don't invest in money; I use money to invest in assets.

Disclaimer: I remain long bitcoin (since 2012)


> First, we've never seen a new form of money created in realtime.

Call me when I can buy my latte and groceries with bitcoin, or pay my mortgage. Until then you've made an investment vehicle, not money.

> The tech darlings of today (Apple, Google, Twitter, etc.) were also quite volatile as they grew.

Survivorship bias. Pointing to a handful of random successful companies and trying to draw conclusions is literally meaningless.

> As you may know, the mantra in the bitcoin community is to HODL (hold on for dear life) and not to time the market. For long-term investors, the ups and downs don't matter.

Yeah, this isn't something you actually say about money. Honestly, it would be a kind of cultish thing to say about an investment too. I don't need a mantra for how I invest in my 401k, why does Bitcoin need one?

> A publicly traded company that puts its treasury of $425 million into Bitcoin isn't speculating

That's the literal definition of speculation, a risky one at that.


Schneier is good at times, but he doesn't always do as much due diligence as he should, and he never bothers to interact with his comment threads, answer questions, or publish retractions.

As a result, I've had more than one experience of arguing with people who take his old blog posts as gospel without ever thinking critically about them.

His article denouncing the XKCD password scheme is a gem: he doesn't bother (if it were anyone else I'd be less charitable and say doesn't know how) to calculate the entropy, and then he proposes an alternate scheme he invented that almost certainly provides less entropy and is more vulnerable to dictionary attacks.
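
For reference, the entropy calculation he skipped fits in a few lines. A minimal sketch, assuming the comic's own parameters (four words drawn uniformly at random from a roughly 2048-word list):

    import math

    # Entropy of a passphrase of n words drawn uniformly at random from a
    # wordlist, assuming the attacker knows the scheme (Kerckhoffs's principle).
    def passphrase_entropy_bits(wordlist_size: int, n_words: int) -> float:
        return n_words * math.log2(wordlist_size)

    print(passphrase_entropy_bits(2048, 4))  # 44.0 bits, the comic's figure
    # Compare: 8 characters drawn uniformly from 95 printable ASCII symbols
    # give 8 * log2(95), about 52.6 bits, but human-chosen "word plus
    # substitutions" passwords fall far short of that bound.

The comic's whole point is that the 44 bits hold even against an attacker who has the wordlist; the substitution-style passwords it mocks only look strong against an attacker guessing blindly.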


I'm actually curious: what's wrong about it? I read it from an outsider perspective and it's full of very convincing arguments against "blockchains." You're absolutely correct that he's writing from his understanding, but Schneier's been in the field for decades (longer than many Bitcoin proponents have been alive), so I'm more inclined to believe he knows what he's talking about than some other random person on the internet.


I think the main reason for the dissonance is that Schneier talks about the trust that happens (and maybe has to happen in real-world scenarios) while the bitcoin community likes to talk about the minimum amount of trust necessary.

You don't have to trust the software, you can verify it or implement your own. You don't have to trust your internet uplink, the protocol would work over carrier pigeons or with dead drops. You don't have to trust exchanges, just exchange bitcoin for local currency with your neighbor. And even if you use an exchange you shouldn't store money there anyways. The minimum required trust is tiny (basically you yourself), but of course as Schneier points out the amount of trust involved in practice isn't nearly as low, and for many people the failure cases are much worse.


> You don't have to trust the software, you can verify it or implement your own.

This is a common refrain among software people, but in reality approximately 0% of the market actually rewrites such software on their own. Most people can't code, and the percentage of those who have the time, interest, and specialized programming skills to rewrite their own financial software is an incredibly tiny pool. In practice, you have to trust someone's code.

And auditing doesn't get you very far either. That takes time too, and auditing secure software is really hard. I doubt I could do it, personally.

> You don't have to trust your internet uplink, the protocol would work over carrier pigeons or with dead drops.

Be serious. Nobody does this, aside from maybe as a joke.

> You don't have to trust exchanges, just exchange bitcoin for local currency with your neighbor.

You don't have to, yet everyone seems to. Convenience matters, pretending to the contrary is a fool's game.

> And even if you use an exchange you shouldn't store money there anyways.

Crypto fans say this all the damn time, and yet nobody seems to listen. Maybe because it's foolish advice to give? While this advice is technically strong, it fundamentally fails to understand what people using any financial system want and what motivates them. Nobody wants to turn the process of moving money around into an elaborate song and dance; they want to click a few buttons and have it work. If your system counts on telling people to avoid the most convenient option to accomplish the task, your advice will continually fail.


I just want to check to see if you read the GP all the way through to the final sentence. You're tearing it apart piece by piece as though the author was making those claims, but you neglected to quote the part of the post that revealed their true opinion.

> The minimum required trust is tiny (basically you yourself), but of course as Schneier points out the amount of trust involved in practice isn't nearly as low, and for many people the failure cases are much worse.


My mistake.


> If your system counts on telling people to avoid the most convenient option to accomplish the task, your advice will continually fail.

Louder please - this is applicable far beyond cryptocurrency.


This is pretty much how I interpret it as well, in regards to trust.

One detail Schneier misses though is:

> Honestly, cryptocurrencies are useless. They’re only used by speculators looking for quick riches, people who don’t like government-backed currencies, and criminals who want a black-market way to exchange money.

The second statement contradicts the first. People who don't like/trust government-backed currencies aren't fanatics. The way the big banks handle money is quite reckless and dangerous, as we've seen time and time again. And buying drugs for your own personal use should be legal (IMO) but is not. Cryptocurrencies are useful. Maybe not to most people, or to Bruce Schneier, but that (blanket) statement is simply false.


This just feels like moving the goalposts, though. Bitcoin has been pushed as this revolutionary thing that's going to fundamentally change currency and payments for everyday people. It hasn't. It likely won't. Maybe some other blockchain-based currency will at some point, but I'm skeptical of that claim.

Beyond that, Bitcoin as a simple store of value is an incredibly risky proposition. Someone upthread posted a link to an article claiming that Bitcoin has been the best performing asset over the last 10 years, but that ignores the gut-wrenching volatility it's gone through.

And even if you stretch to suggest that Bitcoin has been a good investment, a good investment vehicle is generally not a great currency. If I had a $100 bill that, a week later, was worth $50, I'd be pissed. If another week later it was worth $200, I'd be pleased, but would feel very uncomfortable thinking I can rely on that "currency" to continue to pay my bills over the long term.

I'm not saying Bitcoin has failed or that it's useless in general, but as a generic currency replacement I'd say it's pretty weak and unreliable, mainly only suitable for use by people at the margins, people who often can't bear the risk that Bitcoin foists upon them.


Not arguing any of that. Bitcoin is not very useful as a currency. But I trust bitcoin as a concept more than I trust USD, which is currently being devalued/inflated in favor of the stock market, solely for the gain of people who are better off than most. I'm not a doomsday prophet nor an economist, but the future of the world economy is not looking great.

Bitcoin was created as something that governments/banks/corporations couldn't control or manipulate and that's something that's worth a lot in itself, I think.

And, as I mentioned, Monero is quite convenient for shopping on the dark web, without government intervention.

If you trust your government, banks, politicians, and agree with all the laws and taxes, then yeah, you might truthfully state that cryptocurrencies are useless. But some believe and argue that governments shouldn't have that kind of control and people should have a higher degree of freedom and privacy.

This went a bit off topic perhaps.


Can you please explain what those untrue statements are and what his false understanding is?


The “false understanding” is most likely that he starts from the conclusion of “Bitcoin == bad” and works backward from it. But both types of essay are OK: the former is just an “argumentative” or “persuasive” essay, while the latter is akin to a “compare and contrast” one.


> If the option were a Mac with privacy vs. a Mac without privacy but $10 cheaper, I'd think the market would pick the Mac with privacy.

I'm curious about this, and there are some ways we could test it if we had access to sales data.

Amazon sells a Kindle that displays ads on the screen while the device is asleep, and a more expensive model without ads. How many people buy the one without ads? (I'm assuming the device sends back a record as to what ads were seen or interacted with, which has privacy implications.)

Hulu has a plan that shows adverts when you watch a show, and a more expensive plan that doesn't. How many people pay for the more expensive plan?

More generally, what's the price spread that would cause most people to flip from buying one version or the other? To use your example, I'm sure most people would pay for a "mac with privacy" if the difference (on a multi-hundred-dollar or $1k+ product) was only $10. But what if the difference was $50? $100?

And that's not even the exact same example, since a "mac with privacy" might look, to the average user, to be identical in operation to the "mac without privacy". Paying more to avoid advertisements has a clear impact on the experience of using a device/service. But knowing that your $X-more-expensive mac isn't sending application launch data to Apple doesn't change your day-to-day experience much.


Apple buyers are in the premium price segment. The Kindle Fire or whatever is for the most price-sensitive segment.

Companies pay lots of money per user to secure their computer work. They could save lots on licensing by running obsolete software. Companies that do that get hacked, and are sometimes mortally wounded. I'm not so sure users price privacy so cheaply.


> The time for the market to react to a product with the complexity of a Mac is decades.

This makes me think of the two-slit experiment as applied to basketballs. There is a period of time decisions need in order to be properly considered; presumably simple decisions need little time, and complex decisions need more. There is also a period of time required to make a decision, and a level at which the decision is made.

There are lowly software engineers making decisions like this, some of which may not even percolate to daily stand-up, yet will require the Supreme Court to unpack 10 years from now. And no one really knows.

It's like trying to pass a basketball through a slit. Yes, there's a theoretical interference pattern, and you can calculate it, but you can't do the experiment because you can't pass the basketball through a slit that small (in the case of basketballs, if I recall, the required slit is angstroms wide, if not smaller).

So in software development, you've got huge uncertainty in the societal implications of some decisions about what information you're going to pass over the network, but you can't even get them all through daily stand-up, let alone to Congress or the Supreme Court. Somewhere along the way, some of them end up on Hacker News with people misquoting Eric Hoffer: "Every great security decision starts off as a movement, turns into a business, and ends up as a racket."


> It's like trying to pass a basketball through a slit. Yes, there's a theoretical interference pattern, and you can calculate it, but you can't do the experiment because you can't pass the basketball through a slit that small (in the case of basketballs, if I recall, the required slit is angstroms wide, if not smaller).

I know this doesn't have too much to do with the core of your post, but I want to mention it nevertheless: QM does not imply that there is an interference pattern for basketballs. There might or might not exist one, but we would need an answer to the question of measurement to predict one way or the other.


> The market only acts fairly when the product is a commodity.

The market only acts fairly in the window after a product is commoditized and before regulatory capture happens.

And in some cases that window is closed before it opens.


Whilst I agree with the sentiment, it does occur to me just how many Kindles I see with ads.

Is there any data released on ads vs. no-ads versions?

That's the closest comparator I can think of.


I've never had a non-eInk Kindle, so maybe it is different for them, but I've had eInk Kindles both with and without ads. My first was without ads. Since you can add the "without ads" option to a "with ads" Kindle later by paying the difference, I bought my second with ads to see how bad it was.

Here are the only differences I've noticed:

• With ads, the sleep screen displays some artwork from a book Amazon is selling [1] and some text suggesting you come to the store and buy books,

• The home screen has an ad at the bottom.

Since the sleep screen is hidden behind the cover when I'm not using the Kindle, and when I am using it I'm somewhere other than the home screen 99.9% of the time, I never saw any reason to add the "without ads" option, and when it was time to replace that Kindle I again went with ads.

I suspect that the vast majority of people that buy the "no ads" option up front do so because when they see "with ads" they are envisioning a web-like experience where the ads are all over the place and intrusive and animated and distracting.

[1] Usually a romance novel for me, even though I've never bought any romance novels from Amazon, nor anything even remotely like a romance novel. In fact, I don't think any book I've bought from them even had any romantic relations between any of its characters.


On the eInk Kindle, I paid for ad removal just on general principle. I probably could have lived with the lock screen ads, but the home screen ad was unacceptable. The vast majority of my book purchases over the past ten years have been ebooks. I've gone a bit sentimental in thinking of the book list as my library, and I didn't want a billboard of any sort in my library, real or virtual. I have a feeling there are at least dozens of us who are that allergic to ads.

I have also purchased a Fire during a Black Friday sale. Got rid of the ads on that one too, but for free since some nice person over at xda made an automated process for that. On a side note, with Termux it isn’t entirely horrible as a 7 inch laptop when paired with a hinged keyboard case. A portable system at a similar price to a Pi.


Heh. With ads, this was always one of the ways I could embarrass my bride: "You are reading '50 Shades of Grey'," or whatever trashy novel shows up on the cover of her gadget. Always makes her blush.


> Whilst I agree with the sentiment, it does occur to me just how many Kindles I see with ads.

> Is there any data released on ads vs. no-ads versions?

Do they offer a tracking vs no tracking option too? The absence of adverts does not mean the absence of tracking.


The tracking is somewhat inherent to the software: syncing what page you've read up to in a book between devices (a feature many people find crucial!) cannot really be divorced from having the raw data to create server-side metrics about people's reading habits.

Even if you E2E-encrypt each user's data for cloud storage and have devices join a P2P keybag, à la iMessage, consider your own company's ad-tech department as if it were an external adversary for a moment. What would an external adversary do in that situation? Traffic analysis of updates to the cloud-side E2E-encrypted bundle. That alone would still be enough to create a useful advertising profile of the customer's reading habits, since your app is single-purpose: the only reason for that encrypted bundle to be updated is that the user is flipping pages!

And together with the fact that your ad-tech department also knows which books the customer has bought (because your device can only be used to read books you sell, and thus books you have transaction records for), this department can probably guess what the user is reading anyway, no matter how much your hardware-product department tries to hide it.
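
To make the traffic-analysis step concrete, here's a minimal sketch with made-up timestamps (illustrative only, not real Kindle telemetry). Even an observer who sees nothing but when the opaque encrypted bundle is updated can recover reading sessions:

    from datetime import datetime, timedelta

    # Hypothetical upload times of an opaque, E2E-encrypted sync bundle.
    # The observer never sees contents, only that an update happened.
    updates = [datetime(2020, 11, 15, 22, 1), datetime(2020, 11, 15, 22, 9),
               datetime(2020, 11, 15, 22, 18), datetime(2020, 11, 17, 7, 30),
               datetime(2020, 11, 17, 7, 41)]

    # Cluster updates separated by less than an hour into "reading sessions".
    sessions, current = [], [updates[0]]
    for t in updates[1:]:
        if t - current[-1] < timedelta(hours=1):
            current.append(t)
        else:
            sessions.append(current)
            current = [t]
    sessions.append(current)

    for s in sessions:
        print(f"session: {s[0]} -> {s[-1]}, {len(s)} sync events")
    # One late-night session and one morning session: a behavioral profile
    # built from metadata alone, with zero plaintext ever observed.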


My position on data privacy is slightly different from other people on this site—unlike some, I don’t care all that much if Amazon knows my reading habits, but I do not want it to show ads, provide recommendations, or otherwise tailor my experience based on what it knows. I’m concerned that this “tailoring” puts me in a filter bubble, and locks me in to a narrow set of preferences for the rest of my life.

In this regard, Amazon is among the worst of the big tech companies that I interact with. Google lets me turn off personalization—when I go to Youtube, I see an extremely generic set of recommended videos. But I can’t do it on Amazon.

(I don’t use Facebook, not sure what can be switched off.)


> syncing what page you've read up to in a book (a feature many people find crucial!) cannot really be divorced from having the raw data to create server-side metrics about people's reading habits.

Sure it can. You encrypt the data so the client can read it and the server can't. When you add a new device, you e.g. scan a QR code on your old device so that it can add the decryption key to the new device, and the server never knows what it is.
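
A minimal sketch of that design, using Fernet from the Python cryptography package (the QR code here just carries the symmetric key; names and payload format are illustrative, not any real Kindle protocol):

    import json
    from cryptography.fernet import Fernet

    # Device A generates the key once. It is shared with new devices out of
    # band (e.g. rendered as a QR code that device B scans); the server never
    # sees it.
    key = Fernet.generate_key()

    # Reading position is encrypted client-side; the server stores only the
    # opaque token.
    state = json.dumps({"book": "some-book-id", "page": 217}).encode()
    token = Fernet(key).encrypt(state)   # this is all the sync server gets

    # Device B, having scanned the QR code, holds the same key and decrypts.
    print(json.loads(Fernet(key).decrypt(token)))

The server can still observe when the token changes (the traffic-analysis point above), but not which book or page it encodes.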


That scenario is addressed in the next few paragraphs.


The next few paragraphs were not originally in the post. And you obviously get less information from knowing the user has updated some encrypted data than from having the specific page of the specific book. It is also not inherently necessary for the server to have even that information; it could be sent directly from one device to the other(s) or via an independent third-party relay (e.g. Tor).


This. Distributed consensus is easy and cheap to implement when all participants can be trusted (as opposed to the Byzantine problem that e.g. blockchains try to solve). It's even easier if you're OK with approximately eventual consistency and a reasonably high probability of convergence. Page state seems like it fits into this category.
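
For page state among mutually trusted devices, a last-writer-wins register is about as cheap as that gets. A minimal sketch (device names and the merge rule are illustrative):

    import time

    class LWWRegister:
        """Last-writer-wins register: keep (timestamp, device_id, value) and
        let merge() adopt whichever write is newest. No coordination needed;
        replicas converge once they exchange states (eventual consistency)."""

        def __init__(self, device_id: str):
            self.device_id = device_id
            self.state = (0.0, device_id, None)

        def set(self, value):
            self.state = (time.time(), self.device_id, value)

        def merge(self, other_state):
            # Compare (timestamp, device_id) only; ties break by device id.
            if other_state[:2] > self.state[:2]:
                self.state = other_state

        @property
        def value(self):
            return self.state[2]

    phone, kindle = LWWRegister("phone"), LWWRegister("kindle")
    kindle.set({"book": "moby-dick", "page": 88})
    phone.merge(kindle.state)   # sync in either direction, any number of times
    print(phone.value)          # {'book': 'moby-dick', 'page': 88}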


Of course it can be divorced: keep the data client-side, as the Kindle does?


For the non-tablet Kindle, I'm not sure if the differentiation between tracking and ads makes a lot of sense. You (well, I at least) already use Amazon to buy the books that are on my Kindle. I guess they could track how fast I read them. But that seems trivial compared to already knowing what I'm reading.


> Whilst I agree with the sentiment, it does occur to me just how many Kindles I see with ads.

True, although the price difference for the Kindle is about 20%. If the discount on a MacBook Air were similar, I'm sure it would be well subscribed.


The discount for what? There are no ads on the Mac.


> If the option were a Mac with privacy vs. a Mac without privacy but $10 cheaper

OP was comparing the Kindle's with-ads discount to a hypothetical Mac-without-privacy discount.


It sounds like one of the weirder "freedom" arguments. Who is going to pay extra for a Mac without key security features?


They sold a lot of the ad-enabled tablets one Black Friday for like $80, which seemed like a good deal at the time, despite them not running Google apps (which most people want), having ads on the lock screen, and having the world's shittiest home screen app for Android. I bought one for my wife. If you are lazy or not so inclined, you can actually pay after the fact to remove the ads.

Alternatively, you can deliberately break the ad functionality and install Google apps; Android lets you change your home screen.

1 and 2 worked, but they broke the ability to set your own home screen, so the fix is a hacky app that hijacks the home button to show the home screen of your choosing. This worked for over a year; then they repeatedly blacklisted such apps; then it worked but with a 2-second delay, which is basically horrible, and extensions started crashing the Gmail app.

Worst piece of shit ever.


> having the world's shittiest home screen app for Android... Worst piece of shit ever.

> I bought one for my wife.

Aww, that's so sweet! ;)


Prior experience had led me to believe that a crappy default experience was relatively common on Android but easy to rectify.

Mea culpa, but her replacement is so nice I got one too. Moto G7 Powers.


I bought a kindle. It didn't have ads. I did a factory reset. Now it does. This is the first time I've heard about a choice.


I wish I could pay for no ads on everything. If there are only like three ad networks for 99 percent of the internet, and they can track me easily, and they know how much I'm worth to them, can't they just email me every month and say, "If you want a no-DoubleClick ad experience next month, it will cost you $4.65; click here to pay"? Not like they aren't already doing the tracking, and this would increase the value to their ad customers, since they wouldn't pay for views from people who don't want to see ads.


I think it's worthwhile trying to test the hypothesis, but I don't think anyone takes privacy on a book reader with the same passion as they do on a mobile phone.


I imagine it more as a tragedy-of-the-commons situation. I don't care enough about my privacy right now to sacrifice a lot of convenience. However, if enough consumers rejected products that encroach on privacy, we'd get privacy and keep most of the convenience.


"The market only acts fairly when the product is a commodity."

Based on context (rest of your comment), did you mean "substitute good" instead of "commodity"?


> I'd think the market would pick the Mac with privacy.

What "privacy" means here is a huge issue. I would not agree that this would be the case. I think you'll find the majority looking to save 10$, which is almost nothing for many people but a non-trivial amount for most in the US.


> A common refrain in arguments that we don't need laws to protect privacy is that the market will take care of it.

Stronger privacy laws hurt Google, Facebook, and Amazon far more than Apple. Most of Apple's privacy gaffes are just bonehead moves like this one, which shouldn't happen, but also don't drive revenue.


I've always thought Apple's focus on privacy (putting aside the current incident) is rather clever, as Google has no way to respond. Any improvements to privacy undermine their business model.


You'd think that's how it would work out, but funnily enough, it's a net negative even when Google actively tries to enhance privacy! Using Apple's privacy-sensitive ad-blocker standard made people protest the limitations of a fixed count of sites, and competitors spun it into a way for Google to advantage themselves (details on that somewhat murky, i.e. nonexistent).

We have a long way to go as an industry on keeping users _well_ informed about their privacy, and I'm afraid Apple has set us back years by morphing it into an advertising one-liner rather than actually helping users along.


I don't look at it that way. I think privacy is secondary to Apple, and that they use privacy as a selling point when it suits them. Their overall goal is tight control over the entire computing experience, and in places where that degrades privacy, so be it.


Their goal is making money.

On the iPhone, that means keeping the system's reputation for security and ease of use trumps more or less everything else. On the Mac? Not so much.

For Apple, privacy is a cheap giveaway because unlike Google or Facebook, the way Apple makes money doesn't require they know when my last BM was.


This isn’t that. I’ve been aware of this for some time, pretty sure it was in the security white paper and talked about as a feature.

People forget about CRLs because browsers mostly ignore them.

People just go crazy for any Apple story because it attracts attention. People have been paying to send all sorts of app launch analytics to AV companies since the 90s, for example.


> I’ve been aware of this for some time, pretty sure it was in the security white paper and talked about as a feature.

That doesn't invalidate what the parent said. The only way awareness helps is if it's general knowledge. I don't believe it was, and you personally having known about it doesn't make it so.


Lying to the customer about what your product does, or having secret functionality, should be a criminal offence in the same way that breaking and entering or stalking are.

Then, we would find out very quickly what people value.

I firmly believe this ecosystem (as in, the privacy-violating ad and data-selling business model) is only dominant because companies are able to mislead with impunity, so it's basically a form of fraud.


I agree with this, to some degree.

But I've also known that macOS verifies signatures for as long as it's been doing it. This was no secret, it was advertised as a feature.

I assumed it wasn't being done in plaintext, because who would be so foolish as to code it that way? And I'm still plenty mad about that. Anyone could have checked this at any time, presumably people did, and the only reason it became a story is that the server got really slow and we noticed.

Apple says there will be a fix next year, which... eh better than nothing, not even 10% as good as shipping the feature correctly to begin with.

But of the many things about this episode which are worthy of criticism, Apple being deceitful is nowhere among them. Never happened.


It was mentioned in a developer presentation with very few details as to how it worked. Apple did not go into details; the information presented here was mostly reverse-engineered by app developers and security engineers.


What details were missing from the WWDC talk that you would've liked to have seen?


Was it widely known that this feature was implemented as a privacy-leaking phone-home feature? Simply saying "macOS verifies application signatures before running them" doesn't necessarily imply that it's shipping execution data to Apple. All of that could have happened offline, had Apple chosen to implement it that way.


> This was no secret, it was advertised as a feature.

I wish you could prove this.


https://developer.apple.com/videos/play/wwdc2019/703/ The second half of the talk is about the "hardened runtime"

And in the wider tech press

https://appleinsider.com/articles/19/06/03/apples-macos-cata...

"Mac apps, installer packages, and kernel extensions that are signed with Developer ID must also be notarized by Apple in order to run on macOS Catalina"

Even on hacker news https://news.ycombinator.com/item?id=21179970


Firstly, this is not 'advertised'; this is not material the average Apple consumer reads.

Secondly, it does not actually say anything about the OS phoning home and preventing the user from launching an app. The AppleInsider piece talks vaguely of 'notarization', something that can be implemented in a variety of ways, like signing the application with a certificate.


Here is a Forbes article (do “normal” people read Forbes?) that says that to run apps in Catalina one will need permission from Apple. It even anticipated this problem with the server. Sure, it doesn't mention sockets, but remember, this is for normal people, and I think people are smart enough to assume that Apple is probably not granting permission to launch an app by carrier pigeon.

There are hundreds of articles like this. Gatekeeper was a keynote feature.

https://www.forbes.com/sites/ewanspence/2019/12/28/apple-mac...


The act of breaching privacy is technically difficult to prohibit in a way many of us would find palatable.

What should be targeted is the product of said breaches. Something like the blood diamond approach.

If your company has PII, then by law you must be able to produce a chain of consent attestations all the way back to the source.

If you do not, then you're charged a fine for every piece of unattested PII on every individual.
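
As a sketch of what that could mean mechanically (the fields and names are illustrative, not drawn from any existing law or standard), every PII record would carry its attestation chain back to the data subject:

    from dataclasses import dataclass, field

    @dataclass
    class Attestation:
        holder: str         # who held the data at this hop
        basis: str          # e.g. "user consent", "contract"
        evidence: str       # pointer to the signed consent record / contract
        received_from: str  # previous holder; "subject" at the source

    @dataclass
    class PIIRecord:
        subject: str
        datum: str
        chain: list = field(default_factory=list)  # oldest hop first

        def is_attested(self) -> bool:
            # Valid only if the chain is non-empty and starts at the subject.
            return bool(self.chain) and self.chain[0].received_from == "subject"

    rec = PIIRecord("jane@example.com", "phone: +1-555-0100", chain=[
        Attestation("shop.example", "user consent", "consent-log#8812",
                    "subject"),
        Attestation("ads.example", "user consent", "shop.example/transfer#77",
                    "shop.example"),
    ])
    print(rec.is_attested())  # True; an empty or subject-less chain fails audit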


- Can you demonstrate the provenance of every phone number, email address and other contact mode on your phone? Note people's birthdays? Sure, you only want to target companies. Make sure you choose your acquaintances wisely, I guess, or make sure you record their grant of permission to email them, because people abuse laws like this to harass each other every single day.

- This also punishes possession, not use. If you think about that for a minute, it should become clear both how this doesn't attack the right problem, and how companies would evade it.

- Finally... how are you going to audit Ford or Geico? Honest question. Who pays for the audit of "every piece of unattested PII on every individual"? How often, what is the dispute mechanism, and who administers that? Seriously - this sounds like a job for a new agency combining significant portions of the IRS, the FBI and PwC.


> This also punishes possession, not use.

If companies were immune to data breaches or leaks, then maybe this wouldn't be such a big deal. But I don't trust most companies to securely hold my data, even if they don't use it at all.

And besides, by the time a company uses the data in a privacy-destroying way, it's too late. The cat is out of the bag. Sure, the law against use could serve as a deterrent, but companies will push the law past the breaking point all the time. If you make possession trigger legal action, you can mop these things up before they get to use your data. Sure, you still have the problem of finding out about that possession. But also consider that if you have laws against possession and "bad" use, and a company does something, you can charge them for both things and hurt them more. That's a larger deterrent.


> Sure, you only want to target companies

Yes. There are many laws (e.g. accounting) that only apply to companies, when it's scale that amplifies harm.

> possession, not use

How are they different, in this context? The latter requires the former, and the former is unprofitable without the latter.

> how are you going to audit Ford or Geico?

As you note, similarly to how we audit now, albeit hopefully more proactively. If the law requires a signed off third-party PII audit, and holds an auditor legally liable for signing off on one... I expect the problem would (mostly) take care of itself.

PII is always going to be a game of edge cases, but we've managed to make it work with PCI and PHI in similarly messy domains.

Right now, companies have GDPR & CCPA to nudge them in data architecture. National laws would just further that. I can attest to major companies retooling how they handle and track consumer data just due to the CCPA.


By the time someone is charged with a crime, the damage is already done.

And, there will be many scammers who are simply out of reach of meaningful legal remedies.


That's true of virtually all criminal laws, though. If you're beat up and your assailant is charged with assault, you've already been beat up.


Yes, which is why you need both laws, and prevention methods.


I think we are both arguing that clear consent must be present and that the customer must have clearly agreed to whatever you are doing with the data; that appears similar to the GDPR.

However, how do you prove John Doe actually agreed to this? What if John says he did not click the accept button? Do we require digital signatures with certificates, given that most people don't have them or know how to use them?

I think the problem is more tractable for physical products running firmware: there you have real proof of purchase and, at present, firmware that does whatever it wants.


It's analogous to the credit card fraud problem, no? E.g. disputing charges and chargebacks?

I don't work in that space, but my understanding is that the card processors essentially serve as dispute mediators in those instances.

So it would seem unavoidable (although not great) to have some sort of trusted, third-party middle person between collectors and end users, who can handle disputes and vouch for consent.

Blockchain doesn't seem like a solution, given that the problem is precisely in the digital-physical gap. E.g. I have proof of consent (digital) but no way to tie it to a (disputed) act of consent (physical).


This is civil law. You find out by asking employees in court. They aren’t going to risk perjury, a criminal offense, to spare their employer.


Or perhaps change the system that incentivizes companies to violate privacy over and over again.


> If your company has PII, then by law you must be able to produce a chain of consent attestations all the way back to the source.

So... basically the GDPR? Well, not quite, since the GDPR doesn't require consent attestation, "merely" a legal basis, of which consent is just one (the most useless one to use as a company).


Yes, because there are never unwanted side effects from more laws.


That's human nature. As soon as something beneficial to few and detrimental to others is banned, those who benefit seek to find other ways to continue benefitting, again to the detriment of others.

This doesn't mean we shouldn't continue trying to stop them.

And we stop them through laws.

Common sense is not that common and human decency doesn't scale.


In the US, the typical citizen commits an average of a felony a day. The legal code and associated regulations are so lengthy no one can read all of them. The tax code alone is 2,600 pages and associated rulings 70,000 pages.

When you have so many laws, they can be applied selectively depending on your political status, or to benefit the regulators or their friends. We just caught the sheriff of Santa Clara extorting citizens for tens of thousands of dollars to get concealed carry permits, which is why many people carry illegally, just like criminals.

Creating a law where non-disclosure of the smallest feature opens you up to harassment by government regulators is another avenue for graft and corruption. Like when the EU selectively prosecutes US companies, or US regulators selectively harass companies not in lock step with the current administration.

I think if you really need a nanny state to protect you from your own decisions you should be required to give up all your decisions to the state.


For your argument to make sense, you have to demonstrate that this amounts to a nanny state.

I don't think it does; I think it is fraud. For it to be a consequence of my choices, what choice do I make to select a mobile phone carrier that does not sell my location data? Such a choice does not exist.

You can't 'non-disclose' some 'small feature' of a mortgage contract or a loan; personal data deserves similar respect.

Lastly, we can and do have different laws for individuals and multi-billion-dollar corporations; you can't use this as an argument when we are discussing securities fraud and banking regulations.


Laws can always be applied selectively. They have always been applied selectively.

The point of laws is statistics: you can't discourage everything, you need to discourage enough to have order.

And there is a middle ground between no laws and giving everything up.

Also, I give up my liberties and obey laws in exchange for protection from many nasty things people do when there are no laws.

Based on your examples and vocabulary ("nanny state" is a clear giveaway), you're American. Go live for 5-10 years in a country with lax or non-existent laws and law enforcement. We call those countries bad names for a solid reason.


> In the US, the typical citizen commits an average of a felony

Sorry, but that’s just obvious BS. If it were true, you’d include examples, and far more people a certain world leader doesn’t like would be “locked up”.


> In the US, the typical citizen commits an average of a felony a day

This is surprising to me. Could you provide examples of such common felonies US citizens commit in ignorance?


I think a felony is the more serious one? A misdemeanor being stuff like a parking violation? I'd definitely want to see some examples of felonies, too :-)


Sure, but that's not an argument to give up.


Perhaps this is why Apple did not ask the user for permission to add some delay every time the user launches an application. If the user were presented with the choice, what would she choose?

If we look at the example of OCSP in the website-certificate context, the notion that the delay added by OCSP (not to mention the privacy concerns) is objectionable has already been acknowledged. As a result, we have OCSP stapling. For some reason, in the Apple developer-certificate context, OCSP is deemed acceptable by default.
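
For a sense of the trade-off, here's a minimal sketch of the pattern being criticized (the responder URL, timings, and response format are illustrative, not Apple's actual implementation): a blocking revocation check with a soft-fail timeout and a short-lived cache, whose latency sits on the application-launch path.

    import time
    import urllib.request

    OCSP_URL = "http://ocsp.example.com/status"   # illustrative responder
    _cache = {}                                   # serial -> (verdict, expiry)

    def check_revocation(cert_serial: str, timeout: float = 5.0) -> str:
        """Blocking OCSP-style check with caching and soft-fail. On an
        uncached launch the user waits for this round trip."""
        hit = _cache.get(cert_serial)
        if hit and hit[1] > time.time():
            return hit[0]
        try:
            resp = urllib.request.urlopen(
                f"{OCSP_URL}?serial={cert_serial}", timeout=timeout)
            verdict = "revoked" if b"revoked" in resp.read() else "good"
        except OSError:
            # Soft-fail: an unreachable responder blocks nobody, but a slow
            # responder stalls every uncached launch for up to the timeout.
            verdict = "good"
        _cache[cert_serial] = (verdict, time.time() + 300)  # e.g. 5-minute cache
        return verdict

The incident under discussion was the gap in exactly this pattern: the responder was reachable but slow, so the soft-fail path never fired and launches waited on the network. Stapling sidesteps the live query entirely by delivering a signed, short-lived responder verdict alongside the certificate itself.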


> A common refrain in arguments that we don't need to reject closed source software to protect privacy is that being closed source doesn't hide the behaviour, and people will still notice backdoors and privacy leaks. Sometimes they do, sometimes they don't.

And there are cases where it's not practical for "people to notice." For instance: a privacy leak that only uses the cell network connection of a phone, which would avoid easily-sniffed connections.


Even with traffic monitoring, how do you distinguish bad TLS traffic from a machine you don't control going to a Cloudflare IP from good TLS traffic?


> The market can't act against what it can't see. Privacy loss is often irreversible.

You're not wrong, but on the other hand has "the market" shown any serious signal that it cares about privacy? From what I can see people seem more than glad to trade privacy and personal information for free services and cheaper hardware. Take Samsung putting ads on their "smart" TVs' UI and screenshotting what people are watching for profiling; that's been known for a while now. The market seems fine with it.

And I mean, at this point I could just gesture broadly at all of Facebook.


I hear this argument a lot, but I think it is exactly OP's point when he says, “The market can’t act against what it can’t see”.

Your average consumer doesn’t know the extent of what they’re trading. Take Facebook: even with high-profile stories and documentaries, it’s reasonable for your average consumer to assume that what Facebook tracks about them is what they actively give to Facebook themselves.

I’ve had conversations with people who say, “I rarely even post on Facebook” and “If they want to monitor pictures of my food/dog/etc., whatever, who cares”, without any solid understanding of what merely having the app installed is giving Facebook.


It's OK; once they're educated they will care. Just like they care now about not using single-use plastics, not buying the biggest and most gas-guzzling SUV, or not flying on holidays across the globe.

They won't care even when they know. And they might not ever know.


Yes exactly. I don't think an abstract understanding of the costs is enough. If the cost isn't physically or viscerally felt, it just doesn't factor into people's decision making.

This is where the pricing system really comes into its own. People buy and drive fewer SUVs when gas is more expensive. If we want people to buy fewer SUVs, increase the fuel tax.

Education is not enough, and in fact might not even be necessary at all. Just introduce real costs to capture the "abstract" costs (externalities) and the problem will likely correct itself.


> has "the market" shown any serious signal that it cares about privacy?

Depends on what you consider a serious signal of care. If 'voting with your wallet' is the measure, increasing levels of income inequality, stagnant wages, weakening employee rights through the gig economy, etc. are effectively taking away that choice, as most market participants cannot afford to make it.

Also, what is the paid alternative to Apple or Google Photos that allows me to have the same end-user experience without giving up my privacy? "The market" doesn't even have such an offering that I can see. The closest I can find (and that's through here) is photostructure.com, and even that lacks all the local-ML foo that makes Apple/Google Photos a compelling option over anything else.

> Take Samsung putting ads on their "smart" TV's UI and screenshotting what people are watching for profiling, that's been known for a while now.

Known by who? I'd wager a year's salary that >50% of Samsung TV owners (and bump that number up to >75% of Samsung TV users) do not know this is happening.


Income inequality, stagnant wages, etc. are not the consequences of Apple and Google.


No one suggested they are. But they still make it much less likely for the average Joe to put money and effort to protect their privacy, and this is known to those that make major pricing decisions.


We’re talking about Apple here. People who can afford a Mac over a cheap PC/Chromebook already have disposable income and are making trade-offs with it.


That is not really true. Many people on lower incomes will buy a Mac after their cheap laptops break because they perceive Macs, correctly or not, to be the most reliable, and thus the least expensive in the long term. Sometimes even second-hand.

Also, Chromebooks are not viable alternatives for many people, especially those on lower incomes. Having to rely on being always online is an issue when you can't always guarantee having internet. Been there, done that.


> That is not really true. Many people on lower incomes will buy a Mac after their cheap laptops break because they perceive Macs, correctly or not, to be the most reliable, and thus the least expensive in the long term.

This is not true. The tiny user base of OSX compared to Windows is already evidence that most people are not buying MacBooks.


> From what I can see people seem more than glad to trade privacy and personal information for free services and cheaper hardware

I'd argue that's more of what parent was talking about with regards to visibility.

"Privacy" isn't something anyone can see. What you see are the effects of a lack of privacy.

Given the dark market around personal data (albeit less in the EU), how are consumers to attribute effects to specific privacy breaches?

If Apple sells my app history privately to a credit score bureau, and I'm denied a credit card despite having a stellar FICO, how am I supposed to connect those dots?


> You're not wrong, but on the other hand has "the market" shown any serious signal that it cares about privacy?

Is there a pro-privacy Google out there whose products languished while Google's succeeded?

The Silicon Valley VC network did not fund or support companies that promoted privacy. I cannot think of a single example.

Not a single major venture from the SV VC network even attempted to innovate on the "business model". We can make machines reason now, but, alas, a business model that does not depend on eradicating privacy is beyond the reach of the geniuses involved. Is it "impossible"? I think "undesirable" is more likely. No one is even seriously trying. Point: the money behind the SV tech giants is not motivated at all to fund the anti-panopticon.

The salient, sobering facts are that all these companies sit on an SV foundation that was and remains solidly "national security", "military", and "intelligence". The euphemism used is to mention SV's "old boy network".

https://steveblank.com/secret-history/


>that's been known for a while now

Ask a representative sample and I wager only a very small percentage of people are actually aware of (1) the breaches of privacy that are happening (e.g. your TV sending mic dumps and screenshots of what you're watching), and (2) the hard consequences of those invasions (that is, beyond the immediate fact that you're being snooped on), like higher insurance premiums on auto and health, being targeted for your opinions, etc.


To be honest, I would expect that if you told people "your TV will report what you're watching", they would think, "Wait, doesn't my cable company already know what I'm watching???"

If you tell them it's the manufacturer this time in addition to the cable company, I'm not sure how many would freak out over the extra entity.


Very few.

I'm not sure how you'd measure it, but there seems to be a huge disconnect between techie privacy advocates and the rest of the world. The former keeps claiming the latter just doesn't understand, or needs to be informed, but I just don't think that's realistic.

I think it's pretty common knowledge that these companies are harvesting all imaginable data to serve users more / better advertisements and to keep them on the site, yet usage continues to grow despite all the scandals. I think advocates need to make more convincing claims to everyone else that they're being harmed.


In my anecdotal experience, that's not remotely true. Yes, there is a disconnect between techies and regular people, in that for us it's obvious that these companies are harvesting all this data, processing it in all these ways, and sharing it with all these people, but for the majority of people it's not. Even for you, do you think you fully understand how your data is being utilised and what the consequences are?


I think we've been repeatedly shown that people don't care. From Cambridge Analytica, to what Snowden shared, to voice-based assistants (like Amazon Alexa), every instance is met with feigned surprise and then a collective shoulder shrug.

> Even for you, do you think you fully understand how your data is being utilised and what the consequences are?

I don't think anyone can say with certainty, but I read these threads so I'm quite aware they're collecting and monetizing every imaginable thing they can. It's difficult to articulate any measurable / real negative consequences to me, personally.


> You're not wrong, but on the other hand has "the market" shown any serious signal that it cares about privacy? From what I can see people seem more than glad to trade privacy and personal information for free services and cheaper hardware. Take Samsung putting ads on their "smart" TVs' UI and screenshotting what people are watching for profiling; that's been known for a while now. The market seems fine with it.

I'd guess that most of that's due to information asymmetry. Privacy losses aren't advertised (and are often hidden) and are more difficult to understand, but price is advertised and understood by everyone.

Take that Samsung example: it's not like they had a bullet point on the feature list saying "Our Smart TVs will let us spy on what you're watching, so we can monetize that information."


One argument to make is that the market only cares about the majority of its customers, not all of them. If 0.5% of a company's customers have their privacy busted by a backdoor, or 0.01% of Google users have their accounts arbitrarily deleted, that percentage of users is screwed like no one else, and the company doesn't suffer any damage.


Yes, unfortunately there is no safe harbor.

Companies can make mistakes or add backdoors and we won't know. See this clusterfuck.

Open source, likewise, can make mistakes and (much more rarely) add backdoors, and we could know, but few have the resources to do so. See Heartbleed.


Probably no one cares because Apple’s OCSP checks don’t reduce your privacy.


They should care. The checks are sent unencrypted over HTTP to Apple's OCSP responder.


Since they don’t identify the specific apps you use, what’s your point?


As far as I understand it, most vendors ship a single-digit number of apps. If you start the Tor Browser, everyone on your network will know. If you start Firefox, everyone on your network will know you started a Mozilla product, most likely Firefox. If you start the Zoom client, everyone on your network knows you started the Zoom client.

I don't think the "it's only the vendor" defense of Apple is any good.
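
The inference an on-path observer has to make is about as hard as a dictionary lookup. A tiny sketch, with hypothetical certificate serials standing in for whatever identifier appears in the plaintext request:

    # Hypothetical serial-to-vendor table an observer could build once.
    VENDOR_BY_SERIAL = {
        "serial-aaaa": "Tor Project",  # ships essentially one app
        "serial-bbbb": "Mozilla",      # a few apps, most likely Firefox
        "serial-cccc": "Zoom",         # one app
    }

    def infer_launch(observed_serial: str) -> str:
        vendor = VENDOR_BY_SERIAL.get(observed_serial, "unknown vendor")
        return f"{vendor} app launched"

    print(infer_launch("serial-aaaa"))   # "Tor Project app launched"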


On macOS, developer certificate checks are NOT done for every application launch. Responses are cached for a period of time before a new check is made.

FYI: both Firefox and Safari use OCSP to check server certificates. Anybody sniffing your network could figure out which websites you visit. Chrome uses CRLSets instead; it trades precision for performance.


That period of time was 5 minutes.


HTTP is specified in the RFC. Only the developer certificate is checked. OCSP is also used by web browsers to check the revocation status of certificates used for HTTPS connections. Apple leveraged OCSP for its Gatekeeper functionality. This is not the same thing as notarization, which is checked over HTTPS.

https://blog.jacopo.io/en/post/apple-ocsp/

Perhaps you should learn about OCSP before complaining about its use of HTTP.


Vendors MAY use TLS, and Apple didn't (though they say they'll start).

You might want to read the RFC, rather than a blog post about it, before making such confident pronouncements.


> The market can't act against what it can't see. Privacy loss is often irreversible.

I would add that the market can't act to prevent risks that are outside the market and not taken into account by the market.

The big risks from widespread privacy loss are the exploitation of private data by criminals, foreign unconventional warfare by terrorists or hostile states, and the rise of a totalitarian government here in the USA.

Criminal action can to some extent be priced into a market, but the other two really can't be.


> Parts of the US Governments unlawful massive domestic surveillance apparatus were described in the open in IETF drafts and patent documents for years.

References?


Great points.

Related tangent: for those interested in this topic of "unlawful massive domestic surveillance", I heartily recommend Cory Doctorow's "Little Brother" novels -- esp the most recent, "Attack Surface". They get the technical details right while remaining accessible and engaging regardless of the reader's geek acumen.


> arguments that we don't need laws to protect privacy is that the market will take care of it.

Since discussion of Apple's behavior in particular has, somehow, been completely de-railed anyway (a frequent happening on HN) ...

Can't help but observe that the market takes the best care of those who control it. Freely.


> A common refrain in arguments that we don't need laws to protect privacy is that the market will take care of it

Seriously, who has ever been successful at defending that idea?


Many lobbyists and lawmakers, unfortunately.


It doesn't count as winning the argument if you paid them to agree with you.


It does if you convince people to vote for you though


which IETF drafts and patents? Can you point me to some links?

tia


Sure, lemme give you an example:

https://tools.ietf.org/html/draft-cavuto-dtcp-00

This protocol was created so that monitoring infrastructure could reprogram ASIC-based packet filters on collection routers (optical taps feed routers with half-duplex-mode interfaces), which grab sampled netflow plus specific targets selected by downstream analysis in real time. It has to be extremely fast so that it can race TCP handshakes.

I don't think it's much of an exaggeration to say that the technical components of almost all the mass surveillance infrastructure are described in open sources. Yes, they don't put "THIS IS FOR SPYING ON ALL THE PEOPLE" on it, but they also don't even bother reliably scrubbing sigint terms like "tasking". Sometimes the functionality is described under the color of "lawful intercept", though not always.

One of the arguments that people made against the existence of widescale internet surveillance -- back before it was proved to exist -- was that it would require so much technology that it would be impossible to keep secret: the conspiracy would have to be too big. But it wasn't kept secret, not really -- we just weren't paying attention to the evidence around us.

For a related patent example: https://patents.google.com/patent/US8031715B1 which has fairly explicit language on the applications:

> The techniques are described herein by way of example to dynamic flow capture (DFC) service cards that can monitor and distribute targeted network communications to content destinations under high traffic rates, even core traffic rates of the Internet, including OC-3, OC-12, OC-48, OC-192, and higher rates. Moreover, the techniques described herein allow control sources (such as Internet service providers, customers, or law enforcement agencies) to tap new or current packet flows within an extremely small period of time after specifying flow capture information, e.g., within 50 milliseconds, even under high-volume networks.

> Further, the techniques can readily be applied in large networks that may have one or more million of concurrent packet flows, and where control sources may define hundreds of thousands of filter criteria entries in order to target specific communications.


So, we should take away the market's incentive to infringe upon our privacy: by making user-tracking illegal.


The laws are already there. If you care about this and have some free time, you may try making a complaint to the Irish data protection commission.

I'm not sure if it's infringing though. If Apple says that they do not collect personal data and that the information is thrown away and this is done for a legitimate business purpose or for the customers (i.e. protecting customers), it may well be fine according to the GDPR.


Ya, no thanks. Top down regulation will just make startups less likely to enter new disruptive tech. The solution is choice: stop using Apple products and all their shadiness stops being an issue.


> The solution is choice

Correct. Among Apple, MS, and Google you have no choice. That is why regulation is necessary.


> Ya, no thanks. Top down regulation will just make startups less likely to enter new disruptive tech. The solution is choice: stop using Apple products and all their shadiness stops being an issue.

And since people have demonstrated that they're not appropriately incentivized to stop, that's where regulation steps in, which brings us back to where we started: there's a need for it.

The solution could just be to regulate based on a tightly managed definition of company age. Enable younger companies to have a bit more flexibility in determining their business model, with controls slowly snapping in as the company ages. There are some pretty clear loopholes that immediately come to mind (e.g. re-chartering the company every few years and transferring assets) that'll need to be managed somehow, but it should be enough to give companies runway to figure out how to disrupt and monetize while coming into compliance with consumer protections.


How do you provide choice? How do you commoditize an entire hardware and software ecosystem, vertically integrated?

Just like Standard Oil or AT&T were displaced, right? By customers going to their competitors? I'm being sarcastic, obviously :-)


Nothing shady about this. It isn’t logged and it protects customers from malware. That was the purpose. Most customers want that.


> Top down regulation will just make startups less likely to enter new disruptive tech

I'm ok with this


> Those who consider that Apple’s current online certificate checks are unnecessary, invasive or controlling should familiarise themselves with how they have come about, and their importance to macOS security. They should also explain how, having enjoyed their benefits for a couple of years, they’ve suddenly decided they were such a bad idea after all, and what should replace them.

I agree that anyone critiquing Apple's OCSP design should understand it, and the critique should be more nuanced than "just turn that feature off." Computers are now skeleton keys to our lives and we have to go forward rather than back in figuring out how to design them so they can safely do everything we need them to do.

But it's not hard to justify the sudden criticism here -- it happened after Apple's bad design of the OCSP feature broke local applications, drawing a lot more attention to how it worked. It's reasonable to then ask whether other parts of the design were also poor -- as Apple itself evidently is asking, judging from the changes it's already announced.

To take the author up on what should replace OCSP checks -- how about using something like bloom filters for offline checks, and something like haveibeenpwned's k-anonymity for online checks, to remove the possibility that either Apple or a third party could use OCSP for surveillance?
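
For instance, the k-anonymity half could be as simple as this sketch, modeled on haveibeenpwned's range API -- the endpoint and response format here are invented for illustration:

    import hashlib
    from urllib.request import urlopen

    def cert_revoked(cert_der: bytes) -> bool:
        digest = hashlib.sha256(cert_der).hexdigest()
        prefix, suffix = digest[:5], digest[5:]
        # Hypothetical endpoint: returns the hash suffixes of every revoked
        # certificate whose hash starts with this 5-char prefix, so the
        # server only ever learns a bucket shared by many certificates.
        with urlopen(f"https://ocsp.example.com/range/{prefix}") as r:
            revoked = set(r.read().decode().splitlines())
        return suffix in revoked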


The browser vendors have been looking at this problem for a long time. See https://blog.mozilla.org/security/2020/01/09/crlite-part-1-a... for example (bloom filters included).
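
The bloom filter half of the idea fits in a screenful of Python. A toy sketch -- CRLite actually layers several filters into a cascade to cancel out the false positives this version still has:

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=8_000_000, num_hashes=7):
            self.size, self.k = size_bits, num_hashes
            self.bits = bytearray(size_bits // 8 + 1)

        def _indexes(self, item: bytes):
            for i in range(self.k):
                h = hashlib.sha256(bytes([i]) + item).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item: bytes):
            for i in self._indexes(item):
                self.bits[i // 8] |= 1 << (i % 8)

        def __contains__(self, item: bytes):
            # Never a false negative; the false-positive rate is tunable
            # via size_bits and num_hashes.
            return all(self.bits[i // 8] & (1 << (i % 8))
                       for i in self._indexes(item))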


Why can't Apple download the fingerprints of bad apps locally instead of phoning home on every single invocation of an app? Is the second execution of an app the same security risk as the first? That's the design flaw.


You mean bad certificates rather than applications.

OCSP can be locally cached, and Apple's implementation does exactly that. But eventually you'll have to refresh the cache and then the implementation needs to be fault tolerant (Apple's wasn't).

OCSP leaks what vendors your installed applications are from. The list of leaked certificates changes daily, so any good implementation is going to check again at least several times a week. If you download the entire database, you're just consuming hundreds of megabytes of bandwidth/storage but aren't removing the need to refresh/expire the cache.

I'd argue the two biggest flaws Apple's system has are bad fault tolerance and no user-accessible opt-out (even if just for emergencies).


I wonder how hard it'd be to serve this data via DNS, like "dig -t TXT 0xdeadbeef.ocsp.apple.com". Then you get a nice, distributed architecture with lots of built-in cache handling, and since the data is currently served via HTTP, it wouldn't expose any more data to your ISP than already is today. It would also mean that if you have 100 people in the office and a local DNS cache, then each OCSP query would be made exactly once and then its answer shared among everyone else in the office.
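
The client side could look something like this with dnspython -- a sketch only; the record scheme is entirely made up, and the hostname is just borrowed from the dig example above:

    import dns.resolver  # pip install dnspython

    def revoked_via_dns(cert_hash_hex: str) -> bool:
        # Hypothetical scheme: a TXT record exists only for revoked certs,
        # so NXDOMAIN means "not revoked", and both answers and NXDOMAINs
        # get cached by every resolver along the way.
        try:
            dns.resolver.resolve(f"{cert_hash_hex}.ocsp.apple.com", "TXT")
            return True
        except dns.resolver.NXDOMAIN:
            return False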


That still tells people what you’re running, though.


Imagine you have a shared office DNS resolver (which is pretty common). That resolver would aggregate all of the requests into one shared, cached stream. Then the question becomes "hey Apple, one person of how ever many thousand are behind me would like to know if Adobe's certificate is still valid". That's reasonably anonymized, I think.


Then the question is, "how much do I trust my ISP/DNS provider?"

Those DNS lookups tell your ISP 1) that you use a mac and 2) that you have an application from a specific developer installed.

I think I trust my ISP less than I trust Apple, here. Am I wrong to do so?


Well, that takes us back to the current state, where your ISP can see your plaintext HTTP packets if they want to, so it wouldn't be any worse than the situation today. I guess you could get much the same effect by configuring your company Macs to point at a shared Squid server to cache the GET requests to the OCSP server, but in practice almost no one does that.


Apple says they're going to move to an HTTPS based system, so the relevant comparison is between HTTPS and DNS, not HTTP and DNS.


That only helps if you have a shared resolver.


Pretty much everyone has a shared resolver at some point. Almost no one’s running a local resolver on their laptop these days.


I doubt the full list of hashes of all revoked certs is 100s of MB, and even if it is, the daily update file surely isn't that big.


Right, and given that those certificates expire after some finite amount of time, this wouldn't be a forever-growing CRL, as expired certs could be dropped from the file periodically.


Differential downloads are a solved problem: https://docs.microsoft.com/en-us/windows/deployment/update/p...


> OCSP can be locally cached, and Apple's implementation does exactly that.

In the earlier HN thread when the server was offline, it was said that Apple only cached OCSP results for 5 minutes. Is that not true? If it is true, I don't think that's what GP is asking for as far as local caching.


It was changed after the event and is now 12 hours. Which is probably more appropriate considering that most people don't restart their applications every few minutes.


Just so that I understand this correctly: by caching do you mean the results of a specific check, or, as the above was implying, downloading the list of all bad signatures and doing the check 100% locally?

The issue, from my understanding, was half the breakage and half the fact that Apple was sending back telemetry about what apps you launched.


In answer to the first: because the information is unreliable without more invasive technologies ensuring that the local file is up to date. To the second: perhaps not, but if the information on bad actors (app distributors in this instance) is out of date, you'll continue running a compromised app.

Are you familiar with OCSP conceptually? I have done a reasonable amount of work with signatures and certificates, including OCSP. All my experience is in a commercial, enterprise context but I think these technologies need to start filtering down to the consumer before the capability for security evaporates.

I think it's a consumer-positive direction for Apple to provide this service. I would be interested to hear from someone who holds the view that this is not a service, or disagrees in other ways, but I think this is the right direction for consumers. The alternative, as I see it, is that every person installing an app needs to start searching for CVE notices and headlines in trade papers declaring a compromise.

Apple have applied enterprise middleware to their infrastructure. I think perhaps they could have been more transparent in the delivery. A lot of the outrage now is driven by people finding out about the underlying process for the first time. I stand by the right of these companies to choose a business model that disallows (or restricts) execution of apps they believe to be compromised. I also firmly believe in a varied and free market for software, hardware, and infrastructure.

In essence: You can choose to use Apple and do it the Apple way. Equally you can choose to build your computer from components sourced from anywhere, install any free OS, and any apps. Personally I do choose to do it the Apple way, and I am inconvenienced by that from time to time. I curse my computer and its creators on a daily basis. It's part of the relationship we all build with our tools.

got a bit off track towards the end...


The blacklist of malware is called XProtect and dates back to 2009. This check for revoked certificates is a different security layer.

The second check of an app is necessary to check for revocation: for a developer that decides that they've been compromised and wants to stop execution of their software. The alternative would be to use certificate revocation lists instead of OCSP. CRLs can get long, so OCSP is often preferred.


Probably it's not necessary to check for revocation that often though. It could even be argued that it's only needed when the binary is updated.

And it's absolutely not normal to fail if that revocation check doesn't succeed anyway.


I've been wondering why a CRL couldn't be used as a fallback if the OCSP responder goes down or no reply is received. That way you get the benefits of both. Any reason why this would be a bad idea?

Also are CRLs really that bad in practice? I know it would be a bad idea on a smartphone but is it really an issue on a laptop?


The other main issue is bandwidth. For a CRL, Apple has to serve the full list of revoked serial numbers (or some shard of it), even if 90% of users don’t have 90% of the revoked apps installed. I can’t remember where I read this, but I recall that after the Heartbleed disclosure one CA saw their CRL traffic grow by some number in the gigabits per second. Bandwidth is relatively cheap, but still not free (without even considering the argument of “do I need to know that the cert for an app that I will never use is revoked?”).


They have the bandwidth for some pretty big firmware/OS updates, though. This would be a fraction of that size. They download blacklists already for their inbuilt AV. Also if they use something like Git then only the changes will be downloaded rather than the entire CRL.

I can’t help but feel that the issue is something else, not bandwidth.


How many revocations do they do? How about just downloading the whole list to the clients which they can check offline.


It sounds like that's how it used to work, and then it changed for some reason, maybe to do with the size of the list and a need for faster updates.

Actually that unknown exposes the problem with the original article's demand that critics explain what they want to replace Apple's system. No one outside of Apple is in a position to design a system that addresses all of the design constraints -- we don't even know what they all are. But we are in a position to assert some additional design constraints, such as requiring that the system not leak developer certs to eavesdroppers every time an application is run, and expect Apple to figure out a solution that takes them into account.


> It sounds like that's how it used to work, and then it changed for some reason, maybe to do with the size of the list and a need for faster updates.

If that's true, that's a lazy excuse on Apple's part. Differential updates have been a solved problem for many, many years. Hourly or even daily diffs would be tiny (likely much less traffic than the OCSP checks that occur now), and expired certs could be dropped from the local store, so it wouldn't grow without bound. (Sure, OK, people could turn their clocks back and defeat that last bit, but doing that would break other things too, like TLS to any website with a reasonably recent cert.)
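
A sketch of what a dead-simple delta format could look like (hypothetical; a real system would compress much harder):

    def make_delta(old_revoked: set, new_revoked: set) -> dict:
        return {
            "add": sorted(new_revoked - old_revoked),
            # Entries for certs that have since expired can be dropped,
            # keeping the local store from growing without bound.
            "drop": sorted(old_revoked - new_revoked),
        }

    def apply_delta(local: set, delta: dict) -> set:
        return (local - set(delta["drop"])) | set(delta["add"])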


It kind of feels like there's a bit too much noise around this topic.

I'm getting the same feeling I did years ago when it was discovered that the iPhone had a historical database of all the locations you'd been to. There were rather a lot of articles about how Apple were "tracking you everywhere you went" and so on.

The reason it's similar – they are both dumb, technically bad, and privacy-compromising decisions, and in both cases much of the public discussion about it has been a little hysterical and off-base.

Apple should 100% be criticised for this particular failure. It's obviously a bad implementation from a technical and usability point of view; the privacy implications are bad, and this feature should not have made it out the door as-is.

But I've legitimately seen people describe this as "Apple's telemetry" which is just obvious nonsense and distracts from the actual problem – how did such a bad implementation of a useful feature end up in a major commercial product, and how are they going to make sure it doesn't happen again?


It’s actually not at all obvious how a local list of locations, used to power suggestions in Maps or Siri, is in any way a compromise of privacy or technically bad.

The only thing that made it sound bad were people saying things like “Apple stores your location history”, knowing that it would create the false impression that Apple was uploading location data to their servers.

This situation is similar in that there are people posting misleading innuendo about Apple having some hidden agenda, but the difference is that there do seem to be real design problems with the mechanism this time.


Every iOS device connects to Apple's push service and stays connected. The client certificate it uses is tied to the serial number of the device itself, when it registers for the push service.

Apple sees the client IP of the push connection, naturally.

Therefore, based on IP geolocation, Apple really does have coarse location history for every single iOS device by serial number.

Apple is indeed storing your (coarse) location history.


> Therefore, based on IP geolocation, Apple really does have coarse location history for every single iOS device by serial number.

This conflates technical possibility with an implemented system which stores that data, plus the insinuation that it is used for purposes other than what the user enabled. Do you have any evidence that Apple stores this data and uses it in violation of their privacy policy? You're apparently in Europe, so you should be able to file a GDPR request to see exactly what they're storing.


Everything I described is implemented today, and is required for APNS to work. Whether Apple does or does not mine the data in some way that you personally find offensive is not relevant; the fact is that they are presently logging IPs for APNS connections, which have unique identifiers that are related directly to hardware serial numbers in their database. Because IPs generally equal location, they are in possession of location history for each iOS device serial number.

I'm not sure why these (plainly factual) statements are controversial.


There’s no question that they need connections to operate the service, but they do not need to retain that information, and while you have repeatedly asserted that they do, you have been unable to support that claim. This would be covered by privacy laws in many places, so it should be easy to point to their privacy disclosures or to the result of an inquiry showing that they do in fact retain connection logs for more than a short period of time.


It stands to reason that a user of the service who has agreed to the TOS governing APNS and the App Store sending unique device hardware serial numbers to Apple has also (legally) consented to IP address collection. IP addresses are less unique as identifiers than globally unique device serials, so I assume Apple has already secured what passes for "consent" under the relevant privacy laws.


Again, nobody is questioning their access to that information. Where you typically go wrong is by asserting, without evidence, that they are performing additional activities without disclosing that. Surely you understand that having IPs be visible does not automatically mean retaining those records, much less building a searchable database?


I’m with you on this. If Apple genuinely views privacy as a human right (and FWIW I believe that the right people at Apple do), then they need to learn about privacy by design.


I think people who have this viewpoint are largely ignorant of the amount of tracking data that comes out of a mac or iphone, even in first party apps, that you cannot turn off at all.

This is not an isolated incident, and their own OS services are explicitly whitelisted to bypass firewalls and VPNs in Big Sur.

There’s telemetry in most Apple apps, now, and you can’t disable it or opt out, or even block or VPN it in some cases. I encourage you to read their disclosures when you first launch Maps or TV or App Store. Every keystroke in the systemwide search box hits the network by default, too.


This is a good example of what I mean - it’s a rant about telemetry that has nothing to do with the issue in question and as a result conflates a whole mess of different issues.

I am pretty well-informed about most of the data that is being generated and transmitted by my machine. This is an issue where it is totally reasonable to pressure Apple and all other companies to design for privacy as a priority and ensure that any of this data collection can be disabled. I don't think an effective means to do that is to deliberately conflate and mislead about issues like the one under discussion.


System services bypassing VPN and firewall because they're on a vendor whitelist (that can't be modified due to OS cryptographic protections) is a) fact not rant and b) nothing to do with telemetry.


This is a basic by-the-RFC implementation. The developer who was assigned this just used existing libraries and followed the protocol. This was a rational move on their part. Especially when mucking with x509 has been historically fraught with vulnerabilities.

OCSP has since been improved to increase privacy and security (e.g. OCSP stapling), but the extensions that enable that only considered OCSP in the context of TLS.


Just to correct a slightly incorrect perception: there is nothing inherently insecure or vulnerable about X.500/ASN.1/BER/DER parsing; in fact it is probably a saner format to parse than JSON. The perception that it is fraught with parser vulnerabilities comes from various implementations that tried to build a BER/DER parser by transforming something more or less equivalent to the ASN.1 grammar into actual parser code by means of C preprocessor macros, which is a somewhat obviously wrong approach to the problem, at least in a security context.
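
For what it's worth, consuming DER with an off-the-shelf decoder is unremarkable -- e.g. with pyasn1 (illustrative only):

    # pip install pyasn1
    from pyasn1.codec.der import decoder

    with open("cert.der", "rb") as f:
        parsed, remainder = decoder.decode(f.read())
    # Every element is tagged and length-prefixed, so the decoder never
    # has to scan ahead to find where a value ends (compare JSON string
    # escaping).
    print(parsed.prettyPrint())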


Another fun fact about this system: something changed in how the binaries are evaluated, and one of the VST plugins I downloaded months ago was marked as malware. The plugin is quite popular in the community, so I think it's unlikely it contains actual malicious code (in fact I contacted the developer and he said he has done some fixes for Apple's security policies recently). Imagine my shock when I opened an old project in Ableton and suddenly some sounds just didn't work. This really sucks; I don't want to worry about whether my music will work five or ten years from now (I can imagine wanting to remix some old piece). I suppose I can err on the side of safety and export all tracks to wav.

However, it's not an isolated problem. It feels like every other week something happens that undermines my confidence in the MacBook as a good device for making music.


It's been good practice for a long time to "freeze" or "render" the tracks out after the song is finished so that the song can be loaded without the plugins.


True, but this shouldn't be necessary in response to anti-consumer behavior.


This is not anti-consumer behavior. Consumers are, overall, protected when they can verify the source of an application or extension on their computers. Their freedom may be limited but it's not a black-and-white "this is anti-consumer".


Some signatures are invalidated due to business disputes on entirely different platforms (in the Epic dispute on iOS, signatures were invalidated, or threatened, on OS X before a court order prevented it, for no security reason).


Epic violated the Terms of Use for their developer agreement, which applies to all platforms. They knew that and they violated it willingly. The court order only prevented it temporarily, to limit the damages incurred until a determination was made in the initial case.

That is not anti-consumer.


Well, even if Epic are the bad guys for violating the ToU willingly, it still impacts the user. As a user I don't want apps I depend on to stop working because of a business disagreement.

Revoking signatures and disabling the apps on user devices to protect your business model is definitely anti-consumer in my book.

You could easily see Apple revoking signatures because of DMCA claims. Even faulty ones, like the claim RIAA made against youtube-dl on GitHub.


Of course it impacts the user... And if Epic was found doing something illegal and was shut down or bankrupted, that would also impact the user. Your over-simplification that it's a "business disagreement" is disingenuous and incomplete. The signature revocation system you're claiming is simply to "protect their business model" is the same system that allows Apple to immediately shut down any malware that makes its way into the App Store inadvertently. It's the same system that's been used in the past to protect users from private key leaks.

The only anti-consumer behavior in your situation came from Epic who knowingly violated the rules as a PR stunt.


> The only anti-consumer behavior in your situation came from Epic who knowingly violated the rules as a PR stunt.

Exactly. Never forget, it was Epic who threw their users under the bus, not Apple.

Epic expected you to be a soldier in their fight. They expected you to make a sacrifice you were not willing to make.

That's entirely on Epic.


I was not claiming that protecting Apple's business model is the revocation system's only purpose, but it's one of its purposes.

Even though I agree that in the Epic case most of the blame lies with Epic, I still have a problem with Apple: the signature revocation system is used for more things than removing malware. I think it is user-hostile and anti-consumer to disable installed apps on other grounds, because users might depend on them.

I'd like to be able to run programs and apps on my machine that are not Apple-approved.


> I'd like to be able to run programs and apps on my machine that are not Apple-approved.

I thought you could anyway. You would right-click the app in Finder and choose Open — from then on, it would continue to open.

Or is that a different mechanism?


Apple promised it would only be used for security related stuff on desktop. I wouldn't want my desktop audio project to break because the VSTs were unsigned due to an iOS app business dispute. That's anti-consumer.


I agree that it's anti-consumer. I disagree that it's Apple that's being anti-consumer. The company developing the app would be anti-consumer for knowingly violating the Terms of Use to try and pull a PR stunt.


That's often done for producing a static performance and mix for distribution.

But when returning to a digital musical work months or years later, oftentimes the idea is to improve or otherwise rework it ... just as live bands do constantly. A non-working essential plug-in (filter, synth, VCO, whatever) might make that much more difficult.

One of the biggest headaches in computer music-making is how much time fighting the tech takes away from the creative process. No one needs their OS adding to that distress, let alone switching serial-port designs every few years (obsoleting trusted and often expensive equipment).


I assume you upgraded the OS, in which case it's annoying but not unusual for plugins to stop working.

A machine that's used for making professional music should not be upgraded or connected to the internet. If it's for a hobby... I think we will have to live with the compromise if we want to have the latest security fixes and connect to the internet.


>should not be upgraded

Most DAW-makers are constantly upgrading their software. And they often obsolete their older versions to get in sync with new OS's. 'Keeping the old stuff' sometimes isn't an option.

Physical instruments keep working for decades ... but thanks to OS upgrades, valued digital hardware and/or software instruments (say by Opcode or Native) can be lost to stupid or cavalier changes. Anyone who's been making 'professional music' for long has been bitten many times.


Yes, they are obviously upgrading their software because they need to make money, and adding features and fixing bugs is a great way to do that.

The only solution to not having a broken music workstation is to never connect that machine to the internet and never update it. Physical instruments keep working for decades because...they are never connected to the internet and never updated.


Completely agree, except I'd say physical electronic instruments have a lifetime of about 20-30 years now. Several synths I own are now non- or half-working because of fading displays, broken floppy drives, power supplies and even chips going bad. The relentless churn of music computer software and hardware setups has also existed since the 80s. I guess it is probably worse for photo & video production.


>I don't want to worry about whether my music will work five or ten years from now

This is exactly what Apple has already done to the iTunes world: music you had a decade ago is suddenly inaccessible.


> [...] music you had a decade ago is suddenly inaccessible

Music bought via iTunes doesn't have any DRM since 2009.


But even without DRM, if you go with the default settings and don't download your music locally, you lose access to it if they decide to remove it from their catalog. Same goes for movies/TV shows; I've had both disappear from my iTunes library at various points.


Do you have an example of this?


What is this referring to? Mine seems to work fine.


Interesting, but reading the conclusion I'm fascinated by how, in this affair, technically knowledgeable people lose common sense to defend their favorite brand:

- Per-launch verification is terrible for privacy, vis-a-vis Apple and the whole network, when it happens in plain text.

- "They should also explain how, having enjoyed their benefits for a couple of years, they've suddenly decided they were such a bad idea after all" -- another key issue: user information, consent, and control.

- Additionally, the public only became aware because the system malfunctioned, which is itself a security issue.

- Considering the current corporate culture, there are legitimate concerns about what those choices might lead towards.


"No Logo" by Naomi Klein outlined how brands work.

One factor in the irrational defence could be a kind of psychological protection of investment. Apple isn't just another company, it's an entire lifestyle ecosystem. Those invested in Apple have the watch, the TV, the laptop, iTunes, etc. And together they really do "just work" - the user experience is great!

So to admit that Apple is flawed, that their investment was a bad idea is to admit they were wrong and that their time and money was wasted. No-one wants to be a sucker. Far better therefore to protect your investment. Apple really are genius to pull this off. Apple is part of people's identity.


The article points out that while there are drawbacks to checking app signatures, there have also been documented benefits in terms of uncovering vulnerabilities and making systems more secure, which also has direct privacy benefits to the users whose systems don't become compromised by malware.

The balancing act between freedom and security is never going to stop being a debate. Engaging in it in good faith, as the linked article does, is a reasonable approach (one you don't usually see represented in Klein's oeuvre): consider tradeoffs, counterarguments, and historical context from different perspectives. Apple is flawed, sure, because all complex solutions are inherently flawed. They have a responsibility to be more open and transparent, and I'd prefer to see more details and updates in their otherwise laudable security whitepaper [1], and clearer, more accessible user-definable toggles. But your or my preferred solution probably isn't the ideal default for most users, or for the ecosystem as a whole.

[1] https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app...


Signature checking is not what people are mad about; the only way to find nothing bad in this situation is to retreat to the most abstract view, in which signature checking per se is not bad.

That's the problem with people who dismiss Klein and her "oeuvre": you are so desperate to stay in the middle that you refuse to see the evidence right in front of your face.


Sometimes people just draw different conclusions because they have different values and priorities or willingness to accept incremental progress.

I’m not dismissing Klein, I referred to her “oeuvre” out of my respect for her as an artist.


This is a good explanation about why people defend brands, but in case of Apple, my experience is that the anti-Apple crowd is more emotionally invested in being anti-Apple than the fans are for it. It used to be the other way around when Apple was the underdog, but after the iPhone took over and became a sort of default for certain regions and social circles, the most loyal brand warriors are the anything-but-Apple fans.


The most vocal, emotionally invested anti-apple crowd are those who were once emotionally invested in Apple. Similar to an anti-smoker, ex-cult member, ex-mormon or vegan convert - those who have been burnt are often the loudest. The same psychological effects are in play - people will try to defend the hurt to their ego.

There is no "anti-Apple" brand - that doesn't really exist, as brands don't work like that - BUT groups of people do work that way, and people can build identities around being in a group. (I can't really say I have noticed an anti-Apple subreddit either.)

One way groups defend themselves is to define the opposition - by defining the opposition to Apple fans as anti-apple-groups, it helps bolster the coherence of the Apple fan group and defend the brand and investment for the individual. We are under attack, close ranks and protect each other.


But they're not wrong at all and their time and money was well invested.

The ecosystem does just work and the user experience is great compared to the alternatives. It's also possible to use only specific products and switch off various cloud or telemetry options. There's still a long way to go to reach a private OS, but Apple has by far the best privacy stance compared to Google or Microsoft, and nobody offers such an OS right now. The best one can hope for is to build their own Linux-based distro or use BSD, and then one has to be prepared to invest a significant amount of time.

Apple hasn't been challenged under the GDPR yet because everyone's still busy with Google and Facebook. But if you or anyone else would like to lodge a complaint, maybe this will be decided for the customers (I think it's borderline) and we'll get an option to switch it off.


"One factor in the irrational defence could be a kind of psychological protection of investment." The Author of this publication is clearly invested deeply in Apple ecosystem as a developer. One of the reasons that I consider using Mac OS behind hardware firewall in the future is clear realisation of this process. This telemetry malpractice clearly must be prevented by legislative measures. Trust is earned by transparency, not by some kind of Security slogans.


> I'm fascinated in this affaire how technically knowledgeable people loose common sense to defend their favorite brand

What I find weird is regardless of what the discussion is involving Apple, someone needs to pop in with one of these theories about Apple tribalism.

Very very few people are in fact "defending" Apple here. Even among those few, the sentiment is largely that this is bad and Apple is fixing it.


I presume the comment is about the article, which it quotes, so it doesn't matter how few such people exist.


It takes some serious mental gymnastics to see how that part of his comment relates to the article.


> how technically knowledgeable people loose common sense to defend their favorite brand

Someone who says this usually holds an opposing position and simply has their own tribal allegiance. Perhaps assume good faith on the part of those who do not make the same choices you do.


I think there's an element of people desperately wanting to believe that apple is their tribe, rather than just another company.

I can believe Apple do care about privacy, but ultimately they're just another company. For example, I'm sure Apple would love the Epic lawsuit to be decided based on a poll of HN users - "I would rather not have the freedom to run whatever I want, because [insert bizarre anecdote]".

Don't project your own beliefs onto Apple; vote with your wallet if they annoy you - it's just a trackpad.


Could the people that vote with their wallet please also stop caring about Apple so much, to the extent that they have to rescue those that are still Apple customers and nitpick anything Apple-related to bits? Take a clean break, it's healthier that way.


>I'm fascinated in this affaire how technically knowledgeable people loose common sense to defend their favorite brand...when it happens in plain text

I'm fascinated how technically knowledgeable people don't understand OCSP.

Checking the revocation status of certificates is why OCSP was created. It happens via HTTP. Why? Because you cannot check a certificate used for the HTTPS connection when you are using HTTPS for the connection. Apple leveraged OCSP for Gatekeeper since it does the same thing, checking certificates, in this case a developer certificate. That is all it does.


It's also easy to imagine what the blog posts would look like if they did the same thing except over TLS--in a way that the harmlessness / purpose of the request was not immediately apparent.

I agree with you, though--it seems like they solved a valid problem with the most obvious, commonly-used solution. The real debate is probably just over whether or not the problem is a sufficiently large threat to justify the downsides.


How is certificate checking a terrible idea?

It doesn’t leak the application name or any personal information, and Apple doesn’t store it permanently.


It initially logged IP address and the associated developer ID which was a genuinely bad idea. They've stopped logging IP address now.

The concept here is fine, they just screwed the pooch a bit on implementation. And as usual, HN blew it out of proportion.


Developer ID is an extremely good proxy for application name.


Yeah, the article's last paragraph irks me... to reformulate it in the context of domestic spying, it'd be like saying "The NSA's communication monitoring has kept you safe for years; now that you've heard of it, you decide it's a bad idea?"

Most people probably never noticed this phone-home feature existed, just like they never knew that NSA was recording everything. (Obviously anyone who bothered to look under the hood could've seen it, but hey, how many people do that).


The Apple thing was not designed for explicit mass-surveillance; NSA’s programs are. That’s kind of a big difference.


> "Those who consider that Apple’s current online certificate checks are unnecessary, invasive or controlling should familiarise themselves with how they have come about, and their importance to macOS security. They should also explain how, having enjoyed their benefits for a couple of years, they’ve suddenly decided they were such a bad idea after all, and what should replace them."

A simple opt-out toggle, for privacy reasons, would be a good start... people should stay in control of their own data and be able to choose themselves whether or not they are willing to trade in their privacy (for security in this case).


Apple has said they plan to do this, and also encrypt the checking payload. Sounds good to me, though definitely a privacy failure that they didn't do this in the first place.

The other thing I'd like to see is the app open immediately, w/ the check happening asynchronously in the background. (This seems like super-basic good engineering to me.) No idea if they're planning to fix that or not.
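
Roughly this pattern, sketched in toy Python rather than anything Gatekeeper-shaped (launch_with_async_check and is_revoked are invented names):

    import subprocess, threading

    def launch_with_async_check(app_path, is_revoked):
        proc = subprocess.Popen([app_path])  # the app starts immediately
        def verify():
            # The network check races the app; if the cert turns out to
            # be revoked, the app is killed after the fact -- which means
            # a malicious binary does get a brief window to run.
            if is_revoked(app_path):
                proc.terminate()
        threading.Thread(target=verify, daemon=True).start()
        return proc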


> The other thing I'd like to see is the app open immediately, w/ the check happening asynchronously in the background. (This seems like super-basic good engineering to me.) No idea if they're planning to fix that or not.

How would this work? The point of the check is to block malware from running, and opening without the check would, by definition, negate the entire system. If malware authors get wise to the async scheme, they can write programs that deliver their payload in the opening milliseconds of an app’s execution, while the network call is running (even the fastest pings would leave 1 or 2 ms worth of window).


Fair question. My thinking was that the system is already designed to be mitigative rather than failproof (it turns off entirely if there's no internet), and that malware would be pretty limited in what it could do in just a few hundred ms.

Waiting to open an app based on a network request is basically just guaranteed to give you a terrible experience some % of the time.

Maybe fancier solutions like a local blacklist are needed. (Which weirdly it looks like Apple had and then moved away from?)


Surely this check could be done on install/first run, then cached?

If you want rapid blacklisting, a frequent call to Apple to say "anything new blacklisted?" would suffice. Same as push notification.


But that would defeat the purpose. Apple can be thought of as the equivalent of the NSA: they "care" about your privacy in the sense that they don't want anybody but themselves to have access to it.

Unfortunately we don't have an Apple competitor that cares enough about your privacy to not want anybody, including themselves, to have access to it.


And yet, this claim about Apple’s intent isn’t made with a shred of evidence.


[flagged]


That article contains no evidence that Apple has a hidden agenda.


[flagged]


Gesturing broadly doesn’t work because it’s not evidence. It is only innuendo.

If you had evidence you’d be able to be specific.


"Apple dropped plan for encrypting backups after FBI complained" doesn't sound privacy oriented to me.

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...


Navigating the complexities of dealing with different governments is not the same as having their own anti-privacy agenda.

Of course they should encrypt the backups, but perhaps the alternative was going to be some kind of legislation that would be even worse.


> navigating the complexities of dealing with different governments

Is that what kids call "handing over all cloud customer data to the FBI" these days?

Holy whitewashing batman, that's a lot of speculative mental gymnastics.

To put it simply, you have no evidence that supports Apple here other than a "but perhaps the alternative was... whatever I just came up with".

To paraphrase yourself in another comment: "that's intellectual dishonesty about Apple."

If Apple cared a single bit about privacy, it would have encrypted customer data from the beginning instead of planning to eventually do it one day, only to give up upon the FBI's request.


This is turning out to be a case similar to the iPhone battery-degradation performance-throttling issue. Instead of clearly messaging what they were doing to your phone, they did things behind the scenes because they knew better, and decided not to give the user the choice to run the phone at full performance.


You clearly need to familiarize yourself with the actual facts of this case. Also, I'm pretty sure that for 99% of users, "full performance" is not characterized by "randomly crashing due to lack of intelligent power management".


The throttled performance is full performance. Without throttling the phone would simply crash.


I sometimes wonder if the mods won't end up banning "political" talk on HN. Because these days everything becomes political, even if it really is a technical issue.

Case in point: the online signature check was a technical decision, made to fight malware. It was implemented similarly by other OS vendors (Microsoft) and it's been this way for years.

Now we discover that it has the unfortunate side-effect that it lessens privacy. Apple (and probably other OS vendors) are working to improve that in the future. Also, a technical issue.

It was never about privacy. It was never a political issue. Can we please just discuss it from a technological standpoint?


The technical issue is "can we provide these features without weakening privacy?"

The political issue is "if we can't provide these features without weakening privacy, should we still provide them?"

Aren't they both important points to discuss?


They are, but the difference is that we can fix technical issues, or at least improve them. We can (mostly) agree about what's right and wrong and what's better or worse.

Political issues on the other hand, just end up antagonizing us ever more. We argue endlessly, go on countless tangents and nobody agrees on anything because we see the very issues under different lights, experiences, values and cultures.

I am tired of politics. I just want to get stuff done. Hopefully good stuff, but I’d settle for slightly better.


Politics is about where we go. Tech is about how we arrive there. If you disagree on the direction we are headed, discussing different ways to arrive there seems pointless.

You may agree with the status-quo politics, but other people don't. For them it's not about the tech, it's about the overall direction. For those people the important discussion to have is political.

For instance, I personally am against the current direction macOS and Windows are headed. I have no problem with these kinds of security measures as long as there is a button to opt out. Currently, Apple decides for everyone, and doesn't provide ways for power users to opt out. I dislike this, and I feel like discussing better cryptography, security protocols, etc., doesn't address my priorities.


I think this is a terrible attitude. We can improve political issues and we do it by defending our opinions in the public sphere, which has the power to change others' opinions. If you don't work to present your ideas to the world in the best possible light then you will just allow other, weaker ideas to become more convincing in comparison, thus doing a disservice to anyone who could have been convinced otherwise.

I am sorry that politics is tiring but I think that is just a reality of politics being a manifestation of natural forces. The natural world is competitive, it's competitive on the cellular level, the food chain is competitive, and human societies are competitive too. Looking the other way doesn't change the reality, it just guarantees that the opinions of others will become more significant to the world than your own. Just like how our cells compete to form the best possible body, I think we are obligated to compete with our ideas to form the best possible society.


People tire of politics when (they feel) the conversation becomes unconvincing, pedantic, or monotonous. That doesn't mean those people have the wrong attitude; it means the argument is not compelling.


People are "tired of politics" until it affects something they care about.


It's possible that the technical discussion about online signature checking was subsumed by the political discussion--the two are interrelated and if we don't get into these discussions here I don't really see another place for them to happen.

It's easy to shrug it off as simply a technical issue, and it's very convenient for the PR department as well.


When a giant like Apple makes a decision to disallow installing apps that were not downloaded from the macOS App Store (and by preventing me from opening the app, what Apple does is effectively disallow it; my retired father does not know how to circumvent this), then this is a political decision that affects the lives of thousands of Mac users.

Computers have become a gateway to the digital world, which makes up a large part of our lives. And Apple is a huge player that has the power to shape the future of computing. Apple's vision of computing is on a trajectory where the end game is clear: Users have no control over their devices and will only be doing what Apple allows them to do. This is the case on iOS already, and macOS will be there in a few years.

Some say: Decide with your purse. If you don't like Apple's vision, then don't buy their products. But this argument is like saying "This is how we handle things in country X, if you don't like it, move somewhere else." Some of us have invested lots in tools and software in Apple land, so changing platforms will take some time. But even if we can change, Apple is shaping the future of this industry, in a way that it restricts freedom and limits options. And this is not a future we should accept. Instead we should be opposing and fighting it.


I've recently downloaded quite a number of Mac programs directly from the web, as well as from Steam. They work just fine, as long as they've been signed.

What makes you say that Apple has made a "decision to disallow installing apps that were not downloaded from the macOS App Store"? That seems kind of obviously untrue.

I mean, I myself sell Mac software outside the Mac App Store and have never had a complaint.

Maybe you were thinking of the iOS app store?


> everything becomes political, even if it really is a technical issue

I'm being reminded of an old song by Tom Lehrer [https://www.youtube.com/watch?v=QEJ9HrZq7Ro]


I think informing the vendor of an OS about every program you run necessarily has a political component and is of relevance to any discussion about security. It is the first step in defining the threat model, and there certainly are additional threats you are exposed to. This data is highly valuable to anyone who might want to infiltrate computer systems.

The technical discussion of signatures is well documented.


> It was never about privacy. It was never a political issue. Can we please just discuss it from a technological standpoint?

No. Unintended consequences are important. There are privacy issues, therefore we need to discuss privacy.


> Case to the point: online signature check was a technical decision, to fight malware.

This is an oversimplification. It also helps protect Apple's business model: you must pay Apple a fee for services (and show ID) to be able to sign your apps for distribution on this platform. Imagine if you had to show ID to get a TLS certificate for your website.

Don't conflate the issue - this is also a move to protect certain streams of Apple services revenue, in addition to protecting users from malware, and it always has been.


I see you asserting this over and over.

What I don't see is you providing any real evidence that this is a core part of the decision-making process.

Apple isn't particularly incentivized to find a different way that avoids the tools they already have that already make it harder and costlier for parties to get around their security mechanisms. That is not the same as making decisions because they protect the business model.

Which is to say, it appears that you're the one oversimplifying and conflating (btw, I do not think that word means what you think it means) some very different motivations.


I agree that there is no direct evidence that this decision was part of their formal decision-making process. But there is still something to be said for designing systems where it's not possible for those negative incentives to exist, whether or not there is any current intention of taking advantage of them.

Of the tens of thousands of people who had a hand in shaping macOS today, it's impossible to say what their collective intentions were in all the decisions they made. So I think it's useless to talk only about the intentions you can prove just by looking at their formal decision-making process. That is why we need to be working to protect privacy at every level with a "defense in depth" approach.

And of course it goes without saying that all major vendors have issues like this and could be working harder to make sure that these incentives don't get created.


> designing systems where it's not possible for those negative incentives to exist

No doubt, but even in simple systems this is considerably more difficult than it sounds. Incentive systems are not easy, and any incentive system is often twisted into a game that produces unexpected poor behaviors. Try to achieve that at a 100+k employee company and you're guaranteed to end up with misaligned or counterintuitive incentives.

The reason something stronger than simple assertion matters here is because Apple actually has added an incentive system: trumpeting privacy as a core feature means they tie their brand to their ability to deliver on privacy. That means that being called out for privacy issues has greater potential harm for the company, and thus its bottom line, in a way that's far more direct and monetarily impactful than $100 annual developer fees ever will be.

Even cynically, you can see the same mechanic at work with the recent reduction of app store percentages for small businesses. Apple has made it a core part of its developer outreach that the app store is a good thing for developers: they've made it part of their brand. If they're called out for something that makes many of those developers disagree, or even for something that makes users perceive that part of the brand as incorrect, it has implications to the whole company's bottom line (not just the developer id sliver of it).

> it's useless to talk only about the intentions you can prove just by looking at their formal decision-making process

For what it's worth, my point isn't tied to formal decision-making, it's tied to the informal parts as well. What I'm saying is that this can be a completely technical decision that relies on the existing business structure without ever taking into account whether it will raise revenue from developer ids. The developer id's existence is a fact. The “DoS resistance” characteristics, if you will, of having the developer ids cost money is a fact. As a system architect, leveraging those facts for system security seems perfectly reasonable. Yes, absolutely they should have taken privacy into account as well. “We haven't used it” and “we won't use it” aren't the same as “we can't use it”.

But here's the thing: to me, the strongest indicator of whether a company is committed to an approach is whether they react positively when they are called out, or whether they double down on their mistakes. Apple was called out here, and they've committed to doing just about all of the things they should be doing. You could use this same framing to say that Apple isn't as committed to developers thriving on their platform as they are to living up to their privacy commitment, of course.

Tl;dr: customers are part of the incentive system; tying your brand to a commitment as aggressively as Apple has tied theirs to privacy has an impact on customers, and this is a great way to introduce an additional external forcing function for your internal teams.


Apple charges 10x the market rate for credit card processing on the purchase of mobile apps on iOS. Why do you think this is possible?

Take it from dhh if you don't believe me:

https://mobile.twitter.com/dhh/status/1328339591389175808


Sorry, you seem to have deviated into an unrelated axe you're grinding. Try again, this time with the axe you were originally grinding, which I'll help with:

> [Online signature check] is also a move to protect certain streams of Apple services revenue, in addition to protecting users from malware, and it always has been.

To restate and avoid drifting into another non sequitur, this ascribes intent; that is, it suggests that part of the reason online signature check was added, and part of how it has been evolved, is to protect certain streams of Apple services revenue. That would be your argument, which has no evidence to support it, but you suggest is backed by “facts”[1], which appear to be nowhere to be found.

Are these facts somewhere to be found? Or are you stating hypotheses as facts?

[1] https://news.ycombinator.com/item?id=25210475


I have no axe to grind with Apple. I'm a happy Apple customer and have been for most of 30 years.

The same code that keeps malware from running on a mac (or iphone) keeps non-app-store apps from running on an iphone, or prompts you to move non-notarized apps to the trash on a mac.

It's not some separate thing: the exact same code path that protects the consumer store revenue and developer notarization service revenue also protects users against malware.

EDIT, for clarity: I am speaking of Apple-developed, Apple-owned platform security code, where root keys are not held by anyone other than Apple, not generic crypto primitives or the concept of code signing in general (where we have a P-as-in-public PKI).


Which is the same code that keeps unsigned bootloaders from running on PCs which is the same code that keeps unsigned packages from being installed on Linux systems which is the same code that keeps unsigned browser extensions from running on Firefox which is the same code that shows the scary warning on Windows.

Everyone seems to like code signing.


Lol, you have clearly never had to deal with Apple's overcomplicated code signing as a developer.

It adds a lot of wrenches when you're just trying to do basic stuff like codesign and push test builds onto a USB-connected device from a bash script, and it is flaky and undocumented as fuck.

I am honestly jealous of my android counterparts with their far simpler system and first class command line support via adb.


[flagged]


I've not posted any theories, only widely-accepted and recognized facts.


> this is also a move to protect certain streams of Apple services revenue

That's a theory, not a fact, nor is it widely recognized.

By any reasonable accounting of costs, the $99 Apple Developer Program fee isn't meant to be a profit center that Apple is trying to "protect". It mainly helps prevent spam accounts and helps offset the cost of reviewing and distributing free apps in the Mac and iOS app stores.


“ this is also a move to protect certain streams of Apple services revenue”

Citation needed please, Mr. Giuliani.

