A common refrain in arguments that we don't need to reject closed-source software to protect privacy is that closed source doesn't hide the behaviour: people will still notice backdoors and privacy leaks. Sometimes they do; sometimes they don't.
Parts of the US Government's unlawful mass domestic surveillance apparatus were described in the open in IETF drafts and patent documents for years. But people largely didn't notice, and reports were dismissed as conspiracy theories for a long time.
Information being available doesn't necessarily make a difference, and making a difference is what matters.
As the ecosystem grows, the cost of switching increases, so the market starts acting more and more inefficiently.
This is why countries have state intervention in such cases, and why antitrust law exists.
If the option were a Mac with privacy vs. a Mac without privacy but $10 cheaper, I'd think the market would pick the Mac with privacy. No such choice exists within the budget, and if a company has a monopoly on the ecosystem, then the market cannot react when held hostage. The cost to transition to a different ecosystem is thousands of dollars when you've been using a Mac your entire life. And you don't exactly want to relearn everything as you get older; perhaps that's the sunk cost fallacy, as many would define it.
It's a bundled deal: you can't pick the parts you like and throw away the ones you don't, the way you could in an efficient market.
If even someone like me cannot be bothered to move to more privacy-friendly platforms, then I don't have hope for other people. It's just not worth it. Until the majority face the consequences of their apathy, any action you take is fruitless. Let them taste the pain, then offer the solution.
I know I can be tracked. So what? I can't verify the silicon in my device, can I? Gotta trust someone, maybe blockchain will solve the problem of trust among humans, in a verifiable way.
He does seem to have a weird cult of personality around him but I can't really understand why. He has been irrelevant for quite a while now.
How about the $348 billion that says he's wrong about Bitcoin?
Look, I get it. I respect Schneier's knowledge on encryption but he is wrong about blockchains and about Bitcoin in particular.
But he wouldn't be the first establishment technologist/economist/politician to be wrong about Bitcoin.
As I write this, the market cap of Bitcoin is a little over $348 billion; there's no way it gets to this valuation if its distributed trust model didn't work.
He's making social and process arguments, not technical ones.
Market bubbles are a thing. For a while there everyone was convinced that small plush toys were going to help them retire. The market also convinced itself for nearly a decade that "housing prices never go down". Lots of people can be wrong for a surprisingly long time.
Yes, bubbles are a thing, but Bitcoin appears to be something different. It's passed every test and attack. It's now being taken seriously by mainstream financial professionals, CEOs of publicly traded companies, and Wall Street.
These facts can't themselves prove that Bitcoin is not a bubble, but they do mean that if you buy into that line of thinking, then encryption and math have to be suspect as well, since Bitcoin's functioning relies on them.
Also, Bitcoin has been the best-performing asset of the past 10 years, during most of which people didn't take it seriously: https://www.bloomberg.com/news/articles/2019-12-31/bitcoin-s...
You're literally describing the bubble. An asset that is worth $20k one day and $6k six months later is worthless to anyone except speculators, speculating on... a bubble.
Disclaimer: I am long BTC.
This is short term thinking and a general mischaracterization.
First, we've never seen a new form of money created in real time, so it's hard to say how it's supposed to perform.
However, nobody should expect something that will fairly soon have the same market cap as gold (about $10 trillion) to not have a lot of volatility as it grows. The tech darlings of today (Apple, Google, Twitter, etc.) were also quite volatile as they grew.
There were a lot of ups and downs for Apple as it went from darling startup, to nearly going out of business in the mid-'90s, to a $2 trillion market cap today.
As you may know, the mantra in the bitcoin community is to HODL—hold on for dear life, not to time the market. For long term investors, the ups and downs don't matter.
A publicly traded company that puts its treasury of $425 million into Bitcoin isn't speculating: https://www.microstrategy.com/en/bitcoin.
This hasn't changed with Bitcoin. BTC is money the way a gold bar, a Beanie Baby, or a block of IPv4 addresses is money.
> However, nobody should expect something that will fairly soon have the same market cap as gold (about $10 trillion) to not have a lot of volatility as it grows. The tech darlings of today (Apple, Google, Twitter, etc.) were also quite volatile as they grew.
Nobody describes the tech darlings as money, or as an alternative form of currency. They're equities you can invest in, and they come with the expected volatility.
> As you may know, the mantra in the bitcoin community is to HODL—hold on for dear life, not to time the market. For long term investors, the ups and downs don't matter.
It's nice that there's a backronym that's been created from a typo. I still don't invest in money, I use money to invest in assets.
Disclaimer: I remain long bitcoin (since 2012)
Call me when I can buy my latte and groceries with bitcoin, or pay my mortgage. Until then you've made an investment vehicle, not money.
> The tech darlings of today (Apple, Google, Twitter, etc.) were also quite volatile as they grew.
Survivorship bias. Pointing to a handful of random successful companies and trying to draw conclusions is literally meaningless.
Yeah, this isn't something you actually say about money. Honestly, it would be a kind of cultish thing to say about an investment too. I don't need a mantra for how I invest in my 401k, why does Bitcoin need one?
> A publicly traded company that puts its treasury of $425 million into Bitcoin isn't speculating
That's the literal definition of speculation, a risky one at that.
As a result, I've had more than one experience of arguing with people who take his old blog posts as gospel without ever thinking critically about them.
His article denouncing the XKCD password scheme is a gem: he doesn't bother (if it were anyone else I'd be less charitable and say doesn't know how) to calculate the entropy, and then he proposes an alternative scheme of his own invention that almost certainly provides less entropy and is more vulnerable to dictionary attacks.
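For the record, the arithmetic he skipped is one line. A back-of-the-envelope sketch in Python, assuming the roughly 2048-word list the comic is usually read as implying:

    import math

    # Entropy of a passphrase of k words drawn uniformly at random
    # from a wordlist of n words: k * log2(n) bits.
    def passphrase_bits(k: int, n: int) -> float:
        return k * math.log2(n)

    print(passphrase_bits(4, 2048))  # 44.0 bits -- the XKCD example
    # Compare: 8 characters drawn from ~94 printable ASCII symbols.
    print(8 * math.log2(94))         # ~52.4 bits, but far harder to memorize

The point isn't the exact numbers; it's that the calculation is trivial to do before denouncing the scheme.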
You don't have to trust the software, you can verify it or implement your own. You don't have to trust your internet uplink, the protocol would work over carrier pigeons or with dead drops. You don't have to trust exchanges, just exchange bitcoin for local currency with your neighbor. And even if you use an exchange you shouldn't store money there anyways. The minimum required trust is tiny (basically you yourself), but of course as Schneier points out the amount of trust involved in practice isn't nearly as low, and for many people the failure cases are much worse.
This is a common refrain among software people, but in reality approximately 0% of the market actually rewrites such software on their own. Most people can't code, and the percentage of those who have the time, interest, and specialized programming skills to rewrite their own financial software is an incredibly tiny pool. In practice, you have to trust someone's code.
And auditing doesn't get you very far either. That takes time too, and auditing secure software is really hard. I doubt I could do it, personally.
> You don't have to trust your internet uplink, the protocol would work over carrier pigeons or with dead drops.
Be serious. Nobody does this, aside from maybe as a joke.
> You don't have to trust exchanges, just exchange bitcoin for local currency with your neighbor.
You don't have to, yet everyone seems to. Convenience matters; pretending otherwise is a fool's game.
> And even if you use an exchange you shouldn't store money there anyways.
Crypto fans say this all the damn time, and yet nobody seems to listen. Maybe because it's foolish advice to give? While this advice is technically sound, it fundamentally fails to understand what people using any financial system want and what motivates them. Nobody wants to turn the process of moving money around into an elaborate song and dance; they want to click a few buttons and have it work. If your system counts on telling people to avoid the most convenient way to accomplish the task, your advice will continually fail.
> The minimum required trust is tiny (basically you yourself), but of course as Schneier points out the amount of trust involved in practice isn't nearly as low, and for many people the failure cases are much worse
Louder please - this is applicable far beyond cryptocurrency.
One detail Schneier misses though is:
> Honestly, cryptocurrencies are useless. They’re only used by speculators looking for quick riches, people who don’t like government-backed currencies, and criminals who want a black-market way to exchange money.
The second statement contradicts the first. People who don't like/trust government-backed currencies aren't fanatics. The way the big banks handle money is quite reckless and dangerous, as we've seen time and time again. And buying drugs for your own personal use should be legal (IMO) but is not. Cryptocurrencies are useful. Maybe not to most people, or to Bruce Schneier, but that (blanket) statement is simply false.
Beyond that, Bitcoin as a simple store of value is an incredibly risky proposition. Someone upthread posted a link to an article claiming that Bitcoin has been the best performing asset over the last 10 years, but that ignores the gut-wrenching volatility it's gone through.
And even if you stretch to suggest that Bitcoin has been a good investment, a good investment vehicle is generally not a great currency. If I had a $100 bill that, a week later, was worth $50, I'd be pissed. If another week later it was worth $200, I'd be pleased, but would feel very uncomfortable thinking I can rely on that "currency" to continue to pay my bills over the long term.
I'm not saying Bitcoin has failed or that it's useless in general, but as a generic currency replacement I'd say it's pretty weak and unreliable, mainly only suitable for use by people at the margins, people who often can't bear the risk that Bitcoin foists upon them.
Bitcoin was created as something that governments/banks/corporations couldn't control or manipulate and that's something that's worth a lot in itself, I think.
And as I mentioned Monero is quite convenient for shopping on the dark web, without government intervention.
If you trust your government, banks, politicians, and agree with all the laws and taxes, then yeah, you might truthfully state that cryptocurrencies are useless. But some believe and argue that governments shouldn't have that kind of control and people should have a higher degree of freedom and privacy.
This went a bit off topic perhaps.
I'm curious about this, and there are some ways we could test it if we had access to sales data.
Amazon sells a Kindle that displays ads on the screen while the device is asleep, and a more expensive model without ads. How many people buy the one without ads? (I'm assuming the device sends back a record as to what ads were seen or interacted with, which has privacy implications.)
Hulu has a plan that shows adverts when you watch a show, and a more expensive plan that doesn't. How many people pay for the more expensive plan?
More generally, what's the price spread that would cause most people to flip from buying one version or the other? To use your example, I'm sure most people would pay for a "mac with privacy" if the difference (on a multi-hundred-dollar or $1k+ product) was only $10. But what if the difference was $50? $100?
And that's not even the exact same example, since a "mac with privacy" might look, to the average user, to be identical in operation to the "mac without privacy". Paying more to avoid advertisements has a clear impact on the experience of using a device/service. But knowing that your $X-more-expensive mac isn't sending application launch data to Apple doesn't change your day-to-day experience much.
Companies pay lots of money per user to secure their computing. They could save a lot on licensing by running obsolete software. Companies that do that get hacked, and are sometimes mortally wounded. I'm not so sure users price privacy so cheaply.
This makes me think of the double-slit experiment as applied to basketballs. There is a period of time over which decisions need to be properly considered; presumably simple decisions need little time, and complex decisions need more. There is also a period of time required to make a decision, and a level at which the decision is made.
There are lowly software engineers making decisions like this, some of which may not even percolate to daily stand-up, yet will require the Supreme Court to unpack 10 years from now. And no one really knows.
It's like trying to pass a basketball through a slit. Yes, there's a theoretical interference pattern, and you can calculate it, but you can't do the experiment, because you can't pass the basketball through a slit that small (for basketballs, if I recall, the required slit is angstroms wide, if not smaller).
So in software development, you've got huge uncertainty in the societal implications of some decisions about what information you're going to pass over the network, but you can't even get them all through daily stand-up, let alone to Congress or the Supreme Court. Somewhere along the way, some of them end up on Hacker News with people misquoting Eric Hoffer: “Every great security decision starts off as a movement, turns into a business, and ends up as a racket.”
I know this doesn't have too much to do with the core of your post, but I want to mention it nevertheless: QM does not imply that there is an interference pattern for basketballs. There might or might not exist one, but we would need an answer to the question of measurement to predict one way or the other.
The market only acts fairly in the window after a product is commoditized and before regulatory capture happens.
And in some cases that window is closed before it opens.
Is there any data released on ads vs. no-ads versions?
That's the closest comparator I can think of.
Here are the only differences I've noticed:
• With ads, the sleep screen displays some artwork from a book Amazon is selling and some text suggesting you come to the store and buy books.
• The home screen has an ad at the bottom.
Since when I'm not using the Kindle the sleep screen is hidden behind the cover, and when I am using it I'm somewhere other than the home screen 99.9% of the time, I never saw any reason to add the "without ads" option, and when it was time to replace that Kindle I again went with ads.
I suspect that the vast majority of people that buy the "no ads" option up front do so because when they see "with ads" they are envisioning a web-like experience where the ads are all over the place and intrusive and animated and distracting.
(The book is usually a romance novel for me, even though I've never bought any romance novels from Amazon, nor anything even remotely like one. In fact, I don't think any book I've bought from them has even had romantic relations between any of its characters.)
I have also purchased a Fire during a Black Friday sale. Got rid of the ads on that one too, but for free since some nice person over at xda made an automated process for that. On a side note, with Termux it isn’t entirely horrible as a 7 inch laptop when paired with a hinged keyboard case. A portable system at a similar price to a Pi.
> Is there any data released on ads Vs no ads versions?
Do they offer a tracking vs no tracking option too? The absence of adverts does not mean the absence of tracking.
Even if you E2E-encrypt each user's data for cloud storage and have devices join a P2P keybag, à la iMessage, consider the ad-tech department of your same company as an external adversary for a moment. What would an external adversary do in that situation? Traffic analysis of updates to the cloud-side E2E-encrypted bundle. That alone would still be enough to create a useful advertising profile of the customer's reading habits, since your app is single-purpose — the only reason for that encrypted bundle to be updated is if the user is flipping pages!
And, together with the fact that your ad-tech department also knows what books the customer is reading (because your device can only be used to read books you sell, and thus books you have transaction records for selling them), this department can probably guess what the user is reading anyway. No matter how much your hardware-product department tries to hide it.
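To make the traffic-analysis point concrete, here's a toy sketch (timestamps invented) of what the ad-tech side could infer from nothing but the write times of an opaque, E2E-encrypted sync bundle:

    from datetime import datetime, timedelta

    # Hypothetical server-side log: times one user's encrypted bundle was
    # written. The contents are unreadable; the timing is not.
    writes = [datetime(2020, 11, 20, h, m) for h, m in
              [(7, 2), (7, 5), (7, 9), (22, 30), (22, 34), (22, 41), (22, 45)]]

    # Gap-based sessionization: writes under 15 minutes apart are assumed
    # to be page flips within a single reading session.
    sessions, current = [], [writes[0]]
    for t in writes[1:]:
        if t - current[-1] <= timedelta(minutes=15):
            current.append(t)
        else:
            sessions.append(current)
            current = [t]
    sessions.append(current)

    for s in sessions:
        print(f"session {s[0]:%H:%M}-{s[-1]:%H:%M}, {len(s)} page syncs")
    # => one morning-commute session, one before-bed session -- a useful
    #    advertising signal extracted without decrypting anything.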
In this regard, Amazon is among the worst of the big tech companies that I interact with. Google lets me turn off personalization—when I go to Youtube, I see an extremely generic set of recommended videos. But I can’t do it on Amazon.
(I don’t use Facebook, not sure what can be switched off.)
Sure it can. You encrypt the data so the client can read it and the server can't. When you add a new device, you e.g. scan a QR code on your old device so that it can add the decryption key to the new device, and the server never knows what it is.
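A minimal sketch of that scheme, assuming the Python `cryptography` package; the QR code is just an out-of-band channel, and anything the server can't observe would do:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Old device: generate a key and encrypt everything before upload.
    key = Fernet.generate_key()                   # never leaves your devices
    blob = Fernet(key).encrypt(b"library + reading positions")
    # upload(blob) -- the server stores ciphertext it cannot read

    # Adding a new device: render `key` as a QR code on the old device
    # (e.g. qrcode.make(key).save("key.png")) and scan it on the new one.
    # The key travels screen-to-camera; the server is never in the loop.

    # New device, after scanning the key:
    print(Fernet(key).decrypt(blob))              # b'library + reading positions'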
True, although the price difference for the Kindle is about 20%. If the discount on a Macbook Air was similar, I'm sure it would be well subscribed.
OP was comparing the Kindle with ads discount to an imaginary Mac without privacy discount.
Alternatively, you can (1) deliberately break the ad functionality, (2) install Google apps, and (3) change your home screen, since it's Android underneath.
(1) and (2) worked, but they broke the ability to set your own home screen, so the fix was a hacky app that hijacks the home button to show the home screen of your choosing. That worked for over a year; then they repeatedly blacklisted such apps; then it worked but with a 2-second delay, which is basically horrible, and extensions started crashing the Gmail app.
Worst piece of shit ever.
> I bought one for my wife.
Aww, that's so sweet! ;)
Mea culpa, but her replacement is so nice I got one too. Moto G7 Powers.
Based on context (rest of your comment), did you mean "substitute good" instead of "commodity"?
What "privacy" means here is a huge issue.
I would not agree that this would be the case.
I think you'll find the majority looking to save $10, which is almost nothing for many people but a non-trivial amount for most in the US.
Stronger privacy laws hurt Google, Facebook, and Amazon far more than Apple. Most of Apple's privacy gaffes are just bonehead moves like this one, which shouldn't happen but also don't drive revenue.
We have a long way to go as an industry on keeping users _well_ informed on their privacy, and I'm afraid Apple's set us back years by morphing it into an advertising one-liner over actually helping users along.
On the iPhone, that means keeping the system's reputation for security and ease of use trumps more or less everything else. On the Mac? Not so much.
For Apple, privacy is a cheap giveaway because unlike Google or Facebook, the way Apple makes money doesn't require they know when my last BM was.
People forget about CRLs because browsers mostly ignore them.
People just go crazy for any Apple story because it attracts attention. People have been paying to send all sorts of app launch analytics to AV companies for example since the 90s.
That doesn't invalidate what the parent said. The only way awareness helps is if it's general knowledge. I don't believe it was, and you personally having known about it doesn't make it so.
Then, we would find out very quickly what people value.
I firmly believe this ecosystem (as in, the privacy-violating ad and data-selling business model) is only dominant because companies are able to mislead with impunity, so it's basically a form of fraud.
But I've also known that macOS verifies signatures for as long as it's been doing it. This was no secret, it was advertised as a feature.
I assumed it wasn't being done in plaintext, because who would be so foolish as to code it that way? And I'm still plenty mad about that. Anyone could have checked this at any time, and presumably people did; the only reason it became a story is that the server got really slow and we noticed.
Apple says there will be a fix next year, which... eh better than nothing, not even 10% as good as shipping the feature correctly to begin with.
But of the many things about this episode which are worthy of criticism, Apple being deceitful is nowhere among them. Never happened.
I wish you could prove this.
And in the wider tech press:
"Mac apps, installer packages, and kernel extensions that are signed with Developer ID must also be notarized by Apple in order to run on macOS Catalina"
Even on Hacker News.
Secondly, it does not actually say anything about the OS phoning home and preventing the user from launching an app. The AppleInsider piece talks vaguely of 'Notarisation', something that can be implemented in a variety of ways, like signing applications with a certificate.
There are hundreds of articles like this. Gatekeeper was a keynote feature.
What should be targeted is the product of said breaches. Something like the blood diamond approach.
If your company has PII, then by law you must be able to produce a consented attestation chain all the way back to the source.
If you do not, then you're charged a fine for every piece of unattested PII on every individual.
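To make the proposal concrete, here is a hypothetical sketch of what one link of such an attestation chain could look like as a data structure (none of this is an existing law or API):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Attestation:
        subject_id: str                # whose PII this record concerns
        data_hash: str                 # hash of the PII actually held
        consent_ref: str               # pointer to the signed consent artifact
        obtained_from: Optional["Attestation"] = None  # None = collected at source

    def chain_is_attested(a: Optional[Attestation]) -> bool:
        """Walk back to the source; every hop must carry a consent reference."""
        while a is not None:
            if not a.consent_ref:
                return False           # unattested hop: fine owed per record
            a = a.obtained_from
        return True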
- This also punishes possession, not use. If you think about that for a minute, it should become clear both how this doesn't attack the right problem, and how companies would evade it.
- Finally... how are you going to audit Ford or Geico? Honest question. Who pays for the audit of "every piece of unattested PII on every individual"? How often, what is the dispute mechanism, and who administers that? Seriously - this sounds like a job for a new agency combining significant portions of the IRS, the FBI and PwC.
If companies were immune to data breaches or leaks, then maybe this wouldn't be such a big deal. But I don't trust most companies to securely hold my data, even if they don't use it at all.
And besides, by the time a company uses the data in a privacy-destroying way, it's too late. The cat is out of the bag. Sure, the law against use could serve as a deterrent, but companies will push the law past the breaking point all the time. If you make possession trigger legal action, you can mop these things up before they get to use your data. Sure, you still have the problem of finding out about that possession. But also consider that if you have laws against possession and "bad" use, and a company does something, you can charge them for both things and hurt them more. That's a larger deterrent.
Yes. There are many laws (e.g. accounting) that only apply to companies, when it's scale that amplifies harm.
> possession, not use
How are they different, in this context? The latter requires the former, and the former is unprofitable without the latter.
> how are you going to audit Ford or Geico?
As you note, similarly to how we audit now, albeit hopefully more proactively. If the law requires a signed off third-party PII audit, and holds an auditor legally liable for signing off on one... I expect the problem would (mostly) take care of itself.
PII is always going to be a game of edge cases, but we've managed to make it work with PCI and PHI in similarly messy domains.
Right now, companies have GDPR & CCPA to nudge them in data architecture. National laws would just further that. I can attest to major companies retooling how they handle and track consumer data just due to the CCPA.
And, there will be many scammers who are simply out of reach of meaningful legal remedies.
However, how do you prove John Doe has actually agreed to this? What if John says he did not click accept button? Do we require digital signature with certificates, given that most people don't have them or know how to use them?
I think the problem is more tractable for physical products running firmware - there you have real proof of purchase, and, at present, firmware that does whatever it wants.
I don't work in that space, but my understanding is that the card processors essentially serve as dispute mediators in those instances.
So it would seem unavoidable (although not great) to have some sort of trusted, third-party middle person between collectors and end users, who can handle disputes and vouch for consent.
Blockchain doesn't seem like a solution, given that the problem is precisely in the digital-physical gap. E.g. I have proof of consent (digital) but no way to tie it to a (disputed) act of consent (physical).
So...basically the GDPR? Well, not quite, since the GDPR doesn't require consent attestation, "merely" a legal basis. Of which consent is just one (the most useless one to use as a company).
This doesn't mean we shouldn't continue trying to stop them.
And we stop them through laws.
Common sense is not that common and human decency doesn't scale.
When you have so many laws, they can be applied selectively depending on your political status, or to benefit the regulators or their friends. We just caught the sheriff of Santa Clara extorting citizens for tens of thousands of dollars to get concealed carry permits, which is why many people carry illegally, just like criminals.
Creating a law where non-disclosure of the smallest feature opens you up to harassment by government regulators is another avenue for graft and corruption. Like when the EU selectively prosecutes US companies, or US regulators selectively harass companies not in lockstep with the current administration.
I think if you really need a nanny state to protect you from your own decisions you should be required to give up all your decisions to the state.
I don't think it does; I think it is fraud. For these to be consequences of my choices, what choice do I make to select a mobile phone carrier that does not sell my location data? No such choice exists.
You can't 'non-disclose' some 'small feature' of a mortgage contract, of a loan, etc. Personal data deserves similar respect.
Lastly, we can and do have different laws for individuals and multi-billion-dollar corporations; you can't use this as an argument when we are discussing securities fraud and banking regulations.
The point of laws is statistics: you can't discourage everything, you need to discourage enough to have order.
And there is a middle ground between no laws and giving everything up.
Also, I give up my liberties and obey laws in exchange for protection from many nasty things people do when there are no laws.
Based on your examples and vocabulary ("nanny state" is a clear giveaway), you're American. Go live for 5-10 years in a country with lax or non-existent laws and law enforcement. We call those countries bad names for a solid reason.
Sorry but that’s just obvious BS. If it were true, you’d include examples and far more people a certain world leader doesn’t like would be “locked up”.
This is surprising to me. Could you provide examples of such common felonies US citizens commit in ignorance?
If we look at the example of OCSP in the website-certificate context, the delay added by OCSP (not to mention the privacy concerns) has already been acknowledged as objectionable. As a result we have OCSP stapling. For some reason, in the Apple developer-certificate context, OCSP is deemed acceptable by default.
And there are cases where it's not practical for "people to notice." For instance: a privacy leak that only uses the cell network connection of a phone, which would avoid easily-sniffed connections.
You're not wrong, but on the other hand has "the market" shown any serious signal that it cares about privacy? From what I can see people seem more than glad to trade privacy and personal information for free services and cheaper hardware. Take Samsung putting ads on their "smart" TV's UI and screenshotting what people are watching for profiling, that's been known for a while now. The market seems fine with it.
And I mean, at this point I could just gesture broadly at all of Facebook.
Your average consumer doesn’t know the extent of what they’re trading. Take Facebook, even with high profile stories and documentaries it’s reasonable for your average consumer to assume that what Facebook tracks about them is what they actively give to Facebook themselves.
I’ve had conversations with people that say, “I rarely even post on Facebook” and “If they want to monitor pictures of my food/dog/etc whatever who cares”, without any solid understanding of what even having the app installed alone is giving Facebook.
They won't care even when they know. And they might never know.
This is where the pricing system really comes into great effect. People buy and drive fewer SUVs when gas is more expensive. If we want people to buy fewer SUVs, increase the fuel tax.
Education is not enough, and in fact might not even be necessary at all. Just introduce real costs to capture the "abstract" costs (externalities) and the problem will likely correct itself.
Depends on what you consider a serious signal of care. If 'voting with your wallet' is the measure, increasing levels of income inequality, stagnant wages, weakening employee rights through the gig economy, etc. are effectively taking away that choice, as most market participants cannot afford to make it.
Also, what is the paid alternative to Apple or Google photos that allows me to have the same end user experience, without giving up my privacy? "the market" doesn't even have such an offer that I can see. The closest I can find (and that's through here) is photostructure.com, and even that's lacking all the local-ML-foo that makes Apple/Google photos a compelling option over anything else.
> Take Samsung putting ads on their "smart" TV's UI and screenshotting what people are watching for profiling, that's been known for a while now.
Known by who? I'd wager a year's salary that >50% of Samsung TV owners (and bump that number up to >75% of Samsung TV users) do not know this is happening.
Also, Chromebooks are not viable alternatives for many people especially of lower income. Having to rely on being always online is an issue when you can't always guarantee having internet. Been there, done that.
This is not true. The tiny user-base of OSX compared to Windows is already evidence that most people are not buying MacBooks.
I'd argue that's more of what parent was talking about with regards to visibility.
"Privacy" isn't something anyone can see. What you see are the effects of a lack of privacy.
Given the dark market around personal data (albeit less in the EU), how are consumers to attribute effects to specific privacy breaches?
If Apple sells my app history privately to a credit score bureau, and I'm denied a credit card despite having a stellar FICO, how am I supposed to connect those dots?
Is there a pro-privacy Google out there whose products languished while Google's succeeded?
The Silicon Valley VC network did not fund or support companies that promoted privacy. I cannot think of a single example.
Not a single major venture from the SV VC network even attempted to innovate on the business model. We can make machines reason now but, alas, a business model that does not depend on eradicating privacy is beyond the reach of the geniuses involved. Is it "impossible"? "Undesirable" is more likely. No one is even seriously trying. Point: the money behind the SV tech giants is not motivated at all to fund the anti-panopticon.
The salient, sobering, facts are that all these companies are sitting on a SV foundation that was and remains solidly "national security", "military", and "intelligence". The euphemism used is to mention SV's "old boy network".
Ask a representative sample and I wager only a very small percentage of people are actually aware of (1) the breaches of privacy that are happening (e.g. your TV sending mic dumps and screenshots of what you're watching), and (2) the hard consequences of those invasions (that is, beyond the immediate fact that you're being snooped upon), like higher insurance premiums on auto and health, being targeted by your opinions, etc.
If you tell them it's the manufacturer this time in addition to the cable company, I'm not sure how many would freak out over the extra entity.
I'm not sure how you'd measure it, but there seems to be a huge disconnect between techie privacy advocates and the rest of the world. The former keeps claiming the latter just doesn't understand, or needs to be informed, but I just don't think that's realistic.
I think it's pretty common knowledge that these companies are harvesting all imaginable data to serve users more / better advertisements and to keep them on the site, yet usage continues to grow despite all the scandals. I think advocates need to make more convincing claims to everyone else that they're being harmed.
> Even for you, do you think you fully understand how your data is being utilised and what the consequences are?
I don't think anyone can say with certainty, but I read these threads so I'm quite aware they're collecting and monetizing every imaginable thing they can. It's difficult to articulate any measurable / real negative consequences to me, personally.
I'd guess that most of that's due to information asymmetry. Privacy losses aren't advertised (and are often hidden) and are more difficult to understand, but price is advertised and understood by everyone.
Take that Samsung example: it's not like they had a bullet point on the feature list saying "Our Smart TVs will let us spy on what you're watching, so we can monetize that information."
Companies can make mistakes or add backdoors and we won't know. See this clusterfuck.
Open source, likewise, can make mistakes and (much more rarely) add backdoors, and we could know, but few have the resources to do so. See Heartbleed.
I don't think the "it's only the vendor" defense of Apple is any good.
FYI -- both Firefox and Safari use OCSP to check server certificates. Anybody sniffing your network could figure out which websites you visit. Chrome uses its own aggregated CRL mechanism (CRLSets); it trades precision for performance.
Perhaps you should learn about OCSP before complaining about its use of HTTP.
You might want to read the RFC, rather than a blog post about it, before making such confident pronouncements.
I would add that the market can't act to prevent risks that are outside the market and not taken into account by the market.
The big risks from widespread privacy loss are the exploitation of private data by criminals, foreign unconventional warfare by terrorists or hostile states, and the rise of a totalitarian government here in the USA.
Criminal action can to some extent be priced into a market, but the other two really can't be.
Related tangent: for those interested in this topic of "unlawful massive domestic surveillance", I heartily recommend Cory Doctorow's "Little Brother" novels -- esp the most recent, "Attack Surface". They get the technical details right while remaining accessible and engaging regardless of the reader's geek acumen.
Since discussion of Apple's behavior in particular has, somehow, been completely de-railed anyway (a frequent happening on HN) ...
Can't help but observe that the market takes the best care of those who control it. Freely.
Seriously, who has ever been successful at defending that idea ?
This protocol was created so that monitoring infrastructure could reprogram asic-based packet filters on collection routers (optical taps feed routers with half-duplex-mode interfaces), which grab sampled netflow plus specific targets selected by downstream analysis in realtime. It has to be extremely fast so that it can race TCP handshakes.
I don't think it's much of an exaggeration to say that the technical components of almost all the mass surveillance infrastructure is described in open sources. Yes, they don't put "THIS IS FOR SPYING ON ALL THE PEOPLE" on it, but they also don't even bother reliably scrubbing sigint terms like "tasking". Sometimes the functionality is described under the color of "lawful intercept", though not always.
One of the arguments that people made against the existence of wide-scale internet surveillance -- back before it was proved to exist -- was that it would require so much technology that it would be impossible to keep secret: the conspiracy would have to be too big. But it wasn't kept secret, not really -- we just weren't paying attention to the evidence around us.
For a related patent example: https://patents.google.com/patent/US8031715B1 which has fairly explicit language on the applications:
> The techniques are described herein by way of example to dynamic flow capture (DFC) service cards that can monitor and distribute targeted network communications to content destinations under high traffic rates, even core traffic rates of the Internet, including OC-3, OC-12, OC-48, OC-192, and higher rates. Moreover, the techniques described herein allow control sources (such as Internet service providers, customers, or law enforcement agencies) to tap new or current packet flows within an extremely small period of time after specifying flow capture information, e.g., within 50 milliseconds, even under high-volume networks.
> Further, the techniques can readily be applied in large networks that may have one or more million of concurrent packet flows, and where control sources may define hundreds of thousands of filter criteria entries in order to target specific communications.
I'm not sure if it's infringing though. If Apple says that they do not collect personal data and that the information is thrown away and this is done for a legitimate business purpose or for the customers (i.e. protecting customers), it may well be fine according to the GDPR.
Correct. Among Apple, Microsoft, and Google you have no choice. That is why regulation is necessary.
And since people have demonstrated that they're not appropriately incentivized to stop, that's where the regulations snap in, which brings us back to where we started: there's a need for it.
Solution could just be to regulate based on a tightly managed definition of age. Enable younger companies to have a bit more flexibility in determining their business model as controls slowly snap in as the company ages. There are some pretty clear loopholes that immediately come to mind (e.g. re-chartering the company every few years and transferring assets) that'll need to somehow be managed, but it should be enough to give companies runway to figure out how to disrupt and monetize while coming into compliance with consumer protections.
Just like Standard Oil or AT&T were displaced, right? By customers going to their competitors? I'm being sarcastic, obviously :-)
I'm OK with this.
I agree that anyone critiquing Apple's OCSP design should understand it, and the critique should be more nuanced than "just turn that feature off." Computers are now skeleton keys to our lives and we have to go forward rather than back in figuring out how to design them so they can safely do everything we need them to do.
But it's not hard to justify the sudden criticism here -- it happened after Apple's bad design of the OCSP feature broke local applications, drawing a lot more attention to how it worked. It's reasonable to then ask whether other parts of the design were also poor, as Apple itself is evidently doing, judging from the changes it's already announced.
To take the author up on what should replace OCSP checks -- how about using something like bloom filters for offline checks, and something like haveibeenpwned's k-anonymity for online checks, to remove the possibility that either Apple or a third party could use OCSP for surveillance?
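For the online half, a rough sketch of what a haveibeenpwned-style range query could look like here; the 5-character prefix and the `/range/` idea are borrowed from HIBP's design, not from anything Apple actually ships:

    import hashlib

    def is_revoked(cert_der: bytes, fetch_range) -> bool:
        """k-anonymity lookup: the server only ever sees a 5-hex-char hash
        prefix, which matches a whole crowd of certs, not yours specifically."""
        digest = hashlib.sha256(cert_der).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        # fetch_range(prefix) would GET e.g. /range/{prefix} and return the
        # suffix of every revoked-cert hash sharing that prefix.
        return suffix in set(fetch_range(prefix))

    # Toy responder covering two "revoked" certs, for demonstration:
    db = {hashlib.sha256(c).hexdigest().upper() for c in (b"evil1", b"evil2")}
    lookup = lambda p: [d[5:] for d in db if d.startswith(p)]
    print(is_revoked(b"evil1", lookup), is_revoked(b"fine", lookup))  # True False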
OCSP can be locally cached, and Apple's implementation does exactly that. But eventually you'll have to refresh the cache and then the implementation needs to be fault tolerant (Apple's wasn't).
OCSP leaks what vendors your installed applications are from. The list of leaked certificates changes daily, so any good implementation is going to check again at least several times a week. If you download the entire database, you're just consuming hundreds of megabytes of bandwidth/storage but aren't removing the need to refresh/expire the cache.
I'd argue the two biggest flaws in Apple's system are bad fault tolerance and no user-accessible opt-out (even if just for emergencies).
Those DNS lookups tell your ISP 1) that you use a mac and 2) that you have an application from a specific developer installed.
I think I trust my ISP less than I trust Apple, here. Am I wrong to do so?
In the earlier HN thread when the server was offline, it was said that Apple only cached OCSP results for 5 minutes. Is that not true? If it is true, I don't think that's what GP is asking for as far as local caching.
The issue, from my understanding, was half the breakage, and half the fact that Apple was sending back telemetry about what apps you launched.
Are you familiar with OCSP conceptually? I have done a reasonable amount of work with signatures and certificates, including OCSP. All my experience is in a commercial, enterprise context but I think these technologies need to start filtering down to the consumer before the capability for security evaporates.
I think it's a consumer-positive direction for Apple to provide this service. I would be interested to hear from someone who holds the view that this is not a service, or disagrees in other ways, but I think this is the right direction for consumers. The alternative, as I see it, is that every person installing an app needs to start searching for CVE notices and headlines in trade papers declaring a compromise.
Apple have applied an enterprise middleware to their infrastructure. I think perhaps they could have been more transparent in the delivery. A lot of the outrage now is driven by people only finding out about the underlying process for the first time. I stand by the right of these companies to choose their business model to disallow (or restrict) execution of apps they believe to be compromised. I also firmly believe in a varied and free market for software, hardware, and infrastructure.
In essence: You can choose to use Apple and do it the Apple way. Equally you can choose to build your computer from components sourced from anywhere, install any free OS, and any apps. Personally I do choose to do it the Apple way, and I am inconvenienced by that from time to time. I curse my computer and its creators on a daily basis. It's part of the relationship we all build with our tools.
got a bit off track towards the end...
The second check of an app is necessary to check for revocation: for a developer who decides they've been compromised and wants to stop execution of their software. The alternative would be to use certificate revocation lists instead of OCSP. CRLs can get long, so OCSP is often preferred.
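For anyone curious what such a check looks like on the wire, here is the rough shape of a generic OCSP status query using the Python `cryptography` and `requests` packages; the file names and responder URL are placeholders, and this is the standard web-PKI flow, not necessarily Apple's exact Gatekeeper traffic:

    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    cert = x509.load_pem_x509_certificate(open("cert.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    # The request identifies the certificate by hashed issuer info + serial.
    req = (ocsp.OCSPRequestBuilder()
               .add_certificate(cert, issuer, hashes.SHA1())
               .build())

    # The responder URL normally comes from the cert's AIA extension.
    resp = requests.post("http://ocsp.example.com",
                         data=req.public_bytes(serialization.Encoding.DER),
                         headers={"Content-Type": "application/ocsp-request"})

    status = ocsp.load_der_ocsp_response(resp.content).certificate_status
    print(status)  # OCSPCertStatus.GOOD / REVOKED / UNKNOWN

Note that the request itself names the certificate being checked, which is exactly why an eavesdropper on plain HTTP learns whose software you're running.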
And it's absolutely not normal to fail if that revocation check doesn't succeed anyway.
Also are CRLs really that bad in practice? I know it would be a bad idea on a smartphone but is it really an issue on a laptop?
I can’t help but feel that the issue is something else, not bandwidth.
Actually that unknown exposes the problem with the original article's demand that critics explain what they want to replace Apple's system. No one outside of Apple is in a position to design a system that addresses all of the design constraints -- we don't even know what they all are. But we are in a position to assert some additional design constraints, such as requiring that the system not leak developer certs to eavesdroppers every time an application is run, and expect Apple to figure out a solution that takes them into account.
If that's true, that's a lazy excuse on Apple's part. Differential updates have been a solved problem for many, many years. Hourly or even daily diffs would be tiny (likely much less traffic than the OCSP checks that occur now), and expired certs could be dropped from the local store, so it wouldn't grow without bound. (Sure, OK, people could turn their clocks back and defeat that last bit, but doing that would break other things too, like TLS to any website with a reasonably recent cert.)
I'm getting the same feeling I did years ago when it was discovered that the iPhone had a historical database of all the locations you'd been to. There were rather a lot of articles about how Apple were "tracking you everywhere you went" and so on.
The reason it's similar – they are both dumb, technically bad, and privacy-compromising decisions, and in both cases much of the public discussion about it has been a little hysterical and off-base.
Apple should 100% be criticised for this particular failure. It's obviously a bad implementation from a technical and usability point of view; the privacy implications are bad, and this feature should not have been able to make it out as-is.
But I've legitimately seen people describe this as "Apple's telemetry" which is just obvious nonsense and distracts from the actual problem – how did such a bad implementation of a useful feature end up in a major commercial product, and how are they going to make sure it doesn't happen again?
The only thing that made it sound bad were people saying things like “Apple stores your location history”, knowing that it would create the false impression that Apple was uploading location data to their servers.
This situation is similar in that there are people posting misleading innuendo about Apple having some hidden agenda, but the difference is that there do seem to be real design problems with the mechanism this time.
Apple sees the client IP of the push connection, naturally.
Therefore, based on IP geolocation, Apple really does have coarse location history for every single iOS device by serial number.
Apple is indeed storing your (coarse) location history.
I'm not sure why these (plainly factual) statements are controversial.
This is not an isolated incident, and their own OS services are explicitly whitelisted to bypass firewalls and VPNs in Big Sur.
There’s telemetry in most Apple apps, now, and you can’t disable it or opt out, or even block or VPN it in some cases. I encourage you to read their disclosures when you first launch Maps or TV or App Store. Every keystroke in the systemwide search box hits the network by default, too.
I am pretty well-informed about most of the data that is being generated and transmitted by my machine. This is an issue where it is totally reasonable to pressure Apple and all other companies to design for privacy as a priority and ensure that any of this data collection can be disabled. I don't think an effective means to do that is to deliberately conflate and mislead about issues like the one under discussion.
OCSP has since been improved to increase privacy and security, but the extensions to enable that only considered OCSP in the context of TLS.
However, it's not an isolated problem. It feels like every other week something happens that undermines my confidence in the MacBook as a good device for making music.
That is not anti-consumer.
Revoking signatures and disabling the apps on user devices to protect your business model is definitely anti-consumer in my book.
You could easily see Apple revoking signatures because of DMCA claims. Even faulty ones, like the claim the RIAA made against youtube-dl on GitHub.
The only anti-consumer behavior in your situation came from Epic who knowingly violated the rules as a PR stunt.
Exactly. Never forget, it was Epic who threw their users under the bus, not Apple.
Epic expected you to be a soldier in their fight. They expected you to make a sacrifice you were not willing to make.
That's entirely on Epic.
Even though I agree that in the Epic case most of the blame lies with Epic, I still have a problem with Apple: the signature revocation system is used for more things than removing malware. I think it is user-hostile and anti-consumer to disable installed apps on other grounds, because users might depend on them.
I'd like to be able to run programs and apps on my machine that are not Apple-approved.
I thought you could anyway. You would right-click the app in Finder and choose Open — from then on, it would continue to open.
Or is that a different mechanism?
But when returning to a digital musical work months or years later, oftentimes the idea is to improve or otherwise rework it ... just as live bands do constantly. A non-working essential plug-in (filter, synth, VCO, whatever) might make that much more difficult.
One of the biggest headaches in computer music-making is how much time fighting the tech takes away from the creative process. No one needs their OS adding to their distress. Let alone switching serial-port designs every few years (obsoleting trusted and often expensive equipment).
A machine that's used for making professional music should not be upgraded or connected to the internet. If it's for a hobby... I think we will have to live with the compromise if we want to have the latest security fixes and connect to the internet.
Most DAW-makers are constantly upgrading their software. And they often obsolete their older versions to get in sync with new OS's. 'Keeping the old stuff' sometimes isn't an option.
Physical instruments keep working for decades ... but thanks to OS upgrades, valued digital hardware and/or software instruments (say by Opcode or Native) can be lost to stupid or cavalier changes. Anyone who's been making 'professional music' for long has been bitten many times.
The only solution to not having a broken music workstation is to never connect that machine to the internet and never update it. Physical instruments keep working for decades because...they are never connected to the internet and never updated.
This is exactly what Apple has already done to the iTunes world: music you had a decade ago is suddenly inaccessible.
Music bought via iTunes doesn't have any DRM since 2009.
One factor in the irrational defence could be a kind of psychological protection of investment. Apple isn't just another company; it's an entire lifestyle ecosystem. Those invested in Apple have the watch, the TV, the laptop, iTunes, etc. And together they really do "just work" - the user experience is great!
So to admit that Apple is flawed, that their investment was a bad idea, is to admit they were wrong and that their time and money were wasted. No one wants to be a sucker. Far better, therefore, to protect your investment. Apple really is genius to pull this off. Apple is part of people's identity.
The balancing act between freedom and security is never going to not be a debate. Engaging in it in good faith, as in the linked article, is a reasonable approach (one you don't usually see represented in Klein's oeuvre): consider tradeoffs, counterarguments, and historical context from different perspectives. Apple is flawed, sure, because all complex solutions are inherently flawed. They have a responsibility to be more open and transparent, and I'd prefer to see more details and updates to their otherwise laudable security whitepaper, and clearer, more accessible user-definable toggles. But your or my preferred solution probably isn't the ideal default for most users, or for the ecosystem as a whole.
That's the problem with people who dismiss Klein and her "oeuvre": you are so desperate to stay in the middle that you refuse to see the evidence right in front of your face.
I’m not dismissing Klein, I referred to her “oeuvre” out of my respect for her as an artist.
There is no brand of "anti-Apple" - that doesn't really exist, as brands don't work like that. BUT groups of people do, and people can have identities built on being in a group. (I can't really say I have noticed an anti-Apple subreddit either.)
One way groups defend themselves is to define the opposition - by defining the opposition to Apple fans as anti-Apple groups, it helps bolster the coherence of the Apple fan group and defend the brand and the investment for the individual. We are under attack; close ranks and protect each other.
The ecosystem does just work and the user experience is great compared to the alternatives. It's also possible to use only specific products and switch off various cloud or telemetry options. There's still a long way to go to reach a private OS, but Apple has by far the best privacy stance when compared to Google or Microsoft and there is nobody who offers such an OS right now. The best one can hope for is build their own Linux-based distro or use BSD and then one has to be prepared to invest a significant amount of time.
Apple hasn't been challenged under the GDPR yet because everyone's still busy with Google and Facebook. But if you or anyone else would like to lodge a complaint, maybe this will be decided in the customers' favor (I think it's borderline) and we'll get an option to switch it off.
What I find weird is regardless of what the discussion is involving Apple, someone needs to pop in with one of these theories about Apple tribalism.
Very very few people are in fact "defending" Apple here. Even among those few, the sentiment is largely that this is bad and Apple is fixing it.
Someone who says this usually holds an opposing position and simply has their own tribal allegiance. Perhaps assume good faith on the part of those who do not make the same choices you do.
I can believe Apple do care about privacy, but ultimately they're just another company. For example, I'm sure apple would love the Epic lawsuit to be decided based on a poll of HN users - "I would rather not have the freedom to run whatever I want, because [insert bizarre anecdote]".
Don't project your own beliefs onto apple, vote with your wallet if they annoy you - it's just a trackpad.
I'm fascinated how technically knowledgeable people don't understand OCSP.
Checking the revocation status of certificates is why OCSP was created. It happens via HTTP. Why? Because you cannot check the certificate used for an HTTPS connection when you are using that same HTTPS connection to do the checking. Apple leveraged OCSP for Gatekeeper since it does the same thing, checking certificates, in this case a developer certificate. That is all it does.
I agree with you, though--it seems like they solved a valid problem with the most obvious, commonly-used solution. The real debate is probably just over whether or not the problem is a sufficiently large threat to justify the downsides.
It doesn’t leak the application name or any personal information, and Apple doesn’t store it permanently.
The concept here is fine, they just screwed the pooch a bit on implementation. And as usual, HN blew it out of proportion.
Most people probably never noticed this phone-home feature existed, just like they never knew the NSA was recording everything. (Obviously anyone who bothered to look under the hood could've seen it, but hey, how many people do that?)
A simple opt-out toggle, for privacy reasons, would be a good start... people should stay in control of their own data and be able to choose for themselves whether or not they are willing to trade their privacy (for security, in this case).
The other thing I'd like to see is the app open immediately, w/ the check happening asynchronously in the background. (This seems like super-basic good engineering to me.) No idea if they're planning to fix that or not.
How would this work? The point of the check is to block malware from running, and opening without the check would, by definition, negate the entire system. If malware authors get wise to the async scheme, they can write programs that deliver their payload in the opening milliseconds of an app’s execution, while the network call is running (even the fastest pings would leave 1 or 2 ms worth of window).
Waiting to open an app based on a network request is basically just guaranteed to give you a terrible experience some % of the time.
Maybe fancier solutions like a local blacklist are needed. (Which weirdly it looks like Apple had and then moved away from?)
If you want rapid blacklisting, a frequent call to Apple to say "anything new blacklisted?" would suffice. Same as push notification.
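A sketch of that shape; `fetch_revocations_since` stands in for a hypothetical delta endpoint, and the stub below makes it runnable. The key property is that app launches only ever consult local state:

    import threading, time

    revoked: set[str] = set()   # local denylist; persist it to disk in practice
    cursor = 0                  # high-water mark of deltas already applied

    def fetch_revocations_since(since):
        # Stand-in for the real endpoint: returns (new_hashes, new_cursor).
        return set(), since

    def poll_revocations():
        """Background refresh: 'anything new blacklisted since my cursor?'"""
        global cursor
        while True:
            new_hashes, cursor = fetch_revocations_since(cursor)
            revoked.update(new_hashes)
            time.sleep(300)     # every 5 minutes; tune freshness vs. load

    def may_launch(developer_cert_hash: str) -> bool:
        # Launches never block on the network and leak nothing per launch.
        return developer_cert_hash not in revoked

    threading.Thread(target=poll_revocations, daemon=True).start()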
Unfortunately we don't have an Apple competitor that cares enough about your privacy to not want anybody, including themselves, to have access to it.
If you had evidence you’d be able to be specific.
Of course they should encrypt the backups, but perhaps the alternative was going to be some kind of legislation that would be even worse.
Is that what the kids are calling "handing over all cloud customer data to the FBI" these days?
Holy whitewashing batman, that's a lot of speculative mental gymnastics.
To put it simply, you have no evidence that supports Apple here other than "but perhaps the alternative was... whatever I just came up with".
To paraphrase yourself in another comment: "that's intellectual dishonesty about Apple."
If Apple cared a single bit about privacy, it would have encrypted customer data from the beginning instead of planning to eventually do it one day, only to give up upon FBI request.
Case in point: the online signature check was a technical decision, to fight malware. It was implemented similarly by other OS vendors (Microsoft), and it's been this way for years.
Now we discover that it has the unfortunate side-effect that it lessens privacy. Apple (and probably other OS vendors) are working to improve that in the future. Also, a technical issue.
It was never about privacy. It was never a political issue. Can we please just discuss it from a technological standpoint?
The political issue is "if we can't provide these features without weakening privacy, should we still provide them?"
Aren't they both important points to discuss?
Political issues, on the other hand, just end up antagonizing us ever more. We argue endlessly, go off on countless tangents, and nobody agrees on anything because we all see the same issues through different lenses: different experiences, values, and cultures.
I am tired of politics. I just want to get stuff done. Hopefully good stuff, but I’d settle for slightly better.
You may agree with the status-quo politics, but other people don't. For them it's not about the tech, it's about the overall direction. For those people the important discussion to have is political.
For instance, I personally am against the current direction macOS and Windows are headed. I have no problem with these kinds of security measures as long as there is a button to opt out. Currently, Apple decides for everyone and doesn't provide a way for power users to opt out. I dislike this, and I feel like discussing better cryptography, security protocols, etc. doesn't address my priorities.
I am sorry that politics is tiring but I think that is just a reality of politics being a manifestation of natural forces. The natural world is competitive, it's competitive on the cellular level, the food chain is competitive, and human societies are competitive too. Looking the other way doesn't change the reality, it just guarantees that the opinions of others will become more significant to the world than your own. Just like how our cells compete to form the best possible body, I think we are obligated to compete with our ideas to form the best possible society.
It's easy to shrug it off as simply a technical issue, and it's very convenient for the PR department as well.
Computers have become a gateway to the digital world, which makes up a large part of our lives. And Apple is a huge player that has the power to shape the future of computing. Apple's vision of computing is on a trajectory where the end game is clear: Users have no control over their devices and will only be doing what Apple allows them to do. This is the case on iOS already, and macOS will be there in a few years.
Some say: decide with your purse. If you don't like Apple's vision, then don't buy their products. But this argument is like saying "this is how we handle things in country X; if you don't like it, move somewhere else." Some of us have invested a lot in tools and software in Apple land, so changing platforms will take some time. But even if we can change, Apple is shaping the future of this industry in a way that restricts freedom and limits options. And that is not a future we should accept. Instead we should be opposing and fighting it.
What makes you say that Apple has made a "decision to disallow installing apps that were not downloaded from the macOS-app store"? That seems kind of obviously untrue.
I mean, I myself sell Mac software outside the Mac App Store and have never had a complaint.
Maybe you were thinking of the iOS app store?
I'm being reminded of an old song by Tom Lehrer [https://www.youtube.com/watch?v=QEJ9HrZq7Ro]
The technical discussion of signatures is well documented.
No. Unintended consequences are important. There are privacy issues, therefore we need to discuss privacy.
This is an oversimplification. It also helps protect Apple's business model: you must pay Apple a fee for services (and show ID) to be able to sign your apps for distribution on this platform. Imagine if you had to show ID to get a TLS certificate for your website.
Don't conflate the issue - this is also a move to protect certain streams of Apple services revenue, in addition to protecting users from malware, and it always has been.
What I don't see is you providing any real evidence that this is a core part of the decision-making process.
Apple isn't particularly incentivized to find a different approach that avoids the tools it already has - tools that already make it harder and costlier for bad actors to get around its security mechanisms. That is not the same as making decisions because they protect the business model.
Which is to say, it appears that you're the one oversimplifying and conflating (btw, I do not think that word means what you think it means) some very different motivations.
Of the tens of thousands of people who had a hand in shaping macOS today, it's impossible to say what their collective intentions were in all the decisions they made. So I think it's useless to talk only about the intentions you can prove just by looking at their formal decision-making process. That is why we need to be working to protect privacy at every level with a "defense in depth" approach.
And of course it goes without saying that all major vendors have issues like this and could be working harder to make sure that these incentives don't get created.
No doubt, but even in simple systems this is considerably more difficult than it sounds. Incentive systems are not easy, and any incentive system tends to get gamed in ways that produce unexpected, poor behaviors. Try to get this right at a 100k+ employee company and you're guaranteed to end up with some misaligned or counterintuitive incentives.
The reason something stronger than simple assertion matters here is because Apple actually has added an incentive system: trumpeting privacy as a core feature means they tie their brand to their ability to deliver on privacy. That means that being called out for privacy issues has greater potential harm for the company, and thus its bottom line, in a way that's far more direct and monetarily impactful than $100 annual developer fees ever will be.
Even cynically, you can see the same mechanic at work with the recent reduction of app store percentages for small businesses. Apple has made it a core part of its developer outreach that the app store is a good thing for developers: they've made it part of their brand. If they're called out for something that makes many of those developers disagree, or even for something that makes users perceive that part of the brand as incorrect, it has implications to the whole company's bottom line (not just the developer id sliver of it).
> it's useless to talk only about the intentions you can prove just by looking at their formal decision-making process
For what it's worth, my point isn't tied to formal decision-making; it applies to the informal parts as well. What I'm saying is that this can be a completely technical decision that relies on the existing business structure without ever taking into account whether it will raise revenue from developer IDs. The developer ID's existence is a fact. The “DoS resistance” characteristics, if you will, of having developer IDs cost money are a fact. As a system architect, leveraging those facts for system security seems perfectly reasonable. Yes, absolutely they should have taken privacy into account as well. “We haven't used it” and “we won't use it” aren't the same as “we can't use it”.
But here's the thing: to me, the strongest indicator of whether a company is committed to an approach is whether they react positively when they are called out, or whether they double down on their mistakes. Apple was called out here, and they've committed to doing just about all of the things they should be doing. You could use this same framing to say that Apple isn't as committed to developers thriving on their platform as they are to living up to their privacy commitment, of course.
Tl;dr: customers are part of the incentive system; tying your brand to a commitment as aggressively as Apple has tied theirs to privacy has an impact on customers, and this is a great way to introduce an additional external forcing function for your internal teams.
Take it from dhh if you don't believe me:
> [Online signature check] is also a move to protect certain streams of Apple services revenue, in addition to protecting users from malware, and it always has been.
To restate and avoid drifting into another non sequitur: this ascribes intent; that is, it suggests that part of the reason the online signature check was added, and part of how it has evolved, is to protect certain streams of Apple services revenue. That would be your argument, which has no evidence to support it but which you suggest is backed by “facts” that appear to be nowhere to be found.
Are these facts somewhere to be found? Or are you stating hypotheses as facts?
The same code that keeps malware from running on a Mac (or iPhone) keeps non-App Store apps from running on an iPhone, or prompts you to move non-notarized apps to the trash on a Mac.
It's not some separate thing: the exact same code path that protects the consumer store revenue and developer notarization service revenue also protects users against malware.
EDIT, for clarity: I am speaking of Apple-developed, Apple-owned platform security code, where root keys are not held by anyone other than Apple, not generic crypto primitives or the concept of code signing in general (where we have a P-as-in-public PKI).
Everyone seems to like code signing.
It adds a lot of wrenches when you're just trying to do basic stuff like codesign and push test builds onto a USB-connected device from a bash script, and it is flaky and undocumented as fuck.
I am honestly jealous of my android counterparts with their far simpler system and first class command line support via adb.
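For reference, the workflow in question is roughly this - sketch only, where the bundle path, signing identity, and the third-party ios-deploy tool are placeholder choices:

    import subprocess

    APP = "build/MyApp.app"  # placeholder build output
    IDENTITY = "Apple Development: Jane Doe (ABCDE12345)"  # placeholder identity

    # Re-sign the bundle; --force replaces any existing signature.
    subprocess.run(["codesign", "--force", "--sign", IDENTITY, APP], check=True)

    # Install on a USB-connected device via the third-party ios-deploy tool.
    # The Android equivalent is a single, well-documented: adb install app.apk
    subprocess.run(["ios-deploy", "--bundle", APP], check=True)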
That's a theory, not a fact, nor is it widely recognized.
By any reasonable accounting of costs, the $99 Apple Developer Program fee isn't meant to be a profit center that Apple is trying to "protect". It mainly helps prevent spam accounts and helps offset the cost of reviewing and distributing free apps in the Mac and iOS app stores.
Citation needed please, Mr. Giuliani.